Layla v6.7.0 has been published

This update fleshes out Layla's Agent Framework.

Agent System Prompts

Agents now have their own system prompts, which are injected into your character's system prompt when the agent is activated. For example, you no longer need to edit both your system prompt and the agent for status cards: the prompt that guides the model into generating status cards is injected automatically when you attach the corresponding agent.


Introducing Short-term Memory

Why do we need a short-term memory when there is the LLM context itself? The main difference is that Layla's short-term memory is **structured**. Both the LLM context and LTM are unstructured, and lookup is done purely via heuristics (the attention layers in the LLM, and embeddings in LTM). The short-term memory is a structured layer that lives between the LLM's context and Layla's LTM.


Information flows from the LLM's context -> short-term memory -> LTM (as demonstrated by one of the example agents below). The short-term memory can be updated by both the LLM and Layla's Agents. Because it is structured, you can use code and logic to manipulate its contents, not just prompts. This opens up the possibility to *program* information flow. An example could be an agent that updates a character's Hit-Points or inventory based on triggering conditions.
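As a rough illustration, here is what a trigger-driven update to structured memory could look like. Layla's actual agent scripting API isn't shown in this post, so the `memory` dict and the `update_hp` trigger below are hypothetical stand-ins, not the real interface:

```python
# Hypothetical sketch: structured short-term memory modeled as a dict,
# updated by plain code logic instead of a prompt.

def update_hp(memory: dict, message: str) -> dict:
    """Decrease HP when the latest message mentions taking damage."""
    if "takes damage" in message.lower():
        memory["hp"] = max(0, memory["hp"] - 10)  # never drop below zero
    return memory

memory = {"hp": 100, "inventory": ["sword", "potion"]}
update_hp(memory, "The goblin attacks! Layla takes damage.")
print(memory["hp"])  # 90
```

Because the state lives in a structured store rather than free-form context, the same triggering logic works deterministically on every turn.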


This layer will also serve as a sort of *RAM* for all agents. Any agent can read/write into Layla's short-term memory, allowing a horizontal layer of communication between agents (work-in-progress). This is now possible because...


Layla supports Python

Layla now comes with her own embedded Python interpreter as one of the mini-apps. Install it to write and execute arbitrary Python scripts directly on your phone! You can install packages via `pip`, and Layla includes a small console and code area to run your scripts.

All agents in Layla can now execute Python scripts, which means you can program arbitrary logic within an agent. You can download files from the internet, scrape the web, generate images, solve math problems, even build small games within Layla's Agent Framework.
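Any pure-Python script should work in the embedded interpreter. For instance, a quick stdlib-only math solver of the kind an agent could call (nothing here is Layla-specific):

```python
# Solve a quadratic equation ax^2 + bx + c = 0 using only the stdlib.
import math

def solve_quadratic(a: float, b: float, c: float) -> list:
    """Return the real roots in ascending order (empty list if none)."""
    d = b * b - 4 * a * c  # discriminant
    if d < 0:
        return []  # no real roots
    r = math.sqrt(d)
    return sorted({(-b - r) / (2 * a), (-b + r) / (2 * a)})

print(solve_quadratic(1, -3, 2))  # [1.0, 2.0]
```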



Our Discord server contains a list of newly created agents with Python. You can check them out here: https://discord.gg/EaTUr9BK89


Full Changelogs


Improvements:

  • added setting to disable system prompt in Inference Settings -> Custom Prompts: this will disable all character and user information. This is useful if you want to chat with the "raw model" without any instructions. Note this also stops most apps in Layla from working properly.

  • redesigned the model selection area, now with the ability to export your imported models

  • Claude API now allows sending images in chat

  • added Japanese and Vietnamese translations

  • added ability to organise characters into folders

  • you can tap on "memories" in your character card to immediately view their memories in the LTM app

  • added small UI sounds to notify you when Layla is listening or finished speaking

  • allow adding custom JSON fields to OpenAI endpoints
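On the custom JSON fields above: the post doesn't show the exact mechanism, but this kind of feature typically amounts to merging extra keys into the request body sent to an OpenAI-compatible endpoint. The field names below (`repetition_penalty`, `top_k`) are illustrative extras some backends accept, not parameters confirmed by this post:

```python
# Hypothetical sketch of merging custom fields into an OpenAI-style request body.
import json

base_request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}
custom_fields = {"repetition_penalty": 1.1, "top_k": 40}  # backend-specific extras

payload = {**base_request, **custom_fields}  # custom keys override on conflict
print(json.dumps(payload, indent=2))
```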


Bug fixes:

  • fixed a bug where long prompts would crash the LTM ingestion process with a custom LLM

  • fixed a bug where using Layla as the phone's default assistant did not work with multiple inference engines attached to the character

  • fixed a bug where setting the default character was not working when using Layla as the phone's default assistant

  • fixed a bug where custom SD models cannot be imported via the "Already downloaded" link

© 2024 by Layla Network
