
Layla v6.0.0 has been published!

Layla v6 supports Agents!


[Image: Layla agents mini-app icon]

Agents are fully configurable, self-contained workflows that can be triggered and executed by Layla during chats. Their functionality ranges from injecting simple context to fully automated workflows, such as reading a webpage and saving it as a reference document. Each agent is also modular: you can attach it to your own characters, and it will still complete its task, but under the personality of the attached character.


This is a major version upgrade! Please back up your data, then uninstall and re-install the latest version.


An agent is composed of two parts: triggers and tools. An agent can have multiple triggers and call multiple tools. You are free to mix and match them in any combination when creating your own agents!
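To make that concrete, here is a rough sketch (in Kotlin, with made-up names; this is not Layla's actual configuration format) of what an agent boils down to:

```kotlin
// A rough sketch of the agent idea: any number of triggers paired with any
// number of tools. These names are illustrative, not Layla's actual API.
data class Agent(
    val name: String,
    val character: String? = null,   // optional: run under an attached character
    val triggers: List<String>,      // e.g. "keyword", "regex", "intent", ...
    val tools: List<String>          // e.g. "web-search", "save-document", ...
)

// Mix and match in any combination when building your own agents.
val readAndSave = Agent(
    name = "Read webpage and save as reference",
    triggers = listOf("regex: https?://\\S+"),
    tools = listOf("web-scrape", "save-reference-document")
)
```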


Triggers

Triggers in Layla are a combination of "traditional" machine learning algorithms and "modern" generative AI; they include LLM inputs, MCP data, intent detection, and more.


Available triggers:

  • Phrase/Keyword: the simplest of triggers; will trigger when a specific phrase is detected in the input or output

  • Regex: match patterns in the input/output; can optionally capture groups to pass into tools (see the sketch after this list)

  • Intent: Layla uses "traditional" ML algorithms to detect the intent of your input, and will trigger when a specific intent is detected

  • Date/Time: triggers when you mention any form of date or time in your messages; the value is captured and passed to your tools

  • Layla MCP: a simplified tool calling schema (more suited for mobile use) will be injected as part of the system prompt, and the LLM will decide when to call the tool

  • General MCP: the full tool calling schema is injected, and Layla will query the external MCP server to execute the tool when triggered
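As a small illustration of how a Regex trigger can capture a value and hand it to a tool, here is a hedged Kotlin sketch; the pattern and the reminder tool are made up for illustration, not Layla's built-ins:

```kotlin
// Minimal sketch: a regex trigger that captures a date and hands it to a tool.
val reminderTrigger = Regex("""remind me on (\d{4}-\d{2}-\d{2})""")

fun onMessage(message: String) {
    val match = reminderTrigger.find(message) ?: return   // no trigger, do nothing
    val capturedDate = match.groupValues[1]                // e.g. "2025-03-01"
    createReminderTool(capturedDate)                       // pass the capture into a tool
}

fun createReminderTool(date: String) {
    println("(pretend) creating a reminder for $date")
}
```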


Tools

Tools in Layla are functional components built to perform a specific task. Each tool takes a range of defined inputs and outputs text, which can then be used as input for the next tool in the tool calling flow. Layla comes with 20+ tools (and more will be added continuously over time!). Tools range from simple tasks that just inject a snippet of text into the LLM's context, to more complicated ones such as web search, web scraping, or even tasks that allow you to execute arbitrary code in Layla's context.
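As a rough sketch of that chaining idea (the Tool interface and the two example tools below are placeholders, not Layla's built-in tools):

```kotlin
// Each tool takes text in and produces text out; the output of one tool
// becomes the input of the next in the chain.
fun interface Tool {
    fun run(input: String): String
}

fun runChain(tools: List<Tool>, input: String): String =
    tools.fold(input) { text, tool -> tool.run(text) }

// Pretend web-search and summarise tools, chained together.
val webSearch = Tool { query -> "search results for: $query" }
val summarise = Tool { results -> "summary of [$results]" }

fun main() {
    println(runChain(listOf(webSearch, summarise), "best hiking trails near me"))
}
```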


There are too many tools to list here, but I will highlight some very useful ones in the next articles.


MCP Support

MCP stands for Model Context Protocol. It is an open standard that allows LLMs to interact with the real world. MCP works by injecting a snippet of the tool schema into the LLM's context (usually in the system prompt); the LLM can then decide when to use those tools. The output of those tools is injected back into the context, and the LLM decides from there whether to continue using more tools or hand the conversation/output over to the user.
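In rough Kotlin pseudocode, the loop looks something like this; callLlm, parseToolCall, and executeTool are placeholder stubs, not a real MCP client:

```kotlin
// Sketch of the MCP loop: schema in the system prompt, LLM decides whether to
// call a tool, tool output goes back into the context until the LLM is done.
fun mcpLoop(userMessage: String, toolSchema: String): String {
    var context = "system: available tools:\n$toolSchema\nuser: $userMessage"
    while (true) {
        val reply = callLlm(context)                        // LLM decides the next step
        val toolCall = parseToolCall(reply) ?: return reply // no tool call: answer the user
        val toolOutput = executeTool(toolCall)              // run the requested tool
        context += "\nassistant: $reply\ntool: $toolOutput" // feed the result back in
    }
}

// Stubs so the sketch stands alone.
fun callLlm(context: String): String = "final answer"
fun parseToolCall(reply: String): String? = null
fun executeTool(call: String): String = "tool output for $call"
```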


Layla 6 comes with a TinyMCP server built into the app. (Although it's called a "server", this MCP "server" is completely local to your phone.) Layla's MCP differs from the open standard in that it combines a range of traditional ML algorithms and keyword searches with LLM decisions. This is because smaller models often get confused and hallucinate too much to be completely reliable at choosing the best tool just yet. In addition, using the full Model Context Protocol is prohibitively slow and takes up too much of the very precious context size on mobile models. Layla's TinyMCP has access to all tools within Layla itself. A demonstration of this can be seen in the pre-built "Introspection" agent. This agent has access to the internal state of Layla herself, and is able to query information such as the different mini-apps available, the number of memories, and the storage size used by Layla, and can even perform menial operations such as clearing the cache for you.
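Conceptually, that hybrid selection works something like the sketch below: cheap keyword/intent matching first, with the LLM only consulted as a fallback. The function names are hypothetical, not Layla's internals.

```kotlin
// Sketch of hybrid tool selection: cheap keyword matching first, LLM fallback.
fun selectTool(message: String, toolKeywords: Map<String, List<String>>): String? {
    val lower = message.lowercase()
    // 1. Fast path: keyword/intent-style matching, no LLM needed.
    for ((tool, keywords) in toolKeywords) {
        if (keywords.any { it in lower }) return tool
    }
    // 2. Slow path: ask the LLM to pick a tool (null means "no tool").
    return askLlmForTool(message, toolKeywords.keys)
}

fun askLlmForTool(message: String, toolNames: Set<String>): String? = null  // stub
```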


Layla's TinyMCP also supports its full standardised counterpart. For those who are patient (or if you are using a cloud model), if a standard MCP tool schema is injected, Layla will pass the tool call through to an appropriate MCP server, which you can configure in the agent settings. You can have multiple MCP servers configured at the same time; each tool call will be dispatched to the correct server. The "HuggingFace MCP Client" agent is pre-built as a demonstration of how this works. It offers capabilities such as model search.
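Dispatch across multiple servers is conceptually just a lookup from tool name to the server that provides it. A hedged sketch, with made-up server entries, URLs, and tool names:

```kotlin
// Sketch: route a standard MCP tool call to whichever configured server owns it.
data class McpServer(val name: String, val url: String, val tools: Set<String>)

val configuredServers = listOf(
    McpServer("huggingface", "https://example.org/hf-mcp", setOf("model_search")),
    McpServer("files", "https://example.org/fs-mcp", setOf("read_file", "write_file"))
)

fun dispatch(toolName: String): McpServer? =
    configuredServers.firstOrNull { toolName in it.tools }

fun main() {
    println(dispatch("model_search")?.name)  // -> huggingface
}
```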


To learn more about the Agentic features in Layla, start with the following articles:


Improved Long-term Memory

Long-term memory has been significantly improved!


LTM now implements a 4-stage retrieval process to balance speed and quality:

  1. "raw memories" are created immediately as you chat; they are available immediately when starting the next conversation, no processing time needed

  2. "embeddings" are created in the background; this process is relatively fast, and will be available 5-10 minutes after a conversation ends

  3. "summaries" are created as a background process, when your phone is idle; this process can either be done by the built in summariser, or a chosen LLM

  4. "knowledge graphs" are created last, with all the information available in the previous step, and require an LLM to process


During recall, memories are retrieved starting from "raw memories" and re-ranked by content at each level as it becomes available. You get memories immediately, but the injections increase in quality as new memories are ingested.
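A rough sketch of that staged recall, with placeholder types and a toy relevance score (not Layla's internal implementation):

```kotlin
// Sketch: recall returns whatever memories exist now, preferring richer stages
// as they become available. Scoring is a toy word-overlap placeholder.
enum class Stage { RAW, EMBEDDING, SUMMARY, KNOWLEDGE_GRAPH }

data class Memory(val text: String, val stage: Stage)

fun recall(query: String, memories: List<Memory>, topK: Int = 5): List<Memory> =
    memories.sortedWith(
        compareByDescending<Memory> { it.stage.ordinal }       // richer stages first
            .thenByDescending { relevance(query, it.text) }    // then by content relevance
    ).take(topK)

fun relevance(query: String, text: String): Int =
    query.lowercase().split(" ").count { it in text.lowercase() }
```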


General Improvements

  • general improvements in all areas of the UI, giving a more consistent experience across the app

  • improved prompt format slightly for vision models

  • backup and restore now block back button presses and navigation, to avoid accidentally corrupting your data if you move away from the screen

  • added the ability to configure a character-level image generation prompt prefix

  • LTM no longer processes <think> tags from reasoning models

  • performance improvements to rendering chat messages

  • Layla will now automatically select the best prompt format after you've selected your model

  • backups & restore will now also backup your OpenAI inference settings


Bug fixes

  • fixed a bug where favourite characters were not being saved

  • fixed a bug where searching for language names in the Offline Translator app was searching for the language's English name instead of the name in the UI language

  • fixed a bug in native speech-to-text where it stops listening after the first round

  • fixed a bug where horoscopes were not showing

  • fixed a bug where a character with an empty description stops all characters from showing up

  • fixed a bug where OpenAI inference settings were not backed up

  • and numerous others, too many to list here!
