
Layla v4.12.0 has been published

This update significantly modifies Layla's core engine.

[Image: the new Inference Engine app]

Layla now supports native multi-modal models!

  • Use your custom MMProj image embeddings with models that recognise images (see the sketch after this list).

  • Tested with Llava 8B, llava-phi3-mini, and Bunny-Llama3.

  • Continued support for RAG-based image recognition using the old MobileVLM app.
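
Under the hood, multi-modal GGUF support pairs a language model with an mmproj projector file, the same mechanism llama.cpp exposes. Here is a minimal sketch of that concept using the llama-cpp-python bindings (illustrative only, not Layla's internal API; the file names and URL are placeholders):

```python
# Sketch: pairing a GGUF language model with an mmproj vision projector
# via llama-cpp-python. Paths and URL below are placeholders.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# The mmproj file projects image features into the language model's space.
chat_handler = Llava15ChatHandler(clip_model_path="llava-mmproj-f16.gguf")

llm = Llama(
    model_path="llava-v1.5-7b.Q4_K_M.gguf",  # base GGUF language model
    chat_handler=chat_handler,               # attaches the vision encoder
    n_ctx=4096,                              # images consume context tokens
)

response = llm.create_chat_completion(messages=[
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        {"type": "text", "text": "What is in this picture?"},
    ]},
])
print(response["choices"][0]["message"]["content"])
```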


Layla's core inference engine has been rewritten

  • The Raw Model Instructions, OpenAI API, and Claude API apps have been removed.

  • New app: Inference Engine, which combines all model-related settings in one place.


With the Inference Engine app, you can configure:

  1. Models: a local file, an OpenAI endpoint, the Claude API, or Layla Cloud.

  2. Vision encoders: optionally attach a vision encoder to your model (mmproj files for supported GGUF models, or MobileVLM for RAG-based image recognition with Executorch/Cloud models).

  3. Prompt settings: fully customisable prompts, just like before.


You can save any combination of the above as a "Custom Engine". An engine can be attached to a character, so you can configure different models/prompts for different characters: some characters can run local LLMs while others use Layla Cloud or the OpenAI API. Switching is seamless (you no longer have to install and uninstall the OpenAI app!).
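
To make the combination concrete, here is a purely hypothetical sketch of what a saved engine and its character bindings could look like. The field names and structure are invented for illustration and are not Layla's actual schema:

```python
# Hypothetical sketch: a "Custom Engine" bundles a model, an optional vision
# encoder, and prompt settings; characters are bound to engines by name.
# All field names here are invented for illustration.
engines = {
    "local-llava": {
        "model": {"type": "local_gguf", "path": "llava-v1.5-7b.Q4_K_M.gguf"},
        "vision_encoder": {"type": "mmproj", "path": "llava-mmproj-f16.gguf"},
        "prompts": {"system": "You are a helpful assistant."},
    },
    "cloud-gpt": {
        "model": {"type": "openai_endpoint", "base_url": "https://api.openai.com/v1"},
        "vision_encoder": {"type": "mobilevlm_rag"},  # RAG-based image recognition
        "prompts": {"system": "Stay in character at all times."},
    },
}

# Each character simply points at an engine, so switching between a local LLM
# and a cloud backend is a lookup, not an install/uninstall cycle.
characters = {"Layla": "local-llava", "Professor": "cloud-gpt"}

def engine_for(character: str) -> dict:
    return engines[characters[character]]
```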


Oh, and you no longer have to uninstall the Executorch app to use GGUFs either: if you choose a PTE file, Executorch is activated automatically. (You do still need to keep the Executorch app installed, but you can install it once and forget about it.)
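
The automatic switch is presumably just dispatch on the model file's extension; a minimal sketch of that idea (illustrative only, not Layla's actual implementation):

```python
# Illustrative sketch: choose an inference backend from the model file
# extension, so no manual install/uninstall step is needed.
from pathlib import Path

def select_backend(model_path: str) -> str:
    suffix = Path(model_path).suffix.lower()
    if suffix == ".pte":
        return "executorch"  # Executorch activates automatically for PTE files
    if suffix == ".gguf":
        return "llama.cpp"   # standard GGUF inference path
    raise ValueError(f"unsupported model format: {suffix!r}")

assert select_backend("model.PTE") == "executorch"
assert select_backend("llama-3-8b.Q4_K_M.gguf") == "llama.cpp"
```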


Lastly, we've added a table view for long-term memory for easier viewing and management!


IMPORTANT: if you run into any model issues after the upgrade, please check the new Inference Engine app to make sure everything is set up properly.


Full changelog


New features:

  • Layla now supports native multi-modal models!

  • Use your custom MMProj image embeddings with models that recognise images.

  • Layla's core inference engine has been rewritten.

  • The Raw Model Instructions, OpenAI API, and Claude API apps have been removed; a new Inference Engine app combines all model-related settings in one place.

  • New offline translation models: "English <-> Russian", "English -> Polish"


With the Inference Engine app, you can configure:

  • Models: a local file, an OpenAI endpoint, the Claude API, or Layla Cloud.

  • Vision encoders: optionally attach a vision encoder to your model (mmproj files for supported GGUF models, or MobileVLM for RAG-based image recognition with Executorch/Cloud models).

  • Prompt settings: fully customisable prompts, just like before.

  • You can save any combination of the above as a "Custom Engine". An engine can be attached to a character, so you can configure different models/prompts for different characters, including some characters running local LLMs while others use Layla Cloud or the OpenAI API. Switching is seamless.


Improvements:

  • Added a table view for long-term memories.

  • Added the ability to configure the number of injected prompts for Long-Term Memory and Lorebook.


Bug fixes:

  • Fixed scrolling issues on some phones on the App page and in the Offline Translator language picker.
