IMPORTANT: this is a major version upgrade and clears all your cached data! Your chats may load slowly the first time.
Layla now supports any GGUF models on the internet!
You can download models from your own sources and load them into Layla. A new app, "Raw Model Instructions", has been added where you can configure the settings for your own GGUF models. While the inbuilt "Layla" models are still recommended for general use, any sufficiently intelligent LLM should be able to pick up Layla's prompt style within a few conversations, and may achieve better results in specific areas such as storytelling, roleplay, etc. This is an advanced feature; use it at your own discretion. Loading a model too large for your phone's memory will crash the app without warning.
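As an illustration of what a raw instruction template can look like, many community GGUF models use a ChatML-style prompt format. A sketch of such a template is below; the `{system}` and `{prompt}` placeholders are illustrative only, not Layla's exact template syntax, so check the model card for your specific model's expected format:

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

If a model produces rambling or malformed replies, a mismatched prompt template is the most common cause.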
- added ability to choose any GGUF model as base
- added new model: Goldfish. A tiny model about as intelligent as the world's smartest goldfish. This model runs on devices with less than 4GB of memory and is mainly here to get Apple off my back about not supporting the ancient devices they use to test with during the review process.
- you can now select your base model on first-time launch of Layla
- model selection screen now gives you visual cues recommending the best model for your device
- backend stability improvements
- To-do app is now enabled by default and limited to "Layla" character only
- fixed bug where auto-update feature was not pushing out updates to everyone properly
- fixed bug where "hands-free" mode was not working at all
Live on Google Play
Live on the App Store