Layla v6.1.0 has been published!
- Layla
- 2 days ago
This update allows you to use Layla as your phone's default assistant, replacing Google Gemini!
Here's a tutorial on how to change the default assistant for your Samsung phone: https://www.samsung.com/us/support/answer/ANS10001575/
Step 1: Go to your Settings.
Step 2: Search for "digital assistant".
Step 3: Tap on the Digital assistant app setting.
Step 4: Change the default assistant to Layla.
Voila! When you long-press the power button, Layla will come up as your default assistant!
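If you're curious what those steps do under the hood, Android exposes the same screen through a settings intent. Here's a minimal Kotlin sketch of how an app could deep-link you there; the helper name is made up for illustration and this is not Layla's actual code:

```kotlin
import android.content.Context
import android.content.Intent
import android.provider.Settings

// Hypothetical helper (not Layla's code): jump straight to the system screen
// where the default digital assistant is chosen, instead of navigating Settings by hand.
fun openAssistantSettings(context: Context) {
    val intent = Intent(Settings.ACTION_VOICE_INPUT_SETTINGS)
        .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK) // needed when launching outside an Activity
    if (intent.resolveActivity(context.packageManager) != null) {
        context.startActivity(intent)
    } else {
        // Some OEM builds hide this screen; fall back to the main Settings app.
        context.startActivity(Intent(Settings.ACTION_SETTINGS).addFlags(Intent.FLAG_ACTIVITY_NEW_TASK))
    }
}
```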
You can set a default character in Layla, so the assistant that gets brought up is your own character! This gives a new dimension to your personal assistant on your phone.
Changelog
New features:
- Layla can replace Google Gemini as your phone's default assistant! (Go to Settings -> Default Digital Assistant -> configure it to use Layla)
- Press and hold the power button to bring up Layla!
- Combined with the new agentic framework in v6, Layla can complete various tasks without opening the main app (see the sketch after this list for the general idea)
- As I expand the agentic framework, Layla will be able to complete more tasks on your phone
- Your Inference Settings are brought over to Layla Assistant; all models supported by Layla will work, whether it's a local GGUF model, a connection to your PC, or Layla Cloud
- Personalised assistant (hint: set a default character in Layla and the assistant will bring up "Ask [your character]" instead of Layla)
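To make "complete tasks without opening the main app" a bit more concrete, here is a rough Kotlin sketch of a generic tool-calling agent loop. The interfaces and names are assumptions for illustration only, not Layla's actual agentic framework:

```kotlin
// A minimal sketch of an assistant-style agent loop: the LLM either answers
// directly or requests a tool call, the tool result is fed back, and the loop
// continues until the model produces a final answer.
sealed interface LlmStep {
    data class ToolCall(val name: String, val args: Map<String, String>) : LlmStep
    data class FinalAnswer(val text: String) : LlmStep
}

// Hypothetical interfaces standing in for the real model backend and tools.
interface LlmBackend { fun next(history: List<String>): LlmStep }
interface Tool { val name: String; fun run(args: Map<String, String>): String }

fun runAgent(backend: LlmBackend, tools: List<Tool>, userRequest: String): String {
    val history = mutableListOf("user: $userRequest")
    repeat(8) { // cap the number of steps so a confused model can't loop forever
        when (val step = backend.next(history)) {
            is LlmStep.FinalAnswer -> return step.text
            is LlmStep.ToolCall -> {
                val tool = tools.firstOrNull { it.name == step.name }
                val result = tool?.run(step.args) ?: "error: unknown tool ${step.name}"
                // Feed the tool result back instead of stopping here, so the model
                // can chain several tools (set an alarm, then confirm it, etc.).
                history += "tool(${step.name}): $result"
            }
        }
    }
    return "Sorry, I couldn't finish that task."
}
```

The important detail is that each tool result is appended to the conversation and the loop continues, so the model can chain several tools before giving its final answer.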
Improvements:
- Layla now supports the Stable Diffusion model format used by the local-dream app (Layla and local-dream converted models can be used interchangeably)
- Improved the set alarm agent
- Fixed small bugs in the calendar agent
- Layla agents support RSS feeds (see the sketch after this list for what an RSS tool can look like)
- You can long-press on images generated during chat to save them
- You can preview images generated in the Stable Diffusion mini-app in full screen
- MobileVLM is updated to v2, improving image recognition capabilities
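As a purely illustrative example of the RSS support mentioned above, here is a small Kotlin sketch of what an RSS tool for an agent could do; the function name and output format are assumptions, not Layla's API:

```kotlin
import java.net.URL
import javax.xml.parsers.DocumentBuilderFactory

// Hypothetical RSS tool: fetch a feed, extract the latest item titles,
// and return them as plain text an LLM agent can summarise or read aloud.
fun fetchRssHeadlines(feedUrl: String, limit: Int = 5): List<String> {
    val doc = DocumentBuilderFactory.newInstance()
        .newDocumentBuilder()
        .parse(URL(feedUrl).openStream())
    val items = doc.getElementsByTagName("item")
    val headlines = mutableListOf<String>()
    for (i in 0 until minOf(limit, items.length)) {
        val children = items.item(i).childNodes
        for (j in 0 until children.length) {
            if (children.item(j).nodeName == "title") {
                headlines += children.item(j).textContent.trim()
            }
        }
    }
    return headlines
}

fun main() {
    // Works with any RSS 2.0 feed; on Android this should run off the main thread.
    fetchRssHeadlines("https://news.ycombinator.com/rss").forEach(::println)
}
```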
Bug fixes:
- Fixed a bug where idle time settings were not being saved in Voice Chat
- Fixed a bug where agents would stop after the first LLM tool call