Macaron GenUI: Graphical Personal Agent with Generative UI
At Mind Lab, we are launching Preview — a new space to share the ideas we are most excited about before they become production-ready. The first direction we want to highlight is Generative User Interface.
Most LLM-driven apps today are still primarily text-in / text-out. For some tasks, however, text alone falls short. Text has low information density for visual data, increases cognitive load, and creates friction when users need to choose or act (e.g., maps, charts, multi-option choices) [1]. Generative UI addresses this gap by allowing the model to produce interactive, structured interfaces during a conversation — a natural fit for flexible AI apps.
We focus on declarative generative UI, which strikes a practical balance between flexibility and safety in UI generation. Building on Google’s A2UI protocol [2], we've been working to deliver better user experiences.
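The declarative idea can be pictured as the agent emitting a typed component tree as data, which the client renders from a fixed set of known component types — the model never ships executable code. Below is a minimal TypeScript sketch under an illustrative, hypothetical schema (the `UIComponent` types and `render` function are our own invention, not the actual A2UI wire format):

```typescript
// Hypothetical component schema for illustration only — not the A2UI spec.
type UIComponent =
  | { type: "text"; value: string }
  | { type: "button"; label: string; action: string }
  | { type: "column"; children: UIComponent[] };

// The renderer only knows a fixed whitelist of component types, so the
// agent can compose rich interfaces while the client retains control over
// what actually appears — the safety side of the declarative trade-off.
function render(c: UIComponent): string {
  switch (c.type) {
    case "text":
      return `<span>${c.value}</span>`;
    case "button":
      return `<button data-action="${c.action}">${c.label}</button>`;
    case "column":
      return `<div>${c.children.map(render).join("")}</div>`;
  }
}

// Example: a model reply rendered as a small multi-option choice card.
const reply: UIComponent = {
  type: "column",
  children: [
    { type: "text", value: "Pick a destination:" },
    { type: "button", label: "Tokyo", action: "select:tokyo" },
    { type: "button", label: "Paris", action: "select:paris" },
  ],
};
console.log(render(reply));
```

Because the tree is plain data, it can be streamed, validated, and diffed like any other structured model output, while user actions come back as named events rather than arbitrary callbacks.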
Check out the demos below:
- Left: a live chat panel where you can generate and apply A2UI components by simply chatting. Publish the components when ready.
- Right: a live gallery of community-published components. Browse what others have created and see what AI can build!
We've also curated Macaron GenUI [3] and open-sourced it. Explore it to interact with Macaron-style A2UI components!
- Composer Chat — AI replies with Macaron-styled UI.
- Gallery — composed examples built from our primitives.
- Components & Icons — the reusable building blocks used to construct the Gallery.
These previews are early explorations of what AI-native interfaces might feel like: more direct, more visual, and better at conveying information. We’ll follow up with model-level work and deeper experiments soon — stay tuned!
This is exactly what Preview is for: sharing the frontier as it becomes usable.
Try the demo here:

References
- [1] Generative Interfaces for Language Models (Chen et al., 2025)
- [2] A Protocol for Agent-Driven Interfaces (A2UI, 2025)
- [3] Macaron GenUI (github repo coming soon)