[Case study] Bonfire - AI for Co-living
Overview
Bonfires.ai is a platform that creates a shared knowledge layer for AI agents, pairing a Delve intake module with a Web3-based “knowledge economy” for contribution and validation. The use case we tested was designed to enhance co-living environments, serving as a shared knowledge resource for people as well as agents.
The main purposes of a “Bonfire” (an instance of Bonfires.ai) are to support a paradigm shift in how we:
Perceive knowledge as a collaborative resource
Compensate for intellectual and research work
Enable AI-driven knowledge networks
We designed a co-living experience (called “Sanctuary”) to collectively beta test Bonfire, observing the tendencies to reach the following objectives:
Connect adoption to income
Effectively communicate the value of recording conversations to increase trust
Verify that the extraction process anonymizes results and supports privacy
Encourage collective exploration, coordination, and supportive interaction
Create an onboarding flow that optimises UX
Identify friction points of the Bonfire while using it to facilitate moments of joy
You can find the answers to those objectives in the last section of this article.
The Sanctuary (5 participants, July 23rd - Aug 3rd, in Noto, Sicily) allowed Josh, Bonfire's founder, and his team to simulate the future environment they want to build by observing natural behaviour that online beta testing can’t replicate.
“Being present with people testing it is irreplaceable — a continuous stream of feedback and a road to drive on… this focus lets me optimize my energy for maximum return.”
Process
We ran an observational study for Bonfire, loosely resembling the structure of a Design Sprint, which was split into the following phases.
We captured both quantitative data with initial and post-project surveys, and qualitative data with field tests and 4 interviews. This approach offered a holistic view of participant experiences with the technology.
Concept and prototype scoping
Gathering inspiration for how we can extend the current tool to realise the desired user experience
Breakdown and delegation of specific prototyping tasks
Prototyping
Collaboration on refining the Bonfire bot and extending some of the bot tasks with manual facilitation that can be automated in the future, like team reminders or specific group prompts and questions.
Testing
Observing how everyone interacts with the bot, and whether the observed and emerging behaviors of the group match with the intended experience
1:1 interviews to assess the personal experience
Closing survey to compare subjective changes in behavior and gather focused feedback
Results & Learnings
Bonfire proved genuinely useful in our co-living use case. The experiment showed that successful use depends on reducing UX friction and delivering a smooth onboarding. However, manual facilitation still outperformed the bot, suggesting adoption will require a clearly articulated value proposition for organisers—saving time, reducing effort, and centralising knowledge.
We also found that trust in data handling is achievable with transparency and consent, and that joyful, personality-driven interactions can boost engagement, but only if the experience is seamless.
Observed interactions
After introducing the bot to the group, we expected participants to engage in self-directed experimentation with Bonfire, interact with each other more, and use it as a primary point to coordinate resources. However, without baseline data, we could only track the self-directed experimentation. For future rounds, we will observe and measure participant interactions without the bot first, and compare the interaction patterns after introducing the Bonfire. This approach will help us objectively measure the bot's impact on group interactions and provide clear, quantifiable evidence of the influence of the technology.
That said, the underlying protocol allows future versions to automate some tasks of the facilitator (prompts for connecting with each other, suggesting field trips, calculating resources, etc.), which should effectively address the other points mentioned above.
Friction points
The primary friction point of Bonfire was the onboarding and UX. We addressed those issues manually and learned how they could be resolved with simple onboarding instructions and automated reminders.
For example, an onboarding presentation to the group explained how the data is stored and processed to address any privacy concerns. In the evenings, the group reflected on their experience together and planned the days ahead - something the bot could prompt in future versions. Additionally, exact calculations and tasks outside the scope of the current version of Bonfire were handled with GPT.
Strategic insights
During the event, results from our tests directly informed the prioritisation of engineering decisions, for example:
Improved the agent's decision-making for taking the right actions inside the message processing flow
Upgraded to the Claude Sonnet 4 model
Implemented retrieving query-relevant data from the vector store and storing important messages in the vector store during the message processing flow
Implemented dynamic label (topic) management using a new TNT service, and made the agent respond only to tags or replies
Implemented the first version of the user-facing agent overview frontend
Based on the observations and insights from interviews with the participants, we drew some conclusions about our initial aims:
Objective 1:
Can we connect adoption to income?
Long-term adoption will depend on a clear value proposition for specific target audiences and their needs. If the value proposition is directed towards reducing cost, effort, and friction for hosts and organisers, investing in running a Bonfire could become appealing to them.
Saving time on coordination tasks, “robot jobs”, calculations, etc., and keeping collective knowledge accessible from the same place can help facilitators focus on improving the human experience and open up time for productivity and collaboration.
With the current version, residents did not see enough value to switch from GPT or other mainstream LLMs. However, easier voice transcription, a better user experience, and more precise responses about shared knowledge could present enough benefits to use it in a co-living context.
Objective 2:
Can we effectively communicate the value of recording conversations to increase trust?
Yes. At the beginning of the Sanctuary, everyone expressed questions and concerns about the storage and handling of the input data. After an explanation and explicit consent, the group trusted the bot with their data.
It is worth mentioning that the group was also aware of the influence of a recorder on conversations and sometimes deliberately decided not to record.
Objective 3:
Can we prove that the extraction process de-personalises results and supports privacy?
No concerns or questions were raised regarding the extraction of data. In a high-trust environment with no interpersonal conflict, this feature may not be necessary. However, this could be explored in a future experiment where people might feel psychologically less safe or need to mediate a conflict.
Objective 4:
Can we encourage collective exploration, coordination, and supportive interaction?
Not yet, but potentially! The group explored new trips together, coordinated around tasks, and had constructive conversations about each other’s projects. However, the UX friction of the current version of Bonfire was higher than that of supporting these behaviors through manual facilitation; most conversations emerged from the “empty space” between structured activities, without prompts to the bot. That said, with a refined user experience and clearer affordances, the tasks of a facilitator could be taken over by the Bonfire.
Objective 5:
Can we create an onboarding flow that optimises UX?
Yes. A brief onboarding message, a survey to capture personal information, and an onboarding game that showcases the bot's most distinctive capabilities make it clear to users how to use the bot.
Objective 6:
Can we identify the friction points of the Bonfire and bring moments of joy?
Yes. Based on the prompts during the Sanctuary, the mix of the bot's goofiness and curiosity around the personal information fed to it can make the interaction feel fun and joyful. That said, joy comes as a consequence of a seamless user experience.
You can find the full report here:
Next up - Sanctuary breakdown
In the next article, I will cover our learnings from the Sanctuary itself - a combination of a decompressing co-living retreat, an intellectually stimulating skill-sharing environment, and a prototype design sprint for socio-technical infrastructure.
I will explain how we went about organising it, designing the experience for all participants, and where we plan to go from here.