Whiteboard AI agent
The context
Context-aware AI that assists teams in synthesising ideas and surfacing next steps without interrupting creative flow.
I led the end-to-end design of an AI agent embedded in Confluence Whiteboards that transforms freeform brainstorming into a connected, context-aware experience. The goal was to help teams turn ideas into action more efficiently by leveraging organisational context and generative suggestions — not just text prompts.
My role: UX research, design strategy, interaction design, prototyping, cross-team alignment.
Team context: Built under tight timelines with multi-team dependencies, spearheading new internal patterns for AI behaviours.
AI challenge: Designing trustworthy AI experiences to boost the likelihood that users would want the agent to keep working as their teammate.
Design principles I drove:
Assist, don’t interrupt: AI should augment workflows without pausing or pulling users out of their thinking.
Context over prompting: The board itself — not text prompts — should provide meaningful input to the AI.
Trust and control: AI outputs were designed to be skimmable and dismissible, reducing fear of “wrong” suggestions.
The agent will search for relevant content across the organisation
Problem and solution
Continue your train of thought by clicking on the ‘next step’ suggestions
To improve efficiency in the brainstorming space, we explored two complementary approaches: a prompt-based entry point and a proactive AI agent that uses board context and user intent to suggest and take action on work across the Atlassian ecosystem. The goal was to reduce friction while preserving creative flow, user agency, and trust.
Within a compressed timeframe of several weeks, we delivered both a working demo for TEAM 2025 and a longer-term vision for agent-driven experiences within whiteboards. This required close collaboration across seven teams spanning platform, AI, design systems, and product, where we aligned on shared patterns, constraints, and success criteria for agent behaviour.
The agent observes selected content and board-level signals to surface relevant suggestions inline at key moments, rather than relying solely on explicit prompts. We introduced new AI interaction logic and rules, including auto-generating content non-modally—without preview or confirmation steps—when confidence thresholds were met, and providing inline next-step suggestions that allow users to continue thinking without stopping to prompt.
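To make the non-modal behaviour concrete, here is a minimal sketch of the kind of confidence-gated logic described above. The names (AgentSuggestion, CONFIDENCE_THRESHOLD, applyToBoard, showInlineSuggestion) and the threshold value are illustrative assumptions, not Atlassian APIs or the shipped implementation.

```typescript
// Illustrative sketch: gate agent actions on a confidence score so that
// high-confidence output lands on the board directly (no modal, no confirm),
// while lower-confidence output appears as a dismissible inline suggestion.

interface AgentSuggestion {
  content: string;      // generated board content, e.g. sticky notes or a section
  confidence: number;   // model confidence in [0, 1]
  nextSteps: string[];  // follow-on suggestions the user can click to keep thinking
}

const CONFIDENCE_THRESHOLD = 0.8; // hypothetical value, tuned per action type

function applyToBoard(content: string): void {
  console.log('Added to board:', content);
}

function showInlineSuggestion(content: string, nextSteps: string[]): void {
  console.log('Inline suggestion:', content, '| next steps:', nextSteps);
}

function handleSuggestion(s: AgentSuggestion): void {
  if (s.confidence >= CONFIDENCE_THRESHOLD) {
    // High confidence: add content non-modally, without a preview or confirm step.
    applyToBoard(s.content);
  } else {
    // Lower confidence: surface a skimmable, dismissible suggestion instead.
    showInlineSuggestion(s.content, s.nextSteps);
  }
}
```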
Whiteboards enable fast, exploratory ideation, but teams struggle to synthesise and act on ideas at scale. We designed a context-aware AI agent embedded in Confluence Whiteboards that assists without interrupting creative flow, grounds suggestions in board context, and delivers predictable, controllable outcomes that users can trust.
Select content on the whiteboard and ‘reference’ it in the new prompt
Outputs were designed to be skimmable, dismissible, and easy to refine, supporting a human-in-the-loop experience. To reinforce our principles of trust, control, and transparency, we implemented a feedback system: if a user is unhappy with an output, they can specify why, and that feedback then shapes subsequent outputs for them.
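A rough sketch of how such a feedback loop could work is below. The types, reason categories, and the idea of folding feedback into a generation context are assumptions made for illustration; they are not the actual system.

```typescript
// Illustrative sketch: capture structured feedback on an AI output and fold it
// into the context used for subsequent generations for the same user.

type FeedbackReason = 'off-topic' | 'too-verbose' | 'wrong-tone' | 'other';

interface FeedbackRecord {
  outputId: string;
  reason: FeedbackReason;
  note?: string; // optional free-text explanation from the user
}

const userFeedback: FeedbackRecord[] = [];

function recordFeedback(record: FeedbackRecord): void {
  userFeedback.push(record);
}

// Later generations include accumulated feedback as steering context,
// so a complaint about one output influences the ones that follow.
function buildGenerationContext(boardContext: string): string {
  const steering = userFeedback
    .map(f => `Avoid: ${f.reason}${f.note ? ` (${f.note})` : ''}`)
    .join('\n');
  return `${boardContext}\n\nUser preferences from prior feedback:\n${steering}`;
}
```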
Because agent-based workflows were still emerging as an industry paradigm, the solution went through multiple iterations as we gradually aligned across core teams on responsible autonomy, technical feasibility, and appropriate levels of predictability and control.
The process
With the emergence of generative AI, we saw an opportunity to introduce an AI teammate that could assist in real time — without interrupting creative flow or undermining trust.
Key questions we needed to answer:
How can AI add value without overwhelming users?
How do we ground AI output in board context, not generic prompts?
How do we design for trust, predictability, and user control in an exploratory space?
I started by auditing the existing AI experiences across the company and connecting with key stakeholders from the teams we needed alignment with, since we were setting the standard for auto-populating agent actions, inline suggestions, and the concept of ‘referencing’ content within the org. I set up brainstorming sessions and joined other teams' spikes to get involved in the space. I also contacted our legal advisors for the most up-to-date policies on how companies use customer data, and read up on designing for trust.
I set up regular rituals with the key stakeholders and began to visualise concepts. These concepts went to weekly reviews and were iterated on as each team's expectations and requirements were aligned. The agent model alone went through 21 iterations, driven by team requirements, marketing requirements, and scope adjustments. We had core principles for the feature that we weren't going to budge on, e.g. it must auto-populate and there cannot be any confirmation buttons, so we used those principles to push back on other teams' requirements, even designing for those teams to show how our design could suit their needs as well. All of this was in the aim of reaching alignment quickly, as we had a strict deadline and not a lot of time.
Based on all the discussions in the wonder and explore phase, we formed principles that have held as more and more AI functionality gets built:
Assist, don’t interrupt: AI should augment workflows without pausing or pulling users out of their thinking.
Context over prompting: The board itself — not text prompts — should provide meaningful input to the AI.
Trust and control: AI outputs were designed to be skimmable and dismissible, reducing fear of “wrong” suggestions.
The engineers were equally under the pump due to the deadlines, so I took the initiative to set up a UX debt backlog. We blitzed the feature every day and sent it out to be dogfooded internally multiple times before the deadline.
During this time, I also worked alongside content designers to set the prompting logic up for success by mapping adjectives and core words to the content that could be created across Atlassian, since our prompting system works across multiple tools (image below). I also worked closely with the marketing team to co-create the slides for the TEAM25 presentation, where we would show the future-state version of our demo.
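A minimal sketch of what such a keyword-to-content mapping could look like follows. The specific keywords, tools, and content types are illustrative assumptions, not the mapping we shipped.

```typescript
// Illustrative sketch: match adjectives and core words in a prompt to the kind
// of content the agent should create, and in which tool.

type ContentTarget =
  | { tool: 'Confluence'; type: 'page' | 'whiteboard-section' }
  | { tool: 'Jira'; type: 'epic' | 'task' };

const keywordMap: Record<string, ContentTarget> = {
  plan: { tool: 'Confluence', type: 'page' },
  summarise: { tool: 'Confluence', type: 'whiteboard-section' },
  ticket: { tool: 'Jira', type: 'task' },
  roadmap: { tool: 'Jira', type: 'epic' },
};

function resolveTargets(prompt: string): ContentTarget[] {
  const words = prompt.toLowerCase().split(/\s+/);
  return words.filter(w => w in keywordMap).map(w => keywordMap[w]);
}

// e.g. resolveTargets('summarise this board into a roadmap')
// -> [{ tool: 'Confluence', ... }, { tool: 'Jira', type: 'epic' }]
```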
The impact
The work was shared in the CEO keynote at the company summit, Atlassian TEAM 25, in front of thousands. This resulted in a positive reception from our users and increased investment in our team.
The work contributed to broader internal standards for AI interaction design across Atlassian.
This project received two Atlassian Innovation Awards (2025) for AI-driven product outcomes.
Early feedback and internal adoption supported continued investment in AI capabilities within the Whiteboards team.