Four AI agents, one marketing platform, and a 62% drop-off rate before anyone published a single post. I slowed the system down on purpose — and that's what made it work.
An efficient AI marketing team that solves all your marketing problems.
PhotoG promised e-commerce brands something ambitious: a full AI marketing team in one platform. A Research Analyst to study your market. A Brand Strategist to position you. A Creative Director to build the posts. An Ops Manager to publish them. The technology worked. The experience didn't.
I watched eight people use PhotoG in test sessions. Same thing every time. They'd type in their product, the AI would start working, and within thirty seconds their eyes would glaze. They didn't know which agent was active, what stage they were in, or whether they could change anything. Three root causes, one pattern.
A single "Thinking..." spinner for a four-agent system. No indication of who was working or what they were doing. Five of eight testers used the exact same phrase: black box.
Research, strategy, content creation, and publishing — all dumped into one scrolling chat thread. No stages, no structure, no breadcrumbs. People lost track of where they were.
The flow auto-advanced from one agent to the next. No checkpoints. No approval gates. Users were watching a machine make decisions for them — and they hated it.
Here's the counterintuitive thing about agentic AI: people don't want instant results. They want to feel like they're part of the process. Speed without comprehension is just noise. So I designed friction back into the system — deliberate pauses where users could read, review, and decide before the next agent took over.
Zero friction in a multi-agent system means zero trust. A "Continue" button in v3.5 isn't a speed bump; it's a moment where a user goes from watching to owning.
Mapped every step of v3.0 across 8 user sessions. Found 12 friction points — and identified exactly which ones mattered.
Introduced three "Continue" checkpoints between agents. Each one is a conscious decision to proceed, not an automatic handoff.
Split the screen. Left side: the conversation with the AI. Right side: the editable deliverable. No more guessing what the system produced.
Added a stepper showing which agent is active, what it's doing, and what comes next. Real-time thinking states replaced the generic spinner (sketched below).
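To make the stage-gate model concrete, here's a minimal sketch of the flow state in TypeScript, since PhotoG is a web product. Everything in it is an assumption for illustration: `Stage`, `StageStatus`, `FlowState`, and `approveStage` are invented names, not PhotoG's actual code.

```ts
// Hypothetical sketch of the stage-gate flow. Each agent is a stage;
// the flow only advances when the user explicitly approves.
type Stage = "research" | "strategy" | "content" | "publish";

type StageStatus =
  | { kind: "pending" }
  | { kind: "thinking"; detail: string }   // drives the real-time stepper text
  | { kind: "awaiting_approval" }          // the "Continue" checkpoint
  | { kind: "approved" };

interface FlowState {
  order: Stage[];
  status: Record<Stage, StageStatus>;
}

// Advance only on a deliberate user click, never automatically.
function approveStage(state: FlowState, stage: Stage): FlowState {
  if (state.status[stage].kind !== "awaiting_approval") return state;
  const next = state.order[state.order.indexOf(stage) + 1];
  const status = { ...state.status, [stage]: { kind: "approved" } as StageStatus };
  if (next) status[next] = { kind: "thinking", detail: "Starting…" };
  return { ...state, status };
}
```

The point of the model is what it lacks: there is no code path that moves from one stage to the next without `approveStage` being called by a user action.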
The redesigned flow treats each agent as a chapter. You don't move to the next one until you've read what the current one produced and decided it's right. Each chapter ends in a concrete, editable deliverable (sketched after the four agents below).
Pulls industry data and competitor analysis. Delivers an editable report with source verification — every claim linked back to where it came from.
Takes the research and builds positioning. Reference posts, tone guidelines, strategic recommendations. All reviewable before the next step.
Generates post content and product images based on the approved strategy. This was where 5 of 8 users dropped off in v3.0 — now they stay.
Handles the logistics: timing, hashtags, platform formatting. One final preview before anything goes live.
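Each of those four stages ends in a reviewable artifact. Here's a hedged sketch of what those deliverables might look like as data; every type and field name is invented for illustration, and the verified/synthesized split anticipates the green/amber source tags described under failure modes below.

```ts
// Hypothetical deliverable shapes, one per agent. All fields are
// assumptions for illustration, not PhotoG's real data model.
type SourceTag =
  | { kind: "verified"; url: string }   // green: traced to a real source
  | { kind: "synthesized" };            // amber: inferred by the model

interface ResearchReport {
  claims: { text: string; source: SourceTag }[];
  competitors: string[];
}

interface BrandStrategy {
  positioning: string;
  toneGuidelines: string[];
  referencePosts: string[];
}

interface ContentDraft {
  postCopy: string;
  imageUrl?: string; // absent when image generation fails (see failure modes)
}

interface PublishPlan {
  scheduledAt: string; // ISO 8601 timestamp
  hashtags: string[];
  platformFormat: string; // e.g. "instagram"
}
```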
Every agent transition requires a deliberate click. Not because the system needs it — because the person does. After the redesign, 7 of 8 users said they felt in control. In v3.0, it was 1.
The conversational panel follows the AI's reasoning. The deliverable panel shows what it actually produced — editable, downloadable, real. You can read and act at the same time.
Under the hood, agents run concurrently. The user never sees that. They experience a clear sequence: research, then strategy, then content, then publish. Systems thinking, translated into narrative.
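One way to get that behavior, sketched under the assumption that each agent exposes an async run function: start work early, but reveal results strictly in narrative order. `runAgent`, `waitForApproval`, and `reveal` are all hypothetical names, and a real implementation would also wire each approved output into the next agent's input.

```ts
// Hypothetical: agents may run concurrently, but the user only ever
// sees results in narrative order, gated by explicit approval.
type Stage = "research" | "strategy" | "content" | "publish";

async function runFlow(
  runAgent: (stage: Stage) => Promise<unknown>,
  waitForApproval: (stage: Stage) => Promise<void>,
  reveal: (stage: Stage, result: unknown) => void,
): Promise<void> {
  const stages: Stage[] = ["research", "strategy", "content", "publish"];
  // Kick everything off concurrently; real data dependencies would gate some starts.
  const running = stages.map((s) => runAgent(s));
  for (let i = 0; i < stages.length; i++) {
    reveal(stages[i], await running[i]); // show this chapter's deliverable
    await waitForApproval(stages[i]);    // the deliberate "Continue" click
  }
}
```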
Same four agents. Same underlying technology. Completely different experience.
Everything in a single scrolling pane. The AI auto-advanced between agents. No progress indicator, no control, no reason to trust it. 62% left before their first post.
Conversation on the left, deliverable on the right. A stepper showing exactly where you are. Three "Continue" gates where you decide when to move forward. 87% said they felt in control.
Any system with four AI agents running sequentially is going to break sometimes. The question isn't whether — it's how gracefully. I designed for every failure mode I could find.
When the Research Analyst can't find enough data, it doesn't just say "no results." It shows a "Low Confidence" state with alternative queries and explains what went wrong.
Every claim in the research report gets a source tag. Green means verified from a real source. Amber means the AI synthesized it. Users know exactly what to trust.
Each "Continue" gate doubles as a save point (sketched after this list). Close the browser mid-flow, come back tomorrow, and you're exactly where you left off, with all previous agent work preserved.
If the Creative Director's image generation fails, users see a placeholder with two options: regenerate with adjusted parameters, or upload their own. The flow never dead-ends.
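As a sketch of the save-point behavior above, here's what checkpoint persistence could look like, assuming browser localStorage purely for illustration; the key name and `Checkpoint` shape are invented, since PhotoG's actual persistence layer isn't described.

```ts
// Hypothetical save-point persistence. Every "Continue" click writes a
// checkpoint; reopening the app restores it.
const CHECKPOINT_KEY = "photog.flow.checkpoint";

interface Checkpoint {
  stage: string;                         // where the user left off
  deliverables: Record<string, unknown>; // all previous agent work
}

function saveCheckpoint(cp: Checkpoint): void {
  localStorage.setItem(CHECKPOINT_KEY, JSON.stringify(cp));
}

function restoreCheckpoint(): Checkpoint | null {
  const raw = localStorage.getItem(CHECKPOINT_KEY);
  return raw ? (JSON.parse(raw) as Checkpoint) : null;
}
```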
My role went beyond screens. I defined the interaction architecture, set the design principles, and made the case to stakeholders that slowing down the AI was the right call.
Defined the entire interaction architecture — the stage-gate model, the dual-panel layout, the transparency system. Led engineering reviews to make sure the frontend could support real-time agent state changes without jank.
The hardest sell was convincing stakeholders that adding friction to an AI product was a feature, not a bug. I presented the v3.5 strategy using session recordings and drop-off data. They funded the full restructure.
Jia, Danni, and Jingyi owned the visual execution — component design, illustration, motion. We synced weekly. Their constraints around animation performance and component reuse actually strengthened the architecture.
The most impactful thing I designed for PhotoG was a pause. Not a feature, not an animation — a moment where someone stops and thinks before continuing. I'll use that pattern again.
Every user in our tests preferred watching the AI work step-by-step over getting instant results. Understanding builds trust. Trust builds retention. Speed alone doesn't.
Engineers think in parallel processes. Users think in narratives — first this happens, then that. The designer's job is to bridge those two worlds without losing either one.
A hospital scheduling system that replaced phone trees and spreadsheets with something the staff could actually use at three in the morning.
Read the case study →