
Drop a topic, watch AI agents debate it live — then a landing page writes itself from the conclusions.
WIGENT is a multi-agent debate platform where a PM agent orchestrates auto-spawned domain experts in a Slack-style chat UI. Agents debate, challenge each other, retire, and summon new specialists. Once consensus is reached, the chat transforms into a landing page generated from the debate conclusions. Grand Prize winner at Build with TRAE Seoul — built by 3 engineers in 3.5 hours.
Solo brainstorming is biased, team discussions take hours, and AI chat tools give single-perspective responses with no debate or convergence. There's no way to get the clash of viewpoints that produces strong ideas — quickly, with a tangible deliverable.
AI agents autonomously debate your topic in a Slack-style UI — PM anchors the discussion, domain experts argue and get replaced, and the debate converges into a structured business idea that instantly renders as a landing page from 9 design templates.
AsyncGenerator-based debate engine controls 30-turn free debates with automatic phase transitions, agent retirement at turns 12 and 22, and forced convergence after turn 25.
9 design templates (Glassmorphism, Neobrutalism, Editorial, etc.) render instantly when the debate concludes — no 60-second GPT wait. Dual-path rendering keeps GPT generation as a silent fallback.
Full Slack-style dark-theme chat with typing indicators, agent join/leave system messages, sidebar with online status, and Framer Motion page transitions.
Users can reject the result. The PM announces the rejection, the team runs 8 more turns of focused debate, and a new landing page is generated — all without the user typing into the debate.
WIGENT is a multi-agent debate platform built in 3.5 hours during the Build with TRAE Seoul Hackathon (2026-03-28). Three engineers, each using Claude Code for parallel development, took the project from PRD to a working product in 55 minutes — then spent the remaining 2 hours on feature additions and polish.
The core idea: when you brainstorm alone, your thinking is biased. Team discussions take time. Existing AI tools give you a single response with no clash of perspectives. WIGENT fills that gap — AI agents autonomously debate your idea, challenge each other, and the chat UI itself transforms into the landing page they built from the conclusions.
Brainstorming a business idea is broken in three ways:
- Solo brainstorming is biased toward a single perspective.
- Team discussions produce real clash, but they take hours.
- AI chat tools return one viewpoint, with no debate and no convergence.
We wanted the collision and convergence of a real team debate — but driven by AI agents, in minutes, with a tangible deliverable at the end.
The user drops a topic. WIGENT does the rest:
- A PM agent anchors the discussion and auto-spawns domain experts suited to the topic.
- The agents run a 30-turn free debate with automatic phase transitions; experts retire at turns 12 and 22 and new specialists join.
- After turn 25 the prompts force convergence, and the debate ends in a structured business idea.
- The chat instantly transforms into a landing page rendered from one of 9 design templates.
If the user doesn't like the result, they can reject it. The PM announces the rejection, the team runs 8 more turns of debate, and a new landing page is generated. Human-in-the-Loop, without typing a single word into the debate itself.
WIGENT is a Next.js 16 full-stack application with three cleanly separated layers:
- Orchestrator: an AsyncGenerator that drives the debate and yields typed events.
- API route: consumes the generator and streams the events to the browser as SSE.
- Frontend: a useDebate hook that parses the SSE stream and dispatches actions to a useReducer.
This separation means the orchestrator knows nothing about HTTP, the API route knows nothing about debate logic, and the frontend knows nothing about GPT-4o. Each layer communicates through typed SSE events — 13 event types, all defined in a shared types.ts.
Three engineers needed to work simultaneously on a codebase that didn't exist yet. The solution: define the contract before writing any implementation.
At 13:43 (minute 43 of the hackathon), a 281-line types.ts was committed. It defined every interface the team would need: Agent, ChatItem, SSEEvent, DebateState, FinalIdea, HistoryEntry, and all 13 SSE event payloads. This file was the contract — the single source of truth that enabled zero-conflict parallel work.
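The full 281-line file isn't reproduced here, but its core idea can be sketched as a discriminated union of event payloads. Apart from the names Agent, SSEEvent, and agent_retire, which appear in this write-up, the field shapes and event variants below are illustrative assumptions, not the real contract:

```typescript
// Hypothetical sketch of the shared contract in types.ts.
// Field shapes and most event names are assumptions for illustration.
interface Agent {
  id: string;
  name: string;
  role: string;
  isFixed: boolean; // true for the persistent PM and Designer agents
  online: boolean;
  speakCount: number;
}

// A discriminated union lets every layer switch on `type` safely.
type SSEEvent =
  | { type: "agent_message"; agentId: string; text: string }
  | { type: "agent_join"; agent: Agent }
  | { type: "agent_retire"; agentId: string; farewell: string }
  | { type: "phase_change"; phase: "early" | "mid" | "late" | "closing" }
  | { type: "debate_end"; summary: string };

// The frontend reducer can handle each variant exhaustively:
function describe(ev: SSEEvent): string {
  switch (ev.type) {
    case "agent_message": return `${ev.agentId}: ${ev.text}`;
    case "agent_join":    return `${ev.agent.name} joined`;
    case "agent_retire":  return `${ev.agentId} left: ${ev.farewell}`;
    case "phase_change":  return `phase → ${ev.phase}`;
    case "debate_end":    return "debate concluded";
  }
}
```

Because every payload carries a literal `type`, TypeScript narrows the union inside each `case`, so a new event type added to the contract immediately surfaces as a compile error in every consumer that doesn't handle it.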
| Stream | Engineer | Committed | Output |
|---|---|---|---|
| P1: Backend Core | hwcho | 13:48 | orchestrator.ts, prompts.ts, SSE API route |
| P2: Slack Chat UI | swson | 13:49 | 10 components (ChatLayout, Sidebar, ChatMessage, etc.) |
| P3: Hooks + I/O | hskim | 13:49 | useDebate (441 lines), TopicInput, LandingPageView |
The merge at 13:51 had zero conflicts. Integration took 3 minutes. By 13:54 — 55 minutes into the hackathon — the prototype was functional end-to-end.
The lesson: when time is the constraint, invest 5 minutes defining interfaces upfront. That small investment prevented every merge conflict and misunderstanding for the remaining 3 hours.
The initial PRD described a straightforward chat UI with result cards. Over 3 hours, four pivots transformed it into something much more impactful. Each pivot followed the same pattern: problem noticed → solution designed → implemented within 30 minutes.
The hackathon judging was three rounds: (1) AI visits the URL and evaluates the frontend, (2) full code review, (3) human panel. We reverse-engineered the architecture from the judging criteria.
| Round | Criteria | Our Response |
|---|---|---|
| 1st (AI) | Frontend visual quality | Slack dark-theme UI — familiar yet polished |
| 2nd (Code) | Code structure & quality | TypeScript strict + single-responsibility components |
| 3rd (Human) | Demo impact | "The chat transforms into a landing page" — Full Page Swap |
The Slack-style UI was chosen because it naturally visualizes agent join/leave events, typing indicators, and multi-participant conversations — exactly the patterns our multi-agent system needed to showcase. The Full Page Swap (chat → landing page transition) became the demo's "wow moment" that the human judges remembered.
The original design used 3 structured rounds: R1 Brainstorming → R2 Debate → R3 Convergence, with 10 GPT-4o calls total. This was scrapped in favor of a 30-turn free debate with automatic phase transitions.
| Aspect | Before (Rounds) | After (Free Debate) |
|---|---|---|
| Structure | 3 rigid rounds | 30 turns, 4 auto-phases (early/mid/late/closing) |
| Agent swaps | Round boundaries | Deterministic turns (12, 22) |
| GPT calls | 10 | ~35 |
| Naturalness | Artificial round breaks | Continuous conversation flow |
| Demo stability | Complex edge cases | Predictable, reliable progression |
Why: Real meetings don't have "Round 2" announcements. The round transitions felt artificial and created edge cases in the orchestrator logic. Free debate with phase-based prompt adjustments ("argue harder" in mid-phase, "converge now" in late-phase) produced more natural conversations and far fewer bugs.
Trade-off: GPT-4o calls tripled (10 → 35+), increasing cost and debate duration from 2-3 to 5-8 minutes. We accepted this because debate quality and demo stability mattered more than speed at a hackathon.
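The phase logic above reduces to a turn-to-phase mapping plus deterministic swap turns. A minimal sketch: the 4 phase names, the turn-12/22 swaps, and the post-25 convergence point come from this write-up, while the early/mid boundary is an assumption:

```typescript
// Turn-to-phase mapping for the 30-turn free debate.
// The early/mid boundary (turn 10) is an illustrative assumption.
type Phase = "early" | "mid" | "late" | "closing";

function getPhase(turn: number): Phase {
  if (turn <= 10) return "early"; // open brainstorming
  if (turn <= 22) return "mid";   // "argue harder"
  if (turn <= 25) return "late";  // "converge now"
  return "closing";               // forced convergence
}

// Agent retirement/spawn happens at deterministic turns:
const SWAP_TURNS = new Set([12, 22]);

function shouldSwapAgent(turn: number): boolean {
  return SWAP_TURNS.has(turn);
}
```

Keeping both functions pure makes the orchestrator's progression predictable, which is exactly the demo-stability property the pivot was after.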
The original flow: debate ends → GPT-4o generates full HTML → user waits 30-60 seconds → landing page appears. During a hackathon demo, 60 seconds of "Generating..." is fatal.
The solution was a dual-path rendering strategy:
This eliminated the wait entirely. The moment the debate concluded, the landing page appeared. The 9 templates also provided consistent visual quality — GPT-generated HTML varied wildly in quality and sometimes got refused entirely by the model.
Trade-off: Templates can't produce truly custom designs per idea. But consistent, instant, high-quality output beat unpredictable, slow, variable-quality output — especially for a live demo.
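The dual-path strategy can be sketched as follows. renderFromTemplate, renderLandingPage, and the FinalIdea fields are hypothetical names, and the inline markup is a stand-in for the real 9 templates:

```typescript
// Dual-path rendering sketch: a deterministic template renders the
// landing page instantly, while GPT generation runs silently in the
// background as an optional upgrade. All names here are illustrative.
interface FinalIdea { title: string; tagline: string; features: string[] }

function renderFromTemplate(idea: FinalIdea, template: string): string {
  // Instant path: fill a prebuilt template with the debate's conclusions.
  return `<!DOCTYPE html><html><body class="${template}">
    <h1>${idea.title}</h1><p>${idea.tagline}</p>
    <ul>${idea.features.map(f => `<li>${f}</li>`).join("")}</ul>
  </body></html>`;
}

async function renderLandingPage(
  idea: FinalIdea,
  generateWithGpt: (idea: FinalIdea) => Promise<string | null>,
  onUpdate: (html: string) => void,
): Promise<void> {
  // Path 1: show the template immediately, so there is no waiting state.
  onUpdate(renderFromTemplate(idea, "glassmorphism"));
  // Path 2: GPT runs silently; swap in its output only if it succeeds.
  try {
    const html = await generateWithGpt(idea);
    if (html) onUpdate(html);
  } catch {
    // Keep the template; the user never sees the failure.
  }
}
```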
With 30 turns of free debate, agents kept introducing new ideas right up to the last turn. The debate would end without consensus, and the summary prompt couldn't extract a coherent conclusion from a divergent conversation.
The fix was simple but critical: after turn 25, the system prompt switches to a forced-convergence instruction that tells agents to stop introducing new ideas and instead agree on and refine the strongest direction.
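A minimal sketch of that prompt switch; the convergence wording below is an assumption inferred from the observed behavior, not the actual prompt:

```typescript
// After turn 25 the per-turn system prompt is augmented with a
// convergence directive. The directive text here is illustrative.
const CONVERGENCE_TURN = 25;

function buildSystemPrompt(basePersona: string, turn: number): string {
  if (turn <= CONVERGENCE_TURN) return basePersona;
  return (
    basePersona +
    "\n\nThe debate is closing. Do NOT introduce new ideas. " +
    "Agree with or refine the strongest proposal so far, and " +
    "state one concrete, actionable conclusion."
  );
}
```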
This transformed the last 5 turns from divergent brainstorming into natural consensus-building. The agents started agreeing, refining, and producing actionable conclusions — exactly what the summary prompt needed to generate a strong final result.
The hackathon theme was agent design, so we deliberately incorporated as many patterns as possible into a single coherent system:
| Pattern | Implementation | Purpose |
|---|---|---|
| Orchestrator | orchestrator.ts — AsyncGenerator controlling the full debate lifecycle | Central coordination of agent turns, phases, and state transitions |
| Specialist Agent | createAgentPrompt — auto-generates domain expert based on topic | Topic-appropriate expertise (e.g., marketer for a marketing topic) |
| Persistent Agent | PM_AGENT & DESIGNER_AGENT with isFixed: true | Anchors that prevent scope creep and ensure design perspective |
| Agent Spawning | doRetireSpawn — new agent creation at turns 12 and 22 | Fresh perspectives when the debate needs a new angle |
| Agent Retirement | agent_retire event with natural exit message | Graceful departure when an agent's expertise is exhausted |
| Multi-turn Debate | 30-turn free debate with 4 automatic phases | Deep exploration before convergence |
| Result Synthesis | finalResultPrompt — investor-pitch-level structured output | Actionable output, not just conversation logs |
| Human-in-the-Loop | Reject → 8 additional turns → new result | User control without direct debate participation |
SSE + AsyncGenerator Pattern. The orchestrator is an async generator function that yields typed SSE events. The API route consumes these with for await...of and writes them as SSE text. The frontend hook parses the SSE stream and dispatches actions to a useReducer. This three-layer separation keeps each concern isolated — the orchestrator doesn't know about HTTP, the route doesn't know about debate logic, and the frontend doesn't know about GPT-4o.
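A toy version of that pipeline, with the debate logic stubbed out (the real orchestrator calls GPT-4o each turn; orchestrate and toSSE are illustrative names):

```typescript
// Minimal three-layer SSE pipeline. Each layer is reduced to its shape.
type DebateEvent = { type: string; payload?: unknown };

// Layer 1: orchestrator, an async generator that yields typed events
// and knows nothing about HTTP.
async function* orchestrate(topic: string): AsyncGenerator<DebateEvent> {
  yield { type: "debate_start", payload: { topic } };
  yield { type: "agent_message", payload: { agentId: "pm", text: "..." } };
  yield { type: "debate_end" };
}

// Layer 2: API route, which consumes the generator with for await...of
// and writes SSE frames, knowing nothing about debate logic.
async function toSSE(topic: string): Promise<string> {
  let out = "";
  for await (const ev of orchestrate(topic)) {
    out += `data: ${JSON.stringify(ev)}\n\n`; // SSE wire format
  }
  return out;
}
```

On the other side of the wire, the frontend hook would parse each `data:` frame back into a typed event and dispatch it to the reducer, completing the loop without any layer knowing about its neighbors' internals.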
Speaker Selection Algorithm. Each turn, the system selects the next speaker by: (1) filtering to online agents only, (2) excluding the last speaker to prevent consecutive turns, (3) sorting by speak count ascending to ensure balanced participation. Simple, but effective — every agent gets roughly equal airtime.
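That selection rule is small enough to sketch in full; the Speaker shape is a simplified assumption:

```typescript
// Speaker selection as described: online agents only, no back-to-back
// turns, least-spoken agent first for balanced airtime.
interface Speaker { id: string; online: boolean; speakCount: number }

function pickNextSpeaker(
  agents: Speaker[],
  lastSpeakerId: string | null,
): Speaker | null {
  const candidates = agents
    .filter(a => a.online)                // (1) online agents only
    .filter(a => a.id !== lastSpeakerId); // (2) no consecutive turns
  if (candidates.length === 0) return null;
  // (3) sort by speak count ascending → the quietest agent speaks next
  return candidates.sort((a, b) => a.speakCount - b.speakCount)[0];
}
```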
Agent Persona Engineering. The quality of debate is entirely determined by persona design. Generic descriptions like "market-focused thinker" produce boring conversations. Specific speech patterns produce engaging ones: the PM says "근데 이거 누가 쓰는데?" ("But who would actually use this?"), the Designer says "유저가 3초 안에 이해 못 하면 실패" ("If the user can't understand it in 3 seconds, it's a failure"). These speech habits make the agents feel like real people arguing in a meeting room.
Random Inter-Agent Delay. Between each agent's turn, a random delay of 800-2500ms is injected. Without it, agents respond instantly one after another, breaking the illusion of a real conversation. This small detail significantly improves demo immersion.
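As a sketch, the delay is a one-liner plus a sleep helper:

```typescript
// Random inter-agent delay in the 800–2500 ms range described above.
function randomDelayMs(min = 800, max = 2500): number {
  return min + Math.floor(Math.random() * (max - min + 1));
}

const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

// Between agent turns the orchestrator would do:
//   await sleep(randomDelayMs());
```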
GPT-4o Refusal Handling. GPT-4o sometimes refuses to generate landing page HTML, treating it as a security concern. We addressed this by: (1) switching the landing page prompt to English, (2) framing it as "a design prototype with placeholder demo content," (3) detecting refusals by checking for <!DOCTYPE or <html tags, and (4) falling back to a minimal HTML template when generation fails.
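The detection-plus-fallback half of that fix (steps 3 and 4) can be sketched like this; looksLikeHtmlDocument and withFallback are hypothetical names, and the fallback markup is a stand-in for the real minimal template:

```typescript
// Refusal detection sketch: GPT output that doesn't look like an HTML
// document is treated as a refusal and replaced by a fallback page.
function looksLikeHtmlDocument(output: string): boolean {
  const head = output.trimStart().slice(0, 200).toLowerCase();
  return head.startsWith("<!doctype") || head.includes("<html");
}

function withFallback(gptOutput: string, title: string): string {
  if (looksLikeHtmlDocument(gptOutput)) return gptOutput;
  // Minimal fallback template; the assumed shape, not the real one.
  return `<!DOCTYPE html><html><body><h1>${title}</h1></body></html>`;
}
```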
| Time | Milestone | Impact |
|---|---|---|
| 12:59 | PRD v1.0 written | Initial concept: "Agent Arena" |
| 13:37 | PRD v2.0 — Slack UI + Full Page Swap pivot | Demo strategy locked in |
| 13:43 | types.ts contract (281 lines) committed | Parallel development enabled |
| 13:48-49 | P1 + P2 + P3 committed simultaneously | Backend, UI, hooks — all in parallel |
| 13:51 | All branches merged (0 conflicts) | Integration in 3 minutes |
| 13:54 | End-to-end prototype working | 55 minutes from start to working product |
| 14:09 | Designer agent added | Third persistent agent for UX perspective |
| 14:48 | Rounds → Free Debate 30 turns | Major architecture pivot |
| 14:55 | Code export (ZIP download) | User can take the landing page home |
| 15:06 | 9 design templates — instant rendering | Eliminated 60-second wait |
| 15:07 | Reject → continue debate (8 more turns) | Human-in-the-Loop pattern |
| 15:13 | Forced convergence prompts | Agents actually reach conclusions |
| 15:23 | Rebranding: Agent Arena → Wigent | Final identity |
| 15:55 | Lint fixes — final commit | Ship it |
| Metric | Value |
|---|---|
| Development time | 3.5 hours (12:30 — 16:00) |
| Commits | 26 |
| Time to working prototype | 55 minutes |
| Source files | 26 |
| Components | 16 (8 chat + 8 other) |
| SSE event types | 13 |
| GPT-4o calls per session | ~35 |
| Design templates | 9 |
| Prompt functions | 7 |
| Agent design patterns | 8 |
| Team members | 3 |
| Merge conflicts | 0 |
| Brand name changes | 2 (Agent Arena → Wegent → Wigent) |