WIGENT

Drop a topic, watch AI agents debate it live — then a landing page writes itself from the conclusions.

WIGENT is a multi-agent debate platform where a PM agent orchestrates auto-spawned domain experts in a Slack-style chat UI. Agents debate, challenge each other, retire, and summon new specialists. Once consensus is reached, the chat transforms into a landing page generated from the debate conclusions. Grand Prize winner at Build with TRAE Seoul — built by 3 engineers in 3.5 hours.

View on GitHub
THE PROBLEM

Solo brainstorming is biased, team discussions take hours, and AI chat tools give single-perspective responses with no debate or convergence. There's no way to get the clash of viewpoints that produces strong ideas — quickly, with a tangible deliverable.

THE SOLUTION

AI agents autonomously debate your topic in a Slack-style UI — PM anchors the discussion, domain experts argue and get replaced, and the debate converges into a structured business idea that instantly renders as a landing page from 9 design templates.

Multi-Agent Orchestration

AsyncGenerator-based debate engine controls 30-turn free debates with automatic phase transitions, agent retirement at turns 12 and 22, and forced convergence after turn 25.

Instant Landing Page

9 design templates (Glassmorphism, Neobrutalism, Editorial, etc.) render instantly when the debate concludes — no 60-second GPT wait. Dual-path rendering keeps GPT generation as a silent fallback.

Slack-Style Debate UI

Full Slack-style dark-theme chat with typing indicators, agent join/leave system messages, sidebar with online status, and Framer Motion page transitions.

Human-in-the-Loop

Users can reject the result. The PM announces the rejection, the team runs 8 more turns of focused debate, and a new landing page is generated — all without the user typing into the debate.

KEY METRICS
  • 55 min to prototype
  • 26 commits
  • 0 merge conflicts
  • 8 agent patterns
TECH STACK

Framework & Language

Next.js 16 (App Router) · React 19 · TypeScript (strict) · Tailwind CSS v4 · Framer Motion

AI & Backend

OpenAI GPT-4o · SSE (Server-Sent Events) · AsyncGenerator orchestrator · AbortController (timeout/cancel)

Frontend State

React useReducer (13 event types) · SSE ReadableStream parser · Sandbox iframe (landing page) · localStorage (history)

Overview

WIGENT is a multi-agent debate platform built in 3.5 hours during the Build with TRAE Seoul Hackathon (2026-03-28). Three engineers, each using Claude Code for parallel development, took the project from PRD to a working product in 55 minutes — then spent the remaining 2 hours on feature additions and polish.

The core idea: when you brainstorm alone, your thinking is biased. Team discussions take time. Existing AI tools give you a single response with no clash of perspectives. WIGENT fills that gap — AI agents autonomously debate your idea, challenge each other, and the chat UI itself transforms into the landing page they built from the conclusions.

Key Results

  • Grand Prize — Build with TRAE Seoul (ByteDance)
  • PRD to working prototype — 55 minutes
  • 3 engineers, 3.5 hours — 26 commits, 0 merge conflicts
  • 8+ agent design patterns — Orchestrator, Spawning, Retirement, Human-in-the-Loop, etc.
  • 9 design templates — instant landing page rendering

The Problem We Solved

Brainstorming a business idea is broken in three ways:

  • Solo thinking is biased. You fall in love with your own idea and can't see blind spots. There's no one to say "who would actually pay for this?"
  • Team brainstorming is slow. Scheduling, facilitating, keeping people on track — it takes hours to reach a conclusion that might still be half-baked.
  • AI chat is one-dimensional. ChatGPT gives you a single, agreeable response. There's no debate, no tension, no convergence from opposing viewpoints.

We wanted the collision and convergence of a real team debate — but driven by AI agents, in minutes, with a tangible deliverable at the end.

How It Works

The user drops a topic. WIGENT does the rest:

  • Step 1 — Agent Creation. A PM agent (always present) and a topic-specific expert are auto-generated. The PM is the realist ("Who would pay for this?"), the expert is the visionary.
  • Step 2 — Free Debate (30 turns). Agents debate in a Slack-style chat UI. A Designer agent joins at turn 3. At turns 12 and 22, non-fixed agents are retired and replaced with new specialists — the conversation evolves as the topic deepens.
  • Step 3 — Convergence. After turn 25, agents are forced to converge. No new ideas allowed — just conclusions and action items.
  • Step 4 — Result Synthesis. The debate is summarized, then distilled into a structured business idea (title, one-liner, target audience, revenue model, differentiator, market size, next steps).
  • Step 5 — Landing Page. The chat UI transforms into a full landing page generated from the idea. Nine design templates (Glassmorphism, Neobrutalism, Editorial, etc.) ensure instant rendering — no waiting for GPT to generate HTML.
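The structured idea distilled in Step 4 can be modeled as a small interface. This is an illustrative sketch only — the field names are assumed from the list above, not taken from the actual types.ts:

```typescript
// Hypothetical shape of the structured business idea produced by
// result synthesis (field names assumed from the writeup, not the source).
interface FinalIdea {
  title: string;
  oneLiner: string;
  targetAudience: string;
  revenueModel: string;
  differentiator: string;
  marketSize: string;
  nextSteps: string[];
}

// Example instance of the shape above.
const idea: FinalIdea = {
  title: "WIGENT",
  oneLiner: "AI agents debate your topic and ship a landing page",
  targetAudience: "Solo founders",
  revenueModel: "SaaS subscription",
  differentiator: "Multi-agent debate with convergence",
  marketSize: "TBD",
  nextSteps: ["Validate with 10 users"],
};
```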

If the user doesn't like the result, they can reject it. The PM announces the rejection, the team runs 8 more turns of debate, and a new landing page is generated. Human-in-the-Loop, without typing a single word into the debate itself.

System Architecture

WIGENT is a Next.js 16 full-stack application with three cleanly separated layers:

Architecture Layers

  • Orchestrator (AsyncGenerator). The debate engine yields SSE events as it progresses — agent creation, speech chunks, retirements, spawns, final results. It controls the full lifecycle: 30 turns of debate, summarization, result synthesis, and landing page generation.
  • API Route (SSE Stream). A thin layer that consumes the orchestrator's AsyncGenerator via for-await-of and forwards each event as Server-Sent Events to the client. Two endpoints: /api/debate (new debate) and /api/debate/continue (rejection follow-up).
  • Frontend (useReducer State Machine). A React hook (useDebate) parses the SSE stream and dispatches 13 event types into a useReducer state machine. The UI transitions through idle → creating → debating → retiring → spawning → landing states.

This separation means the orchestrator knows nothing about HTTP, the API route knows nothing about debate logic, and the frontend knows nothing about GPT-4o. Each layer communicates through typed SSE events — 13 event types, all defined in a shared types.ts.
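The frontend state machine described above can be sketched as a plain reducer. The event subset, names, and transition rules here are illustrative — a minimal sketch, not the actual useDebate implementation:

```typescript
// Illustrative subset of the 13 SSE event types, as a discriminated union.
type DebateEvent =
  | { type: "AGENT_CREATED"; agentId: string }
  | { type: "SPEECH_CHUNK"; agentId: string; text: string }
  | { type: "AGENT_RETIRE"; agentId: string }
  | { type: "AGENT_SPAWN"; agentId: string }
  | { type: "FINAL_RESULT"; idea: string };

type UIState = "idle" | "creating" | "debating" | "retiring" | "spawning" | "landing";

// Minimal reducer: each SSE event drives the UI state transition.
function debateReducer(state: UIState, event: DebateEvent): UIState {
  switch (event.type) {
    case "AGENT_CREATED": return state === "idle" ? "creating" : state;
    case "SPEECH_CHUNK":  return "debating";
    case "AGENT_RETIRE":  return "retiring";
    case "AGENT_SPAWN":   return "spawning";
    case "FINAL_RESULT":  return "landing";
  }
}
```

Because the union is discriminated on `type`, TypeScript can verify the switch is exhaustive — adding a fourteenth event type would fail compilation until handled.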

Contract-First Parallel Development

Three engineers needed to work simultaneously on a codebase that didn't exist yet. The solution: define the contract before writing any implementation.

At 13:43 (minute 43 of the hackathon), a 281-line types.ts was committed. It defined every interface the team would need: Agent, ChatItem, SSEEvent, DebateState, FinalIdea, HistoryEntry, and all 13 SSE event payloads. This file was the contract — the single source of truth that enabled zero-conflict parallel work.
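A contract file in this style might look like the following fragment. Field names are assumptions for illustration — the real 281-line types.ts is far more complete:

```typescript
// Illustrative fragment of a shared contract file (not the actual types.ts).
export interface Agent {
  id: string;
  name: string;
  role: string;      // e.g. "PM", "Designer", or a spawned specialist
  isFixed: boolean;  // persistent agents (PM, Designer) are never retired
  isOnline: boolean;
  speakCount: number;
}

export interface ChatItem {
  agentId: string;
  text: string;
  turn: number;
}

// Every SSE event shares one envelope, so all three layers
// can switch on `type` without knowing each other's internals.
export interface SSEEvent<T = unknown> {
  type: string; // one of the 13 event type names
  payload: T;
}
```

With this file merged first, each engineer could code against the interfaces rather than against each other's unfinished implementations.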

Parallel development — three streams, committed within 1 minute of each other
Stream | Engineer | Committed | Output
P1: Backend Core | hwcho | 13:48 | orchestrator.ts, prompts.ts, SSE API route
P2: Slack Chat UI | swson | 13:49 | 10 components (ChatLayout, Sidebar, ChatMessage, etc.)
P3: Hooks + I/O | hskim | 13:49 | useDebate (441 lines), TopicInput, LandingPageView

The merge at 13:51 had zero conflicts. Integration took 3 minutes. By 13:54 — 55 minutes into the hackathon — the prototype was functional end-to-end.

The lesson: when time is the constraint, invest 5 minutes defining interfaces upfront. That small investment prevented every merge conflict and misunderstanding for the remaining 3 hours.

The decisions that shaped the final product

Four Pivots in Three Hours

The initial PRD described a straightforward chat UI with result cards. Over 3 hours, four pivots transformed it into something much more impactful. Each pivot followed the same pattern: problem noticed → solution designed → implemented within 30 minutes.

Minute 38 — The demo strategy that won the hackathon

Pivot 1 — Slack UI + Full Page Swap

The hackathon judging was three rounds: (1) AI visits the URL and evaluates the frontend, (2) full code review, (3) human panel. We reverse-engineered the architecture from the judging criteria.

Judging criteria → engineering response
Round | Criteria | Our Response
1st (AI) | Frontend visual quality | Slack dark-theme UI — familiar yet polished
2nd (Code) | Code structure & quality | TypeScript strict + single-responsibility components
3rd (Human) | Demo impact | "The chat transforms into a landing page" — Full Page Swap

The Slack-style UI was chosen because it naturally visualizes agent join/leave events, typing indicators, and multi-participant conversations — exactly the patterns our multi-agent system needed to showcase. The Full Page Swap (chat → landing page transition) became the demo's "wow moment" that the human judges remembered.

Minute 109 — Naturalness over structure

Pivot 2 — Rounds → Free Debate (30 Turns)

The original design used 3 structured rounds: R1 Brainstorming → R2 Debate → R3 Convergence, with 10 GPT-4o calls total. This was scrapped in favor of a 30-turn free debate with automatic phase transitions.

Before vs. after the pivot
Aspect | Before (Rounds) | After (Free Debate)
Structure | 3 rigid rounds | 30 turns, 4 auto-phases (early/mid/late/closing)
Agent swaps | Round boundaries | Deterministic turns (12, 22)
GPT calls | 10 | ~35
Naturalness | Artificial round breaks | Continuous conversation flow
Demo stability | Complex edge cases | Predictable, reliable progression

Why: Real meetings don't have "Round 2" announcements. The round transitions felt artificial and created edge cases in the orchestrator logic. Free debate with phase-based prompt adjustments ("argue harder" in mid-phase, "converge now" in late-phase) produced more natural conversations and far fewer bugs.

Trade-off: GPT-4o calls tripled (10 → 35+), increasing cost and stretching debate duration from 2-3 minutes to 5-8. We accepted this because debate quality and demo stability mattered more than speed at a hackathon.
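The phase-based prompt adjustment can be derived from the turn number alone. A minimal sketch, with cutoffs assumed from the turn counts mentioned in this writeup (retirements at 12 and 22, forced convergence after 25) — the real boundaries may differ:

```typescript
type Phase = "early" | "mid" | "late" | "closing";

// Map a 1-based turn number to a debate phase. Cutoffs are assumed
// from the retirement turns (12, 22) and the convergence turn (25).
function getPhase(turn: number): Phase {
  if (turn <= 12) return "early";
  if (turn <= 22) return "mid";
  if (turn <= 25) return "late";
  return "closing";
}
```

A pure function like this keeps phase logic out of the orchestrator loop: the per-turn prompt builder just asks `getPhase(turn)` and picks the matching directive ("argue harder" for mid, "converge now" for late).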

Minute 157 — Solving the 60-second wait

Pivot 3 — Template-Based Instant Rendering

The original flow: debate ends → GPT-4o generates full HTML → user waits 30-60 seconds → landing page appears. During a hackathon demo, 60 seconds of "Generating..." is fatal.

The solution was a dual-path rendering strategy:

Dual-Path Rendering

  • Primary path (instant): When FINAL_RESULT arrives, a keyword-matching algorithm selects one of 9 design templates (Glassmorphism, Neobrutalism, Editorial, Minimalism, Dark Neon, Bento Grid, Organic Shapes, Corporate, Gradient Mesh) and renders it immediately with the debate conclusions.
  • Background path (GPT): GPT-4o continues generating custom HTML in the background. If the template is already displayed, the GPT result is silently ignored.

This eliminated the wait entirely. The moment the debate concluded, the landing page appeared. The 9 templates also provided consistent visual quality — GPT-generated HTML varied wildly in quality and sometimes got refused entirely by the model.

Trade-off: Templates can't produce truly custom designs per idea. But consistent, instant, high-quality output beat unpredictable, slow, variable-quality output — especially for a live demo.

Minute 174 — Making agents actually conclude

Pivot 4 — Forced Convergence Prompts

With 30 turns of free debate, agents kept introducing new ideas right up to the last turn. The debate would end without consensus, and the summary prompt couldn't extract a coherent conclusion from a divergent conversation.

The fix was simple but critical: after turn 25, the system prompt changes to force convergence:

Closing Phase Prompt Directive

  • "The debate is ending soon. You MUST reach a final conclusion."
  • "No new ideas allowed. Summarize and converge on what's been agreed."
  • "Start your message with '정리하면~' (To summarize~) or '결론적으로~' (In conclusion~)."

This transformed the last 5 turns from divergent brainstorming into natural consensus-building. The agents started agreeing, refining, and producing actionable conclusions — exactly what the summary prompt needed to generate a strong final result.
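Mechanically, the fix amounts to appending the closing directive to the per-turn system prompt once the cutoff turn passes. A sketch, with the directive wording taken from this writeup and the function name invented for illustration:

```typescript
// Sketch: augment the system prompt once the closing phase begins.
// The cutoff (turn 25) and directive text follow the writeup above.
function withConvergenceDirective(basePrompt: string, turn: number): string {
  if (turn <= 25) return basePrompt;
  return [
    basePrompt,
    "The debate is ending soon. You MUST reach a final conclusion.",
    "No new ideas allowed. Summarize and converge on what's been agreed.",
  ].join("\n");
}
```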

Agent Design Patterns (8 Patterns)

The hackathon theme was agent design, so we deliberately incorporated as many patterns as possible into a single coherent system:

Pattern | Implementation | Purpose
Orchestrator | orchestrator.ts — AsyncGenerator controlling the full debate lifecycle | Central coordination of agent turns, phases, and state transitions
Specialist Agent | createAgentPrompt — auto-generates a domain expert based on the topic | Topic-appropriate expertise (e.g., a marketer for a marketing topic)
Persistent Agent | PM_AGENT & DESIGNER_AGENT with isFixed: true | Anchors that prevent scope creep and ensure a design perspective
Agent Spawning | doRetireSpawn — new agent creation at turns 12 and 22 | Fresh perspectives when the debate needs a new angle
Agent Retirement | agent_retire event with a natural exit message | Graceful departure when an agent's expertise is exhausted
Multi-turn Debate | 30-turn free debate with 4 automatic phases | Deep exploration before convergence
Result Synthesis | finalResultPrompt — investor-pitch-level structured output | Actionable output, not just conversation logs
Human-in-the-Loop | Reject → 8 additional turns → new result | User control without direct debate participation

Technical Deep Dive

SSE + AsyncGenerator Pattern. The orchestrator is an async generator function that yields typed SSE events. The API route consumes these with for await...of and writes them as SSE text. The frontend hook parses the SSE stream and dispatches actions to a useReducer. This three-layer separation keeps each concern isolated — the orchestrator doesn't know about HTTP, the route doesn't know about debate logic, and the frontend doesn't know about GPT-4o.
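The generator-to-SSE handoff can be shown in miniature. This sketch collects SSE frames into an array so it runs standalone; in the real app the same `for await` loop lives in the API route and writes to the HTTP response stream. All names are illustrative:

```typescript
// Minimal sketch of the three-layer flow: an async generator yields
// typed events, and a transport layer serializes each as an SSE frame.
type OrchestratorEvent = { type: string; payload: unknown };

async function* orchestrate(topic: string): AsyncGenerator<OrchestratorEvent> {
  // Stand-in for the real debate lifecycle (creation, turns, result).
  yield { type: "AGENT_CREATED", payload: { name: "PM" } };
  yield { type: "SPEECH_CHUNK", payload: { text: `Debating: ${topic}` } };
  yield { type: "FINAL_RESULT", payload: { title: "Idea" } };
}

// Consume the generator and format each event as SSE text.
async function toSSE(gen: AsyncGenerator<OrchestratorEvent>): Promise<string[]> {
  const frames: string[] = [];
  for await (const event of gen) {
    frames.push(`data: ${JSON.stringify(event)}\n\n`);
  }
  return frames;
}
```

Note that the generator never touches HTTP and the consumer never touches debate logic — exactly the separation described above.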

Speaker Selection Algorithm. Each turn, the system selects the next speaker by: (1) filtering to online agents only, (2) excluding the last speaker to prevent consecutive turns, (3) sorting by speak count ascending to ensure balanced participation. Simple, but effective — every agent gets roughly equal airtime.
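The three steps above fit in one expression. A sketch with hypothetical names:

```typescript
interface Speaker {
  id: string;
  isOnline: boolean;
  speakCount: number;
}

// Sketch of the selection rule: online only, never the same speaker
// twice in a row, lowest speak count first for balanced airtime.
function pickNextSpeaker(agents: Speaker[], lastSpeakerId: string | null): Speaker | undefined {
  return agents
    .filter((a) => a.isOnline && a.id !== lastSpeakerId)
    .sort((a, b) => a.speakCount - b.speakCount)[0];
}
```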

Agent Persona Engineering. The quality of debate is entirely determined by persona design. Generic descriptions like "market-focused thinker" produce boring conversations. Specific speech patterns produce engaging ones: the PM says "근데 이거 누가 쓰는데?" ("But who would actually use this?"), the Designer says "유저가 3초 안에 이해 못 하면 실패" ("If the user can't understand it in 3 seconds, it's a failure"). These speech habits make the agents feel like real people arguing in a meeting room.

Random Inter-Agent Delay. Between each agent's turn, a random delay of 800-2500ms is injected. Without it, agents respond instantly one after another, breaking the illusion of a real conversation. This small detail significantly improves demo immersion.
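A jittered pause of this kind is a few lines. Splitting the random draw from the timer keeps the range testable:

```typescript
// Sketch of the jittered inter-turn pause (800-2500 ms, per the text).
function jitterMs(minMs = 800, maxMs = 2500): number {
  return Math.floor(minMs + Math.random() * (maxMs - minMs));
}

function interAgentDelay(): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, jitterMs()));
}
```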

GPT-4o Refusal Handling. GPT-4o sometimes refuses to generate landing page HTML, treating it as a security concern. We addressed this by: (1) switching the landing page prompt to English, (2) framing it as "a design prototype with placeholder demo content," (3) detecting refusals by checking for <!DOCTYPE or <html tags, and (4) falling back to a minimal HTML template when generation fails.
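Steps 3 and 4 of that mitigation reduce to a single guard. A sketch — the fallback markup here is illustrative, not the project's actual template:

```typescript
// Sketch of the refusal check: GPT output that doesn't look like an HTML
// document is treated as a refusal and replaced by a minimal fallback page.
function ensureHtml(gptOutput: string, title: string): string {
  const looksLikeHtml =
    gptOutput.includes("<!DOCTYPE") || gptOutput.includes("<html");
  if (looksLikeHtml) return gptOutput;
  // Fallback: a bare page so the demo never shows a refusal message.
  return `<!DOCTYPE html><html><body><h1>${title}</h1></body></html>`;
}
```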

26 commits in 3 hours

Complete Timeline

Hackathon timeline — from PRD to polish
Time | Milestone | Impact
12:59 | PRD v1.0 written | Initial concept: "Agent Arena"
13:37 | PRD v2.0 — Slack UI + Full Page Swap pivot | Demo strategy locked in
13:43 | types.ts contract (281 lines) committed | Parallel development enabled
13:48-49 | P1 + P2 + P3 committed simultaneously | Backend, UI, hooks — all in parallel
13:51 | All branches merged (0 conflicts) | Integration in 3 minutes
13:54 | End-to-end prototype working | 55 minutes from start to working product
14:09 | Designer agent added | Third persistent agent for a UX perspective
14:48 | Rounds → Free Debate, 30 turns | Major architecture pivot
14:55 | Code export (ZIP download) | User can take the landing page home
15:06 | 9 design templates — instant rendering | Eliminated the 60-second wait
15:07 | Reject → continue debate (8 more turns) | Human-in-the-Loop pattern
15:13 | Forced convergence prompts | Agents actually reach conclusions
15:23 | Rebranding: Agent Arena → Wigent | Final identity
15:55 | Lint fixes — final commit | Ship it

Lessons Learned

What Worked

  • Contract-first development. 5 minutes defining types.ts prevented all merge conflicts for 3 hours. When time is the constraint, invest in interfaces before implementations.
  • Demo-driven architecture. We designed backwards from "what will impress the judges" to "what architecture supports that." Hackathons reward demos, not codebases.
  • Fearless pivoting. Four pivots in 3 hours, each implemented within 30 minutes. The initial design was a starting point, not a commitment.
  • Claude Code for parallel generation. Each engineer used Claude Code to generate entire component trees in single commits. P2 generated 10 Slack UI components at once. This is not traditional pair programming — it's multiplied individual output.

What We'd Do Differently

  • User participation. Users can only watch and reject. Letting them steer the debate mid-conversation would produce much better results.
  • Model tiering. Every turn uses GPT-4o. In production, only critical turns (opening, closing, synthesis) need the top model — the rest could use GPT-4o-mini at 1/10th the cost.
  • Testing. Zero tests — a hackathon reality. The orchestrator, reducer, and SSE parser all deserve unit tests.
  • Landing page editing. The only way to improve the result is to reject and re-debate. A drag-and-drop editor for the generated page would be far more practical.

WIGENT by the Numbers

Metric | Value
Development time | 3.5 hours (12:30 — 16:00)
Commits | 26
Time to working prototype | 55 minutes
Source files | 26
Components | 16 (8 chat + 8 other)
SSE event types | 13
GPT-4o calls per session | ~35
Design templates | 9
Prompt functions | 7
Agent design patterns | 8
Team members | 3
Merge conflicts | 0
Brand name changes | 2 (Agent Arena → Wegent → Wigent)
