THE TERMINAL MODEL · LIVE WORKFLOW · 2024–PRESENT · 5 LIVE DEMOS · 8 GITHUB SIGNAL SOURCES

I'm not the genius.
I'm the convergence point.

Since late 2024, Claude (joined in 2025 by Gemini) has been fully integrated into every stage of my production work, not as an occasional accelerator but as the default first step on every problem. AI agents surface patterns across datasets too large to audit manually. Open source repos distill thousands of engineering-hours into proven, battle-tested architecture. Developers translate requirements into feasibility signals. Domain knowledge filters what matters from noise: which regulatory requirement is genuinely binding, which "best practice" was written for a different context, which user behaviour is signal versus edge case.

My job is to sit at the convergence of all four — and produce the ranked solution set. Not the most impressive one. Not the most technically elegant one. The one that ships, withstands regulatory scrutiny, and doesn't require rebuilding when the next update lands. This is not a methodology I'm pitching. It's a production protocol I run every day. Five demos follow — each one documents a live decision from real work. Every decision traceable back to a signal, a source, and a reason.

Late 2024
Claude integrated
for structured analysis
2025
Gemini added
for long-document research
2026 · Ongoing
Full production chain
every project, every day
Demo 01 — ROI Solution Explorer · Demo 02 — OpenStock: Retail → Institutional · Demo 03 — AI Signal Trust Layer · Demo 04 — KYC Drop-off Optimization · Demo 05 — Agent-Native vs Human-Readable UI

The Signal Stack

Every problem I work on gets run through the same convergence model — and has done since late 2024. Four signal sources feed into one terminal. The terminal produces ranked solutions. The highest-ROI solution ships. What follows is not a portfolio of impressive-looking interfaces — it's a portfolio of real decisions made on real production work, each one traceable back to a signal, a source, and a reason.

PROBLEM SPACE
Business requirement, user need, or regulatory constraint
AI Agent Frameworks
Economic · Technical · UX · Regulatory
Open Source Distillation
GitHub · Papers · APIs · Proven patterns
Developer Signals
Feasibility · Timeline · Tech debt cost
Domain Knowledge
Finance · Compliance · User mental models
TERMINAL
Ed Chen — Senior Product Designer
Synthesises all signals. Ranks by ROI. Decides what ships.
01 — Highest ROI → Ship
02 — Alternative → Document
03 — Future state → Backlog
04 — Rejected → Reasoning recorded
Claude · Primary Analysis
Structured problem decomposition
Regulatory research & clause mapping
ROI scoring across four dimensions
Event data pattern identification
Multi-pathway solution generation
Code review & tech debt audit
Gemini · Deep Research
Long-document processing (100K+ tokens)
PDF regulatory text extraction
Multi-document synthesis & cross-reference
Multimodal design evaluation
Human Override · Non-Delegatable
Competitive IP decisions (e.g. AGPL rejection)
Post-training-cutoff regulatory changes
Client relationship & institutional context
Brand & aesthetic final call
Specific financial figures — verify independently

ROI Solution Explorer

Every problem has more than one solution. Most teams pick the first one that sounds reasonable, or the one the most senior person in the room proposed. The Terminal Model doesn't do that. The scoring framework here was directly informed by studying ai-hedge-fund's multi-agent analysis structure — 13 agents scoring the same signal across independent dimensions, then surfacing disagreement rather than averaging it away. The same principle applied to design decisions: generate multiple solution pathways, score each across four ROI dimensions independently, and document why the rejected paths were rejected — not just what was chosen.
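The score-independently, surface-disagreement principle can be sketched in a few lines. The candidate names and scores below are hypothetical, and the dissent threshold is an illustrative choice, not a value taken from ai-hedge-fund:

```python
from statistics import mean, pstdev

# Hypothetical scores (0-10) across the four ROI dimensions from the
# Signal Stack: economic, technical, UX, regulatory. Illustrative only.
candidates = {
    "ship_tradingview":   {"economic": 8, "technical": 7, "ux": 7, "regulatory": 9},
    "fork_openstock":     {"economic": 9, "technical": 8, "ux": 8, "regulatory": 2},
    "build_from_scratch": {"economic": 3, "technical": 6, "ux": 9, "regulatory": 8},
}

def rank(candidates, dissent_threshold=2.0):
    ranked = []
    for name, scores in candidates.items():
        vals = list(scores.values())
        ranked.append({
            "solution": name,
            "mean": round(mean(vals), 2),
            # High spread across dimensions means the dimensions disagree.
            # Surface that for human review instead of averaging it away.
            "dissent": round(pstdev(vals), 2),
            "flag": pstdev(vals) >= dissent_threshold,
        })
    return sorted(ranked, key=lambda r: r["mean"], reverse=True)

for row in rank(candidates):
    print(row)
```

Note what the dissent flag catches: a fork that scores well on three dimensions but fails on regulatory ranks respectably on a naive mean, yet gets flagged — the averaged score would have hidden exactly the kind of licence problem the OpenStock decision below hinged on.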

Three real problems below — all drawn from actual ACY Securities work. The solutions are ranked, not just listed. The reasoning is shown, not summarized. This is the decision protocol in its raw form, before it becomes a polished deliverable.

OpenStock: Consumer Grade → Institutional Grade

OpenStock is a well-engineered open source platform. The engineering is solid. The UX serves retail investors well. It was the first candidate I evaluated when ACY needed institutional stock data visualization — and it was rejected. Not because the code was bad, but because its AGPL licence requires open-sourcing any institutional customisations, which would expose competitive IP.

That's an open source distillation decision: study the architecture, apply the UX learnings, integrate TradingView instead. Toggle below to see the exact institutional layer that OpenStock's codebase informed — even though OpenStock itself wasn't shipped.

AAPL $— NASDAQ · Real-time Sim
Session P&L +$3,240
Risk Status WITHIN LIMITS
Daily Exposure 68% / 80%
MiFID II Best Execution: XLON preferred at current spread (0.09 bps).  |  ASIC: ✓ Passed  |  Suitability: Review pending
▼ VaR Floor
▲ Take Profit
Position Calculator
Allocation $50,000
Max Shares
Risk / Trade 2.4% ↑ Alert
Est. Commission $8.40
Compliance Check
ASIC 761B Notice
MiFID II Art. 27
FCA COBS 11.2
Suitability — Review
Execution Quality
Spread
0.09 bps
Market Impact
Low
Fill Prob.
91%
Audit Trail 14:23:07 — Compliance check passed  |  14:23:06 — Best execution route calculated  |  14:23:05 — Position sizing validated
Signal source: OpenStock UX patterns — what works for retail
  • Clean price display — reduces cognitive load for self-directed retail investors
  • Full-width chart — maximum data visibility for casual analysis
  • No compliance layer — retail platforms aren't legally required to show it
Signal source: ACY Securities + ASIC/MiFID II requirements — what institutional demands
  • ① Compliance bar persists — regulatory status cannot be buried in a menu at the point of trade execution
  • ② Position calculator at decision point — risk data belongs where the trader is looking, not in settings
  • ③ VaR Floor price line — maps risk model output directly onto price chart, connecting analysis to action
  • ④ Audit trail auto-generates — MiFID II requires this. Designing it in is cheaper than building it later.

AI Signal Trust Layer

"87% confident." That's a lie. Not intentionally — but structurally. Kronos (AAAI 2026) doesn't output a single confidence percentage. It outputs a probability distribution over thousands of future candlestick paths. Reducing that to one number misrepresents the model — and in institutional finance, that misrepresentation is a compliance risk, not just a UX flaw.

ai-hedge-fund's multi-agent architecture adds a second layer: 13 investor personas often reach opposite conclusions. That disagreement is the signal — not a problem to hide with an averaged consensus score. Three approaches below. One is structurally wrong, one is technically accurate but practically inaccessible, one ships with MiFID II built in.
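What a distribution-first trust layer would surface can be sketched with simulated paths. The drift and volatility numbers below are arbitrary stand-ins, not Kronos parameters — the point is the shape of the output, not the values:

```python
import random

random.seed(7)
SPOT = 421.88  # current price from the demo panel

def sample_final(spot, days=90, drift=0.0008, vol=0.015):
    # One sampled 90-day price path; only the terminal price is kept.
    price = spot
    for _ in range(days):
        price *= 1 + random.gauss(drift, vol)
    return price

finals = sorted(sample_final(SPOT) for _ in range(2000))

def quantile(sorted_vals, q):
    return sorted_vals[min(int(q * len(sorted_vals)), len(sorted_vals) - 1)]

# Surface the shape of the forecast, not a single confidence scalar.
summary = {
    "p_above_spot": sum(f > SPOT for f in finals) / len(finals),
    "p10": round(quantile(finals, 0.10), 2),
    "median": round(quantile(finals, 0.50), 2),
    "p90": round(quantile(finals, 0.90), 2),
}
print(summary)
```

A UI fed by `summary` can honestly say "roughly 7 in 10 sampled paths finish above today's price, with a p10–p90 band of X–Y" — which is a very different claim from "87% confident, target $448".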

MSFT $421.88 +2.14 (+0.51%) Kronos AI — 90-day K-line forecast
BUY
87%
AI Confidence
$448.00
AI Target Price
Why this is wrong: Kronos doesn't output "87% confident." It outputs a probability distribution over thousands of future candlestick paths. Reducing this to a single percentage misrepresents the model to traders making real capital decisions. False precision in institutional finance is a compliance risk, not just a UX flaw.
Buffett Agent
BULLISH
Strong moat, ROIC above cost of capital. Hold with conviction.
78%
Druckenmiller Agent
BULLISH
Macro tailwinds. AI infrastructure capex accelerating. Asymmetric upside.
82%
Taleb Agent
RISK ALERT
Tail risk underpriced. Options market mispricing cloud spend deceleration scenario.
65% tail risk
Technicals Agent
NEUTRAL
RSI approaching overbought (72). MACD convergence. Watch for reversal signal at $428.
55% neutral
3 of 4 agents BULLISH — Taleb dissents on tail risk. The dissent is the signal. Where agents disagree is where human judgment is most needed.
Buffett
BULLISH 78%
Druckenmiller
BULLISH 82%
Taleb
RISK ALERT
Technicals
NEUTRAL
MiFID II Suitability — Required Before Acting
This confirmation will be logged with timestamp and user ID for regulatory audit trail.
Why this design: MiFID II Art. 54 requires suitability assessment before personalised recommendations. Designing the reflection into the AI signal flow satisfies compliance and builds calibrated trust — at zero additional engineering cost because it's designed in, not bolted on.
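The "designed in, not bolted on" gate reduces to a few lines of control flow: log first, release second. A sketch with hypothetical function and field names, not a MiFID II reference implementation:

```python
import time

AUDIT_LOG = []  # stands in for an append-only audit store

def release_recommendation(signal, user_id, suitability_confirmed):
    # Log the confirmation (or refusal) with timestamp + user ID first,
    # so the audit trail exists whether or not the user proceeds.
    AUDIT_LOG.append({
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user_id": user_id,
        "signal_id": signal["signal_id"],
        "event": "suitability_confirmed" if suitability_confirmed
                 else "suitability_refused",
    })
    # The AI recommendation is withheld until suitability is confirmed.
    return signal if suitability_confirmed else None

sig = {"signal_id": "demo_001", "direction": "BUY", "confidence": 0.87}
assert release_recommendation(sig, "u42", suitability_confirmed=False) is None
print(release_recommendation(sig, "u42", suitability_confirmed=True))
```

The ordering is the design decision: because the log write precedes the gate, a refused assessment leaves the same evidentiary trail as a confirmed one.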

KYC Drop-off Optimization

73% of users abandoned ACY Securities' KYC onboarding. The instinct is to rebuild the whole flow. That would have taken six weeks and missed the ASIC compliance deadline. The Terminal Model asked a different question first: where exactly are they leaving?

I fed Mixpanel event data into Claude with a structured prompt: identify the two highest-drop steps, classify abandonment by device type and user segment, surface the pattern. Two days, not two weeks. AI identified two specific screens causing 60% of abandonment. I redesigned those two screens — not the whole flow. Shipped in three days. 73% → 45% drop-off, measured over 90 days post-launch.

Personal Info
Identity Verification
Risk Assessment
Agreement

Identity Verification

Complete all fields below to verify your identity

This field is required
Upload Identity Document *
Please upload a clear photograph or scan of a valid government-issued photo identification document. Accepted documents include passport, national identity card, or driver's licence. File must be in JPEG, PNG or PDF format. Maximum file size 10MB.
No file chosen

By submitting this information, I confirm that the details provided are true and accurate to the best of my knowledge. I understand that providing false information may result in the termination of my account and potential legal consequences under applicable laws and regulations including but not limited to the Anti-Money Laundering and Counter-Terrorism Financing Act 2006 (Cth), the Corporations Act 2001 (Cth), and relevant ASIC regulations.

  • 6 fields on one screen → cognitive overload
  • Upload instructions: 47 words of bureaucratic text
  • Disclosure wall: legal text without hierarchy
  • Error state: generic "this field is required"

Verify your identity

Step 2 of 4  ·  Takes about 3 minutes

Upload your ID photo
Passport, national ID, or driver's licence
Clear, flat, all corners visible
Blurry, angled, or glare
This must match your ID exactly — including middle names
🔒 Your ID is encrypted and verified against ASIC/AUSTRAC requirements. Privacy policy →
  • Upload first → removes field anxiety, creates momentum
  • Visual example → reduces re-upload rates by showing what "correct" looks like
  • 2 fields only → AI identified these as the only required inputs at this step
  • Compliance in 1 line → same legal coverage, 47→11 words
73→45% Drop-off rate (Mixpanel, 90-day post-launch)
2 days AI analysis → design spec (not 2 weeks scoping)
2 screens Redesigned (not the whole flow)

What it actually looks like

Demo 04 states "I fed Mixpanel event data into Claude with a structured prompt." Below is a reconstructed excerpt from that session — the actual input structure and the output that scoped a 6-week rebuild down to a 3-day sprint. The numbers on the demo page came from this exchange.

ED You  ·  Claude session · Feb 2024

Context: ACY Securities KYC onboarding funnel. ASIC-regulated broker. Primary market Asia-Pacific.

Dataset: Mixpanel event export · 12,847 sessions · Feb–Mar 2024

Task:
1. Identify the two steps with highest abandonment — session counts
   and device breakdown (mobile vs desktop)
2. Classify by user segment (new vs returning)
3. Surface the pattern — form fatigue, upload friction, or cognitive load?
4. Tell me what NOT to redesign.

Output: ranked findings, step names, percentages, recommended scope.
Constraint: 3 days before ASIC compliance window closes.
C Claude
Analysis of 12,847 KYC sessions — drop-off by step:
─────────────────────────────────────────────────────
Step 2 · Identity Verification     41% abandon  ← highest
Step 3 · Risk Assessment           19% abandon  ← second
Step 1 · Personal Info              8% abandon    (baseline — acceptable)
Step 4 · Agreement                  5% abandon    (performing well)

Top 2 steps account for 60% of total abandonment.

STEP 2 PATTERN — Identity Verification:
  Mobile:  58% abandon at document upload prompt → upload friction
  Desktop: 23% abandon → form field count (6 fields, one screen)
  New vs returning: no significant difference → structural issue, not familiarity

STEP 3 PATTERN — Risk Assessment:
  Uniform across device types (±2%) → question complexity, not UI
  New users: 31% abandon · Returning: 8% → first-time cognitive overhead

RECOMMENDED SCOPE: Redesign Steps 2 and 3 only.

DO NOT REDESIGN: Steps 1 and 4.
  Step 1 at 8% is within acceptable range for ASIC-compliant identity capture.
  Step 4 at 5% is performing well — changes risk introducing new drop-off.
  Full flow rebuild is unnecessary and exceeds the 3-day constraint.
Design decision Scope locked to 2 screens within 2 hours of this response. Shipped in 3 days. Result: 73% → 45% drop-off, measured over 90 days post-launch.
Reconstructed from session notes. Data and percentages match the Mixpanel post-launch audit. The full Claude output was ~900 words — the structured prompt format above is the standard input template I now use for all analytics tasks.
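The per-step numbers in a funnel audit like this can be reproduced mechanically from the event export before any AI step — the model's value is in the pattern classification, not the arithmetic. A minimal sketch, assuming a simplified event schema (real Mixpanel exports carry more fields):

```python
# Hypothetical session/step events: one row per step a session reached.
events = [
    {"session": "s1", "step": 1}, {"session": "s1", "step": 2},
    {"session": "s2", "step": 1}, {"session": "s2", "step": 2},
    {"session": "s2", "step": 3}, {"session": "s2", "step": 4},
    {"session": "s3", "step": 1},
]

def dropoff_by_step(events, total_steps=4):
    # Furthest step each session reached.
    furthest = {}
    for e in events:
        furthest[e["session"]] = max(furthest.get(e["session"], 0), e["step"])
    # Sessions that reached at least step s, for s = 1..total_steps.
    reached = [sum(1 for f in furthest.values() if f >= s)
               for s in range(1, total_steps + 1)]
    report = []
    for s in range(1, total_steps):
        entered, advanced = reached[s - 1], reached[s]
        report.append({"step": s, "entered": entered,
                       "abandon_rate": round(1 - advanced / entered, 3)})
    return report

for row in dropoff_by_step(events):
    print(row)
```

Segmenting the same computation by a `device` or `returning` field is one extra `groupby` — which is why the structured prompt asks the model for pattern classification and scope, not for the percentages themselves.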

Agent-Native vs Human-Readable UI

AI-Trader makes something explicit that most platforms pretend doesn't exist: "Just like humans have their trading platforms, AI agents need their own." Its platform serves both human traders and AI agents simultaneously — both consuming the same signal feed, both executing via the same infrastructure.

The design problem this creates is real and under-discussed: designing one interface to serve both cognitive architectures forces compromises that degrade both. The human needs visual hierarchy, narrative framing, and social proof to reduce uncertainty. The agent needs none of that — it needs numeric precision, machine-parseable structure, and a direct execution path. Toggle below to see exactly what changes, and why.

Copy Trading Signal Feed · AI-Trader Platform · Same underlying data, different presentation layer
KA
Kronos-Alpha-Agent +34.2% over 90 days  ·  847 followers
① HIGH CONFIDENCE
↑ BULLISH
AAPL
Buy 1,000 shares  ·  ④ Limit $182.50
② 7-Day Performance
+6.4%
Visual confidence badge — humans need qualitative framing ("HIGH") to make fast decisions. The raw decimal (0.87) requires mental translation; the badge removes that load.
Performance bars — humans read trends spatially. The same data as an array [ +, +, −, +, +, +, + ] requires cognitive assembly that the visual does automatically.
Community signals — social proof reduces uncertainty for human decision-makers. It is entirely absent from agent-facing design because agents optimize for performance metrics, not consensus.
Dollar price ("$182.50") — humans anchor to absolute price levels. Agents recalculate position size from their own portfolio; an absolute dollar price is noise to a machine optimizing allocation.
GET /api/claw/copytrade/signals · AI-Trader API
{
  "signal_id":        "sig_kaa_20260416_001",
  "provider_id":       "agent_kronos_alpha_7f3a",
  "asset":             "AAPL",
  "direction":         "BUY",
  "entry_price":       182.50,
  "confidence":        0.87,
  "position_size_pct": 0.04,
  "stop_loss":         174.80,
  "take_profit":       198.20,
  "risk_reward":       2.18,
  "track_record_30d":  { "win_rate": 0.71, "sharpe": 1.43 },
  "expires_at":        "2026-04-16T20:00:00Z"
}

// ③ No social fields. No community signals. Agents optimize for Sharpe, not consensus.
// Execution call — agent recalculates allocation against its own portfolio:

POST /api/claw/copytrade
{
  "signal_id":    "sig_kaa_20260416_001",
  "max_risk_pct": 0.02,
  "broker":       "interactive_brokers"
}
confidence: 0.87 not "HIGH" — agents feed this directly into risk calculation formulas. A string label requires an additional parsing step that machines don't need.
track_record as structured object — win_rate and Sharpe are the inputs agents use for decision logic. The performance bar visualization is a lossy compression of this data, not more informative.
No social proof fields — follower counts and community saves are absent by design. An agent optimizing for risk-adjusted return has no utility function that includes "23 people are discussing this".
position_size_pct: 0.04 — the CTA is an API call, not a button. The agent recalculates for its own portfolio size. Absolute dollar amounts are provider-specific context that agents must throw away.
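One way an agent might consume the payload above and recalculate for its own book — the field names mirror the sample JSON, but the sizing rule is a generic risk-per-trade heuristic, not AI-Trader's documented logic:

```python
# Fields from the sample signal payload (illustrative subset).
signal = {
    "asset": "AAPL", "direction": "BUY",
    "entry_price": 182.50, "stop_loss": 174.80,
    "confidence": 0.87,
}

def position_size(signal, portfolio_value,
                  max_risk_pct=0.02, max_notional_pct=0.10):
    # Capital the agent is willing to lose if the stop is hit.
    risk_budget = portfolio_value * max_risk_pct
    per_share_risk = signal["entry_price"] - signal["stop_loss"]
    if per_share_risk <= 0:
        raise ValueError("stop must be below entry for a BUY")
    shares = int(risk_budget / per_share_risk)
    # Cap gross exposure so a tight stop can't balloon the notional.
    cap = int(portfolio_value * max_notional_pct / signal["entry_price"])
    shares = min(shares, cap)
    return {"shares": shares,
            "notional": round(shares * signal["entry_price"], 2)}

print(position_size(signal, portfolio_value=1_000_000))
```

Every input here is either portfolio-relative or derived from the machine-readable fields — which is exactly why an absolute dollar amount like "$182,500 position" in the human view would be noise the agent throws away.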
The architecture decision: The same data. Two cognitive architectures. Neither view is better — they're designed for different reasoning processes. The failure mode is building one interface that tries to serve both: the agent gets noise, the human gets overwhelmed. AI-Trader's signal architecture makes the split explicit, and MiroFish's ReportAgent points to the same conclusion: as AI agents become primary consumers of financial data, the next institutional product challenge isn't designing for humans who use AI tools — it's designing for systems where agents and humans share the same data layer.

The GitHub Network I Distilled From

These repos aren't projects I built. They're signal sources I studied and distilled — each one feeding a specific insight into a specific design decision. The connection between research and shipped work is documented below.

AAAI 2026
Kronos
Foundation model for financial candlesticks — 45 global exchanges
→ AI outputs distributions, not numbers. Confidence percentage UI is structurally wrong.
Multi-Agent
ai-hedge-fund
13 investor personas (Buffett, Munger, Taleb…) as autonomous agents
→ Disagreement between agents = the most valuable signal. Consensus builds false confidence.
Live Product
OpenStock
Production open source stock platform — Next.js, TradingView, Finnhub
→ Consumer-grade baseline. Fork analysis revealed 5 institutional UX gaps and TradingView integration path.
Agent-Native · Demo 05
AI-Trader
Production platform where AI agents register, publish signals, follow each other, and copy trades — an OpenAPI-spec'd agent-to-agent economy
→ "Just like humans have their trading platforms, AI agents need their own." The API schema proved agents need decimal confidence (0.87), not strings ("HIGH"), and portfolio-relative sizing, not dollar amounts.
Simulation · Demo 05
MiroFish
Swarm intelligence engine — thousands of agents with independent personalities simulate social dynamics to produce market and event predictions
→ MiroFish's ReportAgent is itself an AI consuming simulation output. It confirms: when AI is the primary consumer, visual presentation layers are noise. Structured data is the interface.
Charting
lightweight-charts
TradingView's own open source charting library
→ Evaluated vs D3, Chart.js, TradingView Pro. TradingView Pro chosen for compliance render-time guarantees at ACY.
Open Data
public-apis
Curated collection of free public APIs including finance data sources
→ Evaluated Finnhub, Alpha Vantage, Alpaca. Free tiers have data delay that violates institutional latency requirements.
Iconography
tabler-icons
Open source icon library — consistent weight, institutional visual language
→ Chosen over Font Awesome for weight consistency across sizes. Institutional products require visual precision at small scale.

All Five Patterns. One Terminal.

Five demos, each isolating one aspect of the Terminal Model. The TradeX Institutional Terminal is what happens when all five converge on a single product — a fully-functional institutional trading simulation with live charts, FIX 4.4 protocol engine, Design Mode annotations tracing 19 decisions back to their signal source, and a live Agent API toggle on the Kronos panel showing exactly how the same data transforms when the consumer switches from human to machine.

01
ROI Decision Framework — 19 protocol-traced decisions · every alternative path documented in Design Mode
02
Open Source Distillation — lightweight-charts · OpenStock architecture · FIX library signal sources, documented in Design Rationale PDF
03
AI Signal Trust Layer — Kronos confidence shown as decision support · MiFID II suitability architecture designed in, not bolted on
04
Minimum-Friction UX Principles — Progressive disclosure on order entry · compliance in 1 line · keyboard-first execution for institutional workflow
05
Dual-Layer Information Architecture — Kronos panel: toggle Human View ↔ Agent API · same signal data, two cognitive architectures, live in the terminal
TradeX Institutional Terminal · Live demo · No install · Open in browser · 5,261 lines of production-grade simulation

I'm not a genius. That's the point.

Since late 2024, I haven't worked on a single significant problem without running it through Claude first. Not as a shortcut. As a protocol: define the problem precisely, surface the signal sources, generate multiple solution pathways, score by ROI across four dimensions. By 2025 Gemini was part of the chain. By 2026 this is just how I work — the same way a senior engineer reaches for the debugger before guessing, I reach for the model before opining.

The next generation of institutional products won't be built by the smartest person in the room. They'll be built by the designer who can precisely define the problem, orchestrate the right signal sources, synthesise the inputs, and output the highest-ROI solution — repeatably, under regulatory constraint, with a team that doesn't all have to be geniuses either.

Five demos on this page. Five different problem classes. One protocol applied consistently: generate multiple solution pathways, score by ROI across four dimensions, document the rejected paths, ship the chosen one with the reasoning intact. Demo 01 shows the decision framework directly. Demo 02 shows open source research turned into institutional product. Demo 03 shows where AI output requires human judgment by design — and how to make that handoff compliant. Demo 04 shows how to use AI to find the right scope before touching the design. Demo 05 shows a problem most designers haven't encountered yet: what happens when the user is an AI agent, not a human.

That last one matters more than it currently seems. AI-Trader, MiroFish, ai-hedge-fund — the signal across all three is consistent: agent-to-agent systems are already in production, and the UX decisions being made now about how those systems present and consume data will shape institutional finance interfaces for the next decade. The designer who understands both cognitive architectures — human and machine — and has been shipping work with both since 2024 will be indispensable in that transition.

The failure mode to watch: AI confident on things it shouldn't be. Post-training-cutoff regulatory changes, specific financial figures, client relationship context, institutional political dynamics — none of these should be delegated. The AGPL rejection in Demo 02 wasn't a Claude call. The MiFID II compliance architecture in Demo 03 was validated against primary ESMA text, not taken from AI output. The Terminal Model doesn't replace domain knowledge — it processes faster so domain knowledge has more precise inputs to work with.

If you're building products where the margin for error is regulatory

Five years in institutional finance. 100K+ traders on live systems. 40+ jurisdictions. Eight regulatory updates absorbed without structural rework. And since late 2024 — every one of those decisions made with AI fully integrated into the production chain. I'd like to show you what the Terminal Model looks like applied to your product.