Beyond The Metrics:
Real Traders, Real Stories
My portfolio shows quantitative outcomes — fewer steps, faster compliance updates, lower KYC drop-off. But behind every metric is a human story. Here are the traders who shaped my design decisions.
My Research Philosophy
Metrics tell you what happened. Users tell you why.
I combine quantitative analytics (Hotjar heatmaps, GA4 funnels) with qualitative research (interviews, usability testing, contextual inquiry) to understand both behavior and motivation.
Moderated Interviews
45-60 min sessions with traders (novice to expert). I ask about their mental models, workflow contexts, and emotional responses to risk.
Usability Testing
Task-based testing with think-aloud protocol. I measure time-on-task, error rates, and subjective satisfaction (SUS scores).
Behavioral Analytics
Hotjar session recordings, heatmaps, and funnel analysis to identify where users struggle—then interviews to understand why.
Meet The Traders
These are composite personas based on 32 real interviews. Names changed for privacy.
Samantha Dumas
Role: Novice Retail Trader
Age: 28 | Location: Melbourne, AU
Background: Marketing Manager, no finance background
Samantha's Journey: From Overwhelmed to Confident
User Journey
1. Account Setup
"What's KYC? Why do they need my passport?"
2. First Trade
"I clicked 'Buy' but nothing happened. Did it work?"
3. Risk Management
"How much can I lose? I don't understand 'margin call'"
4. Proficiency
"Now I get it. I can actually do this!"
Key Pain Points
- Platform assumed trading literacy—no onboarding tooltips
- Jargon everywhere ("pips", "spread", "stop loss") without explanations
- Unclear feedback after placing orders—"Did it execute?"
- Fear of losing money because risk wasn't visualized
My Design Solutions
- Onboarding tooltips: Contextual glossary for every financial term
- Progress indicators: "Order Submitted → Executed → Confirmed" stepper
- Risk visualization: "You can lose up to $X" shown BEFORE trade execution (see the sketch after this list)
- Plain-language disclaimers: Replaced legal jargon with clear warnings
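To make the risk preview concrete, here is a minimal sketch of the "you can lose up to $X" calculation for an FX market order with a protective stop. The lot size, function names, and flat price-distance treatment are illustrative assumptions for this page, not the production LogixTrader code.

```typescript
// Hypothetical sketch of the pre-trade risk preview ("You can lose up to $X").
// Assumes a standard lot of 100,000 units and a quote-currency account;
// real pip values depend on pair, lot size, and account currency conversion.

interface OrderDraft {
  pair: string;        // e.g. "EUR/USD"
  lots: number;        // 1 lot = 100,000 units (assumption)
  entryPrice: number;  // intended fill price
  stopLoss: number;    // protective stop price
}

const UNITS_PER_LOT = 100_000;

/** Worst-case loss if the stop loss is filled exactly (ignores slippage and gaps). */
function maxLossAtStop(order: OrderDraft): number {
  const priceDistance = Math.abs(order.entryPrice - order.stopLoss);
  return priceDistance * order.lots * UNITS_PER_LOT;
}

// Example: long 1 lot EUR/USD at 1.0850 with a stop at 1.0800
// -> 0.0050 * 100,000 = $500 worst-case loss, shown before execution.
const preview = maxLossAtStop({
  pair: "EUR/USD",
  lots: 1,
  entryPrice: 1.085,
  stopLoss: 1.08,
});
console.log(`You can lose up to $${preview.toFixed(2)}`);
```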
Usability Testing Methodology — These Are Real Measurements
These metrics are not persona narrative constructs. They come from a controlled usability study conducted on both the legacy and redesigned LogixTrader order placement flow. Persona context is used to situate the findings, but the numbers are independently measured.
- Protocol: Moderated think-aloud, same 15 participants tested both flows sequentially
- Task: "Place a market order for 1 lot EUR/USD as you normally would"
- Timing: Manual stopwatch + screen recording (dual-verified, ±0.2s human reaction variance)
- Start/End: Click 'New Order' → order confirmation modal appears
- Limitation: Legacy flow tested first (no counterbalancing); learning effects may inflate the apparent improvement. Lab environment ≠ live trading conditions. n=15 is appropriate for qualitative usability insight, not statistical validation.
- Instrument: Standard 10-item System Usability Scale (Brooke, 1996); scoring is sketched after this list
- Participants: Same n=15 cohort (5 novice, 7 intermediate, 3 expert traders). Pre-score collected after legacy flow; post-score after redesigned flow in same session.
- Benchmarks: SUS 52 = "Poor / D grade" (below 68 acceptability threshold). SUS 85 = "Excellent / A grade" (industry top quartile per Bangor et al., 2009).
- Limitation: Potential order bias (tested legacy first). No washout period. Full methodology + session recordings available under NDA.
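For transparency on how the SUS numbers are derived, the snippet below is an illustrative sketch of the standard per-participant calculation (Brooke, 1996), not the study's actual analysis script.

```typescript
// Standard SUS scoring: odd-numbered items contribute (response - 1),
// even-numbered items contribute (5 - response); the sum is scaled by 2.5
// onto a 0-100 range.

type SusResponses = [number, number, number, number, number,
                     number, number, number, number, number]; // 1-5 Likert, items 1-10

function susScore(responses: SusResponses): number {
  const contributions = responses.map((r, i) =>
    i % 2 === 0 ? r - 1 : 5 - r  // index 0,2,4... = odd-numbered items 1,3,5...
  );
  return contributions.reduce((sum, c) => sum + c, 0) * 2.5;
}

// Mean score across a cohort (one response set per participant, e.g. n=15).
function meanSus(cohort: SusResponses[]): number {
  return cohort.reduce((sum, r) => sum + susScore(r), 0) / cohort.length;
}

// Example: a strongly positive response pattern scores near the top of the scale.
console.log(susScore([5, 1, 5, 1, 5, 1, 5, 1, 5, 2])); // 97.5
```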
Michael Garnier
Role: Intermediate Day Trader
Age: 34 | Location: Singapore
Background: 3 years trading experience, follows market signals
Michael's Daily Workflow
Typical Trading Day (6am - 2pm)
- 6:00am: Opens Finlogix → scans 50+ market signals across 12 currency pairs
- 7:30am: Identifies 3 high-probability setups → sets price alerts
- 9:00am: Alert triggered → opens LogixTrader → executes trade in <3 seconds
- 9:15am: Monitors open positions across 3 charts simultaneously
- 12:00pm: Closes profitable trades → reviews P&L attribution in TradingCup
Workflow Friction Points
- Market data scattered across 3 different platforms—context switching kills speed
- No keyboard shortcuts—forced to use mouse for every action
- Chart customization resets between sessions—had to reconfigure daily
- Risk metrics hidden in dropdown menus—couldn't see exposure at a glance
My Design Solutions
- Unified data dashboard: Finlogix aggregates signals + charts + news in one view
- Keyboard-first execution: F9 = Buy, F10 = Sell, Ctrl+R = Close All (Bloomberg-inspired; see the sketch after this list)
- Persistent workspace state: Chart configs, indicator settings saved per user
- Real-time risk dashboard: P&L, margin usage, open positions always visible (no dropdowns)
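A minimal sketch of the keyboard dispatch behind those hotkeys. Handler names like placeOrder are placeholders for illustration; the production implementation also covers focus management, confirmation preferences, and order validation.

```typescript
// Minimal sketch of the keyboard-first execution layer (F9 = Buy, F10 = Sell,
// Ctrl+R = Close All). Handler names are placeholders, not production code.

type Hotkey = { key: string; ctrl?: boolean; action: () => void };

const hotkeys: Hotkey[] = [
  { key: "F9",  action: () => placeOrder("BUY") },
  { key: "F10", action: () => placeOrder("SELL") },
  { key: "r", ctrl: true, action: () => closeAllPositions() },
];

document.addEventListener("keydown", (e: KeyboardEvent) => {
  // Never hijack keys while the trader is typing into an input field.
  const target = e.target as HTMLElement;
  if (target.tagName === "INPUT" || target.tagName === "TEXTAREA") return;

  const match = hotkeys.find(
    (h) => h.key.toLowerCase() === e.key.toLowerCase() && !!h.ctrl === e.ctrlKey
  );
  if (match) {
    e.preventDefault(); // stop browser defaults (Ctrl+R would otherwise reload the page)
    match.action();
  }
});

// Placeholder implementations for the sketch.
function placeOrder(side: "BUY" | "SELL"): void { /* submit one-click order */ }
function closeAllPositions(): void { /* flatten all open positions */ }
```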
Measured Impact (Finlogix Redesign)
Beyond Retail: Institutional Stakeholders
The ACY Connect institutional B2B platform required a fundamentally different research methodology. Retail traders express frustration emotionally. Institutional stakeholders express it in SLAs, latency requirements, and regulatory audit clauses.
James Liang
Role: Relationship Manager — Prime Brokerage
Age: 41 | Location: Hong Kong
Background: 12 years institutional sales, manages 8 prime brokerage accounts ($10M–$500M daily flow)
James's Accountability Model
What Was Failing
- FIX session downtime discovered by client before RM — trust erosion
- No unified view: order status, credit limits, and connectivity on 3 separate screens
- Credential renewal: manual email chain, 5-day lead time, no self-service
- Compliance queries required IT ticket — 48-hour SLA, institutional clients expect minutes
Design Outcomes (ACY Connect)
- Unified dashboard: FIX session health, credit exposure, order volume — single view
- Proactive alerts: Session latency spike → push notification before client sees it (sketched after this list)
- Self-service credentials: RM generates API keys, resets passwords without IT ticket
- Audit trail exports: 1-click compliance reports formatted for ASIC/SFC review
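To show the shape of the proactive alerting, here is an illustrative latency-watchdog sketch. The thresholds, sampling interval, and notifyRM function are assumptions made for this example, not ACY Connect internals.

```typescript
// Illustrative sketch of the proactive latency alert: sample FIX session
// round-trip latency and notify the RM before a client-facing SLA breach.
// Threshold values and notifyRM() are assumptions, not platform source code.

const WARN_MS = 150;          // warn the RM well below an assumed 250 ms client SLA
const SAMPLE_EVERY_MS = 5_000;

function watchSession(sessionId: string): void {
  setInterval(async () => {
    const latencyMs = await measureRoundTrip(sessionId); // e.g. heartbeat RTT
    if (latencyMs > WARN_MS) {
      notifyRM(sessionId, `Latency ${latencyMs} ms exceeds ${WARN_MS} ms warning threshold`);
    }
  }, SAMPLE_EVERY_MS);
}

// Placeholders for the sketch.
async function measureRoundTrip(_sessionId: string): Promise<number> { return 42; }
function notifyRM(_sessionId: string, _message: string): void { /* push notification */ }
```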
Research Method: Contextual Inquiry + Shadowing
Shadowed 3 RMs during live trading hours (9am–12pm HKT) across 4 sessions. Observed real screen workflows — not simulated tasks. Key insight: RMs have zero tolerance for latency in information retrieval because any delay is felt by their institutional clients as service failure. Traditional usability testing (task-based, moderated) was insufficient — the research required being present when the stress was real.
Ravi Mehta
Role: Quant Developer / Systems Integrator
Age: 33 | Location: Singapore
Background: Python/C++, connects hedge fund OMS to broker infrastructure via FIX 4.4
Ravi's Integration Workflow
FIX Session Lifecycle — Where Design Decisions Live
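The full lifecycle diagram lives in the ACY Connect case study; the sketch below summarizes the standard FIX 4.4 session states and the admin messages that drive them. It illustrates the protocol the portal documents, not the broker's engine.

```typescript
// Compact sketch of the FIX 4.4 session lifecycle. States and admin message
// types follow the standard protocol; the transition table is illustrative.

type SessionState =
  | "DISCONNECTED"
  | "LOGON_SENT"       // Logon (35=A) sent, awaiting counterparty Logon
  | "ACTIVE"           // Heartbeats (35=0) exchanged at the agreed interval
  | "RESYNCING"        // sequence gap detected -> ResendRequest (35=2)
  | "LOGOUT_PENDING";  // Logout (35=5) sent, awaiting confirmation

interface Transition { from: SessionState; event: string; to: SessionState }

const lifecycle: Transition[] = [
  { from: "DISCONNECTED",   event: "send Logon (35=A)",          to: "LOGON_SENT" },
  { from: "LOGON_SENT",     event: "receive Logon ack",          to: "ACTIVE" },
  { from: "ACTIVE",         event: "inbound MsgSeqNum gap",      to: "RESYNCING" },
  { from: "RESYNCING",      event: "gap filled / SequenceReset", to: "ACTIVE" },
  { from: "ACTIVE",         event: "send Logout (35=5)",         to: "LOGOUT_PENDING" },
  { from: "LOGOUT_PENDING", event: "receive Logout ack",         to: "DISCONNECTED" },
];

// Ravi's pain points map onto specific edges: the shared test environment broke
// the RESYNCING edge (sequence number conflicts), and manual credential rotation
// blocked the very first DISCONNECTED -> LOGON_SENT edge for days.
```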
Developer Experience Failures
- FIX tag documentation: PDF, no search, no code examples
- Error codes undocumented — Ravi discovered meaning by trial and error
- Test environment shared with other clients — caused sequence number conflicts
- Credential rotation required email to ops team — 3-day SLA in pre-production
Developer Portal Outcomes
- Interactive FIX spec: Searchable tag reference, OrdStatus state-machine diagram
- Error code glossary: Every reject code mapped to plain-English cause + resolution (see the sketch after this list)
- Isolated sandbox: Dedicated test environment per client, no sequence conflicts
- Self-service key rotation: RM portal generates/rotates credentials without ops ticket
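To show what a glossary entry looks like, here is an illustrative sketch using standard FIX OrdRejReason (tag 103) codes. The cause and resolution wording is representative of the portal copy, not quoted from it.

```typescript
// Sketch of the error-code glossary data shape. The reject codes shown are
// standard FIX OrdRejReason (tag 103) values; the plain-English text is
// illustrative of the developer portal copy.

interface RejectGlossaryEntry {
  tag: 103;
  code: number;
  name: string;
  cause: string;       // plain-English explanation shown to the developer
  resolution: string;  // what to change before retrying
}

const ordRejReason: RejectGlossaryEntry[] = [
  { tag: 103, code: 1, name: "Unknown symbol",
    cause: "The Symbol (55) is not tradable on this session.",
    resolution: "Check the instrument list for your account and correct tag 55." },
  { tag: 103, code: 2, name: "Exchange closed",
    cause: "The order arrived outside the instrument's trading hours.",
    resolution: "Resubmit during market hours or use a session-aware order type." },
  { tag: 103, code: 6, name: "Duplicate order",
    cause: "The ClOrdID (11) was already used on this session.",
    resolution: "Generate a unique ClOrdID for every new order." },
];

// The portal renders the matching entry next to the raw ExecutionReport, so
// developers never reverse-engineer a bare numeric reject code again.
```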
Research Method: Developer Interview + API Journey Mapping
Conducted 5 semi-structured interviews with quant developers and systems integrators at institutional clients (hedge funds, family offices, algo trading desks). Unlike retail research, the primary artifact was not a journey map but an API integration audit — walking through every step of the FIX session lifecycle and recording where developer time was lost. Key finding: documentation quality had greater impact on integration time than API design itself.
Retail vs. Institutional Research: A Methodology Contrast
Retail Traders
- Moderated usability testing, think-aloud protocol
- Emotional state mapping (frustration, confusion, confidence)
- Quantitative SUS scoring + task completion rates
- Hotjar heatmaps for behavioral validation
- n=15 per feature is appropriate for insight generation
Institutional Stakeholders
- Contextual inquiry + live workflow shadowing
- SLA and latency requirements as design specifications
- API journey audits — integration time as the UX metric
- Compliance clause analysis as user requirement input
- Studying n=5 deeply yields more signal than surveying n=50 superficially
Institutional developer research — deeper coverage in the ACY Connect case study
The Ravi Mehta persona above is a composite. Full institutional developer research — FIX session lifecycle mapping, API credential UX, IP whitelist flow analysis, and integration time as a design metric — is documented in depth in the ACY Connect case study →
My Research Process
How I translate user insights into design decisions
Problem Discovery
Start with analytics anomalies: Hotjar heatmaps show users clicking non-clickable elements? Funnel analysis shows 40% drop-off at order confirmation? That's where I dig deeper.
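As a concrete example, this is the shape of the step-to-step drop-off calculation I run on funnel exports. The event names and counts below are illustrative placeholders, not actual GA4 data.

```typescript
// Minimal sketch of step-to-step funnel drop-off analysis. Event names and
// user counts are placeholders for illustration.

interface FunnelStep { name: string; users: number }

function dropOffReport(steps: FunnelStep[]): string[] {
  return steps.slice(1).map((step, i) => {
    const prev = steps[i];
    const dropPct = ((prev.users - step.users) / prev.users) * 100;
    return `${prev.name} -> ${step.name}: ${dropPct.toFixed(1)}% drop-off`;
  });
}

// A funnel shaped like this is what flags "order confirmation" for deeper
// qualitative research.
console.log(dropOffReport([
  { name: "Order form opened",    users: 1000 },
  { name: "Order details filled", users: 820 },
  { name: "Confirmation viewed",  users: 492 }, // ~40% drop at confirmation
]));
```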
Qualitative Research
Recruit 15 users matching the target demographic → run moderated usability testing with a think-aloud protocol. I record sessions, ask follow-up questions, and observe emotional responses (frustration, confusion, delight).
Design Iteration
Synthesize findings → create 2-3 design variations → A/B test with real users. I measure quantitative impact (time-on-task, completion rate) AND qualitative satisfaction (SUS scores, user quotes).
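A small sketch of how each variant is summarized quantitatively before the qualitative read; the session data shape and any numbers fed into it are illustrative placeholders.

```typescript
// Sketch of per-variant comparison: completion rate plus median time-on-task.
// Session records here are placeholders, not study data.

interface Session { variant: "A" | "B"; completed: boolean; taskSeconds: number }

function summarize(sessions: Session[], variant: "A" | "B") {
  const subset = sessions.filter((s) => s.variant === variant);
  const completionRate = subset.filter((s) => s.completed).length / subset.length;
  const times = subset
    .filter((s) => s.completed)
    .map((s) => s.taskSeconds)
    .sort((a, b) => a - b);
  const median = times[Math.floor(times.length / 2)]; // simple median approximation
  return { variant, completionRate, medianTaskSeconds: median };
}

// Usage: summarize(allSessions, "A") vs summarize(allSessions, "B"),
// read alongside SUS scores and user quotes from the same sessions.
```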
Want to See More Research Stories?
This page shows 4 composite personas — 2 retail traders, 2 institutional stakeholders — from 32+ interviews across both user segments. Full research database, journey maps, and session recordings available under NDA.