How I Built an AI-Powered Stock Market Dashboard
Trader Koo uses YOLOv8 computer vision to detect chart patterns across all S&P 500 stocks, backed by five independent analysis layers, a nightly data pipeline, and a single-service FastAPI deployment.
What is Trader Koo?
Let me be honest: this project started because I was too lazy to manually scan 500 stock charts every day. So I did what any reasonable engineer would do and spent weeks building a system to do it for me. Classic.
Trader Koo is a market intelligence dashboard that covers all ~510 stocks in the S&P 500. Every night after market close, it pulls fresh price data, runs pattern detection across five independent analysis layers plus a YOLOv8 computer vision model, and presents everything through an interactive Plotly chart.
The idea came from a simple frustration: most free charting tools show you the data but leave the analysis entirely to you. I wanted something that would surface patterns automatically, across every stock, every day, without manual scanning.
For each ticker, the dashboard overlays candlestick charts with support/resistance levels, trendlines, price gaps, candlestick signals, rule-based patterns (flags, wedges), and AI-detected patterns (head and shoulders, triangles, double tops/bottoms). It also pulls fundamentals from Finviz and shows valuation metrics like P/E, PEG ratio, and analyst target price discount.
Try it yourself
Head to trader.kooexperience.com. The interface has three main tabs:
- Guide explains every indicator and overlay on the chart, so you know what you're looking at.
- Opportunities is a screener. Filter by P/E ratio, PEG, analyst target discount, or sector to find stocks trading below consensus targets or with unusually low valuations.
- Chart + Levels is the main view. Pick any S&P 500 ticker, choose a timeframe (3, 6, or 12 months), and see the full analysis with all overlay layers.
YOLOv8 detections show up as bounding boxes on the chart. Daily detections appear as dotted outlines, weekly as dashed. Hover any overlay for details on the pattern type and confidence score.
Architecture overview
The whole system runs as a single FastAPI process on Railway. No message queues, no separate workers, no managed database. SQLite on a persistent volume handles all storage. APScheduler runs the nightly cron inside the same process.
┌──────────────────────────────────────────────────┐
│ Railway Microservice (single process) │
│ │
│ FastAPI + Uvicorn │
│ ├── Serves frontend (vanilla JS + Plotly) │
│ ├── /api/dashboard/{ticker} (chart payload) │
│ ├── /api/opportunities (valuation screen) │
│ └── /api/admin/* (data management) │
│ │
│ APScheduler (in-process cron) │
│ └── Runs Mon-Fri @ 22:00 UTC │
│ ├── Fetch prices + fundamentals │
│ ├── Run YOLO pattern detection │
│ └── Generate daily report │
│ │
│ SQLite (/data/trader_koo.db) │
│ ├── price_daily (510 tickers x 365 days) │
│ ├── finviz_fundamentals │
│ ├── yolo_patterns (pre-computed detections) │
│ └── options_iv │
└──────────────────────────────────────────────────┘
This single-service approach has tradeoffs. It can't scale horizontally, and a crash during YOLO inference means lost progress. But for a personal analytics tool, the simplicity is worth it. No infrastructure to manage, no services to coordinate, and SQLite is surprisingly fast when you're the only writer. Sometimes the best architecture is the one you can debug at midnight without crying.
Using computer vision to read charts
What is YOLOv8? YOLO (You Only Look Once) is an object detection model originally designed to identify objects in photos and video in real time. It draws bounding boxes around things it recognises: cars, people, dogs. YOLOv8 is one of the more recent releases from Ultralytics. The clever part here is applying it to stock charts instead of street scenes.
I used a pre-trained model from Hugging Face: foduucom/stockmarket-pattern-detection-yolov8. Someone trained YOLOv8 on thousands of annotated chart images, and the resulting model can look at a candlestick chart and say "that region looks like a head and shoulders pattern" with a confidence score. It recognises six pattern classes:
- Head and shoulders (top and bottom, counted as two classes) – a reversal pattern with three peaks, the middle one highest. One of the best-known patterns in technical analysis.
- M-Head (double top) – price hits a resistance level twice and fails, suggesting a potential downturn.
- W-Bottom (double bottom) – the inverse: price finds support twice, potentially signalling a recovery.
- Triangle – price range compresses into converging trendlines, often preceding a breakout.
- StockLine (consolidation) – price moves sideways in a range, building energy for the next move.
Why two timeframes?
The model was trained on charts showing roughly 100 to 130 candlesticks. Candle density affects detection quality significantly. Feed it 730 daily bars crammed into one image and each candle is too small for reliable recognition.
My solution: run detection at two timeframes.
- Daily (180 days, ~124 bars) catches recent patterns like flags forming or a head-and-shoulders developing over the past few months.
- Weekly (730 days resampled to ~104 bars) catches structural patterns spanning multiple months, like a major double top that took a year to form.
The weekly pass resamples daily OHLCV data into weekly candles (open = Monday's open, high = week's high, low = week's low, close = Friday's close). This maintains the 100-to-130 bar density the model expects while covering over two years of price history.
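As a rough sketch, the weekly resampling step can be done with pandas. The aggregation mapping below is my assumption about the real pipeline, but it matches the open/high/low/close rules described above:

```python
import pandas as pd

def to_weekly(daily: pd.DataFrame) -> pd.DataFrame:
    """Resample daily OHLCV bars (DatetimeIndex) into weekly candles:
    first open, max high, min low, last close, summed volume."""
    return daily.resample("W-FRI").agg({
        "Open": "first",    # first trading day of the week (usually Monday)
        "High": "max",
        "Low": "min",
        "Close": "last",    # last trading day of the week (usually Friday)
        "Volume": "sum",
    }).dropna()
```

"W-FRI" labels each weekly bar by its Friday, so holiday-shortened weeks still collapse into a single candle.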
The per-ticker pipeline
- Load historical prices from SQLite
- For each timeframe: slice to the lookback window and resample if needed
- Render a 1200 x 600 candlestick chart using mplfinance
- Capture the matplotlib axis metadata (explained below)
- Run YOLOv8 inference on the rendered image
- Map pixel bounding boxes back to date/price coordinates
- Store results in the yolo_patterns table
A full seed across all 510 tickers (both timeframes) takes about 25 minutes. The nightly incremental update only processes tickers with new price data, bringing it down to 5 to 10 minutes.
The coordinate mapping problem
This was the part that nearly broke me. YOLOv8 outputs bounding boxes in pixel
coordinates: (x0, y0, x1, y1) relative to the image. But the interactive
Plotly chart on the frontend uses dates and prices. Pixels don't mean anything to a
trader. "There's a triangle at pixel 340" is not useful information.
The trick is capturing the matplotlib axis layout at render time. When mplfinance draws the chart, I grab the axis position in figure coordinates and the data limits:
# Capture axis metadata at render time
pos = ax.get_position()  # Bbox in figure coords (0-1)
xlim = ax.get_xlim()     # (bar_index_min, bar_index_max)
ylim = ax.get_ylim()     # (price_min, price_max)
# Axis corners in pixel coordinates
ax_x0_px = pos.x0 * image_width
ax_x1_px = pos.x1 * image_width
ax_y0_px = pos.y0 * image_height
ax_y1_px = pos.y1 * image_height
# Convert pixel x to bar index, then to date
frac_x = (x_px - ax_x0_px) / (ax_x1_px - ax_x0_px)
bar_index = xlim[0] + frac_x * (xlim[1] - xlim[0])
date_str = dates[round(bar_index)]
# Convert pixel y to price (image y origin is top-left, so invert)
fig_y = image_height - y_px
frac_y = (fig_y - ax_y0_px) / (ax_y1_px - ax_y0_px)
price = ylim[0] + frac_y * (ylim[1] - ylim[0])
This approach generalises to any matplotlib chart. The key insight is that
ax.get_position() and ax.get_xlim()/get_ylim() give you
everything needed to build the pixel-to-data mapping. I also filter out any detections
that land in the volume panel, where the price conversion would produce nonsense values.
Five layers of pattern detection
What is ensemble learning? In ML, an ensemble combines multiple independent models to produce a better result than any single model alone. The idea is simple: if five different methods each get it right 60% of the time, and they make different mistakes, then the cases where three or more agree tend to be much more reliable. It's the "ask five friends and see who agrees" approach.
Trader Koo applies this concept to chart pattern detection. Instead of relying on one method, it runs five independent layers and shows them all so you can see where they agree and disagree.
1. Support and resistance levels
I find local pivot highs and lows (bars where the high or low is the extreme within a 3-bar window on each side), then cluster nearby pivots using a greedy algorithm with ATR-relative tolerance. Each cluster gets scored by touch count with a 45-day half-life for recency weighting.
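A minimal sketch of the pivot-and-cluster idea. The function names and the flat tolerance are illustrative; the real system uses an ATR-relative tolerance and scores clusters by touch count with recency decay:

```python
import numpy as np

def pivot_highs(high: np.ndarray, window: int = 3) -> list:
    """Bars whose high is the maximum within `window` bars on each side."""
    return [i for i in range(window, len(high) - window)
            if high[i] == high[i - window : i + window + 1].max()]

def cluster_levels(pivots: list, tol: float) -> list:
    """Greedy clustering: walk sorted pivot prices and start a new cluster
    whenever the gap to the previous pivot exceeds the tolerance."""
    clusters = []
    for p in sorted(pivots):
        if clusters and p - clusters[-1][-1] <= tol:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters
```

In the real pipeline, tol would be something like half an ATR, so the zone width adapts to each stock's volatility automatically.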
What is ATR? Average True Range measures a stock's volatility by averaging its daily price range over a period (typically 14 days). A stock that moves $5/day has a higher ATR than one that moves $0.50/day. By expressing all distances as ATR multiples instead of fixed dollar amounts, the system adapts automatically to each stock's volatility. A low-volatility utility stock gets tighter zones; a volatile tech stock gets wider ones. No manual tuning needed. This is a technique worth stealing for any price-based analysis.
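For reference, a minimal ATR computation. The classic definition uses Wilder's smoothing; this sketch takes a simple mean of the true range, which is close enough to show the idea:

```python
import numpy as np

def atr(high, low, close, period: int = 14) -> float:
    """Average True Range: mean true range over the last `period` bars.
    True range accounts for overnight gaps via the previous close."""
    high, low, close = (np.asarray(a, dtype=float) for a in (high, low, close))
    prev_close = close[:-1]
    tr = np.maximum.reduce([
        high[1:] - low[1:],             # intraday range
        np.abs(high[1:] - prev_close),  # gap up beyond yesterday's close
        np.abs(low[1:] - prev_close),   # gap down beyond yesterday's close
    ])
    return float(tr[-period:].mean())
```

A distance of "1.5 x ATR" then means the same thing for a quiet utility stock and a volatile tech name.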
2. Rule-based chart patterns
Classic patterns detected through geometric rules:
- Bull/bear flags: A strong move (4%+ pole) followed by a parallel channel pullback. Measured by pole size, channel slope, and pullback depth.
- Rising/falling wedges: Converging trendlines fitted through pivot points. Requires at least 35% convergence and R² of 0.45 or higher.
3. Candlestick patterns
Single and multi-candle patterns detected using TA-Lib (61 built-in pattern functions), with manual fallbacks for key patterns like hammers, engulfing candles, and morning/evening stars. Each gets a confidence score based on signal strength.
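The manual fallback logic looks roughly like this hypothetical hammer check. The thresholds are illustrative, not the production values, and TA-Lib's own hammer definition differs in details:

```python
def is_hammer(o: float, h: float, l: float, c: float,
              body_ratio: float = 0.3, wick_ratio: float = 2.0) -> bool:
    """Hammer heuristic: small real body near the top of the range,
    long lower shadow, little to no upper shadow."""
    rng = h - l
    if rng <= 0:
        return False
    body = abs(c - o)
    lower_wick = min(o, c) - l
    upper_wick = h - max(o, c)
    return (body <= body_ratio * rng
            and lower_wick >= wick_ratio * body
            and upper_wick <= body)
```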
4. Hybrid scoring
Blends rule-based confidence (55%), candlestick agreement (20%), volume context (15%), and breakout status (10%) into a single score. This is the "should I pay attention?" number, combining multiple signals into one weighted view.
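As a sketch, the blend is a plain weighted sum. The weights come from the text above; the sub-score names are my own:

```python
WEIGHTS = {"rule": 0.55, "candle": 0.20, "volume": 0.15, "breakout": 0.10}

def hybrid_score(rule: float, candle: float, volume: float, breakout: float) -> float:
    """Blend four sub-scores (each assumed to be in [0, 1]) into one number."""
    parts = {"rule": rule, "candle": candle, "volume": volume, "breakout": breakout}
    return sum(w * parts[name] for name, w in WEIGHTS.items())
```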
5. CV proxy patterns
An independent detector that uses the same pattern classes but different math. Instead of fitting all bars, it fits regression lines only through swing pivot points, producing a second opinion. When both the rule-based layer and CV proxy agree, the signal carries more weight.
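A sketch of the pivot-only fit, assuming least squares via numpy.polyfit with R² as the fit-quality measure (mirroring the R² threshold the wedge detector uses):

```python
import numpy as np

def fit_pivot_trendline(pivot_idx, pivot_prices):
    """Fit a line through swing pivots only, not every bar.
    Returns (slope, intercept, r_squared)."""
    x = np.asarray(pivot_idx, dtype=float)
    y = np.asarray(pivot_prices, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else 1.0
    return float(slope), float(intercept), r2
```

Fitting through pivots instead of all bars makes the line hug the swing structure rather than being dragged around by every intermediate candle.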
The YOLOv8 layer sits alongside these as a sixth, independent voice. When three or four layers flag the same pattern on the same chart, that convergence means something. When only one layer sees it, take it with a grain of salt. That's the power of the ensemble approach: you don't need any single detector to be perfect.
The nightly data pipeline
Every weeknight at 22:00 UTC (after US market close), while I'm probably asleep or making stew, APScheduler kicks off the update pipeline:
- Fetch the S&P 500 list from Wikipedia's S&P 500 table, plus market context tickers (SPY, QQQ, VIX).
- Pull prices via yfinance with rate limiting (0.5 to 1.2 second random delay between requests, because getting IP-banned by Yahoo Finance is not on my bucket list).
- Scrape fundamentals from Finviz for P/E, PEG, EPS growth, analyst targets. Cached with a 20-hour minimum interval to avoid redundant fetches.
- Run incremental YOLO only on tickers that have new candles since their last detection run.
- Generate a daily report with pipeline metrics: tickers processed, failures, timing.
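The incremental selection in step four can be a single SQL query against the two tables from the architecture diagram. The column names here are my guesses, not the actual schema:

```python
import sqlite3

def tickers_needing_detection(conn: sqlite3.Connection) -> list:
    """Tickers whose newest price bar is newer than their newest stored
    detection, or that have no stored detections at all."""
    rows = conn.execute("""
        SELECT p.ticker
        FROM (SELECT ticker, MAX(date) AS last_bar
              FROM price_daily GROUP BY ticker) AS p
        LEFT JOIN (SELECT ticker, MAX(detected_on) AS last_run
                   FROM yolo_patterns GROUP BY ticker) AS y
          ON y.ticker = p.ticker
        WHERE y.last_run IS NULL OR p.last_bar > y.last_run
        ORDER BY p.ticker
    """).fetchall()
    return [r[0] for r in rows]
```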
Each run records start/end timestamps and per-ticker status. The /api/status
endpoint exposes this so I can check data freshness at a glance. The pipeline also has
graceful shutdown: if the process gets SIGTERM mid-run (during a Railway redeploy, for example),
it catches the signal, saves partial progress, and exits cleanly.
Deployment on Railway
The app runs on Railway using Nixpacks. Deployment was mostly smooth, except for the part where it wasn't. A few quirks worth noting:
- PyTorch CPU-only: Installed from a custom index URL to save ~1 GB over the default CUDA build. There's no GPU on the Railway instance anyway.
- OpenCV headless: The default opencv-python package needs libGL.so.1, which doesn't exist on headless servers. Force-installing opencv-python-headless fixes this.
- First-run seeding: On fresh deploy, the start script creates the schema, spawns a background seed (non-blocking), and starts Uvicorn immediately. The API is live while data populates. The health check timeout is set to 60 minutes to accommodate the initial ~30-minute seed.
- Model caching: YOLOv8 weights download on first inference and persist on the volume. Subsequent deploys skip the download.
Lessons learned
Every project has its humbling moments. Here are the ones that stuck with me.
Pre-compute beats on-demand for heavy inference. Running YOLOv8 per request would take 1 to 2 seconds per ticker. Pre-computing nightly and querying SQLite makes the dashboard respond in milliseconds. The tradeoff (patterns are one day stale) is fine for daily charts.
ATR as a universal scale works well. Expressing all distances as ATR multiples makes every parameter self-adjusting. No per-stock tuning needed, and it handles both quiet and volatile markets gracefully.
Multiple weak signals beat one strong signal. No single detector is reliable. But when three or four independent methods agree on the same pattern, you start paying attention. It's like when all your friends independently tell you the same restaurant is good.
Single-service simplicity is underrated. One process, one database file, one deploy target. When something breaks at 2am, there's exactly one place to look. No Kubernetes. No microservices. Just vibes and SQLite.
Coordinate mapping is a transferable technique. The matplotlib axis metadata approach for converting pixels back to data works for any rendered chart. If you're doing computer vision on generated visualisations, this pattern is worth knowing about.
If you made it this far, thanks for reading. Building Trader Koo was one of those projects where I learned something new at every turn. If you have questions, ideas, or just want to argue about whether head-and-shoulders patterns actually mean anything, feel free to reach out. Now go check if your favourite stock has a double bottom forming.
References
- Trader Koo (live app)
- Source code on GitHub
- Ultralytics YOLOv8 documentation
- foduucom/stockmarket-pattern-detection-yolov8 (Hugging Face model)
- mplfinance for rendering candlestick charts
- TA-Lib for candlestick pattern functions
- yfinance for market data
- Investopedia: Average True Range (ATR)
- Wikipedia: Ensemble learning
- Plotly.js for interactive frontend charts
- FastAPI
- Railway for deployment