Full Stack · 2025

BettingAIPro

A live football prediction platform built on a Dixon-Coles statistical engine, a Groq/Gemini AI ensemble, and a PostgreSQL audit ledger that records the baseline, the AI deltas, and the variance reason for every prediction — so failures are as informative as wins.

Pipeline: Live odds (API-Football) → Ingestion (FastAPI worker · Python async I/O) → Math engine (Poisson · Dixon-Coles · xG) → AI ensemble (Groq + Gemini · Qwen3-32B · Flash 2.0) → Delivery (Next.js · TypeScript App Router) → Persistence (PostgreSQL audit ledger: Poisson baseline · Groq Δ · Gemini Δ · VarianceReason ENUM)

24 Predictions (tracked · live system)

47.4% Win Rate (9W / 10L · 19 settled)

2 AI Models (Groq Qwen3-32B · Gemini Flash)

Active Since Feb 25 (tracking continues)

Math Engine

Dixon-Coles with rho correction

Standard independent Poisson is terrible at low scores. It underestimates 0-0 and 1-1 draws and overestimates 1-0 and 0-1 in a way that doesn't reflect real football. I had to implement the Dixon-Coles correction with rho = -0.13 to actually get the draw probabilities right. The normalization drift that correction introduces gets absorbed by adjusting the largest output component rather than re-scaling everything, so the sum always lands at exactly 100.0.
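A minimal sketch of what this looks like, assuming the standard Dixon-Coles tau factors over independent Poisson marginals; the attack rates (lam, mu) are illustrative, and the largest-cell adjustment mirrors the normalization trick described above.

```python
import math

RHO = -0.13  # low-score dependence correction from the write-up


def poisson_pmf(k: int, lam: float) -> float:
    return math.exp(-lam) * lam ** k / math.factorial(k)


def tau(x: int, y: int, lam: float, mu: float, rho: float = RHO) -> float:
    # Dixon-Coles adjustment for the four low-score cells; 1.0 elsewhere.
    # With rho negative, 0-0 and 1-1 get boosted, 1-0 and 0-1 get shaved.
    if (x, y) == (0, 0):
        return 1 - lam * mu * rho
    if (x, y) == (0, 1):
        return 1 + lam * rho
    if (x, y) == (1, 0):
        return 1 + mu * rho
    if (x, y) == (1, 1):
        return 1 - rho
    return 1.0


def score_grid(lam: float, mu: float, max_goals: int = 10) -> dict:
    grid = {
        (x, y): tau(x, y, lam, mu) * poisson_pmf(x, lam) * poisson_pmf(y, mu)
        for x in range(max_goals + 1)
        for y in range(max_goals + 1)
    }
    # Absorb normalization drift into the largest cell instead of re-scaling,
    # so the grid sums to exactly 1.0.
    largest = max(grid, key=grid.get)
    grid[largest] += 1.0 - sum(grid.values())
    return grid
```

The truncation at `max_goals` plus the tau factors are where the drift comes from; dumping it into the single largest cell changes that cell by a fraction of a percent instead of nudging every probability.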

Observability

I was guessing why the AI disagreed with the math

I couldn't say where Groq or Gemini were overriding the baseline, so I added the audit ledger columns to see it exactly. Three columns per prediction: the raw Poisson output, the Groq delta, the Gemini delta. I also added a VarianceReason ENUM with eight values so post-mortems have a specific cause, not just a shrug.
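A sketch of the shape of one ledger row. The column names, the `OTHER` placeholder, and the assumption that the published number is the baseline plus both deltas are mine; only `INJURY_DATA_MISSED` is named in the write-up.

```python
import enum
from dataclasses import dataclass
from typing import Optional


class VarianceReason(enum.Enum):
    INJURY_DATA_MISSED = "INJURY_DATA_MISSED"  # the one value named in the post-mortems
    OTHER = "OTHER"                            # placeholder; the real ENUM has eight values


@dataclass
class PredictionAudit:
    poisson_baseline: float   # raw Dixon-Coles output
    groq_delta: float         # Groq's adjustment to the baseline
    gemini_delta: float       # Gemini's adjustment to the baseline
    variance_reason: Optional[VarianceReason] = None

    @property
    def final_probability(self) -> float:
        # Assumption: the served probability reconciles as baseline + deltas,
        # which is what makes each model's override auditable after the fact.
        return self.poisson_baseline + self.groq_delta + self.gemini_delta
```

Because the deltas are stored rather than the blended result, a post-mortem can attribute a miss to a specific model's override instead of the ensemble as a whole.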

AI Layer

Python because the math lives there

I picked Python and FastAPI because the Dixon-Coles math lives natively there. Porting it to TypeScript would have meant reimplementing scipy-adjacent logic by hand. FastAPI's async I/O lets the same process handle odds ingestion and serve API requests from the Next.js frontend without spinning up separate workers.
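The single-process pattern can be sketched with stdlib asyncio alone; names here are illustrative, and in the real app the ingestion task would hang off FastAPI's lifespan hook rather than a hand-rolled `main`.

```python
import asyncio

# One event loop, two jobs: a periodic ingestion task keeps a shared cache
# warm while request handlers read from it, with no separate worker process.
latest_odds: dict = {}


async def ingest_odds(poll_seconds: float = 0.01, cycles: int = 3) -> None:
    for i in range(cycles):
        # stand-in for an async API-Football fetch
        latest_odds["home_win"] = 1.95 + i * 0.01
        await asyncio.sleep(poll_seconds)


async def handle_request() -> dict:
    # a request handler reads whatever the ingestion task last wrote
    await asyncio.sleep(0.02)
    return dict(latest_odds)


async def main() -> dict:
    ingest = asyncio.create_task(ingest_odds())
    response = await handle_request()  # served while ingestion is mid-cycle
    await ingest
    return response
```

Both coroutines yield at their `await` points, so odds polling and request serving interleave on one loop instead of blocking each other.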

Observability · Fixed

INJURY_DATA_MISSED was never being set

I had a VarianceReason enum defined and thought it was being set everywhere. It wasn't. When injury data was missing from the scouting layer, the field silently skipped assignment and got logged as null. I was running accuracy calculations on a variance flag that was never populated. Fixed by tracing the pipeline end-to-end and adding the assignment at the exact point where injury data is confirmed absent.
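A hypothetical reconstruction of the bug and the fix; the function and key names are mine, not the real pipeline's.

```python
from typing import Optional


def classify_variance(scouting: dict) -> Optional[str]:
    if "injuries" not in scouting:
        # The fix: assign the flag at the exact point injury data is
        # confirmed absent. Before, this branch fell through and the
        # column was silently logged as null.
        return "INJURY_DATA_MISSED"
    # ...other variance causes elided...
    return None
```

The lesson is less about the one-line fix than the check that found it: any enum column feeding accuracy stats should be verified non-null end-to-end, because a never-populated flag looks identical to "no variance".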

Infrastructure · Fixed

Vercel was killing my AI requests at 10 seconds

Vercel's serverless functions were timing out my AI requests at the 10-second limit. I had to throw out the synchronous call and build a background sync job with a polling loop to stop requests from dying mid-flight. The request now returns immediately with a job ID and the client polls until the result lands.
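An illustrative sketch of the fire-and-poll pattern with a thread and an in-memory store; the function names, job-store shape, and result payload are assumptions, and the real worker runs as a background sync job rather than a thread.

```python
import threading
import time
import uuid

JOBS: dict = {}  # stand-in for a durable job table


def start_prediction(fixture_id: int) -> str:
    # The endpoint's job: register the work and return a job ID immediately,
    # well inside the serverless timeout.
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"status": "pending", "result": None}
    threading.Thread(target=_run_job, args=(job_id, fixture_id), daemon=True).start()
    return job_id


def _run_job(job_id: str, fixture_id: int) -> None:
    time.sleep(0.05)  # stands in for the slow Groq/Gemini ensemble call
    JOBS[job_id] = {"status": "done", "result": {"fixture": fixture_id, "p_home": 0.41}}


def poll(job_id: str) -> dict:
    # The client hits this repeatedly until status flips to "done".
    return JOBS[job_id]
```

Each poll is its own short request, so no single invocation ever has to outlive the platform's time limit.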

Security · Fixed

I caught myself leaving admin routes open

I found admin routes shipping with no auth check at all. I wrote a require_admin middleware and applied it across the entire admin surface before I let anyone see the site, so a new admin endpoint can't accidentally ship unprotected.
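A framework-agnostic sketch of the guard; the real version is FastAPI middleware, and the request shape and session check here are stand-ins.

```python
import functools


class Forbidden(Exception):
    """Raised when a non-admin hits a guarded route."""


def require_admin(handler):
    @functools.wraps(handler)
    def wrapper(request: dict, *args, **kwargs):
        # stand-in for a real session/JWT admin check
        if not request.get("is_admin"):
            raise Forbidden("admin only")
        return handler(request, *args, **kwargs)
    return wrapper


@require_admin
def delete_prediction(request: dict, prediction_id: int) -> dict:
    return {"deleted": prediction_id}
```

The point of centralizing the check is that forgetting it becomes structural: a route under the guarded surface inherits the check by default instead of depending on a per-endpoint `if` someone has to remember to write.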

I use AI for research and as a sounding board for syntax, but the logic here is mine. Whether it's the Dixon-Coles correction or the audit ledger schema, I spent the hours debugging these patterns myself. If you pick any line in this repo and ask me why it's there, I can give you the engineering reason and the specific bug it solved. I'm here to build systems, not just prompt them.