ASX Last Day Laggards Tracker
Automated the "ASX Last Day Laggards" list that journalist Stephen Mayne had been compiling manually for 15 years. The tool scrapes announcements, extracts financial data from hundreds of PDFs using AI, and produces a scored dashboard that surfaces previously unidentified patterns such as repeat offenders and auditor concentration. The whole pipeline runs and updates unattended.
Why I built it
I heard Mayne on the Money Cafe podcast mention he was considering discontinuing his "last day laggards" list because of how tedious compiling it had become. I was looking for projects and this seemed like a good candidate: a genuinely useful piece of market accountability, a repeatable process, and publicly available data. It was also going to be more complex than my other projects.
Here is the accountability website.
How I approached it
I started with a detailed planning session in Claude before touching any code, mapping out the full scope from data sources and database design to visual presentation. That produced a spec document and a CLAUDE.md file that guided every Claude Code session; a planning session like this is now my go-to first step for any project.
The tool scrapes ASX announcements on deadline days, downloads each PDF filing, and sends the first few pages to the Anthropic API to extract profit/loss figures, auditor names, and going concern flags into structured data. Beyond automating the list, I built a Laggard Score ranking worst offenders across multiple seasons, an auditor concentration analysis showing which firms are persistently associated with last-day loss-makers, and a suspension tracker covering 14 reporting seasons of companies that failed to file at all.
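The extraction step can be sketched roughly as below. This is a minimal illustration, not the project's actual code: the function and field names (`extract_filing_data`, `net_profit`, `going_concern`) and the prompt wording are mine, and the model identifier is assumed.

```python
import json

# Illustrative prompt: ask for strict JSON so the reply can be parsed
# directly into structured fields. The real prompt is an assumption.
EXTRACTION_PROMPT = (
    "From this ASX filing excerpt, extract: net_profit (AUD, negative for a "
    "loss), auditor (audit firm name), and going_concern (true if the report "
    "flags material uncertainty). Reply with a single JSON object only."
)

def extract_filing_data(client, pdf_text: str,
                        model: str = "claude-3-5-haiku-latest") -> dict:
    """Send the first pages of a filing to the Anthropic Messages API
    and parse the structured reply. `client` is an anthropic.Anthropic()."""
    response = client.messages.create(
        model=model,
        max_tokens=500,
        messages=[{"role": "user",
                   "content": f"{EXTRACTION_PROMPT}\n\n{pdf_text}"}],
    )
    return parse_extraction(response.content[0].text)

def parse_extraction(raw: str) -> dict:
    """Parse the model's JSON reply, tolerating stray markdown fences."""
    cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
    data = json.loads(cleaned)
    # Normalise the fields the scoring and dashboard steps rely on.
    return {
        "net_profit": float(data["net_profit"]),
        "auditor": str(data["auditor"]).strip(),
        "going_concern": bool(data["going_concern"]),
    }
```

Keeping the parsing in its own function makes the fragile part (the model occasionally wrapping JSON in a code fence) easy to test without an API call.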

The outputs include a dashboard on Vercel, an Excel export, and an HTML block matching Mayne's existing website format that he could paste straight into his CMS. I pitched the tool to Mayne over the Easter long weekend with a link to the dashboard, the spreadsheet, and an offer to walk him through it.

What I learned
The biggest surprise was what I learned about the ASX itself and how it handles data access compared to other markets. In the US, the SEC provides every company filing for free through structured APIs. The ASX treats the same data as a commercial product and actively blocks scrapers. Global peers like Nasdaq generate more data revenue despite offering free base access because they sell premium analytics on top. The ASX has locked down the base layer instead, which is why this tool needs a headless browser and an LLM to do what should be a simple API call.
Getting the data was the most interesting problem to solve. The original plan was to scrape Market Index with a simple HTTP request, but it was blocked by Cloudflare. The ASX's own site was also blocked. Claude Code and I went back and forth trying different approaches, each one failing, until it landed on using Playwright to launch a headless browser that passes the Cloudflare challenge and then calls Market Index's internal API from inside the browser. Watching that process of hitting a wall, adapting, and finding a workaround was probably the most useful and satisfying learning experience of the whole project.
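The workaround above can be sketched in a few lines. Again this is illustrative rather than the project's real code: the internal API path and query parameter are hypothetical (the real endpoint would be discovered by watching the site's own network requests), and `fetch_announcements` is my name.

```python
API_PATH = "/api/announcements"  # hypothetical internal endpoint

def announcements_url(date: str) -> str:
    """Build the in-page request path for a given deadline day."""
    return f"{API_PATH}?date={date}"

def fetch_announcements(date: str) -> list:
    """Launch a real (headless) Chromium so Cloudflare's challenge passes,
    then call the site's internal API from inside the page, so the request
    carries the same cookies and headers as the site's own JavaScript."""
    from playwright.sync_api import sync_playwright  # pip install playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # Loading a normal page first lets Cloudflare set its clearance cookie.
        page.goto("https://www.marketindex.com.au", wait_until="networkidle")
        data = page.evaluate(
            """async (path) => {
                const resp = await fetch(path, {headers: {Accept: "application/json"}});
                return await resp.json();
            }""",
            announcements_url(date),
        )
        browser.close()
        return data
```

The key trick is `page.evaluate` with an in-page `fetch`: because the request originates from the already-cleared browser session, Cloudflare treats it like the site's own traffic rather than a bare HTTP client.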
On cost, I initially ran the PDF extraction through Claude Sonnet and watched the API credits burn too quickly. Switching to Haiku (a smaller, cheaper model) for what is ultimately a structured data extraction task brought the cost down dramatically with no meaningful loss in accuracy. The whole project came in under $30 in API credits, and the infrastructure runs for free on Vercel.