An AI-powered job aggregator and resume coaching platform for the Singapore job market.
Before starting AIAP, I was job searching in Singapore and hit a wall.
I was building a scraper for Careers@Gov to create a job matching and reminder system, back when Careers@Gov 2.0 was still in the pipeline and the existing portal had limited search functionality. The original version was simple: scrape, match keywords, send notifications.
But the AI landscape wasn't ready yet. LLMs hadn't reached the point where you could reliably rewrite resume bullets without hallucinating metrics or fabricating skills. The project sat dormant.
Then my friend Yanwen brought up resume building during a conversation, and it reignited everything. With SEA-LION (AI Singapore's open models) now available and capable enough for structured text generation, the missing piece was finally in place. What started as a Careers@Gov scraper evolved into a full resume coaching platform.
The tools that existed were either generic (not built for Singapore) or too simple (keyword stuffing without understanding the actual JD). I wanted something that closed both gaps: built for the Singapore market, and smart enough to actually read the JD.
A FastAPI backend powers job aggregation and AI coaching. React frontend provides the search and resume editing experience.
The heart of Job Hunter is the resume tailoring pipeline. It takes your resume and a job description, then produces a tailored version through 7 validated stages.
The pipeline supports three intensity modes: nudge (local only, 5s), keywords (with bullet rewrites, 30s), and full (all stages including summary, 45-60s).
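The three modes can be pictured as a small dispatch table mapping each mode to the stages it runs and its latency budget. The stage names and `Mode` dataclass below are illustrative stand-ins, not the actual pipeline code.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three intensity modes described above.
@dataclass(frozen=True)
class Mode:
    name: str
    stages: tuple          # pipeline stages this mode runs
    uses_llm: bool         # nudge runs locally, no model calls
    budget_seconds: int    # rough latency target from the post

MODES = {
    "nudge":    Mode("nudge",    ("score", "local_fixes"), False, 5),
    "keywords": Mode("keywords", ("score", "classify", "rewrite_bullets"), True, 30),
    "full":     Mode("full",     ("score", "classify", "rewrite_bullets",
                                  "dedup", "summary"), True, 60),
}

def stages_for(mode_name: str) -> tuple:
    """Return the stage list for a requested intensity mode."""
    return MODES[mode_name].stages
```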
Every AI-generated rewrite passes through five gates before being accepted. Any failure reverts to the original text.
| Gate | What it checks | On failure |
|---|---|---|
| Fact Preservation | All numbers and metrics from original must appear in rewrite | Revert |
| AI Phrase Detection | Auto-replace 84 weak phrases ("leveraged", "synergize") unless phrase appears in the JD | Auto-fix |
| Keyword Verbatim | Required keywords appear exactly as in JD | Warn |
| Length Sanity | Max 40 words per bullet and no more than 1.8x the original length | Revert |
| Hallucination Detection | Reject terms that appear in neither the resume nor the JD | Revert |
Here is what happens when the pipeline rewrites a bullet and the gates intervene:
The gate checks: "5-person", "40%", and "3 departments" all appear in the original and survive in the rewrite. If the AI had inflated "5-person" to "20-person", the fact preservation gate would revert the entire bullet to the original.
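A minimal sketch of that fact-preservation check: extract every numeric or metric token from the original bullet and revert unless all of them survive in the rewrite. The regex and function names are assumptions, not the project's code.

```python
import re

# Tokens like "5-person", "40%", "3" count as facts that must survive.
FACT = re.compile(r"\d+(?:\.\d+)?%?(?:-person)?")

def facts(text: str) -> set:
    return set(FACT.findall(text))

def fact_gate(original: str, rewrite: str) -> str:
    """Accept the rewrite only if every original fact is preserved."""
    if facts(original) <= facts(rewrite):
        return rewrite
    return original  # revert on any dropped or altered metric

orig = "Led a 5-person team to cut costs 40% across 3 departments"
bad  = "Led a 20-person team to cut costs 40% across 3 departments"
```

With these inputs, `fact_gate(orig, bad)` reverts to the original because the inflated "20-person" no longer contains the fact "5-person".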
Two primary sources, MCF and Careers@Gov, power the nightly crawl via a Railway cron job at 22:00 UTC. The scraper architecture is extensible to more.
The scraper architecture supports additional sources (NodeFlair, Indeed, JobStreet, Adzuna, Jooble) via a pluggable SOURCE_MAP, but the crawl currently runs only the two primary APIs. MCF alone covers most of the Singapore job market with structured salary and skills data. Enabling an existing scraper is a config change; adding a new source requires writing a scraper class.
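The pluggable design might look like the registry below: each source is a scraper class keyed in `SOURCE_MAP`, and the crawl iterates only over the enabled keys. Class names beyond `CareersGovScraper` and the config shape are illustrative assumptions.

```python
# Hypothetical sketch of a pluggable SOURCE_MAP registry.
class BaseScraper:
    def fetch(self):                  # each source implements fetch()
        raise NotImplementedError

class MCFScraper(BaseScraper):
    def fetch(self):
        return [{"title": "Data Engineer", "source": "mcf"}]

class CareersGovScraper(BaseScraper):
    def fetch(self):
        return [{"title": "Analyst", "source": "careers_gov"}]

SOURCE_MAP = {
    "mcf": MCFScraper,
    "careers_gov": CareersGovScraper,
    # "nodeflair": NodeFlairScraper,  # present but disabled in config
}

ENABLED = ["mcf", "careers_gov"]  # enabling a source is a config change

def crawl():
    jobs = []
    for key in ENABLED:
        jobs.extend(SOURCE_MAP[key]().fetch())
    return jobs
```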
CareersGovScraper fetches individual detail pages, extracts skillTags from the API response, and falls back to parsing skill cues from the JD text using jd_preparser.py (regex pattern matching, ~50ms/job).
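The regex fallback can be sketched as a list of skill-cue patterns scanned over the JD text. The cue list below is a toy subset; the real `jd_preparser.py` presumably carries a much larger set.

```python
import re

# Illustrative skill-cue patterns; the actual parser's list is larger.
SKILL_CUES = [
    r"\bPython\b", r"\bSQL\b", r"\bAWS\b", r"\bDocker\b",
    r"\bmachine learning\b", r"\bdata pipeline[s]?\b",
]

def extract_skills(jd_text: str) -> list:
    """Return the skill cues found in a job description."""
    found = []
    for pattern in SKILL_CUES:
        m = re.search(pattern, jd_text, flags=re.IGNORECASE)
        if m:
            found.append(m.group(0))
    return found

jd = "We need Python and SQL skills to maintain data pipelines on AWS."
```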
The interesting engineering is in the failures, not the features.
The first version of the keyword integration just stuffed every missing JD keyword into the resume. The result read like a search engine, not a human. The fix: Stage 0 classifies every missing skill as injectable (user has adjacent experience) or non-injectable (user has no basis for this claim). The AI is only allowed to weave in injectable keywords. Non-injectable ones get flagged as skill gaps, not fabricated onto the resume.
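The injectable/non-injectable split can be sketched with an adjacency map: a missing JD keyword is injectable only if the resume already shows related experience. The adjacency table here is a toy stand-in for whatever signal the real classifier uses.

```python
# Hypothetical adjacency map: keyword -> terms that count as adjacent
# experience. Illustrative only.
ADJACENT = {
    "airflow": {"luigi", "cron", "etl", "data pipeline"},
    "kubernetes": {"docker", "containers"},
    "terraform": {"cloudformation", "infrastructure as code"},
}

def classify_missing(keyword: str, resume_terms: set) -> str:
    """Return 'injectable' if the user has adjacent experience, else 'gap'."""
    if keyword in resume_terms:
        return "present"
    if ADJACENT.get(keyword, set()) & resume_terms:
        return "injectable"   # the AI may weave this in
    return "gap"              # flagged to the user, never fabricated

resume = {"docker", "etl", "python"}
```

Under this sketch, "kubernetes" is injectable (the resume mentions Docker) while "terraform" is a flagged gap.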
Early rewrites had a duplication problem: the AI would rewrite three bullets in the same entry to all start with "Spearheaded" and repeat the same achievement. Stage 3 now passes sibling context to every rewrite call, so the model knows what the other bullets already say. Stage 4 then runs verb synonym dedup (15 verb groups) as a safety net.
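The verb-dedup safety net can be sketched as synonym families for lead verbs: when a bullet repeats a verb already used by a sibling, swap in an unused synonym from the same family. The two groups below are a toy subset of the 15 groups mentioned above.

```python
# Illustrative verb synonym groups; the real pipeline uses 15.
VERB_GROUPS = [
    {"Spearheaded", "Led", "Directed", "Drove"},
    {"Built", "Developed", "Engineered", "Created"},
]

def dedup_lead_verbs(bullets):
    """Replace repeated lead verbs with unused synonyms from their group."""
    used = set()
    out = []
    for b in bullets:
        verb, rest = b.split(" ", 1)
        group = next((g for g in VERB_GROUPS if verb in g), None)
        if group and verb in used:
            alternatives = sorted(group - used)
            if alternatives:
                verb = alternatives[0]
        used.add(verb)
        out.append(f"{verb} {rest}")
    return out

bullets = ["Spearheaded the migration", "Spearheaded the rollout"]
```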
SEA-LION's 70B model occasionally times out or returns malformed JSON. When Stage 1 (strategic analysis) fails, the pipeline doesn't crash. It falls back to a local heuristic: prioritize bullets with the most issues (from the scorer) and mark the result as _degraded. The user still gets a tailored resume, just without the full strategic reasoning.
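The degradation path can be sketched as a try/except around the model call: on timeout or malformed JSON, fall back to the local issue-count heuristic and tag the result `_degraded`. Function names and the exception types are assumptions.

```python
import json

def llm_strategic_analysis(resume, jd):
    # stand-in for the real SEA-LION call; here it always "fails"
    raise TimeoutError("model did not respond")

def heuristic_priorities(scored_bullets):
    # local fallback: prioritize bullets with the most scorer-flagged issues
    return sorted(scored_bullets, key=lambda b: -len(b["issues"]))

def analyze(resume, jd, scored_bullets):
    try:
        plan = llm_strategic_analysis(resume, jd)
        return {"plan": plan, "_degraded": False}
    except (TimeoutError, json.JSONDecodeError):
        return {"plan": heuristic_priorities(scored_bullets),
                "_degraded": True}

scored = [{"text": "a", "issues": ["weak verb"]},
          {"text": "b", "issues": ["no metric", "too long"]}]
```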
Keyword matching misses jobs described differently from how you'd write your resume. The backend includes a semantic search layer using sentence-transformers/all-MiniLM-L6-v2 (384-dim embeddings). Both job descriptions and resumes are encoded, and cosine similarity surfaces matches that keyword search would miss. This is how a "data pipeline engineer" resume can match a "data infrastructure" JD.
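The model call itself is out of scope here, but the ranking step reduces to cosine similarity over pre-computed vectors. The toy 4-dimensional vectors below stand in for real 384-dim MiniLM embeddings.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings; real ones come from all-MiniLM-L6-v2.
resume_vec = [0.9, 0.1, 0.4, 0.0]   # "data pipeline engineer" resume
jobs = {
    "data infrastructure": [0.8, 0.2, 0.5, 0.1],
    "marketing manager":   [0.0, 0.9, 0.1, 0.8],
}

def best_match(resume_vec, jobs):
    """Return the job title whose embedding is closest to the resume."""
    return max(jobs, key=lambda name: cosine(resume_vec, jobs[name]))
```

Even though "data infrastructure" shares no keyword with "data pipeline engineer", the vectors sit close together and it wins the ranking.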
Job Hunter is live at job.kooexperience.com. The core experience works, but there's more to build: