Five days. From blank directory to a live product with user auth, a working core feature, Stripe billing, and a landing page. A year ago that would have been two months minimum.
Let me be upfront: it wasn't pretty. The code needs refactoring. Some edge cases are held together with duct tape. The design is functional, not beautiful. But it works, it's live, real users can sign up and pay, and the core feature delivers value. That's an MVP.
Here's the day-by-day breakdown.
Day 1: Architecture and database
Started by describing the product to the AI. "I need a platform where users can upload documents, the AI analyzes them, and generates a structured report. Flask backend, React frontend, PostgreSQL."
The AI mapped out the database schema in about five minutes. Users, documents, analyses, reports — with proper foreign keys and constraints. I adjusted a few column types and added fields it forgot (like file_size and mime_type), then had it generate the SQLAlchemy models.
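The post doesn't include the schema itself, but the shape described — users owning documents, with the hand-added size and type columns — might look something like this in SQLAlchemy. This is a sketch, not the actual models; every column name beyond file_size and mime_type is an illustrative guess.

```python
# Sketch of two of the four tables described above (users, documents).
# Column names other than file_size and mime_type are illustrative.
from datetime import datetime, timezone

from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String(255), unique=True, nullable=False)
    password_hash = Column(String(255), nullable=False)
    created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
    documents = relationship("Document", back_populates="user")

class Document(Base):
    __tablename__ = "documents"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"), nullable=False)
    filename = Column(String(255), nullable=False)
    file_size = Column(Integer)      # one of the fields the AI initially forgot
    mime_type = Column(String(100))  # likewise added by hand
    user = relationship("User", back_populates="documents")

# An in-memory SQLite engine is enough to check the schema compiles.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```

The foreign key plus the two relationship() declarations are what make "users own documents" navigable in both directions.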
By end of day 1, I had: database schema, all models, JWT authentication (register, login, refresh, password reset), and a basic React shell with routing.
Time AI saved: probably 6-8 hours of boilerplate writing.
Day 2: Core feature
This was the hard day. The document analysis logic is the actual value of the product, and AI couldn't just generate it from a description. I spent most of the day designing the analysis pipeline, with AI helping on the implementation details.
"Write a function that extracts text from a PDF, chunks it into sections, and sends each section to Claude for analysis." The basic pipeline was generated quickly, but then reality hit: PDFs with weird formatting, tables that don't extract cleanly, API rate limits, error handling for partial failures.
Each of these edge cases required me to think through the solution, then AI helped implement it. It was collaborative, not automated.
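The chunk-then-analyze loop with retries and partial-failure handling can be sketched like this. The Claude call is injected as a plain callable (analyze_fn) so the retry and failure logic is visible on its own; in the real pipeline that callable would wrap the Anthropic SDK, and text extraction would come from a PDF library. Function names and the chunk size are illustrative.

```python
# Sketch of the chunk -> analyze -> collect pipeline described above.
# analyze_fn stands in for the actual Claude API call, so rate-limit
# backoff and partial failures can be shown without the network.
import time
from typing import Callable

def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Split extracted text into roughly section-sized chunks on paragraph breaks."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def analyze_document(text: str, analyze_fn: Callable[[str], str],
                     retries: int = 3, backoff: float = 1.0) -> dict:
    """Analyze each chunk, retrying on failure; record failures instead of aborting."""
    results, failed = [], []
    for i, chunk in enumerate(chunk_text(text)):
        for attempt in range(retries):
            try:
                results.append(analyze_fn(chunk))
                break
            except Exception:
                if attempt == retries - 1:
                    failed.append(i)  # partial failure: note it and keep going
                else:
                    time.sleep(backoff * (2 ** attempt))  # exponential backoff
    return {"sections": results, "failed_chunks": failed}
```

The design choice worth noting is that a failed chunk doesn't kill the whole report — it's recorded in failed_chunks so the report can flag incomplete sections instead of returning nothing.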
Time AI saved: maybe 3-4 hours on implementation, but the thinking time was all mine.
Day 3: Frontend and API wiring
This was the fastest day. Most of the frontend is standard CRUD — forms, lists, detail views, loading states, error handling. AI generated component after component while I reviewed, adjusted styles, and connected everything to the backend.
The pattern: "Write a React component for [feature]. Use inline styles with these CSS variables. Include loading and error states. Call [specific API endpoint]."
Each component took 2-3 minutes to generate and 5-10 minutes to review and polish. I built about 15 components this way.
Time AI saved: easily 12-15 hours.
Day 4: Billing and deployment
Stripe integration was almost entirely AI-generated. The webhook handler, checkout sessions, subscription management, credit tracking — these are well-documented patterns that AI handles perfectly.
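The webhook handler is the piece that ties those patterns together. A useful structure — and only a sketch of one possible shape, not the author's actual code — is to keep signature verification (stripe.Webhook.construct_event in the real handler) at the edge and route the verified event dict through a plain function, so the billing logic is testable without the Stripe SDK. The side-effect callables here are hypothetical stand-ins.

```python
# Sketch of webhook event routing. Signature verification is assumed to
# happen before this function is called; grant_credits and
# cancel_subscription are hypothetical app-side callbacks.
def handle_stripe_event(event: dict, grant_credits, cancel_subscription) -> str:
    """Route a verified Stripe event dict to the matching side effect."""
    etype = event.get("type", "")
    obj = event.get("data", {}).get("object", {})
    if etype == "checkout.session.completed":
        grant_credits(obj.get("client_reference_id"))  # new purchase
        return "credits_granted"
    if etype == "customer.subscription.deleted":
        cancel_subscription(obj.get("customer"))       # downgrade the account
        return "subscription_cancelled"
    return "ignored"  # Stripe sends many event types; unhandled ones are fine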
Deployment was the same story. Nginx config, gunicorn setup, systemd service, SSL certificate — all generated and executed through the chat. Having server access from the AI workspace meant I didn't need to open a terminal separately.
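For readers who haven't wired this stack up before, the systemd piece of that list is a short unit file that keeps gunicorn running and restarts it on failure. This is an illustrative fragment — paths, user, worker count, and the app module are placeholders, not the author's actual config.

```ini
# Illustrative systemd unit for a gunicorn-served Flask app.
# All paths and names below are placeholders.
[Unit]
Description=Flask MVP (gunicorn)
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/app
ExecStart=/srv/app/venv/bin/gunicorn --workers 3 --bind 127.0.0.1:8000 app:app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Nginx then sits in front as a reverse proxy to 127.0.0.1:8000 and terminates SSL.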
Time AI saved: 6-8 hours.
Day 5: Landing page and polish
The landing page was AI-assisted but heavily edited. AI generated the structure and copy; I rewrote most of it to sound like a human. Added testimonials (from beta testers), a demo video placeholder, and a pricing section.
Spent the rest of the day on polish: loading states, error messages, mobile responsiveness, and fixing the dozen small bugs that accumulated during the rush.
What worked
Boilerplate generation — auth, CRUD routes, database models, standard UI components. AI handles these perfectly and saves massive time.
Debugging — pasting errors and getting instant diagnoses. Saved hours of Stack Overflow browsing.
Config generation — nginx, gunicorn, systemd, SSL. Never getting these wrong on the first try is a luxury.
What didn't work
Complex business logic — the core analysis pipeline required human thinking. AI helped implement but couldn't design it.
Design — AI-generated UI is functional but generic. Anything that needs to look distinctive requires human design sense.
Testing — AI can generate test cases, but knowing what to test requires understanding the product deeply. The AI would generate happy-path tests and miss the edge cases that actually break things.
Would I do it again?
Absolutely. Five days for an MVP with billing, auth, and a live deployment is genuinely new. The result isn't production-perfect, but it doesn't need to be — it's an MVP. The goal was to validate the idea, not build the final product.