Prompts that
actually ship.
39 battle-tested prompts, organized by stage, ready to copy.
Got a prompt
that ships?
Used a prompt that actually worked? Submit it. If it's battle-tested, it gets in.
Before starting any AI-assisted project, or when your AGENTS.md is missing, incomplete, or out of date with your actual stack.
When an AI session has gone off-track, made unexpected changes, or the context has become stale after many messages — use this to stop the bleeding and re-anchor before continuing.
Once per project, after writing your PRD and before your first build session. Sets up the memory-bank/ folder that every AI session will read from — replaces copy-pasting context into every chat.
When you need to explicitly protect critical files from being modified by the AI — use this at the start of any session where those files are near the blast radius of your task.
At the start of every new AI coding session, before any code is written. Ensures the AI reads current context, understands the project state, and doesn't operate on stale assumptions.
Before writing a single line of code, when you need to validate your differentiation and understand the real gaps in the market you're entering.
When you want to verify that real demand exists for your product idea before investing weeks of build time — turn your gut feeling into a structured evidence checklist.
Before starting any build, to define in advance the specific, measurable conditions under which you will stop working on this project and move on.
When your problem statement is vague, accidentally includes your solution, describes a demographic instead of a real person, or could apply to dozens of different products.
Before implementing any feature, to define exactly what done looks like in terms an AI coding agent can verify — and that you can test in under 5 minutes.
When you want to implement a specific feature and need the AI to research your codebase, pull relevant docs, and produce a fully specified implementation blueprint before writing a single line of code. The most reliable pattern for complex features.
When your feature list has grown beyond 5 items and you need to cut ruthlessly before handing it to an AI coding agent — more features means more drift, more bugs, and longer time to ship.
When you need to write a PRD before starting to build with AI — this is the document your AGENTS.md will reference and your AI coding agent will read to understand scope.
After writing your MVP features, to explicitly define what you are NOT building — this list goes directly into your AGENTS.md and PRD to prevent AI scope creep during the build.
Before implementing any backend endpoint or frontend data fetch — define the contract first so frontend and backend can be built in parallel without integration surprises.
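A contract like this can be as small as a shared TypeScript file. The sketch below uses a hypothetical `POST /api/notes` endpoint — the field names and limits are illustrative, not prescriptive:

```typescript
// Hypothetical contract for a POST /api/notes endpoint, written before
// either the backend or the frontend exists. Shapes are illustrative.

interface CreateNoteRequest {
  title: string; // required, 1-120 chars
  body: string;  // markdown, may be empty
}

interface CreateNoteResponse {
  id: string;        // server-generated
  title: string;
  body: string;
  createdAt: string; // ISO 8601 timestamp
}

// A runtime check both sides can share, so integration mismatches
// surface as failed validations instead of silent undefined fields.
function isCreateNoteRequest(value: unknown): value is CreateNoteRequest {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.title === "string" &&
    v.title.length >= 1 &&
    v.title.length <= 120 &&
    typeof v.body === "string"
  );
}
```

With the contract in one file, the backend validates against it, the frontend types against it, and neither has to wait for the other.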
Before running any migrations or writing database queries — catch schema problems now when they cost nothing, not after you have production data.
Before starting a project or after adding new dependencies — lock your versions and catch problems before a surprise breaking change kills your build mid-sprint.
Before writing any code, to lock all technology decisions in writing so your AI coding agent never has to guess or make stack decisions independently.
Before starting any build loop — break your PRD into one-task-per-session chunks that each have a clear done condition and a git commit point.
After completing and verifying a task — before committing to git, generate a message that future-you (or a collaborator) will actually understand.
Before accepting any AI-generated code change — run this review to catch scope creep, unintended modifications, and anything that slipped in beyond the stated task.
Before executing any non-trivial task in Claude Code or Cursor. Forces the AI to plan before touching code — the single habit that prevents the most regressions.
When starting each individual build loop task — use this frame to give the AI agent a focused, unambiguous task that leaves no room for interpretation or scope expansion.
When the AI is pushing back on your approach, suggesting alternatives you didn't ask for, warning you away from your own decision, or not following your instructions after you've already given them.
When AI responses are inconsistent, too long, padded with caveats, or not in the format you need — append these constraints to any prompt to lock the output shape.
When a task is too complex for one prompt and needs to be broken into sequential steps where each output feeds the next — design the chain before running any step.
When you have a fuzzy idea and need to turn it into a prompt precise enough for an AI coding agent to execute without guessing — before writing any task prompts.
After implementing a primary user flow, to write the Playwright test that serves as your shipping gate — if this test breaks, you don't ship.
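A shipping-gate test can be a single Playwright spec. This is a minimal sketch assuming a sign-up flow at `/signup` on localhost — the URL, labels, and headings are placeholders for your actual primary flow:

```typescript
// A minimal shipping-gate spec. Selectors and routes below are
// assumptions — swap in the real labels from your primary user flow.
import { test, expect } from "@playwright/test";

test("primary flow: user can sign up and reach the dashboard", async ({ page }) => {
  await page.goto("http://localhost:3000/signup");

  await page.getByLabel("Email").fill("test@example.com");
  await page.getByLabel("Password").fill("a-strong-password");
  await page.getByRole("button", { name: "Create account" }).click();

  // The gate: if either assertion fails, the release does not ship.
  await expect(page).toHaveURL(/\/dashboard/);
  await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();
});
```

Run it in CI on every branch; the point is that a broken primary flow blocks the merge automatically, not by memory.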
Before merging any feature branch or deploying to production — run every item on this gate and do not ship until everything is PASS.
When you want a fresh AI perspective on code written by another AI session — bring in the second opinion before shipping anything significant.
Before any commit that involves API integrations, environment configuration, auth logic, or any file that has touched credentials — make this a non-negotiable step.
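Dedicated scanners like gitleaks or trufflehog are the real answer here; the sketch below only illustrates the idea — a small set of patterns run against a staged diff, with the caveat that these three patterns are a sample, not a complete list:

```typescript
// Illustrative pre-commit secret scan. A real setup should use a
// dedicated tool; these patterns are a small, incomplete sample.

const SECRET_PATTERNS: [string, RegExp][] = [
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["Private key block", /-----BEGIN (RSA |EC )?PRIVATE KEY-----/],
  ["Hardcoded key assignment", /(api[_-]?key|secret|token)\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]/i],
];

// Scan text (e.g. the output of `git diff --cached`) and return
// the names of any secret patterns that matched.
function findSecrets(diff: string): string[] {
  return SECRET_PATTERNS
    .filter(([, pattern]) => pattern.test(diff))
    .map(([name]) => name);
}
```

Wire it (or a real scanner) into a pre-commit hook so a match fails the commit — the check only works if it cannot be skipped on a busy day.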
Immediately after launch, when you need to find your first real users without paid advertising — the first 100 users are about learning, not scaling.
24 hours before any public launch or significant user-facing release — run through every item and do not flip the switch until everything is confirmed.
Before writing any landing page copy, social posts, or product descriptions — nail the positioning first, then let every piece of copy flow from it.
After collecting user feedback from surveys, interviews, support emails, or social posts — extract the signal from the noise before deciding what to build next.
When monthly infrastructure costs are growing faster than revenue, or before scaling to significantly more users — find the waste and the single points of failure before they find you.
Before updating any npm package — especially major version bumps — to understand what you're actually changing and whether your tests would catch a regression.
Before going to production, to prepare for the most likely failure scenarios — write the runbook when you're calm, not at 2am when the site is down.
When pages are slow, API responses are lagging, users are reporting performance issues, or you're about to onboard significantly more users and want to find problems before they find you.