AI Development Prompt Library

Copy-ready prompts and workflows for agentic coding. Choose a template, fill the variables, and keep delivery consistent.

Jump back in

Emergency fixes

Stabilize production incidents and hotfixes.

Feature implementation

Ship net-new features with structured prompts.

Quick start

Spin up greenfield agentic projects with confidence.

Project Context & Persistent Memory β€” Guided Starter Kit

Model: ANY Task: SETUP
~1788 tokens

Summary

Supply LLMs with a stable, concise, and up-to-date representation of your project so prompts can build on a persistent context (dependencies, file layout, rules, goals).

Refined Core

Build and maintain small, versioned project artifacts that the model reads as persistent context β€” prompts then become lightweight commands because the model already knows your constraints, style, and history.

Advice Highlights

  • Store decision rationale, architecture notes, and pattern files as separate versioned documents so the model can reference why prior choices were made.
  • Keep a short set of persistent files per project (WHO_I_AM.md, WHAT_IM_DOING.md, CONTEXT.md, STYLE_GUIDE.md, NEXT_SESSION.md) and have the model read them before any coding request.
  • When editing code, include explicit file-path context and small related file groups rather than entire repo dumps.
  • Keep a change/error history (memlog) and require the agent to check and update it before making any edits (see the assembly sketch after this list).
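
The bullets above name the persistent files but leave the assembly step implicit. Below is a minimal sketch, assuming plain Markdown files at the repository root and a small Python helper that stitches them into a prompt preamble; the memlog file name and the wrapper wording are illustrative assumptions, not part of the template.

```python
# Minimal sketch: assemble a prompt preamble from persistent project files.
# The memlog file name and the wrapper wording are illustrative; adapt them
# to whatever artifacts your project actually keeps.
from pathlib import Path

PERSISTENT_FILES = [
    "WHO_I_AM.md",
    "WHAT_IM_DOING.md",
    "CONTEXT.md",
    "STYLE_GUIDE.md",
    "NEXT_SESSION.md",
]
MEMLOG = "memlog.md"  # change/error history the agent must check before editing


def build_preamble(repo_root: str = ".") -> str:
    """Concatenate the persistent context files into one preamble block."""
    root = Path(repo_root)
    parts = []
    for name in PERSISTENT_FILES + [MEMLOG]:
        path = root / name
        if path.exists():
            parts.append(f"--- {name} ---\n{path.read_text().strip()}")
        else:
            parts.append(f"--- {name} --- (missing; ask for it instead of assuming)")
    return "\n\n".join(parts)


def wrap_request(request: str, repo_root: str = ".") -> str:
    """Prepend the persistent context so the coding request itself stays short."""
    return (
        build_preamble(repo_root)
        + "\n\nRead the context above, check the memlog for prior decisions, "
        + "then handle this request:\n"
        + request
    )


if __name__ == "__main__":
    print(wrap_request("Add pagination to the /templates endpoint."))
```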

Fill Variables

Template Preview

project-memory · context-files · llm-workflow · guided-template

Task Decomposition & Session Plan (auto-split + prompt generator)

Model: ANY Task: PLAN
~245 tokens

Summary

Divide work into small, focused tasks and map each to a single chat/session or agent run to avoid context pollution and keep outputs targeted.

Refined Core

Split work into single-responsibility, reviewable units and manage each unit in a fresh, timeboxed session with an explicit plan and human gate before proceeding.

Advice Highlights

  • For long-running or rate-limited workflows, break work across sessions and mark clear stopping points to resume safely later.
  • Keep small, reviewable diffs: ask the model to propose changes step-by-step and review each patch (git add -p style) before applying.
  • For large files/complex refactors (>~300 lines), force a mandatory planning phase: list functions/sections to change, dependencies, and number/order of edits.
  • Break complex changes into sequential sub-tasks (e.g., extract text first, then summarize), and only ask for the next step after the previous one is validated (see the session-prompt sketch after this list).
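
A minimal sketch of the auto-split idea, assuming sub-tasks are listed by hand as small dataclasses and each one becomes a single-session prompt with an explicit stop point and human gate; the SubTask fields and the prompt wording are illustrative, not the template's exact text.

```python
# Minimal sketch: turn single-responsibility sub-tasks into one prompt per
# session, each with an explicit stop point and human gate.
# The example sub-tasks and prompt wording are illustrative.
from dataclasses import dataclass


@dataclass
class SubTask:
    title: str
    files: list[str]   # small related file group to include as context
    done_when: str     # validation the human checks before continuing


def session_prompt(task: SubTask, step: int, total: int) -> str:
    return (
        f"Session {step}/{total}: {task.title}\n"
        f"Only touch these files: {', '.join(task.files)}\n"
        "Propose the change as a small, reviewable diff and wait for approval "
        "before applying it.\n"
        f"Stop after this step. Done when: {task.done_when}"
    )


plan = [
    SubTask("Extract the text-parsing logic into its own module",
            ["parser.py"], "existing tests still pass"),
    SubTask("Add summarization on top of the extracted parser",
            ["parser.py", "summarize.py"], "a unit test covers one sample document"),
]

for i, task in enumerate(plan, start=1):
    print(session_prompt(task, i, len(plan)))
    print("-" * 40)
```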

Fill Variables

Template Preview

task-decomposition · session-management · planning

progress.md / implementation.md Checkpoint (handoff + resume prompt)

Model: ANY Task: DOCUMENTATION
~157 tokens

Summary

Capture end-of-session state in a small progress.md / implementation.md checkpoint so work can be handed off or resumed later without re-establishing context from scratch.

Refined Core

Have the model write down what was completed, what is in flight, open decisions, and the exact next step before stopping, then open the next session by feeding that checkpoint back as the resume prompt.
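
A minimal sketch of one way such a checkpoint could be written and replayed, assuming a single progress.md with Done / In flight / Next steps sections; the headings and helper names are illustrative.

```python
# Minimal sketch: write a progress.md checkpoint at the end of a session and
# turn it into a resume prompt at the start of the next one.
# The section headings are illustrative; keep whatever structure you already use.
from pathlib import Path

CHECKPOINT = Path("progress.md")


def write_checkpoint(done: list[str], in_flight: list[str], next_steps: list[str]) -> None:
    """Record end-of-session state under fixed headings."""
    CHECKPOINT.write_text(
        "# Progress checkpoint\n\n"
        "## Done\n" + "\n".join(f"- {item}" for item in done) + "\n\n"
        "## In flight\n" + "\n".join(f"- {item}" for item in in_flight) + "\n\n"
        "## Next steps\n" + "\n".join(f"- {item}" for item in next_steps) + "\n"
    )


def resume_prompt() -> str:
    """Build the prompt that starts the next session from the checkpoint."""
    return (
        "Read this checkpoint from the previous session, then continue from the "
        "first item under 'Next steps'. Do not redo items listed under 'Done'.\n\n"
        + CHECKPOINT.read_text()
    )


write_checkpoint(
    done=["Schema migration written and applied"],
    in_flight=["Export handler half-implemented in handlers/export.py"],
    next_steps=["Finish the export handler", "Add an integration test"],
)
print(resume_prompt())
```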

Fill Variables

Template Preview

checkpointing · documentation · handoff

Reasoning Scaffold: Plan-then-Code with Step-1 Gating

Model: ANY Task: IMPLEMENT
~490 tokens

Summary

Ask the model to reason step-by-step before producing final code or conclusions to surface the plan, edge cases, and potential pitfalls.

Refined Core

Demand a short, pre-generation reasoning scaffold (plan + assumptions + confidence/self-critique) so you can validate the approach before any code is produced.

Advice Highlights

  • Require the model to list the assumptions it made and the references/files it used; if something is missing, have it ask for that input instead of guessing.
  • Require an explanation of generated code using a structured prompt: 1) purpose, 2) step-by-step how it works, 3) alternatives considered and why this one was selected.
  • Ask the model to produce a short high-level plan before coding (three to five bullet steps), then request code for step 1 only (see the gated-run sketch after this list).
  • Include a short self-critique or confidence marker describing risk areas and what needs manual review before the changes are applied.
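
A minimal sketch of the gating flow, assuming a generic send_to_model() placeholder for your LLM client and a human approval step between the plan and step 1; the scaffold wording is illustrative rather than the library's exact template text.

```python
# Minimal sketch: a two-stage "plan first, then code step 1 only" exchange.
# send_to_model() is a placeholder for whatever LLM client you use; the
# scaffold wording is illustrative, not the library's exact template text.

PLAN_PROMPT = """Before writing any code:
1. Give a high-level plan of three to five bullet steps.
2. List your assumptions and which files/references you used; if something is
   missing, ask for it instead of guessing.
3. Add a short self-critique: risk areas, confidence, and what needs manual review.
Do NOT write code yet."""

STEP1_PROMPT = "The plan is approved. Implement step 1 only, then stop."


def send_to_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM client of choice")


def gated_run(task: str) -> str:
    plan = send_to_model(f"{task}\n\n{PLAN_PROMPT}")
    print(plan)  # human gate: review the plan before any code exists
    if input("Approve plan? [y/N] ").strip().lower() != "y":
        return "stopped at plan review"
    return send_to_model(f"{task}\n\nApproved plan:\n{plan}\n\n{STEP1_PROMPT}")
```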

Fill Variables

Template Preview

chain-of-thought · plan-then-code · gated-output · reasoning-scaffold

Generate Feature Specification

Model: GPT-4 Task: PLAN
~100 tokens

Fill Variables

Template Preview

planning · specification · architecture

Generate Test Suite

Model: CLAUDE-SONNET Task: TEST
~96 tokens

Fill Variables

Template Preview

testing · tdd · quality

Implement Code from Tests

Model: CLAUDE-SONNET Task: IMPLEMENT
~76 tokens

Fill Variables

Template Preview

implementation · tdd · coding

Review and Refactor Code

Model: CLAUDE-OPUS Task: REVIEW
~112 tokens

Fill Variables

Template Preview

review · refactoring · quality

Generate Documentation

Model: GPT-4 Task: DOCUMENTATION
~88 tokens

Fill Variables

Template Preview

documentation · api · guides

Reproduce and Isolate Bug

Model: ANY Task: DEBUG
~91 tokens

Fill Variables

Template Preview

debugging · troubleshooting · bugs

Model guidance

Claude Opus

Complex reasoning, architecture reviews, security drills.

Claude Sonnet

Implementation work, debugging, and fast iteration.

GPT-4

Discovery, planning, documentation, complex analysis.

Gemini

Large codebase exploration and data-heavy workflows.
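
If you drive these templates programmatically, the guidance above can be restated as a small routing table; a minimal sketch, assuming the task labels used on the cards plus a hypothetical EXPLORE label for Gemini-style codebase exploration, with placeholder model identifiers.

```python
# Minimal sketch: the model guidance restated as a task-routing table.
# Model identifiers are placeholders; substitute whatever your tooling expects.
# "EXPLORE" is a hypothetical label not used on the cards above.
MODEL_FOR_TASK = {
    "REVIEW": "claude-opus",       # complex reasoning, architecture, security
    "PLAN": "gpt-4",               # discovery, planning, complex analysis
    "DOCUMENTATION": "gpt-4",
    "IMPLEMENT": "claude-sonnet",  # implementation work and fast iteration
    "DEBUG": "claude-sonnet",
    "TEST": "claude-sonnet",
    "EXPLORE": "gemini",           # large codebases and data-heavy workflows
}


def pick_model(task: str, default: str = "claude-sonnet") -> str:
    """Return the suggested model for a task label, falling back to a default."""
    return MODEL_FOR_TASK.get(task.upper(), default)
```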