
The Claude Code Framework That Changed How I Ship Code (Research → Plan → Implement)

Stop letting Claude Code guess. The Research → Plan → Implement framework turns AI from a code generator into a senior engineering partner — with parallel agents, persistent context, and real results.


I Was Using Claude Code Wrong for Six Months

I thought I was productive. Open terminal, fire up Claude, describe what I wanted, watch it code.

Then I looked at my git history.

47% of my commits were fixes to things Claude had just written. Not edge cases. Not tricky bugs. Basic stuff — wrong patterns, ignored conventions, duplicated logic that already existed three files away.

The worst part? I was the problem. I was treating a senior-engineer-level AI like a code generator. Skip research. Skip planning. Just "build me this thing."

Sound familiar?

After six months of this, I built a framework that fixed it. Not a vague methodology — a set of Claude Code commands that enforce the right workflow every time. My fix-it commits dropped to under 10%. Features that took days started shipping in hours.

This is the Research → Plan → Implement framework — and it's the single biggest productivity unlock I've found with Claude Code.

What Is the Claude Code Framework?

The Claude Code framework is a structured workflow that splits AI-assisted development into three core phases, plus a validation step:

Research → Plan → Implement → Validate

Instead of asking Claude to "build feature X" and hoping for the best, you:

  1. Research the codebase so Claude understands what exists
  2. Plan the implementation so Claude knows exactly what to build
  3. Implement phase by phase with verification at each step
  4. Validate that the result matches the plan

Each phase is a custom Claude Code command. They coordinate parallel sub-agents, persist findings across sessions, and build organizational knowledge over time.

The framework is open source on GitHub and installs with a single command.

Why Most Developers Use Claude Code Ineffectively

Here's what typically happens. A developer opens Claude Code and types something like:

"Add OAuth 2.0 support with Google and GitHub providers"

Claude starts coding. It picks patterns you don't use. It structures auth differently from your existing system. It misses your Redis session store. It creates a new user model when you already have one.

Now you're debugging AI-generated code instead of shipping features.

The research prevents Claude from making ignorant changes. The plan prevents it from making wrong changes.

Without research, Claude doesn't know your codebase patterns. Without a plan, it doesn't know your architectural preferences. It's coding blind — and you're cleaning up after it.

The Complete Claude Code Workflow

Phase 1: Research Your Codebase (/1_research_codebase)

Before any code is written, Claude needs to understand what already exists.

/1_research_codebase
> How does the authentication system handle session management and token refresh?

Here's what happens behind the scenes — and this is where it gets interesting. The framework doesn't just read files. It spawns three parallel sub-agents, each specialized for a different type of discovery:

  • Codebase Locator — finds relevant files without reading them (fast mapping)
  • Codebase Analyzer — reads and understands implementation details
  • Pattern Finder — discovers conventions and patterns to follow

These agents work simultaneously, which means research that would take 15 minutes of manual exploration finishes in under a minute.

Example output:

## Authentication System Analysis
 
### Token Management
- JWT tokens stored in httpOnly cookies (auth-utils.ts:45)
- Refresh tokens in secure Redis store (session-manager.ts:112)
- 15-minute access token, 7-day refresh token
 
### Session Handling Pattern
The current implementation at auth-middleware.ts:78 rotates tokens on
every refresh; the security measures around it are listed below.
 
### Security Considerations Found
- CSRF protection via double-submit cookies
- Rate limiting on refresh endpoint (5 requests/minute)
- Token rotation on every refresh

Notice the specificity. File paths, line numbers, actual patterns. This is the context that makes the difference between Claude generating code that fits your project and code that fights it.

Phase 2: Create an Implementation Plan (/2_create_plan)

With research in hand, you create a detailed, phased plan:

/2_create_plan
> Based on the research, add OAuth 2.0 support with Google and GitHub providers

This isn't Claude dumping a wall of code. It's an interactive process:

  1. Claude analyzes requirements against the research
  2. Proposes a phased approach
  3. You iterate until the plan is right
  4. Each phase has specific success criteria

Here's what a real plan looks like:

# OAuth 2.0 Integration Plan
 
## Overview
Add OAuth 2.0 with Google and GitHub providers while maintaining
existing JWT session management.
 
## Approach
Extend current auth system rather than replace. Use Passport.js
for OAuth strategies, integrate with existing JWT/Redis sessions.
 
## Phase 1: Database & Models
File: prisma/schema.prisma
Changes: Add OAuthProvider model linked to existing User model
 
### Success Criteria:
- [ ] Migration runs: npm run db:migrate
- [ ] Schema validates: npm run db:validate
- [ ] Existing auth tests still pass
 
## Phase 2: OAuth Strategy Implementation
File: src/auth/strategies/oauth.ts
Changes: Google and GitHub Passport strategies
 
### Success Criteria:
- [ ] Google OAuth flow completes
- [ ] GitHub OAuth flow completes
- [ ] Sessions created with existing JWT pattern

The key detail: each phase is independently verifiable. If Phase 1 breaks something, you catch it before Phase 2. No more "I implemented the whole feature and now nothing works."

Phase 3: Implement the Plan (/4_implement_plan)

Now Claude executes — but systematically:

/4_implement_plan
> @thoughts/shared/plans/001_oauth_integration.md

The framework reads the plan, creates a todo list, and implements phase by phase. After each phase, it runs the verification steps. If something fails, it stops and tells you instead of plowing ahead and creating a mess.

Starting Phase 1: Database & Models
 
✓ Updated prisma/schema.prisma
✓ Created migration
✓ Running migration... Success!
✓ Tests passing
 
Phase 1 Complete. Starting Phase 2...
 
✓ Installed passport dependencies
✓ Created OAuth strategies
⚠️ Issue: Google Client ID not in .env
 
Blocker Found:
Need GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET.
Once added, I'll continue with Phase 2.

This is how professionals ship. Phase by phase. Verified at each step. Adapting when reality doesn't match the plan.
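The stop-on-failure behavior is worth internalizing. Here's a minimal sketch of the idea (the types and names are illustrative, not the framework's actual internals):

```typescript
// A phase pairs an implementation step with a verification step
// taken from the plan's success criteria.
interface Phase {
  name: string;
  implement: () => void;
  verify: () => boolean;
}

// Run phases in order; stop at the first failed verification
// instead of plowing ahead. Returns the completed phase names
// and the name of the blocking phase, if any.
function runPhases(phases: Phase[]): { completed: string[]; blockedAt?: string } {
  const completed: string[] = [];
  for (const phase of phases) {
    phase.implement();
    if (!phase.verify()) {
      return { completed, blockedAt: phase.name };
    }
    completed.push(phase.name);
  }
  return { completed };
}
```

In the OAuth example above, a missing GOOGLE_CLIENT_ID fails Phase 2's verification, so Phase 3 never runs and you get a clear blocker instead of a half-broken feature.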

Phase 4: Validate Everything (/3_validate_plan)

Before you commit, validate the full implementation:

/3_validate_plan
> Validate the OAuth implementation

This runs a comprehensive check — build passes, tests pass, type checking clean, code matches plan, security considerations addressed. It produces a validation report you can review in 30 seconds.

How to Set Up CLAUDE.md for the Framework

Your CLAUDE.md file is the foundation. It's the first thing Claude reads at the start of every session, and it determines whether Claude understands your project or flails around guessing.

Here's what a good CLAUDE.md includes for the framework:

# CLAUDE.md
 
## Commands
- npm run dev (port 3000)
- npm run build
- npm test
- npm run lint
 
## Architecture
- Next.js 14 App Router
- Prisma ORM with PostgreSQL
- Redis for session management
- JWT authentication with httpOnly cookies
 
## Conventions
- All API routes in src/app/api/
- Shared types in src/types/
- Database queries through Prisma, never raw SQL
- Error handling: throw typed errors, catch at boundary

The rule of thumb: if removing a line would cause Claude to make a mistake on your codebase, keep it. Everything else is noise.

Don't paste code snippets — they go stale fast. Instead, point to files: "Authentication pattern: see src/auth/middleware.ts:78". Claude will read the current version every time.
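The "throw typed errors, catch at boundary" convention above can be sketched like this (the class and function names are illustrative, not from the framework):

```typescript
// A typed error carries a machine-readable code and HTTP status,
// so the boundary can map it to a response without string matching.
class AppError extends Error {
  constructor(
    message: string,
    public readonly code: string,
    public readonly status: number
  ) {
    super(message);
    this.name = "AppError";
  }
}

class NotFoundError extends AppError {
  constructor(resource: string) {
    super(`${resource} not found`, "NOT_FOUND", 404);
  }
}

// The boundary: one catch site that turns typed errors into
// structured responses and lets unknown errors surface as 500s.
function toResponse(err: unknown): { status: number; body: { code: string; message: string } } {
  if (err instanceof AppError) {
    return { status: err.status, body: { code: err.code, message: err.message } };
  }
  return { status: 500, body: { code: "INTERNAL", message: "Unexpected error" } };
}
```

With a convention like this in CLAUDE.md, Claude knows to subclass the typed error rather than sprinkle try/catch blocks through every route handler.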

Claude Code Best Practices with the Framework

After using this framework across dozens of projects, these are the patterns that consistently produce the best results:

Always Start with Research

Even when you think you know the codebase:

/1_research_codebase
> Current error handling patterns in API routes

I've been surprised more times than I can count. Code evolves. Patterns drift. What you remember from three months ago might not be what's there today.

Be Ruthlessly Specific in Plans

Vague plans produce vague implementations.

This leads to trouble:

"Add caching to improve performance"

This ships clean:

"Add Redis caching layer for product API with 5-minute TTL, invalidation on update, and fallback to database on cache miss"

Use Session Management for Multi-Day Features

The framework includes /5_save_progress and /6_resume_work commands. They create comprehensive checkpoints so you can context-switch without losing work:

# Day 1: Research and plan
/1_research_codebase
/2_create_plan
/5_save_progress
 
# Day 2: Implement core
/6_resume_work
/4_implement_plan (Phase 1-2)
/5_save_progress
 
# Day 3: Finish and ship
/6_resume_work
/4_implement_plan (Phase 3)
/3_validate_plan

No more re-explaining context at the start of every session. The session summary captures where you left off, what's done, and what's next.

Leverage Parallel Agents

When you ask the framework to research a complex question, it spawns multiple agents that work simultaneously:

/1_research_codebase
> How do authentication, authorization, and rate limiting work together?

Three agents research three aspects in parallel. 3x faster than sequential exploration. The results are synthesized into a single, coherent document.
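The speedup comes from plain fan-out/fan-in concurrency: total wall time is the slowest agent, not the sum of all three. A sketch of the idea (the agent function here is a stand-in, not the framework's real agents):

```typescript
type Finding = { aspect: string; notes: string };

// Stand-in for one research agent exploring a single aspect.
async function researchAspect(aspect: string): Promise<Finding> {
  return { aspect, notes: `findings for ${aspect}` };
}

// Fan out one agent per aspect, then fan in: synthesize the
// findings into a single coherent document.
async function researchInParallel(aspects: string[]): Promise<string> {
  const findings = await Promise.all(aspects.map(researchAspect));
  return findings.map((f) => `## ${f.aspect}\n${f.notes}`).join("\n\n");
}
```

Whether it's Promise.all or parallel sub-agent processes, the shape is the same: independent questions run concurrently, and only the synthesis step is sequential.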

The Framework File Structure

After installation, your project gets this structure:

your-project/
├── .claude/
│   ├── agents/             # Specialized sub-agents
│   │   ├── codebase-locator.md
│   │   ├── codebase-analyzer.md
│   │   └── codebase-pattern-finder.md
│   └── commands/           # Workflow commands
│       ├── 1_research_codebase.md
│       ├── 2_create_plan.md
│       ├── 3_validate_plan.md
│       ├── 4_implement_plan.md
│       ├── 5_save_progress.md
│       ├── 6_resume_work.md
│       └── 7_research_cloud.md
├── thoughts/               # Persistent knowledge
│   └── shared/
│       ├── research/       # Research findings
│       ├── plans/          # Implementation plans
│       ├── sessions/       # Work session checkpoints
│       └── cloud/          # Cloud infrastructure analyses
└── CLAUDE.md

Every research finding and plan becomes organizational memory. After 10 research docs, Claude understands your architecture. After 20 plans, it knows your implementation patterns. The framework gets smarter the more you use it.

Cloud Infrastructure Analysis

The framework includes a bonus — /7_research_cloud for analyzing cloud deployments:

/7_research_cloud
> Azure
> all

This is strictly read-only. It analyzes your cloud resources, security posture, costs, and configurations without making changes. It supports Azure, AWS, and Google Cloud.

It's particularly useful for cost optimization. On one project, it identified $1,250/month in savings from overprovisioned VMs and orphaned resources that nobody had noticed.

Common Workflow Patterns

Feature Addition

/1_research_codebase → /2_create_plan → /4_implement_plan → /3_validate_plan

Bug Investigation

/1_research_codebase (understand the bug) → /4_implement_plan (fix) → /3_validate_plan

Refactoring

/1_research_codebase (current state) → /2_create_plan (refactor strategy) → /4_implement_plan

Performance Optimization

/1_research_codebase (bottlenecks) → /2_create_plan (optimization) → /4_implement_plan → /3_validate_plan

Security Audit

/7_research_cloud (infrastructure) → /1_research_codebase (app security) → /2_create_plan (remediation)

Getting Started in 5 Minutes

Step 1: Install the Framework

git clone https://github.com/teambrilliant/claude-research-plan-implement.git
cd claude-research-plan-implement
./setup.sh /path/to/your-project

The setup script handles everything — checks for existing files, asks before overwriting, creates directories, preserves your customizations.

Step 2: Try Your First Research

/1_research_codebase
> How does the current search implementation work?

Watch the parallel agents work. Look at the output. That level of codebase understanding in under a minute — that's what you've been missing.

Step 3: Plan Something

/2_create_plan
> Add full-text search with Elasticsearch

See how the plan references patterns from your research. Specific files. Specific conventions. Phased implementation with verification at each step.

That's it. You're using the framework.

Beyond the Basics: What's Next

The Research → Plan → Implement framework is the foundation. But it's part of a larger ecosystem of Claude Code plugins and skills designed for professional development teams.

If you're working with a team, dev-skills and tap-skills take this framework further — adding product discovery, blast radius analysis, and system health monitoring built on principles from A Philosophy of Software Design.

For breaking complex systems into building blocks that AI can implement well, check out Product Primitives.

If you're running multiple Claude Code sessions in parallel, Claude Peek puts session monitoring and permission handling into your Mac's notch — no more hunting through terminal tabs.

Frequently Asked Questions

How is this different from just using Claude Code's plan mode?

Plan mode is a single-session feature — you switch modes, Claude plans, you switch back. The RPI framework persists context across sessions, spawns parallel research agents, and builds organizational knowledge that compounds over time. Plan mode is a feature. This is a workflow.

Do I need to use all the commands?

No. Start with /1_research_codebase alone — it's the highest-value single command. Add planning when you're comfortable. The commands work independently, but they're most powerful together.

How does this work with Claude Code plugins and skills?

The framework uses Claude Code's custom commands system (.claude/commands/). It's compatible with plugins, skills, and hooks. In fact, the parallel sub-agents are defined in .claude/agents/, which is the standard Claude Code agent format.

Does this work with other AI coding tools like Cursor or Copilot?

The framework is built specifically for Claude Code's command system. The concepts (research before coding, plan before implementing) apply universally, but the commands and sub-agents require Claude Code.

How long does the research phase take?

Typically 30-60 seconds for a focused research question. The parallel agents mean it's roughly 3x faster than doing it manually. Complex research across multiple systems might take 2-3 minutes.

Can I customize the commands?

Absolutely. The commands are markdown files in .claude/commands/. Edit them to match your team's conventions, add project-specific instructions, or create new commands entirely.


Ready to stop debugging AI-generated code and start shipping features? Install the framework and see the difference in your first session.

Questions? I'm at alex@teambrilliant.ai — or connect with me on LinkedIn.