The "Creator’s Stack": The Definitive Guide to How Boris (Anthropic) Actually Uses Claude Code

January 14, 2026 · By Lakshya Soni · 10 min read
The "Creator’s Stack": The Definitive Guide to How Boris (Anthropic) Actually Uses Claude Code

When a tool like Claude Code is released, the developer community immediately floods the internet with "hacks," complex agent chains, and experimental shortcuts. Everyone is looking for the "God Mode" prompt that automates their entire job.

But recently, we received something far more valuable than a hack: a detailed breakdown of the actual daily workflow of Boris, the creator of Claude Code at Anthropic.

His setup is not flashy. It isn't filled with dangerous permissions or chaotic agent swarms. It is surprisingly "vanilla." But in the world of high-stakes software engineering, "vanilla" is code for robust, scalable, and safe.

At EchoPulse, we analyzed his workflow and found that what looks simple on the surface is actually a highly disciplined system of constraints, verification loops, and context management. If you are trying to integrate Claude Code into a serious production environment, stop looking for magic tricks and start adopting the "Creator’s Stack."

Here is the definitive, deep-dive breakdown of how the tool is meant to be used at the highest level.

The Core Philosophy: Verification-Led Development

The single biggest mistake developers make with Claude Code is treating it like a magic wand. They give it a vague, high-level task—"Build me a snake game" or "Refactor this API"—and then get frustrated when the snake runs into a wall or the API throws a 500 error.

Boris explains this failure mode with a simple but profound analogy: Humans don't work that way, so why should AI?

If you asked a junior developer to ship a complex feature without testing it, they would fail too. The core of Boris’s workflow is Verification-Led Development. This goes beyond simply "checking the work" after it is done. It involves architecting a loop where Claude cannot proceed without proving it is right.

The "Test-First" Protocol If you are asking Claude to write code (e.g., Python or TypeScript), you don't just ask for the function. You explicitly instruct it to write the tests for that function first.

  • The Workflow: "Claude, I need a function to process these CSV files. First, write a PyTest suite that covers the edge cases of empty files and malformed headers. Then, implement the function to pass those tests."
  • The Result: This provides the model with a "Ground Truth." When Claude runs the test and it fails, it receives an error message. That error message is not a failure; it is data. It is the feedback loop that allows the model to self-correct without your intervention.
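
To make the loop concrete, here is a minimal sketch of the test suite that prompt asks for first; the csv_processor module and process_csv function are invented stand-ins, not part of Boris's actual workflow:

```python
# test_process_csv.py -- written before the implementation exists.
import pytest

from csv_processor import process_csv  # hypothetical module under test


def test_empty_file_returns_no_rows(tmp_path):
    empty = tmp_path / "empty.csv"
    empty.write_text("")
    # An empty file should yield zero rows, not crash.
    assert process_csv(empty) == []


def test_malformed_header_raises(tmp_path):
    bad = tmp_path / "bad.csv"
    bad.write_text("id;;name\n1;;alice\n")
    # A malformed header should fail loudly rather than mis-parse silently.
    with pytest.raises(ValueError):
        process_csv(bad)
```

Claude then implements process_csv and reruns pytest until both tests go green; each red failure along the way is exactly the "data" described above.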

Automated Self-Verification Boris emphasizes that you don't even need to be the one coming up with the tests. You can automate this requirement in your configuration.

  • The Strategy: In your global prompts, you can tell Claude: "Before you implement anything, state clearly how you will verify that it works."
  • The Shift: This simple instruction shifts the model from "Generation Mode" (guessing based on probability) to "Engineering Mode" (proving based on logic). It forces the model to propose a solution and its validation method together, which drastically reduces logic errors.
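
One hedged way to encode that requirement once, instead of retyping it per task (the wording is illustrative; it would live wherever your global instructions do, such as the claude.md file covered in the next section):

```markdown
## Verification Policy
- Before implementing anything, state clearly how you will verify that it works.
- Run that verification after every change; treat failures as data and self-correct.
```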

Domain-Specific Verification Verification isn't just for code. Boris highlights that verification changes based on the domain:

  • For UI/Frontend: Use the built-in browser extension or screenshot capabilities. Don't just generate the CSS; force Claude to "look" at the rendered page and compare it against the design specs.
  • For Mobile: Use iOS/Android simulator MCP servers (MCP: Model Context Protocol).
  • For DevOps: Use Bash verification commands to ensure servers are responding correctly; see the sketch after this list.
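
The article names Bash here, but the principle is any scriptable check the agent can run and read. A minimal Python sketch, assuming a hypothetical /health endpoint on a staging host:

```python
# verify_deploy.py -- a post-change check the agent can run and react to.
# The URL is a hypothetical stand-in; point it at your real health endpoint.
import urllib.request

STAGING_HEALTH_URL = "https://staging.example.com/health"

def check_health(url: str = STAGING_HEALTH_URL) -> None:
    # Raises on timeouts, HTTP errors, or an unexpected status, so a bad
    # deploy surfaces as an error message the agent can read and act on.
    with urllib.request.urlopen(url, timeout=5) as resp:
        if resp.status != 200:
            raise RuntimeError(f"unexpected status: {resp.status}")

if __name__ == "__main__":
    check_health()
    print("service healthy")
```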

The Context Engine (The claude.md Strategy)

The Project Brain: Mastering the claude.md File

If Verification is the engine of this workflow, the claude.md file is the steering wheel.

Most users treat this file as a generic "ReadMe," throwing in a few links and forgetting about it. For the Anthropic team, this file is a living, breathing document that defines the intelligence of the session. It is the single most important asset in your repository.

What Belongs in claude.md? Boris describes this file not as documentation, but as a set of boundaries. To be effective, it must contain:

  1. The Tech Stack: Explicit versions (e.g., "We use Next.js 14, not 15") and libraries.
  2. The Directory Structure: A map of where key components live, so the agent doesn't get lost.
  3. The "Anti-Patterns": This is the most critical part. It shouldn't just list what to do; it must list what not to do.

The "Living Document" Habit Boris has operationalized this with his team. It is a habit at Anthropic to update the claude.md file multiple times a week.

  • The Loop: If a developer sees Claude making a recurring mistake (using a deprecated library, misnaming a variable, hallucinating a file path), they don't just correct the code; they correct the Brain. They immediately add a rule to the claude.md file.
  • The Effect: This creates a "Compound Intelligence" effect. Your AI agent gets smarter every week, not because the underlying model changed, but because its context window is being refined by your team's collective experience.

Context Sharding for Scale Furthermore, Boris advocates for Context Sharding. In a massive full-stack application, you shouldn't have a single monolithic prompt file that confuses the model.

  • The Strategy: The Front End should have its own claude.md. The Back End microservices should have their own, as sketched below.
  • The Benefit: This prevents context bloat (keeping each file lean, on the order of 2.5k tokens) and ensures that when Claude is working on React components, it isn't distracted by SQL database rules or DevOps configurations.
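
As a concrete illustration of both the three-part recipe and the sharding strategy, here is a hypothetical per-directory file; the stack versions, paths, and helper names are invented examples, not Anthropic's actual rules:

```markdown
# frontend/claude.md  (backend/ keeps its own, equally lean file)

## Tech Stack
- Next.js 14, NOT 15 (the migration is incomplete); TypeScript 5
- Tailwind for all styling

## Directory Structure
- components/ui/ : shared design-system components
- app/           : routes and server actions

## Anti-Patterns (never do these)
- Never import the deprecated `fetchUser()` helper; use `getUser()`
- Never hand-write CSS files; use Tailwind utilities
```

Because each shard stays lean, the agent working on React components never spends tokens on SQL or deployment rules it will never need.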

Operational Modes & The Speed Paradox

Strategic Modes: Why Experts Don't Use "God Mode"

There is a flag in Claude Code called --dangerously-skip-permissions. To the uninitiated, it sounds like the ultimate efficiency hack: it allows the AI to execute any terminal command (file deletion, git pushes, deployment triggers) without asking for human confirmation. It sounds fast. It feels frictionless.

Boris does not use it.

Why would the creator of the tool refuse to use its fastest feature? Because in a production environment, speed is secondary to safety. A single hallucinated rm -rf command, a misguided database migration, or an accidental push to main can cost a company millions in downtime or data loss.

Instead of "God Mode," Boris utilizes a granular permission structure via settings.json.

  • The Allow-List: He explicitly whitelists safe, high-frequency commands (like running tests, linting, or reading files).
  • The Guard-Rails: He requires explicit approval for dangerous commands (like deploying, deleting, or changing access keys).
  • The Team Sync: Crucially, he shares this settings.json file with his team. This ensures that every engineer's AI agent operates within the same safety rails, preventing a "rogue agent" scenario.
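
A minimal sketch of what that shared file can contain (the rule strings are illustrative and the schema may vary between Claude Code versions, so verify against the official settings documentation):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run lint)",
      "Bash(npm run test:*)"
    ],
    "ask": [
      "Bash(git push:*)",
      "Bash(npm run deploy:*)"
    ],
    "deny": [
      "Read(./.env)",
      "Bash(rm -rf:*)"
    ]
  }
}
```

Committed to the repository (conventionally at .claude/settings.json), this single file becomes the shared guard-rail for every engineer's agent.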

The Speed Paradox: Why Slower Models Win

In the race for AI efficiency, everyone is obsessed with latency. "Haiku" is instant. "Sonnet" is quick. Yet, Boris uses Opus 4.5 with Thinking Enabled for almost everything.

To the average user, this makes no sense. Opus 4.5 is slower. It takes time to "think" (Chain of Thought). It costs more tokens. But Boris argues that we are measuring speed incorrectly. We shouldn't measure "Tokens Per Second"; we should measure "Time To Completion."

  • The "Fast" Trap: If you use a fast model (Haiku), it might generate the code in 5 seconds. But if that code has a subtle logic bug, you spend 15 minutes debugging it, re-prompting, and fixing it. Total time: 15 minutes, 5 seconds.
  • The "Slow" Win: If you use a slow model (Opus 4.5), it might take 45 seconds to generate the code. But because it "thought" through the edge cases and architectural implications, the code works on the first try. Total time: 45 seconds.

For complex orchestration, deep reasoning, and architectural decisions, raw speed is a trap. First-Time Accuracy is the only metric that matters.

Advanced Orchestration & The "Junior Dev" Mental Model

Orchestration at Scale: The Parallel Workflow

Boris doesn't just stare at one loading bar waiting for a response. He runs the tool like a Mission Control Center. He typically runs five Claude Code sessions in parallel.

  • The Tab System: He numbers his terminal tabs. When a notification pings, he knows exactly which agent is reporting back.
  • Role Assignment: One tab might be refactoring a legacy component, while another is running a long integration test suite in the background.

Web Sessions: The "Fire and Forget" Strategy He heavily utilizes Web Sessions for asynchronous workflows.

  • The Workflow: For long-running tasks (like "Analyze this entire repo for security vulnerabilities"), he assigns the task to a web-based Claude session.
  • The Freedom: He closes his laptop and walks away. The agent runs in the cloud, creates a new Git branch, pushes the code, and waits for review.
  • The Mobile Command: He even triggers these workflows from his phone. This isn't just "coding assistance"; it is Asynchronous Workforce Management.

Tool Integration: The Agent as Orchestrator Boris proves that Claude Code isn't just for writing code; it's for controlling your entire dev stack. He uses the agent to interact with Slack, BigQuery, and Sentry directly via their CLIs. Instead of context-switching between four different dashboards to debug an error, he asks Claude: "Check Sentry for the latest error logs, query BigQuery for the affected user IDs, and post a summary to the Slack engineering channel." The agent performs the orchestration; Boris makes the decision.
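
In practice, this kind of hand-off can be a single non-interactive call. A hedged sketch, assuming Claude Code's print mode (claude -p) and that the relevant CLIs are installed, authenticated, and allow-listed:

```bash
# Hypothetical one-shot orchestration; flags and tool availability may
# vary by version, so verify against your own Claude Code installation.
claude -p "Check Sentry for the latest error logs, \
query BigQuery for the affected user IDs, \
and post a summary to the Slack engineering channel."
```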

Conclusion: The "Junior Developer" Mental Model

The ultimate takeaway from Boris’s workflow is a fundamental shift in perspective. He does not treat Claude Code as a text generator or a search engine. He treats it as a Junior Developer.

Think about how you successfully manage a Junior Developer:

  1. You give them a Plan: You don't just say "build this." You write a spec (Plan Mode).
  2. You give them Rules: You give them an onboarding doc that explains how things are done (claude.md).
  3. You don't give them Root Access: You limit their permissions until they earn trust (settings.json).
  4. You ask for Tests: You refuse to merge their Pull Request until they prove it works (Verification Loops).

This is exactly how EchoPulse approaches AI architecture. We don't look for the magic button. We build the systems, the constraints, and the workflows that allow AI to function as a reliable, high-velocity part of your engineering team. The tool is powerful, but only if you respect the protocol.

