Cloudship

Counselors: get a second opinion on your code — from four AI agents at once

Shaun Church flagged Counselors, a new tool from Aaron Francis, and I think it’s one of those ideas that’s obvious in hindsight.

Most parallel-agent tools — Uzi, FleetCode, AI Fleet — are designed to split different tasks across agents. Each agent gets its own git worktree and works on a separate problem. They’re throughput tools.

Counselors does something different. It sends the same prompt to Claude Code, Codex, Gemini, and Amp in parallel — then collects their independent responses. It’s a “council of advisors” pattern. You’re not splitting work, you’re getting second opinions.

How it works

Install it globally, run counselors init, and it discovers which AI CLIs you already have installed. No API keys to configure, no MCP servers — it literally calls your existing CLI binaries the same way you would from a terminal.

counselors run "Review src/api/ for security issues and missing edge cases"

Each agent works in parallel and writes its response to a structured output directory — a prompt.md, per-agent response files, and a run.json manifest with timing and status. Agents run in read-only mode by default so they can’t modify your codebase, only analyse it.
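To make that output structure concrete, here’s a rough sketch of reading the manifest back. The post only says run.json records timing and status, so the field names below ("agents", "status", "duration_ms") are illustrative assumptions, not the tool’s documented schema.

```python
import json
from pathlib import Path


def summarise_run(run_dir):
    """Summarise per-agent status and timing from a Counselors run directory.

    Field names in the manifest are guesses based on the described contents
    (timing and status per agent), not a documented schema.
    """
    manifest = json.loads((Path(run_dir) / "run.json").read_text())
    summary = []
    for agent in manifest.get("agents", []):
        summary.append(f"{agent['name']}: {agent['status']} ({agent['duration_ms']} ms)")
    return summary
```

Something like this is all it takes to build your own tooling on top of a run, since everything is just markdown and JSON on disk.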

Where the value is

The README includes examples from real runs, and the interesting part isn’t when models agree — it’s when they disagree.

Here’s one from a Tauri close-request review dispatched to Claude Opus, Gemini Pro, and Codex:

| Topic | Claude Opus | Gemini Pro | Codex |
|---|---|---|---|
| CloseRequested API | set_prevent_default(true) is correct | Agrees | Says plan is wrong; claims api.prevent_close() is needed |
| emit_to reliability | Flags potential Tauri bug (#10182) | Says raw app.emit_to may be needed | Says emit_to is correct |

Three models, three different assessments. That table alone is more useful than any single model’s review, because it tells you exactly where to focus your own investigation.

Another example: a ghostty-web terminal upgrade review in which all three agents agreed on the high-risk areas, but Gemini confused native Ghostty’s Kitty Graphics Protocol support with the web build, which lacks the rendering paths for it. Claude and Codex caught the distinction. That’s the kind of mistake multiple reviewers would catch in a human code review too: one person conflates two things, another spots it.

Loop mode

The counselors loop command runs multiple rounds. Each subsequent round can see the output from previous rounds, so agents build on — or challenge — each other’s findings.

counselors loop --preset bughunt --rounds 3 "src/auth"

Built-in presets guide the focus: bughunt hunts edge cases and test gaps, security targets exploitable vulnerabilities, hotspots looks for performance bottlenecks and O(n²) patterns, invariants finds impossible states and synchronisation problems.

There’s also a /counselors slash command you can install in Claude Code. Your primary agent handles context gathering, tool selection, and prompt assembly — then dispatches to the other agents and synthesises the results. So the whole workflow happens without leaving your editor.

No infrastructure

No containers, no git worktrees, no complex configuration. It calls your locally installed CLIs and writes markdown files to a directory. The security model is simple — it uses your existing CLI auth, doesn’t extract tokens, and child processes only receive allowlisted environment variables.
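The allowlisted-environment part of that security model is a simple pattern worth knowing on its own. Here’s a minimal sketch of it; the allowlist contents and function name are my assumptions, not Counselors’ actual code.

```python
import os
import subprocess

# Hypothetical allowlist for illustration; the actual set Counselors
# passes to child processes isn't documented in this post.
ENV_ALLOWLIST = {"PATH", "HOME", "LANG", "TERM"}


def run_agent(cmd):
    """Spawn a child CLI with only allowlisted environment variables.

    Anything not in ENV_ALLOWLIST (auth tokens, API keys, etc.) is
    simply never handed to the child process.
    """
    env = {k: v for k, v in os.environ.items() if k in ENV_ALLOWLIST}
    return subprocess.run(cmd, env=env, capture_output=True, text=True)
```

The child never sees stray secrets from your shell, which is why the tool can call third-party CLIs without extracting or forwarding tokens itself; each CLI uses its own stored auth.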

It’s a 10-second install that makes “get a second opinion from a different model” a one-liner instead of a manual process. Worth trying if you’re already using AI coding tools — npm install -g counselors or brew install aarondfrancis/homebrew-tap/counselors.

If you’re thinking about how AI tools like this could fit into your development workflow — let’s talk.

Want to talk about AI for your business?

I help businesses figure out where AI can actually make a difference — and then build it.

Book a free call