
multi-agent-code-review


Run parallel code reviews with multiple AI agents, then synthesize into one report. Triggers on "review code" or "multi-agent review".


Install command: npx skills add ktaletsk/multi-agent-code-review-skill
Author: ktaletsk
Repository: ktaletsk/multi-agent-code-review-skill
Last commit: 1/30/2026

SKILL.md

---
name: multi-agent-code-review
description: Run parallel code reviews with multiple AI agents, then synthesize into one report. Triggers on "review code" or "multi-agent review".
---

# Multi-Agent Code Review Skill

This skill runs the same code review prompt against multiple AI agents in parallel using the Cursor CLI, then synthesizes their findings into a single comprehensive report.

## When to Use

Activate this skill when the user asks to:
- "Review my code"
- "Run a code review"
- "Review the staged changes"
- "Do a multi-agent review"
- "Get multiple perspectives on this code"

## CRITICAL: Target Directory

**You must pass the USER'S PROJECT DIRECTORY as an argument to the script.**

The user's project directory is where they started their Claude Code session - NOT this skill's directory. Look for the git repository path in the conversation context (e.g., `/Users/.../git/jupyter_server`).

## Workflow

### Step 1: Identify the Target Repository

Determine the user's project directory from the conversation context. This is typically shown at the start of the session or can be found by checking where CLAUDE.md is located. It is NOT `/Users/.../skills/multi-agent-code-review/`.
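
If the context is ambiguous, one quick way to confirm the target from the user's working directory is a sketch like the following (`PROJECT_DIR` is an illustrative variable name, not part of the skill):

```bash
# Ask git for the repository root of the directory the session started in.
# Falls back to the current directory if it is not inside a git repository.
PROJECT_DIR="$(git rev-parse --show-toplevel 2>/dev/null || pwd)"
echo "Target repository: $PROJECT_DIR"
```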

### Step 2: Run Parallel Reviews

Run the review script and **pass the user's project directory as an argument**:

```bash
~/.claude/skills/multi-agent-code-review/scripts/run-reviews.sh /path/to/users/project
```

For example, if the user is working in `/Users/ktaletskiy/git/jupyter_server`:
```bash
~/.claude/skills/multi-agent-code-review/scripts/run-reviews.sh /Users/ktaletskiy/git/jupyter_server
```

**IMPORTANT**: Always pass the full path to the user's project as the first argument.

This will:
- Run multiple agents in parallel (configurable in the script)
- Save individual JSON results to `<project>/.reviews/`
- Take 1-3 minutes depending on code size
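
The fan-out inside `run-reviews.sh` can be sketched roughly as follows. This is not the actual script: the `MODELS` entries are placeholder identifiers, and the `echo` stands in for the real review invocation (the Cursor CLI call), which the installed script performs.

```bash
#!/usr/bin/env bash
# Sketch of a parallel fan-out: one background job per model, all results
# collected under <project>/.reviews/. Placeholder models and a stand-in
# review command; adapt both to the real run-reviews.sh.
set -euo pipefail

PROJECT_DIR="${1:-$(mktemp -d)}"        # user's project (demo: temp dir)
OUT_DIR="$PROJECT_DIR/.reviews"
mkdir -p "$OUT_DIR"

MODELS=("model-a" "model-b" "model-c")  # placeholder model identifiers

for model in "${MODELS[@]}"; do
  # Real script: run the review agent for $model, writing JSON output.
  echo "{\"model\": \"$model\", \"findings\": []}" \
    > "$OUT_DIR/review_${model}.json" &
done
wait  # block until every background review has finished

echo "Wrote ${#MODELS[@]} review files to $OUT_DIR"
```

The key moves are launching each review with `&` and collecting them with a single `wait`, so total wall time is roughly that of the slowest agent rather than the sum of all of them.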

### Step 3: Synthesize Results

After the script completes, read all JSON files from `<project>/.reviews/` (in the user's project directory) and synthesize them into a combined report.

**Synthesis Rules:**
1. Do NOT mention which agent found which issue
2. Deduplicate similar issues (same file + same line + same problem = one entry)
3. If reviewers disagree on severity, use the higher severity
4. Preserve unique findings from each reviewer
5. Present findings as if from a single thorough review
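
Rules 2 and 3 can be sketched with `jq`, demonstrated here on two toy review files. The JSON shape used below (a top-level `findings` array with `file`, `line`, `title`, and `severity` fields) is an assumption for illustration; match it to the real agent output.

```bash
# Demo of dedup + severity merge: two agents flag the same (file, line,
# title), so the pair collapses to one finding at the higher severity.
REVIEWS=$(mktemp -d)
cat > "$REVIEWS/review_agent1.json" <<'EOF'
{"findings":[{"file":"app.py","line":12,"title":"SQL injection","severity":"MEDIUM"}]}
EOF
cat > "$REVIEWS/review_agent2.json" <<'EOF'
{"findings":[{"file":"app.py","line":12,"title":"SQL injection","severity":"HIGH"},
             {"file":"util.py","line":3,"title":"Unused import","severity":"LOW"}]}
EOF

COMBINED=$(jq -s '
  def rank: {"LOW": 1, "MEDIUM": 2, "HIGH": 3}[.];
  [ .[].findings[] ]                    # flatten findings from all agents
  | group_by([.file, .line, .title])    # same file + line + problem
  | map(max_by(.severity | rank))       # keep the higher severity
' "$REVIEWS"/review_*.json)
echo "$COMBINED"
```

Here the two "SQL injection" entries merge into a single HIGH finding, while the unique "Unused import" finding is preserved.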

**Output Format:**

Write the combined report to `<project>/.reviews/COMBINED_REVIEW.md` using this structure:

```markdown
# Code Review Report

**Repository:** [repo name from user's directory]
**Date:** [today's date]

---

## Summary

[1-2 paragraph summary]

**Consensus:** [X of Y reviewers recommended changes / approved]

---

## Critical Issues (Require Action)

### 1. [Issue Title]
**Severity:** 🔴 HIGH
**File:** `path/to/file` (line X)

[Description]

**Recommendation:** [How to fix]

---

## Medium Issues (Should Address)

[Same format, 🟠 MEDIUM]

## Low Issues (Consider Addressing)

[Same format, 🟡 LOW]

## Suggested Improvements

[Numbered list]

---

## Verdict

**[🔴 REQUEST CHANGES / 🟢 APPROVE]**

[Priority action items table]
```

### Step 4: Report to User

After writing the combined report, summarize the key findings:
- Total issues found (by severity)
- Top 3 priority items to address
- Overall verdict

## Customization

The user can customize:
- **Agents/Models**: Edit `~/.claude/skills/multi-agent-code-review/scripts/run-reviews.sh` → `MODELS` array
- **Review focus**: Edit `~/.claude/skills/multi-agent-code-review/prompts/review-prompt.md`
- **Thinking depth**: Add "think hard" or "ultrathink" to the prompt

## Files

```
~/.claude/skills/multi-agent-code-review/
├── SKILL.md              # This file
├── scripts/
│   └── run-reviews.sh    # Parallel review runner
└── prompts/
    └── review-prompt.md  # Review prompt template

# Output is saved to the user's project:
<project>/.reviews/
├── review_*.json         # Individual agent outputs
└── COMBINED_REVIEW.md    # Synthesized report
```
