In the last post, we set up an AI code review pipeline that comments on every pull request automatically. The pipeline works — Claude reads your diffs, leaves inline comments, and flags real issues. But if you’ve been running it with the default general persona, you’ve probably noticed a pattern: the reviews try to catch everything and end up catching nothing deeply. You get a surface-level pass that mentions a bug here, waves at a security concern there, and moves on. For teams with specific priorities — security on an API service, performance on a data pipeline, style consistency across a growing codebase — that’s not enough. Focused personas produce dramatically better reviews because they give Claude a single job instead of five. These personas ship as part of the review pipeline in the fork.

This post builds on Self-Hosted AI Code Reviews with Gitea + Claude. You’ll need the review pipeline set up and running before diving in.

The Worked Example

Here’s a route handler that processes user orders. It was written in a hurry and shipped — the kind of code that ends up in a PR at 4pm on a Friday.

app.post("/api/orders", async (req, res) => {
  const { user_id, items } = req.body;
  // Look up user's discount tier
  const result = await db.query(
    `SELECT * FROM users WHERE id = '${user_id}'`
  );
  const userTier = result.rows[0].discount_tier;
  // Calculate totals with per-item discount lookup
  let order_total = 0;
  for (const item of items) {
    const priceRow = await db.query(`SELECT price FROM products WHERE id = $1`, [item.id]);
    for (const discount of await getDiscounts()) {
      if (discount.tier === userTier && discount.productId === item.id) {
        item.discountApplied = true;
      }
    }
    order_total += priceRow.rows[0].price * item.quantity;
  }
  await db.query(`INSERT INTO orders (userId, total) VALUES ($1, $2)`, [user_id, order_total]);
  res.json({ success: true, total: order_total });
});

Run the general persona against this and you’ll get something like:

Bug: result.rows[0].discount_tier will throw if no user is found — add a null check. Also, the SQL query on line 6 appears to use string interpolation; consider using parameterized queries.

Useful, but shallow. It caught the null access, vaguely mentioned the SQL issue, and said nothing about the N+1 query pattern or the mixed naming conventions. A focused persona does better.

The Four Personas

General

Focus your review on: bugs, logic errors, code quality, readability, and maintainability. Flag potential runtime errors, edge cases, and unclear code. Suggest improvements where the intent is ambiguous.

review_persona: "general"

This is the default. On the worked example, it produces the review you saw above — catches the null dereference on result.rows[0], briefly flags the string-interpolated SQL, but doesn’t dig into the performance implications of the loop or the inconsistent naming. It’s a reasonable first pass, not a thorough audit.
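The null check it asks for is a small guard. A minimal sketch, with the query result shape assumed to match node-postgres ({ rows: [...] }):

```javascript
// Sketch of the null check the general persona suggests.
// `result` mirrors the assumed node-postgres shape: { rows: [...] }.
function tierFromResult(result) {
  if (!result.rows || result.rows.length === 0) {
    return null; // no such user: the caller should 404 instead of crashing
  }
  return result.rows[0].discount_tier;
}
```

In the handler, a null return would translate to an early res.status(404) before any totals are computed.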

Security

Focus your review on security vulnerabilities: injection attacks (SQL, command, XSS), authentication/authorization issues, secrets or credentials in code, insecure data handling, OWASP Top 10 vulnerabilities, and unsafe deserialization. Flag any code that handles user input without proper validation or sanitization.

review_persona: "security"

On the worked example:

SQL Injection: user_id is interpolated directly into the query string on line 6. An attacker can pass ' OR 1=1; DROP TABLE users; -- as the user ID. Use a parameterized query: WHERE id = $1 with [user_id] as the parameter. The other queries already do this correctly, which makes this one easy to miss.

It zeroes in on the injection vector, explains the attack, and suggests the fix. It doesn’t mention the naming inconsistency or the loop structure — that’s not its job.
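The fix is mechanical: move the user input out of the SQL text and into the parameter array, the same pattern the other queries already use. A sketch with the database client stubbed so you can see what actually reaches the driver:

```javascript
// db stubbed for illustration: it records what would be sent to the driver.
const db = {
  query: (text, params = []) => ({ text, params, rows: [] }),
};

// Hostile input travels as a bound parameter, never as SQL text.
const userId = "' OR 1=1; DROP TABLE users; --";
const result = db.query("SELECT * FROM users WHERE id = $1", [userId]);
// result.text is the fixed SQL; result.params carries the raw input safely.
```

With a real driver, the SQL text and the parameter values are sent separately, so the payload can never be parsed as SQL.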

Performance

Focus your review on performance: hot paths and unnecessary computation, excessive memory allocations, O(n^2) or worse algorithmic complexity, missing caching opportunities, unnecessary network calls or database queries, and blocking operations in async contexts.

review_persona: "performance"

On the worked example:

N+1 Query Pattern: getDiscounts() is called inside the for loop on line 13, issuing a database query per item. With 100 items, that’s 100 redundant calls returning the same discount list. Hoist the call above the loop and filter in memory. The per-item SELECT price on line 12 is also a candidate for a single WHERE id IN (...) batch query.

It flags both the N+1 pattern and the per-item price lookup, explains the scaling concern, and suggests concrete fixes. It says nothing about the SQL injection or the null access — those are outside its focus.
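The hoisted rewrite might look like the sketch below. getDiscounts is stubbed, and the batched WHERE id IN (...) price query is represented by a prefetched price map; the shapes are assumptions for illustration:

```javascript
// Stubs standing in for the real queries (shapes assumed for illustration).
async function getDiscounts() {
  return [{ tier: "gold", productId: 1 }];
}
const prices = new Map([[1, 500], [2, 250]]); // one batched query, not one per item

async function computeTotal(items, userTier) {
  // One discounts fetch for the whole order instead of one per item.
  const discounted = new Set(
    (await getDiscounts())
      .filter((d) => d.tier === userTier)
      .map((d) => d.productId)
  );
  let total = 0;
  for (const item of items) {
    item.discountApplied = discounted.has(item.id);
    total += prices.get(item.id) * item.quantity;
  }
  return total;
}
```

Whatever the order size, this issues one discounts fetch and one batched price query instead of 2n round trips.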

Style

Focus your review on code style and consistency: naming conventions, formatting consistency with the rest of the codebase, idiomatic patterns for the language, dead code, and adherence to project conventions described in the review guidelines.

review_persona: "style"

On the worked example:

Inconsistent naming: The codebase mixes snake_case (user_id, order_total) with camelCase (userTier, priceRow, productId). Pick one convention — JavaScript idiom is camelCase — and apply it throughout. The discountApplied flag is set on item but never read; remove it or use it.

It catches the naming inconsistency and the dead property assignment. It doesn’t flag the security hole or the null dereference — a style reviewer shouldn’t.

Writing Your Own

The built-in personas cover broad categories, but you can override them entirely with review_persona_prompt in your workflow configuration. This replaces the built-in prompt with whatever focus you need.

review_persona_prompt: |
  Focus exclusively on error handling patterns.
  Flag any catch blocks that swallow errors silently.
  Verify all async operations have proper error propagation.

A few more examples for different contexts:

# API contract enforcement
review_persona_prompt: |
  Check that all endpoint handlers validate request bodies against their schema.
  Flag any response that doesn't include proper HTTP status codes.
  Verify pagination parameters are handled consistently.

# Team-specific conventions
review_persona_prompt: |
  All database access must go through the repository layer — no direct SQL in handlers.
  Flag any use of console.log; use the structured logger instead.
  Verify new endpoints have corresponding integration tests.
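For concreteness, here is the kind of code the error-handling prompt above is written to catch. The db stub always fails so the swallowed error is observable:

```javascript
// db stubbed to always fail, so the swallowed error is visible.
const db = { insert: async () => { throw new Error("connection refused"); } };

async function saveOrder(order) {
  try {
    await db.insert(order);
    return { ok: true };
  } catch (err) {
    // Swallowed silently: the caller never learns the insert failed.
    // This is exactly the pattern the custom persona flags.
    return { ok: true };
  }
}
```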

When writing custom prompts, keep a few things in mind. Be specific about what to flag — “review for quality” is too vague, “flag catch blocks that swallow errors” is actionable. Tell the persona what to look for, not what to ignore. And keep the prompt under 200 words. Long, rambling instructions dilute focus, which defeats the purpose of using a persona in the first place.

Stacking with CLAUDE.md

Personas control what the reviewer focuses on. A CLAUDE.md file in your repo root gives it context about your project — conventions, architecture decisions, patterns you’ve adopted. They complement each other. The persona says “look at security,” and CLAUDE.md says “here’s how we handle auth in this codebase.”

A concrete example: your CLAUDE.md might include “All authentication flows must use the AuthService wrapper; never call the session store directly.” The security persona will then flag any PR that bypasses AuthService and hits the session store — something it couldn’t catch without that project-specific context.
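A CLAUDE.md fragment along those lines might look like this (contents hypothetical, shown as a sketch):

```markdown
# CLAUDE.md

## Project conventions
- All authentication flows must use the AuthService wrapper; never call the
  session store directly.
- Database access goes through the repository layer — no direct SQL in handlers.
- Use the structured logger; console.log is not allowed in committed code.
```

The persona supplies the lens; CLAUDE.md supplies the facts the lens needs.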

The review pipeline is available in the fork, including all four built-in personas and support for custom prompts. Try running different personas on different repos — security for your API, style for your shared component library, performance for your data processing service. Match the reviewer’s focus to what actually matters for that codebase. If you haven’t set up the pipeline yet, start with the setup post and come back here once you have your first review running.