This tutorial walks you through setting up a local Gitea instance with automated Claude-powered PR reviews. Every pull request gets reviewed by Claude — inline comments on specific lines, severity ratings, configurable review personas, and intelligent diff truncation for large PRs. No GitHub required. Fully self-hosted.

The review pipeline features described in this post — personas, model routing, caching, size labels, and the full pipeline architecture — are implemented in my fork: mcdgavin/claude-code-gitea-action. The workflow examples below reference this fork.

1 — Why Self-Hosted AI Reviews?

The Problem

Code review is a bottleneck. Reviewers are busy, context-switching is expensive, and PRs sit waiting for days. AI-assisted review gives developers immediate feedback — catching bugs, security issues, and style problems before a human reviewer ever looks at the code.

But if your code lives on a self-hosted Gitea instance (not GitHub), the existing tooling doesn’t work out of the box: it is tightly coupled to GitHub Actions, GitHub Apps, and the GitHub API. Gitea has its own API, its own Actions runner, and its own quirks.

What We’re Building

A Gitea Action that automatically reviews every PR using Claude. When a developer opens or updates a PR, the action:

  1. Generates a prioritized diff (source code first, tests second, docs last)
  2. Sends it to Claude with configurable review guidelines
  3. Posts a formal Gitea review with inline comments on specific lines
  4. Optionally labels PRs by size and appends a review summary to the PR body

2 — Prerequisites

What You’ll Need

  • Docker & Docker Compose — for running Gitea and the Actions runner
  • An Anthropic API key — sign up at console.anthropic.com
  • ~10 minutes of setup time

No Gitea experience needed. This guide starts from zero. If you already have Gitea running, skip to Section 5.

3 — Install Gitea

Docker Compose Setup

Create a directory for your Gitea installation and add this docker-compose.yml:

docker-compose.yml
version: "3"
services:
  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__server__ROOT_URL=http://localhost:3000
      - GITEA__actions__ENABLED=true
    restart: always
    volumes:
      - ./gitea-data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "2222:22"

Step 1: Start Gitea

Terminal window
docker compose up -d

Step 2: Initial Configuration

Open http://localhost:3000 in your browser. You’ll see the installation wizard. The defaults work fine — just set your admin username and password, then click Install Gitea.

Step 3: Create a Test Repository

After installation, create a new repository. We’ll use this to test the review workflow. Push some code to it — anything works.

Step 4: Generate an API Token

Go to Settings → Applications → Manage Access Tokens. Generate a token with repo and issue scopes. Save it — you’ll need it for the workflow.

4 — Actions Runner

Setting Up act_runner

Gitea Actions uses act_runner — a GitHub Actions-compatible runner that executes your workflows. Add it to your docker-compose.yml:

docker-compose.yml (add to services)
  runner:
    image: gitea/act_runner:latest
    container_name: gitea-runner
    restart: always
    depends_on:
      - gitea
    environment:
      - GITEA_INSTANCE_URL=http://gitea:3000
      - GITEA_RUNNER_REGISTRATION_TOKEN=${RUNNER_TOKEN}
      - GITEA_RUNNER_NAME=local-runner
    volumes:
      - ./runner-data:/data
      - /var/run/docker.sock:/var/run/docker.sock

Step 1: Get the Registration Token

In Gitea, go to Site Administration → Actions → Runners and copy the registration token.

Step 2: Set the Token and Restart

Terminal window
export RUNNER_TOKEN="your-registration-token-here"
docker compose up -d

The runner should appear in Gitea’s admin panel within a few seconds. Verify it shows as Online.

Docker-in-Docker. The runner needs access to /var/run/docker.sock to run workflow containers. On macOS with Docker Desktop, this works out of the box. On Linux, ensure the docker socket is accessible.

5 — The Review Workflow

Creating the Workflow File

In your repository, create .gitea/workflows/claude-review.yml:

.gitea/workflows/claude-review.yml
name: Claude PR Review
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: mcdgavin/claude-code-gitea-action@main
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          gitea_token: ${{ secrets.GITEA_TOKEN }}
          mode: agent
          review_verdict_mode: "no-approve"
          review_persona: "general"
          max_diff_size: "50000"
          enable_review_cache: "true"
          enable_size_labels: "true"

Step 1: Add Secrets

In your Gitea repository, go to Settings → Actions → Secrets and add:

  • ANTHROPIC_API_KEY — your Anthropic API key
  • GITEA_TOKEN — the API token you generated earlier

Step 2: Commit and Push

Terminal window
git add .gitea/workflows/claude-review.yml
git commit -m "ci: add Claude PR review workflow"
git push

That’s it. The next PR opened against this repository will be automatically reviewed by Claude.

6 — Configuration Reference

All Review Inputs

Every input has a safe default. The action works with zero config beyond API keys. Tune these as you build trust in the reviews.

  • review_verdict_mode (default: comment-only) — Review verdict mode: full (approve/request changes), no-approve, or comment-only. Start with comment-only; upgrade to no-approve once you trust the reviews.
  • max_diff_size (default: 50000) — Max diff characters before truncation. Files are prioritized: source > tests > docs.
  • review_persona (default: general) — Built-in persona: general, security, performance, or style.
  • review_persona_prompt (default: empty) — Custom review focus as raw text; overrides the built-in persona.
  • review_guidelines (default: CLAUDE.md) — Path to a file with project-specific review guidelines.
  • file_include (default: **/*) — Comma-separated glob patterns for files to review.
  • file_exclude (default: empty) — Glob patterns to skip. Built-in excludes: *.lock, vendor/**, dist/**, etc.
  • enable_size_labels (default: false) — Add size/S, size/M, size/L, size/XL labels to PRs.
  • enable_description_update (default: false) — Append a review summary section to the PR body.
  • enable_review_cache (default: true) — Skip re-review if no new commits since the last review.
  • enable_draft_review (default: false) — Review draft PRs; by default the action waits until a PR is marked ready for review.
  • max_inline_comments (default: 25) — Cap on inline comments per review; excess comments are folded into the summary by severity.
  • model_routing (default: empty) — JSON mapping PR size to model. Example: {"S":"claude-haiku-4-5-20251001"}
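To make the max_inline_comments behavior concrete, here is a minimal sketch of the capping step — the severity names and their ordering (critical > major > minor > nit) are assumptions for illustration, not the action’s actual schema:

```javascript
// Keep the most severe comments inline; fold the rest into the summary,
// grouped by severity. Severity ordering here is an assumed convention.
const SEVERITY_ORDER = { critical: 0, major: 1, minor: 2, nit: 3 };

function capComments(comments, max) {
  const sorted = [...comments].sort(
    (a, b) => SEVERITY_ORDER[a.severity] - SEVERITY_ORDER[b.severity]
  );
  const inline = sorted.slice(0, max); // posted as inline comments
  const folded = {};
  for (const c of sorted.slice(max)) {
    (folded[c.severity] ??= []).push(c.text); // listed in the summary
  }
  return { inline, folded };
}
```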

Review Personas

The review_persona input focuses Claude’s attention:

  • general — Bugs, logic errors, code quality, readability
  • security — Injection attacks, auth issues, secrets exposure, OWASP Top 10
  • performance — Hot paths, allocations, algorithmic complexity, missing caching
  • style — Naming, formatting, consistency with codebase patterns

For custom focus, use review_persona_prompt:

review_persona_prompt: |
  Focus exclusively on error handling patterns.
  Flag any catch blocks that swallow errors silently.
  Verify all async operations have proper error propagation.
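The override behavior can be sketched as follows — the built-in persona texts here paraphrase the list above and are not the action’s actual prompts:

```javascript
// A custom prompt, when non-empty, takes precedence over the built-in
// persona; unknown personas fall back to "general".
const PERSONAS = {
  general: "Focus on bugs, logic errors, code quality, and readability.",
  security: "Focus on injection, auth issues, secrets exposure, OWASP Top 10.",
  performance: "Focus on hot paths, allocations, algorithmic complexity, caching.",
  style: "Focus on naming, formatting, and consistency with codebase patterns.",
};

function resolvePersona(persona, customPrompt) {
  if (customPrompt && customPrompt.trim()) return customPrompt;
  return PERSONAS[persona] ?? PERSONAS.general;
}
```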

Model Routing (Cost Optimization)

Use cheaper models for simple PRs, stronger models for complex ones:

model_routing: '{"S":"claude-haiku-4-5-20251001","M":"claude-sonnet-4-5-20250929","L":"claude-sonnet-4-5-20250929","XL":"claude-sonnet-4-5-20250929"}'

Size thresholds: S ≤ 50 lines, M ≤ 200, L ≤ 500, XL > 500.
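The thresholds and routing lookup amount to a few lines — this sketch assumes “lines” means changed lines in the diff and that sizes missing from the routing map fall back to a default model:

```javascript
// Classify PR size from changed-line count (thresholds from the docs).
function classifySize(changedLines) {
  if (changedLines <= 50) return "S";
  if (changedLines <= 200) return "M";
  if (changedLines <= 500) return "L";
  return "XL";
}

// Pick the routed model, falling back when no mapping exists for the size.
function pickModel(routing, size, fallback) {
  return routing[size] ?? fallback;
}
```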

7 — How the Pipeline Works

Architecture

The review pipeline is a sequence of stages, each writing to a shared context object. Any stage can short-circuit by setting skipReview = true.

graph TD
A["PR Event"] --> B["TriggerGate"]
B --> C["FileFilter"]
C --> D["DiffGenerator"]
D --> E["CacheCheck"]
E --> F["ModelRouter"]
F --> G["ReviewEngine"]
G --> H["ReviewParser"]
H --> I["VerdictPoster"]
H --> J["SizeLabeler"]
H --> K["DescUpdater"]

classDef gate fill:#fef3c7,stroke:#d97706,color:#92400e
classDef core fill:#dcfce7,stroke:#16a34a,color:#14532d
classDef side fill:#dbeafe,stroke:#2563eb,color:#1e3a8a

class B,C,E gate
class D,F,G,H core
class I,J,K side

Stage-by-Stage

  1. TriggerGate — Skips draft PRs and PRs authored by the bot itself (prevents review loops)
  2. FileFilter — Applies include/exclude globs. Default excludes: lock files, vendor, dist, node_modules, generated code
  3. DiffGenerator — Generates per-file diffs via git diff. Applies priority-ranked truncation when the diff exceeds max_diff_size
  4. CacheCheck — Looks for a previous Claude review with a SHA marker. Skips if HEAD hasn’t changed
  5. ModelRouter — Classifies PR size (S/M/L/XL) and selects the appropriate model
  6. ReviewEngine — Builds a structured prompt with diff, guidelines, and persona, then calls Claude’s Messages API
  7. ReviewParser — Parses Claude’s XML response, enforces verdict mode, maps line numbers to diff positions, caps comments
  8. Side Effects (parallel) — VerdictPoster posts the review, SizeLabeler adds labels, DescriptionUpdater updates the PR body
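The pipeline shape described above can be sketched in a few lines of JavaScript — stage bodies here are illustrative stubs, not the action’s real logic:

```javascript
// Each stage reads and writes a shared context object; setting
// skipReview short-circuits the pipeline so later stages never run.
function runPipeline(stages, ctx) {
  for (const stage of stages) {
    stage(ctx);
    if (ctx.skipReview) break;
  }
  return ctx;
}

const stages = [
  (ctx) => { if (ctx.pr.draft) ctx.skipReview = true; },                      // TriggerGate
  (ctx) => { ctx.files = ctx.pr.files.filter((f) => !f.endsWith(".lock")); }, // FileFilter
  (ctx) => { ctx.diff = `diff for ${ctx.files.length} file(s)`; },            // DiffGenerator (stub)
];
```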

8 — Testing It Out

Your First AI Review

Step 1: Create a Branch with a Bug

Terminal window
git checkout -b test-review
cat > buggy.js << 'EOF'
function processUser(user) {
  // Bug: no null check
  const name = user.name.toUpperCase();
  const email = user.email.split('@')[0];
  // Bug: SQL injection
  const query = `SELECT * FROM users WHERE name = '${name}'`;
  // Bug: hardcoded secret
  const apiKey = "sk-1234567890abcdef";
  return { name, email, query, apiKey };
}
EOF
git add buggy.js
git commit -m "add user processor"
git push -u origin test-review

Step 2: Open a Pull Request

In Gitea, open a PR from test-review to main. Within a minute or two, Claude will post a review with inline comments pointing out:

  • The missing null check on user
  • The SQL injection vulnerability
  • The hardcoded API key

Step 3: Push a Fix

Fix the bugs and push. The action re-reviews the new commit; with enable_review_cache enabled, the stored SHA marker ensures only unchanged commits are skipped, so a new push always triggers a fresh review. The old review gets deleted, replaced by a fresh one.
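The cache check can be sketched like this — the HTML-comment marker format is hypothetical; the real action embeds its own marker in the review body:

```javascript
// Extract the SHA marker from a previous review body and compare it to
// HEAD; skip the review only when HEAD is unchanged.
const MARKER = /<!-- claude-review-sha: ([0-9a-f]{7,40}) -->/;

function shouldSkipReview(previousReviewBody, headSha) {
  const m = previousReviewBody && previousReviewBody.match(MARKER);
  return Boolean(m && m[1] === headSha);
}
```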

9 — Tips & Troubleshooting

Getting the Best Results

Write a CLAUDE.md

Add a CLAUDE.md file to your repository root with project-specific review guidelines. This is the single most impactful thing you can do for review quality. Include:

  • Your project’s coding standards and conventions
  • Common patterns to follow (or avoid)
  • Architecture decisions that affect code review
  • Any project-specific security requirements

Language-Specific Guidelines

Create CLAUDE.typescript.md, CLAUDE.go.md, etc. for language-specific rules. These are loaded automatically when the PR contains files of that type.
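The loading rule amounts to mapping changed-file extensions to guideline files. This sketch assumes the CLAUDE.&lt;language&gt;.md naming convention above and a simple extension-to-language map:

```javascript
// Collect the languages present in the PR (deduplicated, in first-seen
// order) and derive the guideline filenames to load.
const EXT_TO_LANG = { ts: "typescript", tsx: "typescript", go: "go", py: "python", rs: "rust" };

function guidelineFiles(changedPaths) {
  const langs = new Set();
  for (const p of changedPaths) {
    const ext = p.split(".").pop();
    if (EXT_TO_LANG[ext]) langs.add(EXT_TO_LANG[ext]);
  }
  return [...langs].map((l) => `CLAUDE.${l}.md`);
}
```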

Troubleshooting

Reviews not appearing? Check that the runner is online in Gitea’s admin panel. Verify the workflow file is at .gitea/workflows/ (not .github/workflows/). Check the Actions tab in your repository for logs.

Inline comments on wrong lines? Gitea’s review API has inconsistencies with line position mapping across versions. The action gracefully falls back to summary-only reviews when inline comments fail. Ensure you’re running the latest Gitea version.

Large PRs getting truncated? Increase max_diff_size (default: 50,000 chars). Files are prioritized: source code is reviewed first, tests second, documentation last. The truncation note tells you exactly which files were skipped.
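The priority-ranked truncation works roughly like this — the ranking heuristic below is an assumption for illustration, not the action’s exact rules:

```javascript
// Rank files: source first (0), tests second (1), docs last (2).
function priorityRank(path) {
  if (/\.(md|rst|txt)$/.test(path)) return 2;
  if (/(^|\/)(test|tests|__tests__)\//.test(path) || /\.(test|spec)\./.test(path)) return 1;
  return 0;
}

// Append files in priority order until the character budget runs out;
// anything that doesn't fit goes into the truncation note.
function truncateDiff(files, maxChars) {
  const kept = [], skipped = [];
  let used = 0;
  for (const f of [...files].sort((a, b) => priorityRank(a.path) - priorityRank(b.path))) {
    if (used + f.diff.length <= maxChars) {
      kept.push(f.path);
      used += f.diff.length;
    } else {
      skipped.push(f.path);
    }
  }
  return { kept, skipped };
}
```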

Cost Control

Use model_routing to route small PRs to cheaper models. Enable enable_review_cache to avoid re-reviewing unchanged code. Use file_exclude to skip files that don’t need review (generated code, lock files, etc.).
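As a rough picture of how file_exclude patterns apply, here is a tiny glob matcher — the action almost certainly uses a real glob library, so treat this converter as illustrative only:

```javascript
// Convert a glob to a RegExp: '**/' spans directories, '*' stops at '/'.
// Placeholders keep the '**' expansions from being rewritten by the '*' pass.
function globToRegExp(glob) {
  const pattern = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*\//g, "\u0001")
    .replace(/\*\*/g, "\u0002")
    .replace(/\*/g, "[^/]*")
    .replace(/\u0001/g, "(.*/)?")
    .replace(/\u0002/g, ".*");
  return new RegExp(`^${pattern}$`);
}

function isExcluded(path, excludes) {
  return excludes.some((g) => globToRegExp(g).test(path));
}
```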

You’re done. You now have a fully self-hosted AI code review pipeline. Every PR gets immediate, consistent feedback — catching bugs before humans have to. As you build trust, upgrade from comment-only to no-approve to let Claude flag blocking issues, or add the security persona for security-critical repositories.

What’s Next

This tutorial covered a single-model review pipeline, but there’s a lot more you can do. In the follow-up post, Teaching Your AI Reviewer What to Look For, I walk through the four built-in review personas — general, security, performance, and style — show how each one focuses on different problems in the same code, and cover how to write your own custom persona prompts.