diff --git a/docs/docs/tools/improve.md b/docs/docs/tools/improve.md
index 11f3b9cf9b..fac1d7c47c 100644
--- a/docs/docs/tools/improve.md
+++ b/docs/docs/tools/improve.md
@@ -304,6 +304,14 @@ Note: Chunking is primarily relevant for large PRs. For most PRs (up to 600 line
persistent_comment |
If set to true, the improve comment will be persistent, meaning that every new improve request will edit the previous one. Default is true. |
+
+ | persistent_inline_comments |
+ Controls how inline suggestions are deduplicated across re-runs on the same PR/MR. "update" (default) edits the matching existing inline comment in place; "skip" leaves the existing one untouched; "off" always posts a new inline comment (legacy behavior). The dedup key is the hash of the proposed edit (file + normalized improved_code), with a fallback to a prose-based hash only when no improved_code is available. Line numbers are intentionally excluded so dedup stays stable across upstream pushes that drift the target line. See How inline-comment deduplication works below for details. |
+
+
+ | resolve_outdated_inline_comments |
+ When dedup is enabled (persistent_inline_comments != "off"), automatically resolve inline-comment threads whose suggestion was not re-emitted on the latest run; the thread body gets a short auto-resolve note. Default is true. Has no effect when persistent_inline_comments = "off". Reviewers can manually unresolve an auto-resolved thread to opt it out of future auto-resolution — the bot detects the prior resolution marker in the body and respects it. |
+
| suggestions_score_threshold |
 Any suggestion with an importance score below this threshold will be removed. Default is 0. We highly recommend not setting this value above 7-8, since higher values may clip relevant suggestions. |
@@ -344,3 +352,35 @@ Note: Chunking is primarily relevant for large PRs. For most PRs (up to 600 line
- **Hierarchy:** Presenting the suggestions in a structured hierarchical table enables the user to _quickly_ understand them, and to decide which ones are relevant and which are not.
- **Customization:** To guide the model to suggestions that are more relevant to the specific needs of your project, we recommend using the [`extra_instructions`](./improve.md#extra-instructions-and-best-practices) and [`best practices`](./improve.md#best-practices) fields.
- **Model Selection:** For specific programming languages or use cases, some models may perform better than others.
+
+## How inline-comment deduplication works
+
+When `persistent_inline_comments` is enabled (the default is `"update"`), re-running `/improve` on the same PR/MR will recognize and update — instead of duplicating — inline comments that were already posted for the same suggestion on a previous run. This uses a hidden marker (an HTML comment embedding a short content hash) in each inline-comment body.
+
+### Identity rule
+
+Two inline comments are considered the same suggestion when the marker matches. The marker is a short hash computed as follows:
+
+- **Structured (preferred):** When the suggestion has an `improved_code` field (i.e., the model proposed replacement code), the hash covers `(file, normalized improved_code)`. Wording changes in the suggestion's prose do **not** affect the key; `label` is **not** part of the key either — the edit itself is the identity.
+- **Prose fallback:** When no `improved_code` is present, the hash falls back to `(file, label, normalized prose prefix)`.
+
+Normalization of `improved_code` expands tabs, strips trailing whitespace from each line, drops leading/trailing blank lines, and removes the longest common leading indent — so reindentation of the same proposed edit does not split comments.
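
As an illustration, the normalization can be sketched as follows (this mirrors the module's `normalize_code`; the authoritative implementation lives in `pr_agent/algo/inline_comments_dedup.py`):

```python
import textwrap


def normalize_code(text: str) -> str:
    # Expand tabs and strip trailing whitespace per line.
    lines = [line.rstrip() for line in text.expandtabs().split("\n")]
    # Drop leading and trailing fully-blank lines.
    while lines and not lines[0]:
        lines.pop(0)
    while lines and not lines[-1]:
        lines.pop()
    # Remove the longest common leading indent across the remaining lines.
    return textwrap.dedent("\n".join(lines)) if lines else ""


# A reindented copy of the same proposed edit normalizes identically:
a = "if x:\n    return 1"
b = "        if x:\n            return 1\n"
assert normalize_code(a) == normalize_code(b) == "if x:\n    return 1"
```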
+
+### Strict behaviour
+
+This is intentionally a strict rule: when a suggestion has `improved_code`, its prose is never consulted for dedup. Two suggestions at the same spot with identical prose but **different** proposed edits are treated as distinct and remain as two separate inline comments. We'd rather under-merge (and show two comments) than over-merge two genuinely different fixes into one.
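
A simplified sketch of the structured identity rule (the real hash additionally folds in code normalization; see `generate_marker` in `pr_agent/algo/inline_comments_dedup.py`):

```python
import hashlib


def edit_key(file: str, improved_code: str) -> str:
    # Structured identity: version tag + file + proposed edit.
    # Prose and label are deliberately not part of the signature.
    sig = "\x00".join(["v2s", file, improved_code.strip()])
    return hashlib.sha256(sig.encode("utf-8")).hexdigest()[:12]


# Identical prose, different proposed edits -> distinct keys, two comments.
k1 = edit_key("app.py", "return value or 0")
k2 = edit_key("app.py", "return value if value is not None else 0")
assert k1 != k2

# The same edit re-emitted with reworded prose -> same key, comment is updated.
assert edit_key("app.py", "return value or 0") == k1
```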
+
+### What's invariant across runs
+
+- Prose paraphrase of the same finding (same proposed edit) — does **not** split.
+- Reindentation, tab-vs-space, or trailing-whitespace variation in the proposed edit — does **not** split.
+- Upstream commits that push the target line up or down in the file — do **not** split. Line numbers are not part of the key.
+
+### What still splits
+
+- A genuinely different proposed edit at the same spot — by design.
+- A different `label` when the prose fallback is in use (e.g., the same prose-only suggestion now tagged "best practice" vs "possible issue").
+
+### Future work
+
+A fuzzy near-miss signal (e.g., shingle/Jaccard similarity) may be added later if users report recurring duplicates that this deterministic scheme doesn't catch. For now the behaviour is strictly deterministic, with no similarity threshold to tune.
diff --git a/docs/superpowers/plans/2026-05-01-repo-context-files.md b/docs/superpowers/plans/2026-05-01-repo-context-files.md
new file mode 100644
index 0000000000..5fdf833567
--- /dev/null
+++ b/docs/superpowers/plans/2026-05-01-repo-context-files.md
@@ -0,0 +1,383 @@
+# Repo Context Files Implementation Plan
+
+> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [x]`) syntax for tracking.
+
+**Goal:** Add explicit config-driven repository context files that are loaded from the target repository and injected into review, describe, and improve prompts.
+
+**Architecture:** Add a small context-loading helper in `pr_agent/algo/repo_context.py` that reads `[config].repo_context_files`, asks the active git provider for each file, formats loaded content with file headers, and enforces a total line cap. Add a base git-provider method plus a GitHub implementation for fetching repository files from the default branch. Tool classes add `repo_context` to prompt variables, and prompt templates render it only when present.
+
+**Tech Stack:** Python 3.12, Dynaconf settings, PyGithub provider APIs, Jinja2 prompt templates, pytest.
+
+---
+
+### Task 1: Add Repo Context Loader
+
+**Files:**
+- Create: `pr_agent/algo/repo_context.py`
+- Test: `tests/unittest/test_repo_context.py`
+
+- [x] **Step 1: Write failing loader tests**
+
+Add `tests/unittest/test_repo_context.py`:
+
+```python
+from pr_agent.algo.repo_context import build_repo_context
+from pr_agent.config_loader import get_settings
+
+
+class FakeProvider:
+ def __init__(self, files):
+ self.files = files
+ self.requested_paths = []
+
+ def get_repo_file_content(self, file_path: str):
+ self.requested_paths.append(file_path)
+ return self.files.get(file_path)
+
+
+def test_build_repo_context_returns_empty_when_no_files_configured():
+ original_files = get_settings().config.get("repo_context_files", [])
+ try:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", [])
+
+ assert build_repo_context(FakeProvider({"AGENTS.md": "repo purpose"})) == ""
+ finally:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", original_files)
+
+
+def test_build_repo_context_fetches_and_formats_configured_files():
+ original_files = get_settings().config.get("repo_context_files", [])
+ original_max_lines = get_settings().config.get("repo_context_max_lines", 500)
+ try:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", ["AGENTS.md", "CONTRIBUTING.md"])
+ get_settings().set("CONFIG.REPO_CONTEXT_MAX_LINES", 500)
+ provider = FakeProvider({
+ "AGENTS.md": "# Agent Guide\nUse focused tests.",
+ "CONTRIBUTING.md": "Keep PRs small.",
+ })
+
+ context = build_repo_context(provider)
+
+ assert context == (
+ "## AGENTS.md\n"
+ "# Agent Guide\n"
+ "Use focused tests.\n\n"
+ "## CONTRIBUTING.md\n"
+ "Keep PRs small."
+ )
+ assert provider.requested_paths == ["AGENTS.md", "CONTRIBUTING.md"]
+ finally:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", original_files)
+ get_settings().set("CONFIG.REPO_CONTEXT_MAX_LINES", original_max_lines)
+
+
+def test_build_repo_context_skips_missing_and_invalid_files():
+ original_files = get_settings().config.get("repo_context_files", [])
+ try:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", ["", 7, "MISSING.md", "AGENTS.md"])
+ provider = FakeProvider({"AGENTS.md": "Loaded context"})
+
+ assert build_repo_context(provider) == "## AGENTS.md\nLoaded context"
+ assert provider.requested_paths == ["MISSING.md", "AGENTS.md"]
+ finally:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", original_files)
+
+
+def test_build_repo_context_enforces_total_line_cap():
+ original_files = get_settings().config.get("repo_context_files", [])
+ original_max_lines = get_settings().config.get("repo_context_max_lines", 500)
+ try:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", ["AGENTS.md", "CONTRIBUTING.md"])
+ get_settings().set("CONFIG.REPO_CONTEXT_MAX_LINES", 4)
+ provider = FakeProvider({
+ "AGENTS.md": "one\ntwo\nthree",
+ "CONTRIBUTING.md": "four\nfive",
+ })
+
+ context = build_repo_context(provider)
+
+ assert context == "## AGENTS.md\none\ntwo\nthree"
+ finally:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", original_files)
+ get_settings().set("CONFIG.REPO_CONTEXT_MAX_LINES", original_max_lines)
+```
+
+- [x] **Step 2: Run tests to verify failure**
+
+Run: `PYTHONPATH=. ./.venv/bin/pytest tests/unittest/test_repo_context.py -q`
+
+Expected: FAIL during import with `ModuleNotFoundError: No module named 'pr_agent.algo.repo_context'`.
+
+- [x] **Step 3: Add minimal loader implementation**
+
+Create `pr_agent/algo/repo_context.py`:
+
+```python
+from pr_agent.config_loader import get_settings
+from pr_agent.log import get_logger
+
+
+def build_repo_context(git_provider) -> str:
+ context_files = get_settings().config.get("repo_context_files", [])
+ if not context_files:
+ return ""
+
+ max_lines = get_settings().config.get("repo_context_max_lines", 500)
+ try:
+ max_lines = max(0, int(max_lines))
+ except (TypeError, ValueError):
+ max_lines = 500
+
+ rendered_lines = []
+ for file_path in context_files:
+ if not isinstance(file_path, str) or not file_path.strip():
+ get_logger().warning("Skipping invalid repo context file path", artifact={"file_path": file_path})
+ continue
+
+ file_path = file_path.strip()
+ try:
+ content = git_provider.get_repo_file_content(file_path)
+ except Exception as e:
+ get_logger().warning(f"Failed to load repo context file: {file_path}", artifact={"error": str(e)})
+ continue
+
+ if not content:
+ get_logger().debug(f"Repo context file is empty or missing: {file_path}")
+ continue
+
+ if isinstance(content, bytes):
+ content = content.decode("utf-8", errors="replace")
+
+ file_lines = [f"## {file_path}", *str(content).strip().splitlines()]
+ remaining_lines = max_lines - len(rendered_lines)
+ if remaining_lines <= 0:
+ break
+
+ if rendered_lines:
+ rendered_lines.append("")
+ remaining_lines -= 1
+
+ rendered_lines.extend(file_lines[:remaining_lines])
+
+ return "\n".join(rendered_lines).strip()
+```
+
+- [x] **Step 4: Run tests to verify pass**
+
+Run: `PYTHONPATH=. ./.venv/bin/pytest tests/unittest/test_repo_context.py -q`
+
+Expected: PASS.
+
+### Task 2: Add Provider Fetch Method And Defaults
+
+**Files:**
+- Modify: `pr_agent/settings/configuration.toml`
+- Modify: `pr_agent/git_providers/git_provider.py`
+- Modify: `pr_agent/git_providers/github_provider.py`
+- Test: `tests/unittest/test_repo_context.py`
+
+- [x] **Step 1: Write failing provider tests**
+
+Append to `tests/unittest/test_repo_context.py`:
+
+```python
+from unittest.mock import Mock
+
+from pr_agent.git_providers.git_provider import GitProvider
+from pr_agent.git_providers.github_provider import GithubProvider
+
+
+def test_base_provider_repo_file_content_returns_empty():
+    # GitProvider is abstract, so exercise the (non-abstract) base method unbound.
+    assert GitProvider.get_repo_file_content(object(), "AGENTS.md") == ""
+
+
+def test_github_provider_fetches_repo_file_content_from_default_branch():
+ provider = GithubProvider.__new__(GithubProvider)
+ provider.repo_obj = Mock()
+ provider.repo_obj.get_contents.return_value.decoded_content = b"repo context"
+
+ assert provider.get_repo_file_content("AGENTS.md") == "repo context"
+ provider.repo_obj.get_contents.assert_called_once_with("AGENTS.md")
+```
+
+- [x] **Step 2: Run tests to verify failure**
+
+Run: `PYTHONPATH=. ./.venv/bin/pytest tests/unittest/test_repo_context.py -q`
+
+Expected: FAIL because `get_repo_file_content` does not exist.
+
+- [x] **Step 3: Add config defaults and provider methods**
+
+In `pr_agent/settings/configuration.toml`, under `[config]`, add:
+
+```toml
+repo_context_files = []
+repo_context_max_lines = 500
+```
+
+In `pr_agent/git_providers/git_provider.py`, add near `get_repo_settings`:
+
+```python
+ def get_repo_file_content(self, file_path: str):
+ return ""
+```
+
+In `pr_agent/git_providers/github_provider.py`, add near `get_repo_settings`:
+
+```python
+ def get_repo_file_content(self, file_path: str):
+ try:
+ contents = self.repo_obj.get_contents(file_path).decoded_content
+ if isinstance(contents, bytes):
+ return contents.decode("utf-8", errors="replace")
+ return contents
+ except Exception as e:
+ get_logger().warning(f"Failed to load repo file: {file_path}, error: {e}")
+ return ""
+```
+
+- [x] **Step 4: Run tests to verify pass**
+
+Run: `PYTHONPATH=. ./.venv/bin/pytest tests/unittest/test_repo_context.py -q`
+
+Expected: PASS.
+
+### Task 3: Inject Repo Context Into Tool Variables And Prompts
+
+**Files:**
+- Modify: `pr_agent/tools/pr_reviewer.py`
+- Modify: `pr_agent/tools/pr_description.py`
+- Modify: `pr_agent/tools/pr_code_suggestions.py`
+- Modify: `pr_agent/settings/pr_reviewer_prompts.toml`
+- Modify: `pr_agent/settings/pr_description_prompts.toml`
+- Modify: `pr_agent/settings/code_suggestions/pr_code_suggestions_prompts.toml`
+- Modify: `pr_agent/settings/code_suggestions/pr_code_suggestions_prompts_not_decoupled.toml`
+- Test: `tests/unittest/test_repo_context.py`
+
+- [x] **Step 1: Write failing variable and prompt tests**
+
+Append to `tests/unittest/test_repo_context.py`:
+
+```python
+from jinja2 import Environment, StrictUndefined
+
+from pr_agent.config_loader import get_settings
+
+
+def test_reviewer_prompt_renders_repo_context_block():
+    variables = {
+        "extra_instructions": "",
+        "repo_context": "## AGENTS.md\nRepo purpose",
+        "require_can_be_split_review": False,
+        "related_tickets": "",
+        "require_estimate_contribution_time_cost": False,
+        "require_score": False,
+        "require_tests": True,
+        "question_str": "",
+        "require_security_review": True,
+        "require_todo_scan": False,
+        "require_estimate_effort_to_review": True,
+        "num_max_findings": 3,
+        "num_pr_files": 1,
+        "is_ai_metadata": False,
+    }
+
+    rendered = Environment(undefined=StrictUndefined).from_string(
+        get_settings().pr_review_prompt.system
+    ).render(variables)
+
+    assert "Repository context:" in rendered
+    assert "## AGENTS.md" in rendered
+
+
+def test_description_prompt_renders_repo_context_block():
+    variables = {
+        "extra_instructions": "",
+        "repo_context": "## AGENTS.md\nRepo purpose",
+        "enable_custom_labels": False,
+        "custom_labels_class": "",
+        "enable_semantic_files_types": True,
+        "include_file_summary_changes": True,
+        "enable_pr_diagram": False,
+    }
+
+    rendered = Environment(undefined=StrictUndefined).from_string(
+        get_settings().pr_description_prompt.system
+    ).render(variables)
+
+ assert "Repository context:" in rendered
+ assert "## AGENTS.md" in rendered
+```
+
+- [x] **Step 2: Run tests to verify failure**
+
+Run: `PYTHONPATH=. ./.venv/bin/pytest tests/unittest/test_repo_context.py -q`
+
+Expected: FAIL because prompt templates do not render `repo_context`.
+
+- [x] **Step 3: Wire loader into tools**
+
+In `pr_agent/tools/pr_reviewer.py`, import:
+
+```python
+from pr_agent.algo.repo_context import build_repo_context
+```
+
+Add to `self.vars`:
+
+```python
+ "repo_context": build_repo_context(self.git_provider),
+```
+
+Repeat the same import and variable addition in `pr_agent/tools/pr_description.py` and `pr_agent/tools/pr_code_suggestions.py`.
+
+- [x] **Step 4: Add prompt blocks**
+
+In review, description, and both code suggestion prompt TOML files, add after the `extra_instructions` block:
+
+```jinja
+{%- if repo_context %}
+
+
+Repository context:
+======
+{{ repo_context }}
+======
+{% endif %}
+```
+
+- [x] **Step 5: Run tests to verify pass**
+
+Run: `PYTHONPATH=. ./.venv/bin/pytest tests/unittest/test_repo_context.py -q`
+
+Expected: PASS.
+
+### Task 4: Regression Verification
+
+**Files:**
+- No new files
+
+- [x] **Step 1: Run focused tests**
+
+Run: `PYTHONPATH=. ./.venv/bin/pytest tests/unittest/test_repo_context.py tests/unittest/test_pr_description.py -q`
+
+Expected: PASS.
+
+- [x] **Step 2: Inspect diff**
+
+Run: `git diff -- pr_agent tests docs/superpowers/plans/2026-05-01-repo-context-files.md`
+
+Expected: Diff is scoped to repo context loader, provider method/defaults, prompt wiring, tests, and this plan.
+
+- [x] **Step 3: Commit implementation**
+
+Run:
+
+```bash
+git add pr_agent tests docs/superpowers/plans/2026-05-01-repo-context-files.md
+git commit -m "feat: add configurable repo context files"
+```
+
+Expected: Commit succeeds with only relevant files staged.
diff --git a/docs/superpowers/specs/2026-05-01-repo-context-files-design.md b/docs/superpowers/specs/2026-05-01-repo-context-files-design.md
new file mode 100644
index 0000000000..1ab4369e5f
--- /dev/null
+++ b/docs/superpowers/specs/2026-05-01-repo-context-files-design.md
@@ -0,0 +1,90 @@
+# Config-Driven Repository Context Files
+
+## Goal
+
+PR-Agent should support explicitly configured repository context files, such as `AGENTS.md`, and inject their contents into the AI prompts for PR analysis. This gives the model stable project-level context about repository purpose, architecture, and conventions without requiring teams to duplicate that information into every tool's `extra_instructions`.
+
+## Non-Goals
+
+- Do not automatically read `AGENTS.md` by default.
+- Do not replace existing `extra_instructions` behavior.
+- Do not fail PR commands because an optional context file is missing or unreadable.
+- Do not add broad repository indexing or retrieval.
+
+## Configuration
+
+Add two config keys under `[config]`:
+
+```toml
+repo_context_files = []
+repo_context_max_lines = 500
+```
+
+`repo_context_files` is an ordered list of repository-relative file paths to load from the target repository's default branch. An empty list preserves current behavior.
+
+`repo_context_max_lines` caps the total rendered context across all configured files. This limits prompt growth and keeps the feature predictable.
+
+Example:
+
+```toml
+[config]
+repo_context_files = ["AGENTS.md", "CONTRIBUTING.md"]
+repo_context_max_lines = 300
+```
+
+## Data Flow
+
+1. Repository settings are applied as they are today.
+2. Tool initialization asks the git provider for configured context files.
+3. The provider fetches file contents from the repository's default branch where supported.
+4. PR-Agent formats the loaded content with clear file headers.
+5. Prompt variables include the formatted value as `repo_context`.
+6. Review, describe, and improve prompts render a `Repository context` block only when `repo_context` is non-empty.
+
+## Provider Interface
+
+Add a provider method for fetching arbitrary repository files by path from the default branch. The first implementation should support GitHub because this repo's existing `get_repo_settings` path already reads `.pr_agent.toml` from the default branch through PyGithub.
+
+Providers that do not implement the method should return empty content through the base implementation. Missing files should be logged and skipped.
+
+## Prompt Behavior
+
+`repo_context` should be separate from `extra_instructions`. Extra instructions remain user-authored prompt directives. Repository context is background information the model should consider when evaluating the PR.
+
+The prompt block should be concise:
+
+```text
+Repository context:
+======
+## AGENTS.md
+...
+======
+```
+
+The block will be added to:
+
+- `/review`
+- `/describe`
+- `/improve`
+
+Other tools can opt in later when there is a concrete use case.
+
+## Error Handling
+
+- Empty configuration: no work and no prompt changes.
+- Missing file: debug or warning log, skip the file.
+- Unsupported provider: no context loaded.
+- Invalid path type or empty path: skip with logging.
+- Max line cap exceeded: truncate after the configured total line count.
+
+## Testing
+
+Add focused unit tests covering:
+
+- no configured files produces empty context
+- configured files are fetched and formatted with headers
+- missing files are skipped without raising
+- total context respects `repo_context_max_lines`
+- review, describe, and improve tool variables include `repo_context`
+
+Use provider fakes where possible rather than network calls.
diff --git a/pr_agent/algo/__init__.py b/pr_agent/algo/__init__.py
index a035d08fe8..cdffa5d045 100644
--- a/pr_agent/algo/__init__.py
+++ b/pr_agent/algo/__init__.py
@@ -276,7 +276,16 @@
CLAUDE_EXTENDED_THINKING_MODELS = [
"anthropic/claude-3-7-sonnet-20250219",
- "claude-3-7-sonnet-20250219"
+ "claude-3-7-sonnet-20250219",
+ "anthropic/claude-sonnet-4-6",
+ "claude-sonnet-4-6",
+ "vertex_ai/claude-sonnet-4-6",
+ "bedrock/anthropic.claude-sonnet-4-6",
+ "bedrock/us.anthropic.claude-sonnet-4-6",
+ "bedrock/au.anthropic.claude-sonnet-4-6",
+ "bedrock/eu.anthropic.claude-sonnet-4-6",
+ "bedrock/jp.anthropic.claude-sonnet-4-6",
+ "bedrock/global.anthropic.claude-sonnet-4-6",
]
# Models that require streaming mode
diff --git a/pr_agent/algo/ai_handlers/litellm_ai_handler.py b/pr_agent/algo/ai_handlers/litellm_ai_handler.py
index de9993284d..8f4e1919b0 100644
--- a/pr_agent/algo/ai_handlers/litellm_ai_handler.py
+++ b/pr_agent/algo/ai_handlers/litellm_ai_handler.py
@@ -147,8 +147,22 @@ def __init__(self):
# Models that support reasoning effort
self.support_reasoning_models = SUPPORT_REASONING_EFFORT_MODELS
- # Models that support extended thinking
- self.claude_extended_thinking_models = CLAUDE_EXTENDED_THINKING_MODELS
+ # Models that support extended thinking (config override replaces the built-in list when non-empty)
+ override = get_settings().config.get("claude_extended_thinking_models_override", []) or []
+ if override and not isinstance(override, list):
+ get_logger().warning(
+ "Invalid claude_extended_thinking_models_override in config; expected a list of model names. "
+ "Falling back to the built-in Claude extended-thinking model list."
+ )
+ override = []
+ elif override and not all(isinstance(model, str) and model.strip() for model in override):
+ get_logger().warning(
+ "Invalid claude_extended_thinking_models_override in config; "
+ "expected a list of model name strings. "
+ "Falling back to the built-in Claude extended-thinking model list."
+ )
+ override = []
+ self.claude_extended_thinking_models = list(override) if override else CLAUDE_EXTENDED_THINKING_MODELS
# Models that require streaming
self.streaming_required_models = STREAMING_REQUIRED_MODELS
diff --git a/pr_agent/algo/inline_comments_dedup.py b/pr_agent/algo/inline_comments_dedup.py
new file mode 100644
index 0000000000..7a348c0c18
--- /dev/null
+++ b/pr_agent/algo/inline_comments_dedup.py
@@ -0,0 +1,210 @@
+"""
+Stable-marker deduplication for inline PR comments.
+
+When PR-Agent re-runs /improve or /add_docs on the same PR, each run would
+otherwise post fresh inline comments for suggestions that were already posted.
+This module generates a hidden, content-derived marker that providers embed
+in inline comment bodies so that subsequent runs can recognize and update
+(or skip) the prior comment instead of creating a duplicate.
+"""
+
+from __future__ import annotations
+
+import hashlib
+import re
+import textwrap
+from typing import Any, Optional
+
+# Hidden HTML-comment delimiters wrapped around the dedup hash. The exact
+# token values here are assumed; any stable token that never appears in normal
+# comment text works, since the regex below is built from them verbatim.
+MARKER_PREFIX = "<!-- pr-agent-suggestion-hash: "
+MARKER_SUFFIX = " -->"
+
+# Constants used by the resolve-outdated-inline-comments feature.
+# RESOLVED_BODY_MARKER is appended (with RESOLVED_NOTE) to the body of an
+# inline comment whose suggestion was not re-emitted on the current run.
+# It also serves as an idempotency signal: if a user manually unresolves a
+# thread we previously auto-resolved, the marker remains in the body and
+# tells us not to re-resolve on subsequent runs.
+RESOLVED_NOTE = "Resolved automatically: this suggestion was not re-emitted on the latest run."
+# Exact token value assumed; it only needs to be stable and unique in bodies.
+RESOLVED_BODY_MARKER = "<!-- pr-agent-auto-resolved -->"
+
+PERSISTENT_MODE_OFF = "off"
+PERSISTENT_MODE_UPDATE = "update"
+PERSISTENT_MODE_SKIP = "skip"
+VALID_PERSISTENT_MODES = {PERSISTENT_MODE_OFF, PERSISTENT_MODE_UPDATE, PERSISTENT_MODE_SKIP}
+
+_HASH_LEN = 12
+_CONTENT_PREFIX_LEN = 128
+_MARKER_RE = re.compile(
+ re.escape(MARKER_PREFIX) + r"([0-9a-f]{" + str(_HASH_LEN) + r"})" + re.escape(MARKER_SUFFIX)
+)
+_WHITESPACE_RE = re.compile(r"\s+")
+
+_SEP = "\x00"
+_HASH_VERSION_STRUCTURED = "v2s"
+_HASH_VERSION_PROSE = "v2p"
+
+
+def _pick_content(suggestion: dict) -> Optional[str]:
+ for key in ("suggestion_content", "suggestion_summary", "content"):
+ val = suggestion.get(key)
+ if val:
+ return str(val)
+ return None
+
+
+def _normalize(text: str) -> str:
+ return _WHITESPACE_RE.sub(" ", text).strip()
+
+
+def normalize_code(text: Optional[str]) -> str:
+ """Normalize a proposed-edit code snippet for stable hashing.
+
+ Expands tabs, strips trailing whitespace per line, drops leading and
+ trailing fully-blank lines, and removes the longest common leading
+ whitespace across remaining lines (textwrap.dedent).
+ """
+ if not text:
+ return ""
+ expanded = text.expandtabs()
+ lines = [line.rstrip() for line in expanded.split("\n")]
+ while lines and not lines[0]:
+ lines.pop(0)
+ while lines and not lines[-1]:
+ lines.pop()
+ if not lines:
+ return ""
+ return textwrap.dedent("\n".join(lines))
+
+
+# Dedup identity is structured-first, prose-fallback:
+# - If a suggestion has `improved_code`, the hash covers
+# (version_tag + file + normalized improved_code). Prose wording never
+# affects the key, and label is intentionally excluded — the edit
+# itself is the identity.
+# - Otherwise we fall back to (version_tag + file + label + prose prefix).
+#
+# This is a strict (a) design: prose is NEVER consulted when a structured
+# edit exists. Two suggestions at the same spot with the same prose but
+# different edits intentionally remain separate comments — we'd rather
+# under-merge than over-merge genuinely distinct fixes.
+#
+# Line-range is deliberately NOT in the key so dedup stays stable across
+# upstream pushes that drift the target line (a property the user-facing
+# docs explicitly promise).
+#
+# The version tag (v2s / v2p) lives INSIDE the hashed signature, making
+# the two namespaces preimage-distinct and preventing accidental cross-
+# namespace collisions. The marker grammar is unchanged, so pre-existing
+# v1 markers on live PRs self-heal via the outdated-pass auto-resolve
+# (resolve_outdated_inline_comments) on the first re-run after deployment.
+#
+# A fuzzy near-miss signal (shingle / Jaccard) was considered and deferred;
+# see docs/docs/tools/improve.md and the Serena memory
+# `future_fuzzy_inline_dedup`.
+def generate_marker(suggestion: dict) -> Optional[str]:
+ """Return a stable marker for this suggestion, or None if required fields are missing."""
+ file = suggestion.get("relevant_file")
+ if not file:
+ return None
+ file = str(file).strip()
+ if not file:
+ return None
+
+ improved_code = suggestion.get("improved_code")
+ if isinstance(improved_code, str) and improved_code.strip():
+ sig = _SEP.join([_HASH_VERSION_STRUCTURED, file, normalize_code(improved_code)])
+ else:
+ label = suggestion.get("label")
+ content = _pick_content(suggestion)
+ if not label or not content:
+ return None
+ sig = _SEP.join([
+ _HASH_VERSION_PROSE,
+ file,
+ str(label).strip(),
+ _normalize(content)[:_CONTENT_PREFIX_LEN],
+ ])
+
+ digest = hashlib.sha256(sig.encode("utf-8")).hexdigest()[:_HASH_LEN]
+ return f"{MARKER_PREFIX}{digest}{MARKER_SUFFIX}"
+
+
+def extract_marker(body: str) -> Optional[str]:
+ """Return the last marker hash found in `body`, or None."""
+ if not body:
+ return None
+ matches = _MARKER_RE.findall(body)
+ if not matches:
+ return None
+ return matches[-1]
+
+
+def append_marker(body: str, marker: str) -> str:
+ """Append `marker` to `body` if not already present; idempotent."""
+ if not marker:
+ return body
+ if marker in body:
+ return body
+ sep = "" if body.endswith("\n") else "\n\n"
+ return f"{body}{sep}{marker}"
+
+
+def build_marker_index(comments: list[dict[str, Any]]) -> dict[str, list[dict[str, Any]]]:
+ """Index comments by marker hash, preserving all collisions under the same hash."""
+ index: dict[str, list[dict[str, Any]]] = {}
+ for c in comments or []:
+ body = c.get("body") or ""
+ h = extract_marker(body)
+ if h:
+ index.setdefault(h, []).append(c)
+ return index
+
+
+def find_comment_by_location(
+ candidates: list[dict[str, Any]],
+ relevant_file: str,
+ relevant_lines_start: int,
+ relevant_lines_end: int,
+) -> Optional[dict[str, Any]]:
+ """Return the newest candidate whose stored inline coordinates match this suggestion."""
+ if not candidates:
+ return None
+ expected_path = (relevant_file or "").strip()
+ expected_line = relevant_lines_end if relevant_lines_end > relevant_lines_start else relevant_lines_start
+ expected_start = relevant_lines_start if relevant_lines_end > relevant_lines_start else None
+
+ for candidate in reversed(candidates):
+ candidate_path = str(candidate.get("path") or "").strip()
+ candidate_line = candidate.get("line")
+ candidate_start = candidate.get("start_line")
+ if candidate_path != expected_path:
+ continue
+ if candidate_line != expected_line:
+ continue
+ if candidate_start != expected_start:
+ continue
+ return candidate
+ return None
+
+
+def format_resolved_body(original_body: str) -> str:
+ """Append the auto-resolved note and idempotency marker to ``original_body``.
+
+ Shared by every provider's outdated pass so the on-screen format stays
+ identical and the body marker check (RESOLVED_BODY_MARKER in body) keeps
+ working across providers.
+ """
+ return (
+ (original_body or "").rstrip()
+ + f"\n\n---\n_{RESOLVED_NOTE}_\n{RESOLVED_BODY_MARKER}"
+ )
+
+
+def normalize_persistent_mode(raw: Any) -> str:
+ """Coerce config input to one of the valid modes. Unknown values fall back to 'off'."""
+ if raw is None:
+ return PERSISTENT_MODE_OFF
+ candidate = str(raw).strip().lower()
+ if candidate in VALID_PERSISTENT_MODES:
+ return candidate
+ return PERSISTENT_MODE_OFF
diff --git a/pr_agent/algo/repo_context.py b/pr_agent/algo/repo_context.py
new file mode 100644
index 0000000000..4d15aafe01
--- /dev/null
+++ b/pr_agent/algo/repo_context.py
@@ -0,0 +1,47 @@
+from pr_agent.config_loader import get_settings
+from pr_agent.log import get_logger
+
+
+def build_repo_context(git_provider) -> str:
+ context_files = get_settings().config.get("repo_context_files", [])
+ if not context_files:
+ return ""
+
+ max_lines = get_settings().config.get("repo_context_max_lines", 500)
+ try:
+ max_lines = max(0, int(max_lines))
+ except (TypeError, ValueError):
+ max_lines = 500
+
+ rendered_lines = []
+ for file_path in context_files:
+ if not isinstance(file_path, str) or not file_path.strip():
+ get_logger().warning("Skipping invalid repo context file path", artifact={"file_path": file_path})
+ continue
+
+ file_path = file_path.strip()
+ try:
+ content = git_provider.get_repo_file_content(file_path)
+ except Exception as e:
+ get_logger().warning(f"Failed to load repo context file: {file_path}", artifact={"error": str(e)})
+ continue
+
+ if not content:
+ get_logger().debug(f"Repo context file is empty or missing: {file_path}")
+ continue
+
+ if isinstance(content, bytes):
+ content = content.decode("utf-8", errors="replace")
+
+ file_lines = [f"## {file_path}", *str(content).strip().splitlines()]
+ remaining_lines = max_lines - len(rendered_lines)
+ if remaining_lines <= 0:
+ break
+
+ if rendered_lines:
+ rendered_lines.append("")
+ remaining_lines -= 1
+
+ rendered_lines.extend(file_lines[:remaining_lines])
+
+ return "\n".join(rendered_lines).strip()
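The line-budget loop in `build_repo_context` is the subtle part: the blank separator between files counts against the budget, and a file is clipped rather than dropped when the budget runs low. A sketch with stubbed file contents (no provider or settings involved) mirrors that loop:

```python
def render_context(files: dict, max_lines: int = 500) -> str:
    """Stand-alone mirror of build_repo_context's truncation loop."""
    rendered = []
    for path, content in files.items():
        file_lines = [f"## {path}", *content.strip().splitlines()]
        remaining = max_lines - len(rendered)
        if remaining <= 0:
            break
        if rendered:
            # Blank separator between files is charged to the budget.
            rendered.append("")
            remaining -= 1
        rendered.extend(file_lines[:remaining])
    return "\n".join(rendered).strip()


# Budget of 5: the second file keeps only its header line.
out = render_context({"AGENTS.md": "a\nb", "STYLE.md": "c\nd"}, max_lines=5)
```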
diff --git a/pr_agent/git_providers/git_provider.py b/pr_agent/git_providers/git_provider.py
index 631e189c04..6e9745b11f 100644
--- a/pr_agent/git_providers/git_provider.py
+++ b/pr_agent/git_providers/git_provider.py
@@ -246,8 +246,9 @@ def get_user_description(self) -> str:
start_position = description_lowercase.find(user_description_header) + len(user_description_header)
end_position = len(description)
for header in possible_headers: # try to clip at the next header
- if header != user_description_header and header in description_lowercase:
- end_position = min(end_position, description_lowercase.find(header))
+ next_header_position = description_lowercase.find(header, start_position)
+ if header != user_description_header and next_header_position != -1:
+ end_position = min(end_position, next_header_position)
if end_position != len(description) and end_position > start_position:
original_user_description = description[start_position:end_position].strip()
if original_user_description.endswith("___"):
@@ -274,6 +275,9 @@ def _is_generated_by_pr_agent(self, description_lowercase: str) -> bool:
def get_repo_settings(self):
pass
+ def get_repo_file_content(self, file_path: str):
+ return ""
+
def get_workspace_name(self):
return ""
@@ -338,6 +342,52 @@ def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_
def publish_inline_comments(self, comments: list[dict]):
pass
+ def get_bot_review_comments(self) -> list[dict]:
+ """
+ Return the bot's existing inline (review) comments on the current PR.
+
+ Each dict must contain at least:
+ - 'id': provider-specific comment id (used by edit_review_comment)
+ - 'body': full comment body (used for marker extraction)
+
+ Default: return []. Providers that support inline-comment dedup should override.
+ """
+ return []
+
+ def edit_review_comment(self, comment_id, body: str) -> bool:
+ """
+ Edit an existing inline (review) comment in place.
+
+ Returns True on success, False otherwise. Default: return False (unsupported),
+ which causes persistent-inline-comment dedup to fall back to the create-new path.
+ """
+ return False
+
+ def resolve_review_thread(self, comment: dict) -> bool:
+ """
+ Mark the review thread containing `comment` as resolved.
+
+ `comment` is one of the dicts returned by get_bot_review_comments();
+ providers extract whichever id (thread_id, discussion_id, etc.) they need.
+
+ Returns True on success, False otherwise. Default: return False (unsupported),
+ which causes the resolve-outdated pass to skip this comment.
+
+ Providers wiring this in must also override get_bot_review_comments to
+ include is_resolved on each dict and wire the outdated pass into their
+ publish_code_suggestions; without all three the feature silently no-ops.
+ """
+ return False
+
+ def unresolve_review_thread(self, comment: dict) -> bool:
+ """
+ Mark the review thread containing `comment` as unresolved.
+
+ Used when a previously auto-resolved suggestion is re-emitted on a later run.
+ Returns True on success, False otherwise. Default: return False (unsupported).
+ """
+ return False
+
@abstractmethod
def remove_initial_comment(self):
pass
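The provider hooks above hinge on a stable dedup key: per the documentation, the marker hashes the proposed edit (file plus normalized `improved_code`) and deliberately excludes line numbers so the key survives upstream pushes that drift the target line. A hypothetical sketch of that keying (the marker format and helper name are illustrative, not the actual implementation):

```python
import hashlib

# Assumed marker delimiters for illustration only.
MARKER_PREFIX = "<!-- pr-agent-suggestion:"
MARKER_SUFFIX = " -->"


def generate_marker(suggestion: dict) -> str:
    """Hypothetical dedup key: file + normalized improved_code.

    Line numbers are intentionally excluded; a prose-based fallback is used
    only when no improved_code is available, matching the documented behavior.
    """
    improved = (suggestion.get("improved_code") or "").strip()
    if not improved:
        improved = (suggestion.get("suggestion_content") or "").strip()
    if not improved:
        return ""
    normalized = "\n".join(line.rstrip() for line in improved.splitlines())
    payload = f"{suggestion.get('relevant_file', '')}\n{normalized}"
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]
    return f"{MARKER_PREFIX}{digest}{MARKER_SUFFIX}"


# Same edit at different lines yields the same key.
m1 = generate_marker({"relevant_file": "a.py", "improved_code": "x = 1 \n", "relevant_lines_start": 10})
m2 = generate_marker({"relevant_file": "a.py", "improved_code": "x = 1\n", "relevant_lines_start": 99})
```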
diff --git a/pr_agent/git_providers/github_provider.py b/pr_agent/git_providers/github_provider.py
index fa52b7dc05..60a3e50af3 100644
--- a/pr_agent/git_providers/github_provider.py
+++ b/pr_agent/git_providers/github_provider.py
@@ -7,7 +7,7 @@
import traceback
import json
from datetime import datetime
-from typing import Optional, Tuple
+from typing import Any, Optional, Tuple
from urllib.parse import urlparse
from github.Issue import Issue
@@ -17,6 +17,19 @@
from ..algo.file_filter import filter_ignored
from ..algo.git_patch_processing import extract_hunk_headers
+from ..algo.inline_comments_dedup import (
+ MARKER_PREFIX,
+ MARKER_SUFFIX,
+ PERSISTENT_MODE_OFF,
+ PERSISTENT_MODE_SKIP,
+ RESOLVED_BODY_MARKER,
+ append_marker,
+ build_marker_index,
+ find_comment_by_location,
+ format_resolved_body,
+ generate_marker,
+ normalize_persistent_mode,
+)
from ..algo.language_handler import is_valid_file
from ..algo.types import EDIT_TYPE
from ..algo.utils import (PRReviewHeader, Range, clip_tokens,
@@ -29,6 +42,35 @@
IncrementalPR)
+_TRUE_CONFIG_VALUES = {"1", "true", "t", "yes", "y", "on"}
+_FALSE_CONFIG_VALUES = {"0", "false", "f", "no", "n", "off"}
+
+
+def _normalize_bool_config(raw: Any, *, default: bool, setting_name: str) -> bool:
+ if isinstance(raw, bool):
+ return raw
+ if raw is None:
+ return default
+ if isinstance(raw, int) and raw in (0, 1):
+ return bool(raw)
+ if isinstance(raw, str):
+ value = raw.strip().lower()
+ if value in _TRUE_CONFIG_VALUES:
+ return True
+ if value in _FALSE_CONFIG_VALUES:
+ return False
+
+ get_logger().warning(
+ f"Invalid boolean value for {setting_name}: {raw!r}. "
+ f"Expected one of true/false, 1/0, yes/no, or on/off; using default {default!r}."
+ )
+ return default
+
+
+def _is_valid_review_comment_id(comment_id: Any) -> bool:
+ return isinstance(comment_id, int) and not isinstance(comment_id, bool)
+
+
class GithubProvider(GitProvider):
def __init__(self, pr_url: Optional[str] = None):
self.repo_obj = None
@@ -548,31 +590,269 @@ def _try_fix_invalid_inline_comments(self, invalid_comments: list[dict]) -> list
get_logger().error(f"Failed to fix inline comment, error: {e}")
return fixed_comments
- def publish_code_suggestions(self, code_suggestions: list) -> bool:
+ _BOT_REVIEW_COMMENTS_QUERY = """
+ query($owner:String!, $name:String!, $number:Int!, $cursor:String) {
+ repository(owner:$owner, name:$name) {
+ pullRequest(number:$number) {
+ reviewThreads(first:100, after:$cursor) {
+ pageInfo { hasNextPage endCursor }
+ nodes {
+ id
+ isResolved
+ comments(first:100) {
+ nodes {
+ databaseId
+ body
+ path
+ line
+ startLine
+ author { login }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ """
+
+ def _graphql_url(self) -> str:
+ # On GHES the REST base is `.../api/v3` but GraphQL lives at `.../api/graphql`.
+ # github.com has no `/v3` suffix, so the fallback covers it.
+ base = self.base_url
+ if base.endswith("/api/v3"):
+ return base[: -len("/v3")] + "/graphql"
+ return f"{base}/graphql"
+
+ def get_bot_review_comments(self) -> list[dict]:
"""
- Publishes code suggestions as comments on the PR.
+ Return the bot's existing inline review comments on this PR.
+
+ Uses GraphQL to expose per-thread resolution state and the thread node id
+ (needed by resolve_review_thread / unresolve_review_thread). Filters by
+ author to avoid matching human reviewers. Returns dicts with keys:
+ id, thread_id, body, path, line, start_line, is_resolved.
"""
- post_parameters_list = []
+ try:
+ our_app_name = (get_settings().get("GITHUB.APP_NAME", "") or "").lower()
+ bot_user_id = (self.get_user_id() or "").lower() if self.deployment_type == "user" else ""
+ owner, _, name = self.repo.partition("/")
+ number = self.pr.number
+
+ out: list[dict] = []
+            cursor: Optional[str] = None
+ while True:
+ _, data = self.pr._requester.requestJsonAndCheck(
+ "POST",
+ self._graphql_url(),
+ input={
+ "query": self._BOT_REVIEW_COMMENTS_QUERY,
+ "variables": {"owner": owner, "name": name, "number": number, "cursor": cursor},
+ },
+ )
+ if not data or data.get("errors"):
+ get_logger().warning(
+ f"get_bot_review_comments GraphQL errors: {(data or {}).get('errors')}"
+ )
+ return []
+ threads = (((data.get("data") or {}).get("repository") or {})
+ .get("pullRequest") or {}).get("reviewThreads") or {}
+ page_info = threads.get("pageInfo") or {}
+ for t in threads.get("nodes") or []:
+ thread_id = t.get("id")
+ is_resolved = bool(t.get("isResolved"))
+ for c in ((t.get("comments") or {}).get("nodes") or []):
+ login = ((c.get("author") or {}).get("login") or "").lower()
+ same_author = False
+ if self.deployment_type == "app":
+ same_author = bool(our_app_name) and our_app_name in login
+ elif self.deployment_type == "user":
+ same_author = bool(bot_user_id) and login == bot_user_id
+ if not same_author:
+ continue
+ comment_id = c.get("databaseId")
+ if not _is_valid_review_comment_id(comment_id):
+ get_logger().warning(
+ f"Skipping GitHub review comment with invalid databaseId: {comment_id!r}"
+ )
+ continue
+ out.append({
+ "id": comment_id,
+ "thread_id": thread_id,
+ "body": c.get("body") or "",
+ "path": c.get("path"),
+ "line": c.get("line"),
+ "start_line": c.get("startLine"),
+ "is_resolved": is_resolved,
+ })
+ cursor = page_info.get("endCursor")
+ if not page_info.get("hasNextPage") or not cursor:
+ break
+ return out
+ except Exception as e:
+ get_logger().warning(f"Failed to list GitHub review comments via GraphQL: {e}")
+ return []
+
+ def edit_review_comment(self, comment_id, body: str) -> bool:
+ if not _is_valid_review_comment_id(comment_id):
+ get_logger().warning(f"Skipping GitHub review comment edit with invalid id: {comment_id!r}")
+ return False
+ try:
+ body = self.limit_output_characters(body, self.max_comment_chars)
+ self.pr._requester.requestJsonAndCheck(
+ "PATCH",
+ f"{self.base_url}/repos/{self.repo}/pulls/comments/{comment_id}",
+ input={"body": body},
+ )
+ return True
+ except Exception as e:
+ get_logger().warning(f"Failed to edit GitHub review comment {comment_id}: {e}")
+ return False
+
+ _RESOLVE_THREAD_MUTATION = """
+ mutation($threadId:ID!) {
+ resolveReviewThread(input:{threadId:$threadId}) { thread { isResolved } }
+ }
+ """
+
+ _UNRESOLVE_THREAD_MUTATION = """
+ mutation($threadId:ID!) {
+ unresolveReviewThread(input:{threadId:$threadId}) { thread { isResolved } }
+ }
+ """
+
+ def _run_thread_mutation(self, query: str, comment: dict) -> bool:
+ thread_id = comment.get("thread_id")
+ if not thread_id:
+ return False
+ try:
+ _, data = self.pr._requester.requestJsonAndCheck(
+ "POST",
+ self._graphql_url(),
+ input={"query": query, "variables": {"threadId": thread_id}},
+ )
+ if not data or data.get("errors"):
+ get_logger().warning(
+ f"GitHub thread mutation errors for {thread_id}: {(data or {}).get('errors')}"
+ )
+ return False
+ return True
+ except Exception as e:
+ get_logger().warning(f"GitHub thread mutation failed for {thread_id}: {e}")
+ return False
+ def resolve_review_thread(self, comment: dict) -> bool:
+ return self._run_thread_mutation(self._RESOLVE_THREAD_MUTATION, comment)
+
+ def unresolve_review_thread(self, comment: dict) -> bool:
+ return self._run_thread_mutation(self._UNRESOLVE_THREAD_MUTATION, comment)
+
+ def publish_code_suggestions(self, code_suggestions: list) -> bool:
+ """
+ Publishes code suggestions as review comments on the PR.
+
+ When `pr_code_suggestions.persistent_inline_comments` is 'update' (default)
+ or 'skip', a stable marker is embedded in each body so subsequent runs
+ can recognize and update (or skip) the existing comment rather than
+ creating duplicates.
+ """
code_suggestions_validated = self.validate_comments_inside_hunks(code_suggestions)
+ mode = normalize_persistent_mode(
+ get_settings().pr_code_suggestions.get("persistent_inline_comments", PERSISTENT_MODE_OFF)
+ )
+ resolve_outdated = _normalize_bool_config(
+ get_settings().pr_code_suggestions.get("resolve_outdated_inline_comments", True),
+ default=True,
+ setting_name="pr_code_suggestions.resolve_outdated_inline_comments",
+ )
+
+ existing_index: dict[str, list[dict]] = {}
+ if mode != PERSISTENT_MODE_OFF:
+ try:
+ existing_index = build_marker_index(self.get_bot_review_comments())
+ except Exception as e:
+ get_logger().warning(f"persistent_inline_comments: fetch failed, falling back to create-new: {e}")
+ existing_index = {}
+
+ reused_comment_ids: set[int] = set()
+ post_parameters_list = []
for suggestion in code_suggestions_validated:
- body = suggestion['body']
- relevant_file = suggestion['relevant_file']
- relevant_lines_start = suggestion['relevant_lines_start']
- relevant_lines_end = suggestion['relevant_lines_end']
+ body = suggestion["body"]
+ relevant_file = suggestion["relevant_file"]
+ relevant_lines_start = suggestion["relevant_lines_start"]
+ relevant_lines_end = suggestion["relevant_lines_end"]
if not relevant_lines_start or relevant_lines_start == -1:
get_logger().exception(
f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}")
continue
-
if relevant_lines_end < relevant_lines_start:
- get_logger().exception(f"Failed to publish code suggestion, "
- f"relevant_lines_end is {relevant_lines_end} and "
- f"relevant_lines_start is {relevant_lines_start}")
+ get_logger().exception(
+ f"Failed to publish code suggestion, "
+ f"relevant_lines_end is {relevant_lines_end} and "
+ f"relevant_lines_start is {relevant_lines_start}")
continue
+ if mode != PERSISTENT_MODE_OFF:
+ marker = generate_marker(suggestion.get("original_suggestion") or suggestion)
+ if marker:
+ body = append_marker(body, marker)
+ marker_hash = marker[len(MARKER_PREFIX):-len(MARKER_SUFFIX)]
+ existing = find_comment_by_location(
+ existing_index.get(marker_hash, []),
+ relevant_file,
+ relevant_lines_start,
+ relevant_lines_end,
+ )
+ if existing is not None:
+ existing_id = existing.get("id")
+ if not _is_valid_review_comment_id(existing_id):
+ get_logger().warning(
+ f"persistent_inline_comments: ignoring existing GitHub comment with invalid id "
+ f"{existing_id!r} on {relevant_file}"
+ )
+ existing = None
+ else:
+ existing_id = int(existing_id)
+ if existing is not None:
+ if mode == PERSISTENT_MODE_SKIP:
+ if resolve_outdated and (
+ existing.get("is_resolved")
+ or RESOLVED_BODY_MARKER in (existing.get("body") or "")
+ ):
+ if self.edit_review_comment(existing_id, body):
+ reused_comment_ids.add(existing_id)
+ if existing.get("is_resolved"):
+ self.unresolve_review_thread(existing)
+ continue
+ get_logger().info(
+ f"persistent_inline_comments=skip: reopen failed for {existing_id}; "
+ f"falling back to create-new"
+ )
+ else:
+ reused_comment_ids.add(existing_id)
+ get_logger().info(
+ f"persistent_inline_comments=skip: existing comment {existing_id} "
+ f"on {relevant_file}; not re-posting")
+ continue
+ else:
+ # mode == update
+ if self.edit_review_comment(existing_id, body):
+ reused_comment_ids.add(existing_id)
+ # If we previously auto-resolved this thread but the
+ # suggestion is back, unresolve it.
+ if resolve_outdated and existing.get("is_resolved"):
+ self.unresolve_review_thread(existing)
+ continue
+ get_logger().info(
+ f"persistent_inline_comments=update: edit failed for {existing_id}; "
+ f"falling back to create-new")
+ elif mode == PERSISTENT_MODE_SKIP:
+ # No same-location match, so allow a new inline comment and
+ # let the outdated pass resolve any stale thread with this marker.
+ pass
+
if relevant_lines_end > relevant_lines_start:
post_parameters = {
"body": body,
@@ -581,7 +861,7 @@ def publish_code_suggestions(self, code_suggestions: list) -> bool:
"start_line": relevant_lines_start,
"start_side": "RIGHT",
}
- else: # API is different for single line comments
+            else:  # API shape differs for single-line comments
post_parameters = {
"body": body,
"path": relevant_file,
@@ -590,6 +870,46 @@ def publish_code_suggestions(self, code_suggestions: list) -> bool:
}
post_parameters_list.append(post_parameters)
+ # ---- Outdated pass: resolve threads whose marker is no longer emitted ----
+ if mode != PERSISTENT_MODE_OFF and resolve_outdated:
+ for candidates in existing_index.values():
+ for c in candidates:
+ comment_id = c.get("id")
+ if not _is_valid_review_comment_id(comment_id):
+ get_logger().warning(
+ f"resolve_outdated_inline_comments: skipping GitHub comment with invalid id "
+ f"{comment_id!r}"
+ )
+ continue
+ comment_id = int(comment_id)
+ if comment_id in reused_comment_ids:
+ continue
+ if c.get("is_resolved"):
+ continue
+ if RESOLVED_BODY_MARKER in (c.get("body") or ""):
+ continue
+                    # Per-iteration guard: one failing thread must not abort the rest of the pass.
+ try:
+ if not self.resolve_review_thread(c):
+ continue
+ if not self.edit_review_comment(comment_id, format_resolved_body(c.get("body") or "")):
+ get_logger().warning(
+ f"resolve_outdated_inline_comments: failed to write resolved marker for "
+ f"GitHub comment {comment_id}; attempting to unresolve thread"
+ )
+ if not self.unresolve_review_thread(c):
+ get_logger().warning(
+ f"resolve_outdated_inline_comments: failed to unresolve GitHub thread "
+ f"after marker write failed for comment {comment_id}"
+ )
+ except Exception as e:
+ get_logger().warning(
+ f"resolve_outdated_inline_comments: outdated pass failed for {comment_id}: {e}"
+ )
+
+ if not post_parameters_list:
+ return True
+
try:
self.publish_inline_comments(post_parameters_list)
return True
@@ -737,7 +1057,18 @@ def get_repo_settings(self):
# more logical to take 'pr_agent.toml' from the default branch
contents = self.repo_obj.get_contents(".pr_agent.toml").decoded_content
return contents
- except Exception:
+ except Exception as e:
+ get_logger().warning(f"Failed to load .pr_agent.toml file, error: {e}")
+ return ""
+
+ def get_repo_file_content(self, file_path: str):
+ try:
+ contents = self.repo_obj.get_contents(file_path).decoded_content
+ if isinstance(contents, bytes):
+ return contents.decode("utf-8", errors="replace")
+ return contents
+ except Exception as e:
+ get_logger().warning(f"Failed to load repo file: {file_path}, error: {e}")
return ""
def get_workspace_name(self):
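The `_graphql_url` derivation above is worth a standalone check, since GHES and github.com expose GraphQL at different paths relative to the REST base. A mirror of that logic:

```python
def graphql_url(rest_base: str) -> str:
    """Mirror of _graphql_url: GHES REST base `.../api/v3` maps to
    `.../api/graphql`; github.com's base lacks the `/v3` suffix, so the
    fallback simply appends `/graphql`."""
    if rest_base.endswith("/api/v3"):
        return rest_base[: -len("/v3")] + "/graphql"
    return f"{rest_base}/graphql"
```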
diff --git a/pr_agent/git_providers/gitlab_provider.py b/pr_agent/git_providers/gitlab_provider.py
index 7f5937343a..4230218417 100644
--- a/pr_agent/git_providers/gitlab_provider.py
+++ b/pr_agent/git_providers/gitlab_provider.py
@@ -14,6 +14,19 @@
from ..algo.file_filter import filter_ignored
from ..algo.git_patch_processing import decode_if_bytes
+from ..algo.inline_comments_dedup import (
+ MARKER_PREFIX,
+ MARKER_SUFFIX,
+ PERSISTENT_MODE_OFF,
+ PERSISTENT_MODE_SKIP,
+ RESOLVED_BODY_MARKER,
+ append_marker,
+ build_marker_index,
+ find_comment_by_location,
+ format_resolved_body,
+ generate_marker,
+ normalize_persistent_mode,
+)
from ..algo.language_handler import is_valid_file
from ..algo.utils import (clip_tokens,
find_line_number_of_relevant_line_in_file,
@@ -654,7 +667,119 @@ def get_relevant_diff(self, relevant_file: str, relevant_line_in_file: str) -> O
f'No relevant diff found for {relevant_file} {relevant_line_in_file}. Falling back to last diff.')
return self.last_diff # fallback to last_diff if no relevant diff is found
+ def get_bot_review_comments(self) -> list[dict]:
+ """
+ Return the bot's existing inline review comments on this MR.
+
+ Iterates MR discussions and collects DiffNote-type notes authored by the
+ authenticated bot user. Returns a list of dicts with:
+ id, body, path, line, start_line, thread_id, discussion_id, is_resolved.
+ On any exception, logs a warning and returns [].
+ """
+ try:
+ if getattr(self.gl, "user", None) is None:
+ try:
+ self.gl.auth()
+ except Exception:
+ pass
+ bot_username = getattr(getattr(self.gl, "user", None), "username", None)
+ if not bot_username:
+ get_logger().warning("get_bot_review_comments: could not determine bot username")
+ return []
+ bot_username = bot_username.lower()
+
+ out = []
+ for discussion in self.mr.discussions.list(get_all=True):
+ notes = discussion.attributes.get("notes") or []
+ for note in notes:
+ # Only inline/diff notes
+ if note.get("type") != "DiffNote":
+ continue
+ author_username = ((note.get("author") or {}).get("username") or "").lower()
+ if author_username != bot_username:
+ continue
+ position = note.get("position") or {}
+ line_range = position.get("line_range") or {}
+ start_line = None
+ if line_range:
+ start = line_range.get("start") or {}
+ start_line = start.get("new_line")
+                    is_resolved = getattr(discussion, "resolved", None)
+                    if is_resolved is None:
+                        is_resolved = notes[0].get("resolved", False)
+ out.append({
+ "id": note.get("id"),
+ "thread_id": discussion.id,
+ "discussion_id": discussion.id, # back-compat alias
+ "body": note.get("body") or "",
+ "path": position.get("new_path"),
+ "line": position.get("new_line"),
+ "start_line": start_line,
+ "is_resolved": bool(is_resolved),
+ })
+ return out
+ except Exception as e:
+ get_logger().warning(f"Failed to list GitLab review comments: {e}")
+ return []
+
+ def edit_review_comment(self, comment_id, body: str) -> bool:
+ """
+ Edit an existing MR note by id.
+
+ Uses self.mr.notes.get(comment_id) and saves the updated body.
+ Returns True on success, False with a warning log on failure.
+ """
+ try:
+ body = self.limit_output_characters(body, self.max_comment_chars)
+ note = self.mr.notes.get(comment_id)
+ note.body = body
+ note.save()
+ return True
+ except Exception as e:
+ get_logger().warning(f"Failed to edit GitLab review comment {comment_id}: {e}")
+ return False
+
+ def _set_discussion_resolved(self, comment: dict, resolved: bool) -> bool:
+ thread_id = comment.get("thread_id") or comment.get("discussion_id")
+ if not thread_id:
+ return False
+ try:
+ d = self.mr.discussions.get(thread_id)
+ if not getattr(d, "resolvable", True): # fail-open: missing attr -> attempt
+ return False
+ d.resolved = resolved
+ d.save()
+ return True
+ except Exception as e:
+ get_logger().warning(f"GitLab set-resolved={resolved} failed for {thread_id}: {e}")
+ return False
+
+ def resolve_review_thread(self, comment: dict) -> bool:
+ return self._set_discussion_resolved(comment, True)
+
+ def unresolve_review_thread(self, comment: dict) -> bool:
+ return self._set_discussion_resolved(comment, False)
+
def publish_code_suggestions(self, code_suggestions: list) -> bool:
+ mode = normalize_persistent_mode(
+ get_settings().pr_code_suggestions.get("persistent_inline_comments", PERSISTENT_MODE_OFF)
+ )
+        resolve_outdated_raw = get_settings().pr_code_suggestions.get("resolve_outdated_inline_comments", True)
+        if isinstance(resolve_outdated_raw, str):
+            # bool("false") is True, so coerce string config values explicitly
+            resolve_outdated = resolve_outdated_raw.strip().lower() not in ("0", "false", "f", "no", "n", "off")
+        else:
+            resolve_outdated = bool(resolve_outdated_raw)
+
+ existing_index: dict[str, list[dict]] = {}
+ reused_comment_ids: set[int] = set()
+ if mode != PERSISTENT_MODE_OFF:
+ try:
+ existing_index = build_marker_index(self.get_bot_review_comments())
+ except Exception as e:
+ get_logger().warning(
+ f"persistent_inline_comments: fetch failed, falling back to create-new: {e}"
+ )
+ existing_index = {}
+
for suggestion in code_suggestions:
try:
if suggestion and 'original_suggestion' in suggestion:
@@ -666,13 +791,57 @@ def publish_code_suggestions(self, code_suggestions: list) -> bool:
relevant_lines_start = suggestion['relevant_lines_start']
relevant_lines_end = suggestion['relevant_lines_end']
+ if mode != PERSISTENT_MODE_OFF:
+ marker = generate_marker(suggestion.get("original_suggestion") or suggestion)
+ if marker:
+ body = append_marker(body, marker)
+ marker_hash = marker[len(MARKER_PREFIX):-len(MARKER_SUFFIX)]
+ existing = find_comment_by_location(
+ existing_index.get(marker_hash, []),
+ relevant_file,
+ relevant_lines_start,
+ relevant_lines_end,
+ )
+ if existing is not None:
+ if mode == PERSISTENT_MODE_SKIP:
+ if resolve_outdated and (
+ existing.get("is_resolved")
+ or RESOLVED_BODY_MARKER in (existing.get("body") or "")
+ ):
+ if self.edit_review_comment(existing.get("id"), body):
+ reused_comment_ids.add(existing.get("id"))
+ if existing.get("is_resolved"):
+ self.unresolve_review_thread(existing)
+ continue
+ get_logger().info(
+ f"persistent_inline_comments=skip: reopen failed for {existing.get('id')}; "
+ f"falling back to create-new"
+ )
+ else:
+ reused_comment_ids.add(existing.get("id"))
+ get_logger().info(
+ f"persistent_inline_comments=skip: existing comment {existing.get('id')} "
+ f"on {relevant_file}; not re-posting"
+ )
+ continue
+ else:
+ # mode == update
+ if self.edit_review_comment(existing.get("id"), body):
+ reused_comment_ids.add(existing.get("id"))
+ if resolve_outdated and existing.get("is_resolved"):
+ self.unresolve_review_thread(existing)
+ continue
+ get_logger().info(
+ f"persistent_inline_comments=update: edit failed for {existing.get('id')}; "
+ f"falling back to create-new"
+ )
+
diff_files = self.get_diff_files()
target_file = None
for file in diff_files:
if file.filename == relevant_file:
- if file.filename == relevant_file:
- target_file = file
- break
+ target_file = file
+ break
range = relevant_lines_end - relevant_lines_start # no need to add 1
body = body.replace('```suggestion', f'```suggestion:-0+{range}')
lines = target_file.head_file.splitlines()
@@ -691,6 +860,35 @@ def publish_code_suggestions(self, code_suggestions: list) -> bool:
except Exception as e:
get_logger().exception(f"Could not publish code suggestion:\nsuggestion: {suggestion}\nerror: {e}")
+ # ---- Outdated pass: resolve threads whose marker is no longer emitted ----
+ if mode != PERSISTENT_MODE_OFF and resolve_outdated:
+ for candidates in existing_index.values():
+ for c in candidates:
+ if c.get("id") in reused_comment_ids:
+ continue
+ if c.get("is_resolved"):
+ continue
+ if RESOLVED_BODY_MARKER in (c.get("body") or ""):
+ continue
+                    # Per-iteration guard: one failing thread must not abort the rest of the pass.
+ try:
+ if not self.resolve_review_thread(c):
+ continue
+ if not self.edit_review_comment(c.get("id"), format_resolved_body(c.get("body") or "")):
+ get_logger().warning(
+ f"resolve_outdated_inline_comments: failed to write resolved marker for "
+ f"GitLab comment {c.get('id')}; attempting to unresolve thread"
+ )
+ if not self.unresolve_review_thread(c):
+ get_logger().warning(
+ f"resolve_outdated_inline_comments: failed to unresolve GitLab thread "
+ f"after marker write failed for comment {c.get('id')}"
+ )
+ except Exception as e:
+ get_logger().warning(
+ f"resolve_outdated_inline_comments: outdated pass failed for {c.get('id')}: {e}"
+ )
+
# note that we publish suggestions one-by-one. so, if one fails, the rest will still be published
return True
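Both providers apply the same eligibility filter in their outdated passes: a thread is auto-resolved only if its suggestion was not re-emitted this run, the thread is not already resolved, and the body does not already carry the auto-resolve marker. That filter can be sketched on its own (the marker value is an assumption):

```python
RESOLVED_BODY_MARKER = "<!-- pr-agent:auto-resolved -->"  # assumed marker value


def should_auto_resolve(comment: dict, reused_ids: set) -> bool:
    """Eligibility filter shared by the GitHub and GitLab outdated passes."""
    if comment.get("id") in reused_ids:
        return False          # suggestion was re-emitted this run
    if comment.get("is_resolved"):
        return False          # thread already resolved (possibly by a human)
    if RESOLVED_BODY_MARKER in (comment.get("body") or ""):
        return False          # idempotency: already auto-resolved once
    return True


stale = {"id": 1, "is_resolved": False, "body": "old suggestion"}
reused = {"id": 2, "is_resolved": False, "body": "current suggestion"}
```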
diff --git a/pr_agent/servers/azuredevops_server_webhook.py b/pr_agent/servers/azuredevops_server_webhook.py
index 8eacbf664c..36a93613d8 100644
--- a/pr_agent/servers/azuredevops_server_webhook.py
+++ b/pr_agent/servers/azuredevops_server_webhook.py
@@ -93,8 +93,8 @@ def authorize(credentials: HTTPBasicCredentials = Depends(security)):
async def _perform_commands_azure(commands_conf: str, agent: PRAgent, api_url: str, log_context: dict):
apply_repo_settings(api_url)
- if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
- get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}", **log_context)
+ if get_settings().config.disable_auto_feedback:
+ get_logger().info(f"Auto feedback is disabled, skipping auto commands for {api_url=}", **log_context)
return
commands = get_settings().get(f"azure_devops_server.{commands_conf}")
if not commands:
diff --git a/pr_agent/servers/bitbucket_app.py b/pr_agent/servers/bitbucket_app.py
index 272332767e..5baa6ff52b 100644
--- a/pr_agent/servers/bitbucket_app.py
+++ b/pr_agent/servers/bitbucket_app.py
@@ -138,8 +138,8 @@ async def _validate_time_from_last_commit_to_pr_update(data: dict) -> bool:
async def _perform_commands_bitbucket(commands_conf: str, agent: PRAgent, api_url: str, log_context: dict, data: dict):
apply_repo_settings(api_url)
- if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
- get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}")
+ if get_settings().config.disable_auto_feedback:
+ get_logger().info(f"Auto feedback is disabled, skipping auto commands for {api_url=}")
return
if commands_conf == "push_commands":
if not get_settings().get("bitbucket_app.handle_push_trigger"):
diff --git a/pr_agent/servers/gitea_app.py b/pr_agent/servers/gitea_app.py
index 4239cf51e2..6abe9114b2 100644
--- a/pr_agent/servers/gitea_app.py
+++ b/pr_agent/servers/gitea_app.py
@@ -129,8 +129,8 @@ async def handle_comment_event(body: Dict[str, Any], event: str, action: str, ag
async def _perform_commands_gitea(commands_conf: str, agent: PRAgent, body: dict, api_url: str):
apply_repo_settings(api_url)
- if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
- get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}")
+ if get_settings().config.disable_auto_feedback:
+ get_logger().info(f"Auto feedback is disabled, skipping auto commands for {api_url=}")
return
if not should_process_pr_logic(body): # Here we already updated the configuration with the repo settings
return {}
diff --git a/pr_agent/servers/github_action_runner.py b/pr_agent/servers/github_action_runner.py
index 687f4506c1..467dbfd52e 100644
--- a/pr_agent/servers/github_action_runner.py
+++ b/pr_agent/servers/github_action_runner.py
@@ -80,6 +80,10 @@ async def run_action():
except Exception as e:
get_logger().info(f"github action: failed to apply repo settings: {e}")
+ if get_settings().config.disable_auto_feedback:
+ get_logger().info("Auto feedback is disabled, skipping GitHub Action automatic tools")
+ return
+
# Append the response language in the extra instructions
try:
response_language = get_settings().config.get('response_language', 'en-us')
diff --git a/pr_agent/servers/github_app.py b/pr_agent/servers/github_app.py
index b94b79e32f..d72b1129cc 100644
--- a/pr_agent/servers/github_app.py
+++ b/pr_agent/servers/github_app.py
@@ -395,8 +395,8 @@ def _check_pull_request_event(action: str, body: dict, log_context: dict) -> Tup
async def _perform_auto_commands_github(commands_conf: str, agent: PRAgent, body: dict, api_url: str,
log_context: dict):
apply_repo_settings(api_url)
- if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
- get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}")
+ if get_settings().config.disable_auto_feedback:
+ get_logger().info(f"Auto feedback is disabled, skipping auto commands for {api_url=}")
return
if not should_process_pr_logic(body): # Here we already updated the configuration with the repo settings
return {}
diff --git a/pr_agent/servers/gitlab_webhook.py b/pr_agent/servers/gitlab_webhook.py
index 6647a41b11..2ce8752bb9 100644
--- a/pr_agent/servers/gitlab_webhook.py
+++ b/pr_agent/servers/gitlab_webhook.py
@@ -39,8 +39,8 @@ async def handle_request(api_url: str, body: str, log_context: dict, sender_id:
async def _perform_commands_gitlab(commands_conf: str, agent: PRAgent, api_url: str,
log_context: dict, data: dict):
apply_repo_settings(api_url)
- if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
- get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}", **log_context)
+ if get_settings().config.disable_auto_feedback:
+ get_logger().info(f"Auto feedback is disabled, skipping auto commands for {api_url=}", **log_context)
return
if not should_process_pr_logic(data): # Here we already updated the configurations
return
diff --git a/pr_agent/settings/code_suggestions/pr_code_suggestions_prompts.toml b/pr_agent/settings/code_suggestions/pr_code_suggestions_prompts.toml
index 36b4d0dcf6..1d2e402ae2 100644
--- a/pr_agent/settings/code_suggestions/pr_code_suggestions_prompts.toml
+++ b/pr_agent/settings/code_suggestions/pr_code_suggestions_prompts.toml
@@ -82,6 +82,15 @@ Extra user-provided instructions (should be addressed with high priority):
======
{%- endif %}
+{%- if repo_context %}
+
+
+Repository context:
+======
+{{ repo_context }}
+======
+{%- endif %}
+
The output must be a YAML object equivalent to type $PRCodeSuggestions, according to the following Pydantic definitions:
=====
diff --git a/pr_agent/settings/code_suggestions/pr_code_suggestions_prompts_not_decoupled.toml b/pr_agent/settings/code_suggestions/pr_code_suggestions_prompts_not_decoupled.toml
index 6178ee23c0..de4cb7de05 100644
--- a/pr_agent/settings/code_suggestions/pr_code_suggestions_prompts_not_decoupled.toml
+++ b/pr_agent/settings/code_suggestions/pr_code_suggestions_prompts_not_decoupled.toml
@@ -71,6 +71,15 @@ Extra user-provided instructions (should be addressed with high priority):
======
{%- endif %}
+{%- if repo_context %}
+
+
+Repository context:
+======
+{{ repo_context }}
+======
+{%- endif %}
+
The output must be a YAML object equivalent to type $PRCodeSuggestions, according to the following Pydantic definitions:
=====
diff --git a/pr_agent/settings/configuration.toml b/pr_agent/settings/configuration.toml
index 16ffbcae2a..4746d853ce 100644
--- a/pr_agent/settings/configuration.toml
+++ b/pr_agent/settings/configuration.toml
@@ -25,6 +25,8 @@ ai_timeout=120 # 2 minutes
skip_keys = []
custom_reasoning_model = false # when true, disables system messages and temperature controls for models that don't support chat-style inputs
response_language="en-US" # Locale code for PR responses, in ISO 639 + ISO 3166 format (e.g., "en-US", "it-IT", "zh-CN", ...)
+repo_context_files = [] # Explicit repository-relative files to include as AI prompt context, e.g. ["AGENTS.md"]
+repo_context_max_lines = 500 # Maximum total lines rendered from configured repo context files
# token limits
max_description_tokens = 500
max_commits_tokens = 500
@@ -61,6 +63,10 @@ reasoning_effort = "medium" # "low", "medium", "high"
enable_claude_extended_thinking = false # Set to true to enable extended thinking feature
extended_thinking_budget_tokens = 2048
extended_thinking_max_output_tokens = 4096
+# Optional: override the built-in list of Claude models that receive the extended-thinking payload.
+# When non-empty, this list fully replaces the built-in defaults (see CLAUDE_EXTENDED_THINKING_MODELS
+# in pr_agent/algo/__init__.py). Leave empty to use the defaults.
+claude_extended_thinking_models_override = []
# Extract issue number from PR source branch name (e.g. feature/1-auth-google -> issue #1). When true, branch-derived
# issue URLs are merged with tickets from the PR description for compliance. Set to false to restore description-only behaviour.
# Note: Branch-name extraction is GitHub-only for now; other providers planned for later.
@@ -137,6 +143,15 @@ extra_instructions = ""
enable_help_text=false
enable_chat_text=false
persistent_comment=true
+# Deduplicate inline suggestions across re-runs by embedding a content-hash marker.
+# "update": edit matching existing comment in place (default)
+# "skip": skip if a matching comment already exists
+# "off": always post a new comment (legacy behavior)
+persistent_inline_comments = "update"
+# When a previously-posted inline suggestion is no longer emitted on a re-run,
+# resolve its thread (and append a short note) so reviewers see only currently
+# relevant comments. Has no effect when persistent_inline_comments = "off".
+resolve_outdated_inline_comments = true
max_history_len=4
publish_output_no_suggestions=true
# suggestions scoring
diff --git a/pr_agent/settings/pr_description_prompts.toml b/pr_agent/settings/pr_description_prompts.toml
index 2627401ea9..9206fcf403 100644
--- a/pr_agent/settings/pr_description_prompts.toml
+++ b/pr_agent/settings/pr_description_prompts.toml
@@ -16,6 +16,13 @@ Extra instructions from the user:
=====
{% endif %}
+{%- if repo_context %}
+
+Repository context:
+=====
+{{ repo_context }}
+=====
+{% endif %}
The output must be a YAML object equivalent to type $PRDescription, according to the following Pydantic definitions:
=====
diff --git a/pr_agent/settings/pr_reviewer_prompts.toml b/pr_agent/settings/pr_reviewer_prompts.toml
index bbe6c6d04c..8cef301618 100644
--- a/pr_agent/settings/pr_reviewer_prompts.toml
+++ b/pr_agent/settings/pr_reviewer_prompts.toml
@@ -69,6 +69,15 @@ Extra instructions from the user:
======
{% endif %}
+{%- if repo_context %}
+
+
+Repository context:
+======
+{{ repo_context }}
+======
+{% endif %}
+
The output must be a YAML object equivalent to type $PRReview, according to the following Pydantic definitions:
=====
diff --git a/pr_agent/tools/pr_add_docs.py b/pr_agent/tools/pr_add_docs.py
index 3ec97b31ce..2c3a495ea8 100644
--- a/pr_agent/tools/pr_add_docs.py
+++ b/pr_agent/tools/pr_add_docs.py
@@ -120,9 +120,18 @@ def push_inline_docs(self, data):
add_original_line=True)
body = f"**Suggestion:** Proposed documentation\n```suggestion\n" + new_code_snippet + "\n```"
+ original_suggestion = {
+ "relevant_file": relevant_file,
+ "relevant_lines_start": relevant_line,
+ "relevant_lines_end": relevant_line,
+ "label": "documentation",
+ "suggestion_content": "Proposed documentation",
+ "improved_code": new_code_snippet,
+ }
docs.append({'body': body, 'relevant_file': relevant_file,
'relevant_lines_start': relevant_line,
- 'relevant_lines_end': relevant_line})
+ 'relevant_lines_end': relevant_line,
+ 'original_suggestion': original_suggestion})
except Exception:
if get_settings().config.verbosity_level >= 2:
get_logger().info(f"Could not parse code docs: {d}")
diff --git a/pr_agent/tools/pr_code_suggestions.py b/pr_agent/tools/pr_code_suggestions.py
index bbdf58e46d..9cfffc6fab 100644
--- a/pr_agent/tools/pr_code_suggestions.py
+++ b/pr_agent/tools/pr_code_suggestions.py
@@ -17,6 +17,7 @@
from pr_agent.algo.pr_processing import (add_ai_metadata_to_diff_files,
get_pr_diff, get_pr_multi_diffs,
retry_with_fallback_models)
+from pr_agent.algo.repo_context import build_repo_context
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.algo.utils import (ModelType, load_yaml, replace_code_tags,
show_relevant_configurations, get_max_tokens, clip_tokens, get_model)
@@ -66,6 +67,7 @@ def __init__(self, pr_url: str, cli_mode=False, args: list = None,
"diff_no_line_numbers": "", # empty diff for initial calculation
"num_code_suggestions": num_code_suggestions,
"extra_instructions": get_settings().pr_code_suggestions.extra_instructions,
+ "repo_context": build_repo_context(self.git_provider),
"commit_messages_str": self.git_provider.get_commit_messages(),
"relevant_best_practices": "",
"is_ai_metadata": get_settings().get("config.enable_ai_metadata", False),
@@ -941,4 +943,4 @@ async def self_reflect_on_suggestions(self,
except Exception as e:
get_logger().info(f"Could not reflect on suggestions, error: {e}")
return ""
- return response_reflect
\ No newline at end of file
+ return response_reflect
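A hedged sketch of what `build_repo_context` plausibly does with the two new settings (`repo_context_files`, `repo_context_max_lines`): render the configured repo-relative files into one prompt block, capped at a total line budget. The `get_file_content(path)` accessor on the provider and the `--- path ---` framing are assumptions for illustration, not the actual implementation.

```python
def build_repo_context(git_provider,
                       context_files=("AGENTS.md",),
                       max_lines=500) -> str:
    """Render configured repository files into a single prompt-context block.

    Sketch only: the real helper reads config.repo_context_files and caps
    output at config.repo_context_max_lines; here both arrive as parameters,
    and file access goes through an assumed get_file_content(path) method.
    """
    rendered, budget = [], max_lines
    for path in context_files:
        try:
            content = git_provider.get_file_content(path)
        except Exception:
            continue  # missing or unreadable files are skipped silently
        if not content:
            continue
        lines = content.splitlines()[:budget]
        budget -= len(lines)
        rendered.append(f"--- {path} ---\n" + "\n".join(lines))
        if budget <= 0:
            break
    return "\n\n".join(rendered)
```

An empty result renders nothing, which matches the `{%- if repo_context %}` guards added to the prompt templates above.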
diff --git a/pr_agent/tools/pr_description.py b/pr_agent/tools/pr_description.py
index 26ea5d190a..1b2135fba8 100644
--- a/pr_agent/tools/pr_description.py
+++ b/pr_agent/tools/pr_description.py
@@ -14,6 +14,7 @@
get_pr_diff,
get_pr_diff_multiple_patchs,
retry_with_fallback_models)
+from pr_agent.algo.repo_context import build_repo_context
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.algo.utils import (ModelType, PRDescriptionHeader, clip_tokens,
get_max_tokens, get_user_labels, load_yaml,
@@ -67,6 +68,7 @@ def __init__(self, pr_url: str, args: list = None,
"language": self.main_pr_language,
"diff": "", # empty diff for initial calculation
"extra_instructions": get_settings().pr_description.extra_instructions,
+ "repo_context": build_repo_context(self.git_provider),
"commit_messages_str": self.git_provider.get_commit_messages(),
"enable_custom_labels": get_settings().config.enable_custom_labels,
"custom_labels_class": "", # will be filled if necessary in 'set_custom_labels' function
diff --git a/pr_agent/tools/pr_reviewer.py b/pr_agent/tools/pr_reviewer.py
index c4917f3597..437b8f9474 100644
--- a/pr_agent/tools/pr_reviewer.py
+++ b/pr_agent/tools/pr_reviewer.py
@@ -12,6 +12,7 @@
from pr_agent.algo.pr_processing import (add_ai_metadata_to_diff_files,
get_pr_diff,
retry_with_fallback_models)
+from pr_agent.algo.repo_context import build_repo_context
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.algo.utils import (ModelType, PRReviewHeader,
convert_to_markdown_v2, github_action_output,
@@ -92,6 +93,7 @@ def __init__(self, pr_url: str, is_answer: bool = False, is_auto: bool = False,
'question_str': question_str,
'answer_str': answer_str,
"extra_instructions": get_settings().pr_reviewer.extra_instructions,
+ "repo_context": build_repo_context(self.git_provider),
"commit_messages_str": self.git_provider.get_commit_messages(),
"custom_labels": "",
"enable_custom_labels": get_settings().config.enable_custom_labels,
diff --git a/tests/unittest/test_add_docs_inline_dedup.py b/tests/unittest/test_add_docs_inline_dedup.py
new file mode 100644
index 0000000000..564761e37c
--- /dev/null
+++ b/tests/unittest/test_add_docs_inline_dedup.py
@@ -0,0 +1,41 @@
+from unittest.mock import MagicMock, patch
+
+from pr_agent.algo.inline_comments_dedup import generate_marker
+from pr_agent.tools.pr_add_docs import PRAddDocs
+
+
+def test_push_inline_docs_emits_markerable_original_suggestion():
+ add_docs = PRAddDocs.__new__(PRAddDocs)
+ add_docs.git_provider = MagicMock()
+ add_docs.git_provider.publish_code_suggestions = MagicMock(return_value=True)
+ add_docs.dedent_code = MagicMock(return_value='"""Explain foo."""\nfoo()')
+
+ fake_settings = MagicMock()
+ fake_settings.config.verbosity_level = 0
+
+ data = {
+ "Code Documentation": [
+ {
+ "relevant file": "src/app.py",
+ "relevant line": 12,
+ "documentation": '"""Explain foo."""',
+ "doc placement": "before",
+ }
+ ]
+ }
+
+ with patch("pr_agent.tools.pr_add_docs.get_settings", return_value=fake_settings):
+ add_docs.push_inline_docs(data)
+
+ add_docs.git_provider.publish_code_suggestions.assert_called_once()
+ docs = add_docs.git_provider.publish_code_suggestions.call_args[0][0]
+ assert len(docs) == 1
+ suggestion = docs[0]
+ original = suggestion["original_suggestion"]
+ assert original["relevant_file"] == "src/app.py"
+ assert original["relevant_lines_start"] == 12
+ assert original["relevant_lines_end"] == 12
+ assert original["label"] == "documentation"
+ assert original["suggestion_content"] == "Proposed documentation"
+ assert original["improved_code"] == '"""Explain foo."""\nfoo()'
+ assert generate_marker(original) is not None
diff --git a/tests/unittest/test_disable_auto_feedback.py b/tests/unittest/test_disable_auto_feedback.py
new file mode 100644
index 0000000000..a7bc28eb1d
--- /dev/null
+++ b/tests/unittest/test_disable_auto_feedback.py
@@ -0,0 +1,129 @@
+import json
+
+import pytest
+
+from pr_agent.agent.pr_agent import PRAgent
+from pr_agent.config_loader import get_settings
+from pr_agent.identity_providers import get_identity_provider
+from pr_agent.identity_providers.identity_provider import Eligibility
+from pr_agent.servers.github_action_runner import run_action
+from pr_agent.servers.github_app import handle_push_trigger_for_new_commits
+
+
+@pytest.mark.asyncio
+async def test_github_push_trigger_skips_when_disable_auto_feedback(monkeypatch):
+ settings = get_settings()
+ original_handle_push_trigger = settings.github_app.handle_push_trigger
+ original_push_commands = list(settings.github_app.push_commands)
+ original_disable_auto_feedback = settings.config.disable_auto_feedback
+ settings.github_app.handle_push_trigger = True
+ settings.github_app.push_commands = ["/review"]
+ settings.config.disable_auto_feedback = True
+
+ monkeypatch.setattr("pr_agent.servers.github_app.apply_repo_settings", lambda pr_url: None)
+ monkeypatch.setattr(
+ get_identity_provider().__class__,
+ "verify_eligibility",
+ lambda *args, **kwargs: Eligibility.ELIGIBLE,
+ )
+
+ ran = {"flag": False}
+
+ async def fake_handle_request(self, pr_url, request, notify=None):
+ ran["flag"] = True
+ return True
+
+ monkeypatch.setattr(PRAgent, "handle_request", fake_handle_request)
+
+ body = {
+ "before": "abc123",
+ "after": "def456",
+ "pull_request": {
+ "url": "https://example.com/fake/pr",
+ "state": "open",
+ "draft": False,
+ "created_at": "2026-04-20T00:00:00Z",
+ "updated_at": "2026-04-21T00:00:00Z",
+ "merge_commit_sha": None,
+ },
+ }
+
+ try:
+ await handle_push_trigger_for_new_commits(
+ body=body,
+ event="pull_request",
+ sender="tester",
+ sender_id="123",
+ action="synchronize",
+ log_context={},
+ agent=PRAgent(),
+ )
+ assert ran["flag"] is False
+ finally:
+ settings.github_app.handle_push_trigger = original_handle_push_trigger
+ settings.github_app.push_commands = original_push_commands
+ settings.config.disable_auto_feedback = original_disable_auto_feedback
+
+
+@pytest.mark.asyncio
+async def test_github_action_runner_skips_when_disable_auto_feedback(monkeypatch, tmp_path):
+ settings = get_settings()
+ original_disable_auto_feedback = settings.config.disable_auto_feedback
+ original_pr_actions = settings.get("GITHUB_ACTION_CONFIG.PR_ACTIONS", None)
+ settings.config.disable_auto_feedback = False
+ settings.set("GITHUB_ACTION_CONFIG.PR_ACTIONS", ["opened"])
+
+ event_path = tmp_path / "event.json"
+ event_path.write_text(json.dumps({
+ "action": "opened",
+ "pull_request": {
+ "url": "https://example.com/fake/pr",
+ "html_url": "https://github.com/example/repo/pull/1",
+ },
+ }))
+
+ monkeypatch.setenv("GITHUB_EVENT_NAME", "pull_request")
+ monkeypatch.setenv("GITHUB_EVENT_PATH", str(event_path))
+ monkeypatch.setenv("GITHUB_TOKEN", "token")
+
+ def fake_apply_repo_settings(pr_url):
+ get_settings().config.disable_auto_feedback = True
+
+ monkeypatch.setattr("pr_agent.servers.github_action_runner.apply_repo_settings", fake_apply_repo_settings)
+
+ ran = {"describe": False, "review": False, "improve": False}
+
+ class FakeDescription:
+ def __init__(self, pr_url):
+ self.pr_url = pr_url
+
+ async def run(self):
+ ran["describe"] = True
+
+ class FakeReviewer:
+ def __init__(self, pr_url):
+ self.pr_url = pr_url
+
+ async def run(self):
+ ran["review"] = True
+
+ class FakeCodeSuggestions:
+ def __init__(self, pr_url):
+ self.pr_url = pr_url
+
+ async def run(self):
+ ran["improve"] = True
+
+ monkeypatch.setattr("pr_agent.servers.github_action_runner.PRDescription", FakeDescription)
+ monkeypatch.setattr("pr_agent.servers.github_action_runner.PRReviewer", FakeReviewer)
+ monkeypatch.setattr("pr_agent.servers.github_action_runner.PRCodeSuggestions", FakeCodeSuggestions)
+
+ try:
+ await run_action()
+ assert ran == {"describe": False, "review": False, "improve": False}
+ finally:
+ settings.config.disable_auto_feedback = original_disable_auto_feedback
+ if original_pr_actions is None:
+ settings.unset("GITHUB_ACTION_CONFIG")
+ else:
+ settings.set("GITHUB_ACTION_CONFIG.PR_ACTIONS", original_pr_actions)
diff --git a/tests/unittest/test_git_provider_description.py b/tests/unittest/test_git_provider_description.py
new file mode 100644
index 0000000000..0dff802707
--- /dev/null
+++ b/tests/unittest/test_git_provider_description.py
@@ -0,0 +1,91 @@
+from pr_agent.git_providers.git_provider import GitProvider
+
+
+class DummyProvider(GitProvider):
+ def __init__(self, description: str):
+ self.description = description
+
+ def get_pr_description_full(self) -> str:
+ return self.description
+
+ def is_supported(self, capability: str) -> bool:
+ return False
+
+ def get_files(self) -> list:
+ return []
+
+ def get_diff_files(self) -> list:
+ return []
+
+ def publish_description(self, pr_title: str, pr_body: str):
+ pass
+
+ def publish_code_suggestions(self, code_suggestions: list) -> bool:
+ return True
+
+ def get_languages(self):
+ return {}
+
+ def get_pr_branch(self):
+ return ""
+
+ def get_user_id(self):
+ return ""
+
+ def get_repo_settings(self):
+ return ""
+
+ def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str, original_suggestion=None):
+ pass
+
+ def publish_inline_comments(self, comments: list[dict]):
+ pass
+
+ def remove_initial_comment(self):
+ pass
+
+ def publish_comment(self, pr_comment: str, is_temporary: bool = False):
+ pass
+
+ def remove_comment(self, comment):
+ pass
+
+ def get_issue_comments(self):
+ return []
+
+ def publish_labels(self, labels):
+ pass
+
+ def get_pr_labels(self, update=False):
+ return []
+
+ def add_eyes_reaction(self, issue_comment_id: int, disable_eyes: bool = False):
+ return None
+
+ def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
+ return False
+
+ def get_commit_messages(self):
+ return ""
+
+
+def test_get_user_description_skips_generated_by_header_before_user_section():
+ description = """### 🤖 Generated by PR Agent at abc123
+
+### **User description**
+Manual description only.
+
+___
+
+### **PR Type**
+Enhancement
+
+___
+
+### **Description**
+Generated summary that should not be preserved as user text.
+"""
+
+ provider = DummyProvider(description)
+
+ assert provider.get_user_description() == "Manual description only."
diff --git a/tests/unittest/test_github_inline_dedup.py b/tests/unittest/test_github_inline_dedup.py
new file mode 100644
index 0000000000..8816fd88cf
--- /dev/null
+++ b/tests/unittest/test_github_inline_dedup.py
@@ -0,0 +1,729 @@
+from unittest.mock import MagicMock, patch
+
+import pytest
+
+from pr_agent.algo.inline_comments_dedup import (
+ MARKER_PREFIX,
+ MARKER_SUFFIX,
+ RESOLVED_BODY_MARKER,
+ RESOLVED_NOTE,
+ generate_marker,
+)
+
+
+def _sug(label="possible issue", file="src/app.py",
+ content="Check for None before dereferencing user_id on this line.",
+ start=10, end=12):
+ orig = {
+ "relevant_file": file,
+ "label": label,
+ "suggestion_content": content,
+ "relevant_lines_start": start,
+ "relevant_lines_end": end,
+ }
+ return {
+ "body": f"**Suggestion:** {content} [{label}]\n```suggestion\nfix\n```",
+ "relevant_file": file,
+ "relevant_lines_start": start,
+ "relevant_lines_end": end,
+ "original_suggestion": orig,
+ }
+
+
+@pytest.fixture
+def provider():
+ with patch("pr_agent.git_providers.github_provider.GithubProvider._get_repo"), \
+ patch("pr_agent.git_providers.github_provider.GithubProvider.set_pr"), \
+ patch("pr_agent.git_providers.github_provider.GithubProvider._get_pr"):
+ from pr_agent.git_providers.github_provider import GithubProvider
+ p = GithubProvider.__new__(GithubProvider)
+ p.pr = MagicMock()
+ p.last_commit_id = MagicMock(sha="abc123")
+ p.repo = "owner/repo"
+ p.base_url = "https://api.github.com"
+ p.max_comment_chars = 65000
+ p.github_user_id = "pr-agent-bot"
+ p.deployment_type = "user"
+ p.validate_comments_inside_hunks = lambda x: x
+ return p
+
+
+def _set_settings(persistent_mode="update", resolve_outdated=True):
+ """Patch get_settings to return both persistent_inline_comments and resolve_outdated_inline_comments."""
+ values = {
+ "persistent_inline_comments": persistent_mode,
+ "resolve_outdated_inline_comments": resolve_outdated,
+ }
+ return patch(
+ "pr_agent.git_providers.github_provider.get_settings",
+ return_value=MagicMock(
+ pr_code_suggestions=MagicMock(get=lambda key, default=None: values.get(key, default)),
+ ),
+ )
+
+
+# Backward-compat wrapper for existing tests that only set persistent mode.
+def _set_mode(mode):
+ return _set_settings(persistent_mode=mode, resolve_outdated=False)
+
+
+class TestOffMode:
+ def test_off_mode_skips_fetch(self, provider):
+ provider.get_bot_review_comments = MagicMock()
+ provider.edit_review_comment = MagicMock()
+ with _set_mode("off"):
+ provider.publish_code_suggestions([_sug()])
+ provider.get_bot_review_comments.assert_not_called()
+ provider.edit_review_comment.assert_not_called()
+ provider.pr.create_review.assert_called_once()
+
+
+class TestUpdateMode:
+ def test_no_match_creates_new(self, provider):
+ provider.get_bot_review_comments = MagicMock(return_value=[])
+ provider.edit_review_comment = MagicMock()
+ with _set_mode("update"):
+ provider.publish_code_suggestions([_sug()])
+ provider.edit_review_comment.assert_not_called()
+ provider.pr.create_review.assert_called_once()
+ args, kwargs = provider.pr.create_review.call_args
+ body_published = kwargs["comments"][0]["body"]
+ assert MARKER_PREFIX in body_published and MARKER_SUFFIX in body_published
+
+ def test_match_edits_and_skips_creation(self, provider):
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ existing_body = "old body\n\n" + marker
+ provider.get_bot_review_comments = MagicMock(
+ return_value=[{
+ "id": 777,
+ "body": existing_body,
+ "path": s["relevant_file"],
+ "line": s["relevant_lines_end"],
+ "start_line": s["relevant_lines_start"],
+ }]
+ )
+ provider.edit_review_comment = MagicMock(return_value=True)
+ with _set_mode("update"):
+ provider.publish_code_suggestions([s])
+ provider.edit_review_comment.assert_called_once()
+ called_id, called_body = provider.edit_review_comment.call_args[0]
+ assert called_id == 777
+ assert marker in called_body
+ assert s["body"] in called_body
+ provider.pr.create_review.assert_not_called()
+
+ def test_edit_failure_falls_back_to_create(self, provider):
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ provider.get_bot_review_comments = MagicMock(
+ return_value=[{
+ "id": 777,
+ "body": "old " + marker,
+ "path": s["relevant_file"],
+ "line": s["relevant_lines_end"],
+ "start_line": s["relevant_lines_start"],
+ }]
+ )
+ provider.edit_review_comment = MagicMock(return_value=False)
+ with _set_mode("update"):
+ provider.publish_code_suggestions([s])
+ provider.pr.create_review.assert_called_once()
+
+ def test_match_without_comment_id_creates_new_without_editing(self, provider):
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ provider.get_bot_review_comments = MagicMock(
+ return_value=[{
+ "id": None,
+ "body": "old " + marker,
+ "path": s["relevant_file"],
+ "line": s["relevant_lines_end"],
+ "start_line": s["relevant_lines_start"],
+ }]
+ )
+ provider.edit_review_comment = MagicMock(return_value=True)
+ with _set_mode("update"):
+ provider.publish_code_suggestions([s])
+ provider.edit_review_comment.assert_not_called()
+ provider.pr.create_review.assert_called_once()
+
+ def test_mixed_match_and_new(self, provider):
+ matched = _sug(content="Matched suggestion")
+ unmatched = _sug(content="Brand new suggestion", start=40, end=42)
+ marker_matched = generate_marker(matched["original_suggestion"])
+ provider.get_bot_review_comments = MagicMock(
+ return_value=[{
+ "id": 1,
+ "body": marker_matched,
+ "path": matched["relevant_file"],
+ "line": matched["relevant_lines_end"],
+ "start_line": matched["relevant_lines_start"],
+ }]
+ )
+ provider.edit_review_comment = MagicMock(return_value=True)
+ with _set_mode("update"):
+ provider.publish_code_suggestions([matched, unmatched])
+ assert provider.edit_review_comment.call_count == 1
+ provider.pr.create_review.assert_called_once()
+ created = provider.pr.create_review.call_args.kwargs["comments"]
+ assert len(created) == 1
+ assert "Brand new" in created[0]["body"]
+
+ def test_fetch_failure_falls_back_to_creating_all(self, provider):
+ provider.get_bot_review_comments = MagicMock(side_effect=RuntimeError("api down"))
+ provider.edit_review_comment = MagicMock()
+ with _set_mode("update"):
+ provider.publish_code_suggestions([_sug()])
+ provider.edit_review_comment.assert_not_called()
+ provider.pr.create_review.assert_called_once()
+
+
+class TestSkipMode:
+ def test_match_skips_entirely(self, provider):
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ provider.get_bot_review_comments = MagicMock(
+ return_value=[{
+ "id": 1,
+ "body": marker,
+ "path": s["relevant_file"],
+ "line": s["relevant_lines_end"],
+ "start_line": s["relevant_lines_start"],
+ }]
+ )
+ provider.edit_review_comment = MagicMock()
+ with _set_mode("skip"):
+ provider.publish_code_suggestions([s])
+ provider.edit_review_comment.assert_not_called()
+ provider.pr.create_review.assert_not_called()
+
+ def test_no_match_still_creates(self, provider):
+ provider.get_bot_review_comments = MagicMock(return_value=[])
+ provider.edit_review_comment = MagicMock()
+ with _set_mode("skip"):
+ provider.publish_code_suggestions([_sug()])
+ provider.pr.create_review.assert_called_once()
+
+ def test_resolved_match_reopens_and_refreshes_body(self, provider):
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ existing = {
+ "id": 22,
+ "thread_id": "T22",
+ "body": f"old body\n\n---\n_{RESOLVED_NOTE}_\n{RESOLVED_BODY_MARKER}\n\n{marker}",
+ "path": s["relevant_file"],
+ "line": s["relevant_lines_end"],
+ "start_line": s["relevant_lines_start"],
+ "is_resolved": True,
+ }
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock(return_value=True)
+ provider.unresolve_review_thread = MagicMock(return_value=True)
+ with _set_settings(persistent_mode="skip", resolve_outdated=True):
+ provider.publish_code_suggestions([s])
+ provider.edit_review_comment.assert_called_once()
+ called_id, called_body = provider.edit_review_comment.call_args[0]
+ assert called_id == 22
+ assert s["body"] in called_body
+ assert RESOLVED_BODY_MARKER not in called_body
+ provider.unresolve_review_thread.assert_called_once_with(existing)
+ provider.pr.create_review.assert_not_called()
+
+
+class TestGetBotReviewCommentsGraphQL:
+ """Exercises the GraphQL-backed get_bot_review_comments."""
+
+ def _make_provider(self, deployment_type, user_id=None):
+ with patch("pr_agent.git_providers.github_provider.GithubProvider._get_repo"), \
+ patch("pr_agent.git_providers.github_provider.GithubProvider.set_pr"), \
+ patch("pr_agent.git_providers.github_provider.GithubProvider._get_pr"):
+ from pr_agent.git_providers.github_provider import GithubProvider
+ p = GithubProvider.__new__(GithubProvider)
+ p.pr = MagicMock()
+ p.pr.number = 42
+ p.repo = "owner/repo"
+ p.base_url = "https://api.github.com"
+ p.deployment_type = deployment_type
+ p.github_user_id = user_id
+ return p
+
+ def _gql_response(self, threads, has_next_page=False, end_cursor=None):
+ return ({}, {
+ "data": {
+ "repository": {
+ "pullRequest": {
+ "reviewThreads": {
+ "pageInfo": {"hasNextPage": has_next_page, "endCursor": end_cursor},
+ "nodes": threads,
+ }
+ }
+ }
+ }
+ })
+
+ def _thread(self, thread_id, is_resolved, comments):
+ return {"id": thread_id, "isResolved": is_resolved, "comments": {"nodes": comments}}
+
+ def _comment(self, db_id, login, body="x", path="a.py", line=1, start_line=None):
+ return {"databaseId": db_id, "body": body, "path": path, "line": line,
+ "startLine": start_line, "author": {"login": login}}
+
+ def test_app_deployment_filters_by_app_name(self):
+ provider = self._make_provider(deployment_type="app")
+ provider.pr._requester.requestJsonAndCheck = MagicMock(
+ return_value=self._gql_response([
+ self._thread("T1", False, [self._comment(1, "my-bot[bot]")]),
+ self._thread("T2", False, [self._comment(2, "alice")]),
+ ])
+ )
+ with patch("pr_agent.git_providers.github_provider.get_settings") as gs:
+ gs.return_value.get = lambda key, default="": "my-bot" if key == "GITHUB.APP_NAME" else default
+ out = provider.get_bot_review_comments()
+ assert [c["id"] for c in out] == [1]
+ assert out[0]["thread_id"] == "T1"
+ assert out[0]["is_resolved"] is False
+
+ def test_user_deployment_filters_by_user_id(self):
+ provider = self._make_provider(deployment_type="user", user_id=None)
+ provider.get_user_id = MagicMock(return_value="pr-agent-bot")
+ provider.pr._requester.requestJsonAndCheck = MagicMock(
+ return_value=self._gql_response([
+ self._thread("T1", True, [self._comment(5, "pr-agent-bot")]),
+ self._thread("T2", False, [self._comment(6, "someone-else")]),
+ ])
+ )
+ with patch("pr_agent.git_providers.github_provider.get_settings") as gs:
+ gs.return_value.get = lambda key, default="": default
+ out = provider.get_bot_review_comments()
+ assert [c["id"] for c in out] == [5]
+ assert out[0]["is_resolved"] is True
+ provider.get_user_id.assert_called_once()
+
+ def test_skips_comments_without_database_id(self):
+ provider = self._make_provider(deployment_type="user", user_id="pr-agent-bot")
+ provider.pr._requester.requestJsonAndCheck = MagicMock(
+ return_value=self._gql_response([
+ self._thread("T1", False, [self._comment(None, "pr-agent-bot")]),
+ self._thread("T2", False, [self._comment(6, "pr-agent-bot")]),
+ ])
+ )
+ with patch("pr_agent.git_providers.github_provider.get_settings") as gs:
+ gs.return_value.get = lambda key, default="": default
+ out = provider.get_bot_review_comments()
+ assert [c["id"] for c in out] == [6]
+
+ def test_paginates_until_has_next_page_false(self):
+ provider = self._make_provider(deployment_type="user", user_id="pr-agent-bot")
+ page1 = self._gql_response(
+ [self._thread("T1", False, [self._comment(1, "pr-agent-bot")])],
+ has_next_page=True, end_cursor="cur1",
+ )
+ page2 = self._gql_response(
+ [self._thread("T2", False, [self._comment(2, "pr-agent-bot")])],
+ has_next_page=False, end_cursor=None,
+ )
+ provider.pr._requester.requestJsonAndCheck = MagicMock(side_effect=[page1, page2])
+ with patch("pr_agent.git_providers.github_provider.get_settings") as gs:
+ gs.return_value.get = lambda key, default="": default
+ out = provider.get_bot_review_comments()
+ assert [c["id"] for c in out] == [1, 2]
+ assert provider.pr._requester.requestJsonAndCheck.call_count == 2
+
+ def test_pagination_breaks_when_end_cursor_missing_despite_has_next_page(self):
+ # Guard against a malformed server response (hasNextPage=True but no
+ # endCursor) infinite-looping by re-issuing the first-page query.
+ provider = self._make_provider(deployment_type="user", user_id="pr-agent-bot")
+ provider.pr._requester.requestJsonAndCheck = MagicMock(
+ return_value=self._gql_response(
+ [self._thread("T1", False, [self._comment(1, "pr-agent-bot")])],
+ has_next_page=True, end_cursor=None,
+ )
+ )
+ with patch("pr_agent.git_providers.github_provider.get_settings") as gs:
+ gs.return_value.get = lambda key, default="": default
+ out = provider.get_bot_review_comments()
+ assert [c["id"] for c in out] == [1]
+ assert provider.pr._requester.requestJsonAndCheck.call_count == 1
+
+ def test_graphql_errors_array_returns_empty(self):
+ provider = self._make_provider(deployment_type="user", user_id="pr-agent-bot")
+ provider.pr._requester.requestJsonAndCheck = MagicMock(
+ return_value=({}, {"errors": [{"message": "boom"}]})
+ )
+ with patch("pr_agent.git_providers.github_provider.get_settings") as gs:
+ gs.return_value.get = lambda key, default="": default
+ out = provider.get_bot_review_comments()
+ assert out == []
+
+ def test_graphql_exception_returns_empty(self):
+ provider = self._make_provider(deployment_type="user", user_id="pr-agent-bot")
+ provider.pr._requester.requestJsonAndCheck = MagicMock(side_effect=RuntimeError("net"))
+ with patch("pr_agent.git_providers.github_provider.get_settings") as gs:
+ gs.return_value.get = lambda key, default="": default
+ out = provider.get_bot_review_comments()
+ assert out == []
+
+ def test_graphql_url_strips_api_v3_suffix_for_ghes(self):
+ # GHES REST base is `.../api/v3` but GraphQL lives at `.../api/graphql`;
+ # naive `{base_url}/graphql` would 404 on GHES.
+ from pr_agent.git_providers.github_provider import GithubProvider
+ p = GithubProvider.__new__(GithubProvider)
+ p.base_url = "https://ghes.example.com/api/v3"
+ assert p._graphql_url() == "https://ghes.example.com/api/graphql"
+
+ def test_graphql_url_passes_through_for_github_com(self):
+ from pr_agent.git_providers.github_provider import GithubProvider
+ p = GithubProvider.__new__(GithubProvider)
+ p.base_url = "https://api.github.com"
+ assert p._graphql_url() == "https://api.github.com/graphql"
+
+
+class TestGitHubResolveUnresolve:
+ """Exercises resolve_review_thread / unresolve_review_thread mutations."""
+
+ def _make_provider(self):
+ with patch("pr_agent.git_providers.github_provider.GithubProvider._get_repo"), \
+ patch("pr_agent.git_providers.github_provider.GithubProvider.set_pr"), \
+ patch("pr_agent.git_providers.github_provider.GithubProvider._get_pr"):
+ from pr_agent.git_providers.github_provider import GithubProvider
+ p = GithubProvider.__new__(GithubProvider)
+ p.pr = MagicMock()
+ p.base_url = "https://api.github.com"
+ return p
+
+ def test_resolve_calls_graphql_and_returns_true(self):
+ p = self._make_provider()
+ p.pr._requester.requestJsonAndCheck = MagicMock(
+ return_value=({}, {"data": {"resolveReviewThread": {"thread": {"isResolved": True}}}})
+ )
+ assert p.resolve_review_thread({"thread_id": "T1"}) is True
+ method, url = p.pr._requester.requestJsonAndCheck.call_args[0]
+ assert method == "POST"
+ assert url.endswith("/graphql")
+ payload = p.pr._requester.requestJsonAndCheck.call_args.kwargs["input"]
+ assert "resolveReviewThread" in payload["query"]
+ assert payload["variables"] == {"threadId": "T1"}
+
+ def test_unresolve_calls_graphql_and_returns_true(self):
+ p = self._make_provider()
+ p.pr._requester.requestJsonAndCheck = MagicMock(
+ return_value=({}, {"data": {"unresolveReviewThread": {"thread": {"isResolved": False}}}})
+ )
+ assert p.unresolve_review_thread({"thread_id": "T1"}) is True
+ payload = p.pr._requester.requestJsonAndCheck.call_args.kwargs["input"]
+ assert "unresolveReviewThread" in payload["query"]
+ assert payload["variables"] == {"threadId": "T1"}
+
+ def test_resolve_returns_false_on_errors_array(self):
+ p = self._make_provider()
+ p.pr._requester.requestJsonAndCheck = MagicMock(
+ return_value=({}, {"errors": [{"message": "perm denied"}]})
+ )
+ assert p.resolve_review_thread({"thread_id": "T1"}) is False
+
+ def test_resolve_returns_false_on_exception(self):
+ p = self._make_provider()
+ p.pr._requester.requestJsonAndCheck = MagicMock(side_effect=RuntimeError("net"))
+ assert p.resolve_review_thread({"thread_id": "T1"}) is False
+
+ def test_resolve_returns_false_when_thread_id_missing(self):
+ p = self._make_provider()
+ p.pr._requester.requestJsonAndCheck = MagicMock()
+ assert p.resolve_review_thread({"id": 5}) is False
+ p.pr._requester.requestJsonAndCheck.assert_not_called()
+
+
+class TestGetRepoSettings:
+ def test_logs_warning_when_repo_settings_cannot_be_read(self, provider):
+ provider.repo_obj = MagicMock()
+ provider.repo_obj.get_contents.side_effect = RuntimeError("ghes read failed")
+
+ with patch("pr_agent.git_providers.github_provider.get_logger") as mock_get_logger:
+ logger = MagicMock()
+ mock_get_logger.return_value = logger
+ result = provider.get_repo_settings()
+
+ assert result == ""
+ logger.warning.assert_called_once()
+ warning_message = logger.warning.call_args[0][0]
+ assert "Failed to load .pr_agent.toml file" in warning_message
+ assert "ghes read failed" in warning_message
+
+
+def _existing(c_id, marker, *, is_resolved=False, body_extra="", path="src/app.py", thread_id=None):
+ return {
+ "id": c_id,
+ "thread_id": thread_id or f"T{c_id}",
+ "body": "old body" + body_extra + "\n\n" + marker,
+ "path": path,
+ "line": 12,
+ "start_line": 10,
+ "is_resolved": is_resolved,
+ }
+
+
+class TestOutdatedPass:
+ def test_same_marker_different_location_creates_new_and_resolves_old(self, provider):
+ s = _sug(start=40, end=42)
+ marker = generate_marker(s["original_suggestion"])
+ existing = _existing(c_id=776, marker=marker, path=s["relevant_file"])
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock(return_value=True)
+ provider.resolve_review_thread = MagicMock(return_value=True)
+ provider.unresolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ provider.publish_code_suggestions([s])
+ provider.pr.create_review.assert_called_once()
+ provider.resolve_review_thread.assert_called_once_with(existing)
+ provider.unresolve_review_thread.assert_not_called()
+ provider.edit_review_comment.assert_called_once()
+ called_id, called_body = provider.edit_review_comment.call_args[0]
+ assert called_id == 776
+ assert RESOLVED_NOTE in called_body
+ assert s["body"] not in called_body
+
+ def test_outdated_marker_resolves_and_edits(self, provider):
+ s_emitted = _sug(content="A new and different suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="An old suggestion that's no longer flagged")["original_suggestion"])
+ existing = _existing(c_id=777, marker=marker_outdated)
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock(return_value=True)
+ provider.resolve_review_thread = MagicMock(return_value=True)
+ provider.unresolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ provider.publish_code_suggestions([s_emitted])
+ provider.resolve_review_thread.assert_called_once()
+ called_comment = provider.resolve_review_thread.call_args[0][0]
+ assert called_comment["id"] == 777
+ # edit_review_comment called once for the resolution note
+ provider.edit_review_comment.assert_called_once()
+ called_id, called_body = provider.edit_review_comment.call_args[0]
+ assert called_id == 777
+ assert RESOLVED_NOTE in called_body
+ assert RESOLVED_BODY_MARKER in called_body
+ provider.pr.create_review.assert_called_once() # for the new suggestion
+
+ def test_already_resolved_is_skipped(self, provider):
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _existing(c_id=778, marker=marker_outdated, is_resolved=True)
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock()
+ provider.resolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ provider.publish_code_suggestions([s_emitted])
+ provider.resolve_review_thread.assert_not_called()
+ # edit_review_comment must not be called for the outdated comment;
+ # there are no matched existing comments either, so total calls == 0.
+ provider.edit_review_comment.assert_not_called()
+
+ def test_body_marker_signals_user_unresolved_skip(self, provider):
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _existing(
+ c_id=779, marker=marker_outdated, is_resolved=False,
+ body_extra=f"\n\n---\n_{RESOLVED_NOTE}_\n{RESOLVED_BODY_MARKER}",
+ )
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock()
+ provider.resolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ provider.publish_code_suggestions([s_emitted])
+ provider.resolve_review_thread.assert_not_called()
+ provider.edit_review_comment.assert_not_called()
+
+ def test_re_emit_after_prior_resolve_calls_unresolve(self, provider):
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ existing = _existing(c_id=780, marker=marker, is_resolved=True)
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock(return_value=True)
+ provider.resolve_review_thread = MagicMock()
+ provider.unresolve_review_thread = MagicMock(return_value=True)
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ provider.publish_code_suggestions([s])
+ provider.edit_review_comment.assert_called_once()
+ provider.unresolve_review_thread.assert_called_once()
+ provider.resolve_review_thread.assert_not_called()
+ provider.pr.create_review.assert_not_called()
+
+ def test_setting_off_skips_outdated_pass(self, provider):
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _existing(c_id=781, marker=marker_outdated)
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock()
+ provider.resolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="update", resolve_outdated=False):
+ provider.publish_code_suggestions([s_emitted])
+ provider.resolve_review_thread.assert_not_called()
+ provider.edit_review_comment.assert_not_called()
+
+ @pytest.mark.parametrize("resolve_outdated", ["false", "0", "no", "off", "False", " OFF "])
+ def test_setting_false_string_skips_outdated_pass(self, provider, resolve_outdated):
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _existing(c_id=784, marker=marker_outdated)
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock()
+ provider.resolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="update", resolve_outdated=resolve_outdated):
+ provider.publish_code_suggestions([s_emitted])
+ provider.resolve_review_thread.assert_not_called()
+ provider.edit_review_comment.assert_not_called()
+
+ @pytest.mark.parametrize("resolve_outdated", ["true", "1", "yes", "on", "True", " ON "])
+ def test_setting_true_string_enables_outdated_pass(self, provider, resolve_outdated):
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _existing(c_id=785, marker=marker_outdated)
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock(return_value=True)
+ provider.resolve_review_thread = MagicMock(return_value=True)
+ with _set_settings(persistent_mode="update", resolve_outdated=resolve_outdated):
+ provider.publish_code_suggestions([s_emitted])
+ provider.resolve_review_thread.assert_called_once_with(existing)
+ provider.edit_review_comment.assert_called_once()
+
+ def test_invalid_resolve_outdated_setting_warns_and_uses_default(self, provider):
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _existing(c_id=786, marker=marker_outdated)
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock(return_value=True)
+ provider.resolve_review_thread = MagicMock(return_value=True)
+ with _set_settings(persistent_mode="update", resolve_outdated="definitely"), \
+ patch("pr_agent.git_providers.github_provider.get_logger") as mock_get_logger:
+ logger = MagicMock()
+ mock_get_logger.return_value = logger
+ provider.publish_code_suggestions([s_emitted])
+ provider.resolve_review_thread.assert_called_once_with(existing)
+ warning_message = logger.warning.call_args[0][0]
+ assert "Invalid boolean value for pr_code_suggestions.resolve_outdated_inline_comments" in warning_message
+ assert "'definitely'" in warning_message
+
+ def test_persistent_off_skips_outdated_pass_even_when_setting_on(self, provider):
+ s = _sug()
+ provider.get_bot_review_comments = MagicMock()
+ provider.resolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="off", resolve_outdated=True):
+ provider.publish_code_suggestions([s])
+ provider.get_bot_review_comments.assert_not_called()
+ provider.resolve_review_thread.assert_not_called()
+
+ def test_resolve_failure_skips_edit(self, provider):
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _existing(c_id=782, marker=marker_outdated)
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock()
+ provider.resolve_review_thread = MagicMock(return_value=False)
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ provider.publish_code_suggestions([s_emitted])
+ provider.resolve_review_thread.assert_called_once()
+ # The resolution-note edit must not happen for this outdated comment.
+ # No other edit calls expected (the emitted suggestion creates new).
+ provider.edit_review_comment.assert_not_called()
+
+ def test_edit_failure_after_resolve_does_not_raise(self, provider):
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _existing(c_id=783, marker=marker_outdated)
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock(return_value=False)
+ provider.resolve_review_thread = MagicMock(return_value=True)
+ provider.unresolve_review_thread = MagicMock(return_value=True)
+ with _set_settings(persistent_mode="update", resolve_outdated=True), \
+ patch("pr_agent.git_providers.github_provider.get_logger") as mock_get_logger:
+ logger = MagicMock()
+ mock_get_logger.return_value = logger
+ # Should not raise.
+ provider.publish_code_suggestions([s_emitted])
+ provider.resolve_review_thread.assert_called_once()
+ provider.edit_review_comment.assert_called_once()
+ provider.unresolve_review_thread.assert_called_once_with(existing)
+ warning_message = logger.warning.call_args[0][0]
+ assert "failed to write resolved marker" in warning_message
+ assert "783" in warning_message
+
+ def test_outdated_comment_without_id_is_not_resolved_or_edited(self, provider):
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _existing(c_id=None, marker=marker_outdated, thread_id="T-missing-id")
+ provider.get_bot_review_comments = MagicMock(return_value=[existing])
+ provider.edit_review_comment = MagicMock()
+ provider.resolve_review_thread = MagicMock(return_value=True)
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ provider.publish_code_suggestions([s_emitted])
+ provider.resolve_review_thread.assert_not_called()
+ provider.edit_review_comment.assert_not_called()
+ provider.pr.create_review.assert_called_once()
+
+
+class TestStructuredHashLivePath:
+ """End-to-end: paraphrased prose + identical improved_code → update, not create."""
+
+ def test_paraphrased_prose_same_edit_routes_to_edit_review_comment(self, provider):
+ # Two suggestions that differ only in wording; improved_code is identical.
+ improved = "cleanup_mode=None if dry_run else cleanup_mode,"
+ file = "src/release.py"
+ first_run = {
+ "relevant_file": file,
+ "label": "possible issue",
+ "suggestion_content": (
+ "When dry_run=True, cleanup_mode is still passed through "
+ "unchanged to bump_version."
+ ),
+ "improved_code": improved,
+ }
+ second_run = {
+ "relevant_file": file,
+ "label": "possible issue",
+ "suggestion_content": (
+ "When dry_run=True, the cleanup_mode is still forwarded "
+ "unchanged to bump_version."
+ ),
+ "improved_code": improved,
+ }
+
+ marker_first = generate_marker(first_run)
+ marker_second = generate_marker(second_run)
+ assert marker_first == marker_second, \
+ "paraphrased prose with identical improved_code must collide"
+
+ existing_comment = {
+ "id": 777,
+ "thread_id": "T1",
+ "body": f"old body\n\n{marker_first}",
+ "path": file,
+ "line": 12,
+ "start_line": 10,
+ "is_resolved": False,
+ }
+ provider.get_bot_review_comments = MagicMock(return_value=[existing_comment])
+ provider.edit_review_comment = MagicMock(return_value=True)
+ provider.unresolve_review_thread = MagicMock()
+
+ body_text = (
+ "**Suggestion:** paraphrased wording, same fix [possible issue]\n"
+ "```suggestion\n"
+ f"{improved}\n"
+ "```"
+ )
+ code_suggestion = {
+ "body": body_text,
+ "relevant_file": file,
+ "relevant_lines_start": 10,
+ "relevant_lines_end": 12,
+ "original_suggestion": second_run,
+ }
+
+ with _set_mode("update"):
+ provider.publish_code_suggestions([code_suggestion])
+
+ provider.edit_review_comment.assert_called_once()
+ called_id, called_body = provider.edit_review_comment.call_args[0]
+ assert called_id == 777
+ assert marker_first in called_body
+ provider.pr.create_review.assert_not_called()
diff --git a/tests/unittest/test_gitlab_inline_dedup.py b/tests/unittest/test_gitlab_inline_dedup.py
new file mode 100644
index 0000000000..300c3a71a1
--- /dev/null
+++ b/tests/unittest/test_gitlab_inline_dedup.py
@@ -0,0 +1,790 @@
+"""
+Tests for GitLabProvider dedup-aware publish_code_suggestions,
+get_bot_review_comments, and edit_review_comment.
+"""
+from unittest.mock import MagicMock, patch
+
+from pr_agent.algo.inline_comments_dedup import (
+ MARKER_PREFIX,
+ MARKER_SUFFIX,
+ RESOLVED_BODY_MARKER,
+ RESOLVED_NOTE,
+ generate_marker,
+)
+
+
+def _sug(label="possible issue", file="src/app.py",
+ content="Check for None before dereferencing user_id on this line.",
+ start=10, end=12):
+ orig = {
+ "relevant_file": file,
+ "label": label,
+ "suggestion_content": content,
+ "relevant_lines_start": start,
+ "relevant_lines_end": end,
+ }
+ return {
+ "body": f"**Suggestion:** {content} [{label}]\n```suggestion\nfix\n```",
+ "relevant_file": file,
+ "relevant_lines_start": start,
+ "relevant_lines_end": end,
+ "original_suggestion": orig,
+ }
+
+
+def _make_provider():
+ """Construct a GitLabProvider bypassing __init__ and set needed attributes."""
+ from pr_agent.git_providers.gitlab_provider import GitLabProvider
+ p = GitLabProvider.__new__(GitLabProvider)
+ p.max_comment_chars = 65000
+ p.gl = MagicMock()
+ p.mr = MagicMock()
+ p.id_mr = 1
+ return p
+
+
+def _set_settings(persistent_mode="update", resolve_outdated=True):
+ values = {
+ "persistent_inline_comments": persistent_mode,
+ "resolve_outdated_inline_comments": resolve_outdated,
+ }
+ return patch(
+ "pr_agent.git_providers.gitlab_provider.get_settings",
+ return_value=MagicMock(
+ pr_code_suggestions=MagicMock(get=lambda key, default=None: values.get(key, default)),
+ ),
+ )
+
+
+def _set_mode(mode):
+ return _set_settings(persistent_mode=mode, resolve_outdated=False)
+
+
+# ---------------------------------------------------------------------------
+# Off mode
+# ---------------------------------------------------------------------------
+
+class TestOffMode:
+ def test_off_mode_skips_fetch(self):
+ """In 'off' mode, get_bot_review_comments must never be called."""
+ p = _make_provider()
+ p.get_bot_review_comments = MagicMock()
+ p.edit_review_comment = MagicMock()
+ p.send_inline_comment = MagicMock()
+ p.get_diff_files = MagicMock(return_value=[])
+
+ with _set_mode("off"):
+ p.publish_code_suggestions([_sug()])
+
+ p.get_bot_review_comments.assert_not_called()
+ p.edit_review_comment.assert_not_called()
+
+
+# ---------------------------------------------------------------------------
+# Update mode
+# ---------------------------------------------------------------------------
+
+class TestUpdateMode:
+ def test_no_match_creates_new(self):
+ """When no existing comment matches, send_inline_comment is called and marker is embedded."""
+ p = _make_provider()
+ p.get_bot_review_comments = MagicMock(return_value=[])
+ p.edit_review_comment = MagicMock()
+ p.send_inline_comment = MagicMock()
+
+ # Set up get_diff_files so the suggestion loop completes
+ fake_file = MagicMock()
+ fake_file.filename = "src/app.py"
+ fake_file.head_file = "\n".join(f"line{i}" for i in range(1, 50))
+ p.get_diff_files = MagicMock(return_value=[fake_file])
+
+ s = _sug()
+ with _set_mode("update"):
+ p.publish_code_suggestions([s])
+
+ p.get_bot_review_comments.assert_called_once()
+ p.edit_review_comment.assert_not_called()
+ p.send_inline_comment.assert_called_once()
+ # The body passed to send_inline_comment should contain the marker
+ call_body = p.send_inline_comment.call_args[0][0]
+ assert MARKER_PREFIX in call_body
+ assert MARKER_SUFFIX in call_body
+
+ def test_match_edits_and_skips_creation(self):
+ """When an existing comment matches, edit_review_comment is called and send_inline_comment is not."""
+ p = _make_provider()
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ existing_body = "old body\n\n" + marker
+ p.get_bot_review_comments = MagicMock(
+ return_value=[{
+ "id": 777,
+ "body": existing_body,
+ "path": s["relevant_file"],
+ "line": s["relevant_lines_end"],
+ "start_line": s["relevant_lines_start"],
+ }]
+ )
+ p.edit_review_comment = MagicMock(return_value=True)
+ p.send_inline_comment = MagicMock()
+ p.get_diff_files = MagicMock(return_value=[])
+
+ with _set_mode("update"):
+ p.publish_code_suggestions([s])
+
+ p.edit_review_comment.assert_called_once()
+ called_id, called_body = p.edit_review_comment.call_args[0]
+ assert called_id == 777
+ assert marker in called_body
+ assert s["body"] in called_body
+ p.send_inline_comment.assert_not_called()
+
+ def test_edit_failure_falls_back_to_create(self):
+ """When edit_review_comment returns False, fall back to send_inline_comment."""
+ p = _make_provider()
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ p.get_bot_review_comments = MagicMock(
+ return_value=[{
+ "id": 777,
+ "body": "old " + marker,
+ "path": s["relevant_file"],
+ "line": s["relevant_lines_end"],
+ "start_line": s["relevant_lines_start"],
+ }]
+ )
+ p.edit_review_comment = MagicMock(return_value=False)
+ p.send_inline_comment = MagicMock()
+
+ fake_file = MagicMock()
+ fake_file.filename = "src/app.py"
+ fake_file.head_file = "\n".join(f"line{i}" for i in range(1, 50))
+ p.get_diff_files = MagicMock(return_value=[fake_file])
+
+ with _set_mode("update"):
+ p.publish_code_suggestions([s])
+
+ p.edit_review_comment.assert_called_once()
+ p.send_inline_comment.assert_called_once()
+
+ def test_mixed_match_and_new(self):
+ """Matched suggestions are edited; unmatched ones are created."""
+ p = _make_provider()
+ matched = _sug(content="Matched suggestion")
+ unmatched = _sug(content="Brand new suggestion", start=40, end=42)
+ marker_matched = generate_marker(matched["original_suggestion"])
+
+ p.get_bot_review_comments = MagicMock(
+ return_value=[{
+ "id": 1,
+ "body": marker_matched,
+ "path": matched["relevant_file"],
+ "line": matched["relevant_lines_end"],
+ "start_line": matched["relevant_lines_start"],
+ }]
+ )
+ p.edit_review_comment = MagicMock(return_value=True)
+ p.send_inline_comment = MagicMock()
+
+ fake_file = MagicMock()
+ fake_file.filename = "src/app.py"
+ fake_file.head_file = "\n".join(f"line{i}" for i in range(1, 50))
+ p.get_diff_files = MagicMock(return_value=[fake_file])
+
+ with _set_mode("update"):
+ p.publish_code_suggestions([matched, unmatched])
+
+ assert p.edit_review_comment.call_count == 1
+ p.send_inline_comment.assert_called_once()
+ call_body = p.send_inline_comment.call_args[0][0]
+ assert "Brand new" in call_body
+
+ def test_fetch_failure_falls_back_to_creating_all(self):
+ """When get_bot_review_comments raises, fall back to creating all suggestions."""
+ p = _make_provider()
+ p.get_bot_review_comments = MagicMock(side_effect=RuntimeError("api down"))
+ p.edit_review_comment = MagicMock()
+ p.send_inline_comment = MagicMock()
+
+ fake_file = MagicMock()
+ fake_file.filename = "src/app.py"
+ fake_file.head_file = "\n".join(f"line{i}" for i in range(1, 50))
+ p.get_diff_files = MagicMock(return_value=[fake_file])
+
+ with _set_mode("update"):
+ p.publish_code_suggestions([_sug()])
+
+ p.edit_review_comment.assert_not_called()
+ p.send_inline_comment.assert_called_once()
+
+
+# ---------------------------------------------------------------------------
+# Skip mode
+# ---------------------------------------------------------------------------
+
+class TestSkipMode:
+ def test_match_skips_entirely(self):
+ """In skip mode, a matched suggestion is not re-posted at all."""
+ p = _make_provider()
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ p.get_bot_review_comments = MagicMock(
+ return_value=[{
+ "id": 1,
+ "body": marker,
+ "path": s["relevant_file"],
+ "line": s["relevant_lines_end"],
+ "start_line": s["relevant_lines_start"],
+ }]
+ )
+ p.edit_review_comment = MagicMock()
+ p.send_inline_comment = MagicMock()
+ p.get_diff_files = MagicMock(return_value=[])
+
+ with _set_mode("skip"):
+ p.publish_code_suggestions([s])
+
+ p.edit_review_comment.assert_not_called()
+ p.send_inline_comment.assert_not_called()
+
+ def test_no_match_still_creates(self):
+ """In skip mode, suggestions without a matching existing comment are still created."""
+ p = _make_provider()
+ p.get_bot_review_comments = MagicMock(return_value=[])
+ p.edit_review_comment = MagicMock()
+ p.send_inline_comment = MagicMock()
+
+ fake_file = MagicMock()
+ fake_file.filename = "src/app.py"
+ fake_file.head_file = "\n".join(f"line{i}" for i in range(1, 50))
+ p.get_diff_files = MagicMock(return_value=[fake_file])
+
+ with _set_mode("skip"):
+ p.publish_code_suggestions([_sug()])
+
+ p.send_inline_comment.assert_called_once()
+
+ def test_resolved_match_reopens_and_refreshes_body(self):
+ p = _make_provider()
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ existing = {
+ "id": 22,
+ "thread_id": "DIS-22",
+ "discussion_id": "DIS-22",
+ "body": f"old body\n\n---\n_{RESOLVED_NOTE}_\n{RESOLVED_BODY_MARKER}\n\n{marker}",
+ "path": s["relevant_file"],
+ "line": s["relevant_lines_end"],
+ "start_line": s["relevant_lines_start"],
+ "is_resolved": True,
+ }
+ p.get_bot_review_comments = MagicMock(return_value=[existing])
+ p.edit_review_comment = MagicMock(return_value=True)
+ p.unresolve_review_thread = MagicMock(return_value=True)
+ p.send_inline_comment = MagicMock()
+ p.get_diff_files = MagicMock(return_value=[])
+ with _set_settings(persistent_mode="skip", resolve_outdated=True):
+ p.publish_code_suggestions([s])
+ p.edit_review_comment.assert_called_once()
+ called_id, called_body = p.edit_review_comment.call_args[0]
+ assert called_id == 22
+ assert s["body"] in called_body
+ assert RESOLVED_BODY_MARKER not in called_body
+ p.unresolve_review_thread.assert_called_once_with(existing)
+ p.send_inline_comment.assert_not_called()
+
+
+# ---------------------------------------------------------------------------
+# get_bot_review_comments — bot identity filtering
+# ---------------------------------------------------------------------------
+
+class TestGetBotReviewCommentsFiltering:
+ """Exercises the real get_bot_review_comments filter (not mocked out)."""
+
+ def _make_discussions(self, notes_list):
+ """Build a list of mock discussion objects from a list of note dicts."""
+ discussions = []
+ for i, notes in enumerate(notes_list):
+ d = MagicMock()
+ d.id = f"discussion-{i}"
+ d.attributes = {"notes": notes}
+ discussions.append(d)
+ return discussions
+
+ def _note(self, note_id, username, body="comment body", path="src/app.py", new_line=5):
+ return {
+ "id": note_id,
+ "type": "DiffNote",
+ "body": body,
+ "author": {"username": username},
+ "position": {
+ "new_path": path,
+ "new_line": new_line,
+ "line_range": None,
+ },
+ }
+
+ def test_bot_authored_notes_kept(self):
+ """Notes authored by the bot username are returned."""
+ from pr_agent.git_providers.gitlab_provider import GitLabProvider
+ p = GitLabProvider.__new__(GitLabProvider)
+ p.max_comment_chars = 65000
+
+ bot_note = self._note(10, "my-bot", body="bot comment")
+ human_note = self._note(11, "alice", body="human comment")
+
+ p.mr = MagicMock()
+ p.mr.discussions.list.return_value = self._make_discussions([[bot_note, human_note]])
+
+ p.gl = MagicMock()
+ p.gl.auth = MagicMock()
+ p.gl.user = MagicMock()
+ p.gl.user.username = "my-bot"
+
+ result = p.get_bot_review_comments()
+ assert [c["id"] for c in result] == [10]
+
+ def test_human_authored_notes_rejected(self):
+ """Notes not authored by the bot are excluded."""
+ from pr_agent.git_providers.gitlab_provider import GitLabProvider
+ p = GitLabProvider.__new__(GitLabProvider)
+ p.max_comment_chars = 65000
+
+ human_note = self._note(20, "alice", body="human")
+
+ p.mr = MagicMock()
+ p.mr.discussions.list.return_value = self._make_discussions([[human_note]])
+
+ p.gl = MagicMock()
+ p.gl.auth = MagicMock()
+ p.gl.user = MagicMock()
+ p.gl.user.username = "my-bot"
+
+ result = p.get_bot_review_comments()
+ assert result == []
+
+ def test_non_diff_notes_excluded(self):
+ """Regular MR notes (type != DiffNote) are excluded."""
+ from pr_agent.git_providers.gitlab_provider import GitLabProvider
+ p = GitLabProvider.__new__(GitLabProvider)
+ p.max_comment_chars = 65000
+
+ regular_note = {
+ "id": 30,
+ "type": "Note", # NOT a DiffNote
+ "body": "general comment",
+ "author": {"username": "my-bot"},
+ "position": None,
+ }
+
+ p.mr = MagicMock()
+ d = MagicMock()
+ d.id = "disc-0"
+ d.attributes = {"notes": [regular_note]}
+ p.mr.discussions.list.return_value = [d]
+
+ p.gl = MagicMock()
+ p.gl.auth = MagicMock()
+ p.gl.user = MagicMock()
+ p.gl.user.username = "my-bot"
+
+ result = p.get_bot_review_comments()
+ assert result == []
+
+ def test_returns_empty_on_exception(self):
+ """If discussions.list raises, return [] and log a warning."""
+ from pr_agent.git_providers.gitlab_provider import GitLabProvider
+ p = GitLabProvider.__new__(GitLabProvider)
+ p.max_comment_chars = 65000
+
+ p.mr = MagicMock()
+ p.mr.discussions.list.side_effect = RuntimeError("api failure")
+
+ p.gl = MagicMock()
+ p.gl.auth = MagicMock()
+ p.gl.user = MagicMock()
+ p.gl.user.username = "my-bot"
+
+ result = p.get_bot_review_comments()
+ assert result == []
+
+ def test_multiline_note_extracts_start_line(self):
+ """Multi-line DiffNote has start_line extracted from line_range.start.new_line."""
+ from pr_agent.git_providers.gitlab_provider import GitLabProvider
+ p = GitLabProvider.__new__(GitLabProvider)
+ p.max_comment_chars = 65000
+
+ multiline_note = {
+ "id": 40,
+ "type": "DiffNote",
+ "body": "multi-line",
+ "author": {"username": "my-bot"},
+ "position": {
+ "new_path": "src/foo.py",
+ "new_line": 20,
+ "line_range": {
+ "start": {"new_line": 15},
+ "end": {"new_line": 20},
+ },
+ },
+ }
+
+ p.mr = MagicMock()
+ d = MagicMock()
+ d.id = "disc-0"
+ d.attributes = {"notes": [multiline_note]}
+ p.mr.discussions.list.return_value = [d]
+
+ p.gl = MagicMock()
+ p.gl.auth = MagicMock()
+ p.gl.user = MagicMock()
+ p.gl.user.username = "my-bot"
+
+ result = p.get_bot_review_comments()
+ assert len(result) == 1
+ assert result[0]["start_line"] == 15
+ assert result[0]["line"] == 20
+
+
+# ---------------------------------------------------------------------------
+# edit_review_comment
+# ---------------------------------------------------------------------------
+
+class TestEditReviewComment:
+ def test_returns_true_on_success(self):
+ from pr_agent.git_providers.gitlab_provider import GitLabProvider
+ p = GitLabProvider.__new__(GitLabProvider)
+ p.max_comment_chars = 65000
+
+ mock_note = MagicMock()
+ p.mr = MagicMock()
+ p.mr.notes.get.return_value = mock_note
+
+ result = p.edit_review_comment(99, "new body")
+
+ assert result is True
+ p.mr.notes.get.assert_called_once_with(99)
+ assert mock_note.body == "new body"
+ mock_note.save.assert_called_once()
+
+ def test_returns_false_on_exception(self):
+ from pr_agent.git_providers.gitlab_provider import GitLabProvider
+ p = GitLabProvider.__new__(GitLabProvider)
+ p.max_comment_chars = 65000
+
+ p.mr = MagicMock()
+ p.mr.notes.get.side_effect = RuntimeError("not found")
+
+ result = p.edit_review_comment(99, "new body")
+
+ assert result is False
+
+
+class TestGitLabResolveUnresolve:
+ def _provider(self):
+ from pr_agent.git_providers.gitlab_provider import GitLabProvider
+ p = GitLabProvider.__new__(GitLabProvider)
+ p.mr = MagicMock()
+ return p
+
+ def test_resolve_calls_save_with_resolved_true(self):
+ p = self._provider()
+ d = MagicMock()
+ d.resolvable = True
+ d.resolved = False
+ p.mr.discussions.get = MagicMock(return_value=d)
+ assert p.resolve_review_thread({"thread_id": "DIS123"}) is True
+ p.mr.discussions.get.assert_called_once_with("DIS123")
+ assert d.resolved is True
+ d.save.assert_called_once()
+
+ def test_unresolve_calls_save_with_resolved_false(self):
+ p = self._provider()
+ d = MagicMock()
+ d.resolvable = True
+ d.resolved = True
+ p.mr.discussions.get = MagicMock(return_value=d)
+ assert p.unresolve_review_thread({"thread_id": "DIS123"}) is True
+ assert d.resolved is False
+ d.save.assert_called_once()
+
+ def test_non_resolvable_discussion_returns_false(self):
+ p = self._provider()
+ d = MagicMock()
+ d.resolvable = False
+ p.mr.discussions.get = MagicMock(return_value=d)
+ assert p.resolve_review_thread({"thread_id": "DIS123"}) is False
+ d.save.assert_not_called()
+
+ def test_resolve_returns_false_on_exception(self):
+ p = self._provider()
+ p.mr.discussions.get = MagicMock(side_effect=RuntimeError("api down"))
+ assert p.resolve_review_thread({"thread_id": "DIS123"}) is False
+
+ def test_resolve_returns_false_when_save_raises(self):
+ """resolve_review_thread returns False and logs a warning when save() raises."""
+ p = self._provider()
+ d = MagicMock()
+ d.resolvable = True
+ d.resolved = False
+ d.save.side_effect = RuntimeError("save failed")
+ p.mr.discussions.get = MagicMock(return_value=d)
+
+ import logging
+ with patch("pr_agent.git_providers.gitlab_provider.get_logger") as mock_logger:
+ mock_log = MagicMock()
+ mock_logger.return_value = mock_log
+ result = p.resolve_review_thread({"thread_id": "DIS123"})
+
+ assert result is False
+ mock_log.warning.assert_called_once()
+
+ def test_resolve_returns_false_when_thread_id_missing(self):
+ p = self._provider()
+ p.mr.discussions.get = MagicMock()
+ assert p.resolve_review_thread({"id": 1}) is False
+ p.mr.discussions.get.assert_not_called()
+
+
+class TestGetBotReviewCommentsIncludesIsResolved:
+ def _make_provider(self):
+ from pr_agent.git_providers.gitlab_provider import GitLabProvider
+ p = GitLabProvider.__new__(GitLabProvider)
+ p.gl = MagicMock()
+ p.gl.user.username = "pr-agent-bot"
+ p.mr = MagicMock()
+ return p
+
+ def _note(self, note_id, resolved=False):
+ return {
+ "type": "DiffNote",
+ "id": note_id,
+ "body": "x",
+ "author": {"username": "pr-agent-bot"},
+ "position": {"new_path": "a.py", "new_line": 5},
+ "resolved": resolved,
+ }
+
+ def test_is_resolved_propagates_from_discussion(self):
+ p = self._make_provider()
+ d_resolved = MagicMock()
+ d_resolved.id = "D-1"
+ d_resolved.resolved = True # top-level signal
+ d_resolved.attributes = {"notes": [self._note(1, resolved=True)]}
+ d_unresolved = MagicMock()
+ d_unresolved.id = "D-2"
+ d_unresolved.resolved = False # top-level signal
+ d_unresolved.attributes = {"notes": [self._note(2, resolved=False)]}
+ p.mr.discussions.list = MagicMock(return_value=[d_resolved, d_unresolved])
+ out = p.get_bot_review_comments()
+ ids_to_resolved = {c["id"]: c["is_resolved"] for c in out}
+ assert ids_to_resolved == {1: True, 2: False}
+ assert all("thread_id" in c for c in out)
+
+ def test_top_level_resolved_wins_over_note_resolved(self):
+ """Top-level discussion.resolved=True takes precedence over first-note resolved=False."""
+ p = self._make_provider()
+ d = MagicMock()
+ d.id = "D-1"
+ d.resolved = True # top-level says resolved
+ d.attributes = {"notes": [self._note(1, resolved=False)]} # note says not resolved
+ p.mr.discussions.list = MagicMock(return_value=[d])
+ out = p.get_bot_review_comments()
+ assert len(out) == 1
+ assert out[0]["is_resolved"] is True
+
+ def test_fallback_to_note_resolved_when_top_level_absent(self):
+ """When discussion has no top-level resolved attribute, first-note resolved is used."""
+ p = self._make_provider()
+ d = MagicMock(spec=["id", "attributes"]) # no .resolved attribute
+ d.id = "D-1"
+ d.attributes = {"notes": [self._note(1, resolved=True)]}
+ p.mr.discussions.list = MagicMock(return_value=[d])
+ out = p.get_bot_review_comments()
+ assert len(out) == 1
+ assert out[0]["is_resolved"] is True
+
+
+# ---------------------------------------------------------------------------
+# Outdated pass
+# ---------------------------------------------------------------------------
+
+def _gl_existing(c_id, marker, *, is_resolved=False, body_extra="", thread_id=None):
+ return {
+ "id": c_id,
+ "thread_id": thread_id or f"DIS-{c_id}",
+ "discussion_id": thread_id or f"DIS-{c_id}",
+ "body": "old body" + body_extra + "\n\n" + marker,
+ "path": "src/app.py",
+ "line": 12,
+ "start_line": 10,
+ "is_resolved": is_resolved,
+ }
+
+
+class TestGitLabOutdatedPass:
+ def _provider(self):
+ p = _make_provider()
+ p.send_inline_comment = MagicMock()
+ p.get_diff_files = MagicMock(return_value=[])
+ return p
+
+ def test_outdated_marker_resolves_and_edits(self):
+ p = self._provider()
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _gl_existing(c_id=10, marker=marker_outdated)
+ p.get_bot_review_comments = MagicMock(return_value=[existing])
+ p.edit_review_comment = MagicMock(return_value=True)
+ p.resolve_review_thread = MagicMock(return_value=True)
+ p.unresolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ p.publish_code_suggestions([s_emitted])
+ p.resolve_review_thread.assert_called_once()
+ p.edit_review_comment.assert_called_once()
+ called_id, called_body = p.edit_review_comment.call_args[0]
+ assert called_id == 10
+ assert RESOLVED_NOTE in called_body
+ assert RESOLVED_BODY_MARKER in called_body
+
+ def test_same_marker_different_location_creates_new_and_resolves_old(self):
+ p = self._provider()
+ s = _sug(start=40, end=42)
+ marker = generate_marker(s["original_suggestion"])
+ existing = _gl_existing(c_id=9, marker=marker)
+ p.get_bot_review_comments = MagicMock(return_value=[existing])
+ p.edit_review_comment = MagicMock(return_value=True)
+ p.resolve_review_thread = MagicMock(return_value=True)
+ p.unresolve_review_thread = MagicMock()
+ fake_file = MagicMock()
+ fake_file.filename = "src/app.py"
+ fake_file.head_file = "\n".join(f"line{i}" for i in range(1, 80))
+ p.get_diff_files = MagicMock(return_value=[fake_file])
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ p.publish_code_suggestions([s])
+ p.send_inline_comment.assert_called_once()
+ p.resolve_review_thread.assert_called_once_with(existing)
+ p.unresolve_review_thread.assert_not_called()
+ p.edit_review_comment.assert_called_once()
+ called_id, called_body = p.edit_review_comment.call_args[0]
+ assert called_id == 9
+ assert RESOLVED_NOTE in called_body
+ assert s["body"] not in called_body
+
+ def test_already_resolved_is_skipped(self):
+ p = self._provider()
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _gl_existing(c_id=11, marker=marker_outdated, is_resolved=True)
+ p.get_bot_review_comments = MagicMock(return_value=[existing])
+ p.edit_review_comment = MagicMock()
+ p.resolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ p.publish_code_suggestions([s_emitted])
+ p.resolve_review_thread.assert_not_called()
+ p.edit_review_comment.assert_not_called()
+
+ def test_body_marker_skips_resolve(self):
+ p = self._provider()
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _gl_existing(
+ c_id=12, marker=marker_outdated, is_resolved=False,
+ body_extra=f"\n\n---\n_{RESOLVED_NOTE}_\n{RESOLVED_BODY_MARKER}",
+ )
+ p.get_bot_review_comments = MagicMock(return_value=[existing])
+ p.edit_review_comment = MagicMock()
+ p.resolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ p.publish_code_suggestions([s_emitted])
+ p.resolve_review_thread.assert_not_called()
+ p.edit_review_comment.assert_not_called()
+
+ def test_re_emit_after_prior_resolve_calls_unresolve(self):
+ p = self._provider()
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ existing = _gl_existing(c_id=13, marker=marker, is_resolved=True)
+ p.get_bot_review_comments = MagicMock(return_value=[existing])
+ p.edit_review_comment = MagicMock(return_value=True)
+ p.resolve_review_thread = MagicMock()
+ p.unresolve_review_thread = MagicMock(return_value=True)
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ p.publish_code_suggestions([s])
+ p.edit_review_comment.assert_called_once()
+ p.unresolve_review_thread.assert_called_once()
+ p.resolve_review_thread.assert_not_called()
+
+ def test_setting_off_skips_outdated_pass(self):
+ p = self._provider()
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _gl_existing(c_id=14, marker=marker_outdated)
+ p.get_bot_review_comments = MagicMock(return_value=[existing])
+ p.edit_review_comment = MagicMock()
+ p.resolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="update", resolve_outdated=False):
+ p.publish_code_suggestions([s_emitted])
+ p.resolve_review_thread.assert_not_called()
+ p.edit_review_comment.assert_not_called()
+
+ def test_persistent_off_skips_outdated_pass(self):
+ p = self._provider()
+ s = _sug()
+ p.get_bot_review_comments = MagicMock()
+ p.resolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="off", resolve_outdated=True):
+ p.publish_code_suggestions([s])
+ p.get_bot_review_comments.assert_not_called()
+ p.resolve_review_thread.assert_not_called()
+
+ def test_resolve_failure_skips_edit(self):
+ p = self._provider()
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _gl_existing(c_id=15, marker=marker_outdated)
+ p.get_bot_review_comments = MagicMock(return_value=[existing])
+ p.edit_review_comment = MagicMock()
+ p.resolve_review_thread = MagicMock(return_value=False)
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ p.publish_code_suggestions([s_emitted])
+ p.resolve_review_thread.assert_called_once()
+ p.edit_review_comment.assert_not_called()
+
+ def test_edit_failure_after_resolve_unresolves_thread(self):
+ p = self._provider()
+ s_emitted = _sug(content="A new suggestion", start=40, end=42)
+ marker_outdated = generate_marker(_sug(content="Old")["original_suggestion"])
+ existing = _gl_existing(c_id=16, marker=marker_outdated)
+ p.get_bot_review_comments = MagicMock(return_value=[existing])
+ p.edit_review_comment = MagicMock(return_value=False)
+ p.resolve_review_thread = MagicMock(return_value=True)
+ p.unresolve_review_thread = MagicMock(return_value=True)
+ with _set_settings(persistent_mode="update", resolve_outdated=True), \
+ patch("pr_agent.git_providers.gitlab_provider.get_logger") as mock_get_logger:
+ logger = MagicMock()
+ mock_get_logger.return_value = logger
+ p.publish_code_suggestions([s_emitted])
+ p.resolve_review_thread.assert_called_once_with(existing)
+ p.edit_review_comment.assert_called_once()
+ p.unresolve_review_thread.assert_called_once_with(existing)
+ warning_message = logger.warning.call_args[0][0]
+ assert "failed to write resolved marker" in warning_message
+ assert "16" in warning_message
+
+ def test_existing_hash_re_emitted_skips_outdated_resolve(self):
+ p = self._provider()
+ s = _sug()
+ marker = generate_marker(s["original_suggestion"])
+ existing = _gl_existing(c_id=20, marker=marker) # is_resolved defaults to False
+ p.get_bot_review_comments = MagicMock(return_value=[existing])
+ p.edit_review_comment = MagicMock(return_value=True)
+ p.resolve_review_thread = MagicMock()
+ p.unresolve_review_thread = MagicMock()
+ with _set_settings(persistent_mode="update", resolve_outdated=True):
+ p.publish_code_suggestions([s])
+ p.resolve_review_thread.assert_not_called()
+ # Edit happened in the update path, not the outdated pass:
+ p.edit_review_comment.assert_called_once()
+ # Not previously resolved, so no unresolve:
+ p.unresolve_review_thread.assert_not_called()
diff --git a/tests/unittest/test_inline_comments_dedup.py b/tests/unittest/test_inline_comments_dedup.py
new file mode 100644
index 0000000000..a8cdf5b4fd
--- /dev/null
+++ b/tests/unittest/test_inline_comments_dedup.py
@@ -0,0 +1,296 @@
+from pr_agent.algo.inline_comments_dedup import (
+ MARKER_PREFIX,
+ MARKER_SUFFIX,
+ PERSISTENT_MODE_OFF,
+ PERSISTENT_MODE_SKIP,
+ PERSISTENT_MODE_UPDATE,
+ VALID_PERSISTENT_MODES,
+ append_marker,
+ build_marker_index,
+ extract_marker,
+ generate_marker,
+ normalize_persistent_mode,
+)
+
+
+def _suggestion(file="src/app.py", label="possible issue",
+ content="Nullable pointer may crash on line 42 when user_id is None",
+ start=10, end=12):
+ return {
+ "relevant_file": file,
+ "label": label,
+ "suggestion_content": content,
+ "relevant_lines_start": start,
+ "relevant_lines_end": end,
+ }
+
+
+class TestGenerateMarker:
+ def test_shape(self):
+ marker = generate_marker(_suggestion())
+ assert marker.startswith(MARKER_PREFIX)
+ assert marker.endswith(MARKER_SUFFIX)
+ hash_part = marker[len(MARKER_PREFIX):-len(MARKER_SUFFIX)]
+ assert len(hash_part) == 12
+ assert all(c in "0123456789abcdef" for c in hash_part)
+
+ def test_deterministic(self):
+ assert generate_marker(_suggestion()) == generate_marker(_suggestion())
+
+ def test_stable_across_line_shifts(self):
+ a = generate_marker(_suggestion(start=10, end=12))
+ b = generate_marker(_suggestion(start=200, end=202))
+ assert a == b
+
+ def test_changes_with_file(self):
+ a = generate_marker(_suggestion(file="src/app.py"))
+ b = generate_marker(_suggestion(file="src/other.py"))
+ assert a != b
+
+ def test_changes_with_label(self):
+ a = generate_marker(_suggestion(label="possible issue"))
+ b = generate_marker(_suggestion(label="security"))
+ assert a != b
+
+ def test_changes_with_content_prefix(self):
+ a = generate_marker(_suggestion(content="A totally different suggestion about X"))
+ b = generate_marker(_suggestion(content="Another totally different suggestion about Y"))
+ assert a != b
+
+ def test_tolerates_trailing_content_variation(self):
+ long_base = "Same opening 128-chars " + "x" * 200
+ a = generate_marker(_suggestion(content=long_base + "tail-A"))
+ b = generate_marker(_suggestion(content=long_base + "tail-B"))
+ assert a == b
+
+ def test_whitespace_normalized(self):
+ a = generate_marker(_suggestion(content="Same content here"))
+        b = generate_marker(_suggestion(content="Same   content\there"))
+ assert a == b
+
+ def test_missing_fields_returns_none(self):
+ assert generate_marker({"relevant_file": "a.py"}) is None
+ assert generate_marker({}) is None
+
+
+class TestExtractMarker:
+ def test_present(self):
+        body = "some text\n" + MARKER_PREFIX + "abc123def456" + MARKER_SUFFIX
+ assert extract_marker(body) == "abc123def456"
+
+ def test_missing(self):
+ assert extract_marker("no marker here") is None
+
+ def test_empty(self):
+ assert extract_marker("") is None
+
+ def test_multiple_returns_last(self):
+        body = (MARKER_PREFIX + "000000000001" + MARKER_SUFFIX
+                + "\nmore\n"
+                + MARKER_PREFIX + "000000000002" + MARKER_SUFFIX)
+ assert extract_marker(body) == "000000000002"
+
+ def test_roundtrip_with_append(self):
+ marker = generate_marker(_suggestion())
+ body_plus = append_marker("suggestion body", marker)
+ assert extract_marker(body_plus) == marker[len(MARKER_PREFIX):-len(MARKER_SUFFIX)]
+
+
+class TestAppendMarker:
+    def test_adds_separator(self):
+        marker = MARKER_PREFIX + "a" * 12 + MARKER_SUFFIX
+        body = append_marker("hello", marker)
+        assert body.startswith("hello\n\n")
+        assert body.endswith(marker)
+
+    def test_idempotent(self):
+        marker = generate_marker(_suggestion())
+        once = append_marker("hello", marker)
+        twice = append_marker(once, marker)
+        assert once == twice
+
+
+class TestBuildMarkerIndex:
+ def test_indexes_marked_comments(self):
+ comments = [
+            {"id": 1, "body": "body A " + MARKER_PREFIX + "aaaaaaaaaaaa" + MARKER_SUFFIX},
+            {"id": 2, "body": "body B " + MARKER_PREFIX + "bbbbbbbbbbbb" + MARKER_SUFFIX},
+ ]
+ index = build_marker_index(comments)
+ assert [comment["id"] for comment in index["aaaaaaaaaaaa"]] == [1]
+ assert [comment["id"] for comment in index["bbbbbbbbbbbb"]] == [2]
+
+ def test_ignores_unmarked(self):
+ comments = [{"id": 1, "body": "no marker"}]
+ assert build_marker_index(comments) == {}
+
+ def test_duplicate_hashes_are_preserved(self):
+ comments = [
+            {"id": 1, "body": "A " + MARKER_PREFIX + "aaaaaaaaaaaa" + MARKER_SUFFIX},
+            {"id": 2, "body": "B " + MARKER_PREFIX + "aaaaaaaaaaaa" + MARKER_SUFFIX},
+ ]
+ index = build_marker_index(comments)
+ assert [comment["id"] for comment in index["aaaaaaaaaaaa"]] == [1, 2]
+
+
+class TestNormalizePersistentMode:
+ def test_valid_values(self):
+ assert normalize_persistent_mode("off") == PERSISTENT_MODE_OFF
+ assert normalize_persistent_mode("update") == PERSISTENT_MODE_UPDATE
+ assert normalize_persistent_mode("skip") == PERSISTENT_MODE_SKIP
+
+ def test_case_and_whitespace(self):
+ assert normalize_persistent_mode(" UPDATE ") == PERSISTENT_MODE_UPDATE
+
+ def test_invalid_falls_back_to_off(self):
+ assert normalize_persistent_mode("garbage") == PERSISTENT_MODE_OFF
+ assert normalize_persistent_mode(None) == PERSISTENT_MODE_OFF
+ assert normalize_persistent_mode("") == PERSISTENT_MODE_OFF
+
+ def test_valid_set_exposed(self):
+ assert VALID_PERSISTENT_MODES == {PERSISTENT_MODE_OFF, PERSISTENT_MODE_UPDATE, PERSISTENT_MODE_SKIP}
+
+
+from pr_agent.algo.inline_comments_dedup import normalize_code
+
+
+class TestNormalizeCode:
+ def test_empty_inputs(self):
+ assert normalize_code("") == ""
+ assert normalize_code(None) == ""
+ assert normalize_code(" \n \n") == ""
+
+ def test_reindent_produces_same_output(self):
+ a = (
+ " cleanup_mode=cleanup_mode,\n"
+ )
+ b = (
+ " cleanup_mode=cleanup_mode,\n"
+ )
+ assert normalize_code(a) == normalize_code(b)
+
+ def test_multiline_reindent_produces_same_output(self):
+ a = (
+ " bump_version(\n"
+ " github=self.github,\n"
+ " cleanup_mode=None if dry_run else cleanup_mode,\n"
+ " )\n"
+ )
+ b = (
+ " bump_version(\n"
+ " github=self.github,\n"
+ " cleanup_mode=None if dry_run else cleanup_mode,\n"
+ " )\n"
+ )
+ assert normalize_code(a) == normalize_code(b)
+
+ def test_trailing_whitespace_stripped(self):
+ assert normalize_code("foo = 1 \n") == normalize_code("foo = 1\n")
+
+ def test_leading_and_trailing_blank_lines_dropped(self):
+ assert normalize_code("\n\nfoo = 1\n\n\n") == normalize_code("foo = 1")
+
+ def test_tabs_expand_consistently(self):
+ tabbed = "\tfoo = 1\n\tbar = 2\n"
+ spaced = " foo = 1\n bar = 2\n"
+ assert normalize_code(tabbed) == normalize_code(spaced)
+
+ def test_token_difference_preserved(self):
+ assert normalize_code("cleanup_mode=None if dry_run else cleanup_mode") != \
+ normalize_code("cleanup_mode=cleanup_mode if not dry_run else None")
+
+ def test_is_idempotent(self):
+ sample = " x = f(1, 2)\n y = g(3)\n"
+ once = normalize_code(sample)
+ twice = normalize_code(once)
+ assert once == twice
+
+ def test_internal_whitespace_in_string_literal_is_preserved(self):
+        assert normalize_code('message = "a b"') != normalize_code('message = "a  b"')
+
+
+def _structured_suggestion(
+ file="src/app.py",
+ content="some prose",
+ improved_code="cleanup_mode=None if dry_run else cleanup_mode,",
+ label="possible issue",
+ start=10,
+ end=12,
+):
+ return {
+ "relevant_file": file,
+ "label": label,
+ "suggestion_content": content,
+ "improved_code": improved_code,
+ "relevant_lines_start": start,
+ "relevant_lines_end": end,
+ }
+
+
+class TestGenerateMarkerStructured:
+ def test_paraphrased_prose_same_edit_collides(self):
+ a = _structured_suggestion(
+ content="When dry_run=True, cleanup_mode is still passed through unchanged to bump_version.",
+ )
+ b = _structured_suggestion(
+ content="When dry_run=True, the cleanup_mode is still forwarded unchanged to bump_version.",
+ )
+ assert generate_marker(a) == generate_marker(b)
+
+ def test_same_prose_different_edit_splits(self):
+ a = _structured_suggestion(
+ improved_code="cleanup_mode=None if dry_run else cleanup_mode,",
+ )
+ b = _structured_suggestion(
+ improved_code="cleanup_mode=cleanup_mode if not dry_run else None,",
+ )
+ assert generate_marker(a) != generate_marker(b)
+
+ def test_reindented_edit_collides(self):
+ a = _structured_suggestion(
+ improved_code=" cleanup_mode=None if dry_run else cleanup_mode,\n",
+ )
+ b = _structured_suggestion(
+ improved_code=" cleanup_mode=None if dry_run else cleanup_mode,\n",
+ )
+ assert generate_marker(a) == generate_marker(b)
+
+ def test_label_change_does_not_split_when_structured(self):
+ a = _structured_suggestion(label="possible issue")
+ b = _structured_suggestion(label="best practice")
+ assert generate_marker(a) == generate_marker(b)
+
+ def test_missing_file_returns_none(self):
+ s = _structured_suggestion()
+ s["relevant_file"] = ""
+ assert generate_marker(s) is None
+
+ def test_empty_improved_code_falls_back_to_prose(self):
+ s = _structured_suggestion(improved_code="")
+ # With prose + label present, fallback produces a marker.
+ assert generate_marker(s) is not None
+
+ def test_fallback_missing_label_returns_none(self):
+ s = _structured_suggestion(improved_code="", label="")
+ assert generate_marker(s) is None
+
+ def test_fallback_missing_content_returns_none(self):
+ s = _structured_suggestion(improved_code="", content="")
+ s["suggestion_content"] = ""
+ assert generate_marker(s) is None
+
+ def test_structured_and_prose_differ_on_same_inputs(self):
+ # Same file/label/content; structured extra input shouldn't alias to prose hash.
+ structured = _structured_suggestion(
+ improved_code="x = 1",
+ content="x = 1",
+ label="possible issue",
+ )
+ prose_only = _structured_suggestion(
+ improved_code="",
+ content="x = 1",
+ label="possible issue",
+ )
+ assert generate_marker(structured) != generate_marker(prose_only)
+
+ def test_same_hash_can_be_reused_for_multiple_locations(self):
+ first = _structured_suggestion(start=10, end=12)
+ second = _structured_suggestion(start=40, end=42)
+ assert generate_marker(first) == generate_marker(second)
diff --git a/tests/unittest/test_inline_comments_dedup_constants.py b/tests/unittest/test_inline_comments_dedup_constants.py
new file mode 100644
index 0000000000..49855b9bf7
--- /dev/null
+++ b/tests/unittest/test_inline_comments_dedup_constants.py
@@ -0,0 +1,27 @@
+"""Smoke tests for resolve-outdated constants and base GitProvider defaults."""
+
+from pr_agent.algo.inline_comments_dedup import (
+ RESOLVED_BODY_MARKER,
+ RESOLVED_NOTE,
+)
+
+
+def test_resolved_marker_is_html_comment():
+    assert RESOLVED_BODY_MARKER.startswith("<!--")
+    assert RESOLVED_BODY_MARKER.endswith("-->")
+ assert "\n" not in RESOLVED_BODY_MARKER
+
+
+def test_resolved_marker_detectable_only_when_present():
+ body_with = "some comment body\n\n---\n_" + RESOLVED_NOTE + "_\n" + RESOLVED_BODY_MARKER
+    body_without = "an unrelated comment that quotes a plain '<!-- -->' snippet"
+ assert RESOLVED_BODY_MARKER in body_with
+ assert RESOLVED_BODY_MARKER not in body_without
+
+
+def test_base_provider_defaults_return_false():
+ from pr_agent.git_providers.git_provider import GitProvider
+
+ # GitProvider is abstract; verify defaults via the unbound methods.
+ assert GitProvider.resolve_review_thread(None, {"id": 1}) is False
+ assert GitProvider.unresolve_review_thread(None, {"id": 1}) is False
diff --git a/tests/unittest/test_litellm_claude_extended_thinking.py b/tests/unittest/test_litellm_claude_extended_thinking.py
new file mode 100644
index 0000000000..493211f8df
--- /dev/null
+++ b/tests/unittest/test_litellm_claude_extended_thinking.py
@@ -0,0 +1,70 @@
+from unittest.mock import MagicMock, patch
+
+import pytest
+
+import pr_agent.algo.ai_handlers.litellm_ai_handler as litellm_handler
+from pr_agent.algo import CLAUDE_EXTENDED_THINKING_MODELS
+from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
+
+
+def create_mock_settings(override):
+ """Create a fake settings object with configurable Claude extended-thinking override."""
+ return type('', (), {
+ 'config': type('', (), {
+ 'verbosity_level': 0,
+ 'get': lambda self, key, default=None: override
+ if key == "claude_extended_thinking_models_override"
+ else default,
+ })(),
+ 'litellm': type('', (), {
+ 'get': lambda self, key, default=None: default
+ })(),
+ 'get': lambda self, key, default=None: default
+ })()
+
+
+@pytest.fixture
+def mock_logger():
+ """Mock logger to capture warning calls."""
+ with patch('pr_agent.algo.ai_handlers.litellm_ai_handler.get_logger') as mock_log:
+ mock_log_instance = MagicMock()
+ mock_log.return_value = mock_log_instance
+ yield mock_log_instance
+
+
+def test_claude_extended_thinking_override_invalid_type_warns_and_uses_default(monkeypatch, mock_logger):
+ fake_settings = create_mock_settings("claude-3-7-sonnet-latest")
+ monkeypatch.setattr(litellm_handler, "get_settings", lambda: fake_settings)
+
+ handler = LiteLLMAIHandler()
+
+ assert handler.claude_extended_thinking_models == CLAUDE_EXTENDED_THINKING_MODELS
+ mock_logger.warning.assert_called_once()
+ warning_call = mock_logger.warning.call_args[0][0]
+ assert "Invalid claude_extended_thinking_models_override" in warning_call
+ assert "expected a list" in warning_call
+
+
+def test_claude_extended_thinking_override_invalid_model_names_warns_and_uses_default(monkeypatch, mock_logger):
+ fake_settings = create_mock_settings(["claude-3-7-sonnet-latest", 123])
+ monkeypatch.setattr(litellm_handler, "get_settings", lambda: fake_settings)
+
+ handler = LiteLLMAIHandler()
+
+ assert handler.claude_extended_thinking_models == CLAUDE_EXTENDED_THINKING_MODELS
+ mock_logger.warning.assert_called_once()
+ warning_call = mock_logger.warning.call_args[0][0]
+ assert "Invalid claude_extended_thinking_models_override" in warning_call
+ assert "expected a list of model name strings" in warning_call
+
+
+def test_claude_extended_thinking_override_valid_list_replaces_default(monkeypatch, mock_logger):
+ override = ["custom-claude-model"]
+ fake_settings = create_mock_settings(override)
+ monkeypatch.setattr(litellm_handler, "get_settings", lambda: fake_settings)
+
+ handler = LiteLLMAIHandler()
+
+ assert handler.claude_extended_thinking_models == override
+ assert handler.claude_extended_thinking_models is not override
+ mock_logger.warning.assert_not_called()
diff --git a/tests/unittest/test_repo_context.py b/tests/unittest/test_repo_context.py
new file mode 100644
index 0000000000..b4ae4303c8
--- /dev/null
+++ b/tests/unittest/test_repo_context.py
@@ -0,0 +1,160 @@
+from unittest.mock import Mock
+
+from jinja2 import Environment, StrictUndefined
+
+from pr_agent.algo.repo_context import build_repo_context
+from pr_agent.config_loader import get_settings
+from pr_agent.git_providers.git_provider import GitProvider
+from pr_agent.git_providers.github_provider import GithubProvider
+
+
+class FakeProvider:
+ def __init__(self, files):
+ self.files = files
+ self.requested_paths = []
+
+ def get_repo_file_content(self, file_path: str):
+ self.requested_paths.append(file_path)
+ return self.files.get(file_path)
+
+
+def test_build_repo_context_returns_empty_when_no_files_configured():
+ original_files = get_settings().config.get("repo_context_files", [])
+ try:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", [])
+
+ assert build_repo_context(FakeProvider({"AGENTS.md": "repo purpose"})) == ""
+ finally:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", original_files)
+
+
+def test_build_repo_context_fetches_and_formats_configured_files():
+ original_files = get_settings().config.get("repo_context_files", [])
+ original_max_lines = get_settings().config.get("repo_context_max_lines", 500)
+ try:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", ["AGENTS.md", "CONTRIBUTING.md"])
+ get_settings().set("CONFIG.REPO_CONTEXT_MAX_LINES", 500)
+ provider = FakeProvider({
+ "AGENTS.md": "# Agent Guide\nUse focused tests.",
+ "CONTRIBUTING.md": "Keep PRs small.",
+ })
+
+ context = build_repo_context(provider)
+
+ assert context == (
+ "## AGENTS.md\n"
+ "# Agent Guide\n"
+ "Use focused tests.\n\n"
+ "## CONTRIBUTING.md\n"
+ "Keep PRs small."
+ )
+ assert provider.requested_paths == ["AGENTS.md", "CONTRIBUTING.md"]
+ finally:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", original_files)
+ get_settings().set("CONFIG.REPO_CONTEXT_MAX_LINES", original_max_lines)
+
+
+def test_build_repo_context_skips_missing_and_invalid_files():
+ original_files = get_settings().config.get("repo_context_files", [])
+ try:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", ["", 7, "MISSING.md", "AGENTS.md"])
+ provider = FakeProvider({"AGENTS.md": "Loaded context"})
+
+ assert build_repo_context(provider) == "## AGENTS.md\nLoaded context"
+ assert provider.requested_paths == ["MISSING.md", "AGENTS.md"]
+ finally:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", original_files)
+
+
+def test_build_repo_context_enforces_total_line_cap():
+ original_files = get_settings().config.get("repo_context_files", [])
+ original_max_lines = get_settings().config.get("repo_context_max_lines", 500)
+ try:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", ["AGENTS.md", "CONTRIBUTING.md"])
+ get_settings().set("CONFIG.REPO_CONTEXT_MAX_LINES", 4)
+ provider = FakeProvider({
+ "AGENTS.md": "one\ntwo\nthree",
+ "CONTRIBUTING.md": "four\nfive",
+ })
+
+ context = build_repo_context(provider)
+
+ assert context == "## AGENTS.md\none\ntwo\nthree"
+ finally:
+ get_settings().set("CONFIG.REPO_CONTEXT_FILES", original_files)
+ get_settings().set("CONFIG.REPO_CONTEXT_MAX_LINES", original_max_lines)
+
+
+def test_base_provider_repo_file_content_returns_empty():
+ assert GitProvider.get_repo_file_content(None, "AGENTS.md") == ""
+
+
+def test_github_provider_fetches_repo_file_content_from_default_branch():
+ provider = GithubProvider.__new__(GithubProvider)
+ provider.repo_obj = Mock()
+ provider.repo_obj.get_contents.return_value.decoded_content = b"repo context"
+
+ assert provider.get_repo_file_content("AGENTS.md") == "repo context"
+ provider.repo_obj.get_contents.assert_called_once_with("AGENTS.md")
+
+
+def test_reviewer_prompt_renders_repo_context_block():
+ variables = {
+ "extra_instructions": "",
+ "repo_context": "## AGENTS.md\nRepo purpose",
+ "require_can_be_split_review": False,
+ "related_tickets": "",
+ "require_estimate_contribution_time_cost": False,
+ "require_score": False,
+ "require_tests": True,
+ "question_str": "",
+ "require_security_review": True,
+ "require_todo_scan": False,
+ "require_estimate_effort_to_review": True,
+ "num_max_findings": 3,
+ "num_pr_files": 1,
+ "is_ai_metadata": False,
+ }
+
+ rendered = Environment(undefined=StrictUndefined).from_string(
+ get_settings().pr_review_prompt.system
+ ).render(variables)
+
+ assert "Repository context:" in rendered
+ assert "## AGENTS.md" in rendered
+
+
+def test_description_prompt_renders_repo_context_block():
+ variables = {
+ "extra_instructions": "",
+ "repo_context": "## AGENTS.md\nRepo purpose",
+ "enable_custom_labels": False,
+ "custom_labels_class": "",
+ "enable_semantic_files_types": True,
+ "include_file_summary_changes": True,
+ "enable_pr_diagram": False,
+ }
+
+ rendered = Environment(undefined=StrictUndefined).from_string(
+ get_settings().pr_description_prompt.system
+ ).render(variables)
+
+ assert "Repository context:" in rendered
+ assert "## AGENTS.md" in rendered
+
+
+def test_code_suggestions_prompt_renders_repo_context_block():
+ variables = {
+ "extra_instructions": "",
+ "repo_context": "## AGENTS.md\nRepo purpose",
+ "focus_only_on_problems": True,
+ "num_code_suggestions": 3,
+ "is_ai_metadata": False,
+ }
+
+ rendered = Environment(undefined=StrictUndefined).from_string(
+ get_settings().pr_code_suggestions_prompt.system
+ ).render(variables)
+
+ assert "Repository context:" in rendered
+ assert "## AGENTS.md" in rendered