fix: skip empty llm summaries #8195

Open
he-yufeng wants to merge 1 commit into AstrBotDevs:master from he-yufeng:fix/llm-summary-empty-content

Conversation

Contributor

@he-yufeng commented May 15, 2026

Summary

  • skip LLM context compression when the summary response is empty
  • log a warning instead of inserting an empty summary placeholder into history
  • add a regression test for blank completion_text

Closes #8133.
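The guard described above can be sketched as follows. This is a minimal, hypothetical reconstruction: the class name `LLMSummaryCompressor` appears in the PR, but the method names, provider interface, and message shapes here are illustrative assumptions, not AstrBot's actual API.

```python
import logging

logger = logging.getLogger(__name__)


class LLMSummaryCompressor:
    """Illustrative sketch of the empty-summary guard; not AstrBot's real class."""

    def __init__(self, provider):
        # `provider` is assumed to expose an async `text_chat(prompt)` method.
        self.provider = provider

    async def compress(self, messages: list[dict]) -> list[dict]:
        response = await self.provider.text_chat(self._build_prompt(messages))
        # Strip the completion so whitespace-only responses also count as empty.
        summary = (getattr(response, "completion_text", "") or "").strip()
        if not summary:
            # Skip compression entirely: warn and return the history unchanged
            # instead of inserting an empty summary placeholder.
            logger.warning(
                "LLM returned an empty summary; skipping context compression."
            )
            return messages
        return [{"role": "system", "content": summary}]

    def _build_prompt(self, messages: list[dict]) -> str:
        return "Summarize:\n" + "\n".join(m.get("content", "") for m in messages)
```

The key design choice is that the empty case is a no-op on history rather than an exception, so a flaky provider degrades gracefully to "no compression this turn".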

To verify

  • .venv\Scripts\python.exe -m py_compile astrbot\core\agent\context\compressor.py tests\agent\test_context_manager.py
  • .venv\Scripts\python.exe -m ruff check astrbot\core\agent\context\compressor.py tests\agent\test_context_manager.py
  • .venv\Scripts\python.exe -m pytest tests\agent\test_context_manager.py -q --basetemp .tmp\pytest -p no:cacheprovider
  • git diff --check

Summary by Sourcery

Handle empty LLM-generated summaries safely in the context compressor to avoid altering history when no usable summary is returned.

Bug Fixes:

  • Prevent LLMSummaryCompressor from inserting an empty summary into the message history when the LLM returns blank or missing completion text.

Tests:

  • Add an async regression test ensuring LLMSummaryCompressor preserves history and logs a warning when the LLM returns an empty summary.

Contributor

@sourcery-ai (bot) left a comment


Hey - I've left some high level feedback:

  • Consider including contextual details (e.g., provider name or conversation identifier) in the warning log for empty summaries to make it easier to trace and debug in production.
  • To avoid brittle coupling between the test and implementation, you could define the warning message as a constant and reference it both in LLMSummaryCompressor and in the test assertion instead of duplicating the literal string.
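The second suggestion can be sketched as a shared module-level constant. Everything here is illustrative: the constant name, logger name, and helper are assumptions about how the suggestion could be applied, not code from the PR.

```python
import logging

# Single source of truth for the warning text, so the implementation and the
# test cannot drift apart if the message is ever reworded.
EMPTY_SUMMARY_WARNING = (
    "LLM returned an empty summary; skipping context compression."
)

logger = logging.getLogger("compressor_sketch")


def warn_empty_summary() -> None:
    # The compressor would call this (or log the constant directly)...
    logger.warning(EMPTY_SUMMARY_WARNING)


# ...and the test would assert on the same constant, e.g. with pytest:
#     assert EMPTY_SUMMARY_WARNING in caplog.text
```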

Contributor

@gemini-code-assist (bot) left a comment


Code Review

This pull request enhances the LLMSummaryCompressor to robustly handle empty or whitespace-only responses from the LLM. It now strips the completion text and returns the original message history if the summary is empty, accompanied by a warning log. Additionally, a new test case has been added to verify this fallback behavior. I have no feedback to provide as there were no review comments.

@dosubot (bot) added the area:core, area:provider, and size:XS labels on May 15, 2026

Labels

  • area:core — The bug / feature is about astrbot's core, backend
  • area:provider — The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner
  • size:XS — This PR changes 0-9 lines, ignoring generated files

Development

Successfully merging this pull request may close these issues.

[Bug] LLM context compression inserts empty summary without logs or warning

1 participant