Fix #5808: Add support for Claude 4.7 Opus (no assistant prefill, drop temperature) #5810

devin-ai-integration[bot] wants to merge 1 commit into
Conversation
Fix #5808: Add support for Claude 4.7 Opus (no assistant prefill, drop temperature)

- Add LLM.supports_assistant_prefill() to detect Anthropic models that reject trailing assistant messages (Claude 4.6+)
- Add CrewAgentExecutor._append_assistant_response() to split the observation into a separate user-role message for no-prefill models, ensuring the conversation never ends with an assistant turn
- Drop the temperature parameter for Claude 4.6+ models that reject it
- Add 17 unit tests covering detection, temperature dropping, and message splitting behaviour

Co-Authored-By: João <joao@crewai.com>
📝 Walkthrough

This PR adds support for Claude 4.7 Opus and other newer Anthropic models that lack assistant message prefill by detecting model capabilities, dropping unsupported parameters, and reformatting conversation messages to end on user turns rather than assistant turns.
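To make the message reshaping concrete, here is a hypothetical before/after example (the message contents are invented for illustration; only the role ordering reflects the PR's description):

```python
# Before (rejected by Claude 4.6+): the conversation ends on an assistant turn,
# because the tool Observation was appended to the assistant message.
messages_before = [
    {"role": "user", "content": "What is 2 + 2? Use the calculator tool."},
    {"role": "assistant", "content": "Thought: I should use the calculator.\nObservation: 4"},
]

# After: the Observation is split into its own user-role message,
# so the conversation ends on a user turn.
messages_after = [
    {"role": "user", "content": "What is 2 + 2? Use the calculator tool."},
    {"role": "assistant", "content": "Thought: I should use the calculator."},
    {"role": "user", "content": "Observation: 4"},
]
```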
🎯 Review effort: 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (warning)
Excerpts from the diff under review:

```python
from unittest.mock import MagicMock, patch

import pytest
```

```python
prefill = info.get("supports_assistant_prefill")
if "anthropic" in provider and prefill is False:
    return False
except Exception:
```
Actionable comments posted: 1
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro Plus
Run ID: b82b5286-f78e-4cbd-993c-33c58817c69d
📒 Files selected for processing (3)
- src/crewai/agents/crew_agent_executor.py
- src/crewai/llm.py
- tests/test_claude_opus_4_7_support.py
```python
self.original_tools = original_tools
self.step_callback = step_callback
self.use_stop_words = self.llm.supports_stop_words()
self.supports_prefill = self.llm.supports_assistant_prefill()
```
Guard supports_assistant_prefill() for custom LLM adapters.
Line 72 unconditionally calls a new method on llm; custom adapters that previously worked can now fail during executor initialization with AttributeError. Defaulting to True when the method is absent preserves backward compatibility.
Proposed fix

```diff
- self.supports_prefill = self.llm.supports_assistant_prefill()
+ supports_prefill_fn = getattr(self.llm, "supports_assistant_prefill", None)
+ self.supports_prefill = (
+     supports_prefill_fn() if callable(supports_prefill_fn) else True
+ )
```
+ )🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/crewai/agents/crew_agent_executor.py` at line 72: The code
unconditionally calls self.llm.supports_assistant_prefill() which raises
AttributeError for custom LLM adapters that lack that method; change the
assignment to safely check for the method on self.llm (e.g., via
hasattr/getattr) and default supports_prefill to True when the method is absent
so backward compatibility is preserved; update the code around the
supports_prefill assignment in CrewAgentExecutor (referencing self.llm and
supports_assistant_prefill) to perform the guarded call and fallback.
Summary
Fixes #5808 — Claude 4.7 Opus (and the Claude 4.6+ family) fails with `AnthropicException - This model does not support assistant message prefill` because CrewAI's agent executor appends tool-use results as assistant-role messages, leaving the conversation ending with an assistant turn. Anthropic's newer models explicitly reject this.

Root causes addressed:
1. Assistant message prefill: `CrewAgentExecutor._invoke_loop()` appends the agent's response (including the `Observation:` from tool execution) as a single `role="assistant"` message. On the next LLM call, the messages array ends with an assistant turn — which Claude 4.6+ rejects.
2. Temperature parameter: Claude 4.7 Opus rejects the `temperature` parameter (temperature is deprecated for this model). LiteLLM's `drop_params=True` does not catch this because litellm still incorrectly reports `temperature` as supported for these models (see the sketch below).
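Since `drop_params=True` cannot be relied on here, the parameter has to be removed explicitly. A minimal sketch of that idea, assuming a simplified parameter-preparation step (`_prepare_params` is an illustrative name, not the PR's verbatim code):

```python
def _prepare_params(self, params: dict) -> dict:
    # Claude 4.6+ rejects temperature outright, and litellm's drop_params=True
    # won't remove it because litellm still reports it as supported.
    if self._is_anthropic_no_prefill_model():
        params.pop("temperature", None)  # no-op if temperature was never set
    return params
```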
Changes:
- `LLM.supports_assistant_prefill()` — queries `litellm.get_model_info()` for the `supports_assistant_prefill` flag; falls back to a name-based heuristic for models not in litellm's registry (matches `claude-*-4-6`, `claude-*-4-7`, etc.).
- `LLM._is_anthropic_no_prefill_model()` — convenience wrapper used to also drop `temperature` before calling `litellm.completion()`.
- `CrewAgentExecutor._append_assistant_response()` — for no-prefill models, splits the `Observation:` portion into a separate `role="user"` message so the conversation never ends with an assistant turn. For models that support prefill, behaviour is unchanged. A sketch of these helpers follows this list.
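A self-contained sketch of how the two core helpers might look, pieced together from the description above and the diff excerpts quoted earlier; the exact regex, dict keys, and `Observation:` split marker are assumptions rather than the PR's verbatim code:

```python
import re

import litellm

# Assumption: matches model names such as claude-opus-4-7 and claude-sonnet-4-6.
_NO_PREFILL_NAME = re.compile(r"claude-\w+-4-[67]")


def supports_assistant_prefill(model: str) -> bool:
    """Return False for Anthropic models that reject a trailing assistant message."""
    try:
        info = litellm.get_model_info(model)
        provider = info.get("litellm_provider") or ""
        prefill = info.get("supports_assistant_prefill")
        if "anthropic" in provider and prefill is False:
            return False
    except Exception:
        # Model not in litellm's registry: fall back to the name heuristic.
        return _NO_PREFILL_NAME.search(model) is None
    return True


def append_assistant_response(
    messages: list[dict[str, str]], text: str, supports_prefill: bool
) -> None:
    """Append the agent's response, splitting the Observation for no-prefill models."""
    marker = "\nObservation:"
    if supports_prefill or marker not in text:
        messages.append({"role": "assistant", "content": text})
        return
    assistant_part, observation = text.split(marker, 1)
    messages.append({"role": "assistant", "content": assistant_part})
    messages.append({"role": "user", "content": "Observation:" + observation})
```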
model="claude-opus-4-7"and confirm noassistant message prefillerrorLLM(model="claude-opus-4-7", temperature=0.7)and confirm notemperature is deprecatederrormodel="claude-3-opus-20240229"and confirm existing behaviour is preserved (single assistant message, temperature works)model="gpt-4o"to confirm no behavioural change for non-Anthropic modelsNotes
Notes
- The failing tests (`test_agent_powered_by_new_o_model_family_*`) are unrelated — they fail because litellm 1.50.2 cannot resolve the `o1-preview` model name.
- …`params.pop("temperature")` will be a no-op.

Link to Devin session: https://app.devin.ai/sessions/91e01d2a89e04f188e77f8cb39edb2d8
Summary by CodeRabbit
New Features
Tests