
Fix #5808: Add support for Claude 4.7 Opus (no assistant prefill, drop temperature)#5810

Open
devin-ai-integration[bot] wants to merge 1 commit into main from devin/1778761109-fix-claude-4.7-opus-support

Conversation

@devin-ai-integration (Contributor) Bot commented May 14, 2026

Summary

Fixes #5808 — Claude 4.7 Opus (and the Claude 4.6+ family) fails with "AnthropicException - This model does not support assistant message prefill" because CrewAI's agent executor appends tool-use results as assistant-role messages, leaving the conversation ending with an assistant turn. Anthropic's newer models explicitly reject this.

Root causes addressed:

  1. Assistant message prefill: CrewAgentExecutor._invoke_loop() appends the agent's response (including the Observation: from tool execution) as a single role="assistant" message. On the next LLM call, the messages array ends with an assistant turn — which Claude 4.6+ rejects.

  2. Temperature parameter: Claude 4.7 Opus rejects the temperature parameter (temperature is deprecated for this model). LiteLLM's drop_params=True does not catch this because litellm still incorrectly reports temperature as supported for these models.
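A minimal sketch of the temperature workaround, under the assumption stated above (function name and the name-matching fallback are illustrative; the actual logic lives in LLM.call() and also consults litellm's model info):

```python
def prepare_completion_params(model: str, params: dict) -> dict:
    """Drop temperature for Claude 4.6+ models that reject it.

    Hedged sketch: only pattern-matches the model name; the real
    implementation also queries litellm's model registry.
    """
    out = dict(params)
    # Claude 4.6+ models (e.g. claude-opus-4-7) reject the temperature param
    if "claude" in model and any(f"-4-{minor}" in model for minor in "6789"):
        out.pop("temperature", None)  # becomes a no-op once litellm drops it upstream
    return out
```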

Changes:

  • LLM.supports_assistant_prefill() — queries litellm.get_model_info() for the supports_assistant_prefill flag; falls back to a name-based heuristic for models not in litellm's registry (matches claude-*-4-6, claude-*-4-7, etc.).
  • LLM._is_anthropic_no_prefill_model() — convenience wrapper around the prefill check; also used to drop the temperature parameter before calling litellm.completion().
  • CrewAgentExecutor._append_assistant_response() — for no-prefill models, splits the Observation: portion into a separate role="user" message so the conversation never ends with an assistant turn. For models that support prefill, behaviour is unchanged.
  • 17 new unit tests covering detection logic, temperature dropping, and message-splitting behaviour across multiple model types.
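The detection described in the first bullet can be sketched roughly as follows (hedged: the function name and the supports_assistant_prefill registry key follow the PR description; the actual method is on LLM and may differ):

```python
import re

# Name-based fallback: matches claude-*-4-6, claude-*-4-7, etc.
_NO_PREFILL_PATTERN = re.compile(r"claude-(?:[a-z0-9]+-)*4-(?:[6-9]|[1-9]\d)")

def supports_assistant_prefill(model: str) -> bool:
    """Return False for Anthropic models that reject trailing assistant turns."""
    try:
        import litellm  # optional; fall back to the heuristic if unavailable
        info = litellm.get_model_info(model)
        prefill = info.get("supports_assistant_prefill")
        if prefill is not None:
            return bool(prefill)
    except Exception:
        pass  # unknown model or litellm missing: use the name heuristic
    return _NO_PREFILL_PATTERN.search(model) is None
```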

Review & Testing Checklist for Human

  • Verify with actual Claude 4.7 Opus API key: run a simple crew with tools using model="claude-opus-4-7" and confirm no assistant message prefill error
  • Verify temperature handling: create an LLM(model="claude-opus-4-7", temperature=0.7) and confirm no temperature is deprecated error
  • Regression check with Claude 3.x: run the same crew with model="claude-3-opus-20240229" and confirm existing behaviour is preserved (single assistant message, temperature works)
  • Regression check with OpenAI/Gemini: run with model="gpt-4o" to confirm no behavioural change for non-Anthropic models

Notes

  • The 2 pre-existing test failures (test_agent_powered_by_new_o_model_family_*) are unrelated — they fail because litellm 1.50.2 cannot resolve the o1-preview model name.
  • The temperature workaround is defensive; once litellm fixes BerriAI/litellm#26444, the params.pop("temperature") will be a no-op.

Link to Devin session: https://app.devin.ai/sessions/91e01d2a89e04f188e77f8cb39edb2d8

Summary by CodeRabbit

  • New Features

    • Enhanced support for latest Anthropic Claude models (Claude 4.6+).
    • Improved temperature parameter handling for compatibility with newer Claude versions.
    • Optimized conversation message formatting for better conversation flow.
  • Tests

    • Comprehensive test coverage added for model compatibility, parameter management, and message handling.

Review Change Stack

…p temperature)

- Add LLM.supports_assistant_prefill() to detect Anthropic models that
  reject trailing assistant messages (Claude 4.6+)
- Add CrewAgentExecutor._append_assistant_response() to split the
  observation into a separate user-role message for no-prefill models,
  ensuring the conversation never ends with an assistant turn
- Drop the temperature parameter for Claude 4.6+ models that reject it
- Add 17 unit tests covering detection, temperature dropping, and
  message splitting behaviour

Co-Authored-By: João <joao@crewai.com>
@devin-ai-integration
Contributor Author

Prompt hidden (unlisted session)

@devin-ai-integration
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@coderabbitai

coderabbitai Bot commented May 14, 2026

📝 Walkthrough

This PR adds support for Claude 4.7 Opus and other newer Anthropic models that lack assistant message prefill by detecting model capabilities, dropping unsupported parameters, and reformatting conversation messages to end on user turns rather than assistant turns.

Changes

Claude 4.7 Opus Support

  • LLM prefill capability detection — src/crewai/llm.py: LLM detects whether the underlying model supports assistant message prefill via litellm's model registry, falling back to Claude version regex parsing; identifies Claude 4.6+ models that reject assistant prefill.
  • Temperature parameter removal for no-prefill models — src/crewai/llm.py: LLM.call() conditionally removes the temperature parameter when the model is identified as an Anthropic no-prefill model, avoiding request failures.
  • CrewAgentExecutor prefill-aware message formatting — src/crewai/agents/crew_agent_executor.py: CrewAgentExecutor stores the LLM's prefill capability during init and uses the _append_assistant_response() helper to adapt message formatting: standard assistant messages when prefill is supported; Observation content relocated to user-role messages when it is not, ensuring conversations end on user turns.
  • Test coverage — tests/test_claude_opus_4_7_support.py: pytest suite validating prefill detection across model types, temperature parameter dropping, and CrewAgentExecutor message-role handling for both prefill-supported and prefill-unsupported scenarios, including edge cases and multi-iteration sequences.
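The Observation relocation described in the executor row can be sketched as a standalone helper (hypothetical free function mirroring _append_assistant_response(), which is actually a CrewAgentExecutor method):

```python
def append_assistant_response(messages, text, supports_prefill):
    """Append the agent's response; split the Observation for no-prefill models.

    Sketch only: names and the "Observation:" marker follow the PR description.
    """
    marker = "Observation:"
    if supports_prefill or marker not in text:
        # Prefill-capable models keep the original single assistant message
        messages.append({"role": "assistant", "content": text})
        return
    head, _, observation = text.partition(marker)
    messages.append({"role": "assistant", "content": head.rstrip()})
    # Tool output moves to a user turn so the conversation never ends
    # with an assistant message (which Claude 4.6+ rejects)
    messages.append({"role": "user", "content": marker + observation})
```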

🐰 A Poet's Note:

Claude 4.7 arrives with thoughts so new,
No prefill dance, no temps will do,
We split observations user-bound,
So conversations turn around,
A clever hop to keep things true! 🌟

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 48.15%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed: the title clearly and specifically describes the main changes: adding support for Claude 4.7 Opus by implementing assistant prefill detection and temperature parameter dropping.
  • Linked Issues check — ✅ Passed: the PR implements all coding requirements from the linked issues: detects no-prefill models via LLM.supports_assistant_prefill(), drops temperature for affected models, and adjusts message handling in CrewAgentExecutor to avoid ending on assistant turns.
  • Out of Scope Changes check — ✅ Passed: all changes directly address the linked issues; the modifications to LLM, CrewAgentExecutor, and the test coverage are scoped to supporting Claude 4.7 Opus and handling the temperature parameter.



Comment thread — src/crewai/llm.py:

    prefill = info.get("supports_assistant_prefill")
    if "anthropic" in provider and prefill is False:
        return False
    except Exception:

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/crewai/agents/crew_agent_executor.py`:
- Line 72: The code unconditionally calls self.llm.supports_assistant_prefill()
which raises AttributeError for custom LLM adapters that lack that method;
change the assignment to safely check for the method on self.llm (e.g., via
hasattr/getattr) and default supports_prefill to True when the method is absent
so backward compatibility is preserved; update the code around the
supports_prefill assignment in CrewAgentExecutor (referencing self.llm and
supports_assistant_prefill) to perform the guarded call and fallback.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro Plus

Run ID: b82b5286-f78e-4cbd-993c-33c58817c69d

📥 Commits

Reviewing files that changed from the base of the PR and between c36827b and 181a8c1.

📒 Files selected for processing (3)
  • src/crewai/agents/crew_agent_executor.py
  • src/crewai/llm.py
  • tests/test_claude_opus_4_7_support.py

src/crewai/agents/crew_agent_executor.py:

    self.original_tools = original_tools
    self.step_callback = step_callback
    self.use_stop_words = self.llm.supports_stop_words()
    self.supports_prefill = self.llm.supports_assistant_prefill()

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Guard supports_assistant_prefill() for custom LLM adapters.

Line 72 unconditionally calls a new method on llm; custom adapters that previously worked can now fail during executor initialization with AttributeError. Defaulting to True when the method is absent preserves backward compatibility.

Proposed fix
-        self.supports_prefill = self.llm.supports_assistant_prefill()
+        supports_prefill_fn = getattr(self.llm, "supports_assistant_prefill", None)
+        self.supports_prefill = (
+            supports_prefill_fn() if callable(supports_prefill_fn) else True
+        )

