Problem summary
Task Succeeded. Result: LLM request failed: Error code: 400 - {'error': {'message': 'System message must be at the beginning.', 'type': 'BadRequestError', 'param': None, 'code': 400}}
Expected behavior
The subagent should execute successfully.
Actual behavior
While debugging the backend code, I noticed that the order in which the subagent's messages are assembled is wrong. The specific error is as follows:
================================== Ai Message ==================================
...
Now starting work...
Tool Calls:
read_file (chatcmpl-tool-b4ae020cd7799b40)
Call ID: chatcmpl-tool-b4ae020cd7799b40
Args:
description: Load writing style guidelines
path: /mnt/skills/custom/patent-navigation-report/writing_style/guidelines.md
read_file (chatcmpl-tool-8e84c3daecada076)
Call ID: chatcmpl-tool-8e84c3daecada076
Args:
description: Load chart specification
path: /mnt/skills/custom/patent-navigation-report/plotting/chart_specification.md
read_file (chatcmpl-tool-908f835968f3c2cb)
Call ID: chatcmpl-tool-908f835968f3c2cb
Args:
description: Load general methodology
path: /mnt/skills/custom/patent-navigation-report/methodology/general_methodology.md
task (chatcmpl-tool-9586c236eb5954cd)
Call ID: chatcmpl-tool-9586c236eb5954cd
Args:
description: Research global CF textile patents
prompt: Research the latest information on carbon fiber textile patents globally and in China. I need:
...
Use web_search and web_fetch tools to gather this information. Be thorough and cite your sources.
subagent_type: general-purpose
task (chatcmpl-tool-b581e9d9dc23a3bb)
Call ID: chatcmpl-tool-b581e9d9dc23a3bb
Args:
description: Research CF textile companies
prompt: Research the major companies in the carbon fiber textile industry, focusing on their patent portfolios and competitive positions. I need detailed information on:
....
Search the web for recent news, company reports, and patent database information. Provide specific data and cite sources.
subagent_type: general-purpose
task (chatcmpl-tool-8191dc82aa4faa8a)
Call ID: chatcmpl-tool-8191dc82aa4faa8a
Args:
description: Research CF textile market policies
prompt: Research the market size, industry policies, and technology trends for carbon fiber textiles globally and in China. I need:
...
Search the web for recent industry reports, government policy documents, and market research. Provide specific numbers and cite sources.
subagent_type: general-purpose
/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/pydantic/functional_validators.py:835: UserWarning: Pydantic serializer warnings:
PydanticSerializationUnexpectedValue(Expected none - serialized value may not be as expected [field_name='context', input_value={'thread_id': 'debug-thre...sandbox_id': '761c3b10'}, input_type=dict])
function=lambda v, h: h(v), schema=original_schema
/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/pydantic/main.py:475: UserWarning: Pydantic serializer warnings:
PydanticSerializationUnexpectedValue(Expected none - serialized value may not be as expected [field_name='context', input_value={'thread_id': 'debug-thre...sandbox_id': '761c3b10'}, input_type=dict])
return self.pydantic_serializer.to_python(
================================= Tool Message =================================
Name: task
Task Succeeded. Result: LLM request failed: Error code: 400 - {'error': {'message': 'System message must be at the beginning.', 'type': 'BadRequestError', 'param': None, 'code': 400}}
Operating system
Linux
Platform details
No response
Python version
No response
Node.js version
No response
pnpm version
pnpm 10.33.1
uv version
No response
How are you running DeerFlow?
Local (make dev)
Reproduction steps
make dev
Relevant logs
I checked the backend logs; they are as follows:
2026-05-02 14:55:23 - deerflow.subagents.executor - INFO - [trace=4b2574c4] Subagent general-purpose captured AI message #1
2026-05-02 14:55:23 - deerflow.subagents.executor - INFO - [trace=4b2574c4] Subagent general-purpose completed async execution
2026-05-02 14:55:23 - deerflow.subagents.executor - INFO - [trace=4b2574c4] Subagent general-purpose final messages count: 24
2026-05-02 14:55:23 - httpx - INFO - HTTP Request: POST http://127.0.0.1:23333/v1/chat/completions "HTTP/1.1 400 Bad Request"
2026-05-02 14:55:23 - deerflow.agents.middlewares.llm_error_handling_middleware - WARNING - LLM call failed after 1 attempt(s): Error code: 400 - {'error': {'message': 'System message must be at the beginning.', 'type': 'BadRequestError', 'param': None, 'code': 400}}
Traceback (most recent call last):
File "/data2/deer-flow/backend/packages/harness/deerflow/agents/middlewares/llm_error_handling_middleware.py", line 266, in awrap_model_call
response = await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/langchain/agents/factory.py", line 1330, in _execute_model_async
output = await model_.ainvoke(messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 5752, in ainvoke
return await self.bound.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 494, in ainvoke
llm_result = await self.agenerate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1760, in agenerate_prompt
return await self.agenerate(
^^^^^^^^^^^^^^^^^^^^^
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1718, in agenerate
raise exceptions[0]
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 2053, in _agenerate_with_cache
result = await self._agenerate(
^^^^^^^^^^^^^^^^^^^^^^
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 1923, in _agenerate
_handle_openai_bad_request(e)
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 1918, in _agenerate
raw_response = await self.async_client.with_raw_response.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 384, in wrapped
return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2714, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1913, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data2/deer-flow/backend/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1698, in request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'System message must be at the beginning.', 'type': 'BadRequestError', 'param': None, 'code': 400}}
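For context on the 400 above: OpenAI-compatible endpoints such as the one at 127.0.0.1:23333 commonly require any system message to be the very first element of the `messages` array, so a subagent that appends its system prompt after other history will be rejected. A minimal sketch of the kind of ordering guard that would avoid this (hypothetical helper, not DeerFlow's actual code; `Message` and `ensure_system_first` are illustrative names only):

```python
# Hypothetical sketch: normalize message order before calling the model so
# that system messages come first, as strict OpenAI-compatible servers require.
from dataclasses import dataclass


@dataclass
class Message:
    role: str     # "system", "user", "assistant", or "tool"
    content: str


def ensure_system_first(messages: list[Message]) -> list[Message]:
    """Move all system messages to the front, preserving relative order."""
    system = [m for m in messages if m.role == "system"]
    rest = [m for m in messages if m.role != "system"]
    return system + rest


if __name__ == "__main__":
    # Broken order, as in the bug: system prompt appended after history.
    msgs = [
        Message("user", "Research carbon fiber textile patents"),
        Message("system", "You are a general-purpose research subagent."),
    ]
    fixed = ensure_system_first(msgs)
    assert fixed[0].role == "system"
```

If this matches the root cause, the fix would belong wherever the subagent executor assembles the prompt before `model_.ainvoke(messages)` in the traceback above.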
Git state
commit eba3b9e (HEAD -> main, origin/main, origin/HEAD)
Author: He Wang wanghechn@qq.com
Date: Thu Apr 30 22:27:14 2026 +0800
fix(config): unify log_level from config.yaml across Gateway and debug entry points (#2601)
Centralize log level parsing in `logging_level_from_config()` and
application in `apply_logging_level()` within `deerflow.config.app_config`.
- Gateway lifespan applies configured log level on startup
- `debug.py` uses shared helpers instead of local duplicates
- `apply_logging_level()` targets only `deerflow`/`app` logger hierarchies
so third-party library verbosity is not affected; root handler levels
are only lowered (never raised) to allow configured loggers through
without suppressing third-party output; root logger level is not modified
- Config field description updated to clarify scope
- Tests save/restore global logging state to avoid test pollution
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
Additional context
No response