Loop Detection Middleware Incorrectly Interrupts Scientific Workflows with Repeated Tool Calls #2511

@gs80140

Description

Problem summary

The loop detection middleware incorrectly flags legitimate scientific workflows as infinite loops, blocking complex analyses from continuing once it detects repeated tool calls. This particularly affects RNA-seq analysis, which requires many sequential operations across multiple samples.

Expected behavior

Loop detection should distinguish between actual infinite loops and legitimate scientific workflows requiring repeated operations. System should allow continued execution of batch processing tasks such as RNA-seq analysis with appropriate safeguards, rather than blocking all subsequent operations.
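The distinction above could be implemented by fingerprinting each tool call together with its arguments, so that only literally identical repeated calls trip the detector. The following is a minimal illustrative sketch, not DeerFlow's actual middleware API; the class name and threshold are hypothetical:

```python
import hashlib
import json
from collections import deque


class LoopDetector:
    """Sketch: flag only identical repeated tool calls, not batch work.

    Calls that differ in arguments (e.g. the same bash tool run on
    different samples) produce different fingerprints and are never
    flagged as a loop.
    """

    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self.recent = deque(maxlen=threshold)

    def record(self, tool_name: str, args: dict) -> bool:
        """Record a call; return True if the last `threshold` calls were identical."""
        fingerprint = hashlib.sha256(
            json.dumps([tool_name, args], sort_keys=True).encode()
        ).hexdigest()
        self.recent.append(fingerprint)
        return (
            len(self.recent) == self.threshold
            and len(set(self.recent)) == 1
        )


# Batch processing: same tool, different samples -> never flagged.
batch = LoopDetector(threshold=3)
assert not batch.record("bash", {"cmd": "fastqc sample_A.fastq"})
assert not batch.record("bash", {"cmd": "fastqc sample_B.fastq"})
assert not batch.record("bash", {"cmd": "fastqc sample_C.fastq"})

# A genuine loop: the identical call three times in a row -> flagged.
stuck = LoopDetector(threshold=3)
stuck.record("bash", {"cmd": "ls"})
stuck.record("bash", {"cmd": "ls"})
assert stuck.record("bash", {"cmd": "ls"})
```

With a scheme like this, batch operations over different samples never trip the detector, while a genuinely stuck agent repeating the identical call still does.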

Actual behavior

The loop detection middleware interrupts the conversation by injecting a "[LOOP DETECTED] You are repeating the same tool calls..." message as user content. The injected message (which, per the logs, also carries trailing <|end▁of▁sentence|> special tokens) then causes the LLM request itself to fail with a 400 error: InvalidParameter: Invalid messages at 12: {'role': 'user', 'content': '[LOOP DETECTED] You are repeating the same tool calls...'}. This prevents completion of legitimate batch operations in scientific workflows.

Operating system

Linux

Platform details

No response

Python version

Python 3.10.12

Node.js version

v22.21.0

pnpm version

No response

uv version

uv 0.8.9

How are you running DeerFlow?

Docker (make docker-dev)

Reproduction steps

Initiate RNA-seq differential expression analysis requiring multiple sequential tool calls
Execute batch operations (quality control, alignment, quantification) on multiple samples
Observe workflow interruption when loop detection triggers after repeated bash tool calls
Note the 400 error caused by loop detection middleware injecting error message as user content
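Separate from the false positive, the 400 error itself appears to stem from the injected user message carrying model special tokens (<|end▁of▁sentence|>), which the DashScope endpoint rejects. A possible mitigation, sketched here with a simple regex-based sanitizer (the function name and pattern are hypothetical, not DeerFlow code), would be to strip such tokens before injecting the message:

```python
import re

# Matches model special-token markers of the form <|...|>,
# e.g. <|end▁of▁sentence|>.
SPECIAL_TOKEN_RE = re.compile(r"<\|[^|]*\|>")


def sanitize_injected_content(content: str) -> str:
    """Strip special-token markers from a synthetic message before it
    is appended to the conversation, so the provider does not reject
    the request as an invalid message."""
    return SPECIAL_TOKEN_RE.sub("", content).strip()


raw = (
    "[LOOP DETECTED] You are repeating the same tool calls."
    "<|end▁of▁sentence|><|end▁of▁sentence|>"
)
print(sanitize_injected_content(raw))
# -> [LOOP DETECTED] You are repeating the same tool calls.
```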

rna-seq-differential-analysis.zip

Relevant logs

logs/langgraph.log

openai.BadRequestError: Error code: 400 - {'error': {'message': "<400> InternalError.Algo.InvalidParameter: Error message is: Invalid messages at 12:\n{'role': 'user', 'content': '[LOOP DETECTED] You are repeating the same tool calls. Stop calling tools and produce your final answer now. If you cannot complete the task, summarize what you accomplished so far.<|end▁of▁sentence|><|end▁of▁sentence|><|end▁of▁sentence|>'}", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_parameter_error'}, 'id': 'chatcmpl-9cc7bf0c-332e-9cb9-ac4e-2b8bb58989f9', 'request_id': '9cc7bf0c-332e-9cb9-ac4e-2b8bb58989f9'}
2026-04-24T08:38:17.830628Z [info] 1 change detected [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.65 thread_name=MainThread
2026-04-24T08:38:17.864915Z [info] HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions "HTTP/1.1 400 Bad Request" [httpx] api_variant=local_dev assistant_id=bee7d354-5df5-5f26-a978-10ea053f620d graph_id=lead_agent langgraph_api_version=0.7.65 langgraph_node=model request_id=9a3d772c-ab23-4782-88ea-7c3d73f2f1c0 run_attempt=1 run_id=019dbe9b-9ba0-72c0-bdd5-73e68fe77561 thread_id=765bc7c1-00ba-44a0-86fa-a371d4406bd9 thread_name=MainThread
2026-04-24T08:38:17.866061Z [warning] LLM call failed after 1 attempt(s): Error code: 400 - {'error': {'message': "<400> InternalError.Algo.InvalidParameter: Error message is: Invalid messages at 12:\n{'role': 'user', 'content': '[LOOP DETECTED] You are repeating the same tool calls. Stop calling tools and produce your final answer now. If you cannot complete the task, summarize what you accomplished so far.<|end▁of▁sentence|><|end▁of▁sentence|><|end▁of▁sentence|>'}", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_parameter_error'}, 'id': 'chatcmpl-a8bc8c50-dc51-9db3-bf26-d7fd020d261b', 'request_id': 'a8bc8c50-dc51-9db3-bf26-d7fd020d261b'} [deerflow.agents.middlewares.llm_error_handling_middleware] api_variant=local_dev assistant_id=bee7d354-5df5-5f26-a978-10ea053f620d graph_id=lead_agent langgraph_api_version=0.7.65 langgraph_node=model request_id=9a3d772c-ab23-4782-88ea-7c3d73f2f1c0 run_attempt=1 run_id=019dbe9b-9ba0-72c0-bdd5-73e68fe77561 thread_id=765bc7c1-00ba-44a0-86fa-a371d4406bd9 thread_name=MainThread
Traceback (most recent call last):
  File "/app/backend/packages/harness/deerflow/agents/middlewares/llm_error_handling_middleware.py", line 275, in awrap_model_call
    response = await handler(request)
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain/agents/factory.py", line 257, in inner_handler
    inner_result = await inner(req, handler)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain/agents/middleware/todo.py", line 228, in awrap_model_call
    return await handler(request.override(system_message=new_system_message))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain/agents/factory.py", line 1156, in _execute_model_async
    output = await model_.ainvoke(messages)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 5708, in ainvoke
    return await self.bound.ainvoke(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 477, in ainvoke
    llm_result = await self.agenerate_prompt(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1196, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1154, in agenerate
    raise exceptions[0]
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1380, in _agenerate_with_cache
    async for chunk in self._astream(messages, stop=stop, **kwargs):
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 3029, in _astream
    async for chunk in super()._astream(*args, **kwargs):
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 1545, in _astream
    response = await self.async_client.create(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2678, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1797, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1597, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "<400> InternalError.Algo.InvalidParameter: Error message is: Invalid messages at 12:\n{'role': 'user', 'content': '[LOOP DETECTED] You are repeating the same tool calls. Stop calling tools and produce your final answer now. If you cannot complete the task, summarize what you accomplished so far.<|end▁of▁sentence|><|end▁of▁sentence|><|end▁of▁sentence|>'}", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_parameter_error'}, 'id': 'chatcmpl-a8bc8c50-dc51-9db3-bf26-d7fd020d261b', 'request_id': 'a8bc8c50-dc51-9db3-bf26-d7fd020d261b'}
2026-04-24T08:38:18.117544Z [info] HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions "HTTP/1.1 400 Bad Request" [httpx] api_variant=local_dev assistant_id=bee7d354-5df5-5f26-a978-10ea053f620d graph_id=lead_agent langgraph_api_version=0.7.65 langgraph_node=model request_id=9a3d772c-ab23-4782-88ea-7c3d73f2f1c0 run_attempt=1 run_id=019dbe9b-9ba0-72c0-bdd5-73e68fe77561 thread_id=765bc7c1-00ba-44a0-86fa-a371d4406bd9 thread_name=MainThread
2026-04-24T08:38:18.118628Z [warning] LLM call failed after 1 attempt(s): Error code: 400 - {'error': {'message': "<400> InternalError.Algo.InvalidParameter: Error message is: Invalid messages at 12:\n{'role': 'user', 'content': '[LOOP DETECTED] You are repeating the same tool calls. Stop calling tools and produce your final answer now. If you cannot complete the task, summarize what you accomplished so far.<|end▁of▁sentence|><|end▁of▁sentence|><|end▁of▁sentence|>'}", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_parameter_error'}, 'id': 'chatcmpl-4020e9f2-f50e-9db4-9be8-095c993b871d', 'request_id': '4020e9f2-f50e-9db4-9be8-095c993b871d'} [deerflow.agents.middlewares.llm_error_handling_middleware] api_variant=local_dev assistant_id=bee7d354-5df5-5f26-a978-10ea053f620d graph_id=lead_agent langgraph_api_version=0.7.65 langgraph_node=model request_id=9a3d772c-ab23-4782-88ea-7c3d73f2f1c0 run_attempt=1 run_id=019dbe9b-9ba0-72c0-bdd5-73e68fe77561 thread_id=765bc7c1-00ba-44a0-86fa-a371d4406bd9 thread_name=MainThread
Traceback (most recent call last):
  File "/app/backend/packages/harness/deerflow/agents/middlewares/llm_error_handling_middleware.py", line 275, in awrap_model_call
    response = await handler(request)
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain/agents/factory.py", line 257, in inner_handler
    inner_result = await inner(req, handler)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain/agents/middleware/todo.py", line 228, in awrap_model_call
    return await handler(request.override(system_message=new_system_message))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain/agents/factory.py", line 1156, in _execute_model_async
    output = await model_.ainvoke(messages)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 5708, in ainvoke
    return await self.bound.ainvoke(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 477, in ainvoke
    llm_result = await self.agenerate_prompt(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1196, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1154, in agenerate
    raise exceptions[0]
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1380, in _agenerate_with_cache
    async for chunk in self._astream(messages, stop=stop, **kwargs):
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 3029, in _astream
    async for chunk in super()._astream(*args, **kwargs):
  File "/app/backend/.venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 1545, in _astream
    response = await self.async_client.create(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2678, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1797, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1597, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "<400> InternalError.Algo.InvalidParameter: Error message is: Invalid messages at 12:\n{'role': 'user', 'content': '[LOOP DETECTED] You are repeating the same tool calls. Stop calling tools and produce your final answer now. If you cannot complete the task, summarize what you accomplished so far.<|end▁of▁sentence|><|end▁of▁sentence|><|end▁of▁sentence|>'}", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_parameter_error'}, 'id': 'chatcmpl-4020e9f2-f50e-9db4-9be8-095c993b871d', 'request_id': '4020e9f2-f50e-9db4-9be8-095c993b871d'}


logs/gateway.log

2026-04-24 08:43:52 - app.gateway.app - INFO - Configuration loaded successfully
2026-04-24 08:43:52 - app.gateway.app - INFO - Starting API Gateway on 0.0.0.0:8001
2026-04-24 08:43:52 - deerflow.runtime.stream_bridge.async_provider - INFO - Stream bridge initialised: memory (queue_maxsize=256)
2026-04-24 08:43:52 - deerflow.runtime.store.async_provider - INFO - Store: using AsyncSqliteStore (/app/backend/.deer-flow/checkpoints.db)
2026-04-24 08:43:52 - app.gateway.app - INFO - LangGraph runtime initialised
2026-04-24 08:43:52 - app.channels.manager - INFO - ChannelManager started (max_concurrency=5)
2026-04-24 08:43:52 - app.channels.service - INFO - ChannelService started with channels: []
2026-04-24 08:43:52 - app.gateway.app - INFO - Channel service started: {'service_running': True, 'channels': {'discord': {'enabled': False, 'running': False}, 'feishu': {'enabled': False, 'running': False}, 'slack': {'enabled': False, 'running': False}, 'telegram': {'enabled': False, 'running': False}, 'wechat': {'enabled': False, 'running': False}, 'wecom': {'enabled': False, 'running': False}}}
2026-04-24 08:43:52 - app.channels.manager - INFO - [Manager] dispatch loop started, waiting for inbound messages
INFO:     Application startup complete.
WARNING:  WatchFiles detected changes in 'packages/harness/deerflow/agents/middlewares/loop_detection_middleware.py'. Reloading...
INFO:     Shutting down
INFO:     Waiting for application shutdown.
2026-04-24 08:44:11 - app.channels.manager - INFO - ChannelManager stopped
2026-04-24 08:44:11 - app.channels.service - INFO - ChannelService stopped
2026-04-24 08:44:11 - app.gateway.app - INFO - Shutting down API Gateway
INFO:     Application shutdown complete.
INFO:     Finished server process [39]
INFO:     Started server process [55]
INFO:     Waiting for application startup.

Git state

30d619d (HEAD -> g80140-dev, upstream/main, local/main, main) feat(subagents): support per-subagent skill loading and custom subagent types (#2253)

Additional context

No response
