
feat(openai): support for apply_patch built-in tool #37031

@kapis

Description

Submission checklist

  • This is a feature request, not a bug report or usage question.
  • I added a clear and descriptive title that summarizes the feature request.
  • I used the GitHub search to find a similar feature request and didn't find it.
  • I checked the LangChain documentation and API reference to see if this feature already exists.
  • This is not related to the langchain-community package.

Package (Required)

  • langchain
  • langchain-openai
  • langchain-anthropic
  • langchain-classic
  • langchain-core
  • langchain-model-profiles
  • langchain-tests
  • langchain-text-splitters
  • langchain-chroma
  • langchain-deepseek
  • langchain-exa
  • langchain-fireworks
  • langchain-groq
  • langchain-huggingface
  • langchain-mistralai
  • langchain-nomic
  • langchain-ollama
  • langchain-openrouter
  • langchain-perplexity
  • langchain-qdrant
  • langchain-xai
  • Other / not sure / general

Feature Description

Add support for OpenAI Responses API apply_patch in langchain-openai.

Expected behavior:

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-5.2", output_version="responses/v1")
model_with_tools = model.bind_tools([
    {"type": "apply_patch"}
])
message = model_with_tools.invoke("Create hello.txt")
```

The returned `message` should preserve OpenAI's `apply_patch_call` output item instead of dropping it.

Use Case

I’m building a code-editing agent that uses OpenAI’s native apply_patch tool.

OpenAI returns this correctly:

```json
{
  "type": "apply_patch_call",
  "call_id": "call_...",
  "operation": {
    "type": "create_file",
    "path": "hello.txt",
    "diff": "+hello\n"
  },
  "status": "completed"
}
```

But LangChain currently converts this into an AIMessage with:

```python
content = []
tool_calls = []
additional_kwargs = {}
```

So the patch operation is lost. This also affects streaming, where apply_patch_call events are not surfaced in chunks or the final accumulated message.
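If the `apply_patch_call` block were preserved, the agent could act on it directly. A minimal sketch of what the agent side might look like; the helper name `apply_operation` and the in-memory file map are illustrative, not part of LangChain or the OpenAI SDK, and only `create_file` and `delete_file` are handled here:

```python
def apply_operation(operation: dict, files: dict) -> None:
    """Apply one apply_patch operation to an in-memory file map (sketch)."""
    op = operation["type"]
    if op == "create_file":
        # In the create_file case, each diff line is prefixed with "+";
        # strip the prefix to recover the file content.
        lines = operation["diff"].splitlines()
        files[operation["path"]] = "\n".join(line[1:] for line in lines) + "\n"
    elif op == "delete_file":
        files.pop(operation["path"], None)
    else:
        raise NotImplementedError(f"unsupported operation: {op}")

files = {}
apply_operation(
    {"type": "create_file", "path": "hello.txt", "diff": "+hello\n"},
    files,
)
# files now maps "hello.txt" to "hello\n"
```

Today this is impossible, because the operation never reaches `message.content`.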

Proposed Solution

I think this could be implemented by:

  • Adding apply_patch to the OpenAI well-known tool lists so bind_tools([{"type": "apply_patch"}]) works.
  • Preserving apply_patch_call and apply_patch_call_output in langchain_openai.chat_models.base._construct_lc_result_from_responses_api.
  • Preserving the same item types in _convert_responses_chunk_to_generation_chunk for streaming.
  • Optionally supporting round-trip conversion when an AIMessage containing these Responses content blocks is sent back as input.
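The conversion change could look roughly like the following. This is a sketch only: the function name, the pass-through set, and its other entries are illustrative, and the real logic lives in private code in `langchain_openai.chat_models.base`:

```python
# Responses output item types that should pass through to message content
# unchanged. "apply_patch_call" is the proposed addition; the other entries
# are illustrative stand-ins for the existing allow-list.
_PASSTHROUGH_TYPES = {
    "reasoning",
    "apply_patch_call",  # proposed: keep the patch operation
}

def convert_output_items(items: list) -> list:
    """Sketch of mapping raw Responses output items to content blocks."""
    content = []
    for item in items:
        if item.get("type") == "message":
            # Text output: flatten the nested content parts.
            content.extend(item.get("content", []))
        elif item.get("type") in _PASSTHROUGH_TYPES:
            content.append(item)
        # Other item types (function tool calls, etc.) handled elsewhere.
    return content
```

The streaming path would need the same pass-through so `apply_patch_call` items appear in chunks and survive chunk accumulation.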

The API could look like:

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-5.2", output_version="responses/v1")
model_with_tools = model.bind_tools([
    {"type": "apply_patch"}
])
message = model_with_tools.invoke("Create hello.txt")
message.content
# [
#   {
#     "type": "apply_patch_call",
#     "call_id": "call_...",
#     "operation": {
#       "type": "create_file",
#       "path": "hello.txt",
#       "diff": "+hello\n",
#     },
#     "status": "completed",
#   }
# ]
```
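For the optional round-trip, the agent would report the result of applying the patch back to the model as an `apply_patch_call_output` input item. A sketch, assuming the item shape mirrors other `*_call_output` items; the field names here are my guess and should be checked against the OpenAI Responses API reference:

```python
def make_patch_output(call_id: str, success: bool, output: str = "") -> dict:
    """Build the input item reporting the result of an applied patch.

    Field names are assumed, not confirmed against the OpenAI spec;
    they follow the shape of other built-in tool *_call_output items.
    """
    return {
        "type": "apply_patch_call_output",
        "call_id": call_id,
        "status": "completed" if success else "failed",
        "output": output,
    }
```

Round-trip support would mean an `AIMessage` containing these blocks converts back to valid Responses input without manual surgery.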

Alternatives Considered

No response

Additional Context

I tried using raw .bind(...):

```python
model.bind(
    tools=[{"type": "apply_patch"}],
    tool_choice={"type": "apply_patch"},
)
```

This sends the tool to OpenAI correctly, but LangChain still drops the returned apply_patch_call.

I also tested a local monkeypatch that adds apply_patch_call to the Responses conversion allow-list. That works, but it relies on private LangChain internals.

Metadata

Labels

  • core: `langchain-core` package issues & PRs
  • external
  • feature request: Request for an enhancement / additional functionality
  • openai: `langchain-openai` package issues & PRs
