Submission checklist
Package (Required)
Feature Description
Add support for the OpenAI Responses API `apply_patch` tool in `langchain-openai`.

Expected behavior:

```python
model = ChatOpenAI(model="gpt-5.2", output_version="responses/v1")
model_with_tools = model.bind_tools([{"type": "apply_patch"}])
message = model_with_tools.invoke("Create hello.txt")
```
`message` should preserve OpenAI's `apply_patch_call` output item instead of dropping it.
Use Case
I'm building a code-editing agent that uses OpenAI's native `apply_patch` tool.
OpenAI returns this correctly:
```json
{
  "type": "apply_patch_call",
  "call_id": "call_...",
  "operation": {
    "type": "create_file",
    "path": "hello.txt",
    "diff": "+hello\n"
  },
  "status": "completed"
}
```
But LangChain currently converts this into an `AIMessage` with:

```python
content = []
tool_calls = []
additional_kwargs = {}
```
So the patch operation is lost. This also affects streaming, where `apply_patch_call` events are not surfaced in chunks or in the final accumulated message.
Proposed Solution
I think this could be implemented by:
- Adding `apply_patch` to the OpenAI well-known tool lists so `bind_tools([{"type": "apply_patch"}])` works.
- Preserving `apply_patch_call` and `apply_patch_call_output` in `langchain_openai.chat_models.base._construct_lc_result_from_responses_api`.
- Preserving the same item types in `_convert_responses_chunk_to_generation_chunk` for streaming.
- Optionally supporting round-trip conversion when an `AIMessage` containing these Responses content blocks is sent back as input.
The API could look like:

```python
model = ChatOpenAI(model="gpt-5.2", output_version="responses/v1")
model_with_tools = model.bind_tools([{"type": "apply_patch"}])
message = model_with_tools.invoke("Create hello.txt")
message.content
# [
#     {
#         "type": "apply_patch_call",
#         "call_id": "call_...",
#         "operation": {
#             "type": "create_file",
#             "path": "hello.txt",
#             "diff": "+hello\n",
#         },
#         "status": "completed",
#     }
# ]
```
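For illustration, the preservation logic in the non-streaming conversion step could reduce to something like the following standalone sketch. `PRESERVED_TYPES` and `convert_items` are hypothetical stand-ins for the private helpers named above, not actual `langchain-openai` internals:

```python
# Hedged sketch: mapping raw Responses API output items to AIMessage
# content blocks while passing apply_patch items through unchanged.
# Names here are illustrative, not real langchain-openai internals.

PRESERVED_TYPES = {"apply_patch_call", "apply_patch_call_output"}


def convert_items(items):
    """Convert raw Responses API output items into content blocks,
    keeping apply_patch items instead of silently dropping them."""
    blocks = []
    for item in items:
        if item.get("type") == "message":
            # Flatten text parts of assistant messages into text blocks.
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    blocks.append({"type": "text", "text": part["text"]})
        elif item.get("type") in PRESERVED_TYPES:
            # Today these items are dropped; the proposal is to keep
            # them verbatim in message.content.
            blocks.append(item)
    return blocks
```

The same pass-through check would apply per-chunk in the streaming path.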
Alternatives Considered
No response
Additional Context
I tried using raw `.bind(...)`:

```python
model.bind(
    tools=[{"type": "apply_patch"}],
    tool_choice={"type": "apply_patch"},
)
```

This sends the tool to OpenAI correctly, but LangChain still drops the returned `apply_patch_call`.
I also tested a local monkeypatch that adds `apply_patch_call` to the Responses conversion allow-list. That works, but it relies on private LangChain internals.
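A minimal standalone sketch of that monkeypatch pattern, using a stand-in for the private conversion helper (the real target would be `_construct_lc_result_from_responses_api`, which can change in any release):

```python
# Hedged sketch of the monkeypatch pattern: wrap the conversion
# function so apply_patch items are appended to the result instead of
# dropped. `original_convert` stands in for the private
# langchain-openai helper; patching internals like this is fragile.

APPLY_PATCH_TYPES = {"apply_patch_call", "apply_patch_call_output"}


def original_convert(items):
    # Stand-in for current behavior: apply_patch items are dropped.
    return [i for i in items if i.get("type") not in APPLY_PATCH_TYPES]


def patched_convert(items):
    blocks = original_convert(items)
    # Re-append any apply_patch items the original conversion dropped.
    blocks.extend(i for i in items if i.get("type") in APPLY_PATCH_TYPES)
    return blocks
```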