Checked other resources
Package (Required): langchain-mistralai
Feature Description
ChatMistralAI._convert_mistral_chat_message_to_message treats response content as a plain string. When calling Mistral's API with citations=True, content is a list of typed chunks (text and reference), and the reference metadata (reference_ids, source mapping) gets silently dropped.
The citation data should be extracted and stored in response_metadata["citations"] so users doing RAG with Mistral can map answer fragments back to source documents.
Relevant code: _convert_mistral_chat_message_to_message in langchain_mistralai/chat_models.py, specifically:
content = _message.get("content", "") or ""
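A minimal sketch of the failure mode, in pure Python, simulating the message dict returned with citations=True (the payload below is illustrative, modeled on the documented format):

```python
# Simulated Mistral response message when citations=True: content is a
# list of typed chunks rather than a plain string.
_message = {
    "role": "assistant",
    "content": [
        {"type": "text", "text": "According to the document, "},
        {"type": "reference", "reference_ids": [0], "text": "the temperature is 20°C"},
    ],
}

# Current extraction: the `or ""` only guards against None/empty values,
# so a non-empty list passes through untouched and the reference
# metadata is never lifted out of it.
content = _message.get("content", "") or ""
print(type(content))  # still a list, not a str; reference_ids stay buried inside
```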
Use Case
RAG pipelines using Mistral models with native citation support. Mistral returns which parts of the answer come from which source documents, but there's currently no way to access that through ChatMistralAI. Users who need inline citations have to bypass langchain and call the Mistral SDK directly.
Proposed Solution
When content is a list, concatenate the text for backward compatibility and extract reference chunks into response_metadata:
citations = []
if isinstance(content, list):
    parts = []
    for chunk in content:
        parts.append(chunk.get("text", ""))
        if chunk.get("type") == "reference":
            citations.append(chunk)
    content = "".join(parts)
response_metadata = {"model_provider": "mistralai"}
if citations:
    response_metadata["citations"] = citations
content stays a string, and citations are available via response_metadata. No breaking change.
Alternatives Considered
- Calling the mistralai SDK directly instead of going through langchain — works but loses all the langchain integration (chains, callbacks, tracing)
- Wrapping ChatMistralAI with a post-processing step that re-parses the raw API response — fragile, duplicates work
Additional Context
Mistral citation response format:
content = [
    {"type": "text", "text": "According to the document, "},
    {"type": "reference", "reference_ids": [0], "text": "the temperature is 20°C"},
    {"type": "text", "text": " on average."},
]
Docs: https://docs.mistral.ai/capabilities/citations/