CH04_NB02 #7

@hubin-keio

Description

RuntimeError in:

```python
with torch.no_grad():
    torch.onnx.export(model,
                      inputs_sample,
                      export_model_path,
                      export_params=True,
                      opset_version=15,
                      do_constant_folding=True,
                      input_names=['input_ids'])
```

The export emits these warnings:

```
/tmp/ipython-input-2815543664.py:2: DeprecationWarning: You are using the legacy TorchScript-based ONNX export. Starting in PyTorch 2.9, the new torch.export-based ONNX exporter will be the default. To switch now, set dynamo=True in torch.onnx.export. This new exporter supports features like exporting LLMs with DynamicCache. We encourage you to try it and share feedback to help improve the experience. Learn more about the new export logic: https://pytorch.org/docs/stable/onnx_dynamo.html. For exporting control flow: https://pytorch.org/tutorials/beginner/onnx/export_control_flow_model_to_onnx_tutorial.html.
  torch.onnx.export(model,
/usr/local/lib/python3.12/dist-packages/transformers/cache_utils.py:92: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  self.keys = torch.tensor([], dtype=self.dtype, device=self.device)
/usr/local/lib/python3.12/dist-packages/transformers/cache_utils.py:93: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  self.values = torch.tensor([], dtype=self.dtype, device=self.device)
```


Then the export fails:

```
RuntimeError                              Traceback (most recent call last)
/tmp/ipython-input-2815543664.py in <cell line: 0>()
      1 with torch.no_grad():
----> 2     torch.onnx.export(model,
      3                       inputs_sample,
      4                       export_model_path,
      5                       export_params=True,

10 frames
/usr/local/lib/python3.12/dist-packages/torch/jit/_trace.py in wrapper(*args)
    130         if self._return_inputs_states:
    131             inputs_states[0] = (inputs_states[0], trace_inputs)
--> 132         out_vars, _ = _flatten(outs)
    133         if len(out_vars) == 1:
    134             return out_vars[0]

RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: DynamicCache
```
