
runner fix to mitigate the numerical issue (#19286) #19286

Open

billmguo wants to merge 1 commit into pytorch:main from billmguo:export-D103690468

Conversation

@billmguo
Contributor

@billmguo billmguo commented May 4, 2026

Summary:

Fix 1 — Dangling shared_ptr (2 files)

  • runner/static_transformer_runner.h:33
  • runner/experimental/static_transformer_runner.h:33

Changed module_(std::shared_ptr&lt;Module&gt;(module.get())) to module_(std::move(module)). The old code extracted the raw pointer without releasing ownership, so the unique_ptr destructor would free the Module while the shared_ptr member still pointed to it.
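
For illustration, a minimal sketch of the ownership bug and the fix, using a stand-in Module type rather than the runner's real classes:

```cpp
#include <memory>

struct Module {}; // stand-in for the real ExecuTorch Module

struct BuggyRunner {
  // BUG: wraps the raw pointer while the unique_ptr still owns it, so the
  // Module is freed when `module` goes out of scope, module_ dangles, and a
  // double free follows once the shared_ptr count reaches zero.
  explicit BuggyRunner(std::unique_ptr<Module> module)
      : module_(std::shared_ptr<Module>(module.get())) {}
  std::shared_ptr<Module> module_;
};

struct FixedRunner {
  // FIX: transfer ownership into the shared_ptr exactly once.
  explicit FixedRunner(std::unique_ptr<Module> module)
      : module_(std::move(module)) {}
  std::shared_ptr<Module> module_;
};
```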

Fix 2 — std::accumulate overflow (2 files)

  • llama/runner/static_attention_io_manager.h:58
  • runner/experimental/static_attention_io_manager.h:59

Changed std::accumulate(..., 0) to std::accumulate(..., size_t(0)). The int initial value caused the entire accumulation to happen in 32-bit signed arithmetic before assigning to size_t.
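
A standalone sketch of the pitfall (the element counts below are made up; the real code accumulates tensor sizes in the IO manager):

```cpp
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
  // Hypothetical per-tensor element counts whose sum exceeds INT32_MAX.
  std::vector<size_t> sizes = {size_t(1) << 30, size_t(1) << 30, size_t(1) << 30};

  // BUG: the literal 0 makes accumulate's running value an int, so the sum
  // wraps in 32-bit arithmetic before it is ever widened to size_t.
  size_t bad = std::accumulate(sizes.begin(), sizes.end(), 0);

  // FIX: a size_t initial value keeps the whole accumulation in size_t.
  size_t good = std::accumulate(sizes.begin(), sizes.end(), size_t(0));

  std::cout << bad << " vs " << good << '\n'; // bad is garbage, good == 3 * 2^30
  return 0;
}
```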

Fix 3 — Type-safety check in set_input (4 files)

  • llama/runner/static_attention_io_manager.h — added include + size check
  • runner/experimental/static_attention_io_manager.h — added include + size check
  • runner/static_transformer_runner.h — added size check (include inherited)
  • runner/experimental/static_transformer_runner.h — added size check (include inherited)

Added ET_CHECK_MSG(sizeof(T) == elementSize(inputMeta->scalar_type()), ...) before constructing the TensorImpl. This catches mismatches between the runner's compiled types (CacheT, MaskT, RopeT) and the model's actual tensor dtypes at load time, rather than silently reinterpreting data.
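
Roughly what the added guard looks like (a sketch only: the include paths and namespace qualification are approximate, and the surrounding set_input / TensorImpl plumbing is omitted):

```cpp
#include <executorch/runtime/core/exec_aten/util/scalar_type_util.h> // elementSize (path assumed)
#include <executorch/runtime/platform/assert.h>                      // ET_CHECK_MSG (path assumed)

// T is one of the runner's compiled element types (CacheT, MaskT, RopeT);
// `model_dtype` comes from inputMeta->scalar_type().
template <typename T>
void check_runner_dtype(executorch::aten::ScalarType model_dtype) {
  // Fail loudly at load time with a clear message instead of silently
  // reinterpreting the buffer when the element sizes disagree.
  ET_CHECK_MSG(
      sizeof(T) == executorch::runtime::elementSize(model_dtype),
      "element size mismatch: runner type is %zu bytes, model dtype needs %zu bytes",
      sizeof(T),
      executorch::runtime::elementSize(model_dtype));
}
```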

Reviewed By: viveknayakatmeta

Differential Revision: D103690468

@billmguo billmguo requested a review from lucylq as a code owner May 4, 2026 21:43
@pytorch-bot

pytorch-bot Bot commented May 4, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19286

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures, 3 Pending, 2 Unrelated Failures

As of commit 2f878b5 with merge base a0d6e9b:

NEW FAILURES - The following jobs have failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed label May 4, 2026
@meta-codesync
Contributor

meta-codesync Bot commented May 4, 2026

@billmguo has exported this pull request. If you are a Meta employee, you can view the originating Diff in D103690468.

@meta-codesync meta-codesync Bot changed the title from "runner fix to mitigate the numerical issue" to "runner fix to mitigate the numerical issue (#19286)" May 5, 2026
billmguo added a commit to billmguo/executorch that referenced this pull request May 5, 2026
@billmguo billmguo force-pushed the export-D103690468 branch from 5000c26 to 6c0b59c May 5, 2026 18:22
billmguo added a commit to billmguo/executorch that referenced this pull request May 5, 2026
@billmguo billmguo force-pushed the export-D103690468 branch from 6c0b59c to a45dfd3 May 5, 2026 18:24
@YIWENX14 YIWENX14 self-requested a review May 5, 2026 18:42
Contributor

@YIWENX14 YIWENX14 left a comment

LGTM. Please fix the lint error before commit.

@YIWENX14
Contributor

YIWENX14 commented May 5, 2026

@pytorchbot label "release notes: none"

@pytorch-bot pytorch-bot Bot added the release notes: none label May 5, 2026
billmguo added a commit to billmguo/executorch that referenced this pull request May 5, 2026
@billmguo billmguo force-pushed the export-D103690468 branch from a45dfd3 to ccf1c11 May 5, 2026 19:42
@billmguo billmguo force-pushed the export-D103690468 branch from ccf1c11 to 2f878b5 May 5, 2026 19:45

Labels

- CLA Signed: This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
- fb-exported
- meta-exported
- release notes: none (Do not include this in the release notes)


2 participants