docs(lambda): add migration guide + non-Lambda Dockerfile example #915
jrusso1020 wants to merge 15 commits into main
Conversation
Warning: This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
This stack of pull requests is managed by Graphite. Learn more about stacking.
miguel-heygen
left a comment
CI green (same pattern as #914 — all real checks pass, Graphite mergeability pending is expected for a stacked PR).
Migration guide (migrating-to-hyperframes-lambda.mdx) — the concept-mapping table is thorough and accurate. The "What HyperFrames does differently" section is the most useful part: GPU determinism, failClosedFontFetch, no HDR, no webm distributed, and local state files are all real footguns that a migrating adopter would hit without warning. Good to have them front-loaded.
hyperframes lambda sites create ./project appears in the concept table (mapping to "one-shot site upload") but doesn't appear anywhere in PR #914's CLI guide — if that subcommand isn't implemented yet, this row will confuse adopters. Either add it to #914 or mark it as coming soon.
FPS note — "24, 30, 60 only — non-integer NTSC rationals are an in-process-only feature" is accurate and good to call out explicitly.
Migration checklist — solid 6-step sequence. Step 2 links to /concepts (correct path) and the /hyperframes skill. Step 5's policies user > infra/iam/hyperframes.json shell redirect is a nice concrete pattern.
Dockerfile (examples/k8s-jobs/Dockerfile.example) — well-commented and production-grade:
- `chrome-headless-shell@131.0.6778.139` pinned explicitly — correct approach, adopters should pin this to their validated version.
- `tini` as PID 1 is the right call for SIGTERM propagation in Cloud Run / Fargate.
- Running `bun install --frozen-lockfile` then uninstalling bun keeps the runtime image clean.
- The default CMD prints `{producerVersion, chromePath}` JSON — useful health-check / smoke test.
One minor issue: libcairo2, libcups2, libdbus-1-3, libglib2.0-0, libnspr4, and libpango-1.0-0 appear in the apt install block but not in the inline comment that claims to list "the minimum set chrome-headless-shell needs". Harmless, but slightly confusing — consider either expanding the comment or dropping the "(the minimum set ...)" qualifier.
The .gitignore negation for examples/k8s-jobs/** is correct — matches the existing !examples/aws-lambda/** pattern.
Approved.
vanceingalls
left a comment
Additive review — @miguel-heygen already covered the concept-mapping table accuracy, the "What HyperFrames does differently" section, the sites create vs #914 cross-reference, the FPS note, the migration checklist, tini, the bun install --frozen-lockfile pattern, and the libcairo2-block-vs-comment mismatch. The findings below are gaps not raised in that review.
Strengths (additive to Miguel's):
- Migration guide footguns (migrating-to-hyperframes-lambda.mdx:46-66) trace cleanly to actual code paths — `BROWSER_GPU_NOT_SOFTWARE` (packages/engine/src/utils/assertSwiftShader.ts), `FONT_FETCH_FAILED` (packages/producer/src/services/deterministicFonts.ts), `FORMAT_NOT_SUPPORTED_IN_DISTRIBUTED` (packages/producer/src/services/distributed/plan.ts). Adopters who hit any of these can grep the codebase from the doc.
- `chunk-size=240` / `max-parallel-chunks=16` defaults in the render-config table (migrating-to-hyperframes-lambda.mdx:36-37) match packages/cli/src/commands/lambda.ts:100-101 exactly.
Important — Chrome version pin diverges from the rest of the stack (examples/k8s-jobs/Dockerfile.example:73)
CHROME_HEADLESS_SHELL_VERSION=131.0.6778.139 is the version this Dockerfile installs. Searching the repo:
- packages/cli/src/browser/manager.ts:7 pins `CHROME_VERSION = "131.0.6778.85"` (the dev path).
- Dockerfile.test:56, packages/cli/src/docker/Dockerfile.render:19, and packages/aws-lambda/scripts/build-zip.ts:400 all use `chrome-headless-shell@stable` (floating; currently resolves to a different patch).
The migration guide itself emphasizes byte-level determinism: "the per-chunk concat-copy assumes byte-level reproducibility" (migrating-to-hyperframes-lambda.mdx:48). Adopters who run Lambda plan + non-Lambda renderChunk in the same pipeline with these two image families will be on different Chrome patches across chunk boundaries, which is exactly the failure mode the guide warns about. Pin all three to the same version (preferably the producer's actual .85, or upgrade the producer first and pin everything to .139).
Important — PR-body claim is not verifiable
The PR body says: "matches the chrome-headless-shell version pinned at 131.0.6778.139 that the producer already uses." Grepping the repo finds no .139 pin anywhere outside this PR. Either the claim is stale or there's a producer-side pin I missed — if it's the latter, citing the file would help; if it's the former, update the body so reviewers don't take the parity claim at face value.
Important — Container runs as root (examples/k8s-jobs/Dockerfile.example)
No USER directive. tini is wired up correctly for PID 1 but Chrome (+ everything else) runs as root. Adopters will copy-paste this Dockerfile into K8s / Cloud Run / Fargate — those platforms' best-practice (and some org-level admission policies) require non-root. The aws-lambda adapter sidesteps this because Lambda's execution model isn't a long-running container, but the K8s example is. Suggest:
# Create a dedicated unprivileged user/group (ids are illustrative)
RUN groupadd --system --gid 1001 hyperframes && \
    useradd --system --uid 1001 --gid hyperframes --shell /usr/sbin/nologin hyperframes
# ...after the COPY blocks...
# Hand the app tree and the Chrome install to the new user, then drop root
RUN chown -R hyperframes:hyperframes /app /opt/chrome
USER hyperframes
Adopters who need root for some platform reason can override; non-root should be the default.
Nit — Probably-dead apt deps (examples/k8s-jobs/Dockerfile.example:46,55)
wget and xz-utils are in the apt block but I don't see either invoked downstream — @puppeteer/browsers does its HTTPS + extraction in Node. If they're vestigial from a previous draft, dropping them shaves a few MB and shrinks the supply-chain surface. (If they're needed and I missed where, ignore.)
Note — sites HeadObject status (migrating-to-hyperframes-lambda.mdx:14)
The doc says "HeadObject 200" (correct per S3 API). The implementation comment at packages/cli/src/commands/lambda/sites.ts:8 says "HeadObject-204" — code comment is loose, doc is right. Not a doc finding; just flagging in case James wants to fix the code comment in a sibling PR.
Stack-aware: this is an OSS doc + example-Dockerfile PR with all required CI green, prior approval from Miguel, and no correctness risk to the runtime path. The Chrome-version finding is the highest-leverage of the three importants because adopters who hit it will silently get non-deterministic renders rather than a loud failure — worth landing as a follow-up commit on this PR, but not a hard block.
Verdict: APPROVE
Reasoning: Documentation + reference example, no runtime code paths changed, prior reviewer covered the core content; my findings are quality nudges on an adopter-facing artifact, not correctness blockers.
Review by Vai
Wraps the @hyperframes/aws-lambda SDK + the Phase 6a SAM template behind
a single CLI surface so an end-to-end render is three commands instead
of the ~8 manual bun+sam+aws steps the smoke script does today:
hyperframes lambda deploy
hyperframes lambda render ./my-project --width 1920 --height 1080 --wait
hyperframes lambda destroy
Subcommands:
- deploy: build handler.zip + sam-deploy + persist stack outputs
  to <cwd>/.hyperframes/lambda-stack-<name>.json (shape sketched
  after this list)
- sites create: pre-upload a project to S3 with a stable content hash
so re-renders skip the tar+PUT pass
- render: start a Step Functions execution; --wait blocks and
streams per-chunk progress + accrued cost
- progress: one-shot snapshot — status, frames, cost breakdown,
errors. Accepts renderId or executionArn
- destroy: sam-delete + drop the local state file (S3 bucket
is Retain'd by the template; documented in --help
and in docs/packages/cli.mdx)
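For orientation, a hypothetical sketch of what the persisted stack-state file could hold — every field name below is an illustrative assumption, not the actual schema in state.ts:

// Hypothetical shape of <cwd>/.hyperframes/lambda-stack-<name>.json.
// All field names are assumptions for illustration only.
interface LambdaStackState {
  stackName: string;         // e.g. "hyperframes-default"
  region: string;
  outputs: {
    stateMachineArn: string; // Step Functions: Plan -> Map(N) -> Assemble
    functionArn: string;     // the single Lambda dispatching on Action
    bucketName: string;      // S3 bucket (Retain'd by the template)
  };
  deployedAt: string;        // ISO-8601 timestamp
}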
To keep @sparticuz/chromium out of the CLI's transitive deps, this also
adds a dedicated ./sdk subpath export to @hyperframes/aws-lambda; the
CLI imports from @hyperframes/aws-lambda/sdk exclusively. The existing
. barrel still re-exports both handler + SDK for adopters who want one
entry point.
Defaults are deliberately cost-conservative for first-time users:
--concurrency=8 (low enough to never surprise) and --memory=10240 (the
common case; documented for adopters who want to tune down).
Tests: 5 unit tests on the state-file round-trip. CLI integration
against sam local invoke is part of the upcoming PR 6.6 (lambda-local
regression harness).
Two small cleanups on top of the lambda CLI:
- Replace parseFormat / parseCodec / parseQuality / parseChromeSource
  (four near-identical helpers) with a single generic parseEnum() +
  typed const-tuple lookups (pattern sketched below). The four
  callers now read as one-line arrow functions that lift the allowed
  values out of the function body so they're easy to extend.
- DEFAULT_STACK_NAME was const-declared then re-exported at the
bottom of state.ts; just mark the const export inline.
No behavior changes. All CLI tests still pass.
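A minimal sketch of the parseEnum() pattern, with shapes close to the description above but not the literal code in this commit:

// Illustrative generic helper + const-tuple lookup.
function parseEnum<const T extends readonly string[]>(
  flag: string,
  allowed: T,
  value: string,
): T[number] {
  if (!(allowed as readonly string[]).includes(value)) {
    throw new Error(`${flag} must be one of ${allowed.join(", ")}; got "${value}"`);
  }
  return value as T[number];
}

// Callers become one-line arrows with the allowed values lifted out
// (the value list here is illustrative):
const FORMATS = ["mp4", "mov", "png-sequence"] as const;
const parseFormat = (v: string) => parseEnum("--format", FORMATS, v);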
esbuild can't bundle @hyperframes/aws-lambda's transitive AWS SDK
deps (@aws-sdk/* + @smithy/*) cleanly into a node binary — the
SDK's .browser.js conditional re-exports break the resolver:
ESM Build failed
No matching export in "splitStream.browser.js" for import
"splitStream" (and ~10 similar errors)
Mark aws-lambda as `external` so esbuild doesn't follow it, and
move it from devDependencies to dependencies so the published CLI
can resolve it from node_modules at runtime. The lambda subverb
files dynamic-import only on `hyperframes lambda *` invocation, so
the CLI cold-start cost is unchanged.
The install-size hit (AWS SDK + @sparticuz/chromium ≈ 200 MiB) is
documented as a v1 tradeoff; a future split into a lambda-sdk-only
subpackage can pare this back.
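The external-marking half of this, sketched as a conventional tsup config (file contents illustrative; the `external` entry is the point):

// tsup.config.ts (sketch): keep @hyperframes/aws-lambda out of the bundle
// so esbuild never follows its @aws-sdk/* / @smithy/* conditional exports.
import { defineConfig } from "tsup";

export default defineConfig({
  entry: ["src/index.ts"],
  format: ["esm"],
  // resolved from node_modules at runtime instead of being bundled:
  external: ["@hyperframes/aws-lambda"],
});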
Two blockers + four important items from Vai's review:
- `--memory` was parsed and recorded in the local state file but
never forwarded to `sam deploy` as a parameter override. Worse,
`progress.ts` then read the *recorded* value for cost math, so
`--memory 5120` produced wrong cost numbers downstream. Thread
`LambdaMemoryMb` through samDeploy's --parameter-overrides.
- `--profile` was only consumed by deploy / destroy. render and
progress fell back to the default credentials chain — a user
with `--profile prod` would silently render against their
default account (wrong-account billing footgun). Set
`process.env.AWS_PROFILE` (and `AWS_REGION`) in the dispatcher
before any subverb runs; the AWS SDK reads them natively, so
render / progress / sites all benefit without each subverb
threading the flag through the SDK call.
- `--profile` + destroy now also reads `process.env.AWS_PROFILE`
as a fallback (matching deploy's existing env fallback).
- `--wait --json` printed both the start handle AND the final
progress snapshot, producing two concatenated JSON blobs that
`jq` rejected. Now emits a single document: handle (without
--wait) OR final progress (with --wait).
- Negative integers on `--width` / `--height` / `--chunk-size` /
  `--max-parallel-chunks` / `--memory` / `--concurrency` now fail
  loudly via a new `parsePositiveInt` wrapper (sketched below)
  instead of flowing into the SDK and producing opaque AWS
  validation errors mid-render.
- `DEFAULT_STACK_NAME` is now centralized to the literal
`"hyperframes-default"` and consumed from one place. Previously
the value was assembled as `hyperframes-${"default"}` in three
sites and hardcoded as `"hyperframes-default"` in a fourth.
`requireStack`'s hint now matches the dispatcher's default.
The faked `SiteHandle` for `--site-id` keeps the documented
placeholder fields but also surfaces `bucketName` (from PR 909's
extended SiteHandle interface), matching the SDK contract.
All CLI unit tests + the full bundler build still pass.
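A combined sketch of the profile-env and positive-int mechanisms above — function names are illustrative, not the literal PR code:

// Dispatcher prologue (sketch): the AWS SDK reads these env vars natively,
// so render / progress / sites inherit --profile without threading the flag.
function applyGlobalAwsFlags(flags: { profile?: string; region?: string }): void {
  if (flags.profile) process.env.AWS_PROFILE = flags.profile;
  if (flags.region) process.env.AWS_REGION = flags.region;
}

// parsePositiveInt (sketch): reject zero, negatives, and non-integers
// before they reach the SDK as opaque AWS validation errors.
function parsePositiveInt(flag: string, raw: string): number {
  const n = Number(raw);
  if (!Number.isInteger(n) || n <= 0) {
    throw new Error(`${flag} must be a positive integer; got "${raw}"`);
  }
  return n;
}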
The base branch was changed.
miguel-heygen
left a comment
Re-approve after rebase. Diff verified unchanged.
vanceingalls
left a comment
Re-approve after rebase onto main. Force-push dismissed my prior --approve (require_last_push_approval: true) — content unchanged, same commits replayed on the new base. All findings from the prior review's resolution still apply.
Re-review by Vai (post-rebase re-stamp)
The "Smoke: global install" CI step packs the CLI via `npm pack` and
installs it globally via `npm install -g <tgz>`. npm doesn't understand
the workspace: protocol, so a runtime `dependencies` entry of
`@hyperframes/aws-lambda: workspace:*` blows up with:
npm error code EUNSUPPORTEDPROTOCOL
npm error Unsupported URL Type "workspace:": workspace:*
(pnpm rewrites workspace:* on publish; npm pack doesn't.)
Three changes to unblock the smoke + keep the published CLI install
small for users who don't deploy to Lambda:
- Move `@hyperframes/aws-lambda` from CLI's `dependencies` back to
`devDependencies`. It's already external in tsup.config.ts; the
bundle references it via runtime resolution only.
- Convert the static `import { … } from "@hyperframes/aws-lambda/sdk"`
  in sites.ts / render.ts / progress.ts to `await import()` inside
  each function (pattern sketched below). tsup with `splitting: false`
  was inlining those static imports at the top of the bundle, which
  made Node eagerly resolve them at CLI startup (MODULE_NOT_FOUND
  before any lambda subcommand even runs). Dynamic imports stay
  dynamic in the bundle.
- Add a friendly missing-module check in the lambda dispatcher.
When a user runs `hyperframes lambda deploy / render / sites /
progress / destroy` without aws-lambda installed, they now see:
@hyperframes/aws-lambda is not installed.
The `hyperframes lambda deploy` command needs it at runtime.
Install it alongside the CLI:
npm install -g @hyperframes/aws-lambda
Verified locally: pack + global install + `hyperframes init --example
blank` now succeeds end-to-end (the same scenario the CI smoke job
runs).
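The on-demand load plus friendly error, sketched under the assumption of a small dispatcher helper (names illustrative):

// Stays a true dynamic import in the bundle, so the CLI starts fine
// when @hyperframes/aws-lambda isn't installed.
async function loadLambdaSdk(subcommand: string) {
  try {
    return await import("@hyperframes/aws-lambda/sdk");
  } catch (err) {
    const code = (err as NodeJS.ErrnoException).code;
    if (code === "ERR_MODULE_NOT_FOUND" || code === "MODULE_NOT_FOUND") {
      console.error(
        "@hyperframes/aws-lambda is not installed.\n" +
        `The \`hyperframes lambda ${subcommand}\` command needs it at runtime.\n` +
        "Install it alongside the CLI:\n\n  npm install -g @hyperframes/aws-lambda",
      );
      process.exit(1);
    }
    throw err;
  }
}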
IAM bootstrap subcommand for the lambda CLI. Closes the "first run hits
'User is not authorized to perform iam:CreateRole'" gap that adopters
otherwise have to figure out by hand.
hyperframes lambda policies user
→ prints an inline-policy doc to attach to the IAM user that runs
the CLI
hyperframes lambda policies role --principal=cloudformation
→ prints { TrustRelationship, InlinePolicy } for a service role
cloudformation can assume
hyperframes lambda policies validate ./infra/policy.json
→ diffs a checked-in policy against the CLI's required action set,
expanding s3:* / s3:Get* / * wildcards, exits non-zero on missing
actions (wire it into CI to catch drift before deploys fail)
The required-actions list is derived from what the SAM template at
examples/aws-lambda/template.yaml needs to create plus what
renderToLambda/getRenderProgress call against S3 + Step Functions at
runtime. Sorted alphabetically per-service so diffs stay readable.
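A sketch of how a per-service-sorted action set can be organized — the entries below are an illustrative excerpt, not the full required-actions list:

// Illustrative excerpt, grouped per service and sorted alphabetically
// so diffs stay readable. Not the complete set the CLI derives.
const REQUIRED_ACTIONS = [
  // cloudformation (SAM deploy lifecycle)
  "cloudformation:CreateChangeSet",
  "cloudformation:DescribeStacks",
  "cloudformation:ExecuteChangeSet",
  // s3 (site upload + artifacts)
  "s3:GetObject",
  "s3:PutObject",
  // states (render execution + progress)
  "states:DescribeExecution",
  "states:StartExecution",
] as const;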
Resource is "*" by design — CloudFormation creates new function /
state-machine / bucket ARNs on every adopter's first deploy. The
generated policy is documented as a starting point; adopters with
stricter postures narrow Resource to the deployed ARNs after the
first successful run.
Tests: 10 unit tests covering the action set, doc shape, trust policy
service principal, and validate() against valid / missing / wildcard /
single-Statement / Deny-statement inputs.
Adds a typed TrustPolicyDocument / TrustPolicyStatement pair so
buildRoleTrustPolicy can return a real type instead of unknown. The
trust-policy shape has a Principal field that the generic
PolicyStatement doesn't model, but it was previously punted via a
return unknown rather than a parallel type.
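A minimal sketch of the typed pair, following the standard IAM trust-policy document shape (the exact fields in the PR may differ):

// Trust policies carry a Principal field the generic PolicyStatement
// doesn't model, hence the parallel type.
interface TrustPolicyStatement {
  Effect: "Allow" | "Deny";
  Principal: { Service: string }; // e.g. "cloudformation.amazonaws.com"
  Action: "sts:AssumeRole";
}

interface TrustPolicyDocument {
  Version: "2012-10-17";
  Statement: TrustPolicyStatement[];
}

function buildRoleTrustPolicy(service: string): TrustPolicyDocument {
  return {
    Version: "2012-10-17",
    Statement: [{ Effect: "Allow", Principal: { Service: service }, Action: "sts:AssumeRole" }],
  };
}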
Test cleanup: drop the `as {...}` casts that the previous return-
unknown signature forced.
One blocker + four importants from Vai's review:
- REQUIRED_ACTIONS was missing `s3:ListAllMyBuckets` (called by
`sam deploy --resolve-s3` on first run to discover/create the
`aws-sam-cli-managed-default-*` artifact bucket) and
`cloudformation:ValidateTemplate` (CFN template validation
during change-set creation). Without these, a first-deploy
adopter with the generated policy hits AccessDenied on the
very call the PR was meant to unblock. Added both.
- `policies role --principal=lambda` was a footgun — it produced
a `lambda.amazonaws.com` trust paired with the full deploy
superset, i.e. a confusingly-overscoped Lambda execution role
no human should attach (the SAM template creates its own
scoped execution role automatically). Dropped `lambda` as a
principal option; `policies role` now always emits a
CloudFormation service-role doc.
- `validatePolicy` silently misreported NotAction/NotResource
statements (treating them as zero grants), producing false
negatives. Detect both shapes and surface them via a new
`warnings: string[]` field; NotAction statements are skipped
(rather than producing a false negative), NotResource is
treated as full action grant + a warning.
- Mid-string wildcards (`s3:Get*Object`, `?`) silently failed to
  match. End-anchored wildcards still work; mid-string patterns now
  warn so users know the validator can't expand them (matcher
  behavior sketched below).
- Dropped the dead `samArtifactBucket` action group (fully
subsumed by `s3Bucket` + `s3Object`).
- `validate --json` now wraps errors in a friendly envelope
(`{ ok: false, error: "..." }`) so CI consumers have one
parse shape regardless of failure mode.
- lambda.ts subcommand description and examples updated to
include `policies`.
Tests: 5 new negative-path tests cover NotAction warning,
NotResource warning, mid-string wildcard warning, missing file
(ENOENT), malformed JSON (SyntaxError), and absent Statement
field. All 21 policies tests pass.
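The matcher semantics above, in sketch form (illustrative, not the literal validatePolicy code):

// End-anchored wildcards expand ("s3:Get*" grants "s3:GetObject");
// mid-string patterns and "?" can't be expanded reliably, so they
// surface a warning instead of silently failing to match.
function matchesAction(pattern: string, action: string, warnings: string[]): boolean {
  if (pattern === "*") return true;
  const star = pattern.indexOf("*");
  if (pattern.includes("?") || (star !== -1 && star !== pattern.length - 1)) {
    warnings.push(`cannot expand pattern "${pattern}"; verify it manually`);
    return false;
  }
  if (star === -1) return pattern === action; // no wildcard: exact match
  return action.startsWith(pattern.slice(0, -1)); // end-anchored wildcard
}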
Third harness mode that drives the OSS @hyperframes/aws-lambda handler
through the exact event sequence Step Functions produces in
production:
handler({Action: "plan"}) → planDir tarball on fake S3
handler({Action: "renderChunk"}) × N → chunk artifacts on fake S3
handler({Action: "assemble"}) → final mp4/mov/png-sequence
The S3 client is a filesystem-backed fake (every s3://<bucket>/<key>
URI maps to <tempRoot>/s3/<key>), so the harness exercises the
handler's event-parsing + tar/S3 conventions + dispatch logic on top
of the underlying producer primitives. Regressions in event JSON
shape, S3 key layout, or plan-hash boundary checks now surface in
the same CI run as the in-process and distributed-simulated modes
without paying for a real AWS round-trip.
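A condensed sketch of the fake's core mapping — the real class speaks the AWS SDK's send(command) protocol and handles more cases; the shape below just follows the description above:

import { mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { dirname, join } from "node:path";

// s3://<bucket>/<key> maps to <tempRoot>/s3/<key> on the local filesystem.
class FilesystemBackedFakeS3 {
  constructor(private readonly tempRoot: string) {}

  private pathFor(key: string): string {
    return join(this.tempRoot, "s3", key);
  }

  putObject(key: string, body: Buffer | string): void {
    const path = this.pathFor(key);
    mkdirSync(dirname(path), { recursive: true });
    writeFileSync(path, body); // Buffer and string share one branch
  }

  getObject(key: string): Buffer {
    return readFileSync(this.pathFor(key));
  }
}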
Deliberately NOT a Docker/RIE invocation — that would gate the
producer test suite on Docker-in-Docker support which most CI
runners lack. Real-ZIP-via-RIE tests live in
packages/aws-lambda/scripts/ (probe:beginframe) and the
maintainer-run smoke.sh.
Wired up via:
- HarnessMode union extended to include "lambda-local"
- parseHarnessModeFlag accepts --mode=lambda-local
- regression-harness.ts dispatches to runLambdaLocalRender for
the new mode, sharing the distributed-support gate +
pathology-floor threshold with distributed-simulated mode
- package.json scripts: test:lambda-local + docker:test:lambda-local
- producer.devDependencies += @hyperframes/aws-lambda (workspace)
- producer/tsconfig.json gains path mappings to self so the type
cycle through aws-lambda's source resolves at typecheck time
without needing producer to be pre-built
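In sketch form (the mode union plus the flag parser; surrounding names assumed):

type HarnessMode = "in-process" | "distributed-simulated" | "lambda-local";

const HARNESS_MODES: readonly HarnessMode[] = [
  "in-process",
  "distributed-simulated",
  "lambda-local", // the new mode
];

function parseHarnessModeFlag(raw: string): HarnessMode {
  if (!HARNESS_MODES.includes(raw as HarnessMode)) {
    throw new Error(`--mode must be one of ${HARNESS_MODES.join(", ")}; got "${raw}"`);
  }
  return raw as HarnessMode;
}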
Tests: 3 new unit tests on parseHarnessModeFlag + resolveMinPsnrForMode
cover the new mode. End-to-end PSNR contract still runs through
Dockerfile.test (manual + CI).
Three small cleanups on top of the lambda-local harness:
- Drop the unused createReadStream import + its `void` workaround
comment. The aws-lambda handler's tar / S3 transport pulls
createReadStream from its own imports; this file never references
it directly.
- Hoist the dynamic `await import("node:fs")` calls for
writeFileSync out of FilesystemBackedFakeS3.send into the static
import block. Repeated PutObject calls don't need to repay the
dynamic-import cost.
- Hoist the dynamic `await import("@hyperframes/aws-lambda")` call
for untarDirectory similarly. Drops the now-redundant duplicate
aws-lambda import statement.
The PutObject body branch also collapses: `body instanceof Buffer`
and `typeof body === "string"` both call writeFileSync identically,
so they share one branch.
No behavior changes.
The static import of regression-harness-lambda-local.ts pulled @hyperframes/aws-lambda (and its @aws-sdk/* + @sparticuz/chromium transitive deps) at module-load time. Dockerfile.test only copies the producer's own files into the container, so aws-lambda's src isn't present at runtime — and even `--mode=in-process` failed:
Error [ERR_MODULE_NOT_FOUND]: Cannot find module
'/app/packages/producer/node_modules/@hyperframes/aws-lambda/src/index.ts'
imported from /app/packages/producer/src/regression-harness-lambda-local.ts
Load the module on demand instead. `--mode=lambda-local` callers pay the import cost; the existing in-process and distributed-simulated modes don't.
Three review items from Vai:
- `Config.width`/`Config.height` are now plumbed through
RunLambdaLocalInput rather than hardcoded inside
runLambdaLocalRender. Lambda-local's whole point is to catch
event-shape drift; if the handler ever starts honouring
Config.width/height (e.g. for canvas sizing), having those
values flow from the caller means the harness sees what the
fixture authored. The interface change (sketched below) makes the
eventual upgrade-to-real-fixture-resolution a one-line dispatch swap.
- Drop the dead `export type { Fps }` and its unused import
from @hyperframes/core. The module never re-exports it.
- The dispatch site in regression-harness.ts now passes 1920×1080
explicitly with a comment marking it as a placeholder until
the harness compiles the composition HTML up-front to surface
the authored data-width/data-height. distributed-simulated
mode uses the same placeholder internally, kept for parity.
No behavior change in the existing modes; lambda-local now has a
clear extension point for honouring fixture dimensions.
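The interface change in sketch form (fields beyond width/height are assumptions):

// width/height now flow from the caller instead of being hardcoded
// inside runLambdaLocalRender.
interface RunLambdaLocalInput {
  width: number;       // authored data-width, once fixture resolution lands
  height: number;      // authored data-height
  tempRoot: string;    // assumed field: workspace root for the fake S3
  fixturePath: string; // assumed field: composition fixture under test
}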
End-to-end deploy guide for the AWS Lambda surface. Covers:
- Architecture diagram (Step Functions Plan → Map(N) → Assemble +
the single Lambda function dispatching by Action; pulled from
the distributed rendering plan §15.2).
- Prerequisites table (AWS creds, SAM CLI, bun, repo checkout).
- Three deployment paths: hyperframes lambda CLI (recommended),
direct sam deploy against examples/aws-lambda/template.yaml,
and HyperframesRenderStack CDK construct.
- IAM bootstrap via hyperframes lambda policies user/role/validate.
- Cost shape — how Lambda GB-seconds + SFN transitions roll up
  into the displayCost the progress verb prints (see the worked
  illustration after this section).
- Troubleshooting block with the typed error names operators
actually hit (PLAN_HASH_MISMATCH, BROWSER_GPU_NOT_SOFTWARE,
iam:CreateRole denial, stuck RUNNING, S3 Retain semantics).
- "What's NOT in v1" callout so adopters don't burn time looking
for webhooks / compositions verb / HDR support.
Registered under a new "Deploy" group in docs.json's Documentation
tab, sitting after Packages so the conceptual flow is "what you
can build" → "how to ship it."
No code changes.
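A worked illustration of that roll-up — the rates below are placeholders for the example, not the repo's actual pricing constants:

// displayCost (sketch): Lambda GB-seconds plus Step Functions state
// transitions. Rates are placeholder numbers for illustration only.
function displayCost(gbSeconds: number, sfnTransitions: number): number {
  const LAMBDA_USD_PER_GB_SECOND = 0.0000166667; // placeholder rate
  const SFN_USD_PER_TRANSITION = 0.000025;       // placeholder rate
  return gbSeconds * LAMBDA_USD_PER_GB_SECOND
       + sfnTransitions * SFN_USD_PER_TRANSITION;
}

// e.g. a 10240 MB function busy for 480 s across all chunks:
// (10240 / 1024) * 480 = 4800 GB-seconds -> ~$0.08 before transitions.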
One blocker + two important items from Vai's review:
- The BROWSER_GPU_NOT_SOFTWARE troubleshooting entry pointed
adopters at a non-existent `data-gpu-mode` composition attribute.
Replaced with the actual root cause (Chrome launch flags +
@sparticuz/chromium libs in the handler ZIP) and the actual
remediation: rebuild + redeploy via `lambda deploy` (which
always rebuilds the ZIP). The composition-attribute story
would have sent users editing the wrong file entirely.
- Added a `sites create` subsection under Path 1 so adopters
running tight inner loops know how to reuse a project upload
across many renders instead of re-tarring + re-uploading on
each call. The CLI surface was first-class but the doc had
been silent.
- Added a Warning callout under Path 2 explaining that the SAM
template's own ReservedConcurrency default is `-1` (unreserved)
— a reader simplifying the Path 2 example by dropping the
--parameter-overrides flag would silently switch to unreserved
concurrency and pay the runaway-Map cost. The warning mirrors
the cost-shape callout earlier in the page.
Two adopter-facing artifacts that close out Phase 6b's user-facing
surface:
- docs/deploy/migrating-to-hyperframes-lambda.mdx — side-by-side
concept mapping for users coming from another one-command-deploy
video renderer. Covers the verb mapping (deploy/render/progress/
destroy/sites/policies), composition format (plain HTML vs JSX),
render config, and a handful of intentional differences (no HDR
in distributed mode, no webm, gpu-mode=software requirement,
fail-closed font fetch, local stack-state files, narrow-after-
first-deploy IAM pattern). Closes with a migration checklist.
Per repo convention, no competitor framework is named anywhere
in the source — adopters self-identify.
- examples/k8s-jobs/Dockerfile.example + README.md — reference
Dockerfile for adopters who want to run distributed renders
outside AWS Lambda. Bakes Node 22 + chrome-headless-shell +
ffmpeg + the producer source. Deliberately not published to
a registry; adopters build it themselves so Chrome / ffmpeg /
producer versions stay pinned to the checkout they audited.
The README documents the typical K8s Jobs orchestration shape
that points adopters at packages/aws-lambda/src/handler.ts as
the reference adapter.
Migration guide registered under the existing Deploy group in
docs.json. .gitignore extended to negate the new examples/k8s-jobs/
path the same way examples/aws-lambda/ is negated.
No source code changes.
What
Two adopter-facing artifacts that close out Phase 6b's user-facing surface:
- docs/deploy/migrating-to-hyperframes-lambda.mdx — side-by-side concept mapping for users coming from another one-command-deploy video renderer. Verb mapping (deploy / render / progress / destroy / sites / policies), composition format (plain HTML vs JSX), render config, and a list of intentional differences (no HDR in distributed mode, no webm, gpu-mode=software requirement, fail-closed font fetch, local stack-state files, narrow-after-first-deploy IAM pattern). Closes with a migration checklist.
- examples/k8s-jobs/Dockerfile.example + README.md — reference Dockerfile for adopters who want to run HyperFrames distributed renders outside Lambda. Bakes Node 22 + chrome-headless-shell + ffmpeg + the producer source. Built by adopters; not published to a registry.
Why
Per DISTRIBUTED-RENDERING-PLAN.md § 11 Phase 6b PR 6.8: the Lambda surface is the recommended path, but the OSS contract has always been "the distributed primitives are runtime-agnostic — Lambda is one adapter." Without this PR, adopters running Kubernetes Jobs / Argo Workflows / Cloud Run Jobs / ECS Fargate have to derive the runtime image from scratch by reading the Lambda Dockerfile. This PR collapses that to "build the example Dockerfile."
How
- Migration guide registered under the existing Deploy group in docs/docs.json (added by the previous PR in this stack).
- .gitignore extended to negate examples/k8s-jobs/ the same way examples/aws-lambda/ is negated — the repo blanket-ignores examples/* and opts tracked subdirs back in.
- The README points adopters at packages/aws-lambda/src/handler.ts as the reference adapter for how to wire the per-activity event shape into their own orchestrator.
Test plan
- Docs nav (docs/docs.json) parses — verified the new entry alongside the deploy guide added in #914 (docs(lambda): add docs/deploy/aws-lambda.mdx deployment guide).
- .gitignore rule round-trips: `git check-ignore -v examples/k8s-jobs/...` now reports the negation.
- No docker build of examples/k8s-jobs/Dockerfile.example — out of scope for an OSS PR since the image isn't published; the build instructions are documented inline and match the chrome-headless-shell version pinned at 131.0.6778.139 that the producer already uses.
Stacks on #909, #910, #912, #913, and #914.
🤖 Generated with Claude Code