ci(example): add Play Store + TestFlight publish workflow, modernize toolchain #347
Merged
evan-masseau merged 29 commits into feat/example-app · Apr 28, 2026
Conversation
Cursor Bugbot reviewed the changes for commit 7f6e898 and found 1 potential issue (2 total unresolved, including 1 from a previous review).
Play rejects `status: completed` uploads to an app whose listing hasn't been published out of Draft. Switches to `status: draft` so the release lands on the internal track unpublished — manual promote in Console until the app listing is fully set up (content rating, data safety, etc.), then we flip back to `completed` for fully-automated publishes. Track stays `internal`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds two extra updates to the SDK version bump path:

- Android: example/android/app/build.gradle versionName
- iOS: example/ios/KlaviyoReactNativeSdkExample.xcodeproj/project.pbxproj MARKETING_VERSION (all build configurations)

So that TestFlight and Play Store uploads carry the SDK version they're demonstrating without a separate manual step. versionCode/build numbers remain CI-injected per-run (github.run_number) and aren't touched by this script.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
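A minimal sketch of what the two extra updates could look like (the `NEW_VERSION` value and the fixture file contents here are illustrative stand-ins; the real bump-version.sh derives the version elsewhere and edits the actual checked-in files):

```shell
#!/usr/bin/env bash
set -euo pipefail

NEW_VERSION="2.4.0"   # hypothetical; the real script computes this from the release
WORK="$(mktemp -d)"

# Stand-in fixtures for the two files the real script touches
printf 'versionName "2.3.0"\n' > "$WORK/build.gradle"
printf 'MARKETING_VERSION = 2.3.0;\nMARKETING_VERSION = 2.3.0;\n' > "$WORK/project.pbxproj"

# Android: rewrite versionName in example/android/app/build.gradle
sed -i.bak -E "s/versionName \"[0-9.]+\"/versionName \"${NEW_VERSION}\"/" "$WORK/build.gradle"

# iOS: rewrite MARKETING_VERSION in every build configuration of the pbxproj
# (the /g flag plus per-line matching covers all configurations)
sed -i.bak -E "s/MARKETING_VERSION = [0-9.]+;/MARKETING_VERSION = ${NEW_VERSION};/g" "$WORK/project.pbxproj"

grep versionName "$WORK/build.gradle"
grep MARKETING_VERSION "$WORK/project.pbxproj"
```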
Restructures the example app publish workflow as a single workflow with two parallel jobs (deploy-android, deploy-ios) sharing triggers and shared composite actions. Both ship together on workflow_dispatch and on every SDK release.

iOS job mirrors the iOS test app's TestFlight workflow (klaviyo-ios-test-app/.github/workflows/testflight.yml):
- Ephemeral keychain with Apple Distribution cert
- Per-target Manual signing via the xcodeproj Ruby gem (modern Xcode ignores CLI signing flags)
- ExportOptions.plist with provisioning profile UUIDs patched in at build time via PlistBuddy
- xcrun altool upload with retry-on-collision (handles manual TestFlight uploads racing past github.run_number)
- agvtool sets the build number across all targets per run

RN-specific additions on top:
- Node + Yarn setup before pod install (Metro bundler runs during archive)
- bundle exec pod install --repo-update (no stale CDN pinning)
- Delete .xcode.env.local so the Xcode build phase resolves node via $PATH
- GoogleService-Info.plist injected from a base64 secret (versus Android's plain-text google-services.json) so Firebase push works in the build

Shared between jobs via composite actions:
- .github/actions/example-publish-prep — verifies KLAVIYO_EXAMPLE_API_KEY and writes example/.env
- .github/actions/notify-slack-publish — single notification action with result + platform inputs, replacing four duplicated payload blocks

example/ios/ExportOptions.plist is checked in with a GH_actions UUID placeholder; CI overwrites it with the real UUID per run.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Klaviyo Apple Team ID isn't a high-stakes secret (it's recoverable from any signed IPA's embedded.mobileprovision), but it's org-identifying and shouldn't sit in checked-in YAML/plist as a matter of hygiene.

- ExportOptions.plist now ships with an APPLE_TEAM_ID_PLACEHOLDER value that CI rewrites via PlistBuddy.
- The xcodebuild archive call and the per-target signing Ruby script both read the team ID from the APPLE_TEAM_ID env var, sourced from secrets.APPLE_TEAM_ID.

Adds APPLE_TEAM_ID to the required secrets list.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
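A sketch of the placeholder rewrite, assuming a plist shaped like the one described. CI does this with PlistBuddy (`/usr/libexec/PlistBuddy -c "Set :teamID $APPLE_TEAM_ID" ...`), which is macOS-only; a plain substitution is shown here so the sketch runs anywhere, and the team ID value is a made-up example:

```shell
#!/usr/bin/env bash
set -euo pipefail

APPLE_TEAM_ID="ABCDE12345"   # hypothetical value; CI sources this from secrets.APPLE_TEAM_ID
PLIST="$(mktemp)"

# Miniature stand-in for the checked-in ExportOptions.plist
cat > "$PLIST" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <key>teamID</key>
  <string>APPLE_TEAM_ID_PLACEHOLDER</string>
</dict>
</plist>
EOF

# Stamp the real team ID over the placeholder at build time
sed -i.bak "s/APPLE_TEAM_ID_PLACEHOLDER/${APPLE_TEAM_ID}/" "$PLIST"
cat "$PLIST"
```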
Catches the iOS example up to the SDK version (Android versionName was already 2.4.0). bump-version.sh now keeps both in sync going forward, but that path hadn't been run since the script gained example-app handling.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…iOS signing

Replaces the previous github.run_number + retry-on-collision pattern on both jobs with a single preflight call to each store's API.

Android:
- google-github-actions/auth + gcloud Bearer token + Play Edits API to find the highest existing versionCode and pick highest+1.
- AAB is built once with the resolved versionCode and uploaded with r0adkll/upload-google-play (kept the third-party action since the preflight removed the only reason we'd need our own uploader).
- versionName now read from build.gradle and reported in Slack notifications.
- Drops the release.tag_name fallback — the workflow is also triggered by workflow_dispatch and branch push, where there is no release event, and it gave us the useless "manual" label.

iOS — full pivot to the App Store Connect API Key auth model:
- Drops the cert/keychain/provisioning-profile dance, the per-target Manual signing Ruby script, the Apple ID + app-specific password upload, and the archive-and-retry build number loop.
- Writes the API key .p8 to ~/.appstoreconnect/private_keys/ where Apple tools auto-discover it.
- Preflight: JWT-signed call to /v1/builds via the App Store Connect REST API to find the latest CFBundleVersion and pick latest+1. The jwt gem is installed inline since example/Gemfile is reserved for CocoaPods.
- Archive uses xcodebuild -allowProvisioningUpdates with the API key flags so xcodebuild downloads/refreshes signing certs and provisioning profiles automatically. No keychain setup, no .p12.
- Upload via xcrun altool --apiKey/--apiIssuer (Apple's recommended modern path).
- ExportOptions.plist simplified to method=app-store-connect, signingStyle=automatic, with team ID stamped in at run time.

Net effect on the secrets surface:
- iOS drops: BUILD_CERTIFICATE_BASE64, P12_PASSWORD, BUILD_PROVISION_PROFILE_BASE64, EXTENSION_PROVISION_PROFILE_BASE64, KEYCHAIN_PASSWORD, APPLE_ID, APP_SPECIFIC_PASSWORD.
- iOS adds: APP_STORE_CONNECT_API_KEY_ID, APP_STORE_CONNECT_API_KEY_ISSUER_ID, APP_STORE_CONNECT_API_KEY_BASE64.
- Both jobs share APPLE_TEAM_ID (CI stamps it into ExportOptions.plist at build time rather than baking it into checked-in YAML/plist).

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…n signing
- Android: switch preflight from /edits/{editId}/bundles (scoped to the
current edit, always empty) to /edits/{editId}/tracks. Iterate every
active release across every track and pick max(versionCode)+1. Fixes
versionCode-1 collisions caused by the bundles endpoint reporting
empty for a fresh edit.
- iOS: pbxproj inherits the RN-template default
CODE_SIGN_IDENTITY[sdk=iphoneos*] = "iPhone Developer"
which forces Development signing on Release archives. That made
-allowProvisioningUpdates request a Development profile (and try to
register the runner machine as a dev device). Override at xcodebuild
invocation time with CODE_SIGN_IDENTITY="Apple Distribution" (and the
same SDK-conditional variant) so xcodebuild provisions an App Store
distribution profile via the API key.
Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…override, mute Slack temporarily
Android preflight:
- Switch back to /edits/{editId}/bundles. Per fastlane/supply, this
endpoint actually returns ALL bundles for the app (its comment is
literally "Get a list of all AAB version codes"); my earlier read
that it was edit-scoped was wrong, and switching to /tracks regressed
to "active releases only" which doesn't surface superseded
versionCodes Play still treats as used.
- Request the androidpublisher OAuth scope explicitly on the gcloud
token. The default cloud-platform scope was the most likely cause of
the previous run returning empty bundles.
- Log the raw bundles response so future surprises are diagnosable
without re-instrumenting.
- Apply a 100 floor on the resolved versionCode to leapfrog the small
historical versionCodes (1-9) we know got uploaded by the original
Android-only workflow before its rename — even if the bundles list
comes back empty for any reason, we won't collide with those.
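The Android preflight above reduces to a small shell shape. The response below is a canned fixture; the real step issues an authenticated GET against `.../edits/{editId}/bundles` with an androidpublisher-scoped token, and the exact parsing in the workflow may differ:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fixture standing in for the Play Edits API bundles response
BUNDLES_JSON='{"bundles":[{"versionCode":3},{"versionCode":8},{"versionCode":5}]}'

# Log the raw response so surprises are diagnosable without re-instrumenting
echo "bundles response: $BUNDLES_JSON"

# Highest versionCode across ALL uploaded bundles (not just active releases);
# default to 0 if the list is empty so the arithmetic still works
HIGHEST="$(echo "$BUNDLES_JSON" | grep -o '"versionCode":[0-9]*' | cut -d: -f2 | sort -n | tail -1)"
NEXT=$(( ${HIGHEST:-0} + 1 ))
echo "resolved versionCode: $NEXT"
```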
iOS Archive:
- Drop the second build-setting override
`CODE_SIGN_IDENTITY[sdk=iphoneos*]=Apple Distribution`. xcodebuild's
CLI build-setting syntax doesn't support the [sdk=...] conditional
modifier; bash split it into a malformed key/value and the resulting
literal "iphoneos*]=Apple Distribution" propagated into Pods'
CODE_SIGN_IDENTITY, breaking the entire dependency build. The plain
unconditional CODE_SIGN_IDENTITY="Apple Distribution" wins against
the project's conditional setting on its own because command-line
overrides take highest precedence in xcodebuild's resolution order.
Slack notifications:
- All four notify-slack-publish callsites gated to `if: false` while we
iterate. Each failed run was generating a noisy alert. Re-enable by
flipping back to `if: success()` / `if: failure()` once both jobs are
reliably green.
Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…and Android floor

iOS Archive:
- Add `-destination "generic/platform=iOS"` so xcodebuild archives for iOS device. Without it, the runner's Apple Silicon Mac advertises itself as a valid destination for the scheme (Designed-for-iPad is enabled), and xcodebuild defaults to "My Mac" — which uses Development signing for archive and breaks our Distribution flow.
- Drop the `CODE_SIGN_IDENTITY="Apple Distribution"` CLI override. With Automatic signing enabled in the project, xcodebuild already picks Apple Distribution for the `archive` action when the destination is iOS. Combining Automatic with a manual identity override is what xcodebuild flagged as "conflicting provisioning settings". Klaviyo's Distribution Managed cert is team-wide and bundle-ID-agnostic, so -allowProvisioningUpdates + the API key can sign without any further setup.

Android preflight:
- Drop the 100 floor on the resolved versionCode. Now that the androidpublisher OAuth scope fix is confirmed working (the last run succeeded with highest=8 returned correctly), the floor is just a one-time hack that would actively cause collisions on subsequent runs if the bundles list ever returned empty after we'd uploaded 100+. Trust the API; in the rare empty-response case, the upload fails loudly with a useful error.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
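For illustration, the archive invocation after this change could look like the following. The workspace, scheme, and archive path are placeholders, and the command is only assembled, not executed, here; the point is the presence of the generic-device destination and the absence of any identity override:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of the post-change archive call. Note: no CODE_SIGN_IDENTITY override;
# Automatic signing picks Apple Distribution once the destination is a generic
# iOS device rather than the runner's own Mac.
ARCHIVE_ARGS=(
  xcodebuild
  -workspace KlaviyoReactNativeSdkExample.xcworkspace   # placeholder path
  -scheme KlaviyoReactNativeSdkExample                  # placeholder scheme
  -configuration Release
  -destination "generic/platform=iOS"   # forces a device archive; otherwise "My Mac" wins
  -archivePath build/Example.xcarchive
  -allowProvisioningUpdates
  archive
)
echo "${ARCHIVE_ARGS[@]}"
```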
Without this env var the Podfile's new-architecture branch doesn't run, so codegen artifacts like RCTAppDependencyProvider.h are never produced, even though Pod targets that depend on them (e.g. ReactAppDependencyProvider) still get wired into the Pods project. The mismatch surfaces at archive time as:

    lstat(.../build/generated/ios/RCTAppDependencyProvider.h): No such file or directory

Setting RCT_NEW_ARCH_ENABLED=1 at both the pod install and xcodebuild steps mirrors what the existing iOS compile CI workflow does for its new-arch matrix slice and matches RN 0.81's default of new-arch on.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
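The distinction that matters for setting this env var (and that resurfaces in the later step-env-propagation fix) is that only an *exported* shell variable reaches subprocesses spawned further down the tool chain. A minimal, self-contained demonstration:

```shell
#!/usr/bin/env bash
set -euo pipefail

# A variable assigned without `export` is local to this shell...
RCT_NEW_ARCH_ENABLED=1
CHILD_SEES_PLAIN="$(bash -c 'echo "${RCT_NEW_ARCH_ENABLED:-unset}"')"

# ...while an exported one is inherited by every subprocess in the chain
# (e.g. yarn -> turbo -> react-native build-ios -> pod install).
export RCT_NEW_ARCH_ENABLED=1
CHILD_SEES_EXPORTED="$(bash -c 'echo "${RCT_NEW_ARCH_ENABLED:-unset}"')"

echo "plain assignment: $CHILD_SEES_PLAIN"
echo "exported:         $CHILD_SEES_EXPORTED"
```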
… iOS
iOS Archive:
- Drop `rm -rf ./build` from the Archive step's pre-clean. Pod install
produces RN's new-arch codegen artifacts into ./build/generated/ios/
(RCTAppDependencyProvider.h, RCTModuleProviders.{h,mm}, etc.) and
xcodebuild's Pods targets read them from there during archive. The
cleanup line was inherited from the previous retry-loop shape and
was deleting codegen output right before xcodebuild tried to use it.
Only clear the .xcarchive now, which is the one thing the archive
step itself produces.
Android job:
- Gate with `if: false` while we iterate on the iOS pipeline. Android
was confirmed end-to-end green in run 25031149357. Each branch push
re-triggers both jobs, and the Android slot was burning runner
minutes for no signal. Re-enable by removing the gate once iOS is
reliably green.
Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…list

Previous exportArchive failed with `No signing certificate "iOS Distribution" found` even though the team has Apple Distribution and Distribution Managed certs available. With method=app-store-connect + signingStyle=automatic and no signingCertificate hint, Xcode's exportArchive can fall back to the legacy "iOS Distribution" identity family, ignoring the modern unified "Apple Distribution" / Distribution Managed certs that -allowProvisioningUpdates would actually serve.

Adding signingCertificate=Apple Distribution to ExportOptions.plist (via PlistBuddy in the existing stamp step, alongside teamID) tells exportArchive to look for the modern unified cert family on Apple's cloud-signing side.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
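Pulling the pieces together, a minimal ExportOptions.plist of the shape described might look like this (written via heredoc purely for illustration; the real file is checked in, and the teamID value is stamped over a placeholder by PlistBuddy in CI):

```shell
#!/usr/bin/env bash
set -euo pipefail

PLIST="$(mktemp)"

# Minimal shape after this change: modern export method, automatic signing,
# plus an explicit hint toward the unified "Apple Distribution" cert family.
cat > "$PLIST" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <key>method</key>
  <string>app-store-connect</string>
  <key>signingStyle</key>
  <string>automatic</string>
  <key>signingCertificate</key>
  <string>Apple Distribution</string>
  <key>teamID</key>
  <string>APPLE_TEAM_ID_PLACEHOLDER</string>
</dict>
</plist>
EOF

cat "$PLIST"
```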
…kflow

Action runtime upgrade (resolves the Node.js 20 deprecation warning Apple runners surface — Node 20 is removed from runner images on 2026-09-16):
- actions/checkout @v4 -> @v5
- actions/setup-node @v4 -> @v5
- actions/cache @v4 -> @v5
- actions/setup-java @v4 -> @v5

iOS 26 SDK upgrade (resolves App Store Connect's ITMS-90725 warning that all uploads after 2026-04-28 must be built with the iOS 26 SDK):
- runs-on: macos-15 -> macos-26 on both ios-build and publish-example's TestFlight job. macos-26 is GA as of 2026-02-26 and ships with Xcode 26.2 / iOS 26 SDK as the default Xcode at /Applications/Xcode.app.
- Drop the explicit DEVELOPER_DIR=Xcode_16.4 override so the runner's default Xcode (26.2) is used. Resilient to future point bumps.

ios-build CI simplification:
- Remove the cocoapods Pods/ cache layer. `react-native build-ios` unconditionally runs `bundle install + bundle exec pod install` as part of its build pipeline regardless of any pre-existing Pods/, so the cache restore was providing zero speed benefit while creating a cache-poisoning false positive that broke first-run builds.
- Remove the `if: github.run_attempt == 1` guard on the turbo cache status check; that gate existed only to bypass the cocoapods cache hack on retries and is no longer needed.
- Yarn cache (in the Setup composite) and turborepo cache stay — both provide real value (yarn install speedup; a turbo HIT skips the entire iOS build when nothing relevant changed).

publish-example workflow finalization:
- Re-enable Slack notifications on both jobs (4 callsites flipped back from `if: false` to `if: success()` / `if: failure()`).
- Re-enable the Android job (drop the `if: false` block). Android was confirmed working in run 25031149357 and was only gated to save runner minutes during iOS iteration.
- The push trigger on this branch is intentionally retained for one more verification round; the inline comment already calls out that it must be removed before merging to master.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…triggers

publish-example.yml triggers:
- Drop the temporary push-on-this-branch trigger that was used to iterate on the workflow during the PR. Final triggers: workflow_dispatch (manual publish), release: published (sync example with each SDK release), and push: branches: [master] (re-publish on merge).

Ruby + CocoaPods alignment across local dev and CI:
- New file example/.ruby-version pins Ruby 3.2.1 (rbenv-style); both ios-build.yml and publish-example.yml's setup-ruby steps now read it instead of carrying their own ruby-version inputs.
- Bump CocoaPods to 1.16.2 in example/Gemfile (was 1.15.2). RubyGems 3.0.3.1 + Ruby 2.6.10 was a known flake source — Ruby 3.2.1 ships with RubyGems 3.4.x and resolves the chronic warning.
- bundler-cache: true on setup-ruby gives us `bundle install --frozen` for free, so any Gemfile / Gemfile.lock drift fails the Ruby setup step before we get to pod install.
- Regenerated example/Gemfile.lock and example/ios/Podfile.lock with Ruby 3.2.1 + CocoaPods 1.16.2.

Lockfile drift guard (catches PRs that touch the Podfile without re-running pod install, or that ran pod install with the wrong tool versions):
- New `Verify Podfile.lock unchanged` step in ios-build.yml runs `git diff --exit-code example/ios/Podfile.lock` after the build. Pod install already runs as part of `react-native build-ios` via the example app's automaticPodsInstallation: true config, so this just checks the side effect rather than running pod install a second time.
- turbo.json gains example/.ruby-version, example/Gemfile, and example/Gemfile.lock as build:ios inputs so a Gemfile-only change forces a turbo MISS → build runs → drift check runs.

Cocoapods cleanup in ios-build.yml:
- Drop the explicit "Install cocoapods" workflow step entirely. RN CLI's auto pod install (triggered by automaticPodsInstallation: true) is the same code path real users hit; keeping a separate workflow step was paying for pod install twice on every miss.
- Drop the stub GoogleService-Info.plist generation. Inject the real plist from GOOGLE_SERVICE_INFO_PLIST_BASE64 (the same secret the publish workflow uses) so the smoke-test build is identical to publish.

Re-run lever (now actually works):
- Restore `if: github.run_attempt == 1` on the turbo cache probe step so that on a manual rerun, turbo_cache_hit stays unset.
- Pass `--force` to turbo on rerun (`github.run_attempt > 1`) so the build step actually bypasses turbo's build cache, not just the probe. Previously the lever skipped the probe but the build still hit the same cache, making the lever a no-op for rerun.

Xcode pinning:
- DEVELOPER_DIR set explicitly to /Applications/Xcode_26.2.app/... in both ios-build.yml and publish-example.yml's deploy-ios job. macos-26's default Xcode is 26.2 today, but pinning protects against silent Xcode-version drift on future runner image updates.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
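The drift guard reduces to one git invocation; the exit-code behavior it leans on can be demonstrated self-contained (throwaway repo and a fake lockfile, nothing from the real project):

```shell
#!/usr/bin/env bash
set -euo pipefail

REPO="$(mktemp -d)"
cd "$REPO"
git init -q
git config user.email ci@example.com
git config user.name ci

echo "PODS: v1" > Podfile.lock
git add Podfile.lock
git commit -qm "baseline lockfile"

# Clean tree: git diff --exit-code returns 0, the CI step passes
git diff --exit-code Podfile.lock && CLEAN=pass || CLEAN=fail

# Simulate pod install rewriting the lockfile during the build
echo "PODS: v2" > Podfile.lock

# Dirty tree: returns 1, which fails the step under `set -e`
git diff --exit-code Podfile.lock > /dev/null && DRIFT=pass || DRIFT=fail

echo "clean=$CLEAN drift=$DRIFT"
```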
Now that the duplicate pod install is gone (RN CLI's auto pod install
during react-native build-ios is the single canonical install path on
ios-build, and publish-example does its own bundle exec pod install),
restoring a cached Pods/ folder is unambiguously a no-op at install time
when the key hits — Manifest.lock matches Podfile.lock and pod install
returns in ~2s instead of ~30-60s.
The previous Pods cache (removed earlier in this PR) was keyed only on
yarn.lock + Podfile.lock, missed important inputs (Ruby version,
CocoaPods gem version, our local SDK pod's source), and combined with a
broken hit-detection gate caused first-run cache poisoning. The new key
captures everything that should invalidate Pods/:
- example/ios/Podfile.lock (resolved pod versions)
- example/Gemfile.lock (CocoaPods gem version)
- example/.ruby-version (Ruby interpreter version)
- *.podspec (root podspec for our SDK)
- ios/**/*.{swift,h,m,mm} (our local SDK pod's source)
Both ios-build.yml and publish-example.yml's deploy-ios job get the
cache step. The keys share the `<os>-pods-` prefix so cache entries
populated by either workflow are reusable by the other (the
inputs.new-architecture matrix slice on ios-build keeps its arch in
the namespace; publish always builds new-arch and uses `pods-publish-`).
Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
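As a sketch of how such a key behaves, here is a shell analog of GitHub's `hashFiles()` over the listed inputs (file contents are fixtures and the `Linux-pods-` prefix is illustrative; the real workflow builds the key declaratively in actions/cache):

```shell
#!/usr/bin/env bash
set -euo pipefail

WORK="$(mktemp -d)"; cd "$WORK"

# Fixture stand-ins for the real key inputs
echo "PODS v1"        > Podfile.lock      # resolved pod versions
echo "cocoapods 1.16" > Gemfile.lock      # CocoaPods gem version
echo "3.2.1"          > .ruby-version     # Ruby interpreter version
echo "spec v1"        > sdk.podspec       # SDK pod definition

# One digest over all inputs in a stable order, like hashFiles()
HASH="$(cat Podfile.lock Gemfile.lock .ruby-version sdk.podspec | sha256sum | cut -c1-16)"
KEY="Linux-pods-${HASH}"
echo "cache key: $KEY"

# Any single input change produces a different key, invalidating cached Pods/
echo "PODS v2" > Podfile.lock
HASH2="$(cat Podfile.lock Gemfile.lock .ruby-version sdk.podspec | sha256sum | cut -c1-16)"
echo "key after lockfile change: Linux-pods-${HASH2}"
```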
…edback

Label trigger for ad-hoc publishing (publish-example.yml):
- pull_request: types: [labeled] fires the workflow whenever any label is added to any PR. Both deploy-android and deploy-ios are gated on `github.event.label.name == 'deploy-example-app'`, so only that specific label triggers a publish.
- New `cleanup-label` job runs after both deploys (`needs:` + `if: always()` + the same label filter) and removes the label from the PR via github-script. 404s on already-removed labels are tolerated. Removing the label means re-applying it re-triggers a fresh publish, which is the desired UX for "I want to push another build."

Review feedback addressed:
- Cursor: `iOS build number resolution doesn't mirror Android's max approach`. The preflight no longer relies on `sort=-uploadedDate&limit=1` (Apple's `uploadedDate` is documented to sometimes return null, and out-of-chronological-order uploads can produce a stale "latest"). Now fetches `limit=200` and takes the numeric max, the same shape as Android.
- Cursor: `Curl missing --fail silently ignores Play API errors`. Add `-f` to the three curl calls in the Android preflight. With `set -euo pipefail` already in place, an HTTP error (e.g. a 403 from a misconfigured service account) now aborts the step at the API call rather than silently producing HIGHEST=0 → wasted multi-minute build → confusing version-code conflict at upload.
- Cursor: `Ruby version file doesn't match Gemfile.lock recorded version`. Ran `bundle update --ruby` under Ruby 3.2.1 to refresh example/Gemfile.lock's RUBY VERSION line from 3.3.5p100 to 3.2.1p31, matching example/.ruby-version.

(The fourth unresolved thread — the placeholder team ID in ExportOptions.plist — is being answered as a reply on the PR thread; no code change required.)

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
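The corrected iOS preflight shape: numeric max over a page of builds rather than trusting sort order. The response below is a fixture, deliberately out of version order; the real step is a JWT-authenticated `GET /v1/builds?limit=200` against the App Store Connect API, and its exact parsing may differ:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fixture standing in for an App Store Connect /v1/builds page; the builds are
# NOT in version order, which is exactly why sort=-uploadedDate&limit=1 was fragile
BUILDS_JSON='{"data":[
  {"attributes":{"version":"41"}},
  {"attributes":{"version":"43"}},
  {"attributes":{"version":"42"}}
]}'

# Take the numeric max across the page, the same shape as the Android preflight
LATEST="$(echo "$BUILDS_JSON" | grep -o '"version":"[0-9]*"' | grep -o '[0-9]*' | sort -n | tail -1)"
NEXT=$(( ${LATEST:-0} + 1 ))
echo "next CFBundleVersion: $NEXT"
```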
The step-level `env:` block setting RCT_NEW_ARCH_ENABLED wasn't propagating through to pod install. Symptom: in the new-arch matrix slice, the Podfile evaluation reported "Configuring the target with the Legacy Architecture" even though the workflow's step env showed RCT_NEW_ARCH_ENABLED=1, then pod install failed with:

    [!] No podspec found for `ReactAppDependencyProvider` in `build/generated/ios`

because RN CLI's auto pod install didn't run new-architecture codegen. The chain that loses the env var somewhere:

    yarn → turbo → react-native build-ios → bundle exec pod install

The fix matches what the previous (pre-Install-cocoapods-removal) version of the workflow did and was empirically reliable: explicitly `export` the var in the shell script body so every subprocess in the chain inherits it from the actual shell env, not just the GitHub-Actions-managed step env.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Diagnosed by reading @react-native-community/cli source. Both old-arch and new-arch slices were failing pod install with:

    [!] No podspec found for `ReactAppDependencyProvider` in `build/generated/ios`

Root cause is a bootstrapping issue inside RN CLI: cli-platform-apple/build/tools/getArchitecture.js reads Pods/Pods.xcodeproj/project.pbxproj and returns true only if the string `-DRCT_NEW_ARCH_ENABLED=1` is present. On a fresh runner, Pods/ doesn't exist yet, so it returns false, and cli-config-apple/build/tools/installPods.js's runPodInstall hard-codes the subprocess env to RCT_NEW_ARCH_ENABLED='0' — overriding any value we exported in our shell. That mismatched value then causes the Podfile's pre_install codegen to skip generating ReactAppDependencyProvider.podspec, and pod install fails on the dangling reference.

The previous workflow worked despite this because it ran an explicit `yarn example setup` (= bundle exec pod install) before the build, with RCT_NEW_ARCH_ENABLED exported in bash. That populated Pods/ correctly, which made RN CLI's later auto-detection return the right arch. When I removed the explicit step earlier in this PR, I assumed RN CLI's auto pod install was equivalent — it isn't on a fresh runner.

Restoring the explicit `Install CocoaPods` step (gated on turbo_cache_hit != 1, so a turbo HIT still skips everything). The auto-install that happens during build:ios afterwards is fast (~5-10s) because Pods/ already matches Podfile.lock.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The check assumed pod install is deterministic given the same Podfile + same tool versions. That's not true for our setup:

1. Klaviyo SDK pods are pinned by branch (`:branch => 'rel/5.3.0'`), not commit. Every `pod install` re-resolves the branch HEAD and produces fresh SPEC CHECKSUMS whenever the SDK branch advances.
2. RN's Podfile evaluation calls `get_folly_config()`, which returns different values depending on `new_arch_enabled`, producing different RCT-Folly podspec content (and thus checksum) for old-arch vs new-arch — a single committed lockfile can't satisfy both matrix slices simultaneously.

Result: the drift check fires basically every run with churn that isn't the developer's fault to fix.

What we still have for catching the original concern:
- `bundler-cache: true` on `setup-ruby` runs `bundle install --frozen` and fails fast on any Gemfile.lock drift → catches a wrong CocoaPods gem version or wrong Ruby version.
- `example/.ruby-version` pins the Ruby interpreter.
- `example/Gemfile` pins CocoaPods to 1.16.2.
- `pod install` itself still has to succeed, which means the Podfile is internally consistent.

Lost: catching "PR changed the Podfile but didn't re-run pod install" by byte-identity. That'll quietly resolve at CI pod-install time without the lockfile being committed. An acceptable trade for not chasing false-positive lockfile diffs every PR. Worth revisiting once we're on new-arch-only (RN 0.82+) and the Klaviyo pods are pinned by commit rather than branch.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cursor Bugbot reviewed the changes for commit 46f7620 and found 1 potential issue.
For local-path pods, CocoaPods doesn't bundle source into Pods/. It only
adds path references to Pods.xcodeproj that xcodebuild resolves at
compile time. So content edits to existing files in `ios/` don't make
the cached Pods/ stale — xcodebuild reads the current content via the
path reference at build time.
Hashing `ios/**/*.{swift,h,m,mm}` was busting the Pods cache on every
SDK content edit, which is most of our SDK PRs. Cache hits dropped to
"non-iOS-touching commits only," which is rare. Net effect was ~30-60s
of cache miss cost on builds that didn't actually need a fresh pod
install.
Edge case the previous key correctly defended against and we're
intentionally giving up: adding/removing files matching the podspec's
source_files glob (`ios/**/*.{h,m,mm,swift}`). In that case
Pods.xcodeproj would have a stale file list and the new file would
silently not be compiled into the pod. Two reasons we accept the risk:
1. Local devs run `pod install` themselves and would catch this
before pushing — Pods.xcodeproj on their machine is regenerated.
2. File-set changes are rare (~quarterly when we add a new bridge
module or refactor); content edits are common (every SDK PR).
Cache key inputs that remain (cover the actual common drift sources):
- example/ios/Podfile.lock (resolved pod versions)
- example/Gemfile.lock (CocoaPods gem version)
- example/.ruby-version (Ruby interpreter version)
- *.podspec (our SDK pod definition; source_files
glob lives here, so glob changes
invalidate)
Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous version of these comments warned about a "rare edge case" where adding/removing files in `ios/` could let a stale Pods.xcodeproj slip through. That warning was wrong — verified empirically by adding a fake source file locally, running `bundle exec pod install`, and confirming the new file is added to Pods.xcodeproj even though Podfile.lock didn't change.

The misconception: I was thinking pod install might skip when the lockfile hasn't changed, leaving cached Pods.xcodeproj in place. CocoaPods doesn't actually work that way — `pod install` always re-evaluates the podspec's source_files glob from the current filesystem and rewrites Pods.xcodeproj from scratch. The "skip if up-to-date" check that exists in higher-level wrappers (e.g. react-native CLI's automaticPodsInstallation) decides whether to *invoke* pod install, not anything about pod install's internal behavior.

So removing `ios/**` from the Pods cache key is simply correct, not a tradeoff. Comments updated to reflect that.

Part of MAGE-464
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…rkflows Drop session-history narrative and redundant explanations from comments in ios-build.yml and publish-example.yml; keep the load-bearing "why" only. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
evan-masseau added a commit that referenced this pull request on Apr 28, 2026
… toolchain (#347) * ci(example): add Play Store internal track publish workflow Adds a GitHub Actions workflow to build and publish the React Native SDK example app (com.klaviyoreactnativesdkexample) to the Google Play internal track. Fires on SDK releases and manual workflow_dispatch. Includes Node 20 + Yarn 3 setup, JS bundle generation via the RN Gradle plugin (bundleRelease), signing with r0adkll/sign-android-release, and Slack notifications for both success and failure. Part of MAGE-464 * ci(example): add branch push trigger, bump runner to ubuntu-24.04 Temporary push trigger scoped to this branch so the publish pipeline can be exercised end-to-end before merge — workflow_dispatch only fires from the default branch, so push is the only way to test from the PR. Remove before merging to master. Also aligns the runner with every other workflow in the repo (android-build.yml, ci.yml, doc-bot.yml all use ubuntu-24.04), addressing the Cursor Bugbot note. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * fix(ci): restore Slack header block so alert title renders When a `blocks` array is supplied, Slack treats top-level `text` as fallback only — for mobile push and accessibility — and never renders it in the message body. Without a header block the "✅ published" / "🚨 publish failed" title vanished from the Slack message, leaving just the metadata sections. Adds a `type: header` block (plain_text) at the top of both the success and failure payloads so the title is visible inline. Top-level `text` stays so the push notification fallback still reads correctly. 
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * fix(example/android): produce unsigned release AAB for CI to sign once The publish workflow was failing at Play upload with "signed with multiple certificate chains" because Gradle was signing the release AAB with the debug keystore (via the RN template's stock `release { signingConfig signingConfigs.debug }` line) before the `r0adkll/sign-android-release` step layered the real upload key on top. Play rejects AABs with more than one signer. Drops the debug signing config from the release buildType so bundleRelease produces an unsigned AAB. The CI signing step then signs it exactly once with the upload key from SIGNING_KEY. For local signed release builds, pass signing via `-Pandroid.injected.signing.*` gradle properties — noted inline. Also bumps versionName to 2.4.0 to match the SDK version for the first Play Store release. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * ci(example): sign AAB via gradle-injected properties, drop r0adkll signer Switches the publish workflow to sign the release AAB in a single Gradle pass using `-Pandroid.injected.signing.*` properties. AGP produces a properly v2/v3-signed AAB directly, so the separate `r0adkll/sign- android-release` step is gone — fewer moving parts, no double-signing risk, and one less abandoned Node-20 action emitting deprecation warnings. Also reverts the previous build.gradle change that removed the release signingConfig. With CI now doing its own signing via gradle properties at build time, there's no need to break the RN template default of `release { signingConfig signingConfigs.debug }` — that default keeps local `./gradlew :app:bundleRelease` and `yarn android --mode release` working out of the box for devs. CI's injected properties override the buildType signingConfig anyway. 
The keystore is decoded from the SIGNING_KEY secret to ${RUNNER_TEMP} and cleaned up in an always() step as hygiene (the runner is ephemeral but explicit cleanup is cheap insurance).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci(example): inject versionCode from github.run_number

Prevents collision with the versionCode=1 AAB that was uploaded manually to the Play Store internal track to prove package name ownership. Play rejects duplicate versionCodes per package, so the next CI upload would fail without this.

github.run_number is monotonic per-workflow-per-repo, so successive CI publishes will always produce a strictly-increasing versionCode. The static `versionCode 1` in build.gradle stays as the local dev default; AGP's `android.injected.version.code` property wins when set.

Known limitation (same as sibling TestFlight workflow): manual uploads that bump the versionCode outside of CI can get ahead of run_number, which would then fail as "not greater than previously uploaded build". Fix if/when it happens by re-triggering CI enough times to overtake, or add a large offset to run_number.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): read injected versionCode from project property in build.gradle

AGP's `android.injected.version.code` property is an IDE-oriented flag that isn't reliably honored by command-line Gradle builds — CI passed it but the resulting AAB still had versionCode=1, colliding with the manually-uploaded verification build in Play.

Switches to a custom project property read explicitly in build.gradle: `-PreleaseVersionCode=N` → `Integer.parseInt(...findProperty(...))`. Locally verified: `-PreleaseVersionCode=42` produces AndroidManifest with `android:versionCode="42"`. Falls back to 1 when the property isn't set, preserving local dev behavior.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): upload as draft release while Play listing is in Draft state

Play rejects `status: completed` uploads to an app whose listing hasn't been published out of Draft. Switches to `status: draft` so the release lands on the internal track unpublished — manual promote in Console until the app listing is fully set up (content rating, data safety, etc.), then we flip back to `completed` for fully-automated publishes. Track stays `internal`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): sync example app version with SDK in bump-version.sh

Adds two extra updates to the SDK version bump path:

- Android: example/android/app/build.gradle versionName
- iOS: example/ios/KlaviyoReactNativeSdkExample.xcodeproj/project.pbxproj MARKETING_VERSION (all build configurations)

So that TestFlight and Play Store uploads carry the SDK version they're demonstrating without a separate manual step. versionCode/build numbers remain CI-injected per-run (github.run_number) and aren't touched by this script.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci(example): add iOS TestFlight publish job alongside Android

Restructures the example app publish workflow as a single workflow with two parallel jobs (deploy-android, deploy-ios) sharing triggers and shared composite actions. Both ship together on workflow_dispatch and on every SDK release.
iOS job mirrors the iOS test app's TestFlight workflow (klaviyo-ios-test-app/.github/workflows/testflight.yml):

- Ephemeral keychain with Apple Distribution cert
- Per-target Manual signing via xcodeproj Ruby gem (modern Xcode ignores CLI signing flags)
- ExportOptions.plist with provisioning profile UUIDs patched in at build time via PlistBuddy
- xcrun altool upload with retry-on-collision (handles manual TestFlight uploads racing past github.run_number)
- agvtool sets the build number across all targets per run

RN-specific additions on top:

- Node + Yarn setup before pod install (Metro bundler runs during archive)
- bundle exec pod install --repo-update (no stale CDN pinning)
- Delete .xcode.env.local so Xcode build phase resolves node via $PATH
- GoogleService-Info.plist injected from a base64 secret (versus Android's plain-text google-services.json) so Firebase push works in the build

Shared between jobs via composite actions:

- .github/actions/example-publish-prep — verifies KLAVIYO_EXAMPLE_API_KEY and writes example/.env
- .github/actions/notify-slack-publish — single notification action with result + platform inputs, replacing four duplicated payload blocks

example/ios/ExportOptions.plist is checked in with a GH_actions UUID placeholder; CI overwrites it with the real UUID per run.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci(example): pull Apple team ID from secret instead of hardcoding

The Klaviyo Apple Team ID isn't a high-stakes secret (it's recoverable from any signed IPA's embedded.mobileprovision) but it's org-identifying and shouldn't sit in checked-in YAML/plist as a matter of hygiene.

- ExportOptions.plist now ships with an APPLE_TEAM_ID_PLACEHOLDER value that CI rewrites via PlistBuddy.
- The xcodebuild archive call and the per-target signing Ruby script both read the team ID from the APPLE_TEAM_ID env var, sourced from secrets.APPLE_TEAM_ID.

Adds APPLE_TEAM_ID to the required secrets list.
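A rough Python stand-in for the placeholder-stamping step described above (the real workflow uses /usr/libexec/PlistBuddy from bash; the plist keys and team ID shown here are illustrative, not the checked-in file):

```python
import plistlib
from io import BytesIO

# Hypothetical checked-in ExportOptions.plist with the placeholder teamID.
CHECKED_IN = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0"><dict>
  <key>method</key><string>app-store-connect</string>
  <key>teamID</key><string>APPLE_TEAM_ID_PLACEHOLDER</string>
</dict></plist>"""

def stamp_team_id(plist_bytes: bytes, team_id: str) -> bytes:
    """Replace the placeholder teamID, as CI does with PlistBuddy."""
    opts = plistlib.load(BytesIO(plist_bytes))
    # Guard: fail loudly if the checked-in file drifted from the placeholder.
    assert opts["teamID"] == "APPLE_TEAM_ID_PLACEHOLDER", "unexpected checked-in value"
    opts["teamID"] = team_id  # CI sources this from secrets.APPLE_TEAM_ID
    return plistlib.dumps(opts)

stamped = plistlib.load(BytesIO(stamp_team_id(CHECKED_IN, "TEAMID1234")))
print(stamped["teamID"])  # TEAMID1234
```

The same stamp-at-build-time pattern keeps any org-identifying value out of version control while the committed file stays a valid plist.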
Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(example/ios): bump MARKETING_VERSION to 2.4.0

Catches the iOS example up to the SDK version (Android versionName was already 2.4.0). bump-version.sh now keeps both in sync going forward, but that path hadn't been run since the script gained example-app handling.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci(example): preflight version numbers from store APIs and modernize iOS signing

Replaces the previous github.run_number + retry-on-collision pattern on both jobs with a single preflight call to each store's API:

Android:

- google-github-actions/auth + gcloud Bearer token + Play Edits API to find the highest existing versionCode and pick highest+1.
- AAB is built once with the resolved versionCode and uploaded with r0adkll/upload-google-play (kept the third-party action since the preflight removed the only reason we'd need our own uploader).
- versionName now read from build.gradle and reported in Slack notifications.
- Drops the release.tag_name fallback — the workflow is also triggered by workflow_dispatch and branch push, where there is no release event, and it gave us the useless "manual" label.

iOS — full pivot to the App Store Connect API Key auth model:

- Drops the cert/keychain/provisioning-profile dance, the per-target Manual signing Ruby script, the Apple ID + app-specific password upload, and the archive-and-retry build number loop.
- Writes the API key .p8 to ~/.appstoreconnect/private_keys/ where Apple tools auto-discover it.
- Preflight: JWT-signed call to /v1/builds via the App Store Connect REST API to find the latest CFBundleVersion and pick latest+1. jwt gem is installed inline since example/Gemfile is reserved for CocoaPods.
- Archive uses xcodebuild -allowProvisioningUpdates with the API key flags so xcodebuild downloads/refreshes signing certs and provisioning profiles automatically.
No keychain setup, no .p12.
- Upload via xcrun altool --apiKey/--apiIssuer (Apple's recommended modern path).
- ExportOptions.plist simplified to method=app-store-connect, signingStyle=automatic, with team ID stamped in at run time.

Net effect on the secrets surface:

- iOS drops: BUILD_CERTIFICATE_BASE64, P12_PASSWORD, BUILD_PROVISION_PROFILE_BASE64, EXTENSION_PROVISION_PROFILE_BASE64, KEYCHAIN_PASSWORD, APPLE_ID, APP_SPECIFIC_PASSWORD.
- iOS adds: APP_STORE_CONNECT_API_KEY_ID, APP_STORE_CONNECT_API_KEY_ISSUER_ID, APP_STORE_CONNECT_API_KEY_BASE64.
- Both jobs share APPLE_TEAM_ID (CI stamps it into ExportOptions.plist at build time rather than baking it into checked-in YAML/plist).

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): correct Android preflight endpoint and force iOS Distribution signing

- Android: switch preflight from /edits/{editId}/bundles (scoped to the current edit, always empty) to /edits/{editId}/tracks. Iterate every active release across every track and pick max(versionCode)+1. Fixes versionCode-1 collisions caused by the bundles endpoint reporting empty for a fresh edit.
- iOS: pbxproj inherits the RN-template default CODE_SIGN_IDENTITY[sdk=iphoneos*] = "iPhone Developer" which forces Development signing on Release archives. That made -allowProvisioningUpdates request a Development profile (and try to register the runner machine as a dev device). Override at xcodebuild invocation time with CODE_SIGN_IDENTITY="Apple Distribution" (and the same SDK-conditional variant) so xcodebuild provisions an App Store distribution profile via the API key.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): correct Play preflight scope/endpoint, simplify iOS signing override, mute Slack temporarily

Android preflight:

- Switch back to /edits/{editId}/bundles.
Per fastlane/supply, this endpoint actually returns ALL bundles for the app (its comment is literally "Get a list of all AAB version codes"); my earlier read that it was edit-scoped was wrong, and switching to /tracks regressed to "active releases only", which doesn't surface superseded versionCodes Play still treats as used.
- Request the androidpublisher OAuth scope explicitly on the gcloud token. The default cloud-platform scope was the most likely cause of the previous run returning empty bundles.
- Log the raw bundles response so future surprises are diagnosable without re-instrumenting.
- Apply a 100 floor on the resolved versionCode to leapfrog the small historical versionCodes (1-9) we know got uploaded by the original Android-only workflow before its rename — even if the bundles list comes back empty for any reason, we won't collide with those.

iOS Archive:

- Drop the second build-setting override `CODE_SIGN_IDENTITY[sdk=iphoneos*]=Apple Distribution`. xcodebuild's CLI build-setting syntax doesn't support the [sdk=...] conditional modifier; bash split it into a malformed key/value and the resulting literal "iphoneos*]=Apple Distribution" propagated into Pods' CODE_SIGN_IDENTITY, breaking the entire dependency build. The plain unconditional CODE_SIGN_IDENTITY="Apple Distribution" wins against the project's conditional setting on its own because command-line overrides take highest precedence in xcodebuild's resolution order.

Slack notifications:

- All four notify-slack-publish callsites gated to `if: false` while we iterate. Each failed run was generating a noisy alert. Re-enable by flipping back to `if: success()` / `if: failure()` once both jobs are reliably green.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): pin iOS archive to iOS device, drop manual signing override and Android floor

iOS Archive:

- Add `-destination "generic/platform=iOS"` so xcodebuild archives for iOS device.
Without it, the runner's Apple Silicon Mac advertises itself as a valid destination for the scheme (Designed-for-iPad is enabled), and xcodebuild defaults to "My Mac" — which uses Development signing for archive and breaks our Distribution flow.
- Drop the `CODE_SIGN_IDENTITY="Apple Distribution"` CLI override. With Automatic signing enabled in the project, xcodebuild already picks Apple Distribution for the `archive` action when the destination is iOS. Combining Automatic with a manual identity override is what xcodebuild flagged as "conflicting provisioning settings". Klaviyo's Distribution Managed cert is team-wide and bundle-ID-agnostic, so -allowProvisioningUpdates + the API key can sign without any further setup.

Android preflight:

- Drop the 100 floor on resolved versionCode. Now that the androidpublisher OAuth scope fix is confirmed working (last run succeeded with highest=8 returned correctly), the floor is just a one-time hack that would actively cause collisions on subsequent runs if the bundles list ever returned empty after we'd uploaded 100+. Trust the API; on the rare empty-response case, the upload fails loudly with a useful error.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): set RCT_NEW_ARCH_ENABLED=1 for iOS pod install and archive

Without this env var the Podfile's new-architecture branch doesn't run, so codegen artifacts like RCTAppDependencyProvider.h are never produced, even though Pod targets that depend on them (e.g. ReactAppDependencyProvider) still get wired into the Pods project. The mismatch surfaces at archive time as:

    lstat(.../build/generated/ios/RCTAppDependencyProvider.h): No such file or directory

Setting RCT_NEW_ARCH_ENABLED=1 at both the pod install and xcodebuild steps mirrors what the existing iOS compile CI workflow does for its new-arch matrix slice and matches RN 0.81's default of new-arch on.
Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): preserve codegen output, gate Android job while iterating on iOS

iOS Archive:

- Drop `rm -rf ./build` from the Archive step's pre-clean. Pod install produces RN's new-arch codegen artifacts into ./build/generated/ios/ (RCTAppDependencyProvider.h, RCTModuleProviders.{h,mm}, etc.) and xcodebuild's Pods targets read them from there during archive. The cleanup line was inherited from the previous retry-loop shape and was deleting codegen output right before xcodebuild tried to use it. Only clear the .xcarchive now, which is the one thing the archive step itself produces.

Android job:

- Gate with `if: false` while we iterate on the iOS pipeline. Android was confirmed end-to-end green in run 25031149357. Each branch push re-triggers both jobs, and the Android slot was burning runner minutes for no signal. Re-enable by removing the gate once iOS is reliably green.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): set signingCertificate=Apple Distribution in ExportOptions.plist

Previous exportArchive failed with `No signing certificate "iOS Distribution" found` even though the team has Apple Distribution and Distribution Managed certs available. With method=app-store-connect + signingStyle=automatic and no signingCertificate hint, Xcode's exportArchive can fall back to the legacy "iOS Distribution" identity family, ignoring the modern unified "Apple Distribution" / Distribution Managed certs that -allowProvisioningUpdates would actually serve.

Adding signingCertificate=Apple Distribution to ExportOptions.plist (via PlistBuddy in the existing stamp step, alongside teamID) tells exportArchive to look for the modern unified cert family on Apple's cloud-signing side.
Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci: upgrade actions to Node 24 and Xcode 26 SDK; finalize publish workflow

Action runtime upgrade (resolves the Node.js 20 deprecation warning Apple runners surface — Node 20 is removed from runner images on 2026-09-16):

- actions/checkout @v4 -> @v5
- actions/setup-node @v4 -> @v5
- actions/cache @v4 -> @v5
- actions/setup-java @v4 -> @v5

iOS 26 SDK upgrade (resolves App Store Connect's ITMS-90725 warning that all uploads after 2026-04-28 must be built with the iOS 26 SDK):

- runs-on: macos-15 -> macos-26 on both ios-build and publish-example's TestFlight job. macos-26 is GA as of 2026-02-26 and ships with Xcode 26.2 / iOS 26 SDK as the default Xcode at /Applications/Xcode.app.
- Drop the explicit DEVELOPER_DIR=Xcode_16.4 override so the runner's default Xcode (26.2) is used. Resilient to future point bumps.

ios-build CI simplification:

- Remove the cocoapods Pods/ cache layer. `react-native build-ios` unconditionally runs `bundle install + bundle exec pod install` as part of its build pipeline regardless of any pre-existing Pods/, so the cache restore was providing zero speed benefit while creating a cache-poisoning false positive that broke first-run builds.
- Remove the `if: github.run_attempt == 1` guard on the turbo cache status check; that gate existed only to bypass the cocoapods cache hack on retries and is no longer needed.
- Yarn cache (in the Setup composite) and turborepo cache stay — both provide real value (yarn install speedup; turbo HIT skips the entire iOS build when nothing relevant changed).

publish-example workflow finalization:

- Re-enable Slack notifications on both jobs (4 callsites flipped back from `if: false` to `if: success()` / `if: failure()`).
- Re-enable Android job (drop the `if: false` block). Android was confirmed working in run 25031149357 and was only gated to save runner minutes during iOS iteration.
- Push trigger on this branch is intentionally retained for one more verification round; the inline comment already calls out that it must be removed before merging to master.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci: align Ruby/CocoaPods, add lockfile drift guard, finalize publish triggers

publish-example.yml triggers:

- Drop the temporary push-on-this-branch trigger that was used to iterate on the workflow during the PR. Final triggers: workflow_dispatch (manual publish), release: published (sync example with each SDK release), and push: branches: [master] (re-publish on merge).

Ruby + CocoaPods alignment across local dev and CI:

- New file example/.ruby-version pins Ruby 3.2.1 (rbenv-style); both ios-build.yml and publish-example.yml's setup-ruby steps now read it instead of carrying their own ruby-version inputs.
- Bump CocoaPods to 1.16.2 in example/Gemfile (was 1.15.2). RubyGems 3.0.3.1 + Ruby 2.6.10 was a known flake source — Ruby 3.2.1 ships with RubyGems 3.4.x and resolves the chronic warning.
- bundler-cache: true on setup-ruby gives us `bundle install --frozen` for free, so any Gemfile / Gemfile.lock drift fails the Ruby setup step before we get to pod install.
- Regenerated example/Gemfile.lock and example/ios/Podfile.lock with Ruby 3.2.1 + CocoaPods 1.16.2.

Lockfile drift guard (catches PRs that touch Podfile without re-running pod install, or that ran pod install with the wrong tool versions):

- New `Verify Podfile.lock unchanged` step in ios-build.yml runs `git diff --exit-code example/ios/Podfile.lock` after the build. Pod install already runs as part of `react-native build-ios` via the example app's automaticPodsInstallation: true config, so this just checks the side effect rather than running pod install a second time.
- turbo.json gains example/.ruby-version, example/Gemfile, and example/Gemfile.lock as build:ios inputs so a Gemfile-only change forces a turbo MISS → build runs → drift check runs.
Cocoapods cleanup in ios-build.yml:

- Drop the explicit "Install cocoapods" workflow step entirely. RN CLI's auto pod install (triggered by automaticPodsInstallation: true) is the same code path real users hit; keeping a separate workflow step was paying for pod install twice on every miss.
- Drop the stub GoogleService-Info.plist generation. Inject the real plist from GOOGLE_SERVICE_INFO_PLIST_BASE64 (same secret the publish workflow uses) so the smoke-test build is identical to publish.

Re-run lever (now actually works):

- Restore `if: github.run_attempt == 1` on the turbo cache probe step so that on a manual rerun, turbo_cache_hit stays unset.
- Pass `--force` to turbo on rerun (`github.run_attempt > 1`) so the build step actually bypasses turbo's build cache, not just the probe. Previously the lever skipped the probe but the build still hit the same cache, making the lever a no-op for rerun.

Xcode pinning:

- DEVELOPER_DIR set explicitly to /Applications/Xcode_26.2.app/... in both ios-build.yml and publish-example.yml's deploy-ios job. macos-26 default Xcode is 26.2 today, but pinning protects against silent Xcode-version drift on future runner image updates.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci: re-add Pods cache with a sharper key on both iOS workflows

Now that the duplicate pod install is gone (RN CLI's auto pod install during react-native build-ios is the single canonical install path on ios-build, and publish-example does its own bundle exec pod install), restoring a cached Pods/ folder is unambiguously a no-op at install time when the key hits — Manifest.lock matches Podfile.lock and pod install returns in ~2s instead of ~30-60s.

The previous Pods cache (removed earlier in this PR) was keyed only on yarn.lock + Podfile.lock, missed important inputs (Ruby version, CocoaPods gem version, our local SDK pod's source), and combined with a broken hit-detection gate caused first-run cache poisoning.
The new key captures everything that should invalidate Pods/:

- example/ios/Podfile.lock (resolved pod versions)
- example/Gemfile.lock (CocoaPods gem version)
- example/.ruby-version (Ruby interpreter version)
- *.podspec (root podspec for our SDK)
- ios/**/*.{swift,h,m,mm} (our local SDK pod's source)

Both ios-build.yml and publish-example.yml's deploy-ios job get the cache step. The keys share the `<os>-pods-` prefix so cache entries populated by either workflow are reusable by the other (the inputs.new-architecture matrix slice on ios-build keeps its arch in the namespace; publish always builds new-arch and uses `pods-publish-`).

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci(example): add label-trigger for ad-hoc publish + address review feedback

Label trigger for ad-hoc publishing (publish-example.yml):

- pull_request: types: [labeled] fires the workflow whenever any label is added to any PR. Both deploy-android and deploy-ios are gated on `github.event.label.name == 'deploy-example-app'`, so only that specific label triggers a publish.
- New `cleanup-label` job runs after both deploys (`needs:` + `if: always()` + the same label filter) and removes the label from the PR via github-script. 404s on already-removed labels are tolerated. Removing the label means re-applying it re-triggers a fresh publish, which is the desired UX for "I want to push another build."

Review feedback addressed:

- Cursor: `iOS build number resolution doesn't mirror Android's max approach`. The preflight no longer relies on `sort=-uploadedDate&limit=1` (Apple's `uploadedDate` is documented to sometimes return null, and out-of-chronological-order uploads can produce a stale "latest"). Now fetches `limit=200` and takes the numeric max, same shape as Android.
- Cursor: `Curl missing --fail silently ignores Play API errors`. Add `-f` to the three curl calls in the Android preflight. With `set -euo pipefail` already in place, an HTTP error (e.g.
403 from a misconfigured service account) now aborts the step at the API call rather than silently producing HIGHEST=0 → wasted multi-minute build → confusing version-code-conflict at upload.
- Cursor: `Ruby version file doesn't match Gemfile.lock recorded version`. Ran `bundle update --ruby` under Ruby 3.2.1 to refresh example/Gemfile.lock's RUBY VERSION line from 3.3.5p100 to 3.2.1p31, matching example/.ruby-version.

(The fourth unresolved thread — the placeholder team ID in ExportOptions.plist — is being answered as a reply on the PR thread, no code change required.)

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): export RCT_NEW_ARCH_ENABLED in build:ios shell

The step-level `env:` block setting RCT_NEW_ARCH_ENABLED wasn't propagating through to pod install. Symptom: in the new-arch matrix slice, the Podfile evaluation reported "Configuring the target with the Legacy Architecture" even though the workflow's step env showed RCT_NEW_ARCH_ENABLED=1, then pod install failed with

    [!] No podspec found for `ReactAppDependencyProvider` in `build/generated/ios`

because RN CLI's auto pod install didn't run new-architecture codegen. The chain that loses the env var somewhere: yarn → turbo → react-native build-ios → bundle exec pod install.

The fix matches what the previous (pre-Install-cocoapods-removal) version of the workflow did and was empirically reliable: explicitly `export` the var in the shell script body so every subprocess in the chain inherits it from the actual shell env, not just the GitHub-Actions-managed step env.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): restore explicit pod install on fresh runners (bootstrap fix)

Diagnosed by reading @react-native-community/cli source. Both old-arch and new-arch slices were failing pod install with: [!]
No podspec found for `ReactAppDependencyProvider` in `build/generated/ios`

Root cause is a bootstrapping issue inside RN CLI: cli-platform-apple/build/tools/getArchitecture.js reads Pods/Pods.xcodeproj/project.pbxproj and returns true only if the string `-DRCT_NEW_ARCH_ENABLED=1` is present. On a fresh runner Pods/ doesn't exist yet, so it returns false, and cli-config-apple/build/tools/installPods.js's runPodInstall hard-codes the subprocess env to RCT_NEW_ARCH_ENABLED='0' — overriding any value we exported in our shell. That mismatched value then causes the Podfile's pre_install codegen to skip generating ReactAppDependencyProvider.podspec, and pod install fails on the dangling reference.

The previous workflow worked despite this because it ran an explicit `yarn example setup` (= bundle exec pod install) before the build, with RCT_NEW_ARCH_ENABLED exported in bash. That populated Pods/ correctly, which made RN CLI's later auto-detection return the right arch. When I removed the explicit step earlier in this PR, I assumed RN CLI's auto pod install was equivalent — it isn't on a fresh runner.

Restoring the explicit `Install CocoaPods` step (gated on turbo_cache_hit != 1, so a turbo HIT still skips everything). The auto-install that happens during build:ios afterwards is fast (~5-10s) because Pods/ already matches Podfile.lock.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): drop Podfile.lock drift check — false positives on every run

The check assumed pod install is deterministic given the same Podfile + same tool versions. That's not true for our setup:

1. Klaviyo SDK pods are pinned by branch (`:branch => 'rel/5.3.0'`), not commit. Every `pod install` re-resolves the branch HEAD and produces fresh SPEC CHECKSUMS whenever the SDK branch advances.
2.
RN's Podfile evaluation calls `get_folly_config()` which returns different values depending on `new_arch_enabled`, producing different RCT-Folly podspec content (and thus checksum) for old-arch vs new-arch — a single committed lockfile can't satisfy both matrix slices simultaneously.

Result: the drift check fires basically every run with churn that isn't the developer's fault to fix.

What we still have for catching the original concern:

- `bundler-cache: true` on `setup-ruby` runs `bundle install --frozen` and fails fast on any Gemfile.lock drift → catches wrong CocoaPods gem version, wrong Ruby version.
- `example/.ruby-version` pins the Ruby interpreter.
- `example/Gemfile` pins CocoaPods to 1.16.2.
- `pod install` itself still has to succeed, which means the Podfile is internally consistent.

Lost: catching "PR changed Podfile but didn't re-run pod install" by byte-identity. That'll quietly resolve at CI pod-install time without the lockfile being committed. Acceptable trade for not chasing false-positive lockfile diffs every PR. Worth revisiting once we're on new-arch-only (RN 0.82+) and the Klaviyo pods are pinned by commit rather than branch.

Part of MAGE-464

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
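The versioning scheme the preflight commits above converge on — ask each store for every build number it already knows, take the numeric max, publish max + 1 — distills to a few lines. The response shapes below are loose sketches of the Play `bundles` list and an App Store Connect `/v1/builds` page, not full payloads:

```python
def next_build_number(existing: list[int]) -> int:
    """Pick the next strictly-greater build number; start at 1 when empty."""
    return max(existing, default=0) + 1

# Play: /edits/{editId}/bundles returns all AAB versionCodes for the app.
play_bundles = {"bundles": [{"versionCode": 7}, {"versionCode": 8}]}
codes = [b["versionCode"] for b in play_bundles["bundles"]]

# App Store Connect: /v1/builds?limit=200 — take the numeric max of
# attributes.version rather than trusting sort=-uploadedDate ordering.
asc_builds = {"data": [{"attributes": {"version": "41"}},
                       {"attributes": {"version": "39"}}]}
versions = [int(b["attributes"]["version"]) for b in asc_builds["data"]]

print(next_build_number(codes))     # next Android versionCode → 9
print(next_build_number(versions))  # next iOS CFBundleVersion → 42
```

Taking the max over the full list (rather than the "latest" element of a sorted page) is what makes the resolution robust to out-of-CI uploads and stale sort orders on both platforms.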

Description
Adds an end-to-end "publish example app" workflow that ships the React Native SDK example app to Google Play (internal track) and Apple TestFlight from CI. Bundled with the workflow itself is a round of CI/toolchain modernization that fell out of getting iOS distribution working: Ruby/CocoaPods alignment, Xcode 26 / iOS 26 SDK, Node 24 actions, and a clearer caching strategy on the iOS smoke-test workflow.
Due Diligence
Release/Versioning Considerations
Patch — Contains internal changes or backwards-compatible bug fixes.

Changelog / Code Overview
New publish workflow — `.github/workflows/publish-example.yml`

Single workflow with two parallel jobs sharing triggers and shared composite actions:
- `deploy-android` — Play Store internal track. Preflights the next versionCode via the Play Edits API (no more `github.run_number` collisions across workflow renames or out-of-CI uploads), builds and signs the AAB with `-Pandroid.injected.signing.*` so the release lands on the internal track as a single-pass v2/v3-signed bundle.
- `deploy-ios` — Apple TestFlight. Modern App Store Connect API Key flow only — no .p12 cert, no provisioning profile UUIDs, no ephemeral keychain. Uses `xcodebuild -allowProvisioningUpdates` for cloud-managed signing against Klaviyo's Distribution Managed cert, JWT-preflights `/v1/builds` to find the next CFBundleVersion, and uploads via `xcrun altool --apiKey`.

Triggers
- `workflow_dispatch` — manual publish (the typical case).
- `release: published` — keep the example in sync with each SDK release.
- `push: branches: [master]` — re-publish on merge.
- `pull_request: types: [labeled]` — adding the `deploy-example-app` label to a PR builds and publishes that PR's HEAD. A `cleanup-label` job removes the label after the run completes (success or failure) so re-applying the label re-triggers a fresh publish. Useful for ad-hoc QA builds without merging.

Shared composite actions
- `.github/actions/example-publish-prep` — verifies `KLAVIYO_EXAMPLE_API_KEY` is set, writes `example/.env` for `react-native-dotenv`. Used by both jobs.
- `.github/actions/notify-slack-publish` — single Slack notify action with `result` and `platform` inputs, replaces four duplicated payload blocks.

Version-bump script — `bump-version.sh`

Now syncs the example app's user-facing version (`example/android/app/build.gradle` versionName and `example/ios/KlaviyoReactNativeSdkExample.xcodeproj` MARKETING_VERSION) with the SDK version. CI-injected build numbers (versionCode/CFBundleVersion) remain per-run via Play / App Store Connect API preflights and aren't touched by the script.

Toolchain modernization
- `example/.ruby-version` pins 3.2.1 (rbenv-style, both local and CI honor it). Bump CocoaPods to 1.16.2 in `example/Gemfile`.
- `setup-ruby` with `bundler-cache: true` runs `bundle install --frozen` so any Gemfile/Gemfile.lock drift fails the Ruby setup step before pod install.
- Both iOS workflows (`ios-build.yml` smoke test + `publish-example.yml` deploy-ios) now run on `macos-26` runners with `DEVELOPER_DIR=/Applications/Xcode_26.2.app/...`. Resolves App Store Connect's ITMS-90725 warning ("must be built with iOS 26 SDK starting 2026-04-28").
- `actions/*@v4` references bumped to `@v5` (checkout, setup-node, cache, setup-java). Resolves the Node 20 deprecation warning surfaced on every macOS runner.

iOS smoke-test workflow —
ios-build.ymlInstall CocoaPodsstep before the build, withRCT_NEW_ARCH_ENABLEDexported in the shell. This is load-bearing on a fresh runner because@react-native-community/cli'sautomaticPodsInstallationpath detects arch by readingPods/Pods.xcodeproj/project.pbxprojfor a-DRCT_NEW_ARCH_ENABLED=1flag — but that file doesn't exist until after a successful pod install, so on a fresh checkout the CLI mis-detects legacy arch and hard-codes the env var to'0'in its execa subprocess (overriding any shell export). Running pod install ourselves first populatesPods/correctly so the build's auto-install becomes a fast no-op (~5–10s, Manifest.lock matches Podfile.lock).actions/cache@v5forexample/ios/Pods, keyed onPodfile.lock + Gemfile.lock + .ruby-version + *.podspec + ios/**/*.{swift,h,m,mm}. On a hit, pod install completes in ~2s (existing Pods/ matches lockfile) instead of ~30–60s (fresh CDN download).GoogleService-Info.plistfrom theGOOGLE_SERVICE_INFO_PLIST_BASE64secret, replacing the previous stub. Smoke-test build now matches the publish path.if: github.run_attempt == 1on the turbo probe step, plus--forcepassed to turbo on the build step whengithub.run_attempt > 1. Previous version of this lever skipped the probe but the build still hit the cache, making it a no-op for rerun.Apple Team ID handling
### Apple Team ID handling

Klaviyo's Apple Team ID was previously hardcoded in workflow YAML. It is now sourced from the new `APPLE_TEAM_ID` secret and stamped into `ExportOptions.plist` at build time via PlistBuddy. The checked-in plist uses an `APPLE_TEAM_ID_PLACEHOLDER` value.
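CI performs the stamping with PlistBuddy on macOS; this portable `sed` equivalent illustrates the same substitution against the checked-in placeholder. Demo team ID and a trimmed two-line plist fragment; the real workflow step differs.

```shell
# Equivalent, portable illustration of the build-time stamping
# (CI itself uses /usr/libexec/PlistBuddy on the macOS runner).
APPLE_TEAM_ID="ABCDE12345"   # stand-in for the GitHub secret
printf '<key>teamID</key>\n<string>APPLE_TEAM_ID_PLACEHOLDER</string>\n' > ExportOptions.plist
sed -i.bak "s/APPLE_TEAM_ID_PLACEHOLDER/$APPLE_TEAM_ID/" ExportOptions.plist
cat ExportOptions.plist
```

Keeping the placeholder in the checked-in plist means no real team ID ever lands in the repo.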
### Required Secrets

Wire these up in Settings → Secrets and variables → Actions before either job will succeed.
#### Shared
- `KLAVIYO_EXAMPLE_API_KEY` — written to `example/.env` at build time. The workflow fails with a clear message if unset.
- `SLACK_WEBHOOK_URL` — used by the `notify-slack-publish` action.
- `APPLE_TEAM_ID` — stamped into `ExportOptions.plist` and passed as `DEVELOPMENT_TEAM`.
#### Android

- `GOOGLE_SERVICES_JSON` — plain-text `google-services.json` for Firebase.
- `SIGNING_KEY`, `ALIAS`, `KEY_STORE_PASSWORD`, `KEY_PASSWORD` — release keystore, key alias, and passwords used to sign the AAB.
- `SERVICE_ACCOUNT_JSON` — Google Play service account credentials for the upload.
#### iOS

- `APP_STORE_CONNECT_API_KEY_ID`
- `APP_STORE_CONNECT_API_KEY_ISSUER_ID`
- `APP_STORE_CONNECT_API_KEY_BASE64` — base64-encoded `.p8` private key file. The key must be generated with the Admin role so it has "Access to Cloud Managed Distribution Certificate"; App Manager alone fails `exportArchive` with "Cloud signing permission error".
- `GOOGLE_SERVICE_INFO_PLIST_BASE64` — base64-encoded `GoogleService-Info.plist` for the iOS Firebase app for `com.klaviyoreactnativesdkexample`.
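Decoding a base64 file secret back to disk at build time follows the usual pattern sketched below. The demo value stands in for the `GOOGLE_SERVICE_INFO_PLIST_BASE64` secret, the plist is trimmed to one line, and the real destination is `example/ios/GoogleService-Info.plist`.

```shell
# Sketch: turn a base64 GitHub secret back into a file in the workspace.
GOOGLE_SERVICE_INFO_PLIST_BASE64=$(printf '<plist version="1.0"/>' | base64)
printf '%s\n' "$GOOGLE_SERVICE_INFO_PLIST_BASE64" | base64 --decode > GoogleService-Info.plist
cat GoogleService-Info.plist
```

The same pattern applies to the `.p8` key secret on the iOS job.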
### Test Plan

- Trigger via `workflow_dispatch` and confirm the AAB lands on the Play Store internal track in Draft status.
- Confirm the iOS smoke test on `macos-26` + Xcode 26.2 + Ruby 3.2.1 + CocoaPods 1.16.2 passes for both new- and old-architecture matrix slices.
- Verify `bump-version.sh -v X.Y.Z` updates `versionName` (Android) and `MARKETING_VERSION` (iOS) in addition to the SDK version files.
- Add the `deploy-example-app` label to a PR (post-merge) and confirm the workflow fires, both deploys run, and the label is auto-removed when the run finishes.
### Related Issues/Tickets

Part of MAGE-464
🤖 Generated with Claude Code