Open-source DAST + Nuclei scan worker for Google Cloud Run Jobs. Writes findings directly to Supabase.
A standalone, container-based scan worker that runs on Google Cloud Run Jobs. Designed to be invoked from a serverless app (Vercel, Lambda, Cloud Functions) when a 5-minute function timeout is not enough — typical scans run 5–20 minutes.
This is the engine that powers the public Motrix Pandora scanner. It is published
under MIT so you can run, fork, audit, or self-host it without the SaaS.
- Reads a `SCAN_JOB_ID` env var injected by Cloud Run Jobs.
- Loads the job row from `scan_jobs` in Supabase.
- Dispatches based on `job.type`:
  - `nuclei` → runs the Nuclei CLI with curated templates, streams JSONL findings.
  - `dast` → launches headless Chromium (Playwright), crawls same-origin, probes for reflected XSS, open redirects, SQLi timing anomalies, CSRF, clickjacking, cookie flags.
- Writes each finding to `scan_job_findings`.
- Updates `scan_jobs` with status, progress, duration, score, grade, risk level, and a structured `result` summary.
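The flow above can be sketched in TypeScript. Everything here is illustrative: `JobStore` and the runner registry stand in for the real implementation, which talks to Supabase via `@supabase/supabase-js` and shells out to Nuclei / Playwright.

```typescript
type JobType = "nuclei" | "dast" | "leakcheck";

interface ScanJob {
  id: string;
  type: JobType;
  target: string;
}

// Minimal persistence interface so the sketch stays self-contained;
// the real worker uses the Supabase client directly.
interface JobStore {
  loadJob(id: string): Promise<ScanJob | null>;
  updateJob(id: string, patch: Record<string, unknown>): Promise<void>;
}

// Hypothetical runner registry; each runner resolves to a findings count.
const runners: Record<JobType, (job: ScanJob) => Promise<number>> = {
  nuclei: async () => 0,
  dast: async () => 0,
  leakcheck: async () => 0,
};

async function runWorker(store: JobStore, jobId: string): Promise<void> {
  const job = await store.loadJob(jobId);
  if (!job) throw new Error(`scan job ${jobId} not found`);

  // Mark the row running before doing any work.
  await store.updateJob(jobId, {
    status: "running",
    started_at: new Date().toISOString(),
  });

  const started = Date.now();
  try {
    const findings = await runners[job.type](job);
    await store.updateJob(jobId, {
      status: "completed",
      findings_count: findings,
      duration_ms: Date.now() - started,
      completed_at: new Date().toISOString(),
    });
  } catch (err) {
    await store.updateJob(jobId, { status: "failed", error: String(err) });
  }
}

// Cloud Run Jobs injects SCAN_JOB_ID into the container environment, so the
// real entrypoint would call: runWorker(realStore, process.env.SCAN_JOB_ID!)
```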
| Constraint | Vercel Functions (Hobby/Pro) | Cloud Run Jobs (this worker) |
|---|---|---|
| Max execution time | 60s / 300s | 30 min (configurable to 24h) |
| Memory | 1024 MB | up to 32 GiB |
| CPU | shared | dedicated up to 8 vCPU |
| Native binaries (Nuclei, Chromium) | painful via layers | first-class support |
| Cost for sporadic 5-min jobs | n/a (impossible) | ≈ $0.005 per scan |
```
┌──────────────┐     ┌──────────────┐      ┌─────────────┐
│ Vercel app   │────▶│ Supabase     │      │ Cloud Run   │
│ /scan-jobs   │     │ scan_jobs    │      │ Jobs        │
│ POST         │     │ (queued)     │      │ motrix-     │
└──────┬───────┘     └──────────────┘      │ worker      │
       │                                   └──────▲──────┘
       │  run.googleapis.com                      │
       │  /v2/.../jobs:run                        │
       │  JWT RS256 (SA key)                      │
       └──────────────────────────────────────────┘
                                                  │
                                                  ▼
                                         SCAN_JOB_ID=<uuid>
                                         worker reads job
                                         runs Nuclei/DAST
                                         writes findings
                                         updates scan_jobs
```
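The trigger path on the left can be sketched as a plain HTTPS call to the Cloud Run v2 `jobs:run` endpoint. This is a sketch, not the project's actual client code: `triggerScan` and its parameters are illustrative, and `accessToken` would be minted from the service-account key (e.g. with `google-auth-library`'s JWT client, `cloud-platform` scope).

```typescript
// Builds the Cloud Run Jobs v2 run endpoint for a given job.
function jobRunUrl(project: string, region: string, job: string): string {
  return `https://run.googleapis.com/v2/projects/${project}/locations/${region}/jobs/${job}:run`;
}

// Per-execution env override carrying the scan job id into the container.
function runOverrides(scanJobId: string) {
  return {
    overrides: {
      containerOverrides: [
        { env: [{ name: "SCAN_JOB_ID", value: scanJobId }] },
      ],
    },
  };
}

// Fire one execution of the worker job (Node 18+ global fetch assumed).
async function triggerScan(
  accessToken: string,
  project: string,
  region: string,
  scanJobId: string
): Promise<void> {
  const res = await fetch(jobRunUrl(project, region, "motrix-worker"), {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(runOverrides(scanJobId)),
  });
  if (!res.ok) throw new Error(`jobs:run failed: ${res.status}`);
}
```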
```
gcloud auth login
gcloud config set project <YOUR_PROJECT_ID>
gcloud services enable \
  run.googleapis.com \
  artifactregistry.googleapis.com \
  cloudbuild.googleapis.com
```

```
gcloud artifacts repositories create motrix-worker \
  --repository-format=docker \
  --location=us-central1 \
  --description="Motrix scan worker"
```

From inside this directory:
```
gcloud builds submit \
  --tag us-central1-docker.pkg.dev/<PROJECT>/motrix-worker/worker:latest .
```

```
gcloud run jobs create motrix-worker \
  --image us-central1-docker.pkg.dev/<PROJECT>/motrix-worker/worker:latest \
  --region us-central1 \
  --max-retries 1 \
  --task-timeout 1800 \
  --memory 2Gi \
  --cpu 2 \
  --set-env-vars SUPABASE_URL=<URL>,SUPABASE_SERVICE_ROLE_KEY=<KEY>
```

One job definition handles all scan types; the runner switches on `job.type` read from Supabase.
```
gcloud run jobs execute motrix-worker \
  --region us-central1 \
  --update-env-vars SCAN_JOB_ID=<uuid-from-scan_jobs-row>
```

Build and run the image without GCP:
```
docker build -t motrix-worker-dev .
docker run --rm \
  -e SUPABASE_URL=https://xxx.supabase.co \
  -e SUPABASE_SERVICE_ROLE_KEY=eyJ... \
  -e SCAN_JOB_ID=<uuid> \
  motrix-worker-dev
```

Fast TS iteration without Docker (Nuclei and Chromium not available):
```
npm install
npm run build
SCAN_JOB_ID=... SUPABASE_URL=... SUPABASE_SERVICE_ROLE_KEY=... node dist/index.js
```

Cloud Run Jobs pricing: per vCPU-second and GiB-second while running.
- Free tier (monthly): 180k vCPU-s + 360k GiB-s + 2M requests.
- Typical scan: 5 min @ 2 vCPU + 2 GiB ≈ 600 vCPU-s + 600 GiB-s per run.
- 500 scans/month @ 2 vCPU → 300k vCPU-s → above free, ≈ $3-6/month.
- 500 scans/month @ 1 vCPU (slower but fine for Nuclei) → within free tier.
For typical volumes this is effectively free; even heavy use stays well under $10/month.
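The arithmetic behind those estimates, as a small helper. Function names are illustrative, and per-second pricing rates are deliberately left as an exercise for the reader since they vary by region and tier:

```typescript
interface ResourceSeconds {
  vcpuSeconds: number;
  gibSeconds: number;
}

// Resource-seconds consumed by one scan of the given duration and shape.
function scanUsage(minutes: number, vcpu: number, gib: number): ResourceSeconds {
  const seconds = minutes * 60;
  return { vcpuSeconds: seconds * vcpu, gibSeconds: seconds * gib };
}

// Aggregate monthly usage, to compare against the free tier
// (180k vCPU-s and 360k GiB-s per month).
function monthlyUsage(scansPerMonth: number, perScan: ResourceSeconds): ResourceSeconds {
  return {
    vcpuSeconds: scansPerMonth * perScan.vcpuSeconds,
    gibSeconds: scansPerMonth * perScan.gibSeconds,
  };
}
```

With the numbers above, a 5-minute scan at 2 vCPU / 2 GiB is 600 vCPU-s + 600 GiB-s, and 500 such scans are 300k vCPU-s, which is past the 180k free allotment; dropping to 1 vCPU halves that and lands back inside it.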
You provide these tables. Suggested schema:

```sql
create table scan_jobs (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null,
  type text not null check (type in ('nuclei','dast','leakcheck')),
  target text not null,
  status text not null check (status in ('queued','running','completed','failed','timeout')),
  config jsonb default '{}'::jsonb,
  progress int default 0,
  started_at timestamptz,
  completed_at timestamptz,
  duration_ms int,
  findings_count int default 0,
  score int,
  grade text,
  risk_level text,
  result jsonb,
  error text,
  created_at timestamptz default now(),
  gcp_job_execution text
);

create table scan_job_findings (
  id uuid primary key default gen_random_uuid(),
  scan_job_id uuid not null references scan_jobs(id) on delete cascade,
  severity text,
  title text,
  description text,
  evidence jsonb,
  template_id text,
  matched_at text,
  created_at timestamptz default now()
);
```

The worker uses a Supabase service-role key (bypasses RLS). Never ship that key to a browser.
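For illustration, here is one way a line of Nuclei's JSONL output could map onto a `scan_job_findings` row. The event shape is a subset of Nuclei's JSON export fields, and `toFindingRow` is a hypothetical helper, not the worker's actual code; the result is the object you would pass to `supabase.from("scan_job_findings").insert(row)`.

```typescript
// Subset of one Nuclei JSONL event (fields used by this mapping).
interface NucleiEvent {
  "template-id": string;
  "matched-at": string;
  info: { name: string; severity: string; description?: string };
}

// Maps a Nuclei event onto the scan_job_findings columns above.
function toFindingRow(scanJobId: string, ev: NucleiEvent) {
  return {
    scan_job_id: scanJobId,
    severity: ev.info.severity,
    title: ev.info.name,
    description: ev.info.description ?? null,
    evidence: ev, // raw event kept as jsonb for auditability
    template_id: ev["template-id"],
    matched_at: ev["matched-at"],
  };
}
```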
```
score      = max(0, 100 - (CRITICAL×15 + HIGH×8 + MEDIUM×3 + LOW×1))
grade      = A (≥90) | B (≥80) | C (≥70) | D (≥60) | F (<60)
risk_level = critical > high > medium > low > minimal
```
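The scoring rules translate directly into code. A minimal sketch, assuming severity counts have already been tallied from the findings (function names are illustrative):

```typescript
type Severity = "critical" | "high" | "medium" | "low";

// Penalty weights from the formula above.
const WEIGHTS: Record<Severity, number> = {
  critical: 15,
  high: 8,
  medium: 3,
  low: 1,
};

// score = max(0, 100 - weighted sum of finding counts)
function score(counts: Partial<Record<Severity, number>>): number {
  const penalty = (Object.keys(WEIGHTS) as Severity[]).reduce(
    (sum, sev) => sum + WEIGHTS[sev] * (counts[sev] ?? 0),
    0
  );
  return Math.max(0, 100 - penalty);
}

// Letter grade from the score thresholds above.
function grade(s: number): string {
  if (s >= 90) return "A";
  if (s >= 80) return "B";
  if (s >= 70) return "C";
  if (s >= 60) return "D";
  return "F";
}
```

For example, one critical and two medium findings give 100 − 15 − 6 = 79, a C.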
- DAST crawler stays same-origin, caps at 30 URLs, depth 3, 5 req/s.
- SQLi probe uses `SLEEP(0)` (zero delay), never DoS-inducing payloads.
- XSS probe uses a unique random marker per run; never executes injected JS.
- Max runtime: 20 min Nuclei, 15 min DAST; Cloud Run task timeout is 30 min.
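The same-origin restriction amounts to an origin comparison after resolving relative links; a minimal sketch (`sameOrigin` is an illustrative name, not the worker's actual function):

```typescript
// True iff `candidate` (possibly a relative link) resolves to the same
// scheme + host + port as the crawl's base URL. Malformed URLs are rejected.
function sameOrigin(base: string, candidate: string): boolean {
  try {
    const a = new URL(base);
    const b = new URL(candidate, base); // resolves relative links against base
    return a.origin === b.origin;
  } catch {
    return false;
  }
}
```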
Run scans only against targets you own or are authorized to test. The authors are not responsible for misuse.
PRs welcome. For non-trivial changes please open an issue first.
Security disclosures: see SECURITY.md.
MIT — see LICENSE.
Built by Fordrax Solutions.