🌐 Languages: English · Русский
Self-hosted transparent proxy manager for Raspberry Pi 4/5 (and any other Linux box). Drops in next to your router, intercepts LAN traffic via nftables TPROXY, and routes it through xray-core based on your rules — domain, GeoIP, GeoSite, MAC, port, protocol — with a web UI.
📸 Screenshots: jump to gallery.
- What it is
- Screenshots
- Architecture
- Features
- Supported protocols
- Quick start
- Server-side proxy install
- Configuration
- Troubleshooting
- Development
- Tech stack
- Acknowledgements
- Contributing
- License
PiTun turns a small Linux box into a transparent proxy gateway for your home network. Devices that use the box as their default gateway have their outbound traffic intercepted at the kernel level, routed through one of several supported VPN protocols, and either tunnelled, sent direct, or dropped — all according to rules you set up in the web UI.
It was built for and primarily tested on the Raspberry Pi 4 / 5
(64-bit Raspberry Pi OS), but the project ships linux/amd64 images
too, so any Intel/AMD mini-PC, NUC, old laptop or x86_64 server that
can run Docker works just as well. Multi-arch images for both
linux/arm64 and linux/amd64 are produced by the
release workflow.
It's designed for the case where you want a single shared exit policy for the whole house (TVs, phones, IoT) without installing a client app on every device, and without depending on cloud-managed routers.
Three proxy endpoints exposed simultaneously, all sharing the same routing rule set:
| Endpoint | Default port | Use case |
|---|---|---|
| TPROXY | 7893 | Transparent gateway — devices set the box as gateway |
| SOCKS5 | 1080 | Explicit proxy for browsers and apps |
| HTTP | 8080 | For apps without SOCKS5 support |
```
              ┌──────────────────────────────────────────────┐
Devices ────► │  PiTun host (RPi / mini-PC)                  │
  (LAN)       │                                              │
              │  nftables TPROXY :7893                       │
              │        │                                     │
              │        ▼                                     │
              │  xray-core ─┬─ rules (geoip / geosite /      │
              │             │  domain / IP / MAC / port)     │
              │             │                                │
              │             ├─► proxy (VPN node / chain)     │
              │             ├─► direct (home router)         │
              │             └─► block                        │
              │                                              │
              │  + balancer groups (leastPing / random)      │
              │  + node circles (auto-rotate active node)    │
              │  + per-domain DNS (plain / DoH / DoT)        │
              └──────────────────────────────────────────────┘
```
Web UI talks to a FastAPI backend that owns the xray-core process, the nftables ruleset, and a SQLite database with all configuration. Frontend is a single-page React app served by nginx.
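For orientation, the interception the backend manages boils down to an nftables TPROXY rule plus a policy route. A rough sketch of the mechanism (illustrative only — PiTun generates and owns the real ruleset in its `inet pitun` table, and the interface/port come from your `.env`):

```bash
# Policy routing: deliver packets marked 0x1 locally via loopback
sudo ip rule add fwmark 0x1 lookup 100
sudo ip route add local 0.0.0.0/0 dev lo table 100

# Redirect forwarded LAN TCP traffic to the local TPROXY listener on :7893
sudo nft add table inet pitun
sudo nft add chain inet pitun prerouting '{ type filter hook prerouting priority mangle; }'
sudo nft add rule inet pitun prerouting iif eth0 meta l4proto tcp \
  tproxy ip to 127.0.0.1:7893 meta mark set 0x1 accept
```

LAN devices only need their default gateway pointed at the PiTun host; no client software is involved.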
Core
- Transparent proxy via TPROXY + nftables, no per-device client
- SOCKS5 / HTTP proxies on the LAN
- Optional TUN mode and combined TPROXY+TUN
- QUIC (UDP/443) blocking — forces TCP fallback for protocols TPROXY can intercept
- Tunnel chaining — VLESS-inside-WireGuard, etc.
- Proxy Chains (multi-panel, two-hop VLESS+Reality across two x-ui panels with independent channels per chain; managed clients + per-channel delete + live healthcheck)
- Kill switch — drop all forwarded traffic if xray crashes
Routing
- Rule types: `mac`, `src_ip`, `dst_ip`, `domain`, `port`, `protocol`, `geoip`, `geosite`
- Actions: `proxy`, `direct`, `block`, `node:<id>`, `balancer:<id>` (see the config sketch after this list)
- Drag-and-drop priority, bulk import, V2RayN/Shadowrocket JSON round-trip
- Per-MAC overrides ("this device always direct, that one always through node #5")
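Under the hood each rule becomes an entry in the routing block of the xray config PiTun generates. A minimal sketch of such a fragment, assuming generic outbound tags (`proxy` / `direct` / `block` here are illustrative, not PiTun's exact tag names):

```bash
cat <<'EOF' > /tmp/routing-fragment.json   # illustrative only — the backend writes the real config
{
  "routing": {
    "domainStrategy": "IPIfNonMatch",
    "rules": [
      { "domain": ["geosite:category-ads-all"], "outboundTag": "block" },
      { "ip": ["geoip:private"], "outboundTag": "direct" },
      { "port": "22,123", "outboundTag": "direct" },
      { "domain": ["full:example.com", "domain:example.org"], "outboundTag": "proxy" }
    ]
  }
}
EOF
```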
Health & resilience
- Background liveness probe with two-tier auto-failover: if the failed node belongs to an enabled NodeCircle, the failover handler delegates recovery to the circle (which skips dead siblings via pre-ping + retry); otherwise it walks a configurable fallback list
- Speed test per node via short-lived isolated xray instance
- Naive sidecar supervisor — auto-restarts crashed Naive containers with a sliding-window rate limiter
- Recent Events feed on Dashboard surfaces failovers, sidecar restarts, geo updates, circle rotations
Balancing & rotation
- Balancer groups (xray's `leastPing` or `random` strategies — see the sketch after this list)
- Node Circles — automatically rotate the active node on a schedule, seamlessly via xray's gRPC API (no dropped connections); each candidate is TCP-pinged with a single retry before switching, so dead siblings are skipped without a connection blip
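In xray terms a balancer group is a `balancers` entry that rules reference via `balancerTag`. A minimal sketch with the `random` strategy (selector prefix and tags are made up; `leastPing` additionally relies on xray's `observatory` probes):

```bash
cat <<'EOF' > /tmp/balancer-fragment.json   # illustrative only
{
  "routing": {
    "balancers": [
      { "tag": "auto", "selector": ["node-"], "strategy": { "type": "random" } }
    ],
    "rules": [
      { "network": "tcp,udp", "balancerTag": "auto" }
    ]
  }
}
EOF
```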
Subscriptions
- Periodic refresh from VLESS / VMess / Trojan / SS / Hysteria2 / Clash YAML / xray JSON subscription URLs
- Per-subscription User-Agent (v2ray, clash, sing-box, happ, …), optional regex filter, configurable interval
Devices & DNS
- LAN discovery via `arp-scan`, OUI vendor lookup
- Per-device routing policy (default / always-include / always-bypass)
- Per-domain DNS rules (plain, DoH, DoT)
- FakeDNS pool for sniffing-friendly geoip resolution
- DNS query log with stats
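The per-domain DNS and FakeDNS features map onto xray's `dns` block. Roughly, with placeholder servers and domains (not PiTun's exact output):

```bash
cat <<'EOF' > /tmp/dns-fragment.json   # illustrative only
{
  "dns": {
    "servers": [
      { "address": "https://1.1.1.1/dns-query", "domains": ["geosite:netflix"] },
      "fakedns",
      "8.8.8.8"
    ]
  }
}
EOF
```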
Servers & deployments
- Inventory of remote VPS hosts (host, SSH credentials, tags) separate from runtime nodes — async-SSH probe, deployment records track which protocol/port is set up on which box, optional manual provisioning scripts (Caddy + naive, xray, SSH hardening) over the same SSH link
- One-click auto-deploy over SSH for NaiveProxy, WireGuard, x-ui (3x-ui / x-ui-pro) — live log streaming, status badges, cascade-clean on uninstall
- Dedicated X-ui Panels page — full inbound + client management (6 wired presets covering Reality / TLS / domain modes), live healthcheck (panel API, xray, nginx, UFW, TLS cert, disk, mem), cache↔panel sync for hand-added clients, random / custom fakesite rotation
Operations
- One-click GeoIP / GeoSite refresh — three switchable upstream profiles: Loyalsoldier (CN-focused community list), runetfreedom (Russian-internet curated list), v2fly (vanilla baseline)
- Full-fidelity JSON Export/Import for Nodes and Servers — versioned bundle envelope, append/replace modes, optional secret redaction (separate from URI/subscription import which is single-node only)
- Plain-text URI export (`.txt`, one `vless://…` per line) — share your node list with any v2rayN-compatible client; symmetric Import button auto-detects URI list vs JSON bundle
- Built-in diagnostics page (DNS reachability, gateway, xray status, resource usage)
- Streaming xray log viewer
- Multi-language UI (English / Russian)
| Protocol | Notes |
|---|---|
| VLESS | Plain, TLS, REALITY, XTLS Vision, with WebSocket / gRPC / xhttp / HTTP/2 / HTTPUpgrade / mKCP / QUIC transports |
| VMess | Same transport menu as VLESS |
| Trojan | TLS / WebSocket / gRPC / xhttp |
| Shadowsocks | All modern stream / AEAD ciphers |
| WireGuard | Native xray outbound; works inside chains |
| Hysteria2 | UDP, with optional obfuscation password |
| SOCKS5 | As outbound (e.g. for chaining) |
| NaiveProxy | Per-node sidecar container (Caddy + forwardproxy on the server side); xray connects via local SOCKS5 |
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 64-bit ARM (RPi 4) or x86_64, 4 cores | RPi 5 / any modern x86_64 mini-PC |
| RAM | 1 GB | 2 GB+ (helps with naive sidecars and large geo data refresh) |
| Disk | 4 GB free | 8 GB+ (docker images + DB growth + DNS query log) |
| Network | 1 LAN interface, static IP, wired preferred | 1× wired GbE for LAN |
| OS | Any modern 64-bit Linux with kernel ≥ 5.4 (TPROXY support) | Raspberry Pi OS 64-bit, Debian 12+, Ubuntu 22.04+ |
| Architectures | linux/arm64 (RPi 4/5) · linux/amd64 (Intel/AMD mini-PC, NUC, x86_64 server) | — |
- One of the supported architectures above
- Docker + Docker Compose v2
- Root access on the host (nftables + raw socket binding)
- A static LAN IP for the host
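A quick way to sanity-check these prerequisites before installing (convenience commands, not part of the installer):

```bash
uname -m                      # expect aarch64 or x86_64
uname -r                      # kernel 5.4 or newer for TPROXY
docker --version && docker compose version
modinfo nft_tproxy >/dev/null 2>&1 && echo "nft_tproxy module available"
ip -4 -br addr show           # confirm the LAN interface and its (static) IP
```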
The simplest install is a single command that downloads everything, prepares the host, and brings up the stack. It pulls pre-built images from the latest GitHub Release, so no Docker build runs locally — total time is ~5 minutes on a fresh RPi, and the install resumes cleanly if the connection drops mid-way (every download is retried and atomically renamed).
```bash
curl -fsSL https://raw.githubusercontent.com/DaveBugg/PiTun/master/install.sh | sudo bash
```

Heads up — passing flags to a piped script. The `--flag` arguments below need to reach our installer, not bash. There are three working forms; pick the one that's hardest to mistype:

(A) Foolproof — download then run:

```bash
curl -fsSL https://raw.githubusercontent.com/DaveBugg/PiTun/master/install.sh \
  -o /tmp/pitun-install.sh
sudo bash /tmp/pitun-install.sh --version v1.3.0-beta.8
```

(B) Pipe with `bash -s --` separator (the `-s --` is required):

```bash
curl -fsSL https://raw.githubusercontent.com/DaveBugg/PiTun/master/install.sh \
  | sudo bash -s -- --version v1.3.0-beta.8
```

(C) Environment variable (no `-s --` voodoo needed):

```bash
curl -fsSL https://raw.githubusercontent.com/DaveBugg/PiTun/master/install.sh \
  | sudo PITUN_VERSION=v1.3.0-beta.8 bash
```

❌ Do NOT do this: `curl ... | sudo bash --version v1.3.0-beta.8` — bash swallows `--version` as its own flag (prints bash's version + exits) before our installer ever runs. Common copy-paste trap.
Useful flags (work via any of the three forms above; examples use form B):
```bash
# Pin a specific version (current: v1.3.0-beta.8)
... | sudo bash -s -- --version v1.3.0-beta.8
# Force rebuilding from source (no published release available, or
# you're testing local changes). Slower, needs reliable internet
# during the docker build.
... | sudo bash -s -- --build
# Air-gapped / hybrid offline install — point at a directory containing
# pre-downloaded artifacts. ANY file present in the directory is used
# as-is; missing ones fall back to a normal download (hybrid mode).
# Also auto-detected when install.sh is launched from a directory that
# already has any of the six expected filenames sitting next to it —
# no --offline flag needed in that case. Full instructions and the
# exact file list: docs/INSTALL_OFFLINE.md.
... | sudo bash -s -- --offline /tmp/pitun-artifacts
# Custom install path (default: /opt/pitun)
... | sudo bash -s -- --dir /srv/pitun
# Just preview what it would do — no changes made
... | sudo bash -s -- --dry-run
# Force IPv4 is the default. If you have a v6-only network, opt in:
... | sudo bash -s -- --ipv6
# Recover from a stale kill-switch lockup. ONLY when:
# - a previous PiTun run died with kill_switch=true active, AND
# - curl from this host now hangs on the very first download.
# On a HEALTHY install with kill_switch + a running backend, omit
# this flag — the install works over xray's normal bypass path.
# The pre-flight detects + warns automatically; re-run with the flag
# only if it suggests so. See "Troubleshooting" below for details.
... | sudo bash -s -- --fix-blockers
```

After the script finishes:
- Web UI is at `http://<this-host-ip>/`, login `admin` / `password` (change it on first login via Settings → Account).
- `/opt/pitun/.env` was generated with a random `SECRET_KEY` and the network block autodetected from your default-route interface: `INTERFACE`, `LAN_CIDR`, `GATEWAY_IP` (the PiTun host's own LAN IP), `VITE_API_BASE_URL`, `VITE_WS_BASE_URL`, `CORS_ORIGINS`. Verify with `head -30 /opt/pitun/.env` before going to production; if anything looks off, edit and `docker compose -f /opt/pitun/docker-compose.yml restart`.
See `install.sh --help` for the full option list.
If you want the source tree alongside the running stack (e.g. for development, or to apply patches before deploy), the classic path still works:
```bash
git clone https://github.com/DaveBugg/PiTun pitun
cd pitun
# Host bootstrap: installs Docker (if missing), xray-core, GeoIP/GeoSite,
# system packages, kernel modules, sysctl tweaks, log rotation, daily
# cleanup cron. Skip if you'd rather do it manually — see "Manual install"
# below.
sudo bash scripts/setup.sh
cp .env.example .env
# Edit .env — at minimum set SECRET_KEY, INTERFACE, LAN_CIDR,
# GATEWAY_IP (the PiTun host's own LAN IP — what devices will use as
# their default gateway). A random SECRET_KEY: openssl rand -hex 32
#
# Tip: instead of editing manually, run `sudo bash install.sh
# --skip-host-prep` from the same checkout — it autodetects all the
# network values from your default-route interface and writes them
# into .env (only on first generation).
docker compose up -d --build
```

The web UI listens on the host's LAN IP, port 80. Default login is
admin / password — change it on first run via Settings → Account.
If you'd rather wire the host yourself, here's the equivalent checklist.
Everything below must be done before docker compose up:
```bash
# 1. System packages
sudo apt update
sudo apt install -y curl wget ca-certificates nftables iproute2 \
net-tools iptables arp-scan dnsutils unzip jq cron
# 2. Free UDP/5353 (PiTun's DNS port)
sudo systemctl stop avahi-daemon avahi-daemon.socket || true
sudo systemctl disable avahi-daemon avahi-daemon.socket || true
sudo systemctl mask avahi-daemon || true
# 3. Sysctl: IP forwarding + TPROXY loopback
sudo tee /etc/sysctl.d/99-pitun.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.all.route_localnet = 1
EOF
sudo sysctl --system
# 4. TPROXY kernel modules (load now + pin for next boot)
sudo modprobe nft_tproxy xt_TPROXY
echo -e "nft_tproxy\nxt_TPROXY" | sudo tee /etc/modules-load.d/pitun.conf
# 5. Docker + Compose v2 (skip if already installed)
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker "$USER" # then log out + back in
# 6. GeoIP/GeoSite databases (bind-mounted RW into the backend container
# so the user can refresh them from the UI without an image rebuild).
# The xray binary itself is bundled inside the backend image as of
# v1.2.0 — no separate host install needed.
sudo mkdir -p /usr/local/share/xray
sudo curl -fsSL -o /usr/local/share/xray/geoip.dat https://github.com/Loyalsoldier/v2ray-rules-dat/releases/latest/download/geoip.dat
sudo curl -fsSL -o /usr/local/share/xray/geosite.dat https://github.com/Loyalsoldier/v2ray-rules-dat/releases/latest/download/geosite.dat
# 7. Static IP on the LAN interface (use NetworkManager, dhcpcd, or netplan
# depending on your distro; not scripted because the right tool varies).
# 8. Now you can deploy
cp .env.example .env && $EDITOR .env
docker compose up -d --build
```

Why the geo databases are on the host, not inside the image. `geoip.dat` and `geosite.dat` are refreshable from the UI (GeoData → Update). Keeping them as a host bind-mount means a single `curl` updates them in place — no image rebuild required. The xray binary itself, by contrast, is baked into the backend image as of v1.2.0 (used to be a host install). One less host-side prerequisite, version pinned to the release tag.
The CI release workflow publishes loadable Docker tarballs (linux/amd64 and linux/arm64) as GitHub Release assets. Useful for air-gapped / factory-fresh RPi installs:
```bash
# On a machine with internet
curl -LO https://github.com/DaveBugg/PiTun/releases/download/vX.Y.Z/pitun-backend-vX.Y.Z-arm64.tar.gz
curl -LO https://github.com/DaveBugg/PiTun/releases/download/vX.Y.Z/pitun-frontend-vX.Y.Z.tar.gz
# Transfer to the host, then:
docker load < pitun-backend-vX.Y.Z-arm64.tar.gz
tar -xzf pitun-frontend-vX.Y.Z.tar.gz -C frontend/dist/
docker compose up -d
```

For RPi-specific bootstrap (first boot, OS-level dependencies, network config) `scripts/` ships with helpers — see `scripts/README.md`.
Once your PiTun box is up and you've added a VPS under Servers, the Install button (rocket icon) lets you provision the upstream proxy on the VPS itself over SSH, with the install log streamed live in the modal. Three protocols are supported:
| Protocol | Topology | Where the credentials live |
|---|---|---|
| NaiveProxy | Single TLS tunnel per server | One Node row per deploy |
| WireGuard | Multi-client (one server, N peers) | One DeploymentClient row per peer; export selected peers to Node rows on demand |
| x-ui | Full 3x-ui / x-ui-pro panel (many inbounds + many clients per inbound) | Panel rows under XuiServer; each inbound's clients live on the panel itself, surfaced + cached by PiTun |
The 443 slot — NaiveProxy and x-ui-pro can't coexist on one VPS (both want :443 with Let's Encrypt). Bare-mode x-ui (no domain → panel on a random high port) IS compatible with NaiveProxy. PiTun enforces this slot rule at deploy time with a clear error message.
Form: domain (with an A-record pointing at the VPS), Let's Encrypt email, optional Caddy basic-auth user/password. Behind the scenes:
- PiTun uploads `scripts/setup-naive-server.sh` to the VPS.
- The script installs Caddy + the `caddyserver/forwardproxy` plugin via `xcaddy`, fetches a TLS cert, writes a Caddyfile (see the sketch after this list), and enables the service.
- On success the script emits `URI=naive+https://…` on its last line; PiTun parses that, creates a `Node` row, and the deploy is done.
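For reference, a forwardproxy Caddyfile of this kind typically looks like the sketch below, modelled on klzgrad's naiveproxy documentation — the exact file and path the script writes may differ:

```bash
sudo tee /etc/caddy/Caddyfile >/dev/null <<'EOF'
{
  order forward_proxy before file_server
}
:443, example.com {
  tls admin@example.com
  forward_proxy {
    basic_auth user pass
    hide_ip
    hide_via
    probe_resistance
  }
  file_server {
    root /var/www/html
  }
}
EOF
```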
A second deploy on the same server overwrites the existing
ServerDeployment row; the linked Node is left intact (you can
recreate it from the deployment if it was deleted).
Form: first-client name (defaults to client1), UDP port (default
51820), DNS servers, AllowedIPs. The install script
(scripts/setup-wireguard-server.sh) is sub-command driven:
| Sub-command | Used by | What it does |
|---|---|---|
| `install` | Initial deploy | apt, sysctl, generate server keypair, write wg0.conf, enable wg-quick@wg0, add the first peer |
| `add-client` | "Add" in the Clients modal | New keypair + `wg syncconf` reload (no tunnel restart) |
| `remove-client` | "Remove" in the Clients modal | Strip peer + `wg syncconf` reload |
| `list-clients` | "Sync" in the Clients modal | List current peer names + pubkeys |
| `get-conf` | "Download .conf" | Re-print the cached INI for one client |
Clients are a separate layer. The Servers page shows a Users
icon next to any server with a WG deployment — that opens the
Clients modal where you:
- Add a peer (creates a new keypair on the VPS + caches the private key locally so we can render a `.conf` for download).
- Sync to reconcile against the VPS — peers added manually on the server show up; peers deleted on the server are flagged `orphan` but not auto-deleted from PiTun, because the admin may have linked them to Nodes.
- Download .conf for QR-coding into a phone.
- Export to Node to actually route traffic through this peer (creates a `Node` row referencing the `DeploymentClient`; the Nodes page renders "from <server name>" alongside the row).
- Remove to strip the peer from the server AND delete the `DeploymentClient` row. Any Nodes that were exported from it stay but get an `orphan` badge so the admin sees the upstream is gone.
One VPS can host clients used by multiple PiTun instances — each PiTun sees the peers it added itself + any it imported via Sync.
Form: optional domain (empty → bare-mode panel on a random high port,
self-signed cert; non-empty → xui-pro stack with nginx + Let's Encrypt
on :443) and optional LE registration email (defaults to
admin@<apex-domain>). Behind the scenes
(scripts/setup-xui-server.sh):
- Installs upstream 3x-ui v3.0.1 in `--non-interactive` mode.
- For xui-pro: also installs nginx + certbot + the `GFW4Fun/randomfakehtml` archive of fakesite templates, picks one at random, and wires the reverse-proxy `externalProxy: 443` block.
- Generates fresh admin user/password + a Bearer API token + panel basepath + sub-port, then emits `URI=xui://…` so PiTun creates the `XuiServer` row + sets up the Bearer-authenticated client.
- Runs `cleanup-go.sh` to remove the Go SDK / build cache the installer left behind (~2.5 GB on a 10 GB VPS).
- Applies the VPS optimisation profile (BBR + sysctl + swap + ulimits + logrotate) — see the sketch after this list.
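The BBR portion of that optimisation profile amounts to roughly the following (generic sketch; the script's actual sysctl/swap/ulimit set is broader):

```bash
cat <<'EOF' | sudo tee /etc/sysctl.d/99-bbr.conf
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
EOF
sudo sysctl --system
sysctl net.ipv4.tcp_congestion_control   # expect: bbr
```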
Once registered, every inbound + client on the panel becomes
manageable from the dedicated X-ui Panels page in the UI:
6 wired-in inbound presets (VLESS+Reality / VLESS+xhttp+Reality /
VLESS+WS+TLS / VLESS+xhttp+TLS / Trojan+gRPC / SOCKS5), per-client
export to Node rows (cache-backed, idempotent on re-export),
multi-layer healthcheck, cache↔panel sync that picks up clients
added directly via the panel UI, and (xui-pro only) fakesite
rotation — random pick from the bundled archive or upload a custom
.zip. Chain orchestration (see Proxy Chains below) glues two
registered panels into a two-hop VLESS+Reality tunnel.
All runtime config goes through the web UI. The only settings that
must be set before first start, via .env:
| Variable | Default | What |
|---|---|---|
| `SECRET_KEY` | `changeme-…` | JWT signing key — `openssl rand -hex 32` |
| `INTERFACE` | `eth0` | LAN interface name on the host |
| `LAN_CIDR` | `192.168.1.0/24` | Your LAN subnet (autodetected by install.sh) |
| `GATEWAY_IP` | `192.168.1.100` | The PiTun host's own LAN IP — devices set this as their default gateway. (Misnomer kept for backward compat; not the router's IP.) Autodetected by install.sh. |
| `BACKEND_PORT` | `8000` | Backend listen port (behind nginx) |
| `TPROXY_PORT_TCP` | `7893` | TPROXY TCP listener |
| `DNS_PORT` | `5353` | Internal DNS forwarder port |
| `NAIVE_PORT_RANGE_START` | `20800` | Allocator range for naive sidecars |
| `NAIVE_IMAGE` | `pitun-naive:latest` | Image tag built locally or loaded from release |
Full annotated example: .env.example.
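For orientation, a generated `.env` is a flat key=value file along these lines (placeholder values; the real file comes from `install.sh` and the full annotated list is in `.env.example`):

```bash
SECRET_KEY=change-me-openssl-rand-hex-32
INTERFACE=eth0
LAN_CIDR=192.168.1.0/24
# GATEWAY_IP is the PiTun host's own LAN IP (see the note below)
GATEWAY_IP=192.168.1.100
BACKEND_PORT=8000
TPROXY_PORT_TCP=7893
DNS_PORT=5353
```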
About `GATEWAY_IP`: the variable name predates the LAN-gateway feature and refers to the PiTun host itself, not your home router. If the .env value disagrees with the actual interface IP, the backend auto-syncs the live IP into the database on the first `GET /settings`, so the UI always shows the truth. `LAN_CIDR` has the same runtime fallback as of 1.2.3.
Symptom. You run install.sh and the very first download
(api.github.com/.../releases/...) hangs forever (~75 s before
TCP gives up). You can ping 8.8.8.8 from the host but curl https://api.github.com doesn't return.
Cause. PiTun's kill_switch=true mode installs an
inet pitun nftables table + ip rule fwmark 0x1 lookup 100
policy route that TPROXYs all non-bypass traffic to xray on
127.0.0.1:7893. If the backend dies with that protection still
active (crash, OOM, manual docker compose down), the rules stay
in the kernel but xray isn't there to receive — every outbound
packet drops silently. Even the installer.
Auto-detection. Since v1.3.0-beta.3, install.sh checks for
this state at startup and prints a clear warning if it finds stale
rules without a running pitun-backend container. If you see:
```
[WARN] ════════════════════════════════════════════════════════════════════
[WARN] Detected stale kill-switch state on this host:
[WARN] * 'inet pitun' nftables table is present but backend is down
[WARN] * 'ip rule fwmark 0x1 lookup 100' policy route is present
…
```

…re-run with the `--fix-blockers` flag (or `PITUN_FIX_BLOCKERS=1`):

```bash
sudo bash /tmp/pitun-install.sh --version v1.3.0 --fix-blockers
```

It will flush the stale rules before any download, then proceed. The backend will reinstall them at startup if needed.
When NOT to use `--fix-blockers`: on a healthy install where `pitun-backend` is currently `Up`, the kill-switch is doing its job and traffic flows fine through xray's bypass path. The installer auto-detects this case and leaves nftables alone — no flag needed. Adding the flag anyway just causes a brief LAN exposure while you re-stack.
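If you prefer to inspect and clear the stale state by hand, this is roughly what `--fix-blockers` automates, using the rule names quoted in the warning above (the backend reinstalls its ruleset on the next start):

```bash
docker ps --filter name=pitun-backend          # confirm the backend really is down
sudo nft list table inet pitun                 # inspect the leftover table
sudo ip rule show | grep 'fwmark 0x1'

sudo nft delete table inet pitun               # flush the stale TPROXY rules
sudo ip rule del fwmark 0x1 lookup 100
```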
If the install hangs on a fresh host (no PiTun previously installed)
and --fix-blockers doesn't help, suspect IPv6. Some Debian 13 RPi
images and a number of VPS providers have advertised but unroutable
IPv6 paths to GitHub. Since v1.3.0-beta.3 the installer defaults to
IPv4 (-4) — but if you've passed --ipv6 or set
PITUN_FORCE_IPV6=1, drop those.
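To confirm whether IPv6 is the culprit, compare the two address families directly — plain curl, nothing PiTun-specific:

```bash
curl -4 -sS -o /dev/null -w 'IPv4: %{http_code}\n' --max-time 10 https://api.github.com
curl -6 -sS -o /dev/null -w 'IPv6: %{http_code}\n' --max-time 10 https://api.github.com
```

If the `-6` call hangs or errors while `-4` returns promptly, the IPv6 path to GitHub is broken and the installer's IPv4 default is what you want.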
If you upgraded to v1.3.0-beta.1 or .2 (via the original beta release artifacts) on an arm64 device — particularly RPi 4 (Cortex-A72) — the backend container will crash-loop with one of:
ValueError: bad marshal data (unknown type code)
or, worse, a silent Segmentation fault (core dumped) at
import uvicorn. Root cause was a CI-side QEMU cross-build
producing arm64 wheels with corrupted bytecode caches.
Fixed in v1.3.0-beta.3 via Dockerfile (`PYTHONDONTWRITEBYTECODE=1` + post-install `*.pyc` strip) and a switch to native arm64 runners in CI. Just re-run the installer with `--version v1.3.0-beta.3` (or later); the new image loads cleanly on every Cortex-A72 / A76 we've tested.
Symptom. Right after a fresh install.sh run the dashboard
displays a red banner "Валидация конфигурации Xray не прошла /
xray validation failed: (empty stderr)". Backend logs show
xray process died unexpectedly (rc=-11) (SIGSEGV, exit 139)
and Auto-restart aborted: config verification failed: with no
stderr — xray -test -config … segfaults before it can print
anything.
Cause. The bundled xray binary inside the freshly-loaded
Docker image is bit-corrupted. The release tarball on GitHub is
fine (verified by per-arch sha256 pinning at build time, since
v1.3.0-beta.5), but docker load < tarball writes layer files
to local storage, and on a flaky SD card / SSD or with active
ext4 metadata corruption, individual bytes flip silently. The
binary still has a valid ELF header (so file reports it as a
plain executable), but executing instructions land on garbage
addresses → segfault.
RPi 4 / 5 with a USB3-connected SSD? The most common cause is the UAS USB-storage protocol corrupting bytes on heavy writes — not actual disk damage. Skip directly to RPi 4 / 5 with USB-SATA / USB-NVMe SSD: disable UAS for the one-time kernel-cmdline fix.
Auto-detection. Since v1.3.0-beta.5, install.sh re-runs
docker load up to 3 times and verifies the bundled xray's
sha256 against a pinned digest after each attempt. If all three
fail, the installer aborts with a pointer to this section
instead of leaving you with a non-bootable stack.
Recovery.
- Verify the diagnosis — confirm the binary is corrupted (and isn't just a config error):

  ```bash
  docker run --rm --entrypoint sha256sum pitun-backend:latest \
    /usr/local/bin/xray
  ```

  Compare to the pinned values in `backend/Dockerfile` (`XRAY_SHA256_AMD64` / `_ARM64` / `_ARM`). If they don't match, you've hit storage corruption.

- Check the filesystem — the most common root cause is ext4 hash-tree (htree) corruption; one or more of:

  ```bash
  sudo dumpe2fs -h /dev/<rootdev> | grep -E 'state|First error|Last error'
  sudo dmesg -T | grep -iE 'ext4|mmc|sd|ata'
  sudo smartctl -a /dev/<rootdev>   # for SSD/NVMe
  ```

  `Filesystem state: clean with errors` plus recent `EFSCORRUPTED` events is a strong signal.

- Repair — schedule an fsck on the next boot:

  ```bash
  sudo touch /forcefsck
  sudo tune2fs -c 1 /dev/<rootdev>
  sudo reboot
  ```

  On a Pi, the boot will pause for fsck (1–10 min depending on disk size and damage). After the reboot, `dumpe2fs` should show `Filesystem state: clean` with no error timestamps.

- Recover the data — `install.sh` makes a pre-upgrade SQLite snapshot at `/opt/pitun/data-backup-pre-vX.Y.Z-*.db` on every run. Compose's `data:/app/data` bind mount also keeps the live DB at `/opt/pitun/data/pitun.db`. Either is a safe starting point if the running stack got into a bad state.

- Re-install — wipe `/var/lib/docker/{overlay2,image,containers}/*` while the daemon is stopped (orphaned layers from the corrupted attempt won't auto-clean), then run `install.sh --version vX.Y.Z` again. The retry loop in step 2 of the installer should now pass.

- If it keeps recurring — the storage hardware is dying. Back up `/opt/pitun/data` + `/opt/pitun/.env` to another machine and swap the SD card / SSD before re-installing. RPi 4 has no ECC RAM and SD cards are notoriously prone to silent bit-rot; an M.2 SSD via USB3 is much more reliable.
Who this affects. Raspberry Pi 4 / 5 owners running root from an
SSD (or plain HDD) connected via a USB3 → SATA / NVMe adapter — i.e.
not SD card, not PCIe-direct NVMe HAT. Common culprits are
ASMedia bridges (174c:1051, 174c:1153, 174c:1156, 174c:55aa)
and JMicron (152d:0578, 152d:1561, 152d:583*). Check yours:
```bash
lsusb | grep -i -E 'asmedia|jmicron|realtek'
lsusb -t    # look for "Driver=uas" — that's the trouble signal
```

Why it breaks install.sh. Most cheap USB-SATA bridges on Linux default to UAS (USB Attached SCSI) for speed, but several bridge + Pi-firmware combinations silently flip bytes during heavy single-stream writes — exactly the workload of `docker load < pitun-backend-*.tar.gz` (90 MB+). The result: bytes inside the bundled `xray` binary get corrupted on disk, the binary segfaults on first use, and the dashboard greets you with "xray validation failed: (empty stderr)". SMART says the SSD is fine; the bridge / driver is the culprit. Forcing the slower BOT (Bulk-Only Transport) protocol trades ~20 % throughput for rock-solid integrity.
One-time fix (kernel cmdline quirk). Replace <VID:PID> with
your bridge's IDs from lsusb. The example below is for an Argon
ONE M.2 + ASM1156 (174c:1156):
```bash
# 1. Find your bridge VID:PID
lsusb -t # confirms "Driver=uas"
lsusb | grep -i -E 'asmedia|jmicron|realtek'
# → Bus 002 Device 002: ID 174c:1156 ASMedia Technology Inc. ...
VIDPID=174c:1156 # ← put YOUR VID:PID here
# 2. Backup the current cmdline + insert the quirk
sudo cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
sudo sed -i "1s|^|usb-storage.quirks=${VIDPID}:u |" /boot/firmware/cmdline.txt
cat /boot/firmware/cmdline.txt # verify
# 3. Reboot
sudo reboot
# 4. After reboot — verify the driver flipped to usb-storage (not uas)
lsusb -t # expect "Driver=usb-storage"
sudo dmesg | grep -i 'uas is ignored'
# → usb 2-2: UAS is ignored for this device, using usb-storage instead
```

After that, re-run `install.sh --version vX.Y.Z` — the new
load-time sha verification (since v1.3.0-beta.5) will pass on the
first attempt and any subsequent upgrade will be stable.
To revert (if your bridge actually works fine on UAS and you
want the speed back), restore /boot/firmware/cmdline.txt.bak over
cmdline.txt and reboot.
The quirk only disables UAS for the matching VID:PID; other USB mass-storage devices on the same Pi (USB stick, second drive) are unaffected.
```bash
# Backend
cd backend
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt -r requirements-dev.txt
python -m uvicorn app.main:app --reload --port 8000
python -m pytest tests/ -q
# Frontend
cd frontend
npm ci
npm run dev # http://localhost:5173
npm run build # tsc + vite (catches type errors)
npm run test:ci
npm run lint
```

The full Docker stack lives in docker-compose.yml. For local UI work
without RPi-specific bits (TPROXY, nftables) you can skip Docker — auth,
nodes, routing rules and most of the UI work fine on macOS/Windows
against a backend running on localhost:8000.
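If a backend is already running on `localhost:8000`, you can point the dev server at it via the same `VITE_*` variables the stack uses in `.env` — this assumes the dev build picks them up from the environment, so adjust if your setup differs:

```bash
cd frontend
VITE_API_BASE_URL=http://localhost:8000 \
VITE_WS_BASE_URL=ws://localhost:8000 \
npm run dev
```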
See CONTRIBUTING.md for PR conventions and code
style.
Backend — Python 3.11, FastAPI, SQLModel/SQLAlchemy, Alembic, Pydantic v2, Uvicorn, httpx, aiohttp, aiosqlite, bcrypt, python-jose, psutil, docker-py, PyYAML.
Frontend — React 19, TypeScript, Vite, Tailwind CSS 3, TanStack Query (React Query) v5, Zustand, React Router 6, Recharts, Lucide React, axios, clsx, tailwind-merge.
Infrastructure — Docker + Compose, nginx (frontend), Tecnativa's docker-socket-proxy (read-only Docker API access from the backend), nftables, systemd.
Testing — pytest, Vitest, Testing Library.
PiTun is glue code on top of mature, hard-to-replicate upstream projects. Without them, none of this would exist:
- XTLS/Xray-core — the actual proxy engine. PiTun manages an xray-core process, generates its config, and talks to its gRPC API.
- klzgrad/naiveproxy — Chromium-based HTTPS-tunnelling proxy used as a per-node sidecar. PiTun's `docker/naive/` builds an image from upstream releases.
- Caddy with caddyserver/forwardproxy (klzgrad's fork) — recommended NaiveProxy server. `scripts/setup-naive-server.sh` builds it via `xcaddy`.
- MHSanaei/3x-ui — the upstream x-ui panel (v3.0.1). PiTun's "bare" x-ui mode auto-installs it and manages its inbounds/clients via the panel API.
- GFW4Fun/x-ui-pro — domain + nginx + LE fork of 3x-ui used in PiTun's "xui-pro" mode and as relay/exit nodes in Proxy Chains.
- GFW4Fun/randomfakehtml — fakesite templates bundled into xui-pro installs, also driven by the in-app "rotate fakesite" feature.
- Loyalsoldier/v2ray-rules-dat — GeoIP / GeoSite rule databases used by xray's `geoip:` / `geosite:` matchers. PiTun pulls the latest `geoip.dat` and `geosite.dat` from here.
- MaxMind GeoLite2 — GeoIP-MMDB lookups (optional, opt-in).
- netfilter / nftables — kernel-side TPROXY interception.
- arp-scan — LAN device discovery.
- FastAPI — HTTP framework
- SQLModel + SQLAlchemy — ORM
- Pydantic — validation
- Alembic — migrations
- Uvicorn — ASGI server
- httpx + aiohttp — HTTP clients
- asyncssh — async SSH client used for VPS auto-deploy and remote diagnostics
- websockets — install-log streaming
- aiosqlite — async SQLite
- python-jose + bcrypt — auth
- psutil — host metrics
- docker-py — Docker API client (Naive sidecar lifecycle)
- PyYAML — Clash YAML import
- React, Vite, TypeScript
- Tailwind CSS — styling
- TanStack Query — server state
- Zustand — UI state
- React Router — routing
- Recharts — metrics charts
- Lucide — icons
- axios — HTTP client
- Vitest + Testing Library — tests
- Docker + Compose
- nginx — frontend serving + WebSocket proxy
- Tecnativa/docker-socket-proxy — locked-down Docker API access for the backend
PiTun's import format compatibility (V2RayN / Shadowrocket / Clash JSON) is inspired by the formats of those projects — no code is borrowed.
Bug reports and PRs welcome. See CONTRIBUTING.md
for code style, PR conventions, and what to keep out of the repo.
BSD 3-Clause © PiTun contributors
Disclaimer. PiTun is a network management tool. You are responsible for complying with the laws of your jurisdiction and the terms of service of any upstream provider you use it with. The maintainers provide no warranty and accept no liability for misuse.















