SNI Router Setup
This page is the operational companion to Surviving Active Probing. Read that first for the threat model. In short: an SNI router on port 443 that forwards the magic-SNI traffic to mtg and routes everything else to a real web server makes the host indistinguishable from any HTTPS server under both passive DPI and active probing. The router operates at layer 4 — it inspects the unencrypted SNI in the TLS ClientHello and proxies the raw byte stream. TLS is terminated by the backend (mtg or the web server), never by the router.
┌──────────────────┐
:443 ──────>│ SNI router │
│ (HAProxy / nginx │
│ stream / sslh) │
└──┬───────────┬───┘
SNI match │ │ default
v v
┌─────────┐ ┌─────────┐
│ mtg │ │ web │
│ :3128 │ │ :8443 │
│ FakeTLS │ │ real TLS│
└─────────┘ └─────────┘
All configs in this guide were syntax-checked against:
| Tool | Version |
|---|---|
| HAProxy | 2.8.16 (Ubuntu 24.04 LTS) |
| nginx | 1.24.0 with ngx_stream_module |
| sslh | 1.22c (sslh-fork) |
| mtg | 2.x |
| Caddy | 2.x (caddy:alpine) |
Pin these or newer. Older HAProxy (< 1.8) and older nginx (< 1.11.5) do not have the SNI-preread features used here.
HAProxy in TCP mode is the reference choice — fast, well documented, deterministic logging, hot reloads without dropping connections.
global
log /dev/log local0 info
maxconn 4096
defaults
log global
mode tcp
option tcplog
timeout connect 5s
timeout client 1h # MTProto sessions are long-lived
timeout server 1h
frontend tls
bind *:443
bind :::443
# Wait up to 5s for the ClientHello before we route.
tcp-request inspect-delay 5s
# Only act once we have a full TLS ClientHello (handshake type 1).
tcp-request content accept if { req_ssl_hello_type 1 }
# Replace proxy.example.com with the domain encoded in your mtg secret.
use_backend mtg if { req_ssl_sni -i proxy.example.com }
# Everything else (browsers, probes, scanners) goes to the web server.
default_backend web
backend mtg
server mtg 127.0.0.1:3128
backend web
server web 127.0.0.1:8443

Validate the file before reloading:
haproxy -c -f /etc/haproxy/haproxy.cfg

Adds: dual-stack listening, ACME passthrough on :80, IPv6 binds, TCP keepalive, a structured TCP log line that records the routed SNI, and backend health checks.
global
log /dev/log local0 info
log /dev/log local1 notice
maxconn 20000
nbthread 4
user haproxy
group haproxy
daemon
defaults
log global
mode tcp
option tcplog
option dontlognull
option tcpka # forward TCP keepalive to backend
timeout connect 5s
timeout client 1h
timeout server 1h
timeout client-fin 30s
timeout tunnel 1h # cap on a half-closed tunnel
retries 3
# --- HTTP :80 — ACME challenges + redirect ----------------------------------
frontend http
bind *:80
bind :::80
mode http
option httplog
# Pass /.well-known/acme-challenge/* straight to the web server so
# Let's Encrypt HTTP-01 validation works.
acl is_acme path_beg /.well-known/acme-challenge/
http-request redirect scheme https code 301 unless is_acme
use_backend web_acme if is_acme
# --- TLS :443 — SNI-based routing -------------------------------------------
frontend tls
bind *:443
bind :::443
tcp-request inspect-delay 5s
# Hard-reject anything that is not a TLS ClientHello.
tcp-request content reject if !{ req_ssl_hello_type 1 }
tcp-request content accept if { req_ssl_hello_type 1 }
# Capture the preread SNI so the log line below can reference it.
# (ssl_fc_sni would be empty here: the router never terminates TLS.)
tcp-request content capture req.ssl_sni len 100
# Custom log-format that records the SNI we routed on.
log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc sni=%[capture.req.hdr(0)]"
use_backend mtg if { req_ssl_sni -i proxy.example.com }
default_backend web
backend mtg
option tcp-check
server mtg 127.0.0.1:3128 check inter 30s
backend web
option tcp-check
server web 127.0.0.1:8443 check inter 30s
backend web_acme
mode http
server web 127.0.0.1:8080

The Ubuntu/Debian package ships its own unit at
/lib/systemd/system/haproxy.service. After installing
(apt install haproxy) and dropping the config above into
/etc/haproxy/haproxy.cfg:
sudo systemctl enable --now haproxy
sudo systemctl reload haproxy # zero-downtime reload after edits
sudo journalctl -u haproxy -f

If you need port 80/443 capability without running as root, the
upstream unit already does the right thing
(AmbientCapabilities=CAP_NET_BIND_SERVICE).
The reference setup at contrib/sni-router/ runs HAProxy, mtg, and Caddy in a single docker compose stack and uses PROXY protocol v2 between HAProxy and the backends so each backend sees the real client IP. See section 7.2 for the trade-off.
docker-compose.yml:
services:
haproxy:
image: haproxy:lts-alpine # 2.8 LTS at the time of writing
ports:
- "443:443"
- "80:80"
volumes:
- ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
depends_on: [mtg, web]
restart: unless-stopped
mtg:
image: nineseconds/mtg:2
volumes:
- ./mtg-config.toml:/config/config.toml:ro
expose: ["3128"]
restart: unless-stopped
web:
image: caddy:alpine
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- ./www:/srv:ro
expose: ["80", "8443"]
environment:
DOMAIN: ${DOMAIN:?set DOMAIN in .env}
restart: unless-stopped
volumes:
caddy_data:

haproxy.cfg for the docker variant — uses Docker DNS names
(mtg, web) instead of 127.0.0.1:
global
log stdout format raw local0 info
maxconn 4096
defaults
log global
mode tcp
option tcplog
timeout connect 5s
timeout client 1h
timeout server 1h
frontend http
bind *:80
mode http
acl is_acme path_beg /.well-known/acme-challenge/
http-request redirect scheme https code 301 unless is_acme
use_backend web_acme if is_acme
frontend tls
bind *:443
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
use_backend mtg if { req_ssl_sni -i proxy.example.com }
default_backend web
backend mtg
server mtg mtg:3128 send-proxy-v2
backend web
server web web:8443 send-proxy-v2
backend web_acme
mode http
server web web:80

mtg-config.toml:
secret = "PASTE_YOUR_SECRET_HERE"
bind-to = "0.0.0.0:3128"
# Required because HAProxy prepends a PROXY protocol v2 header.
# Drop this line AND the `send-proxy-v2` from haproxy.cfg if you do
# not need real client IPs at the mtg layer.
proxy-protocol-listener = true
[defense.anti-replay]
enabled = true
max-size = "1mib"
error-rate = 0.001

Caddyfile:
{
http_port 80
https_port 8443
servers :8443 {
listener_wrappers {
proxy_protocol {
timeout 5s
# Tighten this to your compose subnet in production.
allow 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
}
tls
}
}
}
{$DOMAIN} {
root * /srv
file_server
}

Generate the secret and bring the stack up:
docker run --rm nineseconds/mtg:2 generate-secret --hex YOUR_DOMAIN
# paste into mtg-config.toml, also update the SNI in haproxy.cfg
DOMAIN=YOUR_DOMAIN docker compose up -d
docker compose exec mtg mtg access /config/config.toml

nginx in stream mode plus ssl_preread does the same job as HAProxy
in TCP mode. It is convenient if you already run nginx for the web
backend — one process, one config, one reload.
Requires the stream module (Ubuntu/Debian: apt install libnginx-mod-stream). Verify with nginx -V 2>&1 | tr ' ' '\n' | grep stream.
# /etc/nginx/nginx.conf
load_module modules/ngx_stream_module.so; # if not auto-loaded
worker_processes auto;
events { worker_connections 4096; }
stream {
log_format sni '$remote_addr [$time_local] '
'sni=$ssl_preread_server_name '
'upstream=$upstream_addr '
'sent=$bytes_sent recv=$bytes_received '
'duration=$session_time';
access_log /var/log/nginx/stream.log sni;
# Map the SNI to a named upstream. `default` catches anything
# that doesn't match (browsers, probes).
map $ssl_preread_server_name $upstream {
proxy.example.com mtg_backend;
default web_backend;
}
upstream mtg_backend { server 127.0.0.1:3128; }
upstream web_backend { server 127.0.0.1:8443; }
server {
listen 443;
listen [::]:443;
proxy_pass $upstream;
ssl_preread on; # parse the ClientHello, do NOT terminate
proxy_timeout 1h; # MTProto sessions
proxy_connect_timeout 5s;
}
}
http {
server {
listen 80;
listen [::]:80;
location /.well-known/acme-challenge/ { root /var/www/acme; }
location / { return 301 https://$host$request_uri; }
}
}

Validate and reload:
sudo nginx -t
sudo systemctl reload nginx

Note: nginx-stream supports proxy_protocol on; if you want PROXY
protocol forwarding to mtg — the trade-off is the same as with
HAProxy (see 7.2).
sslh is a simple multiplexer that probes the first bytes of the connection and routes by protocol type, not by full L4 features. For SNI-based TLS it works, but its design is different:
- sslh listens on one port and forks (sslh-fork) or selects (sslh-select) per connection. Each connection is matched against the first probe in order that succeeds, so the order of the protocols list matters.
- Logging is sparse — basically connection counts and protocol names.
- No hot reload: a config change requires a process restart, which drops in-flight connections.
- Good when you also want to multiplex SSH/OpenVPN/HTTP on port 443 alongside mtg.
/etc/sslh.cfg:
verbose: 0;
foreground: false;
inetd: false;
numeric: false;
transparent: false;
timeout: 2;
user: "sslh";
pidfile: "/run/sslh.pid";
syslog_facility: "auth";
listen:
(
{ host: "0.0.0.0"; port: "443"; },
{ host: "::"; port: "443"; }
);
protocols:
(
# Specific SNI first — Telegram clients with the magic SNI go to mtg.
{
name: "tls";
host: "127.0.0.1";
port: "3128";
sni_hostnames: [ "proxy.example.com" ];
log_level: 0;
},
# Catch-all TLS goes to the web backend (real cert).
{
name: "tls";
host: "127.0.0.1";
port: "8443";
log_level: 0;
}
);
Run via the packaged unit:
sudo systemctl enable --now sslh
# Or test interactively:
sudo sslh -F /etc/sslh.cfg -f -v 1

sslh has no native PROXY protocol support, so client IPs are lost on
the backend unless you enable transparent mode (transparent: true;)
which requires CAP_NET_ADMIN and matching iptables/policy-routing
rules — out of scope here.
| Feature | HAProxy | nginx-stream | sslh |
|---|---|---|---|
| Layer-4 SNI routing | yes (req_ssl_sni) | yes (ssl_preread) | yes (sni_hostnames) |
| ALPN-based routing | yes (req.ssl_alpn) | yes (ssl_preread_alpn_protocols) | yes |
| Hot reload (zero connection drop) | yes (systemctl reload) | yes (worker rotation) | no |
| PROXY protocol v1/v2 to backend | yes (send-proxy[-v2]) | yes (proxy_protocol on) | no (transparent only) |
| TCP health checks | yes | passive only (active checks are commercial) | no |
| Per-connection log line | rich (tcplog, custom) | rich (log_format) | minimal (syslog) |
| Memory per 10k idle conns | ~50 MB | ~80 MB | ~5 MB (fork: per-conn) |
| CPU at 10k req/s SNI peek | very low | very low | low |
| Config complexity | medium | medium-low | low |
| Multiplex non-TLS protocols | with mode tcp ACLs | limited | first-class |
| Best fit | dedicated SNI router | already running nginx | tiny boxes, SSH+TLS mux |
If in doubt, pick HAProxy. It is the only option here with hot reloads and a deterministic, well-documented log line format — both matter operationally.
The web backend (Caddy / nginx / whatever) terminates TLS itself, so that is where the certificate lives. The router never touches keys. Three reasonable options:
Caddy obtains and renews a cert automatically via HTTP-01. HAProxy
forwards /.well-known/acme-challenge/* from :80 to Caddy on :80.
Nothing else to configure. Best for greenfield deployments.
If your web backend is nginx, run acme.sh as a per-user daemon and use the webroot mode:
# Initial issuance (router is up, /.well-known passthrough works)
acme.sh --issue -d proxy.example.com -w /var/www/acme \
--server letsencrypt
# Install into nginx and reload
acme.sh --install-cert -d proxy.example.com \
--key-file /etc/nginx/certs/proxy.key \
--fullchain-file /etc/nginx/certs/proxy.crt \
--reloadcmd "systemctl reload nginx"

acme.sh also supports DNS-01 (no need to expose :80) for any DNS provider with an API. Use that if your :80 is firewalled.
If you already have a wildcard or a paid cert, drop the PEMs into the web backend and configure it to terminate TLS as usual. The router needs no certificate at all.
Each command tests one path through the router. Run them from a host that is not behind the same firewall.
# Path A: random SNI must reach the real web server, not mtg.
openssl s_client -connect YOUR_IP:443 \
-servername random.example.org -showcerts </dev/null
# Expect: a certificate for your real domain (or whatever Caddy serves).
# Path B: matching SNI without MTProto -> mtg's domain fronting kicks in,
# you should still see your domain's certificate (mtg relays the
# upstream handshake).
openssl s_client -connect YOUR_IP:443 \
-servername proxy.example.com </dev/null
# Expect: a certificate for proxy.example.com.
# Path C: HTTP redirect / ACME passthrough.
curl -v http://YOUR_IP/.well-known/acme-challenge/test
curl -vI http://YOUR_IP/ # expect 301 to https://
# Path D: real Telegram path - use a real client, then check the proxy
# is actually routing connections via mtg's stats endpoint.
docker compose exec mtg mtg access /config/config.toml
# Path E: mtg's own self-test (DNS / SNI / fronting reachability).
mtg doctor /etc/mtg/config.toml

Diagnostics on the router:
# HAProxy: live stats + per-frontend connection counts
echo "show info" | sudo socat stdio /run/haproxy/admin.sock
# nginx-stream: tail the access log to see SNI -> upstream mapping
sudo tail -f /var/log/nginx/stream.log
# sslh: count routings per protocol
sudo journalctl -u sslh -n 200 | grep 'connection from'

The router must operate at layer 4. If you accidentally configure
TLS termination on the router (HAProxy mode http with a bind *:443 ssl crt …, nginx listen 443 ssl;, etc.), the router presents its own
certificate to every client, mtg's FakeTLS handshake breaks, and
domain fronting is lost. Symptoms: Telegram clients fail to connect,
openssl s_client -servername proxy.example.com shows a cert from the
router not from the upstream.
The configs in this guide use mode tcp (HAProxy) or ssl_preread on;
(nginx) precisely to avoid this.
PROXY protocol prepends a small text/binary header to every connection so the backend sees the original client IP. Useful for logging and abuse handling, but every speaker on both ends must agree, otherwise the first connection just hangs and times out.
| Component | Setting |
|---|---|
| HAProxy | server … send-proxy-v2 |
| nginx-stream | proxy_protocol on; on the server block |
| mtg | proxy-protocol-listener = true in its TOML |
| Caddy | listener_wrappers { proxy_protocol { … } tls } |
| sslh | not supported (use transparent: true instead) |
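For intuition, the v2 binary header can be built and parsed in a few lines. A hedged Python sketch of the minimal TCP-over-IPv4 case (the real encoders live inside HAProxy, mtg, and Caddy; the addresses below are documentation examples):

```python
import socket
import struct

# PROXY protocol v2, TCP over IPv4 — the case used in this stack.
SIG = b"\r\n\r\n\x00\r\nQUIT\n"  # fixed 12-byte signature

def build_pp2(src: str, dst: str, sport: int, dport: int) -> bytes:
    """The header HAProxy's send-proxy-v2 prepends to each connection."""
    addrs = (socket.inet_aton(src) + socket.inet_aton(dst)
             + struct.pack("!HH", sport, dport))
    # 0x21 = version 2 + PROXY command, 0x11 = AF_INET + STREAM
    return SIG + bytes([0x21, 0x11]) + struct.pack("!H", len(addrs)) + addrs

def parse_pp2(data: bytes):
    """What a backend must strip before it can read the real payload."""
    if data[:12] != SIG or data[12] != 0x21:
        raise ValueError("not a PROXY v2 header")
    alen = struct.unpack("!H", data[14:16])[0]
    src = socket.inet_ntoa(data[16:20])
    sport, dport = struct.unpack("!HH", data[24:28])
    return src, sport, dport, 16 + alen   # offset where the payload starts
```

The "both ends must agree" failure mode is visible here: a backend that does not expect the header will try to parse the 12-byte signature as the start of a TLS record and stall.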
If you do not need real client IPs at the backend, leave PROXY protocol off everywhere. It is one less thing to mismatch. The minimal HAProxy config in section 1.1 has it disabled; the docker-compose variant in section 1.4 has it enabled to mirror the reference deployment.
MTProto sessions sit idle for minutes. Default L7 timeouts (30–60s) will kill them. Use:
- HAProxy: timeout client 1h, timeout server 1h, timeout tunnel 1h, option tcpka.
- nginx-stream: proxy_timeout 1h;.
- sslh: not relevant — sslh disconnects from the routing socket once the connection is spliced.
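At the socket level, option tcpka boils down to a handful of setsockopt calls. A Python sketch, assuming Linux (the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT constants are Linux-specific, and the values here are illustrative, not HAProxy defaults):

```python
import socket

def enable_keepalive(sock: socket.socket,
                     idle: int = 300, interval: int = 60, count: int = 5):
    """Send TCP keepalive probes on an otherwise idle connection so
    stateful middleboxes don't expire the mapping mid-session."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific knobs: seconds of idle before the first probe,
    # seconds between probes, probes before the peer is declared dead.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
```

The long proxy timeouts keep the router from killing idle sessions; the keepalives keep NAT and conntrack tables in between from doing the same.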
If you expect bursts (e.g. a popular channel posting), bump
net.core.somaxconn and net.ipv4.tcp_max_syn_backlog at the kernel:
sudo sysctl -w net.core.somaxconn=4096
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=4096

(Persist both in /etc/sysctl.d/ so they survive a reboot.) Then set maxconn 20000 in HAProxy or worker_connections 4096; in nginx.
Linux honours net.ipv6.bindv6only. On most distros this is 0, so
binding [::]:443 accepts both v4 and v6. But:
- HAProxy: be explicit — list both bind *:443 and bind :::443. Some kernels and namespaces set bindv6only=1 and you'll silently miss IPv4.
- nginx: the same — listen 443; listen [::]:443;.
- Docker bridge networks are v4-only by default. For IPv6 you need enable_ipv6: true on the network and an ip6tables policy.
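The sysctl is just the default for a per-socket flag. A Python sketch showing the IPV6_V6ONLY knob that sits behind bindv6only (assumes the host has a working IPv6 stack; port 0 lets the kernel pick a free port for the demo):

```python
import socket

# Bind one socket that accepts both IPv4 and IPv6 regardless of the
# net.ipv6.bindv6only default, by clearing IPV6_V6ONLY explicitly.
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
s.bind(("::", 0))
addr = s.getsockname()  # ('::', <port>, ...) — IPv4 peers arrive as ::ffff:a.b.c.d
```

HAProxy's two explicit bind lines achieve the same coverage with two sockets instead, which is why they work even where bindv6only=1.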
Verify both:
ss -tlnp 'sport = :443'
# Should show LISTEN on 0.0.0.0:443 and [::]:443

- Bridge mode (default): containers see Docker's RFC1918 source IPs unless you enable PROXY protocol end-to-end. Useful when you want the host to NAT and the backends to be unreachable from outside.
- Host mode (network_mode: host): the container shares the host network stack, real client IPs are visible without PROXY protocol, but you lose port isolation and expose: no longer matters. Pick this if PROXY protocol is too much hassle and you do not need per-service IP fencing.
- A frequent mistake is binding mtg to 0.0.0.0:3128 inside a bridged container while also publishing the port (-p 3128:3128). That exposes mtg directly to the internet, bypassing the SNI router. Use expose: ["3128"] (compose) so the port is reachable only inside the Docker network.
- Compose's default bridge driver creates fresh subnets per project. If you whitelist client networks in Caddy's allow directive, pin the subnet in docker-compose.yml (networks: ipam: config:) and whitelist that exact CIDR instead of the broad RFC1918 ranges.
You already run mtg directly on :443 and want to slot an SNI router
in front of it. Plan:
1. Keep mtg on :443 for now. Bring up the new web backend on a spare port that is not exposed to the internet (say :8443 on localhost). Issue a cert via DNS-01 (acme.sh --dns dns_…) so :80 stays free for mtg.
2. Move mtg to a new internal port. Edit mtg-config.toml: bind-to = "127.0.0.1:3128". Restart mtg. Existing sessions drop — this is the only unavoidable disruption (typically ≤ 1 second of reconnect for clients). If your domain pattern allows, do this in a low-traffic window.
3. Start the SNI router on :443. HAProxy/nginx/sslh listening on :443, routing the magic SNI to 127.0.0.1:3128 and the default to 127.0.0.1:8443. Verify with the openssl s_client paths from section 6.
4. Verify Telegram connectivity with at least one real client before you tear anything down.
5. Remove the public bind from mtg and confirm ss -tlnp 'sport = :3128' shows only 127.0.0.1.
If you cannot tolerate even a one-second drop:
- Bind mtg to both the public IP and localhost (bind-to = "0.0.0.0:3128" plus a second loopback bind via two mtg instances pointing at the same secret). Live clients keep talking to the public :443 instance.
- Bring up the SNI router on :8443 first, smoke-test it end-to-end, then swap firewall rules (iptables -t nat -I PREROUTING …) so :443 redirects to :8443 atomically. Drop the public bind on mtg afterwards.
A cleaner approach if you have a second IP available: put the SNI router on the new IP, point DNS at it, wait for TTL to expire, then retire the old IP. No connection drop at all because the old and new endpoints coexist for the TTL window.
- Surviving Active Probing — why the SNI router exists.
- contrib/sni-router/ — the reference docker-compose deployment this page is based on.
- HAProxy 2.8 docs: https://docs.haproxy.org/2.8/configuration.html
- nginx stream module: https://nginx.org/en/docs/stream/ngx_stream_core_module.html
- nginx ssl_preread: https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
- sslh: https://github.com/yrutschle/sslh