Alexey Dolotov edited this page Apr 25, 2026 · 1 revision

This page is the operational companion to Surviving Active Probing. Read that first for the threat model. In short: an SNI router on port 443 that forwards the magic-SNI traffic to mtg and routes everything else to a real web server makes the host indistinguishable from any HTTPS server under both passive DPI and active probing. The router operates at layer 4 — it inspects the unencrypted SNI in the TLS ClientHello and proxies the raw byte stream. TLS is terminated by the backend (mtg or the web server), never by the router.

              ┌──────────────────┐
 :443  ──────>│   SNI router     │
              │ (HAProxy / nginx │
              │  stream / sslh)  │
              └──┬───────────┬───┘
    SNI match    │           │  default
                 v           v
           ┌─────────┐  ┌─────────┐
           │   mtg   │  │  web    │
           │ :3128   │  │ :8443   │
           │ FakeTLS │  │ real TLS│
           └─────────┘  └─────────┘
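The router's whole job at this layer fits in a few lines: read the first TLS record, walk the ClientHello to the server_name extension, and pick a backend by the name found there. HAProxy, nginx and sslh all do this natively; the sketch below (illustrative only, with a hypothetical `route` helper and synthetic ClientHello builder) shows the mechanics:

```python
import struct

def extract_sni(data):
    """Pull the server_name out of a raw TLS ClientHello, or return None."""
    # TLS record header: content type 0x16 (handshake); handshake type 0x01.
    if len(data) < 6 or data[0] != 0x16 or data[5] != 0x01:
        return None
    p = 9                                            # skip record + handshake headers
    p += 2 + 32                                      # client_version + random
    p += 1 + data[p]                                 # session_id
    p += 2 + struct.unpack(">H", data[p:p+2])[0]     # cipher_suites
    p += 1 + data[p]                                 # compression_methods
    end = p + 2 + struct.unpack(">H", data[p:p+2])[0]
    p += 2
    while p + 4 <= end:                              # walk the extensions
        ext_type, ext_len = struct.unpack(">HH", data[p:p+4])
        p += 4
        if ext_type == 0:                            # extension 0 = server_name
            name_len = struct.unpack(">H", data[p+3:p+5])[0]
            return data[p+5:p+5+name_len].decode("ascii")
        p += ext_len
    return None

def route(data, magic_sni="proxy.example.com"):
    """Pick a backend the way the SNI router does."""
    if extract_sni(data) == magic_sni:
        return ("mtg", "127.0.0.1:3128")
    return ("web", "127.0.0.1:8443")

def fake_client_hello(host):
    """Minimal ClientHello carrying only an SNI extension (for demonstration)."""
    name = host.encode("ascii")
    sni = (struct.pack(">H", len(name) + 3) + b"\x00"
           + struct.pack(">H", len(name)) + name)
    ext = b"\x00\x00" + struct.pack(">H", len(sni)) + sni
    body = (b"\x03\x03" + b"\x00" * 32 + b"\x00"     # version, random, empty sid
            + b"\x00\x02\x13\x01" + b"\x01\x00"      # one cipher suite, null comp
            + struct.pack(">H", len(ext)) + ext)
    hs = b"\x01" + len(body).to_bytes(3, "big") + body
    return b"\x16\x03\x01" + struct.pack(">H", len(hs)) + hs

print(route(fake_client_hello("proxy.example.com")))   # ('mtg', '127.0.0.1:3128')
print(route(fake_client_hello("random.example.org")))  # ('web', '127.0.0.1:8443')
```

Note that the parser never needs the certificate or keys: the SNI travels in cleartext before any encryption starts, which is precisely why a layer-4 router can act on it.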

All configs in this guide were syntax-checked against:

| Tool | Version |
| --- | --- |
| HAProxy | 2.8.16 (Ubuntu 24.04 LTS) |
| nginx | 1.24.0 with ngx_stream_module |
| sslh | 1.22c (sslh-fork) |
| mtg | 2.x |
| Caddy | 2.x (caddy:alpine) |

Pin these or newer. Older HAProxy (< 1.8) and older nginx (< 1.11.5) do not have the SNI-preread features used here.


1. HAProxy

HAProxy in TCP mode is the reference choice — fast, well documented, deterministic logging, hot reloads without dropping connections.

1.1 Minimal config (bare metal, no PROXY protocol)

global
    log /dev/log local0 info
    maxconn 4096

defaults
    log     global
    mode    tcp
    option  tcplog
    timeout connect 5s
    timeout client  1h          # MTProto sessions are long-lived
    timeout server  1h

frontend tls
    bind *:443
    bind :::443
    # Wait up to 5s for the ClientHello before we route.
    tcp-request inspect-delay 5s
    # Only act once we have a full TLS ClientHello (handshake type 1).
    tcp-request content accept if { req_ssl_hello_type 1 }

    # Replace proxy.example.com with the domain encoded in your mtg secret.
    use_backend mtg if { req_ssl_sni -i proxy.example.com }

    # Everything else (browsers, probes, scanners) goes to the web server.
    default_backend web

backend mtg
    server mtg 127.0.0.1:3128

backend web
    server web 127.0.0.1:8443

Validate the file before reloading:

haproxy -c -f /etc/haproxy/haproxy.cfg

1.2 Production config

Adds: dual-stack (IPv4 + IPv6) binds, ACME passthrough on :80, TCP keepalive, a structured TCP log line that records the routed SNI, and backend health checks.

global
    log /dev/log local0 info
    log /dev/log local1 notice
    maxconn 20000
    nbthread 4
    user haproxy
    group haproxy
    daemon
    # Runtime API socket, used by the diagnostics in section 6.
    stats socket /run/haproxy/admin.sock mode 660 level admin

defaults
    log     global
    mode    tcp
    option  tcplog
    option  dontlognull
    option  tcpka                  # forward TCP keepalive to backend
    timeout connect 5s
    timeout client  1h
    timeout server  1h
    timeout client-fin 30s
    timeout tunnel  1h             # idle cap once the tunnel is established
    retries 3

# --- HTTP :80 — ACME challenges + redirect ----------------------------------

frontend http
    bind *:80
    bind :::80
    mode http
    option httplog

    # Pass /.well-known/acme-challenge/* straight to the web server so
    # Let's Encrypt HTTP-01 validation works.
    acl is_acme path_beg /.well-known/acme-challenge/
    http-request redirect scheme https code 301 unless is_acme
    use_backend web_acme if is_acme

# --- TLS :443 — SNI-based routing -------------------------------------------

frontend tls
    bind *:443
    bind :::443

    tcp-request inspect-delay 5s
    # Capture the SNI for logging once a full ClientHello is buffered.
    # (ssl_fc_sni is unavailable here — this frontend never terminates
    # TLS — so use the preread sample req.ssl_sni instead.)
    tcp-request content capture req.ssl_sni len 100 if { req_ssl_hello_type 1 }
    # Hard-reject anything that is not a TLS ClientHello.
    tcp-request content reject  if !{ req_ssl_hello_type 1 }
    tcp-request content accept  if  { req_ssl_hello_type 1 }

    # Custom log-format that records the SNI we routed on.
    log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc sni=%[capture.req.hdr(0)]"

    use_backend mtg if { req_ssl_sni -i proxy.example.com }
    default_backend web

backend mtg
    option tcp-check
    server mtg 127.0.0.1:3128 check inter 30s

backend web
    option tcp-check
    server web 127.0.0.1:8443 check inter 30s

backend web_acme
    mode http
    server web 127.0.0.1:8080

1.3 systemd unit (bare metal)

The Ubuntu/Debian package ships its own unit at /lib/systemd/system/haproxy.service. After installing (apt install haproxy) and dropping the config above into /etc/haproxy/haproxy.cfg:

sudo systemctl enable --now haproxy
sudo systemctl reload  haproxy   # zero-downtime reload after edits
sudo journalctl -u haproxy -f

If you need port 80/443 capability without running as root, the upstream unit already does the right thing (AmbientCapabilities=CAP_NET_BIND_SERVICE).

1.4 docker-compose form

The reference setup at contrib/sni-router/ runs HAProxy, mtg, and Caddy in a single docker compose stack and uses PROXY protocol v2 between HAProxy and the backends so each backend sees the real client IP. See section 7.2 for the trade-off.

docker-compose.yml:

services:
  haproxy:
    image: haproxy:lts-alpine        # 2.8 LTS at the time of writing
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    depends_on: [mtg, web]
    restart: unless-stopped

  mtg:
    image: nineseconds/mtg:2
    volumes:
      - ./mtg-config.toml:/config/config.toml:ro
    expose: ["3128"]
    restart: unless-stopped

  web:
    image: caddy:alpine
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - ./www:/srv:ro
    expose: ["80", "8443"]
    environment:
      DOMAIN: ${DOMAIN:?set DOMAIN in .env}
    restart: unless-stopped

volumes:
  caddy_data:

haproxy.cfg for the docker variant — uses Docker DNS names (mtg, web) instead of 127.0.0.1:

global
    log stdout format raw local0 info
    maxconn 4096

defaults
    log     global
    mode    tcp
    option  tcplog
    timeout connect 5s
    timeout client  1h
    timeout server  1h

frontend http
    bind *:80
    mode http
    acl is_acme path_beg /.well-known/acme-challenge/
    http-request redirect scheme https code 301 unless is_acme
    use_backend web_acme if is_acme

frontend tls
    bind *:443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend mtg if { req_ssl_sni -i proxy.example.com }
    default_backend web

backend mtg
    server mtg mtg:3128 send-proxy-v2

backend web
    server web web:8443 send-proxy-v2

backend web_acme
    mode http
    server web web:80

mtg-config.toml:

secret = "PASTE_YOUR_SECRET_HERE"
bind-to = "0.0.0.0:3128"

# Required because HAProxy prepends a PROXY protocol v2 header.
# Drop this line AND the `send-proxy-v2` from haproxy.cfg if you do
# not need real client IPs at the mtg layer.
proxy-protocol-listener = true

[defense.anti-replay]
enabled = true
max-size = "1mib"
error-rate = 0.001

Caddyfile:

{
    http_port 80
    https_port 8443
    servers :8443 {
        listener_wrappers {
            proxy_protocol {
                timeout 5s
                # Tighten this to your compose subnet in production.
                allow 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
            }
            tls
        }
    }
}

{$DOMAIN} {
    root * /srv
    file_server
}

Generate the secret and bring the stack up:

docker run --rm nineseconds/mtg:2 generate-secret --hex YOUR_DOMAIN
# paste into mtg-config.toml, also update the SNI in haproxy.cfg

DOMAIN=YOUR_DOMAIN docker compose up -d
docker compose exec mtg mtg access /config/config.toml
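The domain really is embedded in the secret, which is why the SNI in haproxy.cfg must match it exactly: a FakeTLS secret is the byte 0xee, a 16-byte key, and the fronting domain, hex-encoded (base64 encodings also exist but are not covered by this sketch). A quick illustrative decoder for checking which SNI a secret expects:

```python
import binascii

def decode_faketls_secret(hex_secret):
    """Split an ee-prefixed (FakeTLS) MTProto secret into key and domain."""
    raw = binascii.unhexlify(hex_secret)
    if raw[0] != 0xEE:
        raise ValueError("not a FakeTLS (ee-prefixed) secret")
    key, domain = raw[1:17], raw[17:]
    return key.hex(), domain.decode("ascii")

# Dummy all-zero key for illustration — the domain part is what must
# match the use_backend SNI rule in haproxy.cfg.
secret = "ee" + "00" * 16 + b"proxy.example.com".hex()
key, domain = decode_faketls_secret(secret)
print(domain)   # proxy.example.com
```

If the decoded domain and the routed SNI disagree, Telegram clients send a ClientHello the router forwards to the web backend instead of mtg, and connections fail silently.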

2. nginx with the stream module

nginx in stream mode plus ssl_preread does the same job as HAProxy in TCP mode. It is convenient if you already run nginx for the web backend — one process, one config, one reload.

Requires the stream module (Ubuntu/Debian: apt install libnginx-mod-stream). Verify with nginx -V 2>&1 | tr ' ' '\n' | grep stream.

# /etc/nginx/nginx.conf
load_module modules/ngx_stream_module.so;     # if not auto-loaded

worker_processes auto;
events { worker_connections 4096; }

stream {
    log_format sni '$remote_addr [$time_local] '
                   'sni=$ssl_preread_server_name '
                   'upstream=$upstream_addr '
                   'sent=$bytes_sent recv=$bytes_received '
                   'duration=$session_time';
    access_log /var/log/nginx/stream.log sni;

    # Map the SNI to a named upstream.  `default` catches anything
    # that doesn't match (browsers, probes).
    map $ssl_preread_server_name $upstream {
        proxy.example.com  mtg_backend;
        default            web_backend;
    }

    upstream mtg_backend { server 127.0.0.1:3128; }
    upstream web_backend { server 127.0.0.1:8443; }

    server {
        listen 443;
        listen [::]:443;
        proxy_pass $upstream;
        ssl_preread on;            # parse the ClientHello, do NOT terminate
        proxy_timeout 1h;          # MTProto sessions
        proxy_connect_timeout 5s;
    }
}

http {
    server {
        listen 80;
        listen [::]:80;
        location /.well-known/acme-challenge/ { root /var/www/acme; }
        location / { return 301 https://$host$request_uri; }
    }
}

Validate and reload:

sudo nginx -t
sudo systemctl reload nginx

Note: nginx-stream supports proxy_protocol on; if you want PROXY protocol forwarding to mtg — the trade-off is the same as with HAProxy (see 7.2).


3. sslh

sslh is a simple multiplexer that probes the first bytes of each connection and routes by protocol type rather than by rich L4 matching rules. SNI-based TLS routing works, but the design differs from HAProxy and nginx:

  • sslh listens on one port and forks (sslh-fork) or selects (sslh-select) per connection. Each connection is matched against the first probe in order that succeeds, so the order of the protocols list matters.
  • Logging is sparse — basically connection counts and protocol names.
  • No hot reload: a config change requires a process restart, which drops in-flight connections.
  • Good when you also want to multiplex SSH/OpenVPN/HTTP on port 443 alongside mtg.

/etc/sslh.cfg:

verbose: 0;
foreground: false;
inetd: false;
numeric: false;
transparent: false;
timeout: 2;
user: "sslh";
pidfile: "/run/sslh.pid";
syslog_facility: "auth";

listen:
(
    { host: "0.0.0.0"; port: "443"; },
    { host: "::";       port: "443"; }
);

protocols:
(
    # Specific SNI first — Telegram clients with the magic SNI go to mtg.
    {
        name: "tls";
        host: "127.0.0.1";
        port: "3128";
        sni_hostnames: [ "proxy.example.com" ];
        log_level: 0;
    },
    # Catch-all TLS goes to the web backend (real cert).
    {
        name: "tls";
        host: "127.0.0.1";
        port: "8443";
        log_level: 0;
    }
);

Run via the packaged unit:

sudo systemctl enable --now sslh
# Or test interactively:
sudo sslh -F /etc/sslh.cfg -f -v 1

sslh has no native PROXY protocol support, so client IPs are lost on the backend unless you enable transparent mode (transparent: true;) which requires CAP_NET_ADMIN and matching iptables/policy-routing rules — out of scope here.


4. Comparison

| Feature | HAProxy | nginx-stream | sslh |
| --- | --- | --- | --- |
| Layer-4 SNI routing | yes (req_ssl_sni) | yes (ssl_preread) | yes (sni_hostnames) |
| ALPN-based routing | yes (req.ssl_alpn) | yes (ssl_preread_alpn_protocols) | yes |
| Hot reload (zero connection drop) | yes (systemctl reload) | yes (worker rotation) | no |
| PROXY protocol v1/v2 to backend | yes (send-proxy[-v2]) | yes (proxy_protocol on) | no (transparent only) |
| TCP health checks | yes | passive only (active checks are commercial) | no |
| Per-connection log line | rich (tcplog, custom) | rich (log_format) | minimal (syslog) |
| Memory per 10k idle conns | ~50 MB | ~80 MB | ~5 MB (select; fork is per-conn) |
| CPU at 10k req/s SNI peek | very low | very low | low |
| Config complexity | medium | medium-low | low |
| Multiplex non-TLS protocols | with mode tcp ACLs | limited | first-class |
| Best fit | dedicated SNI router | already running nginx | tiny boxes, SSH+TLS mux |

If in doubt, pick HAProxy. It is the only option here with hot reloads and a deterministic, well-documented log line format — both matter operationally.


5. Certificate strategy

The web backend (Caddy / nginx / whatever) terminates TLS itself, so that is where the certificate lives. The router never touches keys. Three reasonable options:

5.1 Caddy + automatic Let's Encrypt (used in the reference)

Caddy obtains and renews a cert automatically via HTTP-01. HAProxy forwards /.well-known/acme-challenge/* from :80 to Caddy on :80. Nothing else to configure. Best for greenfield deployments.

5.2 nginx + acme.sh

If your web backend is nginx, run acme.sh as a per-user daemon and use the webroot mode:

# Initial issuance (router is up, /.well-known passthrough works)
acme.sh --issue -d proxy.example.com -w /var/www/acme \
        --server letsencrypt

# Install into nginx and reload
acme.sh --install-cert -d proxy.example.com \
        --key-file       /etc/nginx/certs/proxy.key  \
        --fullchain-file /etc/nginx/certs/proxy.crt  \
        --reloadcmd      "systemctl reload nginx"

acme.sh also supports DNS-01 (no need to expose :80) for any DNS provider with an API. Use that if your :80 is firewalled.

5.3 Manual certificates

If you already have a wildcard or a paid cert, drop the PEMs into the web backend and configure it to terminate TLS as usual. The router needs no certificate at all.


6. Verification

Each command tests one path through the router. Run them from a host that is not behind the same firewall.

# Path A: random SNI must reach the real web server, not mtg.
openssl s_client -connect YOUR_IP:443 \
                 -servername random.example.org -showcerts </dev/null
# Expect: a certificate for your real domain (or whatever Caddy serves).

# Path B: matching SNI without MTProto -> mtg's domain fronting kicks in,
# you should still see your domain's certificate (mtg relays the
# upstream handshake).
openssl s_client -connect YOUR_IP:443 \
                 -servername proxy.example.com </dev/null
# Expect: a certificate for proxy.example.com.

# Path C: HTTP redirect / ACME passthrough.
curl -v http://YOUR_IP/.well-known/acme-challenge/test
curl -vI http://YOUR_IP/                # expect 301 to https://

# Path D: real Telegram path - use a real client, then check the proxy
# is actually routing connections via mtg's stats endpoint.
docker compose exec mtg mtg access /config/config.toml

# Path E: mtg's own self-test (DNS / SNI / fronting reachability).
mtg doctor /etc/mtg/config.toml

Diagnostics on the router:

# HAProxy: live stats + per-frontend connection counts.
# Requires a `stats socket` line in the global section (the
# Debian/Ubuntu package config ships one at /run/haproxy/admin.sock).
echo "show info" | sudo socat stdio /run/haproxy/admin.sock

# nginx-stream: tail the access log to see SNI -> upstream mapping
sudo tail -f /var/log/nginx/stream.log

# sslh: count routings per protocol
sudo journalctl -u sslh -n 200 | grep 'connection from'
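When auditing router logs, a small tally of routed SNIs surfaces both misrouted domains and probe noise at a glance. A sketch that works on any log line carrying an `sni=` token (as the HAProxy log-format and the nginx `sni` log_format in this guide both emit); the sample lines are fabricated for illustration:

```python
import re
from collections import Counter

SNI_RE = re.compile(r"\bsni=(\S+)")

def tally_snis(lines):
    """Count how often each SNI appears in router log lines."""
    counts = Counter()
    for line in lines:
        m = SNI_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    "203.0.113.9:51324 [10/May/2026:12:00:01] tls mtg/mtg 1/0/843 5321 -- sni=proxy.example.com",
    "198.51.100.4:44021 [10/May/2026:12:00:02] tls web/web 1/0/12 990 -- sni=random.example.org",
    "198.51.100.4:44022 [10/May/2026:12:00:03] tls web/web 1/0/9 717 -- sni=random.example.org",
]
for sni, n in tally_snis(sample).most_common():
    print(sni, n)
# random.example.org 2
# proxy.example.com 1
```

A steady stream of one-off random SNIs hitting the web backend is normal internet scanner background; the magic SNI showing up in web backend lines, however, means the routing rule and the secret's domain have drifted apart.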

7. Common pitfalls

7.1 Forwarding vs terminating TLS — never terminate

The router must operate at layer 4. If you accidentally configure TLS termination on the router (HAProxy mode http with a bind *:443 ssl crt …, nginx listen 443 ssl;, etc.), the router presents its own certificate to every client, mtg's FakeTLS handshake breaks, and domain fronting is lost. Symptoms: Telegram clients fail to connect, openssl s_client -servername proxy.example.com shows a cert from the router not from the upstream.

The configs in this guide use mode tcp (HAProxy) or ssl_preread on; (nginx) precisely to avoid this.

7.2 PROXY protocol — opt in carefully

PROXY protocol prepends a small text/binary header to every connection so the backend sees the original client IP. Useful for logging and abuse handling, but every speaker on both ends must agree, otherwise the first connection just hangs and times out.

| Component | Setting |
| --- | --- |
| HAProxy | server … send-proxy-v2 |
| nginx-stream | proxy_protocol on; on the server block |
| mtg | proxy-protocol-listener = true in its TOML |
| Caddy | listener_wrappers { proxy_protocol { … } tls } |
| sslh | not supported (use transparent: true instead) |

If you do not need real client IPs at the backend, leave PROXY protocol off everywhere. It is one less thing to mismatch. The minimal HAProxy config in section 1.1 has it disabled; the docker-compose variant in section 1.4 has it enabled to mirror the reference deployment.
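To see concretely why a mismatch hangs: the sender prepends a binary header that the receiver must consume before any TLS bytes arrive; a receiver that is not expecting it treats those bytes as a (garbled) ClientHello. A sketch of the v2 header for TCP-over-IPv4, following the published PROXY protocol layout (illustrative, not a complete implementation — v1 text headers, IPv6 and LOCAL commands are omitted):

```python
import socket
import struct

SIG = b"\r\n\r\n\x00\r\nQUIT\n"   # fixed 12-byte PROXY v2 signature

def build_v2_header(src_ip, src_port, dst_ip, dst_port):
    """PROXY protocol v2 header: PROXY command, TCP over IPv4."""
    addrs = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
             + struct.pack(">HH", src_port, dst_port))
    #        ver=2/cmd=PROXY  fam=INET/proto=STREAM  address block length
    return SIG + b"\x21" + b"\x11" + struct.pack(">H", len(addrs)) + addrs

def parse_v2_header(data):
    """Return ((src_ip, src_port), payload_offset) or raise ValueError."""
    if data[:12] != SIG or data[12] != 0x21 or data[13] != 0x11:
        raise ValueError("not a PROXY v2 TCP4 header")
    alen = struct.unpack(">H", data[14:16])[0]
    src_ip = socket.inet_ntoa(data[16:20])
    src_port = struct.unpack(">H", data[24:26])[0]
    return (src_ip, src_port), 16 + alen

hdr = build_v2_header("203.0.113.9", 51324, "10.0.0.5", 3128)
src, off = parse_v2_header(hdr + b"\x16\x03\x01...")   # header, then TLS bytes
print(src, off)   # ('203.0.113.9', 51324) 28
```

The 28 bytes before the real ClientHello are exactly what an unconfigured backend would misparse — hence "every speaker on both ends must agree".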

7.3 Backlog and timeouts for long-lived MTProto

MTProto sessions sit idle for minutes. Default L7 timeouts (30–60s) will kill them. Use:

  • HAProxy: timeout client 1h, timeout server 1h, timeout tunnel 1h, option tcpka.
  • nginx-stream: proxy_timeout 1h;.
  • sslh: no idle timeout applies to established connections — its timeout: setting only bounds the initial protocol probe, so long-lived sessions are unaffected.

If you expect bursts (e.g. a popular channel posting), bump net.core.somaxconn and net.ipv4.tcp_max_syn_backlog at the kernel:

sudo sysctl -w net.core.somaxconn=4096
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=4096

Match these at the proxy layer: maxconn 20000 in HAProxy, worker_connections 4096; in nginx.

7.4 IPv4 vs IPv6 listen rules

Linux honours net.ipv6.bindv6only. On most distros this is 0, so binding [::]:443 accepts both v4 and v6. But:

  • HAProxy: be explicit — list both bind *:443 and bind :::443. Some kernels and namespaces set bindv6only=1 and you'll silently miss IPv4.
  • nginx: the same — listen 443; listen [::]:443;.
  • Docker bridge networks are v4-only by default. For IPv6 you need enable_ipv6: true on the network and an ip6tables policy.

Verify both:

ss -tlnp 'sport = :443'
# Should show LISTEN on 0.0.0.0:443 and [::]:443
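The bindv6only behaviour can also be demonstrated directly: an AF_INET6 socket bound to :: with IPV6_V6ONLY cleared accepts IPv4 clients as v4-mapped addresses, which is what a lone bind :::443 relies on. A minimal sketch (assumes Linux with loopback available and bindv6only semantics as described):

```python
import socket

# Dual-stack listener: IPv6 socket, V6ONLY explicitly cleared
# (equivalent to net.ipv6.bindv6only = 0 for this one socket).
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))          # port 0: any free port, for the demo
srv.listen(1)
port = srv.getsockname()[1]

# A plain IPv4 client connects to the very same listener...
cli = socket.create_connection(("127.0.0.1", port))
conn, addr = srv.accept()
# ...and shows up as a v4-mapped IPv6 address.
print(addr[0])               # ::ffff:127.0.0.1 on a dual-stack Linux host
conn.close(); cli.close(); srv.close()
```

Setting IPV6_V6ONLY to 1 instead makes the same connect fail — which is exactly the "silently miss IPv4" failure mode the explicit dual binds guard against.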

7.5 Docker networking gotchas

  • Bridge mode (default): containers see Docker's RFC1918 source IPs unless you enable PROXY protocol end-to-end. Useful when you want the host to NAT and the backends to be unreachable from outside.
  • Host mode (network_mode: host): the container shares the host network stack, real client IPs are visible without PROXY protocol, but you lose port isolation and expose: no longer matters. Pick this if PROXY protocol is too much hassle and you do not need per-service IP fencing.
  • A frequent mistake is binding mtg to 0.0.0.0:3128 inside a bridged container while also publishing the port (-p 3128:3128). That exposes mtg directly to the internet, bypassing the SNI router. Use expose: ["3128"] (compose) so the port is reachable only inside the Docker network.
  • Compose's default bridge driver creates fresh subnets per project. If you whitelist client networks in Caddy's allow directive, pin the subnet in docker-compose.yml (networks: ipam: config:) and whitelist that exact CIDR instead of the broad RFC1918 ranges.

8. Migrating a live proxy without downtime

You already run mtg directly on :443 and want to slot an SNI router in front of it. Plan:

  1. Keep mtg on :443 for now. Bring up the new web backend on a spare port that is not exposed to the internet (say :8443 on localhost). Issue a cert via DNS-01 (acme.sh --dns dns_…) so :80 stays free for mtg.
  2. Move mtg to a new internal port. Edit mtg-config.toml: bind-to = "127.0.0.1:3128". Restart mtg. Existing sessions drop — this is the only unavoidable disruption (typically ≤ 1 second of reconnect for clients). If your domain pattern allows, do this in a low-traffic window.
  3. Start the SNI router on :443. HAProxy/nginx/sslh listening on :443, routing the magic SNI to 127.0.0.1:3128 and the default to 127.0.0.1:8443. Verify with the openssl s_client paths from section 6.
  4. Verify Telegram connectivity with at least one real client before you tear anything down.
  5. Remove the public bind from mtg and confirm ss -tlnp 'sport = :3128' shows only 127.0.0.1.

If you cannot tolerate even a one-second drop:

  • Run two mtg instances with the same secret: leave the existing one bound to the public :443 and start a second bound to 127.0.0.1:3128. Live clients keep talking to the public :443 instance.
  • Bring up the SNI router on :8443 first, smoke-test it end-to-end, then swap firewall rules (iptables -t nat -I PREROUTING …) so :443 redirects to :8443 atomically. Drop the public bind on mtg afterwards.

A cleaner approach if you have a second IP available: put the SNI router on the new IP, point DNS at it, wait for TTL to expire, then retire the old IP. No connection drop at all because the old and new endpoints coexist for the TTL window.


See also