First of all, thank you very much for the great new version 2.7.0 – and for all the continuous work you put into OpenVPN!
We are currently testing it on an IPFire platform and really appreciate the improvements, especially DCO support.
During our tests, we encountered a scenario that we believe others may also face, and we would like to suggest an enhancement that would restore a long‑documented use‑case.
Summary
OpenVPN’s official documentation describes a common use‑case: assigning different IP ranges to different user groups so that firewall rules can be applied based on the client’s virtual IP address. This works perfectly with topology net30 but is problematic in topology subnet (recommended and required for DCO). This issue requests a native way to give clients IP addresses from per‑CCD pools, thereby restoring that documented functionality.
Concrete Use‑Case (our setup)
We run an OpenVPN server on a firewall (IPFire) with DCO enabled. The global server config uses:
```
server 10.1.1.0 255.255.255.0
topology subnet
client-config-dir /var/ipfire/ovpn/ccd
```
We need different groups of clients to receive tunnel IPs from different subnets:
Group A → 10.2.1.0/24
Group B → 10.2.2.0/24
etc.
This allows simple iptables rules based on the source IP range, e.g.:
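A sketch of such rules (the tun interface name and the LAN subnet 192.168.0.0/24 are assumptions for illustration):

```
# Group A (10.2.1.0/24) may reach the LAN; Group B (10.2.2.0/24) may not
iptables -A FORWARD -i tun0 -s 10.2.1.0/24 -d 192.168.0.0/24 -j ACCEPT
iptables -A FORWARD -i tun0 -s 10.2.2.0/24 -d 192.168.0.0/24 -j DROP
```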
Current Problem with topology subnet
When we try to assign a fixed IP from a separate subnet in a CCD file (e.g., for client1):
```
ifconfig-push 10.2.1.2 255.255.255.0
```
the server logs:
```
MULTI ERROR: primary virtual IP for client1 (10.2.1.2) violates tunnel network/netmask constraint (10.1.1.0/255.255.255.0)
```
The check is performed in multi_client_connect_late_setup (src/openvpn/multi.c) by the function ifconfig_push_constraint_satisfied:
```c
if (!ifconfig_push_constraint_satisfied(&mi->context))
{
    ...
    msg(D_MULTI_ERRORS,
        "MULTI ERROR: primary virtual IP for %s (%s) "
        "violates tunnel network/netmask constraint (%s/%s)",
        ...);
}
```
ifconfig_push_constraint_satisfied compares the client IP (push_ifconfig_local) with the network defined by the --server directive (push_ifconfig_constraint_network / push_ifconfig_constraint_netmask). Because the client IP lies outside that network, the error is triggered.
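The core of that check amounts to a masked network comparison. A minimal standalone sketch (not the actual OpenVPN code; addresses in host byte order):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the constraint: a pushed client IP satisfies it only if the IP
 * lies inside the network defined by the --server directive. */
static bool
constraint_satisfied(uint32_t client_ip, uint32_t network, uint32_t netmask)
{
    return (client_ip & netmask) == (network & netmask);
}
```

With our setup, 10.2.1.2 fails this check against 10.1.1.0/24, which is exactly the MULTI ERROR above.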
However, the connection still succeeds! The error is treated as non‑fatal; the server learns the route to the client IP via multi_learn_addr and sends the PUSH_REPLY with the correct IP. This proves that OpenVPN’s internal routing and multi‑client infrastructure already supports clients with IPs outside the global pool – only the constraint check is too strict.
The Workaround (and why it works)
To make the client reach the server (which has IP 10.1.1.1 in the global net), we need to tell the client how to get there. Adding two extra pushes in the CCD file does exactly that:
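Concretely, for client1 in Group A the CCD entry grows to roughly this (the two push lines are reconstructed from the client-side effects listed just below):

```
ifconfig-push 10.2.1.2 255.255.255.0
push "route-gateway 10.2.1.1"
push "route 10.1.1.1 255.255.255.255"
```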
The server sends these pushes inside the PUSH_REPLY. The client then sets:
its tunnel gateway to 10.2.1.1
a host route to the server IP 10.1.1.1 via that gateway
Result: The connection works flawlessly, the client has an IP from the desired separate subnet, and the server can apply iptables rules based on that range.
This workaround demonstrates that OpenVPN already has all the necessary mechanisms – only the initial constraint check needs to be relaxed or made configurable, and the automatic PUSH generation for the required routes is missing.
Proposed Solutions
We see two potential ways to address this:
Option A: Should the global constraint check be relaxed?
Simply change ifconfig_push_constraint_satisfied to always return true (or issue only a warning). Pros: Very simple code change (one line). Cons:
The safety net disappears for everyone; typos in CCD files could lead to hard‑to‑debug routing issues.
Administrators would still need to manually add route-gateway and the route back to the server in each CCD (as in the workaround). This is error‑prone and not self‑documenting.
Option B: Would a new CCD directive (e.g., subnet-pool) make sense?
An optional directive such as:
```
subnet-pool <network> <netmask> [<gateway>]
```
Potential behavior:
The server creates a dedicated dynamic pool for this client from the specified subnet (reusing struct ifconfig_pool and ifconfig_pool_init).
On connection, the client receives the next free IP from that pool – fully automatic, without requiring a static IP in the CCD. This is ideal for group‑based scenarios where firewall rules match entire subnets (e.g., -s 10.2.1.0/24), so individual IPs within the group do not need to be fixed.
The server automatically sets up:
a host route to the client IP (already done today).
pushed options for the client: the correct route-gateway (typically the first IP of the pool) and a host route back to the server’s VPN IP.
The constraint check in multi_client_connect_late_setup is skipped for such clients (or validated against their own pool), eliminating the MULTI ERROR.
Existing ifconfig-push remains fully supported for cases where a truly static, single IP is required – ensuring backward compatibility and flexibility for admins who need fixed IPs.
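To make Option B concrete, here is a minimal standalone sketch of a per-CCD pool. This is illustrative only: the names (ccd_pool, ccd_pool_acquire) are assumptions and do not match OpenVPN's real struct ifconfig_pool API; addresses are in host byte order.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define POOL_MAX 256

/* Hypothetical per-CCD pool: one subnet per client group. */
struct ccd_pool {
    uint32_t network;        /* e.g. 10.2.1.0 */
    uint32_t netmask;        /* e.g. 255.255.255.0 */
    uint32_t gateway;        /* first usable IP, pushed as route-gateway */
    bool used[POOL_MAX];
};

static void
ccd_pool_init(struct ccd_pool *p, uint32_t network, uint32_t netmask)
{
    memset(p, 0, sizeof(*p));
    p->network = network & netmask;
    p->netmask = netmask;
    p->gateway = p->network + 1;     /* reserve .1 as the pool gateway */
    p->used[0] = p->used[1] = true;  /* network address + gateway */
}

/* Hand out the next free host address, or 0 if the pool is exhausted. */
static uint32_t
ccd_pool_acquire(struct ccd_pool *p)
{
    uint32_t hosts = ~p->netmask;    /* highest host offset, e.g. 255 for /24 */
    for (uint32_t off = 2; off < hosts && off < POOL_MAX; off++)
    {
        if (!p->used[off])
        {
            p->used[off] = true;
            return p->network + off;
        }
    }
    return 0;
}

/* The relaxed check: validate the client IP against its own pool
 * instead of the global --server network. */
static bool
pool_constraint_satisfied(const struct ccd_pool *p, uint32_t client_ip)
{
    return (client_ip & p->netmask) == p->network;
}
```

The design point is that the existing global check stays untouched; only clients with a subnet-pool directive would be validated against their own pool.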
Example CCD entry for client1 (using the new dynamic pool):
```
subnet-pool 10.2.1.0 255.255.255.0
# No manual ifconfig-push, no extra route pushes needed
```
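Assuming the pool hands out 10.2.1.2 and reserves 10.2.1.1 as the gateway, the server could then derive the pushed options automatically, roughly like this (illustrative and abbreviated, not the exact wire contents):

```
PUSH_REPLY,route-gateway 10.2.1.1,route 10.1.1.1 255.255.255.255,ifconfig 10.2.1.2 255.255.255.0
```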
Pros:
Clean, declarative configuration – no manual pushes or external scripts required.
The safety net (global constraint check) stays active for all other clients.
Restores the long‑documented group‑based firewall use‑case.
Cons: Slightly larger code change, but well‑isolated and reusable (the existing pool code can be leveraged).
Recommendation
We believe Option B might be the cleaner, more maintainable, and future‑proof approach. It adds a clear, self‑documenting feature while preserving backward compatibility and safety. The workaround proves that the internal infrastructure is ready; only a small amount of glue code is missing.
Would the maintainers be open to such a directive? Or is there perhaps a better way to achieve this?
Relevant Code Locations
Constraint check: multi_client_connect_late_setup in multi.c, calling ifconfig_push_constraint_satisfied.
Pool management: ifconfig_pool_init, ifconfig_pool_acquire in pool.c – these can be reused for per‑client pools.
CCD parsing: options.c – add handling for subnet-pool.
PUSH generation: send_push_reply in push.c – automatically add route-gateway and the server‑host route.
Why This Matters
Restores a long‑documented, standard OpenVPN use‑case that many administrators rely on.
Eliminates the need for external scripts or manual per‑client static assignments.
Fully compatible with topology subnet and therefore with DCO.
Keeps configurations simple and declarative (everything stays in the CCD file).
Environment
OpenVPN 2.7.0 on IPFire, server configured with topology subnet and DCO enabled
I’m happy to help test or provide more details if needed. Thanks for considering and looking into this!
Best regards,
Erik