One trick to build a TLS-enabled IPv6-only empire with only ONE legacy IP

Original version: 2023-03-20.
Last update: 2023-03-20T01:08:49+01:00.
6 minutes of reading time

IPv4 is the fourth version of the Internet Protocol, which has been in use for over three decades. However, the rapid growth of the internet and the explosion of connected devices have made it clear that the available pool of IPv4 addresses is no longer sufficient to meet the demand. Moreover, IPv4 suffers from several limitations, such as lack of built-in security features and the inability to support advanced routing and network management capabilities. These limitations have been addressed by the newer IPv6 protocol, which offers a much larger address space, enhanced security features, and better support for modern network technologies.

Therefore, it’s high time to move on from IPv4 and adopt IPv6 as the standard protocol for the internet to ensure better scalability, security, and performance.

Even once you have adopted IPv6, some users still cannot reach your service, whether because they are travelling or stuck behind a legacy (i.e. IPv4-only) network deployment.

Here, I will explain a simple trick to move the problem to border nodes, in exchange for some performance degradation that could be mitigated with extra effort (not implemented here). It is inspired by https://www.mythic-beasts.com/support/topics/proxy.

The key: Server Name Indication (SNI)

SNI stands for Server Name Indication, which is an extension to the Transport Layer Security (TLS) protocol. It allows a client to indicate which hostname it is attempting to connect to, so that the server can use the appropriate certificate to establish a secure connection. This is particularly useful in cases where multiple domains are hosted on the same IP address, which is common with shared services, e.g. NGINX virtual hosts.
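You can see SNI in action with openssl s_client, which sends the name passed via -servername in the ClientHello (the hostname below is just an example):

$ openssl s_client -connect ryan.lahfa.xyz:443 -servername ryan.lahfa.xyz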

When it comes to proxying an IPv4 connection to an IPv6 one, SNI is useful because the proxy can read the requested hostname from the cleartext ClientHello, without terminating TLS itself, and determine which IPv6 address to route the connection to, allowing the client to reach the IPv6-only service.

How to leverage SNI to proxy IPv4 to IPv6 in NixOS?

Assuming you have a sniproxy NixOS module, you can write expressions like these:

{ config, lib, pkgs, ... }:
let 
  mkDirectRule = domain: {
    match = domain;
    dest = domain;
  };
  # Proxy the whole *.domain (not domain!).
  mkWholeSubdomainRule = domain: {
    match = ".*\\.${lib.replaceStrings ["."] ["\\."] domain}";
    dest = "*:443";
  };
  
  publicIPv4 = "<my only and sole IPv4>";
in
{
  networking.firewall.allowedTCPPorts = [ 443 ];

  services.sniproxy = {
    enable = true;

    resolver = {
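      # Prefer AAAA records when resolving destination hostnames.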
      mode = "ipv6_first";
    };

    listeners = [
      {
        address = publicIPv4;
        table = "vhosts";
        fallback = "127.0.0.1:443";
      }
    ];

    tables.vhosts = [
      (mkDirectRule "ryan.lahfa.xyz")
    ];
  };
}
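Note that this relies on DNS: each proxied domain's A record points at the border node's sole legacy IP, while its AAAA record points at the actual IPv6-only server, so that sniproxy (resolving with ipv6_first) finds the real destination itself. A sketch of the zone entries, with placeholder addresses:

ryan.lahfa.xyz.  IN A     192.0.2.1      ; the sniproxy border node (sole IPv4)
ryan.lahfa.xyz.  IN AAAA  2001:db8::1    ; the IPv6-only server itself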

And get nice logs such as:

Mar 20 00:32:48 amadeus sniproxy[926982]: 1.1.1.1:6601 -> 2.2.2.2:443 -> [2001:470:ca5e::1]:443 [ryan.lahfa.xyz] 1385/1385 bytes tx 1971/1971 bytes rx 66.058 seconds

It is very helpful for letting legacy IP users access your service, whatever it is, as long as it uses TLS and can therefore use SNI.1

The problem: lost origin legacy IP information!

Doing naive IPv4 → IPv6 proxying like this is bound to give you headaches when you want to understand the nature of your traffic by reading your server logs, and moderation by blocking individual IPv4 addresses becomes almost impossible.

Indeed, the target host never sees the original client IP; it only sees the address of the previous hop, i.e. the proxy.

However, this is not an inherent limitation for most software.

The solution: PROXY protocol

The PROXY protocol is a protocol that enables a load balancer, such as HAProxy, to transparently pass on client connection information to a backend server. This is useful in cases where the backend server needs to know the original source IP address and port of the client, rather than the IP address of the load balancer.

The PROXY protocol works by encapsulating the original client connection information in a special header, which is then passed on to the backend server. This header contains information about the original source IP address, source port, destination IP address, and destination port, among other details.
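Concretely, a PROXY protocol v1 header is a single human-readable line prepended to the TCP stream before any TLS bytes; with placeholder addresses, it looks like:

PROXY TCP4 192.0.2.42 192.0.2.1 56324 443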

By using the PROXY protocol, the backend server can make more informed decisions about how to handle the connection, such as applying different firewall rules or routing the traffic to a different server based on the original client IP address.

HAProxy, whose developers designed the PROXY protocol in the first place, supports both PROXY v1 and v2, and can be configured to use the PROXY protocol for incoming connections, outgoing connections, or both. However, not all backend servers support the PROXY protocol, so it’s important to verify compatibility before enabling it.

In our case, we use NGINX and sniproxy, which both support PROXY protocol v1.

What does it look like in NixOS?

It is easy to enable a PROXY protocol listener on an NGINX virtual host, one by one, by adding a listen directive.
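In raw NGINX configuration, that directive would look something like this (port 444 being our PROXY-protocol-only port):

listen [::0]:444 ssl proxy_protocol;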

Nevertheless, this does not scale for us: we have 50+ virtual hosts and would like a smart default that gives every virtual host a PROXY protocol listener.

We introduced https://github.com/NixOS/nixpkgs/pull/213510 to this effect, which enables default PROXY protocol listeners to be declared separately from non-PROXY ones.

Here’s what a “globally PROXY protocol aware” system looks like; create a profile called v6-proxy-aware.nix, for example:

{ lib, ... }: 
let
  withFirewall = true;
  allowedUpstream = "2001:db8::1/128";
in
{
  services.nginx = {
    # IPv6-only server
    defaultListen = [
      { addr = "[::0]"; proxyProtocol = true; port = 444; ssl = true; }
      { addr = "[::0]"; port = 443; ssl = true; }
      { addr = "[::0]"; port = 80; ssl = false; }
    ];

    appendHttpConfig = ''
      # Your central sniproxy node
      set_real_ip_from ${allowedUpstream};
      real_ip_header proxy_protocol;
    '';
  };

  # Move to nftables if firewall is enabled.
  networking.nftables.enable = withFirewall;
  networking.firewall.allowedTCPPorts = lib.mkIf (!withFirewall) [ 444 ];
  networking.firewall.extraInputRules = lib.mkIf withFirewall ''
    ip6 saddr ${allowedUpstream} tcp dport 444 accept
  '';
}

This is what is used in production. The nftables part is a bit verbose because we were still using iptables on some hosts, so we use this as an excuse to transition, but any kind of firewalling can be used. :)
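Using the profile is then a one-line import on each web-serving host (the path below is just an example of where you might keep it):

{ ... }:
{
  imports = [ ./profiles/v6-proxy-aware.nix ];
}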

And now, you can transform the previous sniproxy example:

{ config, lib, pkgs, ... }:
let 
  mkDirectRule = domain: {
    match = domain;
    # Notice the difference here.
    dest = "${domain}:444";
    useProxyProtocol = true;
  };
  # Proxy the whole *.domain (not domain!).
  mkWholeSubdomainRule = domain: {
    match = ".*\\.${lib.replaceStrings ["."] ["\\."] domain}";
    # Notice the difference here.
    dest = "*:444";
    useProxyProtocol = true;
  };
  
  publicIPv4 = "<my only and sole IPv4>";
in
{
  networking.firewall.allowedTCPPorts = [ 443 ];

  services.sniproxy = {
    enable = true;

    resolver = {
      mode = "ipv6_first";
    };

    listeners = [
      {
        address = publicIPv4;
        table = "vhosts";
        fallback = "127.0.0.1:443";
      }
    ];

    tables.vhosts = [
      (mkDirectRule "ryan.lahfa.xyz")
    ];
  };
}

And now, not only can no one spoof their origin address (the firewall only accepts PROXY protocol traffic on port 444 from the trusted sniproxy upstream), but you can also recover the original client IP in the access logs on the destination endpoints!

How to test this?

$ curl --haproxy-protocol https://vhost:444

Then, you can read your server’s access logs, looking for the client’s IPv4 address.

In a future version of curl, I hope to add a way to change the advertised client IP with a --haproxy-client-ip flag (https://github.com/curl/curl/pull/10779); it will also provide a way to actively test for spoofing situations.
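Once that lands, actively testing a spoofing attempt could look like this (a hypothetical invocation, matching the flag as proposed in the PR, not any released curl):

$ curl --haproxy-protocol --haproxy-client-ip 203.0.113.7 https://vhost:444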

Conclusion

Run everything IPv6-only!

Yes, nowadays you can run most services IPv6-only; it’s practical and feasible, even for legacy IP users, and with proper logs. No excuse!

I run Jellyfin behind such proxies and do not notice any significant latency induced by this proxy mechanism. Of course, your mileage may vary: I have good peering between my servers, so the cost is small for me.

It is probably possible to add more optimizations, such as TCP Fast Open, or to ditch TCP for QUIC entirely. The sky is the limit.

What about mail servers, SSH, …?

Ha… Unfortunately :).

Stuff like an IRCd works fine because it does have TLS and can therefore benefit from SNI.

I would definitely be behind getting a modern reimplementation of SSH with TLS as the data transport layer.2

And for mail servers, SMTPS/IMAPS are actually a thing, and I don’t understand why we are not using them yet, except for legacy reasons.


  1. There is also ALPN, but this is out of scope for this simple post.↩︎

  2. SSH has some shortcomings, e.g. encrypt-and-MAC encryption and the encrypted “length” field, which are apparently solved in the newer ETM modes and in its chacha20-poly1305 implementation.↩︎