If you’ve ever tried to host stuff at your home that should be reachable from the internet, you might have stumbled upon the hurdles of dynamic IPs, being behind NAT, and having one of those plastic routers that aren’t very configurable. In this post I’ll show how to set up a cloud jumphost to eliminate the need for DynDNS and/or port forwarding, which some routers aren’t even capable of.

The only thing that is required is a permanently running (and online) computer at your home. If it’s a machine that is also used for personal stuff, it might be a better idea to install a virtual machine with a bridged network connection on it instead.

The main drawback of hosting stuff at home, apart from the power consumption, is that all backups have to be made by you. Hosting providers usually offer backup services that are sometimes hard to recreate yourself at a reasonable price.

Why a Wireguard Jumphost?

This method is my favorite because it circumvents basically any restrictions that might come with a generic home internet connection and, in my opinion, costs a reasonably small amount of money.

The basic idea is to get a cheap virtual server (the average price of which was around $2-3/month the last time I checked) somewhere in the cloud and let your home server connect to it using the Wireguard VPN. Then you can run a reverse proxy on the cloud server and pass every request to the VPN address of your home server. The dynamic IP problem is solved by configuring Wireguard to send a keepalive every few seconds, which causes the server to learn the new IP. The maximum outage is therefore only as long as the keepalive period.

It’s also not necessary to configure any settings in the router. Only the firewall on the home server has to be configured to allow traffic to the VPN interface address the services listen on.
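As a sketch of such a firewall rule, assuming nftables and a Wireguard interface named wg0 (both names depend on your setup), the home server could allow HTTP only via the VPN interface like this:

```
# /etc/nftables.conf (fragment) -- interface name and port are assumptions
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept  # replies to the wg tunnel etc.
        iifname "lo" accept
        iifname "wg0" tcp dport 80 accept   # web service only reachable over the VPN
    }
}
```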

“But why not just host all the stuff on the cheap cloud server?”

The main argument for not hosting everything on that cheap cloud server is that its storage is not encrypted (and encfs is currently considered “hackable”). Getting a cloud VM with only 10GB of storage, enough for a small Linux install, the VPN config, and a reverse proxy, is also a lot cheaper than paying for additional persistent storage when you already have it at home anyway.

Setting up Wireguard

In my “How to monitor dedicated servers (IMHO)” post, I’ve already described how to set up Wireguard. The setup is the same for this scenario, except that the cloud server should have SaveConfig = false in its [Interface] section and the home server must have PersistentKeepalive = 5 set in the [Peer] section (the one containing the address of the cloud server). In case of a connection loss because of a new IP, the home server will reconnect to the remote server after a maximum of five seconds, which basically removes the need for services like DynDNS or similar.
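As a sketch, the two configs could look like this (all keys, addresses, hostnames, and the port are placeholders; adapt them to your actual setup from the post mentioned above):

```ini
# Cloud server: /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <cloud-server-private-key>
SaveConfig = false

[Peer]
# home server
PublicKey = <home-server-public-key>
AllowedIPs = 10.0.0.2/32

# Home server: /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.2/24
PrivateKey = <home-server-private-key>

[Peer]
# cloud server
PublicKey = <cloud-server-public-key>
Endpoint = cloud.example.com:51820
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 5
```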

Setting up NGINX

I’d recommend NGINX as the reverse proxy. The config is pretty readable and, unlike with Apache, things mostly work as expected.

On Debian based systems, NGINX can be installed by running:

apt install nginx

The configuration of hosts happens in two folders:

  • /etc/nginx/sites-available/
  • /etc/nginx/sites-enabled/

The sites-available folder usually contains the config files; NGINX will ignore this folder. Those files can then be symlinked to sites-enabled, where NGINX will look for config files. Config files can have any name and don’t need a specific extension; I usually use the domain name as the filename.
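Enabling or disabling a site is just creating or removing a symlink. Here’s the pattern sketched against a scratch copy of the directory layout (substitute /etc/nginx/ and a systemctl reload nginx when doing this for real):

```shell
# demonstrate the sites-available/sites-enabled symlink pattern in a scratch dir
mkdir -p /tmp/nginx-demo/sites-available /tmp/nginx-demo/sites-enabled
printf 'server { listen 80; }\n' > /tmp/nginx-demo/sites-available/example.com
# enable the site by symlinking it into sites-enabled
ln -sf ../sites-available/example.com /tmp/nginx-demo/sites-enabled/example.com
# disabling it later only removes the symlink; the config file itself stays:
#   rm /tmp/nginx-demo/sites-enabled/example.com
readlink /tmp/nginx-demo/sites-enabled/example.com
```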

Catch all the requests

The first config I’d recommend to create is a catchall server block to return a custom page whenever someone connects to the server the wrong way.

A configuration for this scenario can look like:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html/;
    index index.htm index.html;
    server_tokens off;
    autoindex off;
}

This will make NGINX listen for HTTP on port 80 on both IPv4 and IPv6 and serve files from /var/www/html, using index.htm and index.html as the files to load when only the domain is given. The default_server keyword is what makes NGINX use this server block for any incoming request that doesn’t match another server block, or when the IP of the server is used directly.

I’d recommend writing a very simple index.html explaining that the request was not successful which then will be served whenever someone connects to the server using an unknown Host header.
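A minimal /var/www/html/index.html for this could look like the following (the wording is just a suggestion):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Nothing to see here</title>
  </head>
  <body>
    <p>No service is configured for the name you used to reach this server.</p>
  </body>
</html>
```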

The two last statements (autoindex and server_tokens) are security measures to prevent the server from showing directory listings (autoindex) and from sending the application version in responses (server_tokens).

An example website

So let’s configure NGINX to serve a website from the server at our home on example.com.

The cloud server is the first thing we should configure. An example NGINX configuration can look like:

map $http_upgrade $connection_upgrade { # needed for the Upgrade/Connection headers below
    default upgrade;
    ''      close;
}

server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    return 301 https://$host$request_uri; # redirect to https
}

server {
    listen 443 ssl http2; # you can remove the http2 if you want v1
    listen [::]:443 ssl http2; # same here
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.3 TLSv1.2; # TLSv1.2 left in for compatibility
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparam.pem;
    ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
    ssl_session_timeout 10m;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off; # Requires nginx >= 1.5.9
    ssl_stapling on; # Requires nginx >= 1.3.7
    ssl_stapling_verify on; # Requires nginx >= 1.3.7
    resolver 8.8.8.8 valid=300s;
    resolver_timeout 5s;
    server_tokens off;
    autoindex off;
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";

    location / {
        proxy_pass http://10.0.0.2; # wireguard client IP (placeholder, use yours)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Proxy "Cloud-LB"; # this can be changed to whatever
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass_header Server;
        proxy_buffering off;
        proxy_redirect off;
        proxy_http_version 1.1;
        tcp_nodelay on;
    }
}

This will make NGINX listen on port 80 for HTTP, redirect any request to HTTPS and listen on port 443 for HTTPS to serve the website.

As you can see, there is a lot of SSL/TLS configuration going on; I’ll explain that in the next section, so we’ll ignore it for now. Apart from that, the only difference to the catchall block is the location statement.

The only argument to the location statement is the path we want to apply the contained config to, which is / in our example since we want to serve at https://example.com/.

The contained config uses the proxy_pass functionality provided by NGINX to send any request to the Wireguard client IP. We also set a lot of headers there which are required for some web applications to work properly. Since they are usually ignored when not needed, it’s better to add them anyway in case they are needed later.


Concerning the SSL/TLS config above, there are also a few things left to do before everything will work. The first thing to do is to generate Diffie-Hellman parameters for key exchange.

This can be done by running:

openssl dhparam -out /etc/nginx/dhparam.pem 4096

The config above expects the file to be at /etc/nginx/dhparam.pem, so if you change the path here, you also need to change it in the NGINX config.

The other config options are taken from various sources (like cipherli.st, Mozilla Observatory and tutorials for other software) and are quite secure as of writing this post. Still, I’d recommend double checking that the values are still considered secure if possible.

One thing you might want to change is the resolver statement, which is set to Google’s DNS (8.8.8.8) in the config above.

You also won’t get a perfect A+ score with SSL testers as of now because HSTS is not used. This is intentional: if you deploy an invalid config with HSTS enabled, your server might be unreachable until every client resets their browser cache (or the cache time runs out).
HSTS can be activated by adding the following line above (ie. outside of) the location statement:

add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";

The last step is to get a valid SSL certificate. Since the cloud server is reachable from the internet and example.com points to it, we can simply use certbot (the Let’s Encrypt client) to get a certificate.
It can be installed by running:

apt install certbot

And getting a certificate is as easy as:

systemctl stop nginx
certbot certonly -d example.com

Certbot will ask how you want to obtain the certificate and I usually use the standalone mode (which is why NGINX has to be stopped first), but you can also use certbot’s NGINX integration if you want, although I’m not covering that here. You will also have to supply your email address to get notifications before certificates expire and other important messages.

The certificate will probably be placed at /etc/letsencrypt/live/example.com/fullchain.pem and the key will be right next to it, called privkey.pem. If certbot reports different paths, you also need to change them in the config above!

Final steps

Now that everything should work, you can check the NGINX config by running:

nginx -t

If this command succeeds, NGINX can be started:

systemctl start nginx

When visiting https://example.com now there should be a “Bad Gateway” error page served using the valid LetsEncrypt certificate. If there is an SSL error instead, you need to double check the SSL config.

The “Bad Gateway” error appears because the computer at home does not serve anything yet, so NGINX on the cloud server produces this error page.

To serve a simple website on the home computer, you can basically reuse the catchall server block: remove the default_server keyword and add the server_name that is used in the cloud server config. This allows serving any number of domains from that single NGINX setup, because the cloud NGINX also passes the Host header to the Wireguard client, which is used to find the server config with the matching server_name.
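A sketch of the corresponding home-server config (the Wireguard IP and the root path are placeholders from my example setup; use your own values):

```nginx
# Home server: /etc/nginx/sites-available/example.com
server {
    listen 10.0.0.2:80;       # the home server's Wireguard interface address
    server_name example.com;  # matched against the Host header passed on by the cloud proxy
    root /var/www/example.com/;
    index index.htm index.html;
    server_tokens off;
    autoindex off;
}
```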


You now have a way to serve stuff from your home (or your school, university, workplace, shopping mall, train station, fast food restaurant) without touching any router config and without having to worry about changing IPs.

If you used this guide for somewhere other than your home, be very sure that the local admins are okay with it, because it might result in you(r stuff) being broken by a horde of pissed-off BOFHs.

Also keep in mind that this only helps with HTTP(S). To forward something else, for example a Minecraft server, you might need to use SSH port forwarding, which I might cover in a later post.

As always I hope this post was useful and don’t forget to sMaCk ThAt LiKe BuTtOn AnD fOlLoW mE oN sOuNdClOuD!1!11 sorry