07 October 2023 | Linux Guide, Privacy, Self-hosting
Exposing your home server to the internet can be dangerous. Look up some online guides about securing your servers before you do anything stupid. You have been warned. Also, I have not included any instructions related to SELinux.
You’ve set up a home server and are hosting some services like Vaultwarden, Jellyfin, or perhaps Nextcloud. But now you want to share it with friends and family, or maybe you just need the ability to access it remotely. So, you decide to expose it to the internet, but your ISP does not let you do that. Issues like a dynamic IP can be resolved using a service like Duck DNS or No-IP, but if your ISP does not let you forward your ports, then you have to rely on third parties to forward your traffic.
There are many easy solutions to this problem. Cloudflare Tunnel is a free and popular one. And if you just want remote access, Tailscale is another good option. If Tailscale’s backend servers not being open-source is an issue for you, you can rely on Headscale instead.
But there is something you must know before considering these solutions. All of these rely on TLS/SSL termination, which means your data is decrypted on servers owned by these third parties.
Let me explain this in detail, taking Cloudflare Tunnel as an example.
One of the reasons we use SSL certificates on our websites is to ensure that when the client requests data from our servers, or sends any data back to us, nobody else can look at it, protecting the client’s privacy. When we use Cloudflare Tunnel, the data may be encrypted on our server, but it is decrypted on Cloudflare’s servers, then re-encrypted and sent to the client. And when the client enters any data like passwords, or uploads an image, that data is, again, decrypted on Cloudflare’s servers (end-to-end encrypted services are different, as discussed below), then re-encrypted and sent back to us.
If you set up a Let’s Encrypt certificate on your server and route your traffic through a Cloudflare Tunnel, your clients will see a Cloudflare certificate. If you want them to see your Let’s Encrypt certificate, you will have to subscribe to their Business or Enterprise plan.
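If you ever want to check for yourself which certificate your visitors are being served, you can query it; a quick check with openssl (replace the domain with your own):
echo | openssl s_client -connect yourdomain.com:443 -servername yourdomain.com 2>/dev/null | openssl x509 -noout -issuer -subject
Behind a tunnel, the issuer and subject you see here belong to the certificate presented by Cloudflare’s edge, not to the certificate sitting on your server.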
Take a look at this diagram for better understanding:
[Diagram: data flow through a Cloudflare Tunnel, with TLS terminated and re-encrypted on Cloudflare’s servers]
Let’s say you want to send your friend a message, but you don’t want anyone else to read it while it is in transit. So, you put the message in a locked box; even if the box gets stolen on the way, others won’t be able to read the message. That is what SSL certificates do.
But let’s say you cannot go out of your house to deliver the box yourself, because your parents, that is, your ISP, won’t let you. So, you hire someone else, say, Cloudflare. But Cloudflare says that they will look inside the box if you want them to deliver it for free; if you want the box to stay locked, you will have to pay them.
There are some applications, like Vaultwarden, or Nextcloud with the end-to-end encryption plugin, that are not affected by this, because they encrypt the data themselves on the clients’ devices, using their own algorithms.
Earlier, I used to do the same thing, but manually. I rented a VPS on Hetzner and connected it to my home server using WireGuard. But since certificate management was handled on the VPS using Nginx Proxy Manager, my VPS provider, Hetzner, could look at the data. So, I decided to learn about implementing TLS passthrough.
Now, my current setup is this: I host services on my home server, manage certificates locally, and use the VPS to pass traffic to the client without terminating the TLS/SSL connection.
Here is a diagram to explain my setup:
[Diagram: my setup, with the VPS passing encrypted traffic through to the home server over WireGuard, and TLS terminated only on the home server]
If you have looked at the diagram above, you may have already understood what you need to replicate my setup. Here are the details:
I am assuming you have already updated and secured both of your machines and have access to both using ssh or dropbear.
First, let’s install WireGuard on both.
sudo apt install wireguard # Debian/Ubuntu
sudo dnf install wireguard-tools # Fedora
sudo pacman -S wireguard-tools # Arch
For instructions to install WireGuard on other distributions, visit the official documentation.
On both servers, make sure forwarding is enabled. Run
sudo nano /etc/sysctl.conf
Make sure net.ipv4.ip_forward=1 is present. If it is not, add it at the end of the file. It might also be present but with a pound sign (#) at the start of the line; this means it is commented out and not enabled. Removing the sign will enable it.
Tip – If the file is too big, and you cannot find this line, you can press ctrl + w to find it.
Save and close the file by pressing ctrl + x, then y, and then enter. If you have not made any changes to the file, pressing ctrl + x will simply close the file.
If you made any changes to the file, run the following command:
sudo sysctl -p
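To confirm that the setting is active, you can query it directly; it should report a value of 1:
sudo sysctl net.ipv4.ip_forward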
On most distributions, iptables comes pre-installed. But if, for any reason, it is not, install it using your system’s default package manager.
sudo apt install iptables # Debian/Ubuntu
sudo dnf install iptables-services # Fedora
sudo pacman -S iptables # Arch
For other distributions, a quick search on your favourite search engine will fetch you the instructions.
You may have to start the iptables service.
sudo systemctl enable iptables.service
sudo systemctl start iptables.service
Now, let us set up WireGuard. The basic idea is that both servers will generate a pair of private and public keys. The WireGuard configuration file on each server will contain its own private key and the other server’s public key. There are many ways of doing this, but I find this way to be the easiest.
On the VPS, run the following commands:
wg genkey | sudo tee /etc/wireguard/private.key
sudo chmod go= /etc/wireguard/private.key
sudo cat /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key
The first command generates the private key of the VPS and saves it in a specific location. The second command removes all permissions on that file for users and groups other than root, to ensure that only root can access the private key. And the third command derives the public key of the VPS from the private key and saves it in the same location.
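If you want to double-check, listing the directory should show the private key readable by root only:
sudo ls -l /etc/wireguard/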
Now, create a new WireGuard configuration file using
sudo nano /etc/wireguard/wg0.conf
Insert these lines:
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey =
PostUp = iptables -t nat -A PREROUTING -p tcp -i eth0 '!' --dport 22 -j DNAT --to-destination 10.0.0.2; iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source SERVER-IP
PostUp = iptables -t nat -A PREROUTING -p udp -i eth0 '!' --dport 55107 -j DNAT --to-destination 10.0.0.2;
PostDown = iptables -t nat -D PREROUTING -p tcp -i eth0 '!' --dport 22 -j DNAT --to-destination 10.0.0.2; iptables -t nat -D POSTROUTING -o eth0 -j SNAT --to-source SERVER-IP
PostDown = iptables -t nat -D PREROUTING -p udp -i eth0 '!' --dport 55107 -j DNAT --to-destination 10.0.0.2;
[Peer]
PublicKey =
AllowedIPs = 10.0.0.2/32
Replace SERVER-IP, at the end of those lines, with the public IP address of your VPS. For now, we will leave PrivateKey and PublicKey empty.
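In plain terms, the PostUp rules tell the VPS to forward (DNAT) all incoming TCP traffic except port 22 (so SSH access to the VPS itself keeps working) and all incoming UDP traffic except port 55107 to the home server’s tunnel address, and to rewrite (SNAT) the source address of forwarded traffic so replies go back out through the VPS; the PostDown rules remove the same rules when the tunnel is brought down. Note that eth0 here is the VPS’s public network interface; if yours is named differently (for example ens3), adjust the rules accordingly. Once the tunnel is up later on, you can inspect the installed rules, and the packet counters next to each rule help confirm that traffic is actually matching:
sudo iptables -t nat -L PREROUTING -n -v
sudo iptables -t nat -L POSTROUTING -n -v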
Press ctrl + x, then y, and then enter, to save the configuration file.
Now, on your home server, run the same commands as we did on the VPS to generate its public and private keys.
wg genkey | sudo tee /etc/wireguard/private.key
sudo chmod go= /etc/wireguard/private.key
sudo cat /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key
Create a new WireGuard configuration file using
sudo nano /etc/wireguard/wg0.conf
Insert these lines:
[Interface]
Address = 10.0.0.2/24
PrivateKey =
[Peer]
PublicKey =
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
Endpoint = X.X.X.X:51820
Replace X.X.X.X with the public IP address of your VPS. So, the last line should look like this:
Endpoint = 42.11.109.1:51820
Press ctrl + x, then y, and then enter, to save the configuration file.
Now, we will insert the public and private keys into the config files. We will have to go back and forth between your home server and the VPS to print the keys and edit the configuration files.
On your home server, run
sudo cat /etc/wireguard/private.key
This will print out the private key. Copy it. Now open the config file using
sudo nano /etc/wireguard/wg0.conf
Paste the copied key after PrivateKey =.
The line should look like this:
PrivateKey = U9uE2kb/nrrzsEU58GD3pKFU3TLYDMCbetIsnV8eeFE=
Save and exit.
Now, run
sudo cat /etc/wireguard/public.key
This will print the public key of your home server. Copy it.
Return to the VPS and run
sudo nano /etc/wireguard/wg0.conf
Paste the copied key after PublicKey =. Then, save and exit.
Run
sudo cat /etc/wireguard/private.key
This will print out the private key. Copy it. Now open the config file using
sudo nano /etc/wireguard/wg0.conf
Paste the copied key after PrivateKey =. Now, save and exit.
Run
sudo cat /etc/wireguard/public.key
This will print the public key of the VPS. Copy it.
Go back to your home server and run
sudo nano /etc/wireguard/wg0.conf
Paste the copied key after PublicKey =. Then, save and exit.
Finally, run the following command on both of the servers to start WireGuard:
sudo wg-quick up wg0
You can test the connection by pinging the WireGuard IP from either of the servers.
On your VPS, run
ping 10.0.0.2
Press ctrl + c to stop.
If you see replies coming back with no packet loss, your configuration is okay and everything should be routed through the VPS.
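A healthy tunnel produces output roughly like the following (the timings will, of course, differ on your machine):
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=18.4 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=17.9 ms
^C
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms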
If you have any issues, feel free to post a comment below.
To make sure that WireGuard is turned on automatically after a reboot, run the following command on both systems:
sudo systemctl enable wg-quick@wg0
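At any point, you can also check the state of the tunnel, including the latest handshake time and transfer counters, with:
sudo wg show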
Now, you can point your domain(s) and/or subdomains to the public IP address of your VPS.
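As a rough sketch (the IP address here is a placeholder; the exact interface depends on your DNS provider), the records would look something like this in zone-file form:
yourdomain.com.      300  IN  A  203.0.113.10
*.yourdomain.com.    300  IN  A  203.0.113.10
Using a wildcard record means new subdomains for new services will resolve without touching DNS again.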
For a reverse proxy, any solution would work. But personally, I shifted from Nginx Proxy Manager to HAProxy because, in my opinion, it is faster, more lightweight, and provides more control.
To install HAProxy, use your default package manager.
sudo apt install haproxy # Debian/Ubuntu
sudo dnf install haproxy # Fedora
sudo pacman -S haproxy # Arch
For instructions to install a more recent version, or to install on other distributions, use your favourite search engine.
Start the HAProxy service, and enable it so that it starts after every boot, using the following commands:
sudo systemctl start haproxy
sudo systemctl enable haproxy
To configure HAProxy, open the configuration file using
sudo nano /etc/haproxy/haproxy.cfg
Here is what my configuration looks like.
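Roughly, a minimal configuration of this shape (the hostnames, backend ports, and certificate path below are placeholders to adapt to your own services) terminates TLS with the combined certificate file we will generate shortly, and routes requests to local services by hostname:
global
    maxconn 2048

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend https_in
    # terminate TLS using the combined fullchain + private key PEM generated below
    bind *:443 ssl crt /path/to/certificate.pem
    # route by hostname; add one acl/use_backend pair per service
    acl host_jellyfin hdr(host) -i jellyfin.yourdomain.com
    acl host_vault    hdr(host) -i vault.yourdomain.com
    use_backend jellyfin if host_jellyfin
    use_backend vaultwarden if host_vault

backend jellyfin
    server jellyfin 127.0.0.1:8096

backend vaultwarden
    server vaultwarden 127.0.0.1:8080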
To enable the changes after editing the configuration file, we must restart the HAProxy service.
sudo systemctl restart haproxy
If you are using my config file, you will see that I have added a path to an SSL certificate. If you restart the service without providing a valid SSL certificate at that path, HAProxy will throw an error and the service will stop.
Now, let us jump to generating an SSL certificate.
The official documentation states that you must install certbot using the Snap package manager. I do not like Snap at all, due to its back-end being proprietary. I used my distribution’s (Fedora’s) package manager to install certbot and it works fine. So, I leave the installation of certbot to you.
There are many ways to generate a certbot certificate, depending upon your requirements. I recommend setting up a wildcard certificate. You will need your domain provider’s API key. A simple search on your search engine will help you find a decent guide. Generate the certificate using the certbot certonly command, as we are going to set up HAProxy with the certificate ourselves.
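As one example, a wildcard certificate can be requested with a DNS-01 challenge; the manual mode below asks you to create a TXT record yourself, while your DNS provider’s certbot plugin (if one exists) can automate that step using the API key mentioned above:
sudo certbot certonly --manual --preferred-challenges dns -d "YOURDOMAIN.COM" -d "*.YOURDOMAIN.COM"
Keep in mind that certificates issued with --manual will not renew automatically unless you also supply a --manual-auth-hook, which is another reason to prefer a DNS plugin.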
Certbot will generate a certificate chain (fullchain.pem) and a private key (privkey.pem) in the /etc/letsencrypt/live/YOURDOMAIN.COM folder. We will have to combine both of them into a single file, because HAProxy expects the certificate chain and the private key in one PEM file.
Run the following command, after replacing YOURDOMAIN.COM with your actual domain and providing a proper path for the combined certificate file:
sudo cat /etc/letsencrypt/live/YOURDOMAIN.COM/fullchain.pem /etc/letsencrypt/live/YOURDOMAIN.COM/privkey.pem | sudo tee /path/to/certificate.pem
After making sure that certbot will be auto-renewing your certificate, you can add this command to your root user’s crontab so that the combined file stays up to date. Run the following to create a new cron job: sudo crontab -e
Then add the following line:
0 22 * * * sudo cat /etc/letsencrypt/live/YOURDOMAIN.COM/fullchain.pem /etc/letsencrypt/live/YOURDOMAIN.COM/privkey.pem | sudo tee /path/to/certificate.pem
This will copy the renewed certificate and key into your single certificate file every day at 10 PM.
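HAProxy only reads the certificate file when it starts or reloads, so if you want renewed certificates to be picked up without manual intervention, you may also want the cron job to reload HAProxy after copying the files; something along these lines (since this runs from root’s crontab, sudo is not needed):
0 22 * * * cat /etc/letsencrypt/live/YOURDOMAIN.COM/fullchain.pem /etc/letsencrypt/live/YOURDOMAIN.COM/privkey.pem > /path/to/certificate.pem && systemctl reload haproxy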
Save the file and exit the editor.
Confirm the certificate path in your haproxy.cfg, and restart HAProxy using
sudo systemctl restart haproxy
That is it. You are done. Whenever you create new services, make sure you update your HAProxy configuration file and restart the HAProxy service.
Although you do not have to touch your VPS anymore, I still recommend logging into the machine and updating and rebooting it regularly.
If you have any questions, or suggestions, leave a comment down below, or reach out to me directly.