Dear fellow or potential fellow gotosocial instance admins,

I’ve come up with a novel way to set up a gotosocial server behind a reverse proxy, which avoids creating new firewall rules on the VPS and new port-forwarding rules on one’s home router. This method is ideal for minimizing the cost of running one’s own ActivityPub/Mastodon server, in a way that leverages inexpensive fast storage on the backend (say, a Raspberry Pi 5 with 2GB of RAM and an NVMe drive). As many valiant and praiseworthy Mastodon server admins might attest, renting cloud VPSes can cost a lot, especially when storing many tens or hundreds of GB of user data.

My method avoids the need to forward ports 443 and 80 into one’s home LAN via DNAT (on the VPS) and port forwarding (on one’s home router). In a nutshell, it’s a novel use of Wireguard, in conjunction with nginx on the frontend and gotosocial on the backend. This can save the cost of renting a dedicated VPS just to get exclusive use of ports 443 and 80, along with static IPv4 and IPv6 addresses. My method optimizes for reliability and low cost, but it’s not the most secure: traffic is decrypted and re-encrypted on the VPS before it travels down the Wireguard tunnel, which exposes the data to any underlying hypervisor at one’s hosting company. So, full disclosure there.

I’ve run my method by the helpful gotosocial furries in their Matrix help chatroom (and I’m grateful for their help debugging the subtle warts the method had), and got their blessing, at least as to the technical soundness of the method.

I’ll give this method a name - the “Super-Owl Reverse Proxy” - to distinguish it from the “normal” way a gotosocial reverse proxy works. Here’s what characterizes a “Super-Owl Reverse Proxy” (which isn’t described anywhere in the current gotosocial docs):

  • The frontend and backend are on separate servers: nginx is in the cloud (which has a static IP address and SSL), and gotosocial runs on a backend machine within your LAN. The backend does all the heavy lifting: database I/O as well as bulk storage. This saves major costs, since the VPS needs very little CPU, RAM, and disk space. It’s just running nginx in a very lightweight way (“proxy_pass” is the only real “magic” it performs, acting as a go-between between the SSL and Wireguard encryption).
  • The frontend (nginx) does not monopolize public-facing ports 443 and 80. You can be running several other websites and domains on that frontend server. Here’s where further cost savings are realized: a dedicated VPS isn’t needed just for gotosocial.
  • There’s an encrypted tunnel between them (e.g. Wireguard), and “firewall-punching” keeps it alive.
  • The traffic in the tunnel is also SSL-encrypted: it’s https, not http.
  • SSL certs are used on both machines. Yes, it sounds wild: the same certs, used twice.

Note: No AI assisted me in formulating and debugging this totally original and unique method. I had to completely dream up this method from scratch, innovating and iterating until I got it right.

Prerequisites:

You would need at least an intermediate level of Linux system administration knowledge to follow what I’m about to explain. You should be familiar with configuring and administering nginx for various web services, and have basic proficiency in setting up Wireguard tunnels, such as the example shown on the “Quick Start” page of the Wireguard website. Having ssh’ed into both your backend server (where gotosocial will run) and the cloud VPS, you should be able to ping each opposite end of the tunnel. This is to say, the backend can ping the VPS, and the VPS can ping the backend, through the Wireguard tunnel. This must persist durably across reboots.

The Technical Details:

Within the backend’s wireguard configuration, use:

PersistentKeepalive = 25

…such that the backend keeps the Wireguard connection open. This is the “firewall-punching” goodness which the frontend expects, thereby avoiding the need for DNAT on the frontend, as well as port forwarding in your home internet router.

In this example, we’ll call the VPS side of the tunnel 10.10.10.1, and the backend side of the tunnel will be called 10.10.10.100.
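To make the tunnel setup concrete, here’s a minimal sketch of both Wireguard configs using those addresses. The interface name, keys, listen port, and VPS hostname below are placeholders, not values from my actual setup - substitute your own:

```ini
# Frontend (cloud VPS): /etc/wireguard/wg0.conf
[Interface]
Address = 10.10.10.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the backend at home
PublicKey = <backend-public-key>
AllowedIPs = 10.10.10.100/32

# Backend (home LAN): /etc/wireguard/wg0.conf
[Interface]
Address = 10.10.10.100/24
PrivateKey = <backend-private-key>

[Peer]
# the VPS
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.10.10.1/32
PersistentKeepalive = 25
```

Notice the asymmetry: the backend names an Endpoint for the VPS and carries the PersistentKeepalive line, while the VPS has no Endpoint for the backend - the backend’s outbound keepalives teach the VPS (and any NAT in between) where to send replies. Bring each side up with “wg-quick up wg0”, and enable it at boot (e.g. “systemctl enable wg-quick@wg0”) so the tunnel persists across reboots.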

So when we look at the current method of doing a reverse proxy with nginx (for gotosocial, shown here), we see the crucial “proxy_pass” line (see the bottom of the page, line 5) as:

proxy_pass http://127.0.0.1:8080;

However, in my method, we set this to the backend’s wireguard address and we crucially use https, not http:

proxy_pass https://10.10.10.100:8080;
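For context, here’s a pared-down sketch of how that line might sit inside the frontend’s server block. The server_name and certificate paths are examples, and the proxy_set_header lines mirror the standard gotosocial nginx example - check the current gotosocial docs for the authoritative set of headers:

```nginx
server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name example.com;

  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

  location / {
    # the one crucial change: https, aimed at the backend's tunnel address
    proxy_pass https://10.10.10.100:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    # websocket support, for the streaming API
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
}
```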

Then on the backend, in /opt/gotosocial/config.yaml, we crucially set six things:

protocol: "https"
bind-address: "10.10.10.100"
port: 8080
trusted-proxies:
  - 10.10.10.1/32

You’ll also need the same SSL certs - which are on the cloud VPS - copied into the backend, where they get used a second time!

tls-certificate-chain: "/etc/tls/example.com/fullchain.pem"
tls-certificate-key: "/etc/tls/example.com/privkey.pem"
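Gathering all six settings into one config.yaml fragment for clarity (paths and addresses as in the examples above; everything else in the file keeps its usual values):

```yaml
protocol: "https"
bind-address: "10.10.10.100"
port: 8080
trusted-proxies:
  - 10.10.10.1/32
tls-certificate-chain: "/etc/tls/example.com/fullchain.pem"
tls-certificate-key: "/etc/tls/example.com/privkey.pem"
```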

I have a testing instance of gotosocial 0.21.0 set up with this new method: https://g.toque.im

I’m the user @owl@g.toque.im on that instance, should you wish to befriend me there.