DIY cloud - Being paranoid

Annoyed at being a glass citizen? Follow this how-to to create your own private cloud

“Good old days”

Do you remember the time when the internet was still a “new thing” and totally not riddled with ads and the need to commercialize everything? When niche hobbies were not controlled by “influencers”, huge sponsors and ad networks spying on you, trying to collect as much data as possible?

When Facebook was not a thing, Google did not exist yet and free speech was actually possible without “getting cancelled”? I remember, and I am not even that old yet.

Nowadays it simply feels like you are being watched constantly. Not only to sell you more bullshit but also to create a profile. To get to know more about you and to figure out what you are up to, who you are connected to, what your hobbies are, what you bought, what you are going to buy, your ideas and thoughts and basically anything else.

With smartphones, Alexas, Echos and Glass (does anyone remember Google Glass?) it has become much easier for trackers to actually follow you around and be with you 24/7. Do you know where you were 5 years ago, what you did, where you traveled to, who you spoke to and what you said? No? Well, Google sure does.

Edward Snowden showed us that not only are Google and Facebook interested in your data but also the government, and that the phrase “nothing to hide” is dangerous and simply wrong. Your data will be used against you, and there is not much you can do about it once it has gotten into the wrong hands. In fact, government agencies will actively work against privacy to gain access to your data.

To combat this, companies increasingly started to implement features like E2EE (end-to-end encryption), zero trust, FOSS and so on. At the same time, governments tried to fight those privacy features with various laws and regulations, such as ACTA (EU), Vorratsdatenspeicherung (Germany), Metadata Access (Australia) and several others, with Chat Control 2.0 being the latest BS.

The main argument for those laws and regulations is usually to better combat terrorism, child abuse or online bullying. The goal, however, is always the same: get access to any citizen's private data. This includes any form of data, as we already know from the Snowden leaks.

Those regulations and laws are pushed constantly, with varying levels of success. If a law does not make it all the way through (denied, vetoed, whatever), you can be certain that the party or parties behind it will try again with a slightly modified approach.

“Throw shit against the wall until something sticks” seems to be the 'modus operandi' when it comes to bringing those laws into reality.

Offline is the new Online

Because of this, and because I needed something to distract me from constantly staring at WinDbg, I decided to write this lengthy blog. The main idea is to host everything yourself and not rely on Google/Facebook/Dropbox or any other big company that is potentially selling and/or leaking your data. The goal here is pretty simple:

Anything that needs to be online gets hosted by yourself. Mail, file share, IM and anything else you can think of goes on your own server.

What is the point of self-hosting if you cannot trust the code? Is FOSS free of exploits or backdoors? No, but at least they can potentially be found. Going 100% FOSS is technically not always possible.

Firmware blobs or specific parts like the TPM cannot be open-sourced, or open-source code is simply not available at all. There are ways around this, such as disabling or blocking those features/parts.

I do not see the point in going the privacy route if you keep using the same services on your smartphone and have to trust Google or Apple. Therefore, I am also switching from Apple/Google to something privacy-focused. This will be a 2-step process which I will explain later on.

105% paranoid
My data is my data and only leaves my hands if I say so. To stay true to this, we need to implement security and privacy features on every level, hardware and software. Cold boot attacks, supply chain attacks and UEFI rootkits are all things we need to be aware of if we want to minimize data exposure at any point.

In the next pages I am going to show you how I took my data back into my own hands and restored what should have been the norm all along…privacy on the internet.

In the first part I will focus on the server-side and how to get a basic setup running. The next posts will touch on the actual services and how to deploy them and how to replace your iPhone/Android with a more privacy-focused option.


When choosing the right server we need to decide between keeping the server at home and renting one. I do not believe that keeping a server at home is necessarily safer than keeping it in a secure and cool data center. Our goal is to keep the data on the hard drives secure from everyone except ourselves.

This should hold even if the server or hard drives get seized by law enforcement or a rogue data center employee. If you are not a whistleblower, this may sound a bit over the top, but it sure helps me sleep better at night, knowing that my data is very likely safe.

Choosing the hoster

If you want to host your own server at home, then this obviously does not apply to you.

When choosing a hoster we should keep the following in mind:

You should choose a server in your country, or at least the country where you usually reside. This is not a privacy concern but a matter of latency and speed: if your server is half a globe away, things will be slow. (Given, of course, that there are good hosters in the chosen country.)

Dedicated root-server
We do not want to go with containers or VMs (provided by the hoster), simply because those run on a hypervisor which is not under our control. This opens up timing and caching attacks, or simply sniffing the network traffic. While the latter is also possible with a dedicated root server, we at least do not have to worry about a hypervisor on top.

Avoid Intel
Now why should we avoid Intel? The IME (Intel Management Engine) is a small subsystem present on every Intel board from roughly the last 10 years. It can be used to access any part of your computer/server, no matter what OS you are running.

This is a nightmare when it comes to security. Unfortunately, AMD is not much better with the AMD PSP, but at least newer BIOS versions offer an option to disable it, which does NOT disable it completely but stops the different parts from talking to each other.

Even if you cannot disable the PSP, I would still go with AMD, simply because the IME can be interacted with remotely, while the PSP (AFAIK) can only be used locally. Therefore, avoid Intel's IME, go with AMD and, if possible, disable the PSP in the BIOS.

If you are running your own server, I would even suggest checking out Coreboot and Libreboot to replace your manufacturer's BIOS.

This is mostly optional, but if possible get a server with iDRAC (Dell), iLO (HP) or some other form of hardware monitoring. One downside of running a dedicated root server is that you are responsible for any hardware faults, be it a failing hard drive, a failing fan or whatever.

Replacing faulty hardware is usually taken care of by the hosting provider, but they will not monitor your server, so it is your responsibility to inform the hoster when something like this happens.

Some hosters also build their own tooling around their servers. Be careful with how much access and info you want to send through those.

The usual
When going for a dedicated server, take some time to think about the specs and what you need. Better to plan ahead and leave some headroom. We all got used to the simplicity of adding RAM and cores to VMs with just a few clicks, but this is a real server, where adding more memory or storage requires someone to walk up to the machine, shut it down and plug in the hardware.

This can become expensive really fast, and technicians on-site are not cheap. Other than that, it really comes down to what you are about to run on your server, which then decides how beefy your machine has to be.


  • Location: Get a server close to your location
  • Dedicated root-server: NO VM, NO CONTAINER, choose a real server, not shared
  • Avoid Intel and disable AMD PSP if possible
  • Monitoring: Make sure you can monitor your Server. If not provided then go with something like Zabbix
  • Plan ahead. Adding hardware is possible but expensive usually

In my case I went with Hetzner and one of their smaller root-servers. The specs are:

  • AMD Ryzen 5 3600
  • 64GB DDR4
  • 2×2TB Ent HDD (Software Raid1)
  • 1 static IPv4

Altogether this costs me 40€/month, which I think is a good trade-off in terms of security and privacy once it's all done.

If you are going with Hetzner and this is your first time ordering, there will be a small delay where Hetzner will ask for proof of documents and so on. Otherwise most servers are deployed within a few minutes.

First of all, decide if you need software RAID, hardware RAID or any RAID at all. My server comes with software RAID (remember: RAID is no backup), which I am going to use. The services I intend to run are either backed up offline (@home) or will not be backed up at all, because provisioning them takes less time than restoring them (Docker FTW).

After you know which configuration you are going with, deploy the Linux distro of your choice. Debating distro choices is like debating religion: it is usually pointless and ends in a fight.

I would choose Debian stable, Fedora or SUSE, something built from the ground up like Arch, or Ubuntu minimal for simplicity. We should avoid unstable or “testing” editions because those usually end up breaking something in the long run.

Going completely open source is not that easy after all, and we are trying to find the least painful option here.

As I said, the distro is your choice, but I would recommend Proxmox, which is a hypervisor built on Debian. Another option would be containers like LXC or Docker. However, IMHO breakouts from containers are “easier” and more common than breakouts from a fully virtualized VM on a hypervisor like Proxmox. In addition to putting everything in VMs, we will also harden Proxmox itself to increase security.


The scripts used here are based on a Hetzner script. They should run anywhere with little modification. The first thing we want to do is install Debian 11, on top of which we will later put Proxmox.

The goal is to install Dropbear and BusyBox, which will let us unlock/decrypt our Debian running Proxmox remotely at boot. You will need to modify those scripts if you are going with another hoster.

Basic setup
After provisioning, log in to your server and follow the script I linked. The only things you need to replace are the actual image used, the hostname and the secret for the encryption in /tmp/setup.conf.

Afterwards, change the ssh-key for the actual system so you end up with 2 ssh-keys: one for Dropbear and one for your OS. To do so, first create a new user with adduser user1, then add this user to the wheel/sudo group (the actual group name differs between distros): usermod -aG sudo user1.

Then edit your ssh-config in /etc/ssh/sshd_config and change (uncomment if needed) the following:

Port 22 (Change it to anything you want like 22334)
LoginGraceTime 2m
MaxAuthTries 6
MaxSessions 5
PermitRootLogin yes (We will disable this after we confirmed our user is working)
PubkeyAuthentication yes
PasswordAuthentication yes (Will be disabled as well later on)

Now create a new ssh-key for your user with ssh-keygen -t rsa and then copy it over (after unlocking) to the main OS on your server with

ssh-copy-id -p 22334 -i ~/.ssh/id_rsa.pub user1@YOUR.FQDN

When that is done, log back in, set both PasswordAuthentication and PermitRootLogin to "no" and restart the ssh service.
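The two sshd_config changes can also be scripted with sed. This is a minimal sketch, demonstrated on a scratch copy of the config; on the real server you would point CFG at /etc/ssh/sshd_config and run systemctl restart sshd afterwards:

```shell
# Demo on a scratch copy; on the server set CFG=/etc/ssh/sshd_config
CFG=./sshd_config.demo
printf 'PermitRootLogin yes\nPasswordAuthentication yes\n' > "$CFG"

# Flip both options to "no", whether or not they are commented out
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin no/' "$CFG"
sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' "$CFG"

cat "$CFG"
```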

When this is done, we simply add Proxmox on top. When asked for the Postfix setup, you can safely choose “No configuration” or anything else, as it can be re-run later on.

echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list

wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg

apt update && apt full-upgrade -y && apt install proxmox-ve postfix open-iscsi -y && apt remove os-prober -y

With this setup done, we can either buy a subscription for Proxmox or use the free repository.
To do so, edit pve-enterprise.list in /etc/apt/sources.list.d/: in the URL, replace enterprise with download and the component pve-enterprise with pve-no-subscription. Then rename the file to pve-no-subscription.list as well.

To remove the subscription nag-screen, open the file /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js, replace the Ext.Msg.show call in the subscription check with void, and then reboot the server.

Simple hardening
When everything is done, we want Proxmox to be accessible only to us. Proxmox will be put behind a webserver which will only be reachable via VPN. For now, we want to disable some services. Skip these steps if you need any of them, of course.

Disable NFS:
In /etc/default/nfs-common set NEED_STATD=no

Disable RPC:
systemctl disable --now rpcbind.service rpcbind.socket

Disable postfix IPv6:
In /etc/postfix/ set inet_protocols = ipv4

Disable Postfix completely if not needed:
systemctl disable postfix

Disable the SPICE proxy (Proxmox) if not needed, or lock it down:
systemctl disable spiceproxy

Given that you are not running anything else, your server should only have ssh, pvedaemon and pveproxy listening. You can check with netstat -tulpen (or ss -tulpen).

Proxy it
The next step is to put Proxmox behind a proxy. One downside of keeping the Proxmox GUI out in the open is that the proxmox-proxy itself is exposed to potential vulnerabilities and exploits. If there is an unknown vulnerability in the proxy component, we could potentially lose the whole server with it.

To get around this, we put Proxmox behind Nginx and lock that down further down the road. Now is the time to get certificates for your server and a domain name.

Technically this is not really needed when we access the server via IP over VPN, but I'll keep it in regardless. Regarding certificates, I recommend Let's Encrypt, but use what you are comfortable with. When you have your certificates, go ahead and install Nginx and prepare a config file for your server.

apt install nginx -y
rm /etc/nginx/sites-available/default
rm /etc/nginx/sites-enabled/default

Then create a file in /etc/nginx/sites-available/proxmox.conf and paste in the following:

upstream proxmox {
    server;
}

server {
    listen 80 default_server;
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    listen 443 ssl;
    server_name _;
    ssl_certificate /etc/letsencrypt/live/your.fqdn/fullchain.pem;      # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/your.fqdn/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    proxy_redirect off;
    location / {
            proxy_pass https://proxmox;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_buffering off;
            client_max_body_size 0;
            proxy_connect_timeout 3600s;
            proxy_read_timeout 3600s;
            proxy_send_timeout 3600s;
            send_timeout 3600s;
    }
}

When this is done, link it so nginx picks it up:

ln -s /etc/nginx/sites-available/proxmox.conf /etc/nginx/sites-enabled/proxmox.conf

Turn on nginx with systemctl enable --now nginx, then add the following override to the nginx service with systemctl edit nginx:

[Unit]
Requires=pve-cluster.service
After=pve-cluster.service

At last, we need to disallow any connection to pveproxy except from localhost. Create the file /etc/default/pveproxy and add the following:

LISTEN_IP=""
ALLOW_FROM=""
DENY_FROM="all"
POLICY="allow"

After all that, check if you can access the Proxmox GUI via either IP or FQDN. It should no longer be possible to access the GUI directly (port 8006), only through nginx.

While the Proxmox GUI is now only accessible via nginx, we really do not want it to be reachable over the internet at all. Some providers/hosters allow you to create private networks which you can use to connect through. Otherwise, I recommend installing a VPN like WireGuard and only allowing access through the VPN.

Fortunately, setting up a VPN got much easier with WireGuard, and there are tons of guides out there for every distro and OS possible. I can recommend Wireguard.How.

The steps are:

apt install wireguard -y

Then setup the keys with (umask 077 && wg genkey > wg-private.key) && wg pubkey < wg-private.key > wg-public.key && cat wg-private.key

Afterwards, create the file /etc/wireguard/wg0.conf and add the following (replace with your private key):

[Interface]
PrivateKey = AAAAABBBBBCCCCCC
ListenPort = 51820

Then create the wg-interface at /etc/network/interfaces.d/wg0

auto wg0
iface wg0 inet static
        address
        pre-up ip link add $IFACE type wireguard
        pre-up wg setconf $IFACE /etc/wireguard/$IFACE.conf
        post-down ip link del $IFACE

That is it for the server part. To create a client, download WireGuard for your OS and create an (empty) tunnel with the following data on your client. Address is the VPN-IP of your client, AllowedIPs covers your server's VPN network, and Endpoint is the public IP of your wg-server. To see the server's PublicKey, run wg show wg0 on your server.

Address =
PublicKey =
AllowedIPs =
Endpoint =

Now go back to your server and add your client. To do so, edit the file /etc/wireguard/wg0.conf and add a [Peer] section for your client. Replace the key and the client's VPN-IP accordingly:

[Peer]
PublicKey = CLIENT_PUBLIC_KEY
AllowedIPs = CLIENT_VPN_IP/32
When that is done, run wg syncconf wg0 /etc/wireguard/wg0.conf and wg show wg0 to see if your client was added. Afterwards, try to connect. You should now be able to reach your Proxmox server via its VPN address. The next step is to lock down nginx.

To lock down nginx, create the file /etc/nginx/blockips.conf with the following content and include it from the server block in your proxmox.conf:

allow YOUR_VPN_SUBNET;
deny all;

With this setup we only allow connections to nginx from the VPN network and deny everything else. If you want, you can lock it down further to only allow one IP instead of the whole subnet. Afterwards, systemctl restart nginx and check if you are still able to reach Proxmox via the internet (not VPN). You should get a 403.

What is next?
If you followed the whole thing, you should now have a running encrypted, hardened Proxmox server, accessible only via WireGuard (except for ssh). There is more you can do to lock down the server, like setting up 2FA, adding an additional user and enabling the firewall.

In addition, AppArmor/SELinux should be configured to lock down Debian itself. I recommend doing that after you have all your services running, to not break anything while setting them up.

More of the same

After we set up the basic requirements in the first post, we are now going to install the basics for the services we are going to host. If you followed the first part of this blog series, you should now have the following:

  • Running Proxmox-Server
  • Encrypted disks with LUKS
  • Locked down Access via VPN
  • Domainname to use for the next steps

Network setup
After we got our Proxmox server online, we now need to make sure that the VMs themselves have access to the internet as well. This step really differs from hoster to hoster. In case you are going with Hetzner, you can copy and paste my config and modify it to fit your environment.

Before you do that though, make sure that the MAC of your WAN-Interface (usually vmbr0) fits the MAC your hoster expects to see.

If you skip this step, you may get Abuse-Mails regarding MAC-Spoofing.

auto vmbr0
iface vmbr0 inet static
        address IPv4_FROM_HOSTER/26
        gateway GW_FROM_HOSTER
        bridge-ports enp41s0
        bridge-stp off
        bridge-fd 1
        pointopoint GW_FROM_HOSTER
        bridge_hello 2
        bridge_maxage 12

auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

To have at least a few snapshots of your VMs, we should attach another HDD to our server. You can also use the same disk or even send the backups to an NFS share. If you do not want to pay for another HDD, you can also disable RAID and use the second HDD as a backup drive.

Simply make sure to also encrypt the data on disk. Proxmox provides a very simple solution which automatically encrypts each backup saved to the specific storage.

pvesm set ID_OF_STORAGE --encryption-key /path/of/enc-key.txt
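The encryption key referenced above is just a file with random key material. A sketch for generating one (the path is a placeholder; keep a copy of the key offline, since losing it means losing the backups):

```shell
# Generate a random 256-bit key, base64-encoded, readable by the owner only
KEYFILE=./enc-key.txt          # placeholder; e.g. /root/enc-key.txt on the server
openssl rand -base64 32 > "$KEYFILE"
chmod 600 "$KEYFILE"
```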

Network Setup – Firewall

Now that the basics are done, you should be ready to set up the first VM, which is the firewall. You can choose what you want, but I would recommend OPNsense, pfSense or IPFire.

I went with OPNsense and created a simple VM with 2 NICs, where vmbr0 is WAN and vmbr1 is going to be LAN. When setting up the VM, make sure to toggle the mitigations for Spectre and Meltdown.

Depending on your CPU, you need to figure out what works and what does not. This can be done by simply setting the respective CPU flags to “On” (for example ibpb) when creating the VM. I also recommend enabling the “AES” flag for your firewall VM to increase performance.

Again, when you create the WAN interface, make sure that the MAC address matches what your hoster is expecting, and modify the rest to your needs.

Jump-Host or another VPN
After setting up the firewall, you will need to access the web GUI from the internal LAN. To do so, you can either create a new VPN putting you right into the LAN network (I chose the VPN) or set up another VM which acts as a sort of jump host.

The choice is yours, but I would recommend a VPN, as it is simpler in the long term and what I would consider the “cleaner” of the two options.

After that I set up VLANs for each service I am about to host, in order to further separate the VMs. Worst case, one of my services gets popped and taken over.

In that case I “only” have to worry about what I lost with this specific service, while the compromised VM has no access to any of my other VMs. To do so, you need to prepare a few extra NICs on the firewall and give them a specific VLAN tag. Positive side note: you do not need to shut down the VMs or reboot the host itself. In short:

  • Add another vmbr on Proxmox
  • Proxmox > Network > Linux Bridge (VLAN-aware)
  • OpnSense > Hardware > Add NIC (VLAN 10)
  • OpnSense > Create Firewall Rule for new NIC to allow Internet-Access
  • OpnSense > Enable DHCP if needed for new NIC
  • OpnSense > add additional NICs if needed
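On the Proxmox side, the extra bridge from the list above is just another VLAN-aware stanza in /etc/network/interfaces, in the same style as vmbr1 shown earlier (the name vmbr2 is a placeholder):

```text
auto vmbr2
iface vmbr2 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

Apply it with ifreload -a (or via the GUI) and the new bridge becomes selectable as a NIC for the OPNsense VM.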

When you are done, the new VLAN interfaces should show up in both Proxmox and OPNsense.

Now we will set up Caddy as our reverse proxy, which will then route all requests to the right server. A more traditional setup would be to use something like Nginx as reverse proxy. You can even run Nginx as a plugin on OPNsense, however I never got it to a stable point and had a lot of connection issues.

There is also Traefik, which is able to proxy bare-metal or non-Docker endpoints as well. In this case I went with Caddy, which makes it very simple to proxy several different services.

Not only that but Caddy also takes care of requesting and installing certificates for each of your services. To follow what I did, do the following:

  • Install caddy
  • edit the file in /etc/caddy/Caddyfile and comment out the default listener
  • Add an entry for the service you intend to proxy: YOUR.FQDN { reverse_proxy IP_OF_SERVICE }

And that is all you need to do on the Caddy side. In case it is not done yet, forward HTTP and HTTPS from WAN to your proxy via OPNsense (Firewall > NAT > Port Forward) and then reload Caddy with systemctl reload caddy.
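Putting it together, a minimal /etc/caddy/Caddyfile for two proxied services might look like this (the domains and IPs are placeholders); Caddy obtains and renews a certificate for each site automatically:

```text
cloud.example.org {
        reverse_proxy
}

sync.example.org {
        reverse_proxy
}
```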

Are we there yet?
Ok cool but when are we actually going to roll out services you may ask? Now!! ;)

This was actually the last step and we should just take a quick check if everything is running as expected. A short recap of what we should have achieved so far:

  • Proxmox:
  • Locked behind VPN
  • Working backup
  • Firewall:
  • Locked down only allowing HTTP/HTTPS
  • Working VLANs
  • Proxy:
  • Receiving HTTP/HTTPS from the Internet via NAT
  • Setup one service with caddy

If you got here, then we can finally begin with the fun part…rolling out services.


One of the best resources for self-hosted solutions is reddit's r/selfhosted and this GitHub list. There you should find a solution for nearly any kind of service.

For starters I want to find solutions for the following services:

  • File-Storage
  • Bookmark-Sync
  • …and more…

This is of course not all and the list will be expanded further down the road, but as a general example for this blog I want to get those 2 services online first.


MEGA, OneDrive, Dropbox, Google Drive, iCloud and any other kind of file storage that allows you to sync and share data with other devices and users falls under this category. After a lot of trial and error I went with Seafile because of:

  • Stability
  • Simplicity
  • No fluff on top
  • Open source

Other options would be Nextcloud or ownCloud; however, IMHO those try to offer a ton of additional features that tend to break here and there and cause headaches in the long run.

This may be just me, but rolling out Nextcloud always needed a lot of optimization and fixing afterwards, while Seafile “just works” and setting it up is a no-brainer.

Setting up Seafile is pretty simple and very straightforward. Seafile is split into a community and an enterprise version, with the community version being free. Seafile runs on a lot of OSes, but for simplicity I rolled out another Ubuntu Server minimal 20.04.

Then I created another VLAN and modified the firewall rules (as mentioned before). In the end, the new Ubuntu server should be able to reach the internet and the proxy, but none of the other VLANs.

I also added another 1TB HDD to the VM and mounted /var on it. This is where all the Docker containers will run.

Next, install Docker and Docker Compose with apt install docker docker-compose -y and then download the docker-compose.yml.

Modify the file to your liking but make sure to at least modify the following:

  • memcached (I recommend at least 512 MB)
  • Timezone
  • Ports: disable HTTP and enable HTTPS
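For orientation, the edits might look like the following excerpt. This is a hypothetical sketch: service names, images and variable names may differ in the docker-compose.yml version you download, so treat it as a template, not a drop-in file:

```yaml
# hypothetical excerpt of Seafile's docker-compose.yml after editing
memcached:
  image: memcached:1.6
  entrypoint: memcached -m 512    # give the cache at least 512 MB

seafile:
  image: seafileltd/seafile-mc:latest
  environment:
    - TIME_ZONE=Europe/Rome       # placeholder timezone
```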

Now, if not already done, go back to your Caddyfile on the proxy server, edit the file /etc/caddy/Caddyfile and add the following for your Seafile server:

YOUR.FQDN {
        reverse_proxy IP_OF_YOUR_SEAFILE-SERVER
}

When that is done, docker-compose up -d and visit your new Seafile server. By the way, did you notice that we did not set up any kind of SSL certificate?

All taken care of by caddy :)


With file storage completed, it is time to set up bookmark sync for all my devices. The idea was to replace Apple's iCloud and Firefox bookmark sync. The requirements were simple: be able to read and write bookmarks to the same profile on all of my devices (Linux and Android).

The sync process should be as simple as adding a new bookmark locally, without having to visit an extra website. On Android that can be solved by an app that does the job for you; Firefox on Linux usually relies on add-ons.

If you check the GitHub list, you will find that there are not that many solutions that tick all the boxes. After going through a lot of them, I went with xBrowserSync, which offers add-ons for Firefox and Chrome and an app for Android.

Basics first
First things first, which is to setup the following:

  • Proxmox -> New NIC
  • OpnSense -> Add new NIC and give it a new VLAN-Tag
  • Proxmox -> Create a new VM
  • Add new NIC with just created VLAN-Tag
  • OpnSense -> Set up FW-Rules
  • Finish basic installation
  • Install docker & docker-compose

After that's done, go ahead, log in to your VM and make sure that Docker is working. You can check with docker run hello-world, which should print the typical hello-world message. Then clone the xsync-repo with:

git clone

Now edit the file .env and change the username and password, the FQDN where your xsync API can be reached, and optionally the db-name. Then edit the file docker-compose.yml and remove the caddy part.

Normally, running this image would also spin up a container with Caddy, proxying the whole setup. We do not need this because we already have Caddy running. Fortunately, we only need to change a few things, such as commenting out the whole Caddy part and adding the listener ports to our API service.

With that done, we only need to spin up the containers and edit the Caddyfile again on our proxy. Before you do that, make sure that the DNS for your domain is set up correctly and pointing to your proxy IP.

When done, edit the file /etc/caddy/Caddyfile on your proxy and add another entry like so (replace FQDN and IP accordingly):

YOUR.FQDN {
        reverse_proxy IP_OF_YOUR_XSYNC-SERVER
}
After one more systemctl reload caddy, run docker-compose up -d on your xsync server and you should see xBrowserSync running. :)

What's next?

If everything worked out, you should now have at least 2 services running in your own cloud. :) Even though bookmark sync and file share are important services, setting them up was very simple. Next I am going to replace a few more complicated services: messaging, mail and password sync.

I tried Passk but ultimately went with KeeWeb, a password manager for KeePass databases. Installing KeeWeb is a no-brainer. Simply install Docker and run

docker run --name keeweb -d -p 443:443 -p 80:80 -v $EXT_DIR:/etc/nginx/external/ antelle/keeweb

Then create another Caddy entry and send all traffic to port 443. Next, you will need to set up a small script to periodically transfer the certificate and key to your KeeWeb server. The steps are simple:

  1. Create a new user on the keeweb-server and restrict the user to their home folder
  2. On the keeweb-server, switch to root and create a cronjob (crontab -e) that copies the key and cert to /etc/nginx/external/ every week (or at least every 3 months)
  3. On the caddy-server, create another cronjob which copies the key and cert over to the keeweb-server, like so:
scp /var/lib/caddy/.local/share/caddy/certificates/ keyuser@

scp /var/lib/caddy/.local/share/caddy/certificates/ keyuser@
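Step 3 can be a single root crontab entry on the caddy-server. A sketch with placeholder path and host: Caddy stores its certificates below /var/lib/caddy/.local/share/caddy/certificates/, but the exact subdirectory depends on the CA and domain, so check on your own box first:

```text
# m h dom mon dow  command — weekly, Monday at 03:00 (PATH_TO and KEEWEB_IP are placeholders)
0 3 * * 1  scp /var/lib/caddy/.local/share/caddy/certificates/PATH_TO/YOUR.FQDN.crt /var/lib/caddy/.local/share/caddy/certificates/PATH_TO/YOUR.FQDN.key keyuser@KEEWEB_IP:/home/keyuser/
```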

4. To be able to run this script non-interactively, you will need an ssh-key without a passphrase. To do so, switch to the caddy-server, run ssh-keygen -t rsa and use no passphrase.

Save the key in the default location (hit enter); otherwise add the -i switch and the location of the created key to the scp commands above.

5. Copy the key over to the keeweb-server with ssh-copy-id keyuser@IP_OF_KEEWEB

6. With all that done, try to run the script. It should copy 2 files to /home/keyuser.

That is it :) You should now have a running KeeWeb server.

Something with cows

With bookmark sync, file sync and password sync done, I want to tackle the more important but also more "complicated" parts of the whole journey to my own cloud.

First on the list is mail. Few people realize how important mail still is. With 2FA, password resets and account activation, mail is usually the last resort before locking oneself out of an account. Just imagine: what would happen if your Gmail account got suspended today?

Would you still be able to access all your accounts in case you need to reset a password? I probably would not, which is another reason to take control of your mail by hosting your own mail server and being in control of the whole domain. If you then still lose access or the data itself, you can at least blame it on yourself ;)

There are 2 awesome mail projects (Mail-in-a-Box, Mailcow) which include everything you could ask for when it comes to reading, sending and securing mail.

Some of the main features are:

  • DKIM
  • DNSSEC (with DANE TLSA if supported, check your DNS)
  • Nextcloud
  • Webmail
  • Spamfilter
  • Backup
  • and more

I like both of them and recommend trying both to see which one you prefer. For this case, though, I went with Mailcow, simply because it does not include extras that I do not need, like Nextcloud.


Transferring mail requires a bit more work on your side before you actually install anything on Proxmox. We will need to back up existing mail, set up OPNsense, set up DNS and prepare Caddy.


This one is simple but may take a while depending on the number of mails we are talking about. If you are using clients like KMail or Thunderbird, make sure you save the mails offline so you can simply drag-and-drop them into your new account later on.


With all mails backed up, you will need to open a few holes in the firewall itself. Before we do that though, we need to make sure that some crucial things are taken care of.

The first few steps are similar to all the other VMs we installed before:

  1. Create new VLAN
  2. Attach VLAN to OpnSense and configure it
  3. Create new Ubuntu 20 minimal server and do basic installation

Now decide on the hostname for your mail-server. Unlike with the other VMs, this is actually important, as we will use it for reverse DNS later on and also as the URL for the webmail. If you already gave your VM a name and want to change it, edit the following files:

  • /etc/hosts
  • /etc/hostname

Next set up the correct timezone and make sure that NTP is working. On Ubuntu this is pretty simple with

timedatectl set-timezone Europe/Rome
timedatectl set-ntp true

Now browse to your OpnSense-Firewall and create a few NAT-Rules. We want to send all traffic needed for Mail directly to the new Mail-Server:

  • Destination: WAN-Net
  • Ports (check the screenshot; for mailcow these are typically 25, 465, 587, 143, 993, 110, 995 and 4190)
  • Redirect: No redirect (use the same port)
  • Description: Port XXX to Mail-Server


With the firewall set up, we now need to prepare the DNS itself. The mailcow documentation is pretty easy to understand here and I recommend simply setting up your DNS with your provider according to the docs.

(Do not worry about DKIM at this point. The key will be created after running the setup.)

Do not forget to set up the reverse DNS for your domain as well. For example, I gave my Mail-VM the hostname "mail", so the PTR record for the server's IP points to that FQDN.

When everything is done you can use tools like MX Toolbox to check if everything works as expected.
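Set up with your provider, the mailcow docs' DNS requirements usually boil down to something like the following zone-file fragment. Domain, IP and record values here are hypothetical examples; treat the docs as the authoritative list:

```
; hypothetical records for the mail host "mail"
mail          IN A      203.0.113.10
@             IN MX 10
@             IN TXT    "v=spf1 mx ~all"
_dmarc        IN TXT    "v=DMARC1; p=none"
autodiscover  IN CNAME
autoconfig    IN CNAME
; plus the PTR (reverse DNS) at your hoster: 203.0.113.10 ->
```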


Fortunately caddy is pretty simple here. We are going to set up the webmail, which is also our admin portal. I set up another A-record with "mail". With this, add another entry to your Caddyfile like the following:

/etc/caddy/Caddyfile (hostname and IP below are placeholders): {
	reverse_proxy <mail-server-ip>:8080

It is important here that you proxy all traffic to port 8080, which is the HTTP port of the docker instance of Mailcow. You can change it later on if you want, but it is not required.


Now install docker and docker-compose with apt install docker-compose -y and run docker run hello-world to see if everything is working fine.


Next run the following:

cd /opt
git clone
cd mailcow-dockerized

# generate the config; the script asks for your mail-fqdn (e.g.
./

nano mailcow.conf

Edit the file mailcow.conf and change the following:

  • MAILCOW_HOSTNAME (set to your mail-fqdn)
  • DBNAME & DBUSER (create something secure)
  • HTTP_PORT=8080
  • TZ=Europe/Rome (set to your timezone)
  • SKIP_LETS_ENCRYPT=y (this one is very important as we do not want extra certs)


Because we run mailcow behind caddy we need to make sure that mailcow is able to use the certificates as well. To do so, we need to create another 2 cronjobs and copy over the certificates periodically. For more detailed steps go back to the keeweb-installation at the top of this post and replace the keys and IPs accordingly.

Mailcow expects the certificate and key in the following paths:

/opt/mailcow-dockerized/data/assets/ssl/cert.pem
/opt/mailcow-dockerized/data/assets/ssl/key.pem

There is no need to convert the files from caddy. Simply renaming them from .crt/.key to .pem works as well.


  1. Create a new user (non-sudo) on the mail-server and restrict it to its home-folder
  2. Create an ssh-key for the new user and copy it to caddy, so caddy can automatically log in to the mail-server as that user without a password
  3. Caddy: Create a cronjob to scp the certificate and key to the home-folder of the new user on the mail-server
  4. Mail-server: Create a cronjob to copy the files from the home-folder of the new user to the mentioned folder in /opt/ for mailcow to use
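The two cronjobs from steps 3 and 4 could look roughly like this. Every user name, host and path here is an assumption based on the layout above; adjust them to your setup:

```
# caddy-server crontab: push cert and key to the mail-server nightly
0 3 * * * scp -i /home/caddyuser/.ssh/id_rsa /path/to/caddy/certs/ /path/to/caddy/certs/ mailuser@<mail-server-ip>:

# mail-server crontab: move the files into place for mailcow (.crt/.key -> .pem)
15 3 * * * cp /home/mailuser/ /opt/mailcow-dockerized/data/assets/ssl/cert.pem; cp /home/mailuser/ /opt/mailcow-dockerized/data/assets/ssl/key.pem
```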

With all that done you just need to run

docker-compose pull
docker-compose up -d

Afterwards you should be able to browse to your new mailcow webgui at your mail-fqdn :)

Default login is admin:moohoo

The remaining setup is pretty self-explanatory so I will skip it and leave it up to the reader to figure out the rest ;)


While mails are important and should be secured, messaging and messaging apps can leak even more data that could be sold or used against you.

WhatsApp, Instagram, Line, Google Hangouts or oldschool apps like ICQ or MSN (anyone still using that?) are all known to collect and use your data. Switching apps is usually easy, but convincing your friends and relatives to leave them is mostly a fruitless endeavour.

That means if you want to keep talking to those people, you will need to keep WhatsApp, Instagram, Facebook or whatever messenger you would really rather get rid of.

The solution? Bridging!

With Matrix Synapse Bridges we can sort of proxy a lot of different apps like Whatsapp, Facebook Messenger and Telegram and therefore get around installing those apps on smartphones or desktops.

In the end you will only need one app to connect to all those messengers and only need to worry about what you say on them. This obviously does not make the messaging itself any more secure, but at least the apps themselves do not need to be installed and do not get permissions to look through your pictures, track your location and so on.

Note: Matrix Synapse supports a lot of different apps and has a ton of features. I only need it for bridging though, which is why the setup is kept basic on purpose. Some things like Federation (Matrix servers talking to each other) will not work. If this is something you need, you will have to fix those issues afterwards.


To install Matrix we need a jumphost running ansible and our actual Matrix-Server. The jumphost needs the following:

  • Ansible
  • Root-Access to the Matrix-Server via ssh

The Matrix-Server needs the following:

  • CentOS 7
  • SSH with root login allowed (can be disabled later on)


As mentioned, the ansible-playbook can install a lot of different services, which are disabled by default and just need to be turned on. For this reason I will simply copy & paste the whole DNS setup from the playbook without changing anything.


Next up is the Caddyfile. As mentioned before, Federation needs more than a simple entry and, depending on your domain setup, this makes it a bit more complicated. I will keep it simple and use Matrix only for bridging, which is why I only need the following (hostname, IP and port are placeholders; the port is whatever matrix_nginx_proxy_container_http_host_bind_port is set to): {
	reverse_proxy <matrix-server-ip>:<http-bind-port>


Now we need a few things for the playbook to work. Clone the playbook into any folder on your jumphost with

git clone

and then follow steps 1-5 and come back here. Then open up the file inventory/host_vars/<your-matrix-domain>/vars.yml once again and change or add the following lines:

# The Matrix homeserver software to install.
# See `roles/matrix-base/defaults/main.yml` for valid options.
matrix_homeserver_implementation: synapse

# A secret used as a base, for generating various other secrets.
# You can put any string here, but generating a strong one is preferred (e.g. `pwgen -s 64 1`).
matrix_homeserver_generic_secret_key: 'AAABBBCCCCDDDD'

# The playbook creates additional Postgres users and databases (one for each enabled service)
# using this superuser account.
matrix_postgres_connection_password: 'AAAABBBBBBCCCCCC'

# Do not retrieve SSL certificates. This shall be managed by another webserver or other means.
matrix_ssl_retrieval_method: none

# Do not try to serve HTTPS, since we have no SSL certificates.
# Disabling this also means services will be served on the HTTP port
# (`matrix_nginx_proxy_container_http_host_bind_port`).
matrix_nginx_proxy_https_enabled: false

matrix_coturn_enabled: false

# Trust the reverse proxy to send the correct `X-Forwarded-Proto` header, as it is handling the SSL termination.
matrix_nginx_proxy_trust_forwarded_proto: true

# Trust and use the other reverse proxy's `X-Forwarded-For` header.
matrix_nginx_proxy_x_forwarded_for: '$proxy_add_x_forwarded_for'

#enable dimension
#matrix_dimension_enabled: true
#matrix_dimension_admins:
#  - "@dimension:{{ matrix_domain }}"

#matrix_dimension_access_token: "AAAAAAAAAAAAAA"

#mautrix telegram bridge
#matrix_mautrix_telegram_enabled: true
#matrix_mautrix_telegram_api_id: 123123123
#matrix_mautrix_telegram_api_hash: SOME_API_HASH

#shared secret
#matrix_synapse_ext_password_provider_shared_secret_auth_enabled: true
#matrix_synapse_ext_password_provider_shared_secret_auth_shared_secret: #SECRET_BLA

Replace the "AAAABBBBCCCCC"-style strings with your own passwords/secrets (generate them yourself) and the remaining placeholders with your fqdn and full domain name.

When that is done run the playbook with the following:

ansible-playbook -i inventory/hosts setup.yml --tags=setup-all --ask-pass

After entering the password for root (matrix-server), it will take a while to install everything. Beware that if any of the installed services do not start fast enough, you will get an error at the end of the installation. You can ignore those.

Create users

Next we want to create users. I recommend creating 3 for now:

  • Admin
  • Your user
  • Dimension-user

The command is always the same. All you need to change are the username, the password and the admin flag:

ansible-playbook -i inventory/hosts setup.yml --extra-vars='username=<your-username> password=<your-password> admin=<yes|no>' --tags=register-user

For example, I created an "admin", "user1" and "dimension", with "admin" being the admin.

After running that command, go back to your vars.yml and uncomment the following lines and replace the shared_secret with something you created yourself:

#matrix_synapse_ext_password_provider_shared_secret_auth_enabled: true
#matrix_synapse_ext_password_provider_shared_secret_auth_shared_secret: #SECRET_BLA

Then rerun the playbook with the following command:

ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start --ask-pass


For the next steps we need a private Firefox/Chrome session.

After all that you should be able to log in as the user dimension at your matrix portal. After logging in, open up the settings, go to Help & About and scroll down to API. There you should find a token, which we will need shortly.

Write down the token, then CLOSE (do not log out) the private browser window. Go back to your vars.yml, change the following lines and replace the access-token placeholder with the token you just wrote down.

#enable dimension
matrix_dimension_enabled: true
matrix_dimension_admins:
  - "@dimension:{{ matrix_domain }}"

matrix_dimension_access_token: "AAAAAAAAAAAAAA"

#mautrix telegram bridge
matrix_mautrix_telegram_enabled: true
matrix_mautrix_telegram_api_id: 123123123
matrix_mautrix_telegram_api_hash: SOME_API_HASH

Telegram Bot

Now we need to create a Telegram Bot which you can do here

After going through the setup, write down the api_id and api_hash and replace them accordingly in vars.yml. With all that done run the playbook again:

ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start --ask-pass

Beware: from this point on, every run of the playbook threw an error about at least one (or more) services not starting. If that happens, manually check the corresponding service on the matrix-server. In my case it just took a bit longer to actually start, but then worked fine.

Last step

After all that, log back in to matrix (as your normal user) and start a new chat with @telegrambot:YOUR_DOMAIN. In the chat, type "login" and follow the prompts. If everything worked out you should see your chats and groups popping in.

More services

With all that you should now be able to chat and message over telegram via Element (the matrix-client). If you need voice and video-calls you will need to invest a bit more effort to get the rest of matrix working.

After Telegram I also added bridges for

  • Whatsapp
  • Signal
  • Discord
  • Slack
  • Facebook Messenger

The steps are mostly the same for all of them and are described in great detail in the ansible-playbook. Just beware that, because of the basic setup, most "Matrix check tools" will fail and will not help you analyze any issues. If you want a full matrix-server with federation, video-calls and so on, you should follow the whole playbook from start to finish.

Nevertheless, if you followed all the steps up to this point then you should now have at least the following services running:

  • Mail
  • Messenger (Matrix)
  • Bookmark-Sync
  • Password-Manager
  • File-share

By now it should be clear on how to add new services and what obstacles you are likely to face. I encourage you to try and find more services you could host yourself. Simply going through posts of other people may also give you ideas on what you would like to install yourself.

Basic Hardening

After you have finished installing and rolling out all your services in parts 1, 2 and 3, you should now invest a bit more time in hardening those services and containers. Why did we not do that beforehand? Because the next few steps have the potential to break any of your services.

I usually recommend getting to a working setup first and then securing it afterwards. I mainly used Ubuntu minimal 20.04, but most of these steps also work on any other distro.

Before you attempt any of those steps make sure you have a working snapshot you can revert back to in case you screw up at any point.

More basic stuff

  • Install UFW

UFW is a simple firewall front-end that creates iptables rules. Install ufw with

apt install ufw

and then allow specific ports such as SSH, HTTPS and so on with

ufw allow 22
ufw allow 443

When you are done you can enable ufw with

ufw enable
  • Change SSH Port in /etc/ssh/sshd_config

This one is very simple. Change the default SSH port to something higher up, like 22334. This does not really add security, but it will stop bots and simple automated attacks from reaching your SSH server.

  • Disable root-access via ssh

Another simple one. In /etc/ssh/sshd_config, set PermitRootLogin to no to disallow root login via ssh.

  • Secure shared memory (tmpfs)

This one should normally not lead to any issues but a typo here or there could brick your system, so make sure that you have a working snapshot.

Open up the file /etc/fstab and look for the line starting with "tmpfs" (if not found simply add it yourself) and modify it to look like the following:

tmpfs	/dev/shm	tmpfs	defaults,noexec,nosuid	0	0

Afterwards reboot and check if everything is still working.

  • Sysctl

The next few lines disallow spoofing, enable syncookies and so on. This could potentially break a few things here and there, so make sure to check everything after rebooting. Modify the following lines in /etc/sysctl.conf (or add them if needed) and apply them with sysctl -p:

net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0
net.ipv4.conf.all.log_martians = 1
  • 2FA

This one should be obvious. Add 2FA if it makes sense and if supported.

  • AppArmor

AppArmor can protect you from different kinds of threats. The downside is that it can take a lot of time to get everything working and will probably break things if you lock down "too hard".

For a quick setup do the following:

# Install AppArmor
apt install apparmor-easyprof apparmor-notify apparmor-utils

Afterwards follow this guide from Ubuntu and make sure you understand the terminology. Basically it works like this:

  1. Lock down service
  2. Check "complains"
  3. Allow what is needed
  4. Continue with next service
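The loop above eventually yields a profile under /etc/apparmor.d/. For illustration, a trimmed profile for a hypothetical service might end up looking like this after a few complain/allow rounds (service name and all paths are made up):

```
#include <tunables/global>

# hypothetical daemon, confined step by step
/usr/sbin/exampled {
  #include <abstractions/base>

  network inet stream,

  /etc/exampled/** r,      # config may be read
  /var/log/exampled/* w,   # logs may be written
  /run/ rw,
}
```

Toggle between aa-complain /usr/sbin/exampled (log only) and aa-enforce /usr/sbin/exampled (block) while iterating.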


Because a lot of our services run in containers, we should also invest some time in locking those down and tightening security.

  • Different user

One simple change is to run our service as a different user instead of root. That user obviously needs to exist in the image. To do so, add the following lines to the dockerfile:

RUN adduser -D someuser
USER someuser

Replace someuser with your actual user. In case you are not using a dockerfile, you can start a shell in the container image manually with:

docker run -it your_image /bin/bash

This drops you into a normal shell inside the docker container, where you can simply add new users.

When done, re-run the container with the switch -u USER and replace USER with your newly created user.
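Putting the pieces together, a minimal dockerfile with a non-root user could look like this. The base image, package and command are made-up examples; adduser -D is the busybox/alpine variant used above:

```
# hypothetical alpine-based service image
FROM alpine:3.18
RUN apk add --no-cache python3

# create an unprivileged user and switch to it
RUN adduser -D someuser
USER someuser

CMD ["python3", "-m", "http.server", "8080"]
```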

  • Docker-bench-security

With that done, check whether you are following best practices regarding docker security with Docker's docker-bench-security script (

Something missing?

What about fail2ban? What about SELinux?

I intentionally left out some solutions, simply because they would not make sense here or were not worth the trade-off. Some ports like SSH are only reachable over VPN, so fail2ban was not needed IMHO. SELinux is similar to AppArmor and will not be used.

The approach for SELinux is more like "allow nothing and then loosen up" while AppArmor is the opposite. Because of the setup I went with AppArmor and will stay with it.

There are other tools out there that you could try to tighten security such as Tiger which will scan your system for best practice and misconfigurations.


The last thing we want to set up is IPS on OpnSense. This will actually block invalid and suspicious traffic. For a basic setup, navigate to Services > Intrusion Detection and enable IPS. Enable at least the WAN interface and, if needed, other interfaces as well.

Then navigate to Rules and enable the rules you want to use.

When this is done make sure to thoroughly check all your services for any blocked or dropped traffic. It is very likely that some may stop working or start dropping packets here and there.

If that happens then you may use other rules or in case of multiple interfaces, may need to disable IPS for some interfaces.


Next up we have DNS. If you followed my tutorial up to here then your OpnSense-FW is already running Unbound which will act as DNS-Server for all the other VMs in your network.

What we want is to never use the ISP or the hoster for DNS, but instead use DNSSEC and OpenDNS. To achieve that, go to Settings > General and enter the OpenDNS servers, which are and Also make sure to disable "Allow DNS server list to be overridden by DHCP/PPP on WAN".

With that done set up all your servers to use the OpnSense-FW as DNS-Server and you are good to go. :)


I mentioned in my first post that you should create backups of your VMs and store them in a safe place. All the effort invested up to this point is pointless if you store your backups unencrypted for any attacker to grab from your HDD.

I did not create a backup of Proxmox itself because its setup is very simple and takes less time than restoring would. That leaves us with the VMs only.

Proxmox supports encrypted backups, but you need to make sure this actually works. To do so, run the following from a root shell on your actual proxmox server:

pvesm set <storageID> --encryption-key YOUR_ENC_KEY

This should in theory encrypt each new backup from this point on.


If you followed everything up until this point then you should now have most of your services in your own secure cloud. Congrats! :)

We are not done yet, however. If you are still using Windows or Apple, iPhones or stock Androids, searching with Google, or texting via WhatsApp or Facebook, your data is still being used for tracking and custom ads.

Next up we want to switch to something more private and leave Windows and OSX behind, and I will show you how to configure your notebook/desktop for privacy and security.