Welcome to another IMHO post.

This time I am going to write about setting up servers. From zero to serving stuff. Additionally I’ll try to describe basic security measures that I think are a good base for a secure deployment.

Note: This guide was written after I had worked through a similar post on fribbledom’s Journal multiple times and wanted to add some of the caveats I found, as well as a bit about securing the server after the installation.

Prerequisites

This guide requires a few things in order to work:

  • You have to be able to trigger a reboot remotely
    • This can be done in “management panels” that some providers offer
  • You are able to reboot into a “rescue os”
    • This is usually solved by adding a TFTP boot entry when a certain button in the aforementioned management panel is pressed
    • The rescue OS should be at least some sort of Debian, it doesn’t necessarily have to be the same version you’re going to install
  • You are able to connect to the rescue system via SSH

Note that those are mostly requirements that your provider has to fulfill. It’s better to check those things before you click “buy” anywhere.

Preparing the installation

The first thing you should do after getting a new server is to boot the rescue OS and run whatever hardware check tools your provider bundles with it. In most cases there is a readme or knowledge-base article that covers this. If the hardware check shows any errors, you should contact your provider to get them fixed.

Wiping the disks

After you’re sure that everything is okay, you can start deleting the data on the server’s hard drives. There are a few standards regarding secure deletion, but since that should mostly be the concern of the person giving away the machine, one round of dd if=/dev/urandom of=/dev/sdX bs=1M count=<disk_size_mb> per disk should be enough, if you even care to do it. With SSDs you usually want as few writes as possible, so securely wiping an SSD should be skipped to increase its lifetime.
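A sketch of how wiping the HDDs could look, assuming the two drives sit at /dev/sda and /dev/sdb (double check the device names with lsblk first, this is destructive):

for disk in /dev/sda /dev/sdb; do
    dd if=/dev/urandom of="$disk" bs=1M status=progress || true # dd exits with an error once it hits the end of the disk
done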

Partitioning

The way you partition your server depends on what hardware you have. You might have two big HDDs, and maybe an additional SSD or some other configuration.

Since I can’t write a guide that covers all setups perfectly I’ll try to give some more general advice on the partitioning instead.

I usually use parted to partition the drives. It has a shell-like interface (enter h for help) and is pretty easy to use.

RAID

If you have two or more large drives the most logical way to partition them is to use them in a RAID configuration. If you have an additional SSD you might want to put the system on there and not put it in RAID.

The easiest way to set up RAID is when your server has a hardware RAID card that makes the two (or more) drives appear as a single drive to the OS; you can then simply work with that single drive.

When doing software RAID you need to keep in mind that the partitions you want to combine into a RAID device need to have the same size and layout on every disk.

Encryption

If you want to encrypt your server (which is imo the only reason you might want to do the custom installation at all) you need to adapt your partition layout as well, since you need an unencrypted /boot partition to unlock the system.

Final layout

Considering all points above, this is the partition layout I usually use when I have the “classic” two HDD setup:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  538MB   537MB                      bios_grub
 2      538MB   1612MB  1074MB  ext3               boot, esp
 3      1612MB  3000GB  2998GB

As you can see, this setup has a ~500MB partition with the bios_grub flag at the beginning. This partition (and its flag) has to exist for GRUB to work properly. Some tutorials seem to mount this partition somewhere; I’m not sure where it should go, because everything works just as well when it’s not mounted, so I’m not mounting it.

The second partition is what will become /boot, except that it will first be combined into a RAID device with the corresponding partition on the other disk. This partition also carries the boot/esp flag.

The third partition will also be in RAID and will be encrypted after creating the RAID. There are no special flags needed for this partition.

Since both disks are going to be used in RAID, the above layout needs to be applied to both of them.
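As a rough sketch, the layout above could be created non-interactively like this (the partition names are arbitrary and the boundaries only approximate my example table, so adjust them to your disks; repeat for the second disk):

parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart grub 1MiB 513MiB         # partition 1, GRUB embedding area
parted -s /dev/sda set 1 bios_grub on
parted -s /dev/sda mkpart boot ext3 513MiB 1537MiB # partition 2, will become /boot
parted -s /dev/sda set 2 esp on
parted -s /dev/sda mkpart data 1537MiB 100%        # partition 3, RAID + encryption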

Setting up software RAID

Getting RAID to work is actually not that hard; there is only one command to run per RAID device:

mdadm --create /dev/md0 --auto md --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

This command will create a RAID device of /dev/sda2 and /dev/sdb2, which will result in the device we will use as /boot (/dev/md0). Remember to create an actual file system on it (using mkfs.ext3 /dev/md0) so it’s usable.

The same command (only with md1 as the device name) has to be executed for /dev/sdX3 as well, which will create the RAID device we will use to store data. For this device the file system has to be created after we’re done setting up encryption.

If you have an additional SSD and you want to put the system on it, it might make sense not to put /boot on the larger HDDs but on the SSD instead. In that case you can just create a single RAID-ed partition with no file system on the hard disks.
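To check that the arrays were created correctly and to follow the initial sync, the following commands can be used:

cat /proc/mdstat            # overview and sync progress of all arrays
mdadm --detail /dev/md0     # detailed state of a single array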

Setting up encryption

The thing with encryption is: where do you store the key?
If the server should unlock and boot fully automated, you’d have to put the key in /boot which is not encrypted. That would render the encryption useless because as soon as someone can read the boot partition, they can also unlock the disk.

The solution to this is of course not storing the key anywhere (except in your password vault maybe) and manually entering it on every boot.
Now you might ask “how am I going to get to the server and enter the key if SSH is only starting after the unlock?” - this is where dropbear comes in. It’s a small SSH server that will start in preboot to enable remote unlock.

Encrypting partitions

The following command will encrypt /dev/md1, the data partition from the “classic” example:

cryptsetup luksFormat /dev/md1

To unlock and access the disk for installation, the following command can be used:

cryptsetup luksOpen /dev/md1 cryptroot

This will create a “new” device at /dev/mapper/cryptroot which is basically just the unlocked partition.

The cryptroot that is given as the last argument to cryptsetup luksOpen is the mapping name of the encrypted partition. This has to be changed when you encrypt other partitions.
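For example, in the setup with the additional SSD, the data array could be formatted and opened under its own mapping name (here cryptstorage, matching the crypttab example later on):

cryptsetup luksFormat /dev/md0              # you’ll be asked for a passphrase; a keyfile can be added later
cryptsetup luksOpen /dev/md0 cryptstorage   # unlocked device appears at /dev/mapper/cryptstorage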

Automatic unlocking of secondary partitions

Considering the special case with the additional SSD, there is the possibility to add a keyfile to unlock the storage partition(s) which will be stored on the SSD. In this case the SSD still has to be unlocked manually but the storage will be unlocked automatically as soon as the SSD is available to cryptsetup.

Such a setup can be created as simply as:

dd if=/dev/urandom of=/mnt/root/keyfile bs=1024 count=4 # create random key
chmod 0400 /mnt/root/keyfile # set permissions
cryptsetup luksAddKey /dev/md0 /mnt/root/keyfile # add keyfile to volume

Note that /dev/md0 is used here because the data partition would be the only thing in RAID, since the system lives on the SSD. It’s important that you also adjust your backup strategy accordingly, as anything outside of the RAID is not stored redundantly!

I’ve “lost” the keyfile, wat do?

To delete the keyfile from the luks setup, you first have to find out the key slot number:

cryptsetup luksDump /dev/md0

This should output something like the following:

Keyslots:
  0: luks2
	Key:        512 bits
	...
  1: luks2
	Key:        512 bits
	...

Then, to find out which of those slots belongs to the keyfile, you can run the following (make sure to close the device first in case it is currently open):

cryptsetup -v luksOpen /dev/md0 randomalias

This should output something like the following, which means that the lost keyfile is not in key slot 0:

Key slot 0 unlocked.

Now you have to repeat the unlocking with -v for any additional keys you might have (if any) and then run the following to delete the only remaining key slot (i.e. the lost keyfile):

cryptsetup -v luksKillSlot /dev/md0 1
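To confirm that only the key slot holding your passphrase is left, you can dump the header again:

cryptsetup luksDump /dev/md0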

File systems

After that it’s time to set up the file systems. I’d recommend that you use LVM because it allows you to easily add encrypted swap. The downside is that it slows down the system a bit in some cases.

Setting up LVM

To set up LVM, the encrypted partition has to be turned into a physical volume:

pvcreate /dev/mapper/cryptroot

Now the volume group (called vg0) can be created and the logical volumes (swap and root) added:

vgcreate vg0 /dev/mapper/cryptroot
lvcreate -L 32G -n swap vg0
lvcreate -l 100%FREE -n root vg0

This will create a 32GB partition that will be used for swap and a second partition which contains the root file system and takes up the rest of the available space in the volume group.

In case of the setup with the additional SSD, I’d recommend to only use LVM on the SSD and skip using LVM for the data partition.

Creating the file systems

Now that all the partitions are available, the file systems can be created:

mkfs.ext4 /dev/vg0/root # creates an ext4 fs on the root partition
mkswap /dev/vg0/swap # creates a swapspace on the swap partition

In case of the setup with the additional SSD, the file system for the encrypted data partition on the HDDs would look like:

mkfs.ext4 /dev/mapper/<name>

Where <name> is the mapping name you chose while encrypting the RAID partition.
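To double check that everything ended up where you expect it, lsblk can print the block device tree together with the detected file systems (the output will obviously differ per setup):

lsblk -f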

Installing the operating system

Now that the partitions have file systems, the OS can be installed.
To do this, it’s necessary to mount the root partition to /mnt:

mount /dev/vg0/root /mnt

Basic setup

After this, the “installation” of Debian buster on /mnt can be started:

debootstrap --arch amd64 buster /mnt http://deb.debian.org/debian

The above command will start downloading and installing all the necessary packages for a minimal Debian setup. But it’s not doing a complete installation like you’ll get from the .iso (which is a good thing imo).

After this is done, you can chroot into the newly installed partition to continue the setup process:

LANG=C.UTF-8 chroot /mnt /bin/bash
export TERM=xterm-color

The next thing that has to be done is to populate /dev using MAKEDEV(8):

apt install makedev
mount none /proc -t proc
cd /dev
MAKEDEV generic

After this, it’s necessary to exit the chroot, because additional directories have to be mounted from the rescue OS so that packages which need files in them can be installed successfully.

Once back in the rescue OS shell, the additional directories can be mounted and the chroot re-entered:

mount /dev/md0 /mnt/boot
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys
mount --bind /proc /mnt/proc

mkdir -p /mnt/run/udev
mount --bind /run/udev /mnt/run/udev

LANG=C.UTF-8 chroot /mnt /bin/bash

Configuration files

Before we can continue installing packages, we’ll have to create the basic configuration files.

General

The first one is /etc/adjtime. This file can be created automatically by using hwclock from the util-linux package:

hwclock --systohc

Another one is /etc/fstab. This file describes the partitions and their mount points.
Its content depends on your setup and could look like this for the setup without an SSD:

/dev/vg0/root  /      ext4 defaults 0 1
/dev/md0       /boot  ext3 defaults 0 2
/dev/vg0/swap  none   swap sw       0 0
proc           /proc  proc defaults 0 0

And like this for the setup with the additional SSD:

/dev/vg0/root     /         ext4 defaults 0 1
/dev/md0          /boot     ext3 defaults 0 2
/dev/vg0/swap     none      swap sw       0 0
proc              /proc     proc defaults 0 0
/dev/mapper/data  /mnt/data ext4 defaults 0 1

Note that you have to change the device names and mount points to fit your setup!
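If you prefer identifiers that don’t change when device names shift around, you can reference the file systems by UUID instead. A sketch (the UUID value below is a placeholder; blkid prints the real one):

blkid /dev/md0   # shows the UUID of the boot file system
# example fstab line using that UUID instead of the device name:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /boot  ext3  defaults  0  2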

Network

To have a working network connection once the system boots the newly installed OS, the file /etc/network/interfaces has to be configured. You can get the values you have to enter from the management panel of your hosting provider!

This file should look like:

auto lo
iface lo inet loopback

auto enp1s0
iface enp1s0 inet static
    address 192.168.0.10
    gateway 192.168.0.1
    netmask 255.255.255.0
    broadcast 192.168.0.255

The only thing that you’ll have to find out for yourself is the name of the interface (enp1s0 in the example above).
On older versions of Debian the first detected interface was always named eth0 by default. This has since changed, and devices are now named according to their PCI address.
The rescue OS of my provider still has eth0 configured, so I found out about the new default the hard way!

To find out the name of your network interfaces, you’ll have to install the pciutils package, which provides the lspci tool.

After executing lspci, search its output for a line like the following:

01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)

This shows us that the ethernet adapter for this machine will be named enp1s0, because it has the PCI address 01:00.0.
A device with an address of 03:01.0 would be called enp3s1.
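If you’d rather double check than derive the name by hand, and the rescue system has udev available, something like the following should print the predictable names for the interface currently known as eth0 (a sketch; the exact output varies):

udevadm test-builtin net_id /sys/class/net/eth0 2>/dev/null

Look for the ID_NET_NAME_PATH line in the output (or ID_NET_NAME_ONBOARD/ID_NET_NAME_SLOT, which take precedence if present).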

In addition to the interfaces file, you should also take a look at /etc/resolv.conf. In my case this file was automatically populated with default values from my hosting provider, which were acceptable for my use case.
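If the file is empty or you want different resolvers, a minimal /etc/resolv.conf only needs one or more nameserver lines; the addresses below are placeholders, pick resolvers you trust:

nameserver 192.0.2.53   # placeholder: your provider’s resolver
nameserver 9.9.9.9      # placeholder: any public resolver you trust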

As a last step for this part of the setup, the root password can be set using passwd.

Making it bootable

Now it’s time to install all the packages needed to boot the installed system:

apt install locales linux-image-amd64 busybox dropbear mdadm lvm2 cryptsetup grub-pc ssh

More configuration

After this is done, you can edit /etc/locale.gen and remove the # in front of the locales you plan to use on the server. Then the locales can be built by executing:

locale-gen

Next, it’s time to configure SSH. The SSH server is configured in the /etc/ssh/sshd_config file. The following options have to be changed:

  • PermitRootLogin
    • New value: yes
    • This is required to log in during preboot to unlock the disks
  • PasswordAuthentication
    • New value: no
    • No password authentication, to prevent brute-force attacks and because a key is used to log in.

The SSH key you’ll use to access the server has to be added to /root/.ssh/authorized_keys and /etc/dropbear-initramfs/authorized_keys. Both files will have to be created if they don’t exist.
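A sketch of how this could look (the key string is a placeholder for your own public key, e.g. the contents of ~/.ssh/id_rsa.pub on your workstation):

mkdir -p /root/.ssh
chmod 700 /root/.ssh
echo "ssh-rsa AAAA...placeholder... you@workstation" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
cp /root/.ssh/authorized_keys /etc/dropbear-initramfs/authorized_keys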

Additionally, the file /etc/initramfs-tools/initramfs.conf has to be edited. The option BUSYBOX has to be changed to y to enable the preboot shell we’ll use to unlock the disks.

To get the decryption to work properly, the file /etc/crypttab has to be created. This file is similar to /etc/fstab as it contains information about the encrypted partitions.

For the setup without the SSD, this can look like:

cryptroot /dev/md1 none luks

And for the setup with the SSD, where the data partition is unlocked by using the generated keyfile, it can look like:

cryptroot /dev/sda3 none luks
cryptstorage /dev/md0 /root/keyfile luks,keyscript=/lib/cryptsetup/scripts/passdev

Completing the setup

Now that everything is configured correctly (better check it once more just to be safe), we can update the initramfs and install grub to the hard drives:

update-initramfs -u
update-grub
grub-install /dev/sda
grub-install /dev/sdb

Remember to change the drives grub is installed to according to your setup.

If the initramfs was updated successfully and grub installed without errors, you can exit the chroot, unmount everything and reboot the server while nervously chewing up your shirt:

exit
umount /mnt/boot /mnt/proc /mnt/sys /mnt/dev /mnt/run/udev
umount /mnt
sync
shutdown -r now

In my case I often get “device busy” errors while running umount. In my experience, this can be solved by running umount again multiple times for any still-mounted volume until it works. Also, if you used screen during the installation, check for leftover sessions that might still be in the chroot (and thus prevent umount from working).

If the server is done rebooting (you can check this by pinging the machine), you can login using the following command:

ssh root@server.example.com -o UserKnownHostsFile=/dev/null

The UserKnownHostsFile option is required because dropbear (the preboot ssh server used to unlock the disks) will have a different key fingerprint than the booted up server. Saving the key to /dev/null prevents a “Man in the middle” warning when connecting to the server after it finished booting.
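Once you are connected to the dropbear/busybox shell, the disks can be unlocked. On Debian buster the initramfs typically provides the cryptroot-unlock helper for this (it comes with cryptsetup-initramfs, which the package installation above should have pulled in):

cryptroot-unlock

After entering the passphrase, the boot continues, the dropbear session ends, and the regular SSH server comes up shortly after.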

Troubleshooting

In case you can’t successfully ping the server, you might want to trigger a reboot into the rescue OS to double check your network configuration, especially the interface name. If you fixed the error(s), don’t forget to run update-initramfs -u again, so the changes are applied to the initramfs!

In case you can ping the server but can’t login using ssh, you might need to check the SSH configuration and the authorized_keys files. Also, when changing those, you’ll need to regenerate the initramfs!

If the server does not seem to boot after you successfully unlocked the disk(s) you can check the files in /var/log/ for errors.

For all of the above troubleshooting tips, you’ll need to re-enter the chroot. To do this, first reboot into the rescue OS and connect to it via SSH.
Once connected, the following commands can be used to unlock the disks and re-enter the chroot:

cryptsetup luksOpen /dev/md1 cryptroot
vgchange -ay # activate the LVM volumes in case they don’t show up automatically

mount /dev/vg0/root /mnt
mount /dev/md0 /mnt/boot

mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys
mount --bind /proc /mnt/proc
mount --bind /run/udev /mnt/run/udev

LANG=C.UTF-8 chroot /mnt /bin/bash

Once you’re in the chroot, you can fix whatever is wrong, rebuild initramfs if necessary and exit it the way it’s described in “Completing the setup”.

Post-installation steps

After the server rebooted successfully, you can now start installing the stuff you need.

But before doing that, I’d recommend that you secure your server.

Securing your server

There are multiple ways of securing a server and there are probably other ways of doing what I’m suggesting here, but this is the stuff that works for me.

Firewall

The most important thing to do when securing a server is blocking access for unauthorized users. And the most basic thing you can do to prevent unauthenticated access is to use a firewall.

I’m using ufw (Uncomplicated Firewall), a wrapper around iptables that is really easy to use.

It can be installed by running:

apt install ufw

To activate it, first let’s allow access to SSH, because if you activate the firewall without this, you will lose access to the server!

ufw default deny # sets the default policy to deny
ufw allow ssh # allows ssh access
ufw enable # activate the firewall

UFW will even warn you that activating it might cause SSH access loss.

If you later decide to install software that needs open ports, you can allow access by using commands like these:

ufw allow http # allows port 80
ufw allow 1337/udp # allows port 1337 when using UDP

You can also deny access using a similar command:

ufw deny 1337/udp # forbids access to port 1337 when using UDP

Fail2Ban

Now consider this: The firewall allows access to SSH, so any attacker can try to brute force the SSH access without ever getting blocked.

To mitigate this, fail2ban can be used to block access after some amount of failed logins. Keep in mind that if you exceed the threshold amount of failed logins, you will also be blocked.

Fail2Ban can be installed by running:

apt install fail2ban

The default config for Fail2Ban usually has SSH monitoring enabled out of the box, but we still have to configure it. To do this, the file /etc/fail2ban/jail.local has to be edited/created.
It should have content like the following:

[DEFAULT]

# "ignorself" specifies whether the local resp. own IP addresses should be ignored
# (default is true). Fail2ban will not ban a host which matches such addresses.
ignorself = true

# "ignoreip" can be a list of IP addresses, CIDR masks or DNS hosts. Fail2ban
# will not ban a host which matches an address in this list. Several addresses
# can be defined using space (and/or comma) separator.
ignoreip = 127.0.0.1/8 ::1

# "bantime" is the number of seconds that a host is banned.
bantime  = 10m

# A host is banned if it has generated "maxretry" during the last "findtime"
# seconds.
findtime  = 10m

# "maxretry" is the number of failures before a host get banned.
maxretry = 5

# "usedns" specifies if jails should trust hostnames in logs,
#   warn when DNS lookups are performed, or ignore all hostnames in logs
#
# yes:   if a hostname is encountered, a DNS lookup will be performed.
# warn:  if a hostname is encountered, a DNS lookup will be performed,
#        but it will be logged as a warning.
# no:    if a hostname is encountered, will not be used for banning,
#        but it will be logged as info.
# raw:   use raw value (no hostname), allow use it for no-host filters/actions (example user)
usedns = warn

# "logencoding" specifies the encoding of the log files handled by the jail
logencoding = utf-8

# Destination email address used solely for the interpolations in
# jail.{conf,local,d/*} configuration files.
destemail = email@example.com

# Sender email address used solely for some actions
sender = root@hostname
sendername = Fail2Ban on hostname

[sshd]
enabled = true
port = 22

[sshd-ddos]
enabled = true
port = 22

You should change the email@example.com and hostname values to match your setup. The mail address configured here will receive Fail2Ban status mails. The configuration above enables the sshd and sshd-ddos jails; for more available jails, take a look at /etc/fail2ban/jail.conf.
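After restarting Fail2Ban with the new configuration, you can check that the jails are actually active:

systemctl restart fail2ban    # load the new jail.local
fail2ban-client status        # lists all active jails
fail2ban-client status sshd   # shows failures and bans for the sshd jail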

Logwatch

Logwatch is a tool that collects log entries and other system information, creates a nice summary of those and sends them via email every time it’s executed.

For example the tool will show the differences in installed packages, failed logins, who executed which sudo command, which IPs got blocked by Fail2Ban and a lot more.

Logwatch can be installed by running:

apt install logwatch

After installation, logwatch has to be configured. To do this, the file /etc/logwatch/conf/logwatch.conf has to be created. It should contain something like this:

Detail = High
Output = mail
Range = between -7 days and -1 days

This will tell Logwatch that it should create a report from seven days ago to yesterday, with high detail and send it using mail.

Usually a cronjob is installed together with Logwatch. Since this job triggers daily and we want a weekly report instead, it has to be deleted:

rm -f /etc/cron.daily/00logwatch

Now we’ll need to create a new cronjob that triggers Logwatch and sends a mail to our address. Please keep in mind that the email will be sent unencrypted and might contain sensitive information, so maybe don’t send it to an email provider you don’t trust!

The cronjob for Logwatch can be created by running the following as root:

crontab -e

This will open the crontab of the root user (which has to execute Logwatch because of permissions). The following line has to be added:

@weekly /usr/sbin/logwatch --mailto "email@example.com"

This will trigger Logwatch once a week (which will then generate a “weekly” report based on the configuration in logwatch.conf) and send it to email@example.com, which you will have to change to match your setup.

After saving the crontab you’ll receive your first report within a week.

Another thing to keep in mind with Logwatch is that the generated mail can get pretty large, ~8MB is usual on my setup (and it’s plaintext only!), which will crash some mail readers.

Since the Logwatch mails are only sent in plain text, you need to make sure that you can trust your mail server. If it is or gets compromised, an attacker would be able to see the software you use, your users, your Fail2Ban stats, and whether their attacks generated log entries and/or errors.

Logwatch using a custom mail server

In the case that your mail server rejects mails from your server, you can also use an external mail server to send the mails. This can be done by installing msmtp:

apt install msmtp

MSMTP is configured per user. Since the logwatch mails will be generated using the root user, a file called .msmtprc has to be created in the home folder of root (usually /root/):

account default
host mail.example.com
port 587
from logs@example.com
auth on
user logs@example.com
password changeme
tls on
tls_starttls on
tls_certcheck off

In the above sample, msmtp will connect to mail.example.com on port 587 using STARTTLS, authenticate as the user logs@example.com with the password changeme, and send the mail from the logs@example.com address. These values can be found either in your mail server’s config (or database) or in your mail provider’s documentation.
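To test the msmtp setup on its own, a simple mail can be handed to it on stdin (run as root, since the config lives in /root/.msmtprc; the recipient address is a placeholder):

printf 'Subject: msmtp test\n\nIt works.\n' | msmtp email@example.com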

To get Logwatch to use msmtp, the file /etc/logwatch/conf/logwatch.conf needs to have the following lines added:

MailFrom = logs@example.com
mailer = "/usr/bin/msmtp email@example.com"

This will make Logwatch send the mail from logs@example.com to email@example.com.

The last thing to do is to remove the --mailto "email@example.com" part from the cronjob, since the receiver is already defined in the Logwatch config.

You can also test your configuration by executing logwatch as root.
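For a quicker check that doesn’t involve mail at all, the report can also be printed to the terminal (a sketch; the range is just an example):

logwatch --output stdout --range yesterday --detail High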

Updating the server

Updates can be installed by running:

apt update && apt upgrade

Sometimes after installing updates, you might see an alert similar to *** system reboot required ***. This happens when there have been kernel updates or other things that change the initramfs.

Before you reboot your server, double check that your current SSH key is in /etc/dropbear-initramfs/authorized_keys and /root/.ssh/authorized_keys, otherwise you’ll have to reboot into the rescue OS and restore access the hard way.

Conclusion

After finishing this guide, you should have an encrypted server that can be used to host virtually any application you might want to host, with a reasonably secure default setup.

Keep in mind that with the installation of additional software, you also might open new security holes that have to be closed one way or another.

If you are now looking for a guide to set up monitoring with Prometheus and Grafana, I also made a post about that; it covers monitoring behind a NAT as well as situations where the IP of the monitoring host changes regularly.

I hope this guide was useful for you. If you have any suggestions, additions or found a bug in this guide, please don’t hesitate to contact me!