Building & Maintaining PeerTube and Mastodon

In this article, I am recording my experience with building and maintaining PeerTube & Mastodon servers.

Installing PeerTube & Mastodon

For installing, follow the instructions on /

For security, you should also follow the "Preparing your machine" advice from the Mastodon guide - e.g. I installed Fail2Ban on my PeerTube server too.
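As an illustration, a minimal Fail2Ban override for SSH could look like the sketch below. This is an assumption on my part, not the exact configuration from either guide - the values are examples to adjust:

```ini
# /etc/fail2ban/jail.local - local overrides for Fail2Ban
[sshd]
enabled = true
# ban an IP for 10 minutes after 5 failed login attempts
maxretry = 5
bantime  = 600
```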

If you are experiencing problems with a step, e.g. with registering a certificate with Let's Encrypt, you can also switch to the other guide - some of the things you need will be identical. Unfortunately, I did not record where I switched over, so I cannot document it in this article.

Configuring Email on Mastodon

I ran into problems with getting e-mail to work for Mastodon. This is the configuration I ended up with (hosted at Hetzner). This will of course depend on your particular e-mail setup, so try the default recommended options first.

Configuring sendmail

Install sendmail and, for testing convenience, mailutils:

apt install mailutils sendmail

Then follow this guide and read the output carefully. Run sendmailconfig as well to check for any error messages or additional instructions.

Important: Do not change /etc/hosts! This will break the Mastodon mailer.

I got the error "local host name (peertube) is not qualified; see cf/README: WHO AM I?". The hostname is set in /etc/hostname, but it is best to ignore the message and first test whether mail works as is; changing the hostname might break your Mastodon/PeerTube mail setup.
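To see what sendmail is complaining about, you can inspect the hostname yourself - the "peertube" in the error message above comes from here:

```shell
# Print the short hostname that sendmail picks up
hostname

# Show where it is configured
cat /etc/hostname
```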

Debug logs are available at:

cat /var/log/mail.log

Error log:

cat /var/log/mail.err

You can send a test mail with (replace the address with your own):

echo "This is a test mail" | mail -s "Test" your@email.address

Elasticsearch Security on Mastodon

When I first tried to deploy Elasticsearch, I hit a bug and got stuck, so I restored from a backup to get a clean system. When I tried again a few days later, it worked.

There's no documentation on how to configure a username & password for Mastodon's access to Elasticsearch, but I managed to set up some basic security.

First, find the Elasticsearch configuration file:

find / -name elasticsearch.yml

Then edit the file and add the following lines. Note that this enables X-Pack security but grants anonymous requests a superuser role - it gets Elasticsearch running behind your firewall, but it is not real authentication:

discovery.type: single-node
xpack.security.enabled: true
xpack.security.authc:
  anonymous:
    username: anonymous_user
    roles: superuser
    authz_exception: true

Then restart Elasticsearch so the change takes effect:

systemctl restart elasticsearch

Automated Backups & Security Updates

I'm using my webhost's rolling backups, but I also pull the important bits down regularly. This includes creating a backup of the PostgreSQL database.

Script for database dumps

First, find out your home directory's path by calling:

pwd

Let's say this is /home/admin.

Create a script file:

chmod +x

The contents for PeerTube and Mastodon are slightly different.

The +7 in the final line of the script means that files older than 7 days get deleted. Adjust this to the number of days you want to keep backups for.
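You can see how -mtime +7 behaves with a quick experiment in a scratch directory (the paths here are throwaway examples):

```shell
# Create one fresh file and one that looks 10 days old
mkdir -p /tmp/mtime-demo
touch /tmp/mtime-demo/recent.gz
touch -d "10 days ago" /tmp/mtime-demo/old.gz

# -mtime +7 matches only files modified more than 7 days ago
find /tmp/mtime-demo -name '*.gz' -mtime +7

# Clean up
rm -r /tmp/mtime-demo
```

Only old.gz is listed; recent.gz survives the find, just as the fresh dumps survive in the script.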



#!/bin/bash
set -ex

# Dump database
pg_dump peertube_prod -h localhost -p 5432 -U peertube -w | gzip > /home/admin/backups/pg_peertube-$(date +"%F_%H-%M-%S").gz
# delete old backups
find /home/admin/backups/pg_peertube* -mtime +7 -exec rm {} \;

Then create the backup directory:

mkdir backups
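For completeness: restoring from one of these dumps is roughly the reverse. The file name below is a made-up example - pick a real one from your backups directory, and make sure the target database exists and is empty:

```shell
# Decompress the dump and feed it to psql (hypothetical file name)
gunzip -c /home/admin/backups/pg_peertube-2022-11-15_02-30-00.gz \
  | psql peertube_prod -h localhost -p 5432 -U peertube -w
```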



#!/bin/bash
set -ex

su - mastodon -c "pg_dump mastodon_production | gzip > /home/mastodon/backups/pg_mastodon-$(date +"%F_%H-%M-%S").gz"
su - mastodon -c "find /home/mastodon/backups/pg_mastodon* -mtime +7 -exec rm {} \;"

Then create the backup directory:

su - mastodon
mkdir backups

Script for system security updates

These instructions are for Debian/Ubuntu/Mint etc. If you are running a different Linux, the installation commands will also be different.

Create a script file:

chmod +x

Then edit it and paste:


#!/bin/bash
set -x

sudo journalctl --vacuum-time=60d

sudo apt autoremove -y

sudo apt update
sudo unattended-upgrade

sudo reboot


First, find out your home directory's path by calling:

pwd

Let's say this is /home/admin.

Now create a maintenance script. This will call our two previous scripts: first we run the backup, then the updates. We want the updates to go through even if the backup script failed.

chmod +x

Then edit the file and paste in:


#!/bin/bash
pushd /home/admin

./ || true
./ || true

popd


Replace /home/admin with the output you got from the pwd command above.

Now we want to run the maintenance script every night, when not much is happening on the server. Type:

crontab -e

And add the following line:

30 2 * * * /home/admin/

Again, replace /home/admin with the output you got from the pwd command above.

This will run the backup, updates & reboot at 2:30 am every day. If you want a different schedule, an online cron expression editor can help.
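For reference, the five fields of the schedule used above read like this:

```
# ┌ minute (30)
# │  ┌ hour (2)
# │  │ ┌ day of month (every)
# │  │ │ ┌ month (every)
# │  │ │ │ ┌ day of week (every)
  30 2 * * *
```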

Pulling the backups down

I could not convince rsnapshot to cooperate, so I'm using rsync instead. This means I get only a single current backup rather than rotating snapshots.

For security reasons, I am using eCryptfs to encrypt the backup directory, because it does not sit on my encrypted system volume.

Rather than using cron, the backup is run at every system startup, as I usually shut down my PC for the night.

Mounting the encrypted directory

I got lazy and used automount-ecryptfs, which gets around the problem of needing sudo rights and works for me. This will mount the encrypted directory at every system startup.

Automate pulling the files down

Create a shell script file and chmod +x it. Then paste in the contents below, adjusting the paths to your setup. I am assuming that you log in via SSH using a key.
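If you have not set up key-based SSH login yet, the usual steps look like this - a sketch assuming an ed25519 key and the admin user from the examples below; adjust the host names to yours:

```shell
# Generate a key pair without a passphrase (or set one and use an agent)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Copy the public key to each server you back up from
ssh-copy-id admin@peertube.server
ssh-copy-id admin@mastodon.server
```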


#!/bin/bash

# Give the system some time to start and mount the encrypted directory
sleep 300s

# Show the commands
set -x

# Sync folders, including deletions (current version only)
rsync -av --delete-after -e ssh admin@peertube.server:/etc/nginx/* /media/cuideigin/Linux/Backup/Websites/peertube/nginx/
rsync -av --delete-after -e ssh admin@peertube.server:/home/admin/* /media/cuideigin/Linux/Backup/Websites/peertube/home/admin/
rsync -av --delete-after -e ssh admin@peertube.server:/var/www/peertube/* /media/cuideigin/Linux/Backup/Websites/peertube/peertube/
rsync -av --delete-after -e ssh admin@peertube.server:/var/lib/redis/* /media/cuideigin/Linux/Backup/Websites/peertube/redis/
rsync -av --delete-after -e ssh admin@peertube.server:/var/spool/cron/crontabs/* /media/cuideigin/Linux/Backup/Websites/peertube/crontabs/

rsync -av --delete-after -e ssh admin@mastodon.server:/home/admin/* /media/cuideigin/Linux/Backup/Websites/mastodon/home/admin/
rsync -av --delete-after -e ssh admin@mastodon.server:/etc/elasticsearch/* /media/cuideigin/Linux/Backup/Websites/mastodon/elasticsearch/
rsync -av --delete-after -e ssh admin@mastodon.server:/etc/nginx/* /media/cuideigin/Linux/Backup/Websites/mastodon/nginx/
rsync -av --delete-after -e ssh admin@mastodon.server:/home/mastodon/* /media/cuideigin/Linux/Backup/Websites/mastodon/home/mastodon/ --exclude=live/public/system/cache/
rsync -av --delete-after -e ssh admin@mastodon.server:/var/lib/redis/* /media/cuideigin/Linux/Backup/Websites/mastodon/redis/
rsync -av --delete-after -e ssh admin@mastodon.server:/var/spool/cron/crontabs/* /media/cuideigin/Linux/Backup/Websites/mastodon/crontabs/

On Ubuntu/Mint, search the start menu for "Startup Applications". Click the "Add" button and provide a name and the path to this bash script. I also set the maximum delay.

Automated cleanup on Mastodon

The Mastodon CLI offers various commands for cleaning up the files & database so you will not clutter your server with content that is no longer needed. I chose a selection of these, running the lighter tasks once a week and the heavy tasks once a month. At the time of writing, my server is just one week old, so take these with a grain of salt - they have not been tried and tested.

We will need to run these with the mastodon user:

su - mastodon

Then edit the mastodon user's crontab:

crontab -e

And insert:

###### Weekly #######

# Remove locally cached copies of media attachments from other servers.

0 3 3,10,17,24 * * live/bin/tootctl media remove --days=30

# Remove local thumbnails for preview cards.

0 3 5,12,19,26 * * live/bin/tootctl preview_cards remove --days=30

###### Monthly #######

# Generate and broadcast new RSA keys, as part of security maintenance.
0 3 1 * * live/bin/tootctl accounts rotate --all

# Remove remote accounts that no longer exist. Queries every single remote account in the database to determine if it still exists on the origin server, and if it doesn't, removes it from the database.

0 3 7 * * live/bin/tootctl accounts cull

# Scan for files that do not belong to existing media attachments, and remove them. Please mind that some storage providers charge for the necessary API requests to list objects.

0 3 14 * * live/bin/tootctl media remove-orphans

# Remove unreferenced statuses from the database, such as statuses that came from relays or from users who are no longer followed by any local accounts, and have not been replied to or otherwise interacted with.

# This is a computationally heavy procedure that creates extra database indices before commencing, and removes them afterward.

0 3 21 * * live/bin/tootctl statuses remove

Important: Make sure you leave enough of a gap between these jobs and the nightly maintenance above - you don't want the automated reboot interfering with them!

Once you're done, you can switch back to the admin user:

exit


Disk usage alerts

You will want to monitor your disk so that it does not run full. I have set up automated alerts for when disk usage reaches a threshold of 90%.

First, configure sendmail (see above) so that the automated e-mails can go through.

Create a script file in your home directory:

chmod +x

Edit the file and paste the following content, adapting it with the desired threshold and your e-mail address:


#!/bin/bash

THRESHOLD=90
CURRENT=$(df / | grep / | awk '{ print $5}' | sed 's/%//g')

if [ "$CURRENT" -gt "$THRESHOLD" ] ; then
    echo "Current usage: $CURRENT%" | mail -s "Disk Space Alert" your@email.address
fi

Then add a cron job to run this script regularly. An online cron expression editor will help with defining the schedule.
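You can sanity-check the pipeline from the script by running it on its own; it should print a bare number, the usage percentage of the root filesystem:

```shell
# Extract the root filesystem's usage percentage, without the % sign
df / | grep / | awk '{ print $5}' | sed 's/%//g'
```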

Fixing server load

I found this excellent guide:
Last edited: 15 November 2022 10:14:28