Building & Maintaining PeerTube and Mastodon

In this article, I am recording my experience with building and maintaining PeerTube & Mastodon servers.

Installing PeerTube & Mastodon

For installation, follow the official instructions for each project.

For setting up security, you should also follow the Preparing your machine advice from Mastodon - e.g. I installed Fail2Ban on my PeerTube server too.

If you get stuck on a step, e.g. registering a certificate with Let's Encrypt, you can also switch to the other project's guide - some of the steps are identical. Unfortunately, I did not record where I switched over, so I cannot document it here.

Configuring Email on Mastodon

I ran into problems with getting e-mail to work for Mastodon. This is the configuration I ended up with (hosted at Hetzner). This will of course depend on your particular e-mail setup, so try the default recommended options first.

Configuring sendmail

Install sendmail and, for testing convenience, mailutils:

apt install mailutils sendmail

Then follow this guide. Read the output carefully, and call sendmailconfig to check for any error messages or additional instructions.

Important: Do not change /etc/hosts! This will break the Mastodon mailer.

I got the error "local host name (peertube) is not qualified; see cf/README: WHO AM I?". This value is set in /etc/hostname, but it is best to ignore the message and test whether mail works as is - changing the hostname might break your Mastodon/PeerTube mail setup.
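Before changing anything, you can check what the system currently calls itself. A small sketch ("peertube" is the unqualified name from the error message above):

```shell
# Print the short host name sendmail complains about
hostname

# Print the fully qualified name, if one can be resolved at all
hostname --fqdn 2>/dev/null || echo "no FQDN configured"
```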

Debug logs are available at:

cat /var/log/mail.log

Error log:

cat /var/log/mail.err

You can send a test mail with (replace the address with your own):

echo "This is a test mail" | mail -s "Test" you@example.com

Security configuration

Elasticsearch Security on Mastodon

When I first tried to deploy Elasticsearch, I hit a bug and got stuck, so I restored from backup to get a clean system again. When I tried again a few days later, it worked.

Memory requirements: You should have 8 GB of RAM. When I had 4 GB, the Out Of Memory Manager kept killing the elasticsearch process. If Sidekiq complains about a lost Faraday connection, Elasticsearch is probably down. You can check what the problem is with "systemctl status elasticsearch" and test with "live/bin/tootctl search deploy". You can also reduce how much memory Elasticsearch uses (see below).

Follow the official Mastodon documentation to get Elasticsearch running.

There's no documentation on how to use a username & password in Mastodon for accessing Elasticsearch, but I managed to configure some basic security.

First, find the Elasticsearch configuration file:

find / -name elasticsearch.yml

Then edit the file and add the following lines:

discovery.type: single-node
xpack.security.enabled: true

Generate the passwords following the guide at Set up minimal security for Elasticsearch (replace the version number in the link with your Elasticsearch version).

Then configure live/.env.production with:

ES_PASS=somethingsafe
ES_PORT=9200
ES_USER=elastic

Restart the service with sudo systemctl restart elasticsearch and then test the setup with live/bin/tootctl search deploy --only tags

Edit the elasticsearch.yml file again and add this entry:

xpack.security.authc:
  anonymous:
    username: anonymous_user
    roles: superuser
    authz_exception: true

If you are running a cluster rather than just a local installation, refer to Secure the Elastic Stack. I'm not running a cluster, so I have no experience to share about this.

HTTPS Security

This is configured in:

  • /etc/nginx/sites-available/peertube
  • /etc/nginx/sites-available/mastodon

At the time of writing, I went for the following ciphers:


For Peertube, I also copied the following configuration over from Mastodon:

add_header Strict-Transport-Security "max-age=31536000" always;

Make sure to add this wherever there are add_header directives.

Also, set ssl_prefer_server_ciphers on;
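Putting the TLS directives together, the relevant part of a server block ends up looking roughly like the sketch below. The domain, certificate paths, and the omitted cipher list are placeholders for your own values:

```nginx
server {
  listen 443 ssl;
  server_name example.com;

  # Placeholder certificate paths
  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

  # Prefer the server's cipher order over the client's
  ssl_prefer_server_ciphers on;

  # HSTS header, as copied over from the Mastodon configuration
  add_header Strict-Transport-Security "max-age=31536000" always;
}
```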

There's a Mozilla tool to help you with the configuration: the SSL Configuration Generator. You will need your OpenSSL and nginx versions, which you can get by calling:

openssl version
nginx -v

Double-check your configuration by scanning with both Mozilla Observatory and CryptCheck. Mozilla allows new scans every 5 minutes, CryptCheck every hour.

For PeerTube, also add some security headers to your nginx configuration. Mastodon already sets these headers in the software itself.

add_header Content-Security-Policy "default-src 'self';" always;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";

Do this wherever you added Strict-Transport-Security, but skip Content-Security-Policy in the main server section and only add it to the location sections - if you add it to the main section, the site will break. If you find a value that works there, please let me know! See also nginx Content-Security-Policy examples.

Fix ElasticSearch out of memory errors

If you don't have enough RAM, the Out Of Memory Manager will kill the elasticsearch service.

It is possible to reduce how much memory elasticsearch will use by setting the Java Virtual Machine's memory allocation. See the official documentation.

To check how much memory is allocated, run:

curl -X GET "localhost:9200/_nodes/_all/jvm?pretty"

I then set it to 2GB:

cd /etc/elasticsearch/jvm.options.d
nano memory.options

Paste these 2 lines:

-Xms2g
-Xmx2g
Restart the service:

sudo systemctl restart elasticsearch.service

Then check the new memory allocation:

curl -X GET "localhost:9200/_nodes/_all/jvm?pretty"

Automated Backups & Security Updates

I'm using my webhost's rolling backups, but I also pull the important bits down regularly. This includes creating a backup of the PostgreSQL database.

Script for database dumps

First, find out your directory's path by calling:

pwd

Let's say this is /home/admin.

Create a script file and make it executable with chmod +x.

The contents for PeerTube and Mastodon are slightly different.

The +7 in the final line of the script means that files older than 7 days get deleted. Adjust this to the number of days you want to keep.
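You can try the retention rule out safely in a throwaway directory first. This is just a sketch - the pg_demo file names are made up, and GNU touch's -d option is assumed:

```shell
# Create a scratch directory with one old and one new dump file
tmpdir=$(mktemp -d)
touch -d "10 days ago" "$tmpdir/pg_demo-old.dump"
touch "$tmpdir/pg_demo-new.dump"

# Same pattern as the backup scripts: delete dumps older than 7 days
find "$tmpdir"/pg_demo* -mtime +7 -exec rm {} \;

# Only the recent dump should remain
remaining=$(ls "$tmpdir")
rm -r "$tmpdir"
```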



#!/bin/bash

# Command for restoring later:
# su - peertube -c "pg_restore -FC --clean --if-exists -U peertube -n public --no-owner --role=peertube -d peertube_prod /var/www/peertube/backups/pg_peertube-<date>.dump"

set -ex

# Dump database
su - peertube -c "pg_dump -Fc peertube_prod -f /var/www/peertube/backups/pg_peertube-$(date +"%F_%H-%M-%S").dump"

# Delete old backups
find /var/www/peertube/backups/pg_peertube* -mtime +7 -exec rm {} \;

echo "Database backup done."

Then create the backup directory:

su - peertube
mkdir /var/www/peertube/backups



#!/bin/bash

# Command for restoring later:
# su - mastodon -c "pg_restore -FC --clean --if-exists -U mastodon -n public --no-owner --role=mastodon -d mastodon_production /home/mastodon/backups/pg_mastodon-<date>.dump"

set -ex

# Dump database
su - mastodon -c "pg_dump -Fc mastodon_production -f /home/mastodon/backups/pg_mastodon-$(date +"%F_%H-%M-%S").dump"

# Delete old backups
su - mastodon -c "find /home/mastodon/backups/pg_mastodon* -mtime +7 -exec rm {} \;"

echo "Database backup done."

Then create the backup directory:

su - mastodon
mkdir backups

Script for system security updates

These instructions are for Debian/Ubuntu/Mint etc. If you are running a different Linux, the installation commands will also be different.

Install unattended-upgrades:

sudo apt install unattended-upgrades

Silence the e-mails to the local mailbox:

sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

Uncomment and edit the following 2 parameters:

Unattended-Upgrade::Mail "";

Unattended-Upgrade::MailReport "only-on-error";

Save and exit.

Create a script file and make it executable with chmod +x.

Then edit it and paste:


#!/bin/bash

set -x

sudo journalctl --vacuum-time=60d --vacuum-size=1G

sudo apt autoremove -y

sudo apt update
sudo unattended-upgrade

sudo reboot


First, find out your directory's path by calling:

pwd

Let's say this is /home/admin.

Now create a maintenance script and make it executable with chmod +x. It will call our two previous scripts: first we run the backup, and then the updates. We want the updates to go through even if the backup script failed.

Then edit the file and paste in:


#!/bin/bash

pushd /home/admin

./ || true
./ || true


Replace /home/admin with the output you got from the pwd command above.

Now we want to run the maintenance script every night, when not much is happening on the server. Type:

crontab -e

And add the following line:

30 2 * * * /home/admin/

Again, replace /home/admin with the output you got from the pwd command above.

This will run the backup, updates & reboot at 2:30 am every day. If you want to schedule this differently, a cron expression generator will help you build the expression.

Pulling the backups down

For rotating backups, use rsnapshot. For a single backup, use rsync. You can also set up both.

For security reasons, I am using eCryptFS to encrypt the backup directory, because it's not sitting on my encrypted start volume.

For the rsync backup, rather than using cron, the backup is run at every system startup, as I usually shut down my PC for the night.

Mounting the encrypted directory

I got lazy and used automount-ecryptfs which works for me to get around the problem of needing sudo rights. This will mount the encrypted drive at every system startup.

Automate pulling the files down


Using rsync

For this to work, rsync needs to be installed on both systems.

Create a shell script file and chmod +x. Then paste the contents below, adjusting to your setup's paths. I am assuming that you're logging in via ssh using a key.


#!/bin/bash

# Give the system some time to start and mount the encrypted directory
sleep 300s

# Show the commands
set -x

# Sync folders, including deletions (current version only)
rsync -av --delete-after -e ssh admin@peertube.server:/etc/nginx /path/to/backup/peertube/
rsync -av --delete-after -e ssh admin@peertube.server:/root /path/to/backup/peertube/ --exclude=".*"
rsync -avz --delete-after -e ssh admin@peertube.server:/var/www/peertube /path/to/backup/peertube/ --exclude=.cache --exclude=storage/cache
rsync -av --delete-after -e ssh admin@peertube.server:/var/lib/redis /path/to/backup/peertube/
rsync -av --delete-after -e ssh admin@peertube.server:/var/spool/cron/crontabs /path/to/backup/peertube/

rsync -av --delete-after -e ssh admin@mastodon.server:/root /path/to/backup/mastodon/ --exclude=".*"
rsync -av --delete-after -e ssh admin@mastodon.server:/etc/elasticsearch /path/to/backup/mastodon/
rsync -av --delete-after -e ssh admin@mastodon.server:/etc/nginx /path/to/backup/mastodon/
rsync -avz --delete-after -e ssh admin@mastodon.server:/home /path/to/backup/mastodon/ --exclude=mastodon/live/public/system/cache/ --exclude=mastodon/.bundle/cache --exclude=mastodon/.cache --exclude=mastodon/live/node_modules/.cache --exclude=mastodon/live/tmp
rsync -av --delete-after -e ssh admin@mastodon.server:/var/lib/redis /path/to/backup/mastodon/
rsync -av --delete-after -e ssh admin@mastodon.server:/var/spool/cron/crontabs /path/to/backup/mastodon/

On Ubuntu/Mint, search the start menu for "Startup Applications". Click the "Add" button and provide a name and the path to this bash script. I also set the maximum delay.


Using rsnapshot

For this to work, rsync needs to be installed on both systems. You will need rsnapshot on the system that is pulling the backups down.

Rsnapshot uses a configuration file. I have prepared examples for download where you can go through the TODO comments to adapt them to your system.

Call rsnapshot specifying the configuration file and the interval name, like this:

rsnapshot -c /home/user/rsnapshot/mastodon.conf alpha

Once it has been tested and is working, set verbose to 1 in the configuration file and run the call as a cron job.
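The cron job can look like the sketch below. The schedules and the alpha/beta interval names are illustrative and must match what your rsnapshot configuration file defines:

```
# Pull an "alpha" snapshot every night at 03:00,
# and a "beta" snapshot every Sunday at 02:30
0 3 * * *  rsnapshot -c /home/user/rsnapshot/mastodon.conf alpha
30 2 * * 0 rsnapshot -c /home/user/rsnapshot/mastodon.conf beta
```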

Automated cleanup on Mastodon

The Mastodon CLI offers various commands for cleaning up the files & database so you will not clutter your server with content that is no longer needed. I chose a selection of those and am running the less heavy tasks once a week and the heavy tasks once a month.

We will need to run these with the mastodon user:

su - mastodon

Create a symlink to Ruby (do this part as root, since /usr/bin is not writable by the mastodon user):

ln -s /home/mastodon/.rbenv/shims/ruby /usr/bin/ruby

For making testing easier, set the Ruby environment to production by adding the following lines to the mastodon user's .bashrc:

export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"
export RAILS_ENV=production

Then edit the mastodon user's crontab:

crontab -e

And insert:


###### Weekly #######

# Remove locally cached copies of media attachments from other servers.

0 3 3,10,17,24 * * live/bin/tootctl media remove --days=2

# Remove local thumbnails for preview cards.

0 3 5,12,19,26 * * live/bin/tootctl preview_cards remove --days=30

###### Monthly #######

# Generate and broadcast new RSA keys, as part of security maintenance.
0 3 1 * * live/bin/tootctl accounts rotate --all

# Prune remote accounts that never interacted with local users
# Remove remote accounts that no longer exist. Queries every single remote account in the database to determine if it still exists on the origin server, and if it doesn't, then remove it from the database. Accounts that have had confirmed activity within the last week are excluded from the checks, in case the server is just down.
0 3 7 * * live/bin/tootctl accounts prune && live/bin/tootctl accounts cull

# Scans for files that do not belong to existing media attachments, and remove them. Please mind that some storage providers charge for the necessary API requests to list objects. Also, this operation requires iterating over every single file individually, so it will be slow.

0 3 14 * * live/bin/tootctl media remove-orphans

# Remove unreferenced statuses from the database, such as statuses that came from relays or from users who are no longer followed by any local accounts, and have not been replied to or otherwise interacted with.

# This is a computationally heavy procedure that creates extra database indices before commencing, and removes them afterward.

0 3 21 * * live/bin/tootctl statuses remove --days=60

Important: Make sure you leave enough distance between these and the nightly maintenance script above - you don't want the automated reboot interfering with these jobs!

Once you're done, you can switch back to the admin user:

exit
Automated alerts

Disk usage

You will want to watch your hard disk so that it won't run full. I have set up automated disk usage alerts when usage reaches a threshold of 90%.

First, configure sendmail (see above) so that the automated e-mails can go through.

Create a script file in your home directory and make it executable with chmod +x.

Edit the file and paste the following content, adapting it with the desired threshold and your e-mail address:


#!/bin/bash

THRESHOLD=90
CURRENT=$(df / | grep / | awk '{ print $5}' | sed 's/%//g')

if [ "$CURRENT" -gt "$THRESHOLD" ] ; then
    echo "Current usage: $CURRENT%" | mail -s "Disk Space Alert" you@example.com
fi
Then add a cron job to run this script regularly; a cron expression generator will help you define the schedule.


RAM usage

For checking RAM, the percentage can be extracted with:

CURRENT=$(free -tm | grep Total: | awk '{ print $3/$2*100}')
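Note that the awk expression above yields a floating-point number, which the shell's -gt comparison cannot handle. A sketch of a complete RAM check that truncates to an integer first (the threshold and message are placeholders):

```shell
THRESHOLD=90

# printf "%d" truncates the percentage so [ -gt ] works on it
CURRENT=$(free -tm | grep Total: | awk '{ printf "%d", $3/$2*100 }')

if [ "$CURRENT" -gt "$THRESHOLD" ]; then
    echo "Current RAM usage: $CURRENT%"
fi
```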


Failed services

For systemd services, you can create a script that looks like this:


#!/bin/bash

CURRENT=$(systemctl | grep -E "(failed|error)")

if ! [[ -z "$CURRENT" ]]; then

    # Send yourself an e-mail
    echo "Service down: $CURRENT" | mail -s "Service Alert" you@example.com

    # Try to extract the service name from the failure line and restart the service
    SERVICENAME=$(systemctl | grep -E "(failed|error)" | cut -d' ' -f2)
    sudo systemctl restart ${SERVICENAME} > /dev/null
fi
Useful commands for upgrading

Always follow the official upgrade guide, but if you need to upgrade to an intermediate version, here are the commands you can run. This can be unstable, so make extra sure that you have working backups!

RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install <version>
bundle install
yarn install
RAILS_ENV=production SKIP_POST_DEPLOYMENT_MIGRATIONS=true bundle exec rails db:migrate
RAILS_ENV=production bundle exec rails assets:precompile NODE_OPTIONS=--openssl-legacy-provider
RAILS_ENV=production bundle exec rails db:migrate

Also, diff the config file in /home/mastodon/live/dist/nginx.conf against /etc/nginx/sites-available/mastodon to see if there are any important config changes.

Last edited: 12 May 2024 18:19:03