Building & Maintaining PeerTube and Mastodon

In this article, I am recording my experience with building and maintaining PeerTube & Mastodon servers.

Installing PeerTube & Mastodon

For installation, follow the official instructions at docs.joinpeertube.org and docs.joinmastodon.org.

For setting up security, you should also follow the Preparing your machine advice from the Mastodon documentation - e.g. I installed Fail2Ban on my PeerTube server too.
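On Debian-based systems, that is a one-liner:

sudo apt install fail2ban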

If you are experiencing problems with a step, e.g. with registering a certificate with Let's Encrypt, you can also switch over to the other guide - some of the steps are identical. Unfortunately, I did not record where I switched over, so I cannot document it in this article.

Configuring Email on Mastodon

I ran into problems with getting e-mail to work for Mastodon. This is the configuration I ended up with in live/.env.production (my e-mail is hosted at Hetzner). It will of course depend on your particular e-mail setup, so try the default recommended options first.

SMTP_SERVER=mail.your-server.de
SMTP_PORT=587
SMTP_LOGIN=admin@mydomain.foo
SMTP_PASSWORD=somelongpasswordthelongerthebetter
SMTP_FROM_ADDRESS=admin@mydomain.foo


Configuring sendmail

Install sendmail and, for testing and convenience, mailutils:

apt install mailutils sendmail

Then follow this guide and read the output carefully. Also call sendmailconfig to check for any error messages or additional instructions.
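On Debian-based systems, that is:

sudo sendmailconfig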

Important: Do not change /etc/hosts! This will break the Mastodon mailer.

I got the error "local host name (peertube) is not qualified; see cf/README: WHO AM I?". This value is set in /etc/hostname, but it is best to ignore the message and test whether mail works as is - changing the hostname might break your Mastodon/PeerTube mail setup.

Debug logs are available at:

cat /var/log/mail.log

Error log:

cat /var/log/mail.err

You can send a test mail with:

echo "This is a test mail" | mail -s "Test" -aFrom:admin@mydomain.foo admin@mydomain.foo

Security configuration


Elasticsearch Security on Mastodon

When I first tried to deploy Elasticsearch, I got stuck on a bug, so I restored from a backup to get a clean system again. When I tried again a few days later, it worked.

Memory requirements: you should have 8 GB of RAM. When I had 4 GB, the out-of-memory killer kept terminating the elasticsearch process. Also, if Sidekiq complains about a lost Faraday connection, that means Elasticsearch is down. You can check what the problem is, and then test the search, with:
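systemctl status elasticsearch
live/bin/tootctl search deploy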

Follow the documentation on joinmastodon.org to get Elasticsearch running.

There's no documentation on how to use a username & password in Mastodon for accessing Elasticsearch, but I managed to configure some basic security.

First, find the Elasticsearch configuration file:

find / -name elasticsearch.yml

Then edit the file and add the following lines:

discovery.type: single-node

xpack.security.enabled: true


Generate the passwords following the guide at Set up minimal security for Elasticsearch (replace the version number in the link with your Elasticsearch version).
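With a standard Debian/Ubuntu package installation, the tool that guide uses lives under /usr/share/elasticsearch/bin; on Elasticsearch 7.x the call looks like this (interactive lets you pick the passwords yourself, auto generates random ones):

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive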

Then configure live/.env.production with:

ES_ENABLED=true
ES_HOST=my.instance.host.tld
ES_PASS=somethingsafe
ES_PORT=9200
ES_USER=elastic


Restart the service with sudo systemctl restart elasticsearch and then test the setup with live/bin/tootctl search deploy --only tags.

Edit the elasticsearch.yml file again and add this entry:

xpack.security.authc:
  anonymous:
    username: anonymous_user
    roles: superuser
    authz_exception: true


If you are running a cluster rather than just a local installation, refer to Secure the Elastic Stack. I'm not running a cluster, so I have no experience to share about this.

HTTPS Security

This is configured in:

  • /etc/nginx/sites-available/peertube
  • /etc/nginx/sites-available/mastodon

At the time of writing, I went for the following ciphers:

ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;

For PeerTube, I also copied the following configuration over from Mastodon:

add_header Strict-Transport-Security "max-age=31536000" always;

Make sure to add this wherever there are add_header directives.

Also, set ssl_prefer_server_ciphers on;

There's a Mozilla tool to help you with the configuration at SSL Configuration Generator. You will need your OpenSSL and Nginx versions. You can get those by calling:

openssl version
nginx -v


Double-check your configuration by scanning with both Mozilla Observatory and CryptCheck. Mozilla allows new scans every 5 minutes, CryptCheck every hour.

For PeerTube, also add the following headers to your nginx configuration. Mastodon already sets these headers in the software itself.

add_header Content-Security-Policy "default-src 'self';" always;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";


Do this wherever you added Strict-Transport-Security, but skip Content-Security-Policy in the main server section and only add it to the location sections. If you add it to the main section, the site will break. If you find a value that works there, please let me know! See also nginx Content-Security-Policy examples.

Elasticsearch hashtag indexing bug workaround

Add the following lines to the mastodon user's .bashrc:

export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"
export RAILS_ENV=production


Then add the following lines to the mastodon user's crontab by calling crontab -e:

# Workaround for https://github.com/mastodon/mastodon/issues/20230

0 */1 * * * live/bin/tootctl search deploy --only tags


Automated Backups & Security Updates

I'm using my webhost's rolling backups, but I also pull the important bits down regularly. This includes creating a backup of the PostgreSQL database.

Script for database dumps

First, find out your directory's path by calling:

pwd

Let's say this is /home/admin.

Create a script file:

touch pgdump.sh
chmod +x pgdump.sh


The contents for PeerTube and Mastodon are slightly different.

The +7 in the final line of the script means that files older than 7 days get deleted. Adjust this to the number of days that you want.

PeerTube:

#!/bin/bash

set -ex

# Dump database
pg_dump peertube_prod -h localhost -p 5432 -U peertube -w --format=tar | gzip > /home/admin/backups/pg_peertube-$(date +"%F_%H-%M-%S").gz
# delete old backups
find /home/admin/backups/pg_peertube* -mtime +7 -exec rm {} \;


Then create the backup directory:

mkdir backups


Mastodon:

#!/bin/bash

set -ex

su - mastodon -c "pg_dump mastodon_production --format=tar | gzip > /home/mastodon/backups/pg_mastodon-$(date +"%F_%H-%M-%S").gz"
su - mastodon -c "find /home/mastodon/backups/pg_mastodon* -mtime +7 -exec rm {} \;"


Then create the backup directory:

su - mastodon
mkdir backups
exit


Script for system security updates

These instructions are for Debian/Ubuntu/Mint etc. If you are running a different Linux, the commands will be different. The script below also assumes that the unattended-upgrades package is installed; it usually is on Ubuntu, and on Debian you can install it with apt install unattended-upgrades.

Create a script file:

touch system-update.sh
chmod +x system-update.sh


Then edit it and paste:

#!/bin/sh

set -x

sudo journalctl --vacuum-time=60d --vacuum-size=1G

sudo apt autoremove -y

sudo apt update
sudo unattended-upgrade

sudo reboot

Automating

First, find out your directory's path by calling:

pwd

Let's say this is /home/admin.

Now create a maintenance script. This will call our two previous scripts: first we run the backup, and then we run the updates. We want the updates to go through even if the backup script failed.

touch maintenance.sh
chmod +x maintenance.sh


Then edit the file and paste in:

#!/bin/bash

pushd /home/admin

./pgdump.sh || true
./system-update.sh || true

popd


Replace /home/admin with the output you got from the pwd command above.

Now we want to run the maintenance script every night, when not much is happening on the server. Type:

crontab -e

And add the following line:

30 2 * * * /home/admin/maintenance.sh

Again, replace /home/admin with the output you got from the pwd command above.

This will run the backup, updates & reboot at 2:30 am every day. If you want to schedule this differently, use crontab.cronhub.io to help you with the cron expression.

Pulling the backups down

For rotating backups, use rsnapshot. For a single backup, use rsync. You can also set up both.

For security reasons, I am using eCryptfs to encrypt the backup directory, because it does not sit on my encrypted system volume.

For the rsync backup, rather than using cron, the backup is run at every system startup, as I usually shut down my PC for the night.

Mounting the encrypted directory

I got lazy and used automount-ecryptfs, which works for me and gets around the problem of needing sudo rights. This mounts the encrypted directory at every system startup.
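For reference, with the ecryptfs-utils package installed, a manual mount (which does need sudo and prompts for the passphrase and cipher options interactively) would look something like this - the path is an example:

sudo mount -t ecryptfs /home/user/backups /home/user/backups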

Automate pulling the files down


rsync

For this to work, rsync needs to be installed on both systems.

Create a shell script file and make it executable, for example like this (the file name is an arbitrary choice):
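touch pull-backups.sh
chmod +x pull-backups.sh

Then paste in the contents below, adjusting the paths to your setup. I am assuming that you're logging in via ssh using a key.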

#!/bin/bash

# Give the system some time to start and mount the encrypted directory
sleep 300s

# Show the commands
set -x

# Sync folders, including deletions (current version only)
#PeerTube
rsync -av --delete-after -e ssh admin@peertube.server:/etc/nginx /path/to/backup/peertube/
rsync -av --delete-after -e ssh admin@peertube.server:/root /path/to/backup/peertube/ --exclude=".*"
rsync -avz --delete-after -e ssh admin@peertube.server:/var/www/peertube /path/to/backup/peertube/ --exclude=.cache --exclude=storage/cache
rsync -av --delete-after -e ssh admin@peertube.server:/var/lib/redis /path/to/backup/peertube/
rsync -av --delete-after -e ssh admin@peertube.server:/var/spool/cron/crontabs /path/to/backup/peertube/

#Mastodon
rsync -av --delete-after -e ssh admin@mastodon.server:/root /path/to/backup/mastodon/ --exclude=".*"
rsync -av --delete-after -e ssh admin@mastodon.server:/etc/elasticsearch /path/to/backup/mastodon/
rsync -av --delete-after -e ssh admin@mastodon.server:/etc/nginx /path/to/backup/mastodon/
rsync -avz --delete-after -e ssh admin@mastodon.server:/home /path/to/backup/mastodon/ --exclude=mastodon/live/public/system/cache/ --exclude=mastodon/.bundle/cache --exclude=mastodon/.cache --exclude=mastodon/live/node_modules/.cache --exclude=mastodon/live/tmp
rsync -av --delete-after -e ssh admin@mastodon.server:/var/lib/redis /path/to/backup/mastodon/
rsync -av --delete-after -e ssh admin@mastodon.server:/var/spool/cron/crontabs /path/to/backup/mastodon/


On Ubuntu/Mint, search the start menu for "Startup Applications". Click the "Add" button and provide a name and the path to this bash script. I also set the maximum delay.
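If you would rather not depend on the desktop session, an @reboot entry in the crontab on your local machine achieves the same thing (the script path is an example):

@reboot /path/to/pull-backups.sh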

rsnapshot

For this to work, rsync needs to be installed on both systems. You will need rsnapshot on the system that is pulling the backups down.

Rsnapshot uses a configuration file. I have prepared examples for download where you can go through the TODO comments to adapt them to your system.

Call rsnapshot specifying the configuration file and the interval name, like this:

rsnapshot -c /home/user/rsnapshot/mastodon.conf alpha

Once it has been tested and is working, set verbose to 1 in the configuration file and run the call as a cron job.
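As a sketch, assuming the default alpha/beta/gamma retain levels from the example configuration, the crontab entries could look like this:

0 */4 * * * /usr/bin/rsnapshot -c /home/user/rsnapshot/mastodon.conf alpha
30 3 * * * /usr/bin/rsnapshot -c /home/user/rsnapshot/mastodon.conf beta
0 3 1 * * /usr/bin/rsnapshot -c /home/user/rsnapshot/mastodon.conf gamma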

Automated cleanup on Mastodon

The Mastodon CLI offers various commands for cleaning up files & the database so that you will not clutter your server with content that is no longer needed. I chose a selection of those and am running the lighter tasks once a week and the heavy tasks once a month. At the time of writing, my server is just one week old, so take these with a grain of salt - they have not been tried and tested.

First, create a symlink to Ruby so that the cron jobs can find it. Writing to /usr/bin requires root rights, so do this before switching users:

sudo ln -s /home/mastodon/.rbenv/shims/ruby /usr/bin/ruby

Then switch to the mastodon user, as the cleanup jobs will run under it:

su - mastodon

Then edit the mastodon user's crontab:

crontab -e

And insert:

RAILS_ENV=production


###### Weekly #######

# Remove locally cached copies of media attachments from other servers.

0 3 3,10,17,24 * * live/bin/tootctl media remove --days=2

# Remove local thumbnails for preview cards.

0 3 5,12,19,26 * * live/bin/tootctl preview_cards remove --days=30


###### Monthly #######

# Generate and broadcast new RSA keys, as part of security maintenance.
0 3 1 * * live/bin/tootctl accounts rotate --all

# Prune remote accounts that never interacted with local users
# Remove remote accounts that no longer exist. Queries every single remote account in the database to determine if it still exists on the origin server, and if it doesn't, then remove it from the database. Accounts that have had confirmed activity within the last week are excluded from the checks, in case the server is just down.
0 3 7 * * live/bin/tootctl accounts prune && live/bin/tootctl accounts cull


# Scans for files that do not belong to existing media attachments, and remove them. Please mind that some storage providers charge for the necessary API requests to list objects. Also, this operation requires iterating over every single file individually, so it will be slow.

0 3 14 * * live/bin/tootctl media remove-orphans

# Remove unreferenced statuses from the database, such as statuses that came from relays or from users who are no longer followed by any local accounts, and have not been replied to or otherwise interacted with.

# This is a computationally heavy procedure that creates extra database indices before commencing, and removes them afterward.

0 3 21 * * live/bin/tootctl statuses remove --days=60


Important: Make sure you leave enough time between these jobs and the nightly maintenance above - you won't want the automated reboot interfering with them!

Once you're done, you can switch back to the admin user:

exit

Automated alerts

Disk usage

You will want to watch your hard disk so that it doesn't run full. I have set up automated disk usage alerts for when usage reaches a threshold of 90%.

First, configure sendmail (see above) so that the automated e-mails can go through.

Create a script file in your home directory:

touch check_disk.sh
chmod +x check_disk.sh


Edit the file and paste the following content, adapting it with the desired threshold and your e-mail address:

#!/bin/bash

#fedingo.com/shell-script-to-check-disk-space-and-send-email-alerts/

CURRENT=$(df / | grep / | awk '{ print $5}' | sed 's/%//g')

THRESHOLD=90

if [ "$CURRENT" -gt "$THRESHOLD" ] ; then

    echo "Current usage: $CURRENT%" | mail -s "Disk Space Alert" -aFrom:admin@mydomain.foo recipient@mydomain.foo

fi


Then add a cron job to run this script regularly. https://crontab.cronhub.io/ will help with defining your cron expression.
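For example, to check once an hour (adjust the path to where you created the script):

0 * * * * /home/admin/check_disk.sh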

Memory

For checking the RAM, the percentage can be extracted with:

CURRENT=$(free -tm | grep Total: | awk '{ print $3/$2*100}')
CURRENT=${CURRENT%.*}
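
Following the same pattern as the disk space script above, a complete memory alert script would then look like this (the threshold and addresses are examples):

#!/bin/bash

# Percentage of RAM currently in use, truncated to an integer
CURRENT=$(free -tm | grep Total: | awk '{ print $3/$2*100}')
CURRENT=${CURRENT%.*}

THRESHOLD=90

if [ "$CURRENT" -gt "$THRESHOLD" ] ; then

    echo "Current memory usage: $CURRENT%" | mail -s "Memory Alert" -aFrom:admin@mydomain.foo recipient@mydomain.foo

fi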


Services

For systemd services, you can create a script that looks like this:

#!/bin/bash

CURRENT=$(systemctl | grep -E "(failed|error)")

if ! [[ -z "$CURRENT" ]]; then

# Send yourself an e-mail
echo "Service down: $CURRENT" | mail -s "Service Down Alert" -aFrom:admin@mydomain.foo recipient@mydomain.foo

# Try to extract the service name from the failure line and restart the service
SERVICENAME=$(systemctl | grep -E "(failed|error)" | cut -d' ' -f2)
sudo systemctl restart ${SERVICENAME} > /dev/null

fi

Useful commands for upgrading

Always follow the official upgrade guide, but if you need to upgrade to an intermediate version, here are the commands you can run (as the mastodon user, inside the live directory). This can be unstable, so make extra sure that you have working backups!

RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install <version>
bundle install
yarn install
RAILS_ENV=production SKIP_POST_DEPLOYMENT_MIGRATIONS=true bundle exec rails db:migrate
RAILS_ENV=production bundle exec rails assets:precompile NODE_OPTIONS=--openssl-legacy-provider
RAILS_ENV=production bundle exec rails db:migrate
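Afterwards, switch back to a user with sudo rights and restart the Mastodon services; assuming the standard systemd unit names from the official setup guide, that is:

sudo systemctl restart mastodon-sidekiq mastodon-streaming mastodon-web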


Also, diff the config file in /home/mastodon/live/dist/nginx.conf against /etc/nginx/sites-available/mastodon to see if there are any important config changes.
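For example:

diff /home/mastodon/live/dist/nginx.conf /etc/nginx/sites-available/mastodon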

Last edited: 4 December 2023 19:19:23