Well, you were warned. Below are topics regarding the infrastructure supporting friendsofdesoto.social, as well as some of the specific tweaks done to the setup. It is documented here to give users a better idea of what is going on behind the scenes, and to give other admins either some help or something to poke fun at.
This instance currently operates on a distributed architecture, with services divided across the following systems. The rough architecture of the instance as configured is documented below:
Below are the specifications of the instance servers as of 8/1/2023:
| Host Servers | |
| --- | --- |
| OS | Ubuntu |
| CPU | 2 |
| RAM | 10 GB |
| Disk | 80 GB |

| SQL Server | |
| --- | --- |
| OS | Ubuntu |
| CPU | 2 |
| RAM | 6 GB |
| Disk | 40 GB |
Hosting of the primary server instance is via Kamatera Cloud in their Chicago data center. This location was chosen for its geographic and weather stability.
Hosting of all media content is via Backblaze B2.
Media content distribution is done via Bunny CDN through their global distribution network. Additional caching is done via Bunny Perma-Cache storage replicated in North America, South America, Europe, and Australia.
Web content for the Mastodon front end is hosted on a Bunny Flash Storage Zone replicated to North America, South America, Europe, Japan, and Australia. It is distributed via Bunny CDN through their global distribution network.
Backups for this instance are handled using Borg backup combined with some custom scripting. All backups are transmitted via SSH to an offsite Borg server. The backup target is a mirrored, encrypted ZFS pool, and regular snapshots of the pool are retained to guard against corruption of the backup storage.
Backups of databases are scheduled every 6 hours; backups of the web/app servers are scheduled daily. This information is also visible on the public status page.
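As a rough illustration, that schedule amounts to cron entries along these lines (the script path is hypothetical, and each entry lives on its respective server):

# SQL server: database backup every 6 hours
0 */6 * * * /root/backup/borg-backup.sh
# Web/app servers: full backup once daily
0 4 * * * /root/backup/borg-backup.sh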
In addition to conventional backups, storage snapshots of the PostgreSQL database system are kept to guard against logical corruption of the database. These snapshots are taken every 5 minutes and retained according to the following plan:
| Snapshot Interval | Retained For |
| --- | --- |
| Every 5 minutes | 2 hours |
| Every hour | 1 day |
| Every day | 5 days |
In the event of corruption to the database, these snapshots can be used to quickly restore known-good database copies with high granularity and minimal downtime.
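As a sketch of what such a restore might look like, assuming the snapshots are ZFS snapshots and a hypothetical dataset name of tank/postgres:

# Stop PostgreSQL so the data directory is quiescent
systemctl stop postgresql
# Roll back to a known-good snapshot; -r discards any newer snapshots
zfs rollback -r tank/postgres@auto-2023-08-01T12:05
systemctl start postgresql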
Below is a breakdown of the monthly costs of instance operations:
Below are items created for this instance. They may be useful for admins of other instances, or they may serve as a warning of what not to do. Who's to say?
Below is a PHP script that can be used in combination with the account.confirmed admin webhook to automatically send a welcome DM to new users.
<?php
//////////////////////////////////////
//
// File: Welcome.php
//
// Author: Mike Lendvay
//
/////////////////////////////////////
// This needs to point to an include that declares a Mastodon Bearer Token with posting
// privileges in the form below. You could do it directly here, but maybe don't
// $authorization = "Authorization: Bearer XXXXXXXXXXXXXXXXXX"
require '../config.php';
$data = json_decode(file_get_contents('php://input'));
!empty($data) or die("\nno object specified\n");
$user = "@" . $data->object->username;
$welcome = " Place a nice welcome message here. Make sure to limit it to less than 500 characters
minus space for the username. Also keep a space at the beginning of the string";
$request = array(
'status' => $user . $welcome,
'visibility' => 'direct',
);
$post = json_encode( $request );
//replace instance with the domain of the Mastodon Instance
$message = curl_init("https://instance/api/v1/statuses");
curl_setopt($message, CURLOPT_RETURNTRANSFER, true);
curl_setopt($message, CURLOPT_HEADER, true);
curl_setopt($message, CURLOPT_HTTPHEADER, array('Content-Type: application/json' , $authorization ));
curl_setopt($message, CURLOPT_POST, true);
curl_setopt($message, CURLOPT_POSTFIELDS, $post);
curl_setopt($message, CURLOPT_VERBOSE, 1);
$response = curl_exec($message);
curl_close($message);
print_r($response);
?>
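The hook can be exercised by hand with a fake payload; the URL below is a hypothetical deployment path, and object.username is the only field the script actually reads:

curl -X POST https://instance/welcome.php \
  -H 'Content-Type: application/json' \
  -d '{"event":"account.confirmed","object":{"username":"newuser"}}'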
Below is a version of the script built on top of Borg backup that is used to back up the instance. This has proved to be an efficient and inexpensive way to safely back up the server to offsite storage without costly third-party services or excessive storage utilization. The script is presented without storage configured, but on the local instance, two cloud providers are configured via rclone. The script can mount and unmount them, provided the rclone config names.
#!/bin/bash
###############################################################################
# File: Borg Backup Script
#
# Author: Mike Lendvay
#
###############################################################################
#
# Configuration Options
#
###############################################################################
# Source the file that sets the backup encryption passphrase
# (e.g. exports BORG_PASSPHRASE)
. .secrets
# Clear Repository for backup storage
BORG_REPO=
#
# For each repository, the pre-configured rclone remote name, a valid
# mount path, and a target folder must be provided. This script in
# theory supports any backup storage supported by rclone, and up to two
# repositories. It could be modified to target more, but that was
# outside the scope of the initial design.
#
declare -A REPO_1
REPO_1[bucket]=
REPO_1[mount]=
REPO_1[folder]=
declare -A REPO_2
REPO_2[bucket]=
REPO_2[mount]=
REPO_2[folder]=
#Name for backup sets
BACKUP_NAME=
#Default logging location
LOG=/var/log/borg.log
#Backup Options
OPTIONS=( \
--verbose \
--filter AME \
--list \
--stats \
--show-rc \
--compression zstd,20 \
--exclude-caches \
)
#Items excluded from backup
EXCLUSIONS=( \
--exclude '/etc/NetworkManager' \
--exclude '/etc/X11' \
--exclude '/etc/alternatives' \
--exclude '/etc/hosts' \
--exclude '/etc/hostname' \
--exclude '/etc/iproute2' \
--exclude '/etc/kernel' \
--exclude '/etc/ldap' \
--exclude '/etc/network' \
--exclude '/etc/ssh' \
--exclude '/etc/ppp' \
--exclude '/etc/perl' \
--exclude '/etc/issue' \
--exclude '/etc/resolv.conf' \
--exclude 'home/*/cache/' \
--exclude 'home/*/.cache/' \
--exclude '/root/.cache/' \
--exclude '/opt/*/.cache/' \
--exclude '/opt/*/cache/' \
)
#Backup selections
SELECTIONS=( \
/etc \
/home/mastodon \
/root \
/var/lib/postgresql/15/backup \
/var/lib/redis/dump.rdb \
)
#Database Selections
DATABASES=( mastodon_production )
#Retention periods for pruning
RETENTION=( \
--keep-hourly 7 \
--keep-daily 7 \
--keep-weekly 7 \
--keep-monthly 7 \
)
######################################################################
# End of configuration
######################################################################
# some helpers and error handling:
info() { printf "\n%s %s\n\n" "$( date )" "$*" | tee -a $LOG >&2; }
trap 'echo $( date ) Backup interrupted | tee -a $LOG >&2; exit 2' INT TERM
THIS_BACKUP=$BACKUP_NAME-$(date +"%Y-%m-%dT%H:%M:00")
BORG_REPO_1=${REPO_1[mount]}/${REPO_1[folder]}
info "Dumping databases"
for i in ${DATABASES[*]}
do
echo $i | tee -a $LOG
sudo -Hiu postgres pg_dump -Ft $i -f /var/lib/postgresql/15/backup/$i.bak 2>&1 | tee -a $LOG
done
# Backup the most important directories into an archive named after
# the machine this script is currently running on:
info "Starting backup"
#Backup #1
try=0
while :
do
[[ $try -eq 3 ]] && info "Backup has failed repeatedly and will not be retried" && break
info "Mounting backup repository"
if [ "${REPO_1[bucket]}" ]; then
rclone mount ${REPO_1[bucket]}: ${REPO_1[mount]} \
--daemon --vfs-cache-mode full --no-modtime \
--dump openfiles --transfers 32 \
2>&1 | tee -a $LOG
fi
info "Creating backup archive"
borg create "${OPTIONS[@]}" "${EXCLUSIONS[@]}" $BORG_REPO_1::$THIS_BACKUP "${SELECTIONS[@]}" 2>&1 | tee -a $LOG
backup_1_exit=${PIPESTATUS[0]}
case $backup_1_exit in
0)
break
;;
*)
info "Backup Failed. Will retry in 5 minutes..."
pkill borg
if [ "${REPO_1[bucket]}" ]; then fusermount -u ${REPO_1[mount]}; fi
((try++))
sleep 5m
continue
;;
esac
done
info "Pruning repository"
# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:
borg prune \
--list \
--prefix $BACKUP_NAME- \
--show-rc \
${RETENTION[*]} \
$BORG_REPO_1 \
2>&1 | tee -a $LOG
local_prune_exit=${PIPESTATUS[0]}
local_compact_exit=$?
remote_compact_exit=${PIPESTATUS[0]}
# Verifying Creates a great deal of download traffic and is off by default.
# Uncomment these lines to enable it.
#info "Verifying backup copy 1"
#borg check $BORG_REPO_1::$THIS_BACKUP 2>&1 | tee -a $LOG
info "Unmounting backup repository"
if [ "${REPO_1[bucket]}" ]; then fusermount -u ${REPO_1[mount]}; fi
if [ "${REPO_2[bucket]}" ]; then
info "Syncing backups to secondary repository"
rclone sync -v --fast-list --transfers 32 ${REPO_1[bucket]}:/${REPO_1[folder]} ${REPO_2[bucket]}:/${REPO_2[folder]} 2>&1 | tee -a $LOG
backup_2_exit=${PIPESTATUS[0]}
fi
exits=( ${backup_1_exit:-0} ${backup_2_exit:-0} ${local_prune_exit:-0} )
global_exit=0
for i in ${exits[*]}
do
if [ $i -gt 0 ] && [ $i -gt $global_exit ]
then global_exit=$i
fi
done
if [ $global_exit -eq 0 ]; then
info "Backup and prune finished successfully"
elif [ $global_exit -eq 1 ]; then
info "Backup and/or prune finished with warnings"
else
info "Backup and/or prune finished with errors"
fi
exit $global_exit
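For reference, the rclone remotes the script expects can be created non-interactively. The remote names and Backblaze B2 credentials below are placeholders; whatever names are used must match REPO_1[bucket] and REPO_2[bucket] above:

rclone config create b2-primary b2 account "$B2_KEY_ID" key "$B2_APP_KEY"
rclone config create b2-secondary b2 account "$B2_KEY_ID_2" key "$B2_APP_KEY_2"
# Sanity check: the new remote should list its buckets
rclone lsd b2-primary: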
Below is a simple script used for pulling the list of defederated servers and blocking them via iptables (special thanks to solarisfire for correcting my mistakes):
#!/bin/bash
#############################################################################
#
# Script to Firewall Defederated Servers from Local Instance
#
#############################################################################
#Set the name of the Mastodon Database
DB=mastodon_production
ipset -L defederated >/dev/null 2>&1
if [ $? -ne 0 ]; then
ipset create defederated hash:net
iptables -I INPUT -m set --match-set defederated src -j DROP
else
ipset flush defederated
fi
list=`sudo -u postgres psql -d $DB -t -c "select domain from domain_blocks where severity=1;"`
for i in $list
do
ips=`dig +short $i`
for ip in $ips
do
ipset add defederated $ip
done
done
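Because block lists (and the IPs behind blocked domains) change over time, the set benefits from periodic refreshing. A hypothetical /etc/cron.d entry to rebuild it nightly:

# /etc/cron.d/defederated -- refresh the block list at 03:15 every night
15 3 * * * root /usr/local/sbin/defederated-firewall.sh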