At the time of this commit, I've changed to Ubuntu Server 20.04. Most of this still applies, I guess.
So just use `docker exec` for doing this stuff. See the example for a database dump:

```shell
docker exec postgres-db pg_dump > backup.bak
```

The above command will put the output in `backup.bak` on the host machine, since the redirection is done by the host shell. You can do this for everything, from backing up to just catting something.
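The reverse direction works as well. A sketch for restoring that dump, assuming the default `postgres` superuser (adjust `-U` to your setup):

```shell
# -i keeps stdin open so the dump can be piped into psql inside the container
docker exec -i postgres-db psql -U postgres < backup.bak
```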
- cd /etc/netplan
- (only once) sudo cp 00-installer-config.yaml 00-installer-config.yaml.bak
- change accordingly:

```yaml
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: no
      addresses:
        - 192.168.178.101/16
      gateway4: 192.168.178.1
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
```

- sudo netplan apply
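A safer alternative when working over SSH is `netplan try`, which applies the config but rolls it back automatically (after 120 seconds by default) unless you confirm:

```shell
sudo netplan try       # confirm with Enter, or it reverts
ip addr show eno1      # verify the static address afterwards
```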
- sudo -i
- curl -L http://install.pivpn.io | bash
- port 51820
enabling firewall
```shell
sudo apt install ufw
ufw enable
ufw allow <port>
ufw deny <port>
```
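For this box concretely (port 51820 from the PiVPN step, plus SSH), the rules could look like the sketch below. Allowing SSH before enabling the firewall avoids locking yourself out:

```shell
sudo ufw allow ssh          # keep SSH reachable
sudo ufw allow 51820/udp    # WireGuard listens on UDP
sudo ufw enable
sudo ufw status verbose     # double-check the rules
```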
ban users for login attempts
- sudo apt install fail2ban
- creates folder /etc/fail2ban
- sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
- sudo vim /etc/fail2ban/jail.local
[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 6
bantime = -1

- See the backup script in this repository
- Check if the directory and the subdirectory are correct
- sudo crontab -e
- every week on Monday 02:00 ->
0 2 * * 1 <path-to-repo>/backup-script.sh
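The repo's script isn't shown here, but a minimal sketch of what such a backup script might look like (hypothetical paths and names, adjust to the actual repo):

```shell
#!/bin/sh
# Hypothetical sketch: tar a source directory into a date-stamped
# archive inside a destination directory.
backup() {
  src="$1"
  dest="$2"
  mkdir -p "$dest"
  # -C so the archive contains only the directory name, not the full path
  tar -czf "$dest/backup-$(date +%Y-%m-%d).tar.gz" \
      -C "$(dirname "$src")" "$(basename "$src")"
}

# usage example with throwaway directories:
mkdir -p /tmp/demo-src
echo "hello" > /tmp/demo-src/file.txt
backup /tmp/demo-src /tmp/demo-dest
ls /tmp/demo-dest
```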
- Install Etcher if not done so already
- Use the tool to flash backup on SD card
- Plug in the USB drive
- ls -l /dev/disk/by-uuid/ -> remember the UUID of /dev/sda1 for step 6
- sudo mkdir /media/usb (already done, no need to do this again)
- sudo chown -R pi:pi /media/usb (already done, no need to do this again)
- sudo mount /dev/sda1 /media/usb -o uid=pi,gid=pi (already done, no need to do this again)
- to auto mount on reboot -> sudo vim /etc/fstab -> add the line: UUID=<uuid from 2> /media/usb ntfs-3g auto,nofail,noatime,users,rw,uid=pi,gid=pi 0 0
- note that ntfs-3g is for an NTFS file system, vfat for FAT32
chsh -s /bin/bash
So I've basically followed the following blog.
In short:
- Use the `certbot/certbot` image in docker-compose for generating the certificates
- Commands in certbot and nginx are used to automatically renew the certificates when they expire
- There's an `init` script in the `web-proxy` directory that needs to be executed from the root directory to work properly
Quirks:
- When executing the `init` script, the certbot image will generate certificates in the `web-proxy/data` directory.
- However, these certificates won't have the correct rights, as they will be generated by the root user.
- So every time we will have to `chown -R apper web-proxy/data` and `chgrp -R apper web-proxy/data` to be able to rebuild docker-compose 😟
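Since `chown` accepts a `user:group` pair, the two commands can be combined into one (assuming `apper` is both the user and the group name):

```shell
sudo chown -R apper:apper web-proxy/data
```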