Upgrading the NAS
Set up Deluge
Deluge has a cool web interface, so we don’t need to SSH to the NAS for monitoring.
So let’s follow the tutorial for a Headless install:
$ sudo apt install deluged deluge-web deluge-console

Now we just need to open the ports used for the webUI on ufw:
sudo ufw allow <port>
sudo ufw enable
sudo ufw status

The rest of it can easily be configured through the WebUI, very convenient!
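The headless tutorial also has us run deluged and deluge-web as services. Recent Debian/Ubuntu packages ship their own units, but if yours don't, a minimal sketch for the daemon (assuming a dedicated `deluge` user exists) looks something like:

```ini
# /etc/systemd/system/deluged.service (hypothetical path/user, adapt to your setup)
[Unit]
Description=Deluge Bittorrent Client Daemon
After=network-online.target

[Service]
Type=simple
User=deluge
ExecStart=/usr/bin/deluged -d
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now deluged`; the same pattern works for `deluge-web`.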
Install new disk and new card
Having a 1TB disk in the cluster is a little sad, and since we’ve been using mfs (most free space) as the
file-creation policy, the 1TB drive has been empty anyway.
So we can just swap it out for a new 4TB drive!
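This is exactly why the 1TB drive stayed empty: mfs always creates new files on the branch with the most free space, while epmfs only considers branches where the parent directory already exists. A toy sketch of the selection logic (illustrative only, not mergerfs’s actual implementation):

```python
# Toy model of two mergerfs create policies.
def pick_branch_mfs(branches):
    """mfs: always the branch with the most free space."""
    return max(branches, key=lambda b: b["free"])

def pick_branch_epmfs(branches, parent_dir):
    """epmfs: most free space among branches that already contain the parent path.
    (Real mergerfs errors out if no branch qualifies; we fall back for simplicity.)"""
    eligible = [b for b in branches if parent_dir in b["dirs"]]
    return pick_branch_mfs(eligible or branches)

branches = [
    {"name": "1tb", "free": 900, "dirs": {"/photos"}},
    {"name": "4tb", "free": 3500, "dirs": {"/movies"}},
]
print(pick_branch_mfs(branches)["name"])               # 4tb: most free space always wins
print(pick_branch_epmfs(branches, "/photos")["name"])  # 1tb: only branch with /photos
```

Under mfs the bigger drive wins every single create, so the small drive never receives data.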
9.3% price increase… not too bad!
After installing it, I kept getting errors accessing the new drive, and changing SATA cables didn’t seem to help, so I suspected the fault was with the HBA card.
ACTIMED PCI-E to 4 SATA port expansion card/ Marvell 88SE9215 chipset £24.99
Updated version of the NAS costs table:
| | | Price (£) |
|---|---|---|
| | | 30.00 |
| exp card | ACTIMED expansion card | 24.99 |
| HDD | Seagate Ironwolf 4TB | 83.99 |
| HDD | Seagate Ironwolf 4TB | 83.99 |
| HDD | WD Blue 4TB (pulled from another computer) | |
| HDD | Seagate Ironwolf 4TB | 91.80 |
| Total | | 314.77 |
| Grand Total | | 731.41 |
Now everything works quite smoothly!
We can also switch the policy back to epmfs (existing path, most free space).
Remove disk spin down
Since we are now using the NAS as a 24/7 torrent machine, we don’t want the disks repeatedly spinning up and down anymore, so let’s switch that off.
$ sudo ./openSeaChest_PowerControl --device /dev/sdx --idle_a default --idle_b default --idle_c default --standby_z disable
$ sudo ./openSeaChest_PowerControl -d /dev/sdx --showEPCSettings

Serve Static Images from S3
Actually this hasn’t got much to do with the NAS, but eh.
I want to build a number of image-based blog posts. These images could certainly be hosted directly in my GitHub repository: each image is around 1MB, and GitHub’s recommended repository size limit is 1GB, so we won’t be angering our Overlords just yet.
However, that limits our blog to fewer than 1,000 images, and if we want to scale up beyond that point, cleaning up a mistake will be a lot more difficult.
One of the best options we have for hosting static images is AWS. It’s a useful tool to learn for the future as well, so let’s start!
S3
Most of the initial set up is done with AWS’s web interface.
First we need to create a bucket in S3.
We chose us-east-2 (Ohio) because it’s cool!
us-east-1 (N.Virginia) is probably the default, and they are the same price,
but not using the default is cool!
CloudFront
Although we can download from the S3 bucket directly, and even expose URLs straight from it, hosting images directly on S3 is probably a really bad idea. AWS discourages it too: downloads from S3 are charged from the very first request, while CloudFront gives us 1TB of data transfer out to the internet and 10,000,000 HTTP or HTTPS requests free each month.
Sync NAS folder to S3
Now that we have CloudFront connected to our S3 bucket, the files in it are automatically exposed to the internet through the provided domain name. We can either upload the folder manually from the web interface, or use the AWS CLI.
First we create an IAM user, and give it the AmazonS3FullAccess permission.
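AmazonS3FullAccess works, but a least-privilege inline policy scoped to just this bucket is safer. A sketch (the bucket name is a placeholder); `aws s3 sync` needs ListBucket on the bucket plus object-level Put/Get, and Delete only if you use `--delete`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<your-bucket>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::<your-bucket>/*"
    }
  ]
}
```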
Then we can just run this to sync our folders up to S3, making them available in the blog!
aws s3 sync "$LOCAL_FOLDER" "$S3_BUCKET" \
--cache-control "public, immutable, max-age=31536000" \
--content-type "image/jpeg" \
--size-only

We make use of some headers to reduce the number of requests made to our S3 bucket.
We use the Cache-Control header with three directives: public means everyone can keep a cache if they want to, and immutable with max-age=31536000 (one year) means that if you have a cached copy, you don’t have to make requests back to the server to check whether it’s still good for an entire year.
--size-only just means that when we sync, we don’t unnecessarily check whether an image has been modified: as long as the server’s version has the same name and size, we skip uploading it from the NAS.
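The skip decision behind `--size-only` can be pictured like this (a toy model, not the AWS CLI’s actual code): an object is uploaded only if its key is missing remotely or the sizes differ, and modification times are ignored entirely.

```python
# Toy model of `aws s3 sync --size-only`: maps of object key -> size in bytes.
def needs_upload_size_only(local, remote):
    """Return the keys that would be uploaded."""
    return [key for key, size in local.items()
            if key not in remote or remote[key] != size]

local = {"a.jpg": 1_000_000, "b.jpg": 1_200_000, "c.jpg": 900_000}
remote = {"a.jpg": 1_000_000, "b.jpg": 1_100_000}
print(needs_upload_size_only(local, remote))  # ['b.jpg', 'c.jpg']
```

a.jpg matches by name and size, so it is skipped; b.jpg changed size and c.jpg is new, so both go up.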
Set up Plex
curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo gpg --dearmor -o /usr/share/keyrings/plex-archive-keyring.gpg
echo deb [signed-by=/usr/share/keyrings/plex-archive-keyring.gpg] https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
sudo apt install plexmediaserver
sudo ufw allow <plex-port>/tcp

Check if it is running:
sudo systemctl status plexmediaserver

Everything is done in the webUI, easy easy!
Set up Jellyfin
After using Plex for a few weeks, I found it too limiting on the free tier. Although the lifetime subscription is tempting, Jellyfin looked really good as an alternative, especially as it offers video transcoding as a free feature.
An official install script is provided, so it’s easy enough:
curl https://repo.jellyfin.org/install-debuntu.sh | sudo bash
sudo ufw allow <jellyfin-port>/tcp

Some extra installation and configuration is required for transcoding:
sudo apt update && sudo apt install -y jellyfin-ffmpeg7
sudo usermod -aG render jellyfin
sudo apt install -y intel-opencl-icd
sudo systemctl restart jellyfin

Then we can monitor it like any other service:
sudo systemctl status jellyfin

Everything else is done in the webUI, so easy easy as well!
Well, not really: Jellyfin is a lot more of a stickler for file naming and directory structure. It’s nothing unmanageable, just some renaming work.
The transcoding thing is great! I had some HDR resources but no good HDR screens, so I really needed it to see more sensible colour palettes.
Set up Awair Element Logging
I bought an Awair Element for some air quality monitoring at home. I probably have the technical capability to bodge something together from off-the-shelf electronics, but for less than £80, I see no reason not to just buy a premade one.
We’ve already set up Prometheus + Grafana for data logging and monitoring on the NAS, so there’s no reason not to use that. Maybe in the future we can think about upgrading to a dedicated Home Assistant, but this is good enough for now.
There are existing exporters on GitHub, such as rtrox/prometheus-awair-exporter,
but writing our own exporter (with help from GenAI) is not difficult,
and getting an existing one working on our machine would be equal if not more effort.
Create /opt/awair-exporter.py:
import requests
from prometheus_client import start_http_server
from prometheus_client.core import GaugeMetricFamily, REGISTRY
import time


class AwairCollector(object):
    def __init__(self):
        # Your Awair's local IP
        self.endpoint = "http://192.168.0.<awair-ip>/air-data/latest"

    def collect(self):
        try:
            # This only runs when Prometheus requests /metrics
            r = requests.get(self.endpoint, timeout=5)
            data = r.json()
            # Create the metrics on the fly
            yield GaugeMetricFamily('awair_score', 'Awair Score', value=data['score'])
            yield GaugeMetricFamily('awair_temp_celsius', 'Temperature', value=data['temp'])
            yield GaugeMetricFamily('awair_humidity_percent', 'Humidity', value=data['humid'])
            yield GaugeMetricFamily('awair_co2_ppm', 'CO2 ppm', value=data['co2'])
            yield GaugeMetricFamily('awair_voc_ppb', 'VOC ppb', value=data['voc'])
            yield GaugeMetricFamily('awair_pm25_ugm3', 'PM2.5 ug/m3', value=data['pm25'])
            yield GaugeMetricFamily('awair_dew_point_celsius', 'Dew Point Temperature', value=data['dew_point'])
            yield GaugeMetricFamily('awair_abs_humidity_gm3', 'Absolute Humidity', value=data['abs_humid'])
            yield GaugeMetricFamily('awair_pm10_est_ugm3', 'Estimated PM10 concentration', value=data['pm10_est'])
            yield GaugeMetricFamily('awair_voc_ethanol_raw', 'Raw Ethanol sensor output', value=data['voc_ethanol_raw'])
            yield GaugeMetricFamily('awair_voc_h2_raw', 'Raw H2 sensor output', value=data['voc_h2_raw'])
        except Exception as e:
            print(f"Scrape failed: {e}")


if __name__ == '__main__':
    # Unregister default Python metrics (optional, keeps things clean)
    # REGISTRY.unregister(prometheus_client.PROCESS_COLLECTOR)
    REGISTRY.register(AwairCollector())
    start_http_server(<awair-port>)
    print("Awair Pull-Exporter running on port <awair-port>...")
    # Just keep the main thread alive
    while True:
        time.sleep(1)

Install pip3, then requests and prometheus_client for the root user, since the exporter will be run as a service.
Installing dependencies directly for root is not good practice; set up a venv if you want to do this properly.
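A sketch of those installs (package names assumed from the imports above):

```shell
sudo apt install -y python3-pip
sudo pip3 install requests prometheus_client
```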
Then create a systemd unit for it (e.g. /etc/systemd/system/awair-exporter.service):

[Unit]
Description=Custom Awair Prometheus Exporter
After=network.target
[Service]
ExecStart=/usr/bin/python3 /opt/awair-exporter.py
Restart=always
User=root
[Install]
WantedBy=multi-user.target

Add the new exporter to /etc/prometheus/prometheus.yml:
scrape_configs:
  - job_name: node
    ...
  - job_name: awair-exporter
    scrape_interval: 10s
    static_configs:
      - targets: ['localhost:<awair-port>']

Then restart Prometheus:
sudo systemctl daemon-reload
sudo systemctl restart prometheus

Now everything should be working on the Grafana webUI!
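One step that’s easy to forget: the exporter’s own service needs to be enabled and started too (assuming the unit file was saved as /etc/systemd/system/awair-exporter.service):

```shell
sudo systemctl enable --now awair-exporter
sudo systemctl status awair-exporter
```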