

You should only enter a password once to log in, so maybe we just use our machines 1000x more than other people?


This is so cool! Thanks for your work uncovering these things, and thanks for posting it.


Podman/Docker leave behind old images, image layers, and containers that need to be cleaned up occasionally; podman system prune will do so.
If 8TB was taken up quickly or unexpectedly, it might be something like a container failing to start and being recreated over and over, leaving each failed container behind as it goes. podman ps --all will list all containers, running or stopped. Before doing the system prune run that and podman image ls --all to see if anything looks amiss.
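The inspect-then-prune flow above can be sketched like this (a hedged example - run the inspection commands first and only prune once nothing looks amiss):

```shell
# List every container, running or stopped - a long tail of exited
# containers from the same image suggests a crash/recreate loop.
podman ps --all

# List all images, including dangling intermediate layers.
podman image ls --all

# Show what each storage category (images, containers, volumes) is using.
podman system df

# Reclaim space from stopped containers, dangling images, and build
# cache. Add --all to also remove unused images, --volumes for volumes.
podman system prune
```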
I just set up Readeck a few weeks ago, and I’ve been liking it. Very minimalist, utilitarian. One feature I’d like that isn’t included is the ability to add specific labels or collections to the sidebar, but that’s my only quibble so far.
It has an official browser extension for adding urls to it, but if you can’t or don’t want to use that, it has a nice api. I use the api to add bookmarks from my phone using a termux-url-opener script, which is as easy as the extension - just hit the “share” button and select termux, and it does the rest.
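For reference, a termux-url-opener along those lines is only a few lines; the instance URL and token are placeholders, and the /api/bookmarks endpoint is from memory - verify it against your Readeck version's API docs:

```shell
#!/data/data/com.termux/files/usr/bin/sh
# ~/bin/termux-url-opener - Termux invokes this with the shared URL as $1.
# READECK_URL and READECK_TOKEN are placeholders for your own instance
# and API token.
READECK_URL="https://readeck.example.com"
READECK_TOKEN="your-api-token-here"

curl --silent --show-error \
  --header "Authorization: Bearer ${READECK_TOKEN}" \
  --header "Content-Type: application/json" \
  --data "{\"url\": \"$1\"}" \
  "${READECK_URL}/api/bookmarks"
```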
Navigating around supporting bad actors in the foss community is probably far easier than in the closed, commercial software space, given that all the code, discussion, and money are out in the open.
Also I think the proportion of fascists and bad actors in the foss community is probably lower than elsewhere in the first place, given that the community is based on the free and open sharing of work and knowledge.


First time I’ve ever seen this, and I love it.



I wouldn’t say it’s a big mistake, you’ve likely still got a few years left on your current drives as-is. And you can replace them with same- or larger-capacity drives one at a time to spread the cost out.
Keep an eye out for retired enterprise ssds on ebay or the like - I got lucky and found mine there for $20 each, with 5 years of uptime but basically nothing written to them so no wearout at all - probably just sat in a server with static data for a full refresh cycle. They’ve been great.


Writeback is the actual write cache there - it accumulates writes and flushes them in larger chunks. It doesn’t make a huge difference, nor did tweaking zfs cache settings when I tried it a few years ago, but it can help if the guest is doing a constant stream of very small writes.
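On Proxmox the cache mode is set per virtual disk; a rough sketch, where the vmid 100, the scsi0 slot, and the storage/volume name are all placeholders - keep whatever qm config shows for your disk:

```shell
# Check the current disk line for VM 100.
qm config 100 | grep scsi0

# Switch that disk's cache mode to writeback.
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback
```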


The datasheet for the Samsung PM893 3.84TB drives says they’re warrantied for 7PBW and 2 million hours MTBF (a drive can write 7PB total, or run an average of 2 million hours, before failure). Quite pricey, but it looks like it’ll run forever in a home environment.
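To put 7PBW in home perspective, assume a (generous) steady 100GB written per day - the rated endurance works out to nearly two centuries:

```shell
# 7 PBW ≈ 7,000,000 GB of rated writes; 100 GB/day is an assumed load.
days=$((7000000 / 100))
years=$((days / 365))
echo "$days days ≈ $years years"   # 70000 days ≈ 191 years
```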
Good luck!


I delved into exactly this when I was running proxmox on consumer ssds, since they were wearing out so fast.
Proxmox does a ton of logging, and a ton of small updates to places like /etc/pve and /var/lib/pve-cluster as part of cluster communications, and also to /var/lib/rrdcached for the web ui metrics dashboard, etc. All of these small writes go through huge amounts of write amplification via zfs, so a small write to the filesystem ends up being quite a large write to the backing disk itself.
I found that vms running on the same zfs pool didn’t have quite the degree of write amplification when their writes were cached - they would accumulate their small writes into one large one at intervals, and amplification on the larger dump would be smaller.
For a while I worked on identifying everywhere these small writes were happening, and backing those directories with hdds instead of ssds, moving /var/log from each vm onto its own disk and moving it onto the same hdd-backed zpool, and my disk wearout issues mostly stopped.
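The /var/log relocation can be sketched roughly like this, assuming an hdd-backed pool named tank (all names and paths are placeholders, and you’d want to stop services writing to /var/log before the swap):

```shell
# Create an hdd-backed dataset, staged at a temporary mountpoint.
zfs create -o mountpoint=/mnt/varlog tank/varlog

# Copy the existing logs over, preserving permissions and attributes.
rsync -aX /var/log/ /mnt/varlog/

# Swap the dataset into place; the constant small log writes now land
# on the hdd pool instead of the ssds.
zfs set mountpoint=/var/log tank/varlog
```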
Eventually, though, I found some super cheap retired enterprise ssds on ebay, and moved everything back to the much simpler stock configuration. Back to high sustained ssd writes, but I’m 3 years in and still at only around 2% wearout. They should last until the heat death of the universe.


The rclone fuse mount exists only inside that container’s own mount namespace - it doesn’t propagate back into the filesystem the host presents to the container, so the host (and any other container) never sees it.
Since rclone is available in the debian repos, the simplest and easiest option would be to do the rclone mount on the host and then pass that via bind mounting into the Plex container.
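If Plex runs in a Proxmox lxc container, that looks something like the following - the remote name, container id 101, and paths are all placeholders:

```shell
# On the host: mount the remote. --allow-other lets processes besides
# the mounting user (including bind-mount consumers) read the fuse mount.
rclone mount remote:media /mnt/rclone-media --allow-other --daemon

# Bind-mount that path into lxc container 101 at /mnt/media.
pct set 101 -mp0 /mnt/rclone-media,mp=/mnt/media
```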
If you want to keep the rclone mounting containerized though (or if your Proxmox host is clustered, you want to mount it on the host, and you want the mount to be shared between your nodes), you can use rclone’s experimental but built-in nfs server feature: https://rclone.org/commands/rclone_serve_nfs/
Make sure your 2 containers can talk to each other over a secure network (“this server does not implement any authentication so any client will be able to access the data”), start the nfs server in the rclone container, and mount it via nfs in the Plex container.
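A minimal sketch of that setup, with the remote name, port, hostname, and mount path as placeholders - and again, the nfs server is unauthenticated, so keep it on a trusted network:

```shell
# In the rclone container: serve the remote over nfs on port 2049.
rclone serve nfs remote:media --addr :2049

# In the Plex container: mount it. rclone-host stands in for whatever
# name resolves to the rclone container on your internal network.
mount -t nfs -o port=2049,mountport=2049,tcp rclone-host:/ /mnt/media
```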
Good luck!


apt-get clean will clear the apt cache and should give you enough temporary storage headroom on /var to do things, but if you’re bumping up against this limit often, you’ll need to reconfigure your storage.
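A quick way to see how much the cache is costing you before and after clearing it (paths are the standard apt defaults):

```shell
# How much space the cached .deb files are using.
du -sh /var/cache/apt/archives

# Free space on the filesystem holding /var, before and after cleaning.
df -h /var
apt-get clean
df -h /var
```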


/var is often where processes dump a lot of data (logs, databases, etc), and subpartitioning of /var sets a cap so that when too much data is dumped there, the application crashes instead of the whole system. /var/log is often recommended to be subpartitioned separately as well, so that logging can still go on if the application data fills up and crashes.
These kinds of overruns can also be intentional DoS attacks, so the subpartitioning is often a security recommendation. NIST 800-171 requires separate partitions for /var, /var/log, /var/log/audit, and /var/tmp.
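In /etc/fstab that layout looks something like this - device names, filesystems, and options are illustrative placeholders:

```
# <device>             <mountpoint>    <fs>  <options>                     <dump> <pass>
/dev/vg0/var           /var            ext4  defaults                      0 2
/dev/vg0/varlog        /var/log        ext4  defaults                      0 2
/dev/vg0/varlogaudit   /var/log/audit  ext4  defaults                      0 2
/dev/vg0/vartmp        /var/tmp        ext4  defaults,nodev,nosuid,noexec  0 2
```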
I run a 3-node HA cluster. The extra functionality is really nice - live migrations are amazing, and the ability to move guests around gives you a lot of flexibility.
If your third box supports it, I’d recommend installing proxmox pve alongside pbs on it (pve and pbs can run on the same node), and then create a cluster out of all 3. You don’t even have to run any guests on that machine, if you want, but having the flexibility to is nice, along with having everything unified into one cluster.
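Cluster creation is only a couple of commands (the cluster name and IP below are placeholders):

```shell
# On the first node: create the cluster.
pvecm create homelab

# On each additional node: join it, pointing at the first node's IP.
pvecm add 192.168.1.10

# Verify quorum and membership.
pvecm status
```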


Forgejo (Gitea fork used by codeberg.org) is a lightweight self-hostable option, and has a web-ui-based file editor. It’s got an official docker image, and it’s packaged for freebsd, as well, which makes it very easy to deploy and maintain either containerized or on a server.
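A minimal docker compose sketch to try it out - the image tag and port mappings are illustrative, so check Forgejo’s docs for the current recommended setup:

```yaml
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9   # pin to a current release tag
    restart: unless-stopped
    volumes:
      - ./forgejo-data:/data                # repos, config, database
    ports:
      - "3000:3000"   # web ui
      - "2222:22"     # ssh for git push/pull
```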


With the size of modern linux kernels, I think 1GiB for a /boot partition is the absolute minimum I would go for a current full-sized distribution. You’ll run into these out-of-space issues on updates all the time otherwise.
I’ve used an old, out-of-support phone as a permanently plugged-in homeassistant control panel. Not quite self-hosting as in phone-server, but a fun easy project and a great way to keep an old device in use.
The bang syntax makes duckduckgo easily the best search engine - it’s a shortcut to everything, the perfect gateway to the internet.


Can you give us the full output of the following commands?
ip addr
sysctl net.ipv6.conf.all.disable_ipv6
This quote from your link on the main client e2ee issues captures the zeitgeist of modern tech so beautifully: