• 0 Posts
  • 24 Comments
Joined 2 years ago
Cake day: July 31st, 2023

  • I just set up Readeck a few weeks ago, and I’ve been liking it. Very minimalist, utilitarian. One feature I’d like that isn’t included is the ability to add specific labels or collections to the sidebar, but that’s my only quibble so far.

    It has an official browser extension for adding URLs to it, but if you can’t or don’t want to use that, it has a nice API. I use the API to add bookmarks from my phone via a termux-url-opener script (roughly along the lines of the sketch below), which is just as easy as the extension - just hit the “share” button and select Termux, and it does the rest.
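
    For anyone curious, here’s a minimal Python sketch of that kind of script. To be clear about the assumptions: READECK_URL and READECK_TOKEN are placeholders you’d fill in, and I’m assuming the bookmark-creation endpoint is POST /api/bookmarks with a bearer token - check your instance’s API docs if yours differs. termux-url-opener just has to be an executable in ~/bin that gets the shared URL as its first argument.

    ```python
    #!/usr/bin/env python3
    # Hypothetical termux-url-opener handler that sends a shared URL to Readeck.
    # Assumptions: READECK_URL / READECK_TOKEN are filled in by you, and your
    # instance exposes bookmark creation at POST /api/bookmarks.
    import json
    import os
    import sys
    import urllib.request

    READECK_URL = os.environ.get("READECK_URL", "https://readeck.example.org")
    READECK_TOKEN = os.environ["READECK_TOKEN"]  # API token from Readeck's settings

    def add_bookmark(url: str) -> None:
        req = urllib.request.Request(
            f"{READECK_URL}/api/bookmarks",
            data=json.dumps({"url": url}).encode(),
            headers={
                "Authorization": f"Bearer {READECK_TOKEN}",
                "Content-Type": "application/json",
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            print(f"Readeck answered {resp.status} for {url}")

    if __name__ == "__main__":
        # termux-url-opener passes the shared URL as the first argument
        add_bookmark(sys.argv[1])
    ```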


  • Avoiding supporting bad actors in the FOSS community is probably far easier than in the closed, commercial software space, given that all the code, discussion, and money are out in the open.

    Also, I think the proportion of fascists and bad actors in the FOSS community is probably lower than elsewhere in the first place, given that the community is built on the free and open sharing of work and knowledge.



  • I wouldn’t say it’s a big mistake - you’ve likely still got a few years left on your current drives as-is. And you can replace them with same- or larger-capacity drives one at a time to spread the cost out.

    Keep an eye out for retired enterprise SSDs on eBay or the like - I got lucky and found mine there for $20 each, with 5 years of uptime but basically nothing written to them, so no wearout at all - they probably just sat in a server with static data for a full refresh cycle. They’ve been great.




  • I delved into exactly this when I was running Proxmox on consumer SSDs, since they were wearing out so fast.

    Proxmox does a ton of logging, and a ton of small updates to places like /etc/pve and /var/lib/pve-cluster as part of cluster communications, and also to /var/lib/rrdcached for the web UI metrics dashboard, etc. All of these small writes go through huge amounts of write amplification via ZFS, so a small write to the filesystem ends up being quite a large write to the backing disk itself (a rough way to measure this on your own setup is sketched at the end of this comment).

    I found that VMs running on the same ZFS pool didn’t have quite the same degree of write amplification when their writes were cached - they would accumulate their small writes into one large one at intervals, and the amplification on the larger dump would be smaller.

    For a while I worked on identifying everywhere these small writes were happening and backing those directories with HDDs instead of SSDs, including moving /var/log in each VM onto its own disk on the same HDD-backed zpool, and my disk wearout issues mostly stopped.

    Eventually, though, I found some super cheap retired enterprise SSDs on eBay, and moved everything back to the much simpler stock configuration. Back to high sustained SSD writes, but I’m 3 years in and still at only around 2% wearout. They should last until the heat death of the universe.
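
    If you want to gauge this on your own setup, here’s a rough, Linux-only sketch (the device name is a placeholder) that samples /proc/diskstats and extrapolates a daily write figure. Comparing what the physical SSD actually absorbs against what your guests think they’re writing gives you a feel for the amplification.

    ```python
    #!/usr/bin/env python3
    # Rough sketch: sample sectors-written for one block device from /proc/diskstats
    # and extrapolate a daily write figure. "sda" is a placeholder - point it at the
    # device backing your pool (sda, nvme0n1, ...). Read-only, stdlib only.
    import sys
    import time

    DEVICE = sys.argv[1] if len(sys.argv) > 1 else "sda"
    SECTOR_BYTES = 512  # /proc/diskstats always counts 512-byte sectors
    INTERVAL = 60       # seconds between samples

    def sectors_written(device: str) -> int:
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    return int(fields[9])  # 10th field: sectors written
        raise SystemExit(f"device {device} not found in /proc/diskstats")

    before = sectors_written(DEVICE)
    time.sleep(INTERVAL)
    after = sectors_written(DEVICE)

    written_mib = (after - before) * SECTOR_BYTES / 2**20
    per_day_gib = written_mib / 1024 * (86400 / INTERVAL)
    print(f"{DEVICE}: {written_mib:.1f} MiB written in {INTERVAL}s (~{per_day_gib:.1f} GiB/day)")
    ```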


  • The rclone FUSE mount essentially lives in the memory of that container, and doesn’t translate back into the filesystem that the host presents into the container.

    Since rclone is available in the Debian repos, the simplest and easiest option would be to do the rclone mount on the host and then pass it into the Plex container via a bind mount.

    If you want to keep the rclone mounting containerized though (or if your Proxmox host is clustered, you want to mount on the host, and you need the mount shared between your nodes), you can use rclone’s experimental but built-in NFS server feature: https://rclone.org/commands/rclone_serve_nfs/

    Make sure your two containers can talk to each other over a secure network (“this server does not implement any authentication so any client will be able to access the data”), start the NFS server in the rclone container, and mount it via NFS in the Plex container - roughly the sequence sketched below.
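
    Here’s a minimal sketch of the two halves. Placeholders I’m assuming: a remote called “gdrive:”, the rclone container reachable as “rclone-host” on the private network, and port 2049; the mount options mirror the example in the rclone_serve_nfs docs, so double-check them against your rclone version. Normally you’d just run these commands directly in each container - Python only wraps them here to show the sequence.

    ```python
    #!/usr/bin/env python3
    # Sketch of the two halves of the rclone-over-NFS approach. Run "serve" inside
    # the rclone container and "mount" inside the Plex container. Placeholders:
    # remote "gdrive:media", hostname "rclone-host", port 2049, /mnt/media.
    import subprocess
    import sys

    role = sys.argv[1] if len(sys.argv) > 1 else "serve"

    if role == "serve":
        # Inside the rclone container: expose the remote over NFS (experimental).
        subprocess.run([
            "rclone", "serve", "nfs", "gdrive:media",
            "--addr", ":2049",
            "--vfs-cache-mode", "full",  # optional, but helps with media playback
        ], check=True)
    else:
        # Inside the Plex container: mount it. Options follow the rclone_serve_nfs
        # docs example - verify them for your version before relying on this.
        subprocess.run([
            "mount", "-t", "nfs",
            "-o", "port=2049,mountport=2049,tcp",
            "rclone-host:/", "/mnt/media",
        ], check=True)
    ```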

    Good luck!





  • I run a 3-node HA cluster. The extra functionality is really nice - live migrations are amazing, and the ability to move guests around gives you a lot of flexibility.

    If your third box supports it, I’d recommend installing Proxmox VE alongside PBS on it (PVE and PBS can run on the same node), and then creating a cluster out of all 3. You don’t have to run any guests on that machine if you don’t want to, but having the flexibility to is nice, along with having everything unified into one cluster.
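
    Forming the cluster is just a couple of pvecm commands. A quick sketch, with the cluster name “homelab” and the first node’s address 192.0.2.10 as placeholders (Python only wraps the commands here for illustration):

    ```python
    #!/usr/bin/env python3
    # Sketch of forming a 3-node Proxmox cluster with pvecm. Placeholders:
    # cluster name "homelab", first node at 192.0.2.10 - substitute your own.
    import subprocess
    import sys

    def run(cmd: list[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    if sys.argv[1:] == ["create"]:
        # On the first node (e.g. the combined PVE + PBS box):
        run(["pvecm", "create", "homelab"])
    elif sys.argv[1:] == ["join"]:
        # On each of the other two nodes, pointing at the first node:
        run(["pvecm", "add", "192.0.2.10"])
    else:
        # Afterwards, check quorum and membership from any node:
        run(["pvecm", "status"])
    ```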










  • I switched a workstation to Secureblue for the very specific security priorities targeted by that project, but I think for the majority of users, the main reason for not switching to atomic is one you mentioned: why fix what isn’t broken? The main selling point promoted to potential new users seems to be that updates don’t break anything, but I can’t remember a single time since Debian Sarge that an update broke anything for me, and I actually find the rpm-ostree package layering and updating process to be far more of a headache than otherwise.

    Unless it’s prepackaged like a steam deck, moving from the traditional way of doing things to atomic is a major change. Like any major change, people need a good reason to make it, and I think right now the only compelling ones are either hyper-specific (switching to okd and needing to build it on coreos, wanting to move to a specific atomic project, etc.), or just general curiosity.