• 0 Posts
  • 47 Comments
Joined 7 months ago
Cake day: June 4th, 2025




  • Thanks! BentoPDF is fantastic, I never knew something like this existed.

    I have a todo list where I keep track of services I might be interested in one day. I read your post a few hours ago and added Bento to my list, thinking I might get around to it in a few days/weeks/months. Then, out of nowhere, 15 minutes ago I needed to crop and split a PDF and realized I didn’t have anything to do it. I fired Bento up and was done in under a minute.



  • Disagree. Their priorities are backwards.

    Company A releases a product, it runs closed-source proprietary firmware on-board, and it can’t be updated by the user even if bugs or compatibility issues are found later on in the product’s life cycle.

    Company B releases a product, it runs closed-source proprietary firmware on-board, but it can be updated by the user if bugs or compatibility issues are found later on in the product’s life cycle.

    According to the FSF, product A gets the stamp of approval, product B doesn’t. That makes no sense.


  • I use node_exporter + VictoriaMetrics + Grafana for network-wide system monitoring. node_exporter also has provisions to include text files placed in a directory you specify, as long as they’re written out in the right format. I use that capability on my systems to include some custom metrics, including CPU and memory usage of the top 5 processes on the system, for exactly this reason.

    The resulting file looks like:

    # HELP cpu_usage CPU usage for top processes in %
    # TYPE cpu_usage gauge
    cpu_usage{process="/usr/bin/dockerd",pid="187613"} 1.8
    cpu_usage{process="/usr/local/bin/python3",pid="190047"} 1.4
    cpu_usage{process="/usr/bin/cadvisor",pid="188999"} 1.0
    cpu_usage{process="/opt/mealie/bin/python3",pid="190114"} 0.9
    cpu_usage{process="/opt/java/openjdk/bin/java",pid="190080"} 0.9
    
    # HELP mem_usage Memory usage for top processes in %
    # TYPE mem_usage gauge
    mem_usage{process="/usr/local/bin/python3",pid="190047"} 3.0
    mem_usage{process="/usr/bin/Xvfb",pid="196573"} 2.4
    mem_usage{process="/usr/bin/Xvfb",pid="193606"} 2.4
    mem_usage{process="next-server",pid="194634"} 1.2
    mem_usage{process="/opt/mealie/bin/python3",pid="190114"} 1.2
    

    And it gets scraped every 15 seconds for all of my systems. The result looks like this for CPU and memory. Pretty boring most of the time, but it can be very valuable to see what was going on with the active processes in the moments leading up to a problem.
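    For anyone wanting to do something similar, here’s a minimal sketch of such a collector script in Python. It formats the top-5 processes by CPU and memory in the Prometheus exposition format shown above, reads stats via GNU ps, and writes the file atomically. The output path and ps invocation are assumptions for illustration, not necessarily what my real script does:

    ```python
    import os
    import subprocess

    def render_metrics(rows):
        """rows: list of (process, pid, cpu_pct, mem_pct) tuples.
        Returns exposition text for the top 5 by CPU and the top 5 by memory."""
        by_cpu = sorted(rows, key=lambda r: r[2], reverse=True)[:5]
        by_mem = sorted(rows, key=lambda r: r[3], reverse=True)[:5]
        lines = ["# HELP cpu_usage CPU usage for top processes in %",
                 "# TYPE cpu_usage gauge"]
        lines += [f'cpu_usage{{process="{p}",pid="{pid}"}} {cpu}'
                  for p, pid, cpu, mem in by_cpu]
        lines += ["", "# HELP mem_usage Memory usage for top processes in %",
                  "# TYPE mem_usage gauge"]
        lines += [f'mem_usage{{process="{p}",pid="{pid}"}} {mem}'
                  for p, pid, cpu, mem in by_mem]
        return "\n".join(lines) + "\n"

    def collect():
        """Gather per-process stats via ps (GNU procps syntax)."""
        out = subprocess.run(
            ["ps", "-eo", "pid,pcpu,pmem,comm", "--no-headers"],
            capture_output=True, text=True, check=True).stdout
        rows = []
        for line in out.splitlines():
            pid, cpu, mem, comm = line.split(None, 3)
            rows.append((comm.strip(), pid, float(cpu), float(mem)))
        return rows

    def write_textfile(path="/var/lib/node_exporter/textfile/top_procs.prom"):
        """Write via a temp file + rename so node_exporter never scrapes a
        half-written file. Point --collector.textfile.directory at the parent
        directory; the path here is a placeholder, not a required location."""
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            f.write(render_metrics(collect()))
        os.replace(tmp, path)
    ```

    Run it from cron or a systemd timer at whatever interval you like; node_exporter just picks up whatever is in the directory on each scrape.
    
    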


  • Got a friend or family member willing to let you drop a miniPC at their place?

    You could also go the offline route - buy two identical external drive setups, plug one into your machine and make regular backups to it, drop the other one in a drawer in your office at work. Then once a month or so swap them to keep the off-site one fresh.

    Also there’s really nothing wrong with cloud storage as long as you encrypt before uploading so they never have access to your data.

    Personally I do both. The off-site offline drive is for full backups of everything, because space is cheap, while cloud storage is used for more of a “delta” style backup, just the stuff that changes frequently, because of the price. If the worst were to happen, I’d use the off-site drive to get the bulk of my infrastructure back up and running, and then the latest cloud copy for any recently added/modified files.