• 0 Posts
  • 10 Comments
Joined 2 years ago
Cake day: January 17th, 2022


  • Because it’s a tool by one of the biggest corporations ever, if not THE biggest. It’s nothing more than a way to lock users deeper into an ecosystem of extortion and learned helplessness.

    Through Windows, computer users discover that they have a black box at work and then at home. It is NOT their computer. It is a computer that they are allowed to use in a certain way. This is then extended in a myriad of ways, through other tools, e.g. mobile phones, and services, e.g. Office 365, reinforcing that behavior. It becomes second nature, to the point that computer users dare not even imagine HOW they want to use a computer. Instead, they buy whatever they are allowed to consume.

    I do not care for Windows as an OS, but I absolutely do HATE it as a vehicle for cognitive enslavement. I say this keeping in mind the history of the company that made it. It is not a repeated random process; it is a strategy. This is what I find disgusting.



  • utopiah@lemmy.ml to Linux@lemmy.ml · Launcher for Everything* · 13 days ago

    Superficial feedback, but I can’t read more than 3 lines of code without syntax highlighting. Here I believe the lines are too short for the text, which makes the code even harder to read because of the line breaks. Maybe Codeberg allows HTML embedding.

    Now for a comment on the content itself: how is this different from aliases in ~/.bashrc? I personally have a bunch of commands that are basically wrappers or shortcuts around existing ones with my default parameters.
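    As a comparison point, the kind of ~/.bashrc wrappers mentioned here might look like this (the names, file patterns, and defaults are purely illustrative, not taken from the linked project):

```shell
# Illustrative ~/.bashrc snippets (hypothetical examples, not from the post)

# alias: bake preferred default parameters into an existing command
alias ll='ls -lah --color=auto'

# function: wrap an existing command when arguments must be passed through
grepsrc() {
    # recursive, line-numbered grep limited to C source files
    grep -rn --include='*.c' --include='*.h' "$@" .
}
```

    An alias covers the simple “command plus fixed flags” case; a shell function is needed as soon as arguments have to land in the middle of the wrapped command.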

    Finally, if the result is visual, like dmenu, which I only use a bit on the PinePhone, then please start by sharing a screenshot of the result.

    Anyway, thanks for sharing, always exciting to learn from others how they make THEIR systems theirs!



  • utopiah@lemmy.ml to Linux@lemmy.ml · Deduplication tool · edited · 14 days ago

    FWIW, I just did a quick test with rmlint, and as a user I would definitely not trust an automated tool to remove files on my filesystem. If it’s for a proper data filesystem, basically a database, sure, but otherwise there is plenty of legitimate duplication, e.g. ./node_modules, so the risk of breaking things is relatively high. IMHO it’s better to learn why there are duplicates on a case-by-case basis, but again I don’t know your specific use case, so maybe it’d fit.

    PS: I imagine it’d be good for a content library, e.g. ebooks, ROMs, movies, etc.




  • utopiah@lemmy.ml to Linux@lemmy.ml · Deduplication tool · 14 days ago

    I don’t actually know, but I bet that’s relatively costly, so I would at least try to be mindful of efficiency, e.g.:

    • use find to start only with large files, e.g. > 1 GB (depends on your own threshold)
    • look for a “cheap” way to find duplicate candidates, e.g. exact same size (far from perfect, yet I bet it is sufficient in most cases)

    then, after trying it a couple of times:

    • find a “better” way to confirm duplicates, e.g. SHA-1 (quite expensive)
    • lower the threshold to include more files, e.g. > 0.1 GB

    and possibly heuristics, e.g.:

    • directories where all filenames are identical, maybe based on locate/updatedb, which is most likely already indexing your entire filesystem

    Why do I suggest all this rather than a tool? Because I bet a lot of decisions have to be made manually.
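    The steps above can be sketched as a single shell function. This is a rough sketch, not a polished tool: the +1G default, the 12-digit size padding, and the function name are assumptions to adjust, and it only reports duplicates, it never deletes anything.

```shell
# find_dupes ROOT [SIZE]: report probable duplicate files under ROOT.
# SIZE uses find(1) -size syntax; start big (+1G) and lower it later.
find_dupes() {
    local root="${1:-.}"
    local min="${2:-+1G}"
    # cheap pass: zero-pad the size so uniq can group files of equal size;
    # equal size only makes a file a duplicate *candidate*
    find "$root" -type f -size "$min" -printf '%s %p\n' 2>/dev/null \
        | awk '{ printf "%012d %s\n", $1, substr($0, index($0, " ") + 1) }' \
        | sort \
        | uniq -D -w 12 \
        | while read -r _size path; do
              # expensive pass: hash only the size-colliding candidates
              sha1sum "$path"
          done \
        | sort \
        | uniq -D -w 40    # lines sharing the same SHA-1 are true duplicates
}
```

    Usage would be something like `find_dupes /data +1G` first, then `find_dupes /data +100M` once comfortable; since the output is only a report, the case-by-case decision about what to actually remove stays manual.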