Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 2 Posts
  • 29 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • Riskable@programming.dev to Technology@lemmy.world · The Cult of Microsoft

    Ahaha! Microsoft employees are using AI to write (read: hallucinate) their own performance reviews and managers are using that very same AI to “review” said performance reviews. Which is exactly the dystopian vision of the future that OpenAI sells!

    What’s funny is that the “cult of Microsoft” is 100% bullshit, so the AI is being trained on bullshit, and as time goes on it’s being reinforced with its own hallucinated bullshit because everyone is using it to bullshit the bullshitters in management who are demanding this bullshit!


  • As another (local) AI enthusiast I think the point where AI goes from “great” to “just hype” is when it’s expected to generate the correct response, image, etc on the first try.

    For example, telling an AI to generate a dozen images from a prompt, then picking a good one or re-working the prompt a few times to get what you want (a rough sketch of that workflow is below). That works fantastically well 90% of the time (assuming you’re generating something it has been trained on).

    Expecting AI to respond with the correct answer when given a query > 50% of the time or expecting it not to get it dangerously wrong? Hype. 100% hype.

    It’ll be a number of years before AI is trustworthy enough not to hallucinate bullshit or generate the exact image you want on the first try.
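
    To make the “generate a batch, then pick one” workflow concrete, here’s a minimal sketch using the Hugging Face diffusers library. The model name, prompt, and batch size are placeholders for illustration, not a recommendation:

    ```python
    # Best-of-N image generation: produce several candidates, let a human pick.
    # Assumes the `diffusers` and `torch` packages and a CUDA-capable GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder model; use whatever you run locally
        torch_dtype=torch.float16,
    ).to("cuda")

    prompt = "a watercolor painting of a 3D printer on a workbench"  # illustrative prompt

    # Generate a dozen candidates and save them all; the human reviews the folder
    # and keeps the good one (or re-works the prompt and tries again).
    images = pipe(prompt, num_images_per_prompt=12, num_inference_steps=30).images
    for i, img in enumerate(images):
        img.save(f"candidate_{i:02d}.png")
    ```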


  • Just a point of clarification: Copyright is about the right of distribution. So yes, a company can just “download the Internet”, store it, and do whatever TF they want with it as long as they don’t distribute it.

    That’s the key: distribution. That’s why no one gets sued for downloading. They only ever get sued for uploading. Furthermore, the damages (if found guilty) are based on the number of copies that get distributed. It’s because copyright law hasn’t been updated in decades and 99% of it predates computers (especially all the important case law).

    What these lawsuits against OpenAI are claiming is that OpenAI is making a derivative work of the authors’/owners’ works. Which is kinda what’s going on but also not really. Let’s say that someone asks ChatGPT to write a few paragraphs of something in the style of Stephen King… His “style” isn’t even copyrightable, so as long as it didn’t copy his works word-for-word is it even a derivative? No one knows. It’s never been litigated before.

    My guess: No. It’s not going to count as a derivative work. Because it’s no different than a human reading all his books and performing the same, perfectly legal function.



  • You had corruption with btrfs? Was this with a spinning disk or an SSD?

    I’ve been using btrfs for over a decade on several filesystems/machines and I’ve had my share of problems (mostly due to ignorance) but I’ve never encountered corruption. Mostly I just run out of disk space because I forgot to balance or the disk itself had an issue and I lost whatever it was that was stored in those blocks.

    I’ve had to repair a btrfs partition before due to who-knows-what back when it was new but it’s been over a decade since I’ve had an issue like that. I remember btrfs check --repair being totally useless back then haha. My memory on that event is fuzzy but I think I fixed whatever it was bitching about by remounting the filesystem with an extra option that forced it to recreate a cache of some sort. It ran for many years after that until the disk spun itself into oblivion.


    I wouldn’t say, “repairing XFS is much easier.” Yeah, running xfs_repair is really all you have to do 99% of the time (fsck.xfs is a no-op), but you’re also much more likely to end up with corrupted data when you’re in that situation compared to, say, btrfs, which supports snapshotting and redundancy.

    Another problem with XFS is its lack of flexibility. By that I don’t mean, “you can configure it across any number of partitions on-the-fly in any number of (extreme) ways” (like you can with btrfs and zfs). I mean it doesn’t have very many options as to how it should deal with things like inodes (e.g. tail allocation). You can increase the total amount of space allowed for inode allocation but only when you create the filesystem and even then it has a (kind of absurdly) limited number that would surprise most folks here.

    As an example, with an XFS filesystem, in order to store 2 billion symlinks (each one takes an inode) you would need 1TiB of storage just for the inodes (see the back-of-the-envelope math below). Contrast that with something like btrfs with max_inline set to 2048 (the default) and 2 billion symlinks will take up a little less than 1GB (assuming a simplistic setup on at least a 50GB single partition).

    Learn more about btrfs inlining: https://btrfs.readthedocs.io/en/latest/Inline-files.html
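
    For the XFS side of that comparison, the rough math, assuming the modern mkfs.xfs default of 512 bytes per inode (older defaults were 256 bytes, which would halve the result):

    ```python
    # Back-of-the-envelope: on-disk inode space for 2 billion symlinks on XFS.
    # Assumes the modern mkfs.xfs default inode size of 512 bytes (-i size=512).
    SYMLINKS = 2_000_000_000
    XFS_INODE_SIZE = 512  # bytes (assumed default)

    total_bytes = SYMLINKS * XFS_INODE_SIZE
    print(f"{total_bytes / 2**40:.2f} TiB just for the inodes")  # ~0.93 TiB, i.e. roughly 1 TiB
    ```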


    One point: ext4 has a maximum file size of 16TiB (with the default 4KiB block size). To a regular user that is stupidly huge and of no concern, but it’s exactly the type of thing you overlook if you “just use ext4” on anything and everything and then end up with your database broken at work because of said bad advice.

    Use the filesystem that makes the most sense for your use case. Consider it every single time you format a disk. Don’t become complacent! Also fuck around with the new shit from time to time! I decided to format my Linux desktop partitions with btrfs over a decade ago and as a result I’m an excellent user of that filesystem but you know what? I’m thinking I’ll try bcachefs soon and fiddle around more with my zfs partition on my HTPC.

    BTW: If you’re thinking about trying out btrfs I would encourage you to learn about its non-trivial maintenance tasks (a rough example of such a job is sketched below). btrfs needs you to fuck with it from time to time or you’ll run out of disk space “for no reason”. You can schedule cron jobs to take care of everything (as I have done) but you still need to learn how it all works. It’s not a “set it and forget it” FS like ext4.
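
    Here’s a minimal sketch of the kind of periodic maintenance job I mean, suitable for cron. The mount point, usage threshold, and the choice to scrub in the same run are all illustrative; it assumes btrfs-progs is installed and the script runs as root:

    ```python
    #!/usr/bin/env python3
    # Minimal btrfs maintenance sketch: balance to reclaim mostly-empty chunks,
    # then scrub to verify checksums. Run from cron (e.g. weekly) as root.
    import subprocess

    MOUNT_POINT = "/"      # the btrfs filesystem to maintain (assumed)
    USAGE_THRESHOLD = 50   # only rewrite chunks that are <= 50% full (illustrative)

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Reclaim allocated-but-mostly-empty data and metadata chunks so the
    # filesystem doesn't report "no space left" while df still shows free space.
    run(["btrfs", "balance", "start",
         f"-dusage={USAGE_THRESHOLD}", f"-musage={USAGE_THRESHOLD}", MOUNT_POINT])

    # Verify checksums of all data and metadata (repairs from redundancy if available).
    run(["btrfs", "scrub", "start", "-B", MOUNT_POINT])
    ```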



  • Tom’s Hardware tested this software version of BitLocker last year and found it could slow drives by up to 45 percent.

    WTF‽ On Linux, full disk encryption overhead is minimal:

    While in pure I/O benchmarks like FIO there is an obvious impact to full disk encryption and other synthetic workloads, across the real-world benchmarks the performance impact of running under full disk encryption tended to be minimal

    https://www.phoronix.com/review/hp-devone-encrypt/5

    There’s like five million ways you can use disk encryption on Linux though and not all of them are very performant. So keep that in mind if you see other benchmarks showing awful performance (use the settings Phoronix used). A quick way to check your own hardware is sketched below.

    I suspect Microsoft made some poor decisions in regards to disk encryption (probably because of bullshit/insecure-by-design FIPS compliance) and now they’re stuck with them.
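
    If you want to sanity-check what your own hardware can do before blaming the OS, cryptsetup ships a built-in benchmark that measures raw cipher throughput in memory (no disk I/O). A tiny wrapper, just as a sketch; it only assumes cryptsetup is installed:

    ```python
    #!/usr/bin/env python3
    # Print cryptsetup's built-in cipher benchmark. This measures in-memory
    # crypto speed only, which is handy for spotting whether AES-NI/hardware
    # acceleration is actually being used. Assumes cryptsetup is installed.
    import subprocess

    result = subprocess.run(["cryptsetup", "benchmark"],
                            capture_output=True, text=True, check=True)
    print(result.stdout)

    # If the aes-xts lines aren't well above your SSD's sequential throughput,
    # full-disk encryption is likely to be CPU-bound on this machine.
    ```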