    • I never said anything about EFI not supporting multi boot. I said that they had to be kept in lockstep during updates. I recognize the term “manual” might have been a bit of a misnomer there, since I included systems where the admin has to take action to enable replication. ESXi (my main hardware OS for now) doesn’t even have software RAID for single-server datastores (only vSAN). Windows and Linux both can do it, but it’s a non-default manual process of splicing the drives together with no apparent automatic replacement mechanism - full manual admin intervention. With a hardware RAID, you just have to plop the new disk in and it splices the drive back into the array automatically (if the drive matches)
    • Dell and HPE have both had RAM caching for reads and writes since at least 2011. That’s why the controllers have batteries :)
      • Also, I said it only had to handle the boot disk. Plus you’re ignoring the fact that all modern operating systems will do page caching in the background regardless of the presence of a hardware cache. That’s not unique to ZFS; Windows and Linux both do it.
    • To my current understanding, mdadm and hardware RAID offer the same level of block consistency validation - you’d need filesystem-level checksumming no matter what, and since mdadm and hardware RAID are both filesystem agnostic, they support the same filesystem-level features about equally. (Synology implements BTRFS on top of mdadm - I saw a small note somewhere that their implementation has btrfs request a block rebuild from mdadm when it detects issues, but I’ve been unable to verify that claim, so I don’t (yet) count it in my hardware vs md comparison.)

    Hardware RAID just works, and for many, that’s good enough. In more advanced systems, all it’s got to handle is a boot partition, and if you’re doing your job as a sysadmin, there’s zero important data in there that can’t be easily rebuilt or restored.


  • I never said I didn’t use software RAID, I just wanted to add information about hardware RAID controllers. Maybe I’m blind, but I’ve never seen a good implementation of software RAID for the EFI partition or boot sector. During boot, most systems I’ve seen will always try to access one partition directly, then fall back to the second in order - which bypasses the concept of a RAID, so the two would need to be kept manually in sync during updates.
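
    For context, “kept manually in sync” here means something like re-copying the active ESP onto its twin after every bootloader or kernel update. A minimal sketch of that idea (the /boot/efi and /boot/efi2 mount points are just placeholders for whatever your layout actually uses):

    ```python
    """Minimal sketch: mirror the active EFI System Partition onto a second ESP.

    Assumes the primary ESP is mounted at /boot/efi and the backup ESP at
    /boot/efi2 - both paths are illustrative. Run as root after every
    bootloader/kernel update, or wire it into a package-manager hook.
    """
    import filecmp
    import shutil
    from pathlib import Path

    PRIMARY = Path("/boot/efi")   # ESP the firmware actually boots from
    MIRROR = Path("/boot/efi2")   # second ESP on the other disk

    def sync_esp(src: Path, dst: Path) -> None:
        """Copy every file under src into dst, skipping files that already match."""
        for src_file in src.rglob("*"):
            if src_file.is_dir():
                continue
            dst_file = dst / src_file.relative_to(src)
            if dst_file.exists() and filecmp.cmp(src_file, dst_file, shallow=False):
                continue  # already identical, nothing to do
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)
            print(f"synced {src_file} -> {dst_file}")

    if __name__ == "__main__":
        sync_esp(PRIMARY, MIRROR)
    ```

    Nothing in that loop knows about RAID at all - it’s just a copy job the admin has to remember to run, which is exactly the gap a hardware controller (or a BOSS card) papers over for the boot volume.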

    Because of that, there’s one notable place where I won’t use software RAID - I always use hardware RAID for at minimum the boot disk, because Dell firmware natively understands everything about it from a detect/boot/replace perspective (or cleanly sees nothing at all, which is also fine). All four of my primary servers have a boot disk on either a StarTech RAID card similar to a Dell BOSS, or an array to boot off of directly on the PERC. It’s only enough space to store the core OS.

    Other than that, at home all my other physical devices are hypervisors (VMware ESXi for now, until I can plot a migration), dedicated appliance devices (Synology DSM uses mdadm), or don’t have redundant disks (my firewall, which is backed up to git, and my NUC Proxmox box - the firewalls and the PVE box all run ZFS for the features).

    Three of my four ESXi servers run vSAN, which is like Ceph and replaces RAID. Like Ceph and ZFS, it requires an HBA or passthrough disks for full performance. The last one is my standalone server. Notably, ESXi does not natively support any software RAID other than vSAN, so both of the standalone server’s arrays are hardware RAID.

    When it comes time to replace that Synology, it’s going to be running TrueNAS.


  • For recovering hardware RAID: your best-guaranteed path to success is a compatible controller with a similar enough firmware version. You might be able to find software that can stitch images back together, but that’s a long shot and requires a ton of disk space (which you might not have if it’s your biggest server).

    I’ve used dozens of LSI-based RAID controllers in Dell servers (both PERC and LSI branded) for work and homelab alike. They usually recover the old array onto the new controller pretty well, and they generally have a much lower failure rate than the drives themselves (I find myself replacing the cache battery more often than the controller itself).

    Only twice out of the handful of times have I gone to a RAID controller from a different generation:

    • First time, from a mobo-failed R815 (PERC H700), physically moving the disks to an R820 (PERC H710, might’ve been an H710P) - the arrays foreign imported easily
    • Second time on homelab I went from an H710 mini mono to an H730P full size in the same chassis (don’t do that, it was a bad idea), but aside from iDRAC being very pissed off, the card ran for years with the same RAID-1 array imported.

    As others have pointed out, this is where backups come into play. If you have to replace the server with one from a different generation, you run the risk that the drives won’t import. At that point, you’d have to sanitize the superblock of the array and re-initialize it as a new array, then restore from backup. The array might also be just fine and you never notice a difference (like my users who had to replace a failed R815 with an R820), but the outcomes really are at the extremes - it either works or it faults, with no in between.

    Standalone RAID controllers are usually pretty resilient and fail less often than disks, but they are very much NOT infallible, as you correctly assess. The advantage of software systems like mdadm, ZFS, and Ceph is that they remove the precise hardware compatibility requirements, but by no means do they remove the software compatibility requirements - you’ll still have to do your research and make sure the new version is compatible with the old format, or make sure it’s the same version.

    All that said, I don’t trust embedded motherboard RAID to the same degree that I trust standalone controllers. A friend of mine, about 8-10 years ago, ran a RAID-0 on a laptop that got its superblock borked when we tried to firmware update the SSDs - the array stopped being detected at all. We did manage to recover the data, but it needed multiple times the raw amount of storage to do so:

    • we made byte images of both disks with ddrescue to a server that had enough spare disk space
    • found a software package that could stitch together images with broken superblocks if we knew the order the disks were in (we did), which wrote a new byte image back to the server - the rough idea is sketched after this list
    • copied the result again and turned it into a KVM VM to network attach and copy the data off (we could have loop mounted the disk to an SMB share and been done, but it was more fun and rewarding to boot the recovered OS afterwards as kind of a TAKE THAT LENOVO…we were younger)
    • took a bit over 3TB in total to recover the 2x500GB disks to a usable state - and about a week of combined machine and human time to engineer and cook, during which my friend opted to rebuild his laptop clean once we had the images captured - one disk Windows, one disk Linux, not RAID-0 this time :P
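
    For anyone curious, the stitching itself mostly boils down to re-interleaving the fixed-size stripes of the member images in order. A very rough sketch of the idea (not the actual tool we used - the 64 KiB stripe size, the image file names, and the assumption of zero metadata offset at the start of each member are all illustrative):

    ```python
    """Rough sketch: reassemble a two-disk RAID-0 from raw ddrescue images.

    Illustrative assumptions: disk0.img holds the first stripe, the stripe
    (chunk) size is 64 KiB, and member data starts at offset 0. A real
    recovery has to determine all three from the (possibly damaged) metadata.
    """
    STRIPE = 64 * 1024  # stripe size in bytes - must match what the array used

    def stitch_raid0(image_paths, out_path, stripe=STRIPE):
        """Interleave fixed-size stripes from each member image, in order."""
        members = [open(p, "rb") for p in image_paths]
        try:
            with open(out_path, "wb") as out:
                while True:
                    for member in members:
                        chunk = member.read(stripe)
                        if not chunk:   # ran off the end of a member: done
                            return
                        out.write(chunk)
        finally:
            for member in members:
                member.close()

    if __name__ == "__main__":
        stitch_raid0(["disk0.img", "disk1.img"], "stitched.img")
    ```

    Figuring out the real stripe size and data offset from broken metadata (and handling the ragged tail) is the hard part, which is why a purpose-built tool was worth hunting down.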


  • Sadly the so-called “smart TV” is becoming the norm. Companies add unnecessary crap to TVs that’s often as slow as your car’s factory infotainment system, and when they stop shipping security updates a few years later, it’s a permanent security hazard until you disconnect it from the network.

    I have a Vizio TV from several years ago with Yahoo-branded smart functions (that should date it) that I need to factory reset, because I can’t find any option to erase just the WiFi password.


  • This is true of even some public universities in the US. I can’t remember if it was a rule where I was, but most freshmen definitely did just live in dorms.

    Lots of folks brought their own desktops to set up, and we were allowed Ethernet switches to hook up multiple devices - they had to be wired. Wireless had two options: WPA# 802.1X or an unencrypted captive-portal guest network. If your device didn’t support those, it had to be wired, by policy.

    And they weren’t wrong: I did a radio scan, and they had deployed full-sized enterprise access points about as well as they could (with a few low-signal exceptions), and the airwaves were still overloaded with too many people. The building uplink was perfectly fine; it was just overcrowded wireless.



  • I’m an American Android user and I’m confused too. At least in my area, contactless is pretty ubiquitous now. (I accept adoption has been slower, but it’s getting there.)

    Sure, Apple Pay seemed to come to a lot of terminals first, but NFC with Google Wallet (or whatever it is the phone does automatically) I’ve only seen fail at certain terminals. In those rare cases, someone behind me with Apple Pay usually fails too, so I’d be more likely to attribute it to a system glitch than to a lack of support.


  • The first few screens look like a combination of a reverse job board and Coinbase. But your description says “post stuff and create tokens”, which doesn’t quite seem to line up.

    Why?

    I don’t think Web3 contracts have tested case law yet, so who knows if they’re enforceable in court - at most they may only be as strong as a gentleman’s agreement. And the token part looks like an easy way to create rug-pull coins, just on the ETH chain instead of an independent database.


  • I’d like to politely disagree.

    Finding alternatives to large software packages is great - don’t get me wrong - but any time you have competitor X and competitor Y, be they both commercial, both F/OSS, or some combination thereof, the competitors must be cognizant of each other when designing features.

    Burying your head in the sand and ignoring Microsoft, Apple, and Google is a very solidly Microsoft-Apple-Google-style play. It’s the play of someone who believes the other side offers no competition. That’s how you get unwieldy features these tech giants implement because they know they can make a 70% effort and people won’t be annoyed enough to leave.

    Every tool they make exists because someone had a reason to make it. Many of those tools are very important - for one example, the Microsoft Office document formats are treated as nearly universal formats for passing presentations, spreadsheets, and plain documents between businesses.

    But as we as a society design alternatives to those various monopolies (as we should), we need users to want to use the new thing. We have to take what people like, what keeps them on their old platform, and preserve the intent of it as best we can on the new platform. Doing so requires discussing the features those big tech companies ship.

    And as users, when we select the platforms we use, we need to weigh the cost of going with an alternative vs going with a giant. No solution is perfect for everyone, and the chooser needs to weigh the maintenance cost (in hours or money) they will incur, how their users will like or dislike it, and maybe even look at a piece of software and decide “nah, the vibes are off”.

    I’d love a world where those three tech giants had proper competition in all fields, and I think their business practices are scummy and need improvement. But the real alternatives to each need some polish before they’re ready to be used by [arbitrary tech illiterate grandmother].


  • Others have some good information here - all I’d like to add at the root is that Windows and Mac have a built-in DNS cache, and it’s pretty straightforward to add a DNS cache to systemd distros (if one isn’t already installed or in use) using systemd-resolved, or dnsmasq if you really dislike systemd. Some distros enable this at install time.

    Systems that utilize a DNS cache keep copies of DNS query results for a period of time, making the application-level name lookup essentially 0 ms for a cached result. Cold results obviously still incur the latency of the DNS server itself.
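
    To illustrate what the cache buys you, here’s a toy sketch of the idea - not how systemd-resolved or dnsmasq are actually implemented, and the fixed 60-second TTL is purely illustrative (real caches honor the TTL on the record itself):

    ```python
    """Toy TTL-based DNS cache, in the spirit of a local caching resolver."""
    import socket
    import time

    TTL = 60.0  # seconds; illustrative only - real resolvers use the record's TTL
    _cache: dict[str, tuple[float, list]] = {}

    def resolve(host: str, port: int = 443) -> list:
        """Return addrinfo for host, serving repeat lookups from the cache."""
        now = time.monotonic()
        hit = _cache.get(host)
        if hit and now - hit[0] < TTL:
            return hit[1]                         # warm hit: effectively 0 ms
        result = socket.getaddrinfo(host, port)   # cold: pays the real DNS latency
        _cache[host] = (now, result)
        return result

    if __name__ == "__main__":
        for label in ("cold", "warm"):
            start = time.perf_counter()
            resolve("example.com")
            print(f"{label} lookup: {(time.perf_counter() - start) * 1000:.1f} ms")
    ```

    (If your OS already runs a system-wide cache, both lookups will come back fast - which is sort of the point.)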