• 0 Posts
  • 107 Comments
Joined 1 year ago
Cake day: June 20th, 2023




  • Indeed.

    60 years ago we were supposed to be working very little by now thanks to automation. Then automation came and, instead of the productivity gains being spread across society, the extra productivity just pushed dividends and CxO pay higher, while the reduced need for workers meant the purchasing power of salaries actually went down (for example, in the US the percentage of corporate revenue that went to pay salaries fell from 23% in the 70s down to 7% by 2014).

    Expecting that, under the exact system that’s been moving us more and more towards Dystopia with each wave of automation, AI would somehow end up making things better for most people rather than better just for the Owner Class and worse for part or most of the rest, is pretty ill-informed and naive.




  • There’s a button there to enable/disable air-mouse functionality (basically the tilting of the remote moves the mouse pointer), though it’s awkward to use compared to a normal mouse.

    The keyboard on the back is also awkward to use, not just because the keys are small and not quite in standard positions but also because Shift and Alt are both “press to enable, press to disable”, with no indicator lights (so, say, your keyboard might be in “Alt mode” and you’re trying to use it and it’s just doing weird stuff).

    The thing does work as a combo of media player remote + mouse + keyboard, but it’s not very practical for the last two. Also that specific model seems to have problems with the remote buttons not working if the remote is tilted (which shouldn’t be a problem at all given that it’s a wireless remote).

    The idea is good, the implementation could be better. There are other models like that around. Just avoid the “Google” remotes, as those are Android-locked and built around voice recognition (plus they come pre-enshittified, with only a handful of buttons which just launch apps such as Netflix).

    Even with the quirks of the remote, whilst using that setup I often find myself altogether forgetting that what I’m using there is a PC with Linux.




  • My TV has been run without the “smarts” ever since I bought it.

    That said, I’ve recently replaced my TV Box and Media Box with an N100 Mini PC running Linux and Kodi plus a wireless remote, and on top of that the thing even works as my home server, with more functionality than just that of the devices it replaced.

    For a cheaper/easier option, try LibreELEC on top of one of the devices they support (check the downloads page or the Wiki for the list). It’s basically a Linux distro with Kodi, so open and with none of the privacy intrusion risks of Android. The same kind of wireless remote (example - note that you don’t actually need to use the keyboard on the back or the air mouse) also works here, since it just sends the standard shortcut keys that media programs like Kodi use, so it works everywhere (even on Android).

    However what all these privacy-protecting non-enshittified options have in common is that they’re not fully configured solutions that you just buy and use - as you’ve noticed, if you just buy a streaming stick or device it will likely be at the least “spammy” - and you do have to do some of the work to get them working.

    Something like LibreELEC on a mini PC should be the simplest to put together, as the hardware comes preconfigured in an actual box and all that’s needed is to install the LibreELEC image from a bootable USB stick. If you have a bit more technical know-how (not really that much needed, mind you), you can instead get one of the supported Orange Pi boards along with a box for it, which will cost you less than half as much as even a basic mini PC - those boards use basically the same chips as Android TV media boxes, so you get the same performance without the “spamminess”.
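
    For the curious, “installing the image” is essentially a raw byte-for-byte copy onto the stick - normally you’d just use the official LibreELEC USB-SD Creator tool (or dd), but here’s a hedged sketch of what that step amounts to; the file and device names are placeholders and writing to the wrong device will wipe it:

    ```python
    # Hedged sketch of flashing a LibreELEC image onto a USB stick (Linux, run as root).
    # WARNING: double-check the device path (e.g. with `lsblk`) first - the values here
    # are placeholders, and writing to the wrong block device destroys its contents.
    import os, shutil, sys

    IMAGE = "libreelec.img"   # placeholder: whatever image you downloaded
    DEVICE = "/dev/sdX"       # placeholder: your USB stick's block device

    def flash(image, device):
        """Byte-for-byte copy of the image onto the target device, in 4 MiB chunks."""
        with open(image, "rb") as src, open(device, "wb") as dst:
            shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)
            dst.flush()
            os.fsync(dst.fileno())

    if __name__ == "__main__":
        if "X" in DEVICE:
            sys.exit("Edit DEVICE to point at the right USB stick first.")
        flash(IMAGE, DEVICE)
    ```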



  • Mate, the horse whip and the wheel were Technology back when they got invented.

    It’s a massively generic word.

    Absolutely, some Technology has reduced drudgery. Meanwhile some Technology has managed to increase it (for example, one can make the case that the mobile phone, by making people always accessible, has often increased pressure on people, though it depends on the job), some Technology has caused immense Environmental destruction, some Technology has even caused epidemics of psychological problems, and so on.

    Not only is there a lot of stuff under the big umbrella called Technology, but the total effect of one of those things is often dependent on how it’s used, and Capitalism seems especially prone to inventing and using Technology that’s very good for a handful of people whilst being bad for everybody else.

    One can’t presume that just because something can be classified as Technology it will reduce drudgery, or even that it will be an overall good thing, even if some past Technologies did and were.




  • We’re talking about fingerprinting stuff coming in via HDMI, not stuff being played by the “smart” part of the TV itself from some source.

    You would probably not need to actually sample images if it’s the TV’s processor that’s playing something from a source, because there are probably smarter approaches for most sources (for example, for a TV channel you probably just need to know the tuner setting, the location and the local time, and then get the data from the available Programme Guide info - the EPG, if I remember the name correctly).
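
    Just to illustrate that idea (hypothetical data and channel names, not anything an actual TV is known to do), “what is being watched” becomes a simple channel + time lookup once you have EPG data:

    ```python
    # Hypothetical sketch: resolve "what's on" from tuner channel + local time via EPG data.
    from datetime import datetime

    # EPG: channel -> list of (start, end, programme title); normally this would come from
    # the broadcast stream or an online guide - these entries are made up.
    EPG = {
        "Channel 4": [
            (datetime(2024, 6, 1, 20, 0), datetime(2024, 6, 1, 21, 0), "Gogglebox"),
            (datetime(2024, 6, 1, 21, 0), datetime(2024, 6, 1, 22, 30), "Film night"),
        ],
    }

    def now_playing(channel, local_time):
        """Return the programme running on `channel` at `local_time`, or None."""
        for start, end, title in EPG.get(channel, []):
            if start <= local_time < end:
                return title
        return None

    print(now_playing("Channel 4", datetime(2024, 6, 1, 21, 15)))  # -> "Film night"
    ```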

    The problem is that anything might be coming over HDMI and it’s not compressed, so if they want to figure out what it is, that’s a much bigger problem.

    Your approach does sound like it would work if the Smart TV was playing some compressed video file, though.

    Mind you, I too am just “thinking out loud” rather than actually knowing what they do (or what I’m talking about ;))


  • Well, that makes sense, but it might even be more processor-intensive unless they’re using an SoC that includes an NPU or similar.

    I doubt it’s a straightforward hash, because a hash database for video which includes all manner of small clips and somehow has to match content while missing over 90% of the frames (if the thing is indeed sampling at 2 fps, then it only sees 2 frames out of every 25) would be huge.

    A rough calculation: for each minute of content, store per-frame hashes for one group of 13 consecutive frames (so that at least one of them gets hit when sampling at 2 fps on a 25 fps source), with each hash being a 5-byte value - large enough for about a trillion distinct values. That works out to 65 bytes per minute, or roughly 7.8 KB per 2h movie, so 1GB would hold the hashes alone for around 136k movies. It might thus be feasible if the system had 2GB+ of main memory, though even then I’m not so sure the CPU speed would be enough to search it every 500ms - although if the hashes are kept sorted by value in a long array with a matching array of clip IDs, it might be doable, since there are some pretty good search algorithms for that.
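
    To make that concrete, here’s a small sketch of the kind of lookup being described - all the numbers and names are made up for illustration, not a claim about what any actual TV does:

    ```python
    # Back-of-the-envelope storage maths plus a toy lookup over a sorted hash array.
    import bisect

    HASH_BYTES = 5                      # one 5-byte hash per frame
    FRAMES_PER_GROUP = 13               # one group of 13 consecutive frames per minute of content
    BYTES_PER_MINUTE = HASH_BYTES * FRAMES_PER_GROUP        # 65 bytes
    BYTES_PER_MOVIE = BYTES_PER_MINUTE * 120                # ~7.8 KB for a 2 h movie
    MOVIES_PER_GB = (1 << 30) // BYTES_PER_MOVIE            # ~137k movies per GiB of hashes
    print(f"{BYTES_PER_MOVIE} B per movie, ~{MOVIES_PER_GB} movies per GiB")

    # Lookup: hashes kept sorted, with a parallel array naming the clip each one came from.
    sorted_hashes = [0x0000A1, 0x0103F2, 0x20FF00, 0x9B0001]   # toy "hashes"
    clip_ids      = ["movie-42", "ad-7", "movie-42", "show-3"]

    def match_frame(frame_hash):
        """Binary-search the sorted hash array; O(log n) per sampled frame."""
        i = bisect.bisect_left(sorted_hashes, frame_hash)
        if i < len(sorted_hashes) and sorted_hashes[i] == frame_hash:
            return clip_ids[i]
        return None

    print(match_frame(0x20FF00))   # -> "movie-42"
    print(match_frame(0x123456))   # -> None (this sampled frame matches nothing)
    ```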



  • I was curious enough to check, and with 2KB of SRAM that thing doesn’t have anywhere near enough memory to process even a 320x200 RGB image, much less 1080p or 4K.

    Further, you definitely don’t want to send 2 images per second down to a server in uncompressed format (even 1080p with an encoding that sacrifices a bit of colour fidelity to use just two bytes per pixel adds up to about 4MB per image), so it’s either using something with hardware compression or it’s spending processing cycles on that.
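
    Spelling the arithmetic out (nothing here is specific to any particular TV or chip):

    ```python
    # Size of an uncompressed 1080p frame at 2 bytes per pixel, and the resulting
    # uplink if two such frames were sent raw every second.
    WIDTH, HEIGHT = 1920, 1080
    BYTES_PER_PIXEL = 2                     # e.g. a 16-bit RGB565-style encoding
    frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL

    print(f"{frame_bytes / 1e6:.1f} MB per frame")                         # ~4.1 MB
    print(f"{frame_bytes * 2 * 8 / 1e6:.0f} Mbit/s if sent uncompressed")  # ~66 Mbit/s
    ```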

    My expectation is that it’s not the snapshotting itself that would eat CPU cycles, it’s the compression.

    That said, I think you make a good point, just with the wrong example. I would’ve gone with: a thing capable of decoding video at 50 fps - i.e. one frame every 20ms - (even if it’s actually using hardware video decoding) can probably handle compressing and sending two frames per second over the network, though performance might suffer if the chip has no hardware compression support and they’re using a relatively heavy compression method like JPEG instead of something simpler like LZW or similar.


  • Server-side checks cost processing power and memory, hence they need to spend more on servers.

    Client-side kernel-level anti-cheat only ever consumes resources and causes problems for the actual gamers, not directly for Rockstar’s bottom line (and if it makes the game comms slightly slower on the client side, it might even reduce server resource consumption).

    If Rockstar’s management theory is that gamers will endure just about any level of shit and keep on giving them money (a posture which, so far, has proven correct for just about every large game maker doing that kind of shit), then they will logically conclude that their bottom line won’t even suffer indirectly from making life harder for their existing clients, whilst it will most definitely suffer if they have more server costs due to implementing server-side checks for cheating.


  • I played WoW right when it came out, on a PvP server.

    There was already a subset of the crowd just like that back then - some people rushed game progression to reach higher levels as soon as possible, only to then hang out in beginner areas and “pwn” significantly lower-level players.

    That’s around the time when the term “griefer” was coined.

    In these things the real difference is how the servers are structured rather than the human beings: if the architecture gives some way to filter players (smaller servers with moderation, or some kind of kick-voting system that bans repeat offenders), griefers end up in their own instances griefing each other and the rest can actually play the game; otherwise you get an environment that’s deeply unfriendly to beginners (or to people with less time, such as working adults).

    As somebody else pointed out, environments where people run their own servers tend to create those conditions at least in some cases (basically, if there’s some kind of moderation), whilst massive centralized-server game worlds tend to give free rein to people whose pleasure in multiplayer games derives mostly from making them unpleasant for others (in game-making, griefing is actually recognized as one of the 4 core types of enjoyment - along with achieving, exploring and socializing - that people can derive from multiplayer games).