• 0 Posts
  • 119 Comments
Joined 1 year ago
Cake day: July 7th, 2023

  • I’ll admit, as neat as this is, I’m a little unclear on the use case. Are there really situations where it’s easier to get a command prompt than it is to open a webpage?

    The CLI side I can see more use for since that does expose a lot of actions to bash scripting, which could be neat. But on the whole I can’t say I’ve ever really found myself thinking “Man, I really wish I had a UI for managing Radarr, a program that already includes a really good UI.”

    I know it’s shitty to hate on something just because you’re not the target for it. That’s not my intent, it’s more that I’m just fascinated by the question of how anyone has a burning need for this? It feels like there must be something I’m missing here.



  • Better to say that Google claims to want private nuclear reactors because that will allay fears about the climate impact of their products. In reality the SMRs they’re purporting to invest in basically don’t exist outside of a pipe dream. They’re a less viable product than genAI itself. But just like the supposed magical “good” version of genAI, Google can claim that SMRs are always just around the corner, and that will mean they’re doing something about the problem.


  • I think it’s mischaracterising the argument against AI to boil it down to “AI is useless” (and I say that as much as a criticism of genAI’s critics as of its defenders; far too many people express the argument reductively as “AI is useless” when that’s not exactly what’s meant).

    The problem is not that genAI is never useful for anything. It is sometimes useful for some things. The problem is that being sometimes useful for some things does not remotely justify what the technology costs. I mean that both on the macro scale - untold climate damage, vast amounts of wasted resources - and on the micro scale: OpenAI alone loses $2.35 for every $1.00 they make.

    That is fundamentally unsustainable. If you like genAI for whatever use cases you’ve found for it, and you really don’t care about the climate toll and other externalities, then you can look forward to paying upwards of $50-$100 a month to actually use it, once we’re out of the “give it to ’em cheap/free to get ’em hooked” phase, because that’s what it’ll take to make these models profitable. In fact that’s kind of a lowball estimate.
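    A rough sketch of the arithmetic behind that price range, assuming the $2.35-per-$1.00 loss figure applies uniformly and costs stay fixed (they wouldn’t exactly, but it illustrates the floor); the $20/month baseline subscription price is a hypothetical, not a quoted figure:

```python
# If a company spends $3.35 to earn each $1.00 (i.e., loses $2.35 per
# $1.00 of revenue), then to merely break even, prices would have to
# rise by the cost-to-revenue ratio. All inputs hypothetical except
# the $2.35 loss figure quoted above.

loss_per_dollar = 2.35
cost_per_dollar = 1.00 + loss_per_dollar        # $3.35 spent per $1.00 earned
breakeven_multiplier = cost_per_dollar / 1.00   # prices must rise ~3.35x

current_price = 20.00  # hypothetical $20/month subscription
breakeven_price = current_price * breakeven_multiplier
print(f"${breakeven_price:.2f}/month")  # $67.00/month - and that is break-even, not profit
```

    A 3.35x markup on a hypothetical $20/month plan lands at $67/month, squarely inside the $50-$100 range, and that is before any actual profit margin.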

    I know plenty of people who find this tech occasionally useful as a way of searching for the answer to a question or producing a small snippet of code, but I can’t imagine anyone who finds those uses so compelling that they’d throw “Canadian cell phone contract” levels of money at it.





  • The reason major businesses haven’t bothered using distributed blockchains for auditing is because they fundamentally do not actually help in any way with auditing.

    At the end of the day, the blockchain is just a ledger. At some point a person has to enter the information into that ledger.

    Now, hear me out here, because this is going to be some totally out there craziness that is going to blow your mind… What happens if that person lies?

    Like, you’ve built your huge, complicated system to track every banana you buy from the farm to the grocery store… But what happens if the shipper just sends you a different crate of bananas with the wrong label on them? How does your system solve that? What happens if the company growing your bananas claims to use only ethical practices but in reality their workers are effectively slaves? How does a blockchain help fix that?

    The data in a system is only as good as your ability to verify it. Verifying the integrity of the data within systems was largely a solved problem long before distributed blockchains came along, and was rarely if ever the primary avenue for fraud. It’s the human components of these systems where fraud can most easily occur. And distributed blockchains do absolutely nothing to solve that.
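    The distinction between data integrity and data truth can be sketched with a toy hash-chained ledger (the core structure a blockchain uses); the crate label and its claim are made-up illustrations:

```python
# Minimal hash-chain "ledger" sketch: the chain proves entries weren't
# altered after the fact - it cannot prove they were true when entered.
import hashlib

def add_entry(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry_hash = hashlib.sha256((prev_hash + data).encode()).hexdigest()
    chain.append({"data": data, "hash": entry_hash, "prev": prev_hash})

def verify(chain):
    # Recompute every hash; any post-hoc edit breaks the chain.
    prev_hash = "0" * 64
    for entry in chain:
        expected = hashlib.sha256((prev_hash + entry["data"]).encode()).hexdigest()
        if entry["hash"] != expected or entry["prev"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_entry(ledger, "crate #42: ethically grown bananas")  # a lie at data entry
print(verify(ledger))  # True - the chain is intact; the claim is still false
```

    The ledger happily verifies the lying entry, because verification only checks that nothing was changed after it was recorded, which is exactly the gap the comment describes.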




  • A single billion dollars - simply invested in GICs and bonds, earning a very, very conservative 1% return - would earn you ten million dollars a year in interest alone.

    I would challenge you to even come up with a reasonable way to spend ten million dollars a year. By my back of the napkin math you could vacation every single day, living in hotels and eating at fancy restaurants, and still not make a dent in that.

    Musk has an estimated net worth of $247 billion. You could fine him 99% of his current wealth, and he would still struggle to spend enough that he wouldn’t end up increasing his remaining wealth every year.
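    The back-of-the-napkin math above checks out; here it is worked through, keeping the very conservative 1% return assumption:

```python
# Back-of-the-napkin check of the figures above; the 1% return is the
# deliberately conservative assumption from the comment.
billion = 1_000_000_000
rate = 0.01  # 1% annual return

# $1B at 1% -> $10M/year in interest alone
interest = billion * rate
print(f"${interest:,.0f}/year")  # $10,000,000/year

# Fine Musk 99% of an estimated $247B net worth:
net_worth = 247 * billion
remaining = net_worth * 0.01             # the 1% he keeps
print(f"${remaining:,.0f} left")         # $2,470,000,000 left
print(f"${remaining * rate:,.0f}/year")  # $24,700,000/year in interest
```

    Even after a 99% fine, the remainder still throws off well over twice the ten-million-a-year figure that was already hard to spend.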


  • In this particular case, I’m really not sure it’s a loophole.

    Antitrust laws exist to constrain companies so large and powerful that they have become, or are becoming, monopolistic forces.

    What Twitter successfully proved to the EU court is that Musk’s management of the company has been so spectacularly incompetent that Twitter/X no longer has enough reach or cultural relevance to be in any danger of being a monopoly.

    This is, objectively speaking, a serious L for Twitter. They just proved to a court that they’re no longer even close to being the best place to spend your advertising dollars. The major spenders will take note.



  • This is a long post and I’m not even going to try to address all of it, but I want to call out one point in particular: the idea that if we somehow made a quantum leap from the current generation of models to AGI (there is, for the record, zero evidence of any path to that happening), it would magically hand us the solutions to anthropogenic climate change.

    That is absolute nonsense. We know all the solutions to climate change. Very smart people have spent decades telling us what those solutions are. The problem is that those solutions ultimately boil down to “Stop fucking up the planet for the sake of a few rich people getting richer.” It’s not actually a complicated problem, from a technical perspective. The complications are entirely social and political. Solving climate change requires us to change how our global culture operates, and we lack the will to do that.

    Do you really think that if we created an AGI, and it told us to end capitalism in order to save the planet, we’d suddenly drop all our objections and do it? Do you think that an AGI created by Google or Microsoft would even be capable of saying “Stop allowing your planet’s resources to be hoarded by a privileged few”?


  • Powered flight was an important goal, but that wouldn’t have justified throwing all the world’s resources at making Da Vinci’s flying machine work. Some ideas are just dead ends.

    Transformer-based generative models do not have any demonstrable path to becoming AGI, and we’re already hitting a hard ceiling of diminishing returns on the very limited set of things they actually can do. Developing better versions of these models requires exponentially larger amounts of data, at exponentially scaling compute costs (yes, exponentially… to the point where current estimates are that there literally isn’t enough training data in the world to get past another generation or two of development on these things).

    Whether or not AGI is possible, it has become extremely apparent that this approach is not going to be the one that gets us there. So what is the benefit of continuing to pile more and more resources into it?