• 0 Posts
  • 17 Comments
Joined 7 months ago
Cake day: July 23rd, 2025

  • Oh, certainly. The reason I focused on speed is that an idiot using a shoddy LLM may not notice its hallucinations or failures as easily as they'd notice its sluggishness.

    However, the meaningfulness of the LLM's responses is a necessary condition, whereas the speed and convenience are more of a sufficient condition (which contradicts my first statement). Either way, I don't think the average user knows what hardware they need to leverage local AI.

    My point is that this "AI" hardware gives a bad experience and leaves a bad impression of running AI locally, because 98% of people saw "AI" in the CPU model name and figured it should just work. And thus, more compute gets pushed to datacenters.

  • …do we really need to use the word “raping” to talk about PC performance or can we agree that there are a hundred other words that fit better in that spot?

    EDIT: Wooo, free downvotes. Y'all are a bunch of snowflakes. I tried to make a point on behalf of others, since there are people who have traumatic lived experience with the concept of "rape" and would probably prefer not to be reminded of it. Nobody serious about computing is going to say "this process is raping the performance," because it's just not a good idea. I bet you (if you're in the tech field) wouldn't say it in front of your boss. But sure, call me sensitive and pull out a semantic argument.

    I can smell you through your screen. Go take a shower and try being human.