

It’d be cool to see a compatible tip with some additional breakaway segments to bypass the logo law.


Never underestimate AMD’s ability to miss good opportunities.


4K vs 8K on a 49" screen across the room is going to have much less of a noticeable difference than 4K vs 1080p on a 24" screen a foot or two away (dancing around the boundary of retina).
I think an 8K 42" would make a great single monitor for productivity; I just can’t imagine driving 8K at idle being very efficient unless software/firmware can recognize a static screen.


If my 3600X has taught me anything, it’s that AM4 platform is truly a long-haul legend.


Yes, that too.


Grok must be tired of switching between mechahitler mode and trying to logically think through questions.
They’re just not compatible, and yet somehow they keep trying to force it.
(I know, LLMs do not have feelings or get tired)


Oh, certainly. The reason I focused on speed is that an idiot using a shoddy LLM may not notice its hallucinations or failures as easily as they’d notice its sluggishness.
However, the meaningfulness of the LLM’s responses is a necessary condition, whereas the speed and convenience is more of a sufficient condition (which contradicts my first statement). Either way, I don’t think the average user knows what hardware they need to leverage local AI.
My point is that this “AI” hardware gives a bad experience and leaves a bad impression of running AI locally, because 98% of people saw “AI” in the CPU model and figured it should work. And thus, more compute is pushed to datacenters.


Infinitely better, as long as your network and encoding are set up properly. At the very least, you won’t need the ice bricks.


I feel like the “AI capable” marketed CPUs are a sham. For the average user, it’s just going to feel slow compared to cloud compute, so it’s just training the average person to not bother buying AI-labelled hardware for AI.


Hah. I never even said the comment hurt me.
I’m just speaking on behalf of those who have been hurt. If you can’t recognize why someone might do that, you’re either a rape apologist or you’re an incel troll. Probably both.


They had a little bit of a reprieve with the surge of SFF PCs, but not much.


Because some people have digital libraries but no hardware to run them on.
At the very least, this is a loss in gaming accessibility by cost, since a month of GeForce Now used to be a decent gaming backup for when my gaming system was down (had to RMA GPU) or a friend wanted to test the PC gaming waters.


…do we really need to use the word “raping” to talk about PC performance or can we agree that there are a hundred other words that fit better in that spot?
EDIT: Wooo free downvotes. Y’all are a bunch of snowflakes. I tried to make a point on behalf of others, since there are people who have traumatic lived experience with the concept of “rape” and would probably prefer not to be reminded of it. Nobody serious about computing is going to go out and say “this process is raping the performance” because it’s just not a good idea. I bet you (if in tech field) wouldn’t say it in front of your boss. But sure, call me sensitive and pull out a semantic argument.
I can smell you through your screen. Go take a shower and try being human.


I don’t think it’s an upscale. I think it’s an MS Paint recreation. There’s no reason an AI upscale would turn the cheek circles into ellipses and move them away from the edge of the cheek. All the colors are also different, and the geometry differs in places too.


If anything, they might have written most of the infrastructure using LLMs. It’s easy for vibecode to forget about security, because LLMs often lose context or hyperfixate on the wrong features.


4k 120hz on the steam cube?? Sounds like a job for Lossless Scaling 4x Frame Gen! /s
The article mentions Grok being antisemitic and making non-consensual sexual images of people… But zero mention of the CSAM issue? Really?