Very insightful comment Mr. IT of lemmy.
Honestly, I don’t feel like a slight reduction in a month-long battery life is much of a concern. As for waste, I’d say being able to replace a dead battery should reduce waste if anything, and nobody said it must be an AA (on a side note, you seem to imply the use of non-rechargeable AAs, which, holy shit, if they’re still a thing, must be purged, sweet jesus… who’s dumb enough to waste money on em???). Personally, I’d much prefer having a second battery charging separately somewhere, ready to swap, as opposed to being forced to stop using my computer. Or, like the Nintendo Switch I mentioned before, have some spot I can put it away for charging; that way the mouse is also cleaned up and not fucking dangling and wobbling around freely on the table.
I don’t understand what was wrong with the original version that just took 2 AA batteries. Was reaching for the AA charger and swapping cells not awkward enough or something?
Smart and elegant design would be hiding a battery charger in the iMac itself (maybe even using something smaller than AA), not expecting you to flip your mouse over and plug it in every time ya leave it. The Nintendo Switch, while a completely different form factor, is a great example of an elegant (you could even say “wireless”) charging solution.
I’m getting really sick of the Apple aesthetic of sticking-out wires, be it the mouse or the dozen dongles for every portable device they now make. Ugh! Can’t forget the world’s only pen that needs charging, for seemingly no reason.
“Debatable” is a heavy stretch for someone with a 0% track record on promised tech while repeating “we can do it NOW and it will be available NEXT YEAR!” for a literal decade. Robotaxis were supposed to be EVERYWHERE 4 years ago. Same with SpaceX: we were supposed to be sending the first people to Mars this year, yet all Elon has managed is to burn 3 billion taxpayer dollars on literal fireworks, as not a single “Starship” has managed to reach high orbit. Even the Cybertruck is a cheap knockoff of what was promised. Not to mention the countless people who have died because he’s allowed to beta test his death machines in public. Can’t forget his Starlink shenanigans in Ukraine, fucking warlord wannabe… Elon is the greatest scam artist in modern history, and it’s absolutely disgraceful that he isn’t behind bars, let alone valued at all.
Right, my daily reminder that the US doesn’t believe in human rights. Article 23 of the UN’s Universal Declaration of Human Rights, for anyone curious. This is also a fun, yet basic, resource: https://ourworldindata.org/grapher/countries-with-independent-national-human-rights-institution?time=latest
It’s sad that the West still glorifies this fucking homunculus who not only doesn’t try to adhere to any of the agreed principles and values, but actively goes against them. The Republicans are against half the things written in there, while the Dems “compromise” so hard that “barely making it” is the ultimate unachievable goal in the distance to aim towards. Fucking pathetic.
I totally forgot it even had that. The people I talk to only talk about the new side button.
Because the people with power funding this shit have pretty much zero overlap with the people making this tech. The investors saw a talking robot that aced school exams and could make images and videos, and just assumed it meant we’d have artificial humans in the near future, and, like always, ruined another field by flooding it with money and corruption. These people only know the word “opportunity”, but don’t have the resources or willpower to actually research that “opportunity”.
Assuming I’m an Android fan for pointing out that Apple does shady PR. I literally mentioned that Apple devices have their selling point, and it isn’t UNMATCHED PERFORMANCE or CUTTING-EDGE TECHNOLOGY as their ads seem to suggest. It’s a polished experience and beautiful presentation; that is unmatched, unlike the hot mess that is Android. Android also has its selling points, but this reply is already getting long. Just wanted to point out your pettiness and unwillingness to read more than a sentence.
Big leap for OpenAI, as in a kind of ML model they haven’t explored yet. Not that big for AI in general, as others have done the same with similar results. Until they can make graphs where they look exceptionally better compared to models other than their own, it’s not that much of a breakthrough.
Dang, OpenAI just pulled an Apple: do something other people have already done with the same results (and, importantly, did before OpenAI made a big fuss about it), claim it’s their innovation, give it a bloated name so people imagine it’s more than it is, and produce a graph comparing themselves to themselves, hoping nobody will look at the competition.
Just like Apple, they have their own selling point, but they seem to prefer making stuff up while forgetting why people actually use em.
On a side note, they also pulled an Elon. Where’s my AI companion that can comment on video in real time and sing to me??? Ya had it “working” “live” a couple months ago, WHERE IS IT?!?
“This process is akin to how humans learn…”
I’m so fucking sick of people saying that. We have no fucking clue how humans LEARN, aka gather understanding, aka how cognition works or what it truly is. On the contrary, we can deduce that it probably isn’t very close to human memory/learning/cognition/sentience (or any other buzzword that stands in for things we don’t understand yet), considering human memory is extremely lossy and tends to insert its own bias, as opposed to LLMs, which do neither and religiously follow patterns to their own fault.
It’s quite literally a text prediction machine that started its life as a translator (and still does amazingly at that task); it just happens that general human language is a very powerful tool all on its own.
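To make “text prediction machine” concrete, here’s a toy sketch in Python: a bigram counter that always emits the most likely next word. The corpus and names are made up for illustration, and real LLMs are transformers over subword tokens, but the objective is literally this: predict the next token.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a corpus,
# then repeatedly emit the most likely next word.
corpus = "the cat sat on the mat and the cat slept".split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    # Return the most frequently observed follower of `word`.
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

word = "the"
generated = [word]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # -> "the cat sat on the"
```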
I could go on and on, as I usually do on lemmy about AI, but your argument is literally “a neural network is theoretically like the nervous system, therefore human”, so I have no faith in getting through to you people.
Calling the reward system hormones doesn’t really change the fact that we have no clue where to even start. What is a good reward for general intelligence? Solving problems? That’s our current approach, which has the issue of the AI not actually understanding the problems and just ending up remembering question-answer pairs (patterns); see the sketch below. We need to figure out what defines intelligence and “understanding” in an easily measurable way. Which is something people knew almost a hundred years ago when we came up with the idea of neural networks, and why I say we didn’t get any closer to AGI with LLMs.
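A deliberately crude sketch of what I mean, with invented question-answer pairs: if the “reward” is just matching reference answers, a pure lookup table maxes it out, so the training signal can’t separate memorization from understanding.

```python
# Crude "solving problems" reward: 1 point for matching the reference
# answer, 0 otherwise. The pairs below are invented for illustration.
qa_pairs = {
    "What is 2 + 2?": "4",
    "Capital of France?": "Paris",
}

def reward(question: str, answer: str) -> float:
    return 1.0 if qa_pairs.get(question) == answer else 0.0

# A pure lookup table gets a perfect score without understanding anything...
memorizer = lambda q: qa_pairs.get(q, "")
print(reward("What is 2 + 2?", memorizer("What is 2 + 2?")))  # 1.0

# ...and everything falls apart on a trivial rephrasing the pairs don't cover.
print(reward("What is two plus two?", memorizer("What is two plus two?")))  # 0.0
```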
In theory. Then comes the question of how exactly you’re gonna teach/train it. I feel our current approach is too strict for proper intelligence to emerge, but what do I know. I honestly have no clue how such a model could be trained. I guess it would be similar to how people train actual brain cells? Tho that field is very immature atm… The neat thing about the human brain is that it’s already preconfigured for self-learning, tho it does come with its own bias on what to learn due to its unique needs and desires.
They are indeed. It has multiple steps, but pretty much (rough sketch below). 3Blue1Brown has a nice visual explanation: https://www.youtube.com/watch?v=wjZofJX0v4M
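For the “multiple steps” part, here’s a minimal sketch of what one such step looks like, assuming a plain feed-forward layer (a real transformer stacks many of these plus attention, which the video walks through); the sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# One "step": a matrix multiply plus a bias, then a nonlinearity.
# In goes a vector of numbers, out goes a vector of numbers.
x = rng.normal(size=4)                                 # input vector
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # layer 1 weights
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)   # layer 2 weights

hidden = np.maximum(0.0, W1 @ x + b1)  # matrix multiply + ReLU
output = W2 @ hidden + b2              # another matrix multiply
print(output)
```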
You can think of the brain as a set of modules, but sensors and the ability to adhere to a predefined grammar aren’t what define AGI, if you ask me. We’re missing the most important module: AGI requires cognition, the ability to acquire knowledge and understanding. Such an ability would make large language models completely redundant, as it could just learn language, or even come up with one all on its own, like kids in isolation do, for example.
What I was trying to point out is that “neural networks” don’t actually learn the way we do; using the word “learn” is a bit misleading, because it implies cognition. A neural network in the computer science sense is just a bunch of random operations in sequence. In goes a number, out goes a number. We then collect a bunch of input-output pairs, the dataset, and semi-randomly adjust these operations until they happen to somewhat match this collection (rough sketch of that loop below). The reasoning is done by the humans assembling the input-output pairs; that step is implicitly skipped for the AI. It doesn’t know why they belong together, and it isn’t allowed to reason about why, because the second it spits out something else, that’s an error and the whole process breaks. That’s why LLMs hallucinate with perfect confidence and why they’ll never gain cognition: the second you remove the human assembling the dataset, you’re quite literally left with nothing but semi-random numbers, and that’s why they degrade so fast when learning from themselves.
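Here’s that loop on a toy one-weight “model”, with made-up data. Real training uses gradients (backprop) rather than blind random nudges, but the shape of the process, adjust until the outputs match the human-assembled pairs, is the same:

```python
import random

# Human-assembled input/output pairs; the "reasoning" (y = 2x + 1)
# lives in whoever chose them, never in the model.
dataset = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = random.random(), random.random()  # "a bunch of random operations"

def loss(w, b):
    # How far the model's outputs are from the collected pairs.
    return sum((w * x + b - y) ** 2 for x, y in dataset)

for _ in range(20000):
    # Propose a small random tweak; keep it only if it matches the
    # dataset better. No reasoning about *why* the pairs belong together.
    nw = w + random.uniform(-0.01, 0.01)
    nb = b + random.uniform(-0.01, 0.01)
    if loss(nw, nb) < loss(w, b):
        w, b = nw, nb

print(w, b)  # ends up near w=2, b=1 without ever "knowing" the rule
```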
This technology is very impressive and quite useful, and demonstrates how powerful of a tool language alone is, but it doesn’t get us any closer to AGI.
Nope, people are quite resilient. As long as it’s not a literal newborn, the chance of survival isn’t THAT low. Once you’re past 4 years old or so, a human can manage quite well.
Also, dying because no one takes care of you and you fail to acquire food, and dying of a stroke/seizure, are two very different things.
The 5-year-old baby LLM can’t learn shit and lacks the ability to understand new information. You’re assuming that we and LLMs “learn” in the same way. Our brains can reason and remember information, detect new patterns and build on them. An LLM is quite literally incapable of learning a brand-new pattern, let alone reasoning about and building on it. Until we have an AI that can accept new information without being told what is and isn’t important to remember and how to work with that information, we’re not even a single step closer to AGI. Just because LLMs are impressive doesn’t mean they possess any cognition. The only way AIs “learn” is by countless people constantly telling them what is and isn’t important or even correct. The second you remove that part, it stops working and turns to shit real quick. More “training” time isn’t going to solve the fact that without human input and human-defined limits, it can’t do a single thing. AI cannot learn from itself without human input either; there are countless studies showing how it degrades, and it degrades quickly, like literally just one generation down the line it’s absolute trash.
Language models are literally incapable of reasoning beyond what is present in the dataset or the prompt. Try giving one a known riddle changed so it becomes trivial, for example “With a boat, how can a man and a goat get across the river?” Despite it being a one-step solution, it’ll still try to shove in the original answer and often enough not even solve it. Best part: if you then ask it to explain its reasoning (not tell it what it did wrong, that’s new information you provide; ask it why it did what it did), it’ll completely shit itself hallucinating more bullshit for the bullshit solution. There’s no evidence at all that they have any cognitive capacity.
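If you want to try it yourself, here’s a minimal sketch using the official openai Python client; the model name is just an example, and you’d need your own API key:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "riddle" is trivial: one boat trip, man and goat ride together.
prompt = "With a boat, how can a man and a goat get across the river?"

reply = client.chat.completions.create(
    model="gpt-4o",  # example model name; substitute whatever you're testing
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
# In my experience it pattern-matches to the classic wolf/goat/cabbage
# puzzle and produces multi-trip "solutions" instead of the one-step answer.
```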
I even managed to break it once through normal conversation: something happened in my life that was unique enough to be missing from the dataset, and thus incomprehensible to the AI. It just wasn’t able to follow the events, no matter how many times I explained.
I don’t understand why people don’t go for something like ZorinOS or Nobara. Both work great out of the box with support for like everything.
Thanks for not providing even a single example… I tried looking around and only found relatively niche use cases, like them being more reliable and resistant (while also being lighter) in extreme environments or for infrequent use, for example in emergency equipment. And then some people choose em because they don’t feel like investing more into it, when they only have a couple of devices that drain batteries slowly, say a clock that lasts a year. That could easily be fixed if you could exchange empty batteries for full ones, like with gas tanks and the like, and the prior examples aren’t a good enough reason to keep such batteries in convenience stores.