Cat@ponder.cat to Technology@lemmy.world · 23 days ago
Perplexity open sources R1 1776, a version of the DeepSeek R1 model that CEO Aravind Srinivas says has been "post-trained to remove the China censorship". (www.perplexity.ai)
fruitycoder@sh.itjust.works · 2 days ago
Listen, I'm highly critical of the CCP, but LLMs aren't fact machines; they are machines that make text like the text they were trained on. They have no grasp of truth, and at best we can only get some sense of the truth from the average collective text response of their dataset.
iopq@lemmy.world · 2 days ago
I'm talking about the example texts.