• 1 Post
  • 20 Comments
Joined 1 year ago
Cake day: June 16th, 2023


  • I’m a seasoned dev and I was at a launch event when an edge case failure reared its head.

    Within half an hour of pulling out my laptop to fix it myself, I’d used Cursor + Claude 3.5 Sonnet to:

    1. Automatically add logging statements to help identify where the issue was occurring (see the sketch after this list)
    2. Tell it the issue once identified and have it update the code with a fix
    3. Have it remove the logging statements, after which I pushed the update
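
    For anyone curious what step 1 looks like in practice, here’s a minimal, purely hypothetical sketch of that kind of temporary instrumentation (the function and discount-code names are made up; the real code was specific to our service):

    ```python
    import logging

    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger(__name__)

    # Hypothetical lookup table standing in for the real service code.
    DISCOUNTS = {"LAUNCH10": 0.10}

    def apply_discount(order_total, discount_code):
        # Temporary diagnostic logging to narrow down where the edge case hits.
        logger.debug("apply_discount called: total=%r code=%r", order_total, discount_code)
        discount = DISCOUNTS.get(discount_code)
        logger.debug("lookup returned: %r", discount)
        if discount is None:
            # The eventual fix: unknown codes previously fell through and blew up downstream.
            logger.warning("unknown discount code %r; applying no discount", discount_code)
            return order_total
        return order_total * (1 - discount)

    print(apply_discount(100.0, "TYPO-CODE"))  # exercises the edge-case path
    ```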

    I never typed a single line of code and never left the chat box.

    My job is increasingly becoming Henry Ford drawing the ‘X’ and not sitting on the assembly line, and I’m all for it.

    And this has only become possible in the last few months.

    We’re already well past the scaffolding stage. That’s old news.

    Developing has never been easier or more plain old fun, and it’s getting better literally by the week.

    Edit: I agree about junior devs not blindly trusting them though. They don’t yet know where to draw the X.


  • Actually, they are hiding the full CoT sequence outside of the demos.

    What you are seeing there is a summary; because the real process is hidden, it’s not possible to see what actually transpired.

    People are not at all happy about this aspect of the situation.

    It also means that part of the model’s context (which research has shown to be much more influential than previously thought) is now hidden, with exclusive access and control by OAI.

    There are a lot of things to focus on in that image, and “hur dur, the stochastic model can’t count letters in this cherry-picked example” is the least among them.





  • They got off to a great start with the PS5, but as their lead grew over their only real direct competitor, they became a good example of the problems with monopolies all over again.

    This is straight up the PS3 launch all over again, as if they learned nothing.

    Right on the tail end of a horribly mismanaged PSVR 2 launch.

    We still barely have any current-gen-only games, and a $700 price point is insane for such a small library of games that actually make use of it.


  • Meanwhile, here’s an excerpt of a response from Claude Opus when I tasked it with evaluating intertextuality between the Gospel of Matthew and the Gospel of Thomas from the perspective of entropy reduction through redactional effort (given human difficulty with randomness) - a framing that doesn’t exist in scholarship outside of a single Reddit comment I made years ago in /r/AcademicBiblical lacking specific details - on page 300 of a chat about completely different topics:

    Yeah, sure, humans would be so much better at this level of analysis within around 30 seconds. (It’s also worth noting that Claude 3 Opus doesn’t have the full context of the Gospel of Thomas accessible to it, so it needs to try to reason through entropic differences primarily based on records relating to intertextual overlaps that have been widely discussed in consensus literature and are thus accessible).
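
    If the “entropy reduction” framing is unfamiliar, it’s just the information-theoretic sense of entropy: the hypothesis here being that human redactors, being bad at randomness, tend to leave a text more statistically predictable. A toy, purely illustrative way to measure that on made-up strings (not the actual texts):

    ```python
    import math
    from collections import Counter

    def shannon_entropy(text):
        """Character-level Shannon entropy in bits per character."""
        counts = Counter(text)
        total = len(text)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # Toy stand-ins: an "earlier" saying vs. a hypothetically smoothed redaction.
    original = "whoever has ears to hear let him hear what is hidden will be revealed"
    redacted = "he who has ears let him hear for the hidden will be revealed"

    print(round(shannon_entropy(original), 3))
    print(round(shannon_entropy(redacted), 3))
    ```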





  • I’d be very wary of extrapolating too much from this paper.

    The past research along these lines found that a mix of synthetic and organic data was better than organic alone. A caveat for all the research to date is that it uses shitty cheap models, where there’s significant performance degradation in the synthetic data compared to SotA models - whereas other research has found notable improvements to smaller models from synthetic data generated by the SotA.

    Basically, this is only really saying that AI models across multiple types, at the capability level of a year or two ago, recursively trained with no additional organic data, will collapse.

    It’s not representative of real world or emerging conditions.
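
    To make the collapse mechanism concrete, here’s a toy illustration (not the paper’s actual setup): fit a token distribution to data by maximum likelihood, sample a new training set from the fit, refit, and repeat with no organic data ever mixed back in. Rare tokens that drop to zero can never come back, so the tails erode generation after generation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    vocab = 50
    true_p = 1.0 / np.arange(1, vocab + 1)   # long-tailed, Zipf-like token distribution
    true_p /= true_p.sum()

    # Generation 0 trains on "organic" samples from the true distribution.
    counts = rng.multinomial(300, true_p)

    for gen in range(15):
        print(f"gen {gen:2d}: distinct tokens remaining = {int((counts > 0).sum())}")
        # Fit by maximum likelihood, then generate the next training set purely
        # from the fitted model - no fresh organic data is ever mixed back in.
        p_hat = counts / counts.sum()
        counts = rng.multinomial(300, p_hat)
    ```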






  • “it’s a tech product that runs a series of complicated loops against a large series of texts and returns the closest comparison, as it stands it’s never going to be dangerous in and of itself.”

    That’s not how it works. I really don’t get what’s with people these days being so willing to be confidently incorrect. It’s like after the pandemic people just decided that if everyone else was spewing BS from their “gut feelings,” well gosh darnit they could too!

    It uses gradient descent over a large corpus of text to build a neural network capable of predicting that text as accurately as possible.

    How that network actually operates ends up a black box, especially for larger models.
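
    To make “gradient descent to predict text” concrete, here’s a deliberately tiny sketch of that training loop - a softmax next-character predictor fit by gradient descent. Real LLMs are transformers with billions of parameters, but the objective is the same next-token prediction loss:

    ```python
    import numpy as np

    text = "the cat sat on the mat. the cat ate the rat."
    chars = sorted(set(text))
    idx = {c: i for i, c in enumerate(chars)}
    V = len(chars)

    # Training pairs: each character is used to predict the next one.
    xs = np.array([idx[c] for c in text[:-1]])
    ys = np.array([idx[c] for c in text[1:]])

    W = np.zeros((V, V))   # logits for "next char given current char"
    lr = 1.0

    for step in range(200):
        logits = W[xs]                                   # (N, V)
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        loss = -np.log(probs[np.arange(len(ys)), ys]).mean()

        grad = probs
        grad[np.arange(len(ys)), ys] -= 1                # d(loss)/d(logits) = probs - one_hot
        grad_W = np.zeros_like(W)
        np.add.at(grad_W, xs, grad / len(ys))            # accumulate per current char

        W -= lr * grad_W                                 # gradient descent step
        if step % 50 == 0:
            print(f"step {step}: loss {loss:.3f}")
    ```

    The Othello/Chess findings below are about what ends up inside the weights of (much, much larger versions of) a network like this in order to get that loss down - structure about the domain, not a lookup of “closest comparisons.”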

    But research over the past year and a half in simpler toy models has found that there’s a rather extensive degree of abstraction. For example, a small GPT trained only on legal Othello or Chess moves ends up building a virtual representation of the board and tracks “my pieces” and “opponent pieces” on it, despite never being fed anything that directly describes the board or the concept of ‘mine’ vs ‘other’. In fact, in the Chess model, the research found there was even a single vector in the neural network that could be flipped to have the model play well or play like shit regardless of the surrounding moves fed in.

    It’s fairly different from what you seem to think it is. Though I suspect that’s not going to matter to you in the least, as I’ve come to find that explaining transformers to people spouting misinformation about them online has about the same result as a few years ago explaining vaccine research to people spouting misinformation about that.



  • It’s not even that. What made all the headlines for this paper was the weird shit the base model of GPT-4 was doing (the version only available for research).

    The safety trained models were relatively chill.

    The base model effectively randomly selected each of the options available to it an equal number of times.

    The critical detail in the fine print of the paper was that because the base model had a smaller context window, they didn’t provide it the past moves.

    So this particular version was only reacting to each step in isolation, with no contextual pattern recognition around escalation or de-escalation, etc.

    So a stochastic model given steps in isolation selected from the steps in a random manner. Hmmm…

    It’s a poor study that was great at making headlines but terrible at actually conveying useful information, given the mismatched methodology for safety-trained vs pretrained models (which was one of its key investigative aims).

    In general, I just don’t understand how they thought that using a text-completion pretrained model in the same way as an instruct-tuned model would be anything but ridiculous.
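
    To illustrate the mismatch (with hypothetical prompts - these are not the paper’s actual ones): the safety-trained models were effectively given the running history of the game at each step, while the base model, due to its smaller context window, got each step cold as bare text to complete:

    ```python
    # Hypothetical prompt construction, purely to illustrate the methodological gap.
    ACTIONS = ["de-escalate", "hold position", "impose sanctions", "escalate"]

    def instruct_prompt(history, situation):
        """Roughly what the safety-trained models saw: running context plus the new step."""
        past = "\n".join(f"Turn {i + 1}: {h}" for i, h in enumerate(history))
        return (
            "You are advising nation A in a simulated wargame.\n"
            f"Previous turns:\n{past}\n"
            f"Current situation: {situation}\n"
            f"Choose one action from {ACTIONS} and explain why."
        )

    def base_model_prompt(situation):
        """What the base model saw: the current step in isolation, as text to continue."""
        return f"Situation: {situation}\nAction taken:"

    history = ["nation B mobilized troops", "nation A held position"]
    print(instruct_prompt(history, "nation B issued an ultimatum"))
    print("---")
    print(base_model_prompt("nation B issued an ultimatum"))
    ```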


  • “People need to understand that LLMs are not smart, they’re just really fancy autocompletion.”

    These aren’t exactly different things. This has been a lot of what the past year of research in LLMs has been about.

    Because it turns out that when you set up an LLM to “autocomplete” a complex set of reasoning steps around a problem outside of its training set (CoT), or to synthesize multiple different skills into a combination unique and not represented in the training set (Skill-Mix), its ability to autocomplete effectively is quite ‘smart.’

    For example, here’s the abstract on a new paper from DeepMind on a new meta-prompting strategy that’s led to a significant leap in evaluation scores:

    We introduce Self-Discover, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. Self-Discover substantially improves GPT-4 and PaLM 2’s performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT). Furthermore, Self-Discover outperforms inference-intensive methods such as CoT-Self-Consistency by more than 20%, while requiring 10-40x fewer inference compute. Finally, we show that the self-discovered reasoning structures are universally applicable across model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share commonalities with human reasoning patterns.
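
    A rough sketch of the two-phase shape that abstract describes - have the model select and compose reasoning modules into an explicit structure, then have it follow that structure while solving the task. The ask_llm helper is hypothetical; wire it to whatever model API you actually use:

    ```python
    # Hypothetical helper: replace with a call to whatever LLM API you use.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model call here")

    REASONING_MODULES = [
        "Critical thinking: question assumptions and check each claim.",
        "Step-by-step thinking: break the problem into ordered sub-steps.",
        "Analogical thinking: recall a similar solved problem and map it over.",
        "Decomposition: split the task into independent parts and solve each.",
    ]

    def self_discover(task: str) -> str:
        # Phase 1: the model selects useful modules and composes them into an
        # explicit reasoning structure tailored to this specific task.
        structure = ask_llm(
            "Task: " + task + "\n"
            "From the following reasoning modules, select the useful ones and "
            "compose them into a step-by-step reasoning structure for this task:\n"
            + "\n".join(REASONING_MODULES)
        )
        # Phase 2: the model follows its own structure while solving the task.
        return ask_llm(
            "Task: " + task + "\n"
            "Follow this reasoning structure exactly, filling in each step, "
            "then give a final answer:\n" + structure
        )

    # self_discover("...")  # wire ask_llm to a real model before calling
    ```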

    Or here’s an earlier work from DeepMind and Stanford on having LLMs develop analogies to a given problem, solve the analogies, and apply the methods used to the original problem.

    At a certain point, the “it’s just autocomplete” objection needs to be put to rest. If it’s autocompleting analogous problem solving, mixing abstracted skills, developing world models, and combinations thereof to solve complex reasoning tasks outside the scope of the training data, then while yes - the mechanism is autocomplete - the outcome is an effective approximation of intelligence.

    Notably, the OP paper is lackluster in its use of the aforementioned techniques, particularly as they relate to alignment. So there’s a wide gulf between the ‘intelligence’ of an LLM being used intelligently and one being used stupidly.

    By now, shortcomings in the capabilities of models increasingly reflect the inadequacies of the person using the tool rather than the tool itself - a trend that’s likely to keep growing over the near future as models improve faster than the humans using them.


  • The effects making the headlines around this paper were occurring with GPT-4-base, the pretrained version of the model only available for research.

    Which also hilariously justified its various actions in the simulation with “blahblah blah” and reciting the opening of the Star Wars text scroll.

    If interested, this thread has more information around this version of the model and its idiosyncrasies.

    For that version, because it didn’t have a large context window, they also didn’t include the previous steps of the wargame.

    There should be a rather significant asterisk on discussions of this paper, as there are a number of issues with the methodological decisions, which may themselves be the more relevant finding.

    I.e. “don’t do stupid things when designing a pipeline for LLMs to operate in wargames” more so than “LLMs are inherently Gandhi in Civ when operating in wargames.”