• Terrasque@infosec.pub · 2 months ago

    What LLM did you use, and how long ago was it? Claude Sonnet usually writes pretty good Python for smaller scripts (a few hundred lines).

    • Telorand@reddthat.com · 2 months ago

      It was ChatGPT from earlier this year. It wasn’t a huge deal for me that it made mistakes, because I had a very specific use case and just wanted to save some time; I knew I’d have to troubleshoot grafting it into my function. But even after I pointed out that it was using deprecated syntax (and how to correct it), it just spat out the code again with even more errors, still using the deprecated syntax.

      All LLMs will fail like this in some way, because they don’t actually understand what they’re generating (i.e. they have no mechanism for self-evaluating the veracity of their statements).

      • Terrasque@infosec.pub · edited · 2 months ago

        This is a very simple one, but someone lower down apparently had an issue with a script like this:

        https://i.imgur.com/wD9XXYt.png

        I tested the code, and it works. If I were going to change anything, I’d probably move the matplotlib import to after the else so it’s only imported when needed to display the image.
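        Roughly, the structure I mean is something like this; it’s not the exact script from the screenshot, so the loading details here (PIL, an argparse --save flag) are just stand-ins for whatever the generated code actually used:

        # Sketch of the deferred-import idea: matplotlib is only loaded
        # when the image actually needs to be displayed.
        import argparse

        def main():
            parser = argparse.ArgumentParser(description="Load an image and save or display it")
            parser.add_argument("path", help="path to the image file")
            parser.add_argument("--save", metavar="OUT", help="save to OUT instead of displaying")
            args = parser.parse_args()

            from PIL import Image  # stand-in for however the script loads the image
            img = Image.open(args.path)

            if args.save:
                img.save(args.save)
            else:
                # Imported only in this branch, so saving an image
                # never pays the cost of pulling in matplotlib.
                import matplotlib.pyplot as plt
                plt.imshow(img)
                plt.axis("off")
                plt.show()

        if __name__ == "__main__":
            main()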

        I have a lot more complex generations in my history, but all of them contain personal or business details and involve much more back-and-forth. But try it yourself; Claude has a free tier. Just be clear in the prompt about what you want. It might surprise you.

        • Telorand@reddthat.com · 2 months ago

          I appreciate the effort you put into the comment and your kind tone, but I’m not really interested in increasing LLM presence in my life.

          I said what I said, and I experienced what I experienced. Providing an example where it works is in no way a falsification of the core of my original comment: LLMs have no place generating code for secure applications apart from human review, because they have no mechanism to comprehend or proofread their own work.