• Hector_McG@programming.dev · 1 year ago

      LLMs produce code that is functionally error prone while looking reasonable (in the same way that they produce answers that are grammatically correct and correctly spelled, but factually incorrect).

      As we all know, fixing bugs in someone else’s code is generally more difficult than writing the code correctly in the first place, and that’s going to apply to an LLM’s code output just as much as a human’s, if not more.
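
      A toy illustration of that failure mode (hypothetical code, not from the article): this reads fine at a glance but hides a classic edge-case bug that only shows up at runtime.

      # "Return the last n items of a list": looks reasonable
      def last_n(items, n):
          # Bug: for n == 0 this returns the WHOLE list,
          # because items[-0:] is the same as items[0:]
          return items[-n:]

      print(last_n([1, 2, 3, 4], 2))  # [3, 4], fine
      print(last_n([1, 2, 3, 4], 0))  # [1, 2, 3, 4], silently wrong; expected []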

    • Lmaydev@programming.dev · 1 year ago

      That’s assuming they’re using one of the generic models like ChatGPT and not something custom they’ve created specifically to do this.

      Edit: they are in fact using their own model, as per the article.

      • andscape@feddit.it · 1 year ago

        I’m aware they’re not using a generic model, but that’s not much better. Current custom-made models still fuck up significantly more than humans, and in less predictable ways.

        Even if their custom model is only slightly wrong 1% of the time, that’s still a major problem in critical systems like those.
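
        To put a rough number on why: a 1% per-change error rate compounds fast. A back-of-the-envelope sketch in Python (the figures are hypothetical, just following the 1% above):

        # Chance that at least one of n independent changes is faulty,
        # given a per-change error rate p (hypothetical numbers)
        p = 0.01
        for n in (10, 100, 1000):
            print(n, round(1 - (1 - p) ** n, 4))
        # 10   -> 0.0956  (about 1 in 10)
        # 100  -> 0.634   (more likely than not)
        # 1000 -> 1.0     (rounded; near certainty)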

        • FinancesDrone98@programming.dev · 1 year ago

          I mostly use A.I. to translate. ChatGPT gets it done pretty well, especially when you say “Translate this Mandarin text into English. I don’t care if it is somewhat inaccurate, just do it as best as you can.”
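
          For anyone who wants to script that kind of “best effort” prompt, a minimal sketch using the official openai Python package (the model name is a placeholder, not a recommendation):

          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          def rough_translate(text: str) -> str:
              # Same "some inaccuracy is fine, just do your best" framing as above
              response = client.chat.completions.create(
                  model="gpt-4o-mini",  # placeholder model name
                  messages=[
                      {"role": "system",
                       "content": "Translate Mandarin text into English. "
                                  "Some inaccuracy is acceptable; just do your best."},
                      {"role": "user", "content": text},
                  ],
              )
              return response.choices[0].message.content

          print(rough_translate("你好，世界"))  # e.g. "Hello, world"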