I’m a dev, and have been for a while. My boss does a lot of technology watching, and he brings in a lot of cool ideas and information. He’s down to earth. Cool guy. I like him, but he’s now convinced that LLMs are about to swallow the world, and the pressure to inject this stuff everywhere in our org is driving me nuts.

I enjoy every part of making software, from talking with clients and future users, to coding, to deployment. I am NOT excited at the prospect of going from designing an architecture and coding it to prompting ChatGPT. This sort of black-box magic irks me to no end. Nobody understands it! I don’t want to read yet another article about how an AI enthusiast is baffled at how good an LLM is at coding. Why are they baffled? They have “AI” twelve times in their bio! If they don’t understand it, who does?!

I’ve based twenty years of my career on being attentive, inquisitive, creative, and thorough. By now, an in-depth understanding of my tools, and more importantly of my work, is basically a compulsion.

Maybe I’m just feeling threatened, or turning into “old man yells at cloud”. If you ask me, I’m mostly worried about my field becoming uninteresting. Anyway, that was the rant. TGIF; tomorrow I touch grass.

  • MagicShel@programming.dev · 10 points · 1 year ago

    Having an AI help you code is like having a junior developer who is blazing fast, enthusiastic, and listens well. However, it doesn’t think about what it writes, it does no testing, and it doesn’t understand the big picture at all. For very simple tasks it gets the job done very fast, but for complex tasks, no matter how many times you explain, it is never going to get it. I don’t think there’s any worry about AI replacing developers any time in the foreseeable future.

    Edit: fixed voice to text issues.

    • mkhoury@lemmy.ca · 2 points · 1 year ago

      But you can work with it: you write all the tests/acceptance criteria, then have the AI write code and run it against the tests. We spent a lot of time developing processes for humans writing code; we need to keep integrating the machines into those processes. It might not do 100% of the work you’re currently doing, but it could maybe do 50% reliably. That’s still pretty disruptive!
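
      Concretely, I imagine the loop looking something like this rough Python sketch. To be clear, `generate_code` is a hypothetical placeholder for whatever LLM API you call; the humans still write the tests and review the result:

      ```python
      import subprocess

      def generate_code(prompt: str) -> str:
          """Hypothetical placeholder: call your LLM of choice, return source code."""
          raise NotImplementedError

      def llm_tdd_loop(spec: str, test_file: str, impl_file: str,
                       max_attempts: int = 5) -> bool:
          """Ask the model for an implementation, run the human-written tests,
          and feed the failures back until the tests pass or we give up."""
          feedback = ""
          for _ in range(max_attempts):
              code = generate_code(f"Spec:\n{spec}\n\nFix these failures:\n{feedback}")
              with open(impl_file, "w") as f:
                  f.write(code)
              result = subprocess.run(["pytest", test_file, "-q"],
                                      capture_output=True, text=True)
              if result.returncode == 0:
                  return True  # green: hand off to a human for review
              feedback = result.stdout + result.stderr  # red: show the model why
          return False  # never converged; a human takes over
      ```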

    • shadowolf@lemmy.ca · 2 points · 1 year ago

      In fairness, this is more a limitation of the current technology. You’re looking at GPT-4 and concluding it’s not an expert, but what about GPT-5 or 6? Or some of the newer ideas, like Microsoft’s plan for a 1-million-token model using a dilated-attention mechanism. The point is that we’re still on the ground floor, and these models keep showing emergent functionality.
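
      For the curious, dilated attention is roughly: chop the sequence into segments and let only every `rate`-th position in each segment attend, so the quadratic cost shrinks by about rate squared and much longer contexts become affordable. Here’s a toy single-head NumPy sketch of the concept (my own illustration, not the actual LongNet code, which overlays several segment/rate combinations so no position gets dropped):

      ```python
      import numpy as np

      def softmax(x, axis=-1):
          e = np.exp(x - x.max(axis=axis, keepdims=True))
          return e / e.sum(axis=axis, keepdims=True)

      def dilated_attention(q, k, v, segment=8, rate=2):
          """Toy single-head dilated attention: within each segment, only every
          `rate`-th position takes part, cutting attention cost by ~rate**2."""
          n, d = q.shape
          out = np.zeros_like(v)  # skipped positions just stay zero in this toy
          for start in range(0, n, segment):
              idx = np.arange(start, min(start + segment, n), rate)
              scores = q[idx] @ k[idx].T / np.sqrt(d)  # dense attention, but only
              out[idx] = softmax(scores) @ v[idx]      # over the sparse subset
          return out

      # Example: 32 tokens with 4-dim toy embeddings
      rng = np.random.default_rng(0)
      q, k, v = (rng.normal(size=(32, 4)) for _ in range(3))
      print(dilated_attention(q, k, v).shape)  # (32, 4)
      ```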

    • hallettj@beehaw.org · 1 point · 1 year ago

      Lol, this is what I was thinking too. A junior dev is also a black box. AI automation seems more like delegating than programming to me.

    • Naate@beehaw.org · 1 point · 1 year ago

      This is a pretty apt analogy, I think.

      We’ve been using Copilot at work, and it’s really surprised me with some slick suggestions that “mostly work”. But I don’t think it could have written anything my team has done beyond the boilerplate.

      (I also spend way too much time watching Copilot and IntelliSense fight, and it pisses me off to no end.)