• dawsoneliasen@lemmy.sdf.org (OP) · 1 year ago

      I haven’t thought about this aspect of AI nearly as much, just because I spend a lot of time thinking about writing code and very little time thinking about macroeconomics. But I think you’re probably right to be worried, especially with the hype explosion caused by ChatGPT. I think ChatGPT is nigh-useless in a real practical sense, but it may work very well for the purpose of making OpenAI money. They’ve announced partnerships with huge consulting firms. It’s easy to imagine it becoming an engine for making corporations money based mostly on its perceived amazing capabilities, without actually improving the world in any way. Consulting is already a cesspool of huge sums of capital that accomplish nothing (I work in consulting), and ChatGPT is perfect for accelerating that. It’s also worth pointing out that these latest models like GPT require ungodly amounts of data and compute: you have to be obscenely rich to create them.

      • CanadaPlus@lemmy.sdf.org · 1 year ago

        “I think ChatGPT is nigh-useless in a real practical sense”

        That’s a very strong take. If you want something to write a short story or essay for you, it’s mad useful. It can also take a fast-food order pretty reliably right off the shelf.

      • kool_newt@beehaw.org · 1 year ago

        I’m a programmer, and ChatGPT is incredibly useful to me now. I use it like a search engine that can pull up the perfect example to get me over a hurdle. Even when it’s wrong, it’s still useful, like having a human tutor who can also be wrong.

        Ultimately, I think this type of AI will have an effect similar to Google’s.

    • CanadaPlus@lemmy.sdf.org · 1 year ago

      Not OP, but it’s a real concern, although at this rate of progression I wonder whether AI ethics will make much of a splash before AI alignment becomes the question.

  • zkxs@lemmy.sdf.org · 1 year ago

    Wow, this is just what I’ve been looking for without even realizing it. A lot of my friends who are newer to the world of programming are very excited by this new wave of generative AI, particularly ChatGPT and GitHub Copilot. Conversely, I personally have a lot of misgivings about AI programming, sort of half-formed in my mind. I’ve been programming for a while now (although I’m sure relative to all the SDF veterans I’m still pretty new to the game), and I can’t bring myself to believe that prodding ChatGPT into reasonable output is more efficient than just writing the code yourself… and then I start to worry that perhaps I’m biased. As they say, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it”.

    Anyways, your headline alone is a better argument against the merits of AI programming than anything I was able to come up with, so going into it I knew the post would be a good read. And I wasn’t disappointed: you’ve provided me with a much better framework to discuss generative AI with folks moving forward. Thanks for writing this!

    • Modal@lemmy.sdf.org · 1 year ago

      I think of AI in programming the same way I think about search engines (there are a lot of parallels). It can be helpful when you’re stuck or learning something new, it can be wrong, and if you use it for everything, you might get something that works, but it’s not going to look like something someone with experience would have done.

    • CanadaPlus@lemmy.sdf.org · 1 year ago

      It’s true, but all programs start, at least partly, as natural language. Clients tell developers what they want, and the developers then translate that into something that makes actual sense and is close enough to the request to make the clients happy.

      • EamonnMR@lemmy.sdf.org · 1 year ago

        Indeed, it’s the job of the programmer to understand that natural language and use it to design a program. That lack of understanding is one thing that worries me about LLMs writing programs.

        • CanadaPlus@lemmy.sdf.org · 1 year ago

          Like the article mentions, it’s only good at boilerplate code at the moment and can’t really do architecture very well. I guess that’s why it’s “GitHub Copilot” and not “GitHub Pilot”.

          Going forward, who knows? We fundamentally don’t understand why LLMs work.

    • kool_newt@beehaw.org · 1 year ago

      Think about using AI output as inspiration, as examples, or as a way to get over writer’s block, and less about cutting and pasting its output wholesale as completed work.

  • gjost@lemmy.sdf.org · 1 year ago

    I was able to prod ChatGPT into writing a Python function for computing the compass direction between two points on a 2D grid. It came up with something that worked, but I had to iterate many times, and it took about as long as googling the math did back when I wrote the function myself.
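
    For anyone curious, the core of that kind of function is a couple of lines of atan2. Here’s a rough sketch (not ChatGPT’s output or my exact code; it assumes +x points east, +y points north, and bearings are measured in degrees clockwise from north):

        import math

        def compass_direction(x1, y1, x2, y2):
            """Bearing in degrees from (x1, y1) to (x2, y2):
            0 = north, 90 = east, measured clockwise."""
            # atan2 normally measures counterclockwise from +x; swapping
            # the arguments measures clockwise from +y (north) instead.
            return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 360

        def cardinal(bearing):
            """Map a bearing to the nearest of the 8 compass points."""
            points = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
            return points[round(bearing / 45) % 8]

        # e.g. cardinal(compass_direction(0, 0, 1, 1)) -> "NE"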

    My programming career has been built on googling around to explore problems somebody asked me to solve, and then synthesizing the results I found into code. My first reaction was that ChatGPT might short-circuit that process. What would my career have been like if this had been available back then? I feel like all that googling over the years gave me a sense of problem spaces and a certain amount of domain knowledge, and I would have missed out on that with ChatGPT. On the other hand, it took knowledge to know whether its answer was correct…

    The other thing I thought was that during my career I’ve gone from hand-coded HTML to Perl CGI to Cold Fusion to PHP to web frameworks, and also from straight HTML to CSS to frameworks like Bootstrap. Each time I’ve fretted over not being involved in the layers below. Is ChatGPT just another layer?

    Of course, I have no clue about browser internals, or about the OS, but I know that somebody does. At some level it’s a clockwork engine that can be picked apart and understood. ChatGPT feels different: nobody actually knows its internals, and I worry that future generations of programmers will be generating code that they don’t understand, and that maybe nobody will be able to understand.