Every time I have asked ChatGPT to code something it seems to lose the thread halfway through and starts giving nonsensical code. I asked it to do something simple in HP41C calculator code and it invented functions out of whole cloth.
I asked it for something in PowerShell and it did the same thing. I asked how it came up with that function, and it said the function doesn’t exist, but if it did, that’s how it would work.
Quality of output depends a lot on how common the code is in its training data. I would guess it’d be best at something like Python, with its wealth of teaching materials and examples out there.
It depends on how common the language is and how novel the idea is. It cannot create something new. It isn’t creative. It spits out what is predictable based on what other people have written before. It isn’t intelligent. It’s glorified auto-complete.
When it starts going off the rails like that, I also ask it to “check its work when it’s done,” and that seems to extend the amount of usable time before it loses the plot and suggests I use VBA or something.
I like it when it gives you an answer and you tell it where it made an error. So it changes it to something that’s even more wrong, then you point that out and it says, “Sorry. Here’s the first wrong thing I told you, again.”
Just like me fr fr.
I don’t think ChatGPT will take the place of humans.
True, generative AI development tools assist in writing and debugging code, accelerating software development and reducing errors.
Eh… no. It’s not.
- coming from an actual developer.
It’s just a joke lol
Sir, this is programmer humour
We all are