Seems pretty bad?
That example someone posted, where the AI refused to explain the oklch CSS functional notation and instead said it doesn't exist, pretty much exemplifies why this is a bad idea, although I can see how maybe there were good intentions by whoever implemented it.

In my opinion, the "AI Explain" feature is unnecessary, as I find the MDN contributors already do an excellent job of explaining things as-is, especially in the Examples section of the documentation itself.
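For reference, oklch() is real, standardized CSS (the class name and color values below are just made-up illustrations); a minimal sketch of the notation the AI claimed doesn't exist:

```css
/* oklch(lightness chroma hue [/ alpha]) — a perceptually uniform color notation */
.example-button {
  background-color: oklch(70% 0.15 230);  /* a medium blue */
  color: oklch(98% 0.01 230 / 0.9);       /* near-white, slightly transparent */
}
```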
maybe there were good intentions by whoever implemented it
If an executive saying “find ways to use ChatGPT so we can be on the cutting edge” and a developer saying “eh, I guess maybe…” counts as good intentions.
Agreed, and the questions I have that MDN doesn't answer are probably the ones AI Explain would be even less likely to get right.
I sometimes think we might currently be at the best AI state we'll see for the next 20 years or so, until other significant technological improvements are achieved.
These AIs were trained on human-generated data, but now we're going to trash the Internet with AI-generated, truth-sounding nonsense, so the same methods will likely produce worse and worse results.