How would they determine what is AI generated and what is not?
Every tenth line of code needs a comment break for a detailed ASCII “drawing” of human hands
This is just a normal fist! I don’t see anything wrong with it!
 _______
---'   ____)____
          ______)
          ______)
         _______)
        _______)
       _______)
---.__________)
I don’t think that this is a hard rule. They probably look for the same signs that we do - plausible-sounding utter gibberish. They just don’t want the drop in quality that comes with it. If an author creates content with AI but takes the time to edit and improve it, I think the Gentoo team may give it a pass.
Banned.
When you write a copyright notice, ought you to specify which code is actually copyrighted and which is AI-written? I guess you can just include the code and pretend you wrote it, or just omit noting which part is actually the non-copyrighted AI code.
ChatGPT seems to have some issues with excessive amounts of code
It’s really not hard to tell.
I’m wary of those with so much confidence.
If you can tell the contribution is ai generated, it’s not good enough
A lot of butthurt techbros getting cockblocked here lmao
Ur mum
Thank you Gentoo Linux for this.
Might as well ban Stack Overflow-based contributions as well.
AI is a great tool for coding. As long as it’s used responsibly. Like any other tool, really.
External LLMs are great for getting ideas and a quick overview of something, and helpers integrated into IDEs are useful to autocomplete longer lines of code or repetitive things.
I frequently ask ChatGPT to make whole functions for me. It’s important to check the code and test it, obviously, but it has saved me quite a bit of time.
I find it difficult to describe single functions that need to be integrated into a larger project, especially ones that have to use a private or lesser-known library. For instance, it totally fucked up using Bluetooth via DBus in C++. And the whole project is basically just that.
It certainly has its limitations. I’ve noticed a few topics where it generally gets things wrong, or where I can’t seem to explain what I want properly. In those cases you can just use it as a reference guide: toss it some code and ask it what it thinks. It’s not always useful information, but sometimes it leads you down a different road that you would not have thought of otherwise.
Problem is, I only ever need something more powerful than a search engine for topics that are too complicated for me and/or poorly documented, in which case LLMs fail just as badly. So it’s really only ever useful for getting a general sense of a topic, and even then it may be biased toward outdated information (e.g. preferring bluetooth.h over DBus-based Bluetooth handling) or outright ignorant of new standards, libraries, and styles. And in my experience, problems that have one well-accepted and well-documented standard don’t need any AI to learn about.
Lol Lemmy socialists are so butthurt. Your statement is literally most reasonable and sane/rational, but lemmy.ml only knows cringey extremism.
What the heck are you on about??? There are no comments in this thread that sound “butthurt”. And I don’t especially like your generalisation of Lemmy users. You sound like a troll.
Socialism is when people use tools to help complete a task?
For fuck sake you may as well come out as a pedophile if you’re going to be posting shit like this.
But how would they know? It’s like Blade Runner.
Lots of companies will do this, eventually advertising the purity and the size of their human-created training data.
These will be the companies selling their content to AI companies, although some of it will probably just be scanned in illegally. Perhaps a new type of copyright lawsuit will have to be invented.
Most people will continue to use these sites, aware their data is being used like this.