• 𝘋𝘪𝘳𝘬@lemmy.ml
        ↑47 ↓1 · 3 months ago

        This is just a normal fist! I don’t see anything wrong with it!

            _______
        ---'   ____)____
                  ______)
                  ______)
                  _______)
                 _______)
                 _______)
        ---.__________)
        
        
    • TechNom (nobody)@programming.dev
      ↑21 ↓1 · 3 months ago

      I don’t think this is a hard rule. They probably look for the same signs that we do - plausible-sounding utter gibberish - and they just don’t want the drop in quality that comes with it. If an author creates content with AI but takes the time to edit and improve it, I think the Gentoo team may give it a pass.

    • tabular@lemmy.world
      ↑3 · edited · 3 months ago

      When you write a copyright notice, ought you to specify which code is actually copyrighted and which is AI-written? I guess you can just include the code and pretend you wrote it, or omit which part is actually the non-copyrighted AI code.

    • Titou@feddit.de
      ↑3 ↓1 · 3 months ago

      ChatGPT seems to have some issues with excessive amounts of code.

  • saigot@lemmy.ca
    ↑34 · 3 months ago

    If you can tell the contribution is AI-generated, it’s not good enough.

  • Cyborganism@lemmy.ca
    ↑33 ↓18 · 3 months ago

    Might as well ban Stack Overflow-based contributions too.

    AI is a great tool for coding, as long as it’s used responsibly. Like any other tool, really.

    • 30p87@feddit.de
      ↑9 ↓1 · 3 months ago

      External LLMs are great for getting ideas and a quick overview of something, and helpers integrated into IDEs are useful for autocompleting longer lines of code or repetitive things.

      • TherouxSonfeir@lemm.ee
        ↑9 ↓5 · 3 months ago

        I frequently ask ChatGPT to make whole functions for me. It’s important to check the code and test it, obviously, but it has saved me quite a bit of time.

        • 30p87@feddit.de
          ↑7 · 3 months ago

          I find it difficult to describe single functions that need to be integrated into a larger project, especially if they need to use a private or lesser-known library. For instance, it totally fucked up using Bluetooth via DBus in C++ - and the whole project is basically just that.

          • TherouxSonfeir@lemm.ee
            ↑4 ↓1 · 3 months ago

            It certainly has its limitations. I’ve noticed a few topics where it generally gets things wrong, or where I can’t seem to explain them properly. In that case, you might just use it as a reference guide. Maybe toss it some code and ask it what it thinks. It’s not always useful information, but sometimes it leads you down a road you wouldn’t have thought of otherwise.

            • 30p87@feddit.de
              ↑9 · 3 months ago

              Problem is, I only ever need something more powerful than a search engine for topics that are too complicated for me and/or poorly documented, and in those cases LLMs fail just as badly. So it’s really only useful for getting a general direction on a topic, and even then it can be biased toward outdated information (e.g. preferring bluetooth.h over DBus-based Bluetooth handling) or simply not know newer standards, libraries and styles. And in my experience, problems with one well-accepted, well-documented standard don’t need any AI to learn about.
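
              For context, DBus-based Bluetooth handling goes through BlueZ’s org.bluez.Adapter1 interface. A minimal sketch using sd-bus (the /org/bluez/hci0 adapter path is an illustrative assumption, not code from the project above):

                // Start Bluetooth device discovery via BlueZ over DBus using sd-bus.
                // Link against libsystemd (-lsystemd). Adapter path hci0 is assumed;
                // real code should enumerate adapters instead of hardcoding one.
                #include <systemd/sd-bus.h>
                #include <cstdio>
                #include <cstring>

                int main() {
                    sd_bus *bus = nullptr;
                    sd_bus_error error = SD_BUS_ERROR_NULL;
                    sd_bus_message *reply = nullptr;

                    // BlueZ is exposed on the system bus.
                    int r = sd_bus_default_system(&bus);
                    if (r < 0) {
                        std::fprintf(stderr, "system bus: %s\n", std::strerror(-r));
                        return 1;
                    }

                    // Call org.bluez.Adapter1.StartDiscovery on the assumed default adapter.
                    r = sd_bus_call_method(bus, "org.bluez", "/org/bluez/hci0",
                                           "org.bluez.Adapter1", "StartDiscovery",
                                           &error, &reply, "");
                    if (r < 0)
                        std::fprintf(stderr, "StartDiscovery: %s\n", error.message);

                    sd_bus_error_free(&error);
                    sd_bus_message_unref(reply);
                    sd_bus_unref(bus);
                    return r < 0 ? 1 : 0;
                }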

    • Richard@lemmy.world
      ↑2 ↓28 · 3 months ago

      Lol, Lemmy socialists are so butthurt. Your statement is literally the most reasonable and sane/rational one here, but lemmy.ml only knows cringey extremism.

      • Cyborganism@lemmy.ca
        ↑14 · 3 months ago

        What the heck are you on about??? There are no comments in this thread that sound “butthurt”. And I don’t especially like your generalisation of Lemmy users. You sound like a troll.

      • Omega_Haxors@lemmy.ml
        ↑1 · edited · 3 months ago

        For fuck’s sake, you may as well come out as a pedophile if you’re going to be posting shit like this.

  • antidote101@lemmy.world
    ↑5 ↓1 · edited · 3 months ago

    Lots of companies will do this, eventually advertising the purity and size of their human-created training data.

    These will be the companies selling their content to AI companies, although some of it will probably just be scraped illegally. Perhaps a new type of copyright lawsuit will have to be invented.

    Most people will continue to use these sites, aware their data is being used like this.