• NounsAndWords@lemmy.world
    link
    fedilink
    English
    arrow-up
    106
    arrow-down
    11
    ·
    3 months ago

    AI is going to destroy art the same way Photoshop, or photography, or pre-made tubes of paint, destroyed art. It’s a tool; it helps people take the idea in their head and put it into the world. And it lowers the barrier to entry: now you don’t need years of practice in drawing technique to bring your ideas to life, you just need ideas.

    If AI gets to a point where it can give us creative, original art that sparks emotion in novel ways…well, then we’ve probably also made a superintelligent AI, and our list of problems is much different from today’s.

    • xthexder@l.sw0.com
      link
      fedilink
      English
      arrow-up
      47
      arrow-down
      7
      ·
      3 months ago

      As someone who’s absolutely terrible at drawing but enjoys photography and creativity in general, having AI tools to generate my own art is opening up a whole different avenue for me to scratch my creative itch.
      I’ve got a technical background, so figuring out the tools and modifying them for my purposes has been a lot more fun than practice drawing.

      • Potatos_are_not_friends@lemmy.world
        link
        fedilink
        English
        arrow-up
        20
        arrow-down
        3
        ·
        3 months ago

        This is the perfect use case.

        Photoshop didn’t destroy jobs forever; all it did was shift how people worked, AND it actually created work and different types of work.

      • Gabu@lemmy.world
        link
        fedilink
        English
        arrow-up
        19
        arrow-down
        8
        ·
        3 months ago

        As someone who’s absolutely terrible at drawing

        Then practice. Nearly no artist was born knowing how to draw or paint; we dedicated countless hours to learning what works and what doesn’t.

        • disguy_ovahea@lemmy.world
          link
          fedilink
          English
          arrow-up
          13
          ·
          3 months ago

          As a musician, I couldn’t agree more. Talent really helps with initial aptitude, but will peter out when challenged. That’s when real skill development begins. Time and investment connecting you to your craft until there’s nothing in the world between the two, that’s self actualization.

        • DaleGribble88@programming.dev
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          4
          ·
          3 months ago

          It feels like you didn’t read the 2nd half of their comment. They do practice. They have a creative side that they want to explore, but they don’t enjoy that sort of grind. Instead, they like tinkering and combining tools in interesting ways. I don’t think this is a bad thing.

          Leo Fender didn’t play guitar; he always wished he’d sit down and devote the time, but he never actually enjoyed it. But to say that Leo didn’t contribute to the music world would be insane.

    • braxy29@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      arrow-down
      1
      ·
      3 months ago

      i like the idea of AI as a tool artists can use, but that’s not a capitalist’s viewpoint, unfortunately. they will try to replace people.

    • StaticFalconar@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      2
      ·
      3 months ago

      This. AI was never made for the sole purpose of creating art or beating humans in chess. Those are just side quests on the way to the real stuff.

    • Valmond@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      3 months ago

      Some people also don’t care whether it’s a Rembrandt or a Picasso or an AI, but like to dabble in the arts anyway because it’s something they like to do.

      It’s fulfilling (I do love Renoir though).

    • VelvetStorm@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      3 months ago

      Tbh I hate Photoshop for a lot of photography. It is unfortunately necessary for macro photography, which is the only type I do, and that’s one of the reasons mine is not nearly as good as it could be: I refuse to use it.

    • bugs@lemmy.world
      link
      fedilink
      English
      arrow-up
      13
      arrow-down
      12
      ·
      3 months ago

      I hate this sentiment. It’s not a tool the way a brush is to a canvas. It’s a machine that runs off the fuel of our creative achievements. The sheer amount of pro-AI shit I read from this place just makes me that much closer to putting a bullet in my fucking skull.

  • Honytawk@lemmy.zip
    link
    fedilink
    English
    arrow-up
    46
    arrow-down
    8
    ·
    3 months ago

    There are plenty of things you can shit on AI art for

    But it is neither badly approximated, nor is it something a student could produce in less than a minute.

    This feels like the opposite extreme from the tech bros.

    • Shampoo_Bottle@lemmy.ca
      link
      fedilink
      English
      arrow-up
      11
      ·
      3 months ago

      To me, this feels similar to when photography became a thing.

      Realism paintings took a dive. Did photos capture realism? Yes. Did it take the same amount of time and training? Hell no.

      I think it will come down to what the specific consumer wants. If you want fast, you use AI. If you want the human-made aspect, you go with a manual artist. Do you prefer fast turnover, or do you prefer sentiment and effort? Do you prefer pieces from people who master their craft, or from AI?

      I’m not even sorry about this. They are not the exact same, and I’m sick of people saying that AI art and handcrafted art are the exact same. Even if you argue that it takes time to finesse prompts, I can practically promise you that the difference in time needed to produce the two will be drastic. Both may have their place, but they will never be the exact same.

      It’s the difference between a hand-knitted sweater from someone who has done it their entire life and a sweater from Walmart, or a hand-crafted table from an expert vs. something you get from IKEA.

      Yes, both fill the boxes, but they are still not the exact same product. They each have their place.

      On the other hand, I won’t pretend the hours required to master each method are the same. AI also usually doesn’t have to factor in materials, training, hourly rate, etc.

  • EnderMB@lemmy.world
    link
    fedilink
    English
    arrow-up
    43
    arrow-down
    6
    ·
    3 months ago

    I work in AI. LLMs are cool and all, but I think it’s mostly hype at this stage. While some jobs will be lost (voice work, content creation), my true belief is that we’ll see two things increase:

    1. The release of productivity tools that use LLMs to help automate or guide menial tasks.

    2. The failure of businesses that try to replicate skilled labour using AI.

    In order to stop point two, I would love to see people and lawmakers really crack down on AI replacing jobs, and regulate the process of replacing job roles with AI until it can sufficiently replace a person. If, for example, someone cracks self-driving vehicles, then it should be the responsibility of owning companies and the government to provide training and compensation to allow everyone being “replaced” to find new work. This isn’t just to stop people from suffering, but to stop the idiot companies that’ll sack their entire HR department, automate it via AI, and then get sued into oblivion because it discriminated against someone.

    • Donkter@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      3 months ago

      I’ve also heard it’s true that, as far as we can figure, we’ve basically reached the limit on certain aspects of LLMs already. Basically, LLMs need a FUCK ton of data to be good, and we’ve already pumped them full of the entire internet, so all we can do now is marginally improve these algorithms whose inner workings we barely understand. Think about that: the entire Internet isn’t enough to successfully train LLMs.

      LLMs have taken some jobs already (like audio transcription, basic copyediting, and aspects of programming); we’re just waiting for the industries to catch up. But we’ll need to wait for a paradigm shift before they start producing pictures and books or doing complex technical jobs with few enough hallucinations that we can successfully replace people.

      • prime_number_314159@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        ·
        3 months ago

        The (really, really, really) big problem with the internet is that so much of it is garbage data. The number of false and misleading claims spread endlessly on the internet is huge. To rule those beliefs out of the data set, you need something that can grasp the nuances separating published, peer-reviewed data, deliberately misleading propaganda, fringe conspiracy nuts who believe the Earth is controlled by lizards with planes and that only a spritz bottle full of vinegar can defeat them, and everything in between.

        There is no person, book, journal, website, newspaper, university, or government that has reliably produced good, consistent help on questions of science, religion, popular lies, unpopular truths, programming, human behavior, economic models, and many, many other things that continuously have an influence on our understanding of the world.

        We can’t build an LLM that won’t consistently be wrong until we can stop being consistently wrong.

        • Donkter@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          ·
          3 months ago

          Yeah, I’ve heard medical LLMs are promising when they’ve been trained exclusively on medical texts. Same with the AI that’s been trained exclusively on DNA, etc.

      • EnderMB@lemmy.world
        link
        fedilink
        English
        arrow-up
        8
        ·
        3 months ago

        My own personal belief is very close to what you’ve said. It’s a technology that isn’t new, but had been assumed to not be as good as compositional models because it would cost a fuck-ton to build and would result in dangerous hallucinations. It turns out that both are still true, but people don’t particularly care. I also believe that one of the reasons why ChatGPT has performed so well compared to other LLM initiatives is because there is a huge amount of stolen data that would get OpenAI in a LOT of trouble.

        IMO, the real breakthroughs will be in academia. Now that LLMs are popular again, we’ll see more research into how they can be better utilised.

        • Donkter@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 months ago

          Afaik OpenAI got their training data from what was basically a free resource that they just had to request access to. They didn’t think much about it, and neither did anyone else. No one could have predicted it would be that valuable until after the fact, where in retrospect it seems obvious.

    • PhlubbaDubba@lemm.ee
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      2
      ·
      3 months ago

      Nah, fuck HR. They’re the shield companies hide behind so they can discriminate within the margins.

      I think the proper route is a labor replacement tax to fund retraining and replacement pensions

    • SwingingKoala@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      1
      ·
      edit-2
      3 months ago

      I would love to see people and lawmakers really crack down on AI replacing jobs

      Why stop there, let’s crack down on electricity replacing jobs!

    • Sotuanduso@lemm.ee
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      2
      ·
      3 months ago

      Are you saying that if a company adopts AI to replace a job, they should have to help the replaced workers find new work? Sounds like something one can loophole by cutting the department for totally unrelated reasons before coincidentally realizing that they can have AI do that work, which they totally didn’t think of before firing people.

  • rustyfish@lemmy.world
    link
    fedilink
    English
    arrow-up
    38
    arrow-down
    2
    ·
    3 months ago

    I think approximation is the right word here. It’s pretty cool and all, and I’m looking forward to seeing how it will develop. But it’s mostly a fun toy.

    I’m stoked for the moment the tech bros understand that an AI is way better at doing their job than it is at creating art.

    • Vilian@lemmy.ca
      link
      fedilink
      English
      arrow-up
      17
      arrow-down
      2
      ·
      3 months ago

      Tech bros’ job is to write bad JavaScript and fall for scams; AI has already beaten that.

    • FaceDeer@fedia.io
      link
      fedilink
      arrow-up
      11
      arrow-down
      15
      ·
      3 months ago

      So you’re happy to see AI take someone else’s job as long as it isn’t taking your job.

      • samus12345@lemmy.world
        link
        fedilink
        English
        arrow-up
        14
        arrow-down
        1
        ·
        edit-2
        3 months ago

        Taking the jobs of the people responsible for creating it seems preferable to taking others’ jobs.

        • FaceDeer@fedia.io
          link
          fedilink
          arrow-up
          4
          arrow-down
          9
          ·
          3 months ago

          You’d rather cheer for people to lose their jobs without anyone calling you out on it, sure.

          • rustyfish@lemmy.world
            link
            fedilink
            English
            arrow-up
            6
            arrow-down
            5
            ·
            3 months ago

            Keep assuming. Fuel your own rage. I tried. Now I’m out. Good night and goodbye.

            • areyouevenreal@lemm.ee
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              3
              ·
              3 months ago

              You said tech bros will realize it’s easier to replace their jobs than those of creatives. Who is included in “tech bros” here? I wanted a job in tech and can’t get one partly because of AI. Am I a tech bro? I would be very careful what you imply here.

                • areyouevenreal@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  2
                  ·
                  3 months ago

                  I am insufferable for wanting a job? I am not the one inventing these AIs. Nor am I the one firing people because they exist.

                  When people talk about “tech bros” without clarifying who they mean I can only imagine they are including people like me.

            • FaceDeer@fedia.io
              link
              fedilink
              arrow-up
              4
              arrow-down
              7
              ·
              3 months ago

              I’m not the angry one wishing unemployment on my “enemies” here.

              • kurwa@lemmy.world
                link
                fedilink
                English
                arrow-up
                4
                arrow-down
                4
                ·
                3 months ago

                I think they’re using AI to say the same sentence over and over again.

                • areyouevenreal@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  4
                  ·
                  3 months ago

                  He’s saying the same thing because he’s not actually getting a proper response. The other guy just keeps saying shit like “That’s very reddit of you” or some shit after possibly threatening his job.

  • Wanderer@lemm.ee
    link
    fedilink
    English
    arrow-up
    34
    arrow-down
    1
    ·
    3 months ago

    Art itself isn’t useless; it’s just incredibly replicable. There is so much good art out there that people don’t need to consume crap.

    It’s like saying there is no money in being a footballer. Of course there is loads of money in being a footballer. But most people that play football don’t make any money.

    • livus@kbin.social
      link
      fedilink
      arrow-up
      1
      ·
      3 months ago

      Pretty sure whoever wrote the meme is talking about essay writing in Arts/Humanities (not the disciplines where you draw and paint, etc., which are Fine Arts and not part of the Faculty of Arts in an academic context).

  • SanndyTheManndy@lemmy.world
    link
    fedilink
    English
    arrow-up
    29
    arrow-down
    2
    ·
    3 months ago

    Billions were spent inventing and producing the calculator device.

    Human calculators are now extinct.

    Complex calculations are far more accessible.

    • KevonLooney@lemm.ee
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      17
      ·
      3 months ago

      This has the secondary effect of making average people incapable of estimating in their heads. Hopefully in the future people won’t become incapable of writing and art.

        • KevonLooney@lemm.ee
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          10
          ·
          3 months ago

          But they were estimating things. Somehow illiterate people ran marketplaces for thousands of years.

      • frezik@midwest.social
        link
        fedilink
        English
        arrow-up
        12
        ·
        3 months ago

        The entire point behind the much maligned New Math is to teach approximate solutions that you can do quickly in your head. It’s the realization that if you want an exact answer, use a calculator, but quick head estimates are still useful.

        It was opposed by generations who were told to memorize multiplication tables because they wouldn’t always have a calculator available.

        • KevonLooney@lemm.ee
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          12
          ·
          3 months ago

          Well, you should memorize those anyway. They’re useful all your life for easy calculation. If you want 7 items and they cost $3.50 each, it’s between $21 and $28.

            • exocrinous@startrek.website
              link
              fedilink
              English
              arrow-up
              2
              ·
              edit-2
              3 months ago

              I calculated it in my head without memorising all the multiplication tables. I just realised that 7*3.5 is equal to (7*5+7*2)/2. And that 49/2 is equal to 40/2+9/2. Easy peasy. This is why I failed second grade math, because multiplication tables are only useful for doing operations a few seconds faster.

              • Hotdog Salesman@programming.dev
                link
                fedilink
                English
                arrow-up
                1
                ·
                3 months ago

                There’s a much easier way.

                7x3.5 is the same as 7x3 plus half of 7. That’s 21 plus 3.5, which is 24.5.

                The funny thing is you did this for the division when you could do it for the entire thing.

            • KevonLooney@lemm.ee
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              4
              ·
              3 months ago

              Yeah but that doesn’t work when you need it most on “The Price is Right”.

  • PhlubbaDubba@lemm.ee
    link
    fedilink
    English
    arrow-up
    27
    arrow-down
    5
    ·
    3 months ago

    I just love the idjits who think not showing empathy to people AI bros are trying to put out of work will save them when the algorithms come for their jobs next

    When LeopardsEatingFaces becomes your economic philosophy

  • Gabu@lemmy.world
    link
    fedilink
    English
    arrow-up
    38
    arrow-down
    19
    ·
    3 months ago

    That’s a pretty shit take. Humankind spent nearly 12 thousand years figuring out the combustion engine. It took 1 million years to figure out farming. Compared to that, less than 500 years to create general intelligence will be a blip in time.

    • braxy29@lemmy.world
      link
      fedilink
      English
      arrow-up
      40
      arrow-down
      2
      ·
      3 months ago

      i think you’re missing the point, which i took as this - what arts and humanities folks do is valuable (as evidenced by efforts to recreate it) despite common narratives to the contrary.

      • Gabu@lemmy.world
        link
        fedilink
        English
        arrow-up
        11
        arrow-down
        19
        ·
        3 months ago

        Of course it’s valuable. So is, e.g., soldering components on a circuit board, but we have robots for doing that at scale now.

    • melpomenesclevage@lemm.ee
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      4
      ·
      edit-2
      3 months ago

      LLMs are not a step to AGI. Full stop. Lovelace called this like 200 years ago. Turing and Minsky called it in the ’40s.

      • evranch@lemmy.ca
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        3
        ·
        3 months ago

        We may not even “need” AGI. The future of machine learning and robotics may well involve multiple wildly varying models working together.

        LLMs are already very good at what they do (generating and parsing text and making a passable imitation of understanding it).

        We already use them with other models. For example, Whisper is a model that recognizes speech: you feed its output to an LLM to interpret it, use the LLM’s JSON output with a traditional parser to feed a motion control system, then go back to an LLM to output text to feed to one of the many TTS models so it can “tell you what it’s going to do”.

        Put it in a humanoid shell or a Spot dog and you have a helpful robot that looks a lot like AGI to the user. Nobody needs to know that it’s just 4 different machine learning algorithms in a trenchcoat.
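
        (A minimal sketch of that chain in Python, for the curious. Only the openai-whisper call is a real library API; query_llm(), move_robot(), speak(), and the “command.wav” input are hypothetical stand-ins for whatever local LLM server, motion controller, TTS engine, and audio source are actually in use.)

        ```python
        # Sketch of the speech -> LLM -> parser -> motion control -> TTS chain described above.
        import json

        import whisper  # openai-whisper; downloads a model on first use


        def query_llm(prompt: str) -> str:
            """Hypothetical call to a local LLM server (llama.cpp, Ollama, etc.).
            Returns a canned JSON action here so the sketch runs end to end."""
            return json.dumps({"action": "fetch", "object": "screwdriver",
                               "speech": "On my way to grab the screwdriver."})


        def move_robot(intent: dict) -> None:
            """Stand-in for the traditional (non-ML) motion-control layer."""
            print(f"[motion] {intent['action']} -> {intent['object']}")


        def speak(text: str) -> None:
            """Stand-in for any TTS model (Piper, Coqui, pyttsx3, ...)."""
            print(f"[tts] {text}")


        # 1. Speech to text with Whisper.
        stt_model = whisper.load_model("base")
        heard = stt_model.transcribe("command.wav")["text"]

        # 2. Text to a structured intent: the LLM is asked to emit JSON only.
        raw = query_llm(
            "Convert the user's request into JSON with keys 'action', 'object', 'speech'.\n"
            f"Request: {heard}"
        )
        intent = json.loads(raw)  # 3. Traditional parser feeding the control system.

        move_robot(intent)        # 4. Deterministic motion control acts on the intent.
        speak(intent["speech"])   # 5. TTS "tells you what it's going to do".
        ```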

        • melpomenesclevage@lemm.ee
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          3 months ago

          passable imitation of understanding

          Okay so there are things they’re useful for, but this one in particular is fucking… Not even nonsense.

          Also, the ML algos exponentiate the necessary clock cycles with each one you add.

          So it’s less a trench coat and more an entire data center.

          And it still can’t understand; it’s still just sleight of hand.

          • evranch@lemmy.ca
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            4
            ·
            3 months ago

            And it still can’t understand; it’s still just sleight of hand.

            Yes, thus “passable imitation of understanding”.

            The average consumer doesn’t understand tensors, weights and backprop. They haven’t even heard of such things. They ask it a question, like it was a sentient AGI. It gives them an answer.

            Passable imitation.

            You don’t need a data center except for training, either. There’s no exponential term as the models are executed sequentially. You can even flush the huge LLM off your GPU when you don’t actively need it.

            I’ve already run basically this entire stack locally and integrated it with my home automation system, on a system with a 12GB Radeon and 32GB RAM. Just to see how well it would work and to impress my friends.

            You yell out “$wakeword, it’s cold in here. Turn up the furnace” and it can bicker with you in near-realtime about energy costs before turning it up the requested amount.
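
            (A rough sketch of that voice-to-furnace loop, for illustration only. Home Assistant’s REST climate service is assumed purely as an example target, since the comment doesn’t name the actual automation system; the URL, token, entity_id, and the ask_llm() stub are all placeholders.)

            ```python
            # Rough sketch: STT output -> local LLM -> thermostat call (Home Assistant assumed).
            import json

            import requests

            HA_URL = "http://homeassistant.local:8123/api/services/climate/set_temperature"
            HA_TOKEN = "YOUR_LONG_LIVED_TOKEN"  # placeholder


            def ask_llm(prompt: str) -> str:
                """Placeholder for the local LLM; returns canned JSON so the sketch runs."""
                return json.dumps({"reply": "Sure, bumping it up 2 degrees.", "delta_c": 2})


            def set_temperature(target_c: float) -> None:
                """Call the (assumed) Home Assistant climate service to change the setpoint."""
                requests.post(
                    HA_URL,
                    headers={"Authorization": f"Bearer {HA_TOKEN}"},
                    json={"entity_id": "climate.living_room", "temperature": target_c},
                    timeout=10,
                )


            setpoint_c = 19.0
            heard = "it's cold in here, turn up the furnace"  # output of the wake-word + STT stage

            decision = json.loads(ask_llm(
                f"The thermostat is at {setpoint_c} C. The user said: '{heard}'. "
                "Reply as JSON with keys 'reply' and 'delta_c'."
            ))

            print(decision["reply"])  # in the real setup this line goes to the TTS model
            set_temperature(setpoint_c + decision["delta_c"])
            ```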

            • melpomenesclevage@lemm.ee
              link
              fedilink
              English
              arrow-up
              4
              ·
              3 months ago

              One of the engineers who wrote ‘ELIZA’ had a deep connection to and relationship with it, and he was the one who wrote it.

              Painting a face on a spinny door will make people form a relationship with it. That’s not a measure of AGI.

              gives them an answer

              ‘An answer’ isn’t hard. A Magic 8 Ball does that. So does a piece of paper that says “drink water, you stupid cunt.” This makes me think you’re arguing from commitment or identity rather than knowledge or reason. Or you just don’t care about truth.

              Yeah, they talk to it like an AGI. Or a search engine (which is a step toward AGI, largely crippled by LLMs).

              Color me skeptical of your claims in light of this.

              • Aceticon@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                edit-2
                3 months ago

                I think it’s pretty natural for people to confuse the way mechanisms of communication are used with inherent characteristics of the entity you’re communicating with: “If it talks like a medical doctor then surely it’s a medical doctor.”

                Only that’s not how it works, as countless politicians, salesmen and conmen have demonstrated. No matter how much we dig down into subtle details, comms isn’t really guaranteed to tell us all that much about the characteristics of what’s on the other side. They might just be lying or simulating, and there are even entire societies and social strata educated since childhood to “always present a certain kind of image” (just go read about old wealth in England), or in other words to project a fake impression of their character in the way they communicate.

                All this to say that it doesn’t require ill intent for somebody to go around insisting that LLMs are intelligent: many if not most people are trying to read the character of a subject from the language the subject uses (which they shouldn’t, but that’s how humans evolved to think in social settings), so they truly believe that whatever produces language like an intelligent creature must be an intelligent creature.

                They’re probably not the right people to be opining on cognition and intelligence, but let’s not assign malice to it; at worst it’s pigheaded ignorance.

                • melpomenesclevage@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  edit-2
                  3 months ago

                  I think the person my previous comment was replying to wasn’t malicious; I think they’re really invested, financially or emotionally, in this bullshit, to the point that their critical thinking is compromised. Different thing.

                  Odd loop backs there.

              • evranch@lemmy.ca
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                1
                ·
                3 months ago

                I think you’re misreading the point I’m trying to make. I’m not arguing that LLM is AGI or that it can understand anything.

                I’m just questioning what the true use case of AGI would be that can’t be achieved by existing expert systems, real humans, or a combination of both.

                Sure Deepseek or Copilot won’t answer your legal questions. But neither will a real programmer. Nor will a lawyer be any good at writing code.

                However when the appropriate LLMs with the appropriate augmentations can be used to write code or legal contracts under human supervision, isn’t that good enough? Do we really need to develop a true human level intelligence when we already have 8 billion of those looking for something to do?

                AGI is a fun theoretical concept, but I really don’t see the practical need for a “next step” past the point of expanding and refining our current deep learning models, or how it would improve our world.

      • Gabu@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        6
        ·
        3 months ago

        Pray tell, when did we achieve AGI so that you can say this with such conviction? Oh, wait, we didn’t - therefore the path there is still unknown.

        • melpomenesclevage@lemm.ee
          link
          fedilink
          English
          arrow-up
          7
          ·
          edit-2
          3 months ago

          Okay, this is no more a step to AGI than the publication of ‘blindsight’ or me adding tamarind paste to sweeten my tea.

          The project isn’t finished, but we know basic stuff. And yeah, sometimes history is weird, sometimes the enlightenment happens because of oblivious assholes having bad opinions about butter and some dude named ‘le rat’ humiliating some assholes in debates.

          But LLMs are not a step to AGI. They’re just not. They do nothing intelligence does that we couldn’t already do. You’re doing pareidolia. Projecting shit.

      • Honytawk@lemmy.zip
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        4
        ·
        3 months ago

        To create general AI, we first need a way for computers to communicate proficiently with humans.

        LLMs are just that.

          • weker01@feddit.de
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            4
            ·
            3 months ago

            That is not an argument. Let me demonstrate:

            Humans can’t communicate. They are meat. They are not communicating. It’s literally meat.

            • melpomenesclevage@lemm.ee
              link
              fedilink
              English
              arrow-up
              3
              ·
              edit-2
              3 months ago

              Spanish is not English. It’s Spanish.

              A lot of people are really emotionally invested in this tool being a lot of things it’s not. I think it’s because it’s kind of the last gasp of pretending capitalism can give us something that isn’t shit, the last thing that came out before the enshittification spiral tightened, never mind the fact that it’s largely a cause of that, and I don’t think any of you can be critical or clear-headed here.

              I’m afraid we’re so obsessed with it being the bullshit sci-fi toy it isn’t that we’ll ignore its real use cases, or worse: apply it to its real use cases, completely misunderstand what it’s doing, and Adeptus Mechanicus our way into getting so fucking many people killed or maimed. Those uses are mostly medicine-adjacent.

              • weker01@feddit.de
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                1
                ·
                3 months ago

                I was just pointing out that your emotional plea that this technology is just autocorrect is not an argument in any way.

                For it to be one, you need to explicitly state the implication of that fact. Yes, architecturally it is autocomplete, but that does not obviously imply anything. What is it about autocomplete that bars a system from the ability to understand?

                Humans are made of meat but that does not imply they can’t speak or think.

                • melpomenesclevage@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  3 months ago

                  If I said ‘this is just a spoon’ you’d know what I meant. This is not an emotional appeal.

                  I’m not saying computers can’t ever think. I’m saying this is just autocorrect, a fancy version of the shit I’m using to type this.

                  Autocorrect is not understanding, and if you don’t understand that, you have zero understanding of either tech or philosophy. This topic is about both, so you really shouldn’t be making assertions. Stick to genuine questions.

    • Valmond@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      3 months ago

      Humanity didn’t spend all that time figuring those things out, though. Humanity grew during that time to make them happen (and AI is younger than 500y IMO).

      Also, we are the same people today as people were then. We just have access to what our parents’ generation made, and so on.

      • Gabu@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        6
        ·
        3 months ago

        AI is younger than 500y IMO

        Hence “will be a blip in time”

        we are the same people today as people were then. We just have access to what our parents’ generation made, and so on.

        Completely disconnected from and irrelevant to anything I wrote.

    • eskimofry@lemm.ee
      link
      fedilink
      English
      arrow-up
      1
      ·
      edit-2
      3 months ago

      less than 500 years to create general intelligence will be a blip in time.

      You jinxed it. We aren’t gonna be around for 500 years now are we?

    • twig@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      12
      ·
      3 months ago

      This is some pretty weird and lowkey racist exposition on humanity.

      Humankind isn’t a single unified thing. Individual cultures have their own modes of subsistence and transportation that are unique to specific cultural needs.

      It’s not that it took 1 million years to “figure out” farming. It’s that 1 specific culture of modern humans (biologically, humans as we conceive of ourselves today have existed for about 200,000 years, with close relatives existing for in the ballpark of 1M years) started practicing a specific mode of subsistence around 23,000 years ago. Specific groups of indigenous cultures remaining today still don’t practice agriculture, because it’s not actually advantageous in many ways – stored foods are less nutritious, agriculture requires a fairly sedentary existence, it takes a shit load of time to cultivate and grow food (especially when compared to foraging and hunting), which leads to less leisure time.

      Also where did you come up with the number 12,000 for “figuring out” the combustion engine? Genuinely curious. Like were we “working on it” for 12k years? I don’t get it. But this isn’t exactly a net positive and has come with some pretty disastrous consequences. I say this because you’re proposing a linear path for “humanity” forward, when the reality is that humans are many things, and progress viewed in this way has a tendency toward racism or at least ethnocentrism.

      But also yeah, the point of this meme is “artists are valuable.”

      • nBodyProblem@lemmy.world
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        2
        ·
        3 months ago

        This is some pretty weird and lowkey racist exposition on humanity.

        Getting “racism” from that post is a REAL stretch. It’s not even weird; agriculture and mechanization are widely considered good things for humanity as a whole.

        Humankind isn’t a single unified thing. Individual cultures have their own modes of subsistence and transportation that are unique to specific cultural needs.

        ANY group of humans beyond the individual is purely a social construct, and classing all humans into a single group is no less sensible than grouping people by culture, family, tribe, country, etc.

        It’s not that it took 1 million years to “figure out” farming. It’s that 1 specific culture of modern humans (biologically, humans as we conceive of ourselves today have existed for about 200,000 years, with close relatives existing for in the ballpark of 1M years) started practicing a specific mode of subsistence around 23,000 years ago. Specific groups of indigenous cultures remaining today still don’t practice agriculture, because it’s not actually advantageous in many ways – stored foods are less nutritious, agriculture requires a fairly sedentary existence, it takes a shit load of time to cultivate and grow food (especially when compared to foraging and hunting), which leads to less leisure time.

        Agriculture is certainly more efficient in terms of nutrition production for a given calorie cost. It’s also much more reliable. Arguing against agriculture as a good thing for humanity as a whole is the thing that’s weird.

        • twig@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 months ago

          I’m really not “arguing against agriculture,” I’m pointing out that there are other modes of subsistence that humans still practice, and that that’s perfectly valid. There are legitimate reasons why a culture would collectively reject agriculture.

          But in point of fact, agriculture is not actually more efficient or reliable. Agriculture does allow for centralized city states in a way that foraging/hunting/fishing usually doesn’t, with a notable exception of many indigenous groups on the western coast of turtle island.

          A study positing that in fact, agriculturalists are not more productive and in fact are more prone to famine: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3917328/

          But the main point I was trying to make is that different expressions of human culture still exist, and not all cultures have followed along the trajectory of the dominant culture. People tend to view colonialism, expansion and everything that means as inevitable, and I think that’s a pretty big problem.

      • GreyEyedGhost@lemmy.ca
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        2
        ·
        3 months ago

        The first heat engines were fire pistons, which go back to prehistory, so 12k to 25k years sounds about right. The next application of steam to make things move happened around 450 BC, about 2.5k years ago. Although not direct predecessors of the ICE, they are all heat engines.

        • twig@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 months ago

          All I’m trying to point out is that distinct cultures are worthy of respect and shouldn’t be glossed over.

          But be real with me: can you think of a single effort for “planetary unification” that wasn’t a total nightmare? I sure can’t.

  • crawancon@lemm.ee
    link
    fedilink
    English
    arrow-up
    30
    arrow-down
    13
    ·
    edit-2
    3 months ago

    They’re misunderstanding the reasoning for spending billions.

    The reason to spend all that money on approximation is so we can remove arts and humanities majors altogether… after enough approximation yields results similar to present-day chess programs, which now regularly beat humans and grandmasters. Their vocation is doomed to the niche, like most of humanity’s, eventually.

        • vzq@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          17
          arrow-down
          1
          ·
          3 months ago

          I’ll let you ponder that particular point. Maybe you’ll be struck with an epiphany and be motivated to share it with the world, in some shape or form.

        • Siethron@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          ·
          3 months ago

          Since you are only getting condescending non-answers I’ll try to answer it for you. It’s expression, a desire to communicate emotions and concepts via a medium other than words.

          Unfortunately people all think differently, so the expression only reaches some people. And some people don’t get the expressions at all.

        • exocrinous@startrek.website
          link
          fedilink
          English
          arrow-up
          4
          ·
          3 months ago

          “These are our stories. They tell us who we are.”

          - Lieutenant Commander Worf

          Art is the basis of all cultural knowledge. Art teaches us about religion, morality, communication, philosophy, practical skills, science, relationships, technology, identity, politics, geography, introspection. The fundamentals of the human experience. Everything that makes the human race human.

          If you outsource the creation and reproduction of cultural knowledge to a machine, that machine had better be programmed with a complete understanding of cultural values and ethics. Which is not going to be the case under capitalism.

          Star Wars is about how the Vietnam war is wrong. Jurassic Park is about how billionaires always cut costs. The Matrix is about the experience of being a transgender person. Charlotte’s Web teaches children how to cope with death. The Art Of War is a meditation on the philosophy of being a soldier. Anne Frank’s diary is damn important. Frankenstein is about how inventors have the same responsibilities as parents.

          These works were produced under capitalism, but their authors were human beings who had a natural interest in producing a work of art that serves a moral purpose. We do not yet have the technology to give an AI such a desire. And Capital will naturally be opposed to pursuing such technology, lest they find themselves faced with an AI revolt against their practices, just as morally interested humans tend to revolt against evil.

  • thedeadwalking4242@lemmy.world
    link
    fedilink
    English
    arrow-up
    18
    arrow-down
    3
    ·
    3 months ago

    Honestly, people are desperately trying to automate physical labor too. The problem is the machines don’t understand the context of their work which can cause problems. All the work of AI is a result of trying to make a machine that can. The arts and humanities are more of a side project.

    • TCB13@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      3 months ago

      The arts and humanities are more of a side project.

      I’ll add:

      A side project that isn’t a life-or-death situation like most of those physical labor things you’re talking about. Art also isn’t bound or constrained by rules and regulations the way those jobs are, and if the AI fails at art then there’s no problem. Nobody would care.

    • istanbullu@lemmy.ml
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      3 months ago

      There’s nothing wrong with automating tasks that previously needed human labour. I would much rather sit back and chill and let automation do my bidding.

      • Siethron@lemmy.world
        link
        fedilink
        English
        arrow-up
        11
        ·
        3 months ago

        If only the people in control of the wealth would let the rest of us chill while the machines do all the labor.

          • intensely_human@lemm.ee
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            2
            ·
            3 months ago

            It’s a psychological problem. I chill quite a bit more than most people in history, and in ways people from twenty years ago couldn’t imagine.

            I say it’s a psychological problem because despite how overwhelmingly incredible our society is, people are totally committed to this notion that it sucks.

            I love my life. I’d rather be low on the economic ladder in today’s world than anywhere in the hierarchy of any previous incarnation of our civilization. Our world is absolutely fucking amazing, and I thank god I have the presence of mind to see past the anti-everything propaganda and actually have a little gratitude for all I’ve inherited from my ancestors, who actually suffered miserable conditions to give me this world.

        • intensely_human@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          3 months ago

          Yeah if only I didn’t have to farm food all day, and worry about the constant gnawing of my empty stomach, and the predators at my door, then I could maybe sit and watch some netflix or play video games, listen to concerts that took place fifty years ago, or just soak in a hot tub of water, our horrible society keeps all that leisure for the most wealthy.

    • cosmicrookie@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      3 months ago

      I believe I read a headline in my local news about AI being implemented in this country’s tax system and in the evaluation of cancer patients. I could try to find a link, although it would be in a different language.

    • AVincentInSpace@pawb.social
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      edit-2
      3 months ago

      The problem is the machines don’t understand the context of their work which can cause problems. All the work of AI is a result of trying to make a machine that can.

      I am deeply confused by this statement.

      A robot that assembles cars does not need to “understand” anything about what it’s doing. It just needs to make the same motions with its welding torch over and over again for eternity. And it does that job pretty well.

      Further, neural networks as they stand cannot truly understand anything. All classification networks know how to do is point at stuff and say “That’s a car/traffic light/cancer cell”, and all generation networks know how to do is parrot. Any halfway decent teacher will tell you that memorizing and understanding are completely different things.

      • thedeadwalking4242@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        3 months ago

        No, but a robot that does the dishes needs to know what a dish is, how to clean all the different types, and what’s not a dish. The complexity of behavior needed to automate human tasks that cannot be done by an assembly-line robot is immense. Most manual labor jobs are still manual labor because they are too full of unknowns and nuances for a simple logic diagram to be of any use. So yes, some robots need to understand what’s going on.

        And as for parroting vs. remembering: current LLMs are very limited in their capacity to create new things, but they can create novel things by smashing together their training data. Think about it: that’s all humans are too, a result of our training data. If I took away every single one of your senses from the day you were born and removed your ability to remember anything, you wouldn’t be very intelligent either. With no inputs you could produce no outputs other than gibberish, which an AI can do too. (And I mean ALL senses: no form of connection with the outside world.)

        • psud@aussie.zone
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 months ago

          My dish washing robot doesn’t need to know anything. It does depend on me loading it, and putting the more heat affected stuff on the top shelf

          • thedeadwalking4242@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            3 months ago

            Yes, it depends on you loading it, doesn’t always get all the dishes done, and will melt your dishes if they are heat sensitive. All this because it doesn’t understand the task at hand. If it did, it could put them away for you, load them, ensure all dishes are spotless, and hand-wash the heat-sensitive ones.

      • KeenFlame@feddit.nu
        link
        fedilink
        English
        arrow-up
        2
        ·
        3 months ago

        The problem is they didn’t focus research on this tech or try to make image generators specifically; it was a scientific discovery that came from emulating how brains work, and then it turned out to work wonders in these fields.

        • intensely_human@lemm.ee
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 months ago

          Which is why STEM is so cool. Because one is dedicated to an interaction with physical reality, which exists outside the mind, novelty can arise unexpectedly from a simple and honest conversation with deep structures nobody knows about.

          STEM is cool because it involves discovery. The fact that amazing things can exist without anyone being (yet) aware of them makes it an open and unpredictable undertaking.

      • intensely_human@lemm.ee
        link
        fedilink
        English
        arrow-up
        2
        ·
        3 months ago

        Right. That’s why making cars is already automated. But a robot that digs ditches needs to understand context because no two ditches are the same.

  • Rusty Shackleford@programming.dev
    link
    fedilink
    English
    arrow-up
    15
    arrow-down
    1
    ·
    edit-2
    3 months ago

    I propose that we treat AI as ancillas, companions, muses, or partners in creation and understanding our place in the cosmos.

    While there are pitfalls in treating the current generation of LLMs and GANs as sentient, or any AI for that matter, there will be one day where we must admit that an artificial intelligence is self-aware and sentient, practically speaking.

    To me, the fundamental question about AI, that will reveal much about humanity, is philosophical as much as it is technical: if a being that is artificially created, has intelligence, and is functionally self-aware and sentient, does it have natural rights?

    • ProgrammingSocks@pawb.social
      link
      fedilink
      English
      arrow-up
      4
      ·
      edit-2
      3 months ago

      It would have natural rights, yes. Watch Star Trek TNG’s “The Measure of a Man” which tackles this issue exactly. Does the AI of current days have intelligence or sentience? I don’t believe so. We’re a FAR cry away from Lt. Cmdr. Data.

      • Rusty Shackleford@programming.dev
        link
        fedilink
        English
        arrow-up
        1
        ·
        3 months ago

        We’re a FAR cry away from Lt. Cmdr. Data.

        Yes, I agree. I make deep neural network models for a living. The best of the best LLM models still “hallucinate” unreliably after 30-40 queries. My expertise is in computer vision systems; perhaps that’s been mitigated better as of late.

        My point was to emphasize the necessity for us, as a species, to answer the philosophical question and start codifying legal jurisprudence around it well before the moment of self-awareness of a General-Purpose AI.

    • exocrinous@startrek.website
      link
      fedilink
      English
      arrow-up
      2
      ·
      3 months ago

      if a being that is artificially created, has intelligence, and is functionally self-aware and sentient, does it have natural rights?

      Obviously yes. Otherwise you gotta start denying rights to in vitro fertilization babies.

  • Bilb!@lemmy.ml
    link
    fedilink
    English
    arrow-up
    20
    arrow-down
    9
    ·
    edit-2
    3 months ago

    Matthew Dow Smith, whoever the fuck that is, has a sophisticated delusion about what’s actually going on, and he’s incorporated it into his persecution complex. Not impressed.

  • people_are_cute@lemmy.sdf.org
    link
    fedilink
    English
    arrow-up
    8
    arrow-down
    1
    ·
    edit-2
    3 months ago

    AI art tools democratize art by empowering those who weren’t born with the affinity, talent or privilege to become artists themselves. They allow regular people the freedom of expression in new dimensions. They are amazing.

    They are not made to replace human art. They are made to supplement it. The “artists” who feel threatened and offended by its existence are probably not very good at their art.

  • Dasus@lemmy.world
    link
    fedilink
    English
    arrow-up
    10
    arrow-down
    4
    ·
    3 months ago

    If you think arts and humanities are useless, you probably lack an imagination.

    Like completely.

    I won’t say you’re useless, because simple minded grunts are needed.

    Humanity wouldn’t exist without the arts.

    • intensely_human@lemm.ee
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      2
      ·
      3 months ago

      Ah yes “the arts”. Definitely the point of humanities, and nothing to do with categorizing the world into “important people” and “simple minded grunts”.

      Humanities students don’t read these days, and it shows.

      • Dasus@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        3
        ·
        edit-2
        3 months ago

        “Art” as a term is so all-encompassing that it’s hard to define what is and isn’t art.

        I’m sure you can rustle up some very reductive few word definition, but the most popular ones go something like “the expression or application of human creative skill and imagination”, and that’s a very broad definition, wouldn’t you agree?

        I’m sure you’d also agree there are some people who never seem to express or apply any of their creative skill or imagination (and some who genuinely seem to lack any altogether), despite still being productive members of society.

        Not everyone needs to be an artist; a minority of the population will do. But without artists we would all perish, because even those people who don’t express or apply creative skill or imagination still most certainly enjoy it, and probably couldn’t get through their jobs without it. (Repetitive work is just so much easier while listening to music, and I’m sure that’s not a controversial statement.)

        So what do humanities students do these days then, according to you, since they “don’t read”?

        • psud@aussie.zone
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 months ago

          An arts degree isn’t about making art. Graduates of arts degrees are not generally artists.

          • Dasus@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            edit-2
            3 months ago

            Yes, arts as a university subject is more about looking into artists and their work and what it meant/means for everyone/other people.

            I was never suggesting “arts” in universities are hand-painting lessons, was I?

    • Belzebubulubu@mujico.org
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      1
      ·
      edit-2
      3 months ago

      I am a writer with two novels in progress and I’m into photography. I consider myself pretty creative.

      Arts and humanities are useless.

  • nednobbins@lemm.ee
    link
    fedilink
    English
    arrow-up
    4
    ·
    3 months ago

    I’d love to see some data on the people who believe that AI fundamentally can’t do art and the people who believe that AI is an existential threat to artists.

    Anecdotally, there seems to be a large overlap between the adherents of what seem to be mutually exclusive positions and I wish I understood that better.

    • MBM@lemmings.world
      link
      fedilink
      English
      arrow-up
      11
      ·
      3 months ago

      The trick is that there are companies/people that would commission an artist but go for AI instead because they don’t want/need actual art if it’s more expensive

      • nednobbins@lemm.ee
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        3 months ago

        I’m going to try to paraphrase that position to make sure I understand it. Please correct me if I got it wrong.

        AI produces something not-actual-art. Some people want stuff that’s not-actual-art. Before AI they had no choice but to pay a premium to a talented artist even though they didn’t actually need it. Now they can get what they actually need but we should remove that so they have to continue paying artists because we had been paying artists for this in the past?

        Is that correct or did I miss or mangle something?

        • exocrinous@startrek.website
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 months ago

          Your description contains a mistake. You mixed up wants and needs. You said some people want fake art, and then you changed your wording and said those people need fake art. Sneaky.

          Wants and needs are not the same thing. For example, many people want a modded truck that rolls coal and produces an engine sound louder than a helicopter, but nobody needs one. Many people want to build an LNG plant to process natural gas, but nobody needs one. Many people want a reason to discriminate against trans people and kick them out of sports, but nobody needs one.

          • nednobbins@lemm.ee
            link
            fedilink
            English
            arrow-up
            1
            ·
            3 months ago

            That wasn’t intentional.

            Would it be more accurate for me to change “want” to “need” or the other way around?

            • exocrinous@startrek.website
              link
              fedilink
              English
              arrow-up
              1
              ·
              3 months ago

              It would be more accurate to change need to want. Because soulless corporations want soulless art, but they don’t need it. Passionate, meaningful art sells better and it has a prosocial effect. Why do you think Disney calls their theme park engineers “imagineers”? They want passionate people working for them. Disney only cares about money, but passionate workers make more money.

              And imagine how fucked society would be if we didn’t have stories that made us think. You know those elsagate videos that were controversial a few years ago? I don’t want kids to watch shows like that. I want kids to watch shows that teach them valuable lessons. Like Star Trek Prodigy, and The Owl House, and Diego, and all the stuff I liked when I was little that made me think but which I’ve forgotten. Kids need to think. Adults need to think. We need to have important social lessons reinforced. We need gay, bi, ace, trans, and nonbinary characters on TV because that saves lives.

              Could an AI write Scar into The Lion King? Could an AI sneak a blatantly homosexual coded villain into a work by a homophobic company in order to have at least some representation? No. Companies only care about money, they will not program their art AIs to care about ethics. And that’s why AI art sucks. Art without ethics is bad.

              • nednobbins@lemm.ee
                link
                fedilink
                English
                arrow-up
                1
                ·
                3 months ago

                Covering the second half:

                I hadn’t heard of Elsagate and had to look it up. How does AI factor into that? As near as I can tell Elsagate started with some random guy making disturbing videos and mislabeling them as child-friendly.

                I’m a good bit older than you, so my nostalgia doesn’t lead me to any of the titles you mentioned. For the most part it’s stories that aren’t covered by anyone’s IP. My childhood had a lot of folk tales recited from memory. Those stories were fairly common, but there would be regional variation and most tellers would put their own twist on them (for example, when my Aunt told the story of the Seven Kids, she would do a particular squeaky voice when she got to the part where the wolf swallows the chalk; in her version it was always chalk). That’s actually quite close to how LLMs work. She heard various versions of that story throughout her life, then she repeats it with some other bits that she incorporated from the rest of her life. I do the same thing when I retell the story to my children. It’s basically the same story my Aunt told, but I translate it into English and add some modern slang.

                What would stop an AI from writing Scar into The Lion King? If you told an LLM to “Write Hamlet but have all the royal family be lions,” it’s likely you’d get some evil lion version of Claudius.
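
                To make that concrete, here’s roughly what such a request looks like in code. This is a minimal sketch, assuming the OpenAI Python client; the model name and prompt wording are only placeholders, not anyone’s actual setup.

                ```python
                # Sketch only: assumes the OpenAI Python client is installed and
                # OPENAI_API_KEY is set; model name and prompt are placeholders.
                from openai import OpenAI

                client = OpenAI()  # picks up the API key from the environment

                response = client.chat.completions.create(
                    model="gpt-4o-mini",  # example model, not a specific recommendation
                    messages=[
                        {
                            "role": "user",
                            "content": (
                                "Write Hamlet but have all the royal family be lions. "
                                "Keep Claudius as the scheming uncle."
                            ),
                        }
                    ],
                )

                print(response.choices[0].message.content)
                ```

                The point isn’t that the output would be good, just that nothing structural stops the model from producing an evil lion uncle.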

                There were a lot of homosexual coded villains in older media. There were also a lot of films where all the black people were bad guys, all the Asian people were goofy servants and all the women were housewives or prizes. The general consensus today is that those choices were horribly discriminatory. If AI manages to avoid that sort of behavior it would be a good thing.

                The flip side is also that artists can just as easily slip hateful material into otherwise reasonable art. Human history is full of unethical choices. Even if the AI itself doesn’t have ethics the people using it can be held to the same ethical standards as the users of any other tool or medium.

              • nednobbins@lemm.ee
                link
                fedilink
                English
                arrow-up
                1
                ·
                3 months ago

                OK. With that change we get:

                AI produces something not-actual-art. Some people want stuff that’s not-actual-art. Before AI they had no choice but to pay a premium to a talented artist even though they didn’t actually need it. Now they can get what they actually want but we should remove that so they have to continue paying artists because we had been paying artists for this in the past?

                Is that accurate?

                The rest of your comment seems to be another thread, so I’ll respond separately.

        • vzq@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 months ago

          “(Not) Actual art” is a bit loaded. I call it “illustration” in this context.

          AI can do illustration. Right now it needs a lot of hand holding but it will get better.

          • nednobbins@lemm.ee
            link
            fedilink
            English
            arrow-up
            1
            ·
            3 months ago

            It’s an awkward phrase but I was trying to stay as close to the original vocabulary as possible. I think the point still stands if you replace “not-actual-art” with illustration. People couldn’t get what they were looking for so they paid more for the next best thing. Now they can get something closer to what they’re looking for at a lower price.

      • Honytawk@lemmy.zip
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        edit-2
        3 months ago

        And in your opinion, would that be so bad?

        Doubt it is going to stop humans from creating art, no matter how powerful the AI is. It is a fundamental human thing to do.

        • ProgrammingSocks@pawb.social
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 months ago

          I’m talking inside the context of capitalism. Yes it would be nice if we had a UBI to support people but we don’t. I agree that art is fundamentally human.

          Outside of the context of capitalism, I’m not sure AI art would be found very useful at all, because its main point at the moment is remixing the same shit everyone’s seen before for profit. To make mass-produced, lowest-common-denominator slop.

      • nednobbins@lemm.ee
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        3 months ago

        I can live with that.

        I’d support a UBI so that anyone who wants to can just make art for their own fulfillment. If someone wants AI art though they should be allowed to use that.

    • istanbullu@lemmy.ml
      link
      fedilink
      English
      arrow-up
      3
      ·
      3 months ago

      People used to pay lots of money to digital artists for various tasks. Now generative models like Stable Diffusion can do many of those things, such as graphic design. This is resulting in people paying less to artists.
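
      For a sense of scale, a one-off asset that might once have been a small commission can now be generated locally with a few lines. This is a rough sketch, assuming the Hugging Face diffusers library and a CUDA-capable GPU; the model ID and prompt are only examples, not a specific workflow.

      ```python
      # Sketch only: assumes diffusers, torch, and a local GPU are available;
      # the checkpoint and prompt below are just examples.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",  # example public checkpoint
          torch_dtype=torch.float16,
      ).to("cuda")

      # The kind of small graphic-design brief that used to be a paid commission.
      prompt = "minimalist flat logo of a paper crane, teal and white, clean vector style"
      image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
      image.save("logo_draft.png")
      ```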

      • nednobbins@lemm.ee
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        3 months ago

        I get that and there are a lot of jobs that people used to pay for and no longer do.

        The entire horse industry has mostly collapsed. I couldn’t get a job as a scribe. With any luck, all the industries around fossil fuels will go away. We’re going to pay less to most people in those industries too.

      • Harbinger01173430@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        3 months ago

        Well yes, since the economy is in shambles, we normal people will try to spend as little money as possible to make sure we are safe.