• 0 Posts
  • 31 Comments
Joined 4 months ago
Cake day: March 3rd, 2024

  • Wherever humans draw the line. The meme assumes there is a clear break between earlier species and later descendants, when in reality it is a continuous change of many characteristics each time an individual reproduces and spreads their genetics. It’s the same flaw as the “missing link” argument.



  • The caveat of finding “better” methods is that it excuses continuing or expanding the practices that are the core problems: rapid growth, consumption, and a throwaway society. And like you said, those methods have their own issues that could become problematic as they scale. Not to say we shouldn’t try to improve what we can, just that being better than the worst way of doing things isn’t all that great either.

    The word “sustainable” in the title is one of those greenwashing terms used to sell a product and keep the status quo of business as usual, as the report shows.



  • If anything, I think the development of actual AGI will come first and give us insight into why some organic mass can do what it does. I’ve seen many AI experts say that one reason they got into the field was to try to figure out the human brain indirectly. I’ve also seen one person (I can’t recall the name) say we already have a rudimentary form of AGI existing now: corporations.


  • Rhaedas@fedia.io to Programmer Humor@programming.dev · “prompt engineering”

    LLMs are just very complex and intricate mirrors of ourselves, because they draw on our past ramblings to produce the best response to a prompt. They only feel intelligent because we can’t see the inner workings the way we could see the IF/THEN statements of ELIZA, and yet many people were still convinced that ELIZA was talking to them. Humans are wired to anthropomorphize, often to a fault.
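    For anyone who never poked at it, ELIZA really was little more than a list of pattern-and-response rules. A rough Python sketch of that kind of IF/THEN matching (the rules here are made up for illustration, not Weizenbaum’s actual script):

    ```python
    import re

    # A few made-up ELIZA-style rules: if the input matches a pattern,
    # echo part of it back inside a canned response.
    RULES = [
        (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
        (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
        (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
    ]

    def respond(user_input: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(match.group(1))
        return "Please go on."  # fallback when nothing matches

    print(respond("I am tired of prompt engineering"))
    # -> Why do you say you are tired of prompt engineering?
    ```

    It takes surprisingly little of that before people start treating the output as a real conversation.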

    I say that while also believing we may yet develop actual AGI of some sort, which will probably use LLMs as a database to pull from. What is concerning is that even though LLMs are not “thinking” themselves, the way we’ve dived in head first, ignoring the dangers of misuse and their many flaws, is telling about how we’ll ignore problems in AGI development, such as the misalignment problem, which has basically been shelved by AI companies in favor of profits and being first.

    HAL from 2001/2010 was a great lesson - it’s not the AI…the humans were the monsters all along.