𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠

  • 0 Posts
  • 97 Comments
Joined 1 year ago
Cake day: August 16th, 2023


  • Except the part where it said downloading videos is against their terms of service? Which was my only point?

    Did you completely fail to read the part “except where authorized”? That bit of legalese is a blanket “you can’t use this software in any way we don’t want you to”.

    You physically cannot download files to a browser. A browser is a piece of software. It does not allow you to download anything.

    Ah, you just have zero clue what you’re talking about, but you think you do. I can point out exactly where you are on the Dunning-Kruger curve.

    This is such a wild conversation and ridiculous mental gymnastics. I think we’re done here.

    Hilarious coming from you, who has ignored every bit of information people have thrown at you to get you to understand. But agreed, this is not going anywhere.


  • Yes, by allowing you to download the video file to the browser. This snippet of legal terms didn’t really reinforce any of your points.

    But it actually is helpful for mine. In legalese, storing a downloaded file falls under reproduction, since it essentially creates an unauthorized copy of the data if not expressly allowed. That’s legally separate from downloading, which is just the act of moving data from one computer to another. Pedantically, downloading also necessitates reproduction into the computer’s temporary memory (e.g. RAM), but that temporary reproduction is in most cases allowed (except when it comes to copyrighted material from an illegal source, for example).

    In legalese here, the “downloading” specifically refers to retrieving server data in an unauthorized manner (e.g. a bot farm downloading videos, or trying to watch a video that’s not supposed to be out yet). Storing that data to a file falls under the legal definition of reproduction instead.


  • What they didn’t prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It’s just that this particular method of inferential training, what they call “AI-by-Learning,” is an NP-hard computational problem.

    This is exactly what they’ve proven. They found that if you can solve AI-by-Learning in polynomial time, you can also solve random-vs-chance (or whatever it was called), a known NP-hard problem, in tractable time. Ergo, the current learning techniques, which are tractable, will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise you could reuse the exact same proof presented in the paper).
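
    Roughly, the shape of that reduction (writing “Hard” as a stand-in for the paper’s known NP-hard problem, since I don’t remember its exact name) is:

    $$\text{Hard} \le_p \text{AI-by-Learning} \;\Longrightarrow\; \bigl(\text{AI-by-Learning} \in \mathrm{P} \Rightarrow \text{Hard} \in \mathrm{P}\bigr)$$

    So unless P = NP, AI-by-Learning has no polynomial-time solution, and the same argument applies to any tractable learning technique you substitute for it.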

    They merely mentioned these methods to show that it doesn’t matter which one you pick. The explicit point is that whether you use LLMs or RNNs or whatever, it will never be able to turn into a true AGI. It could be a good AI of course, but that G is pretty important here.

    But it’s easy to just define general intelligence as something approximating what humans already do.

    No, General Intelligence has a set definition that the paper’s authors stick with. It’s not as simple as “it’s a human-like intelligence” or something that merely approximates it.




  • Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.

    That’s assuming that we are a general intelligence. I’m actually unsure if that’s even true.

    That doesn’t mean they’ve proven there’s no pathway at all.

    True, they’ve only calculated it’d take perhaps millions of years. Which might be accurate, I’m not sure to what kind of computer global evolution over trillions of organisms over millions of years adds up to. And yes, perhaps some breakthrough happens, but it’s still very unlikely and definitely not “right around the corner” as the AI-bros claim (and that near-future thing is what the paper set out to disprove).