• 0 Posts
  • 32 Comments
Joined 3 years ago
Cake day: June 13th, 2023
  • As someone aware of the decades of legal battles to prevent the gutting of education systems, usually most visible at the local level: follow the money and you almost always end up at corpo think tanks like the heritage foundation.

    If you’re familiar with the heritage foundation, they’ve been trying to run a project2025-style playbook for decades, and it is only through their success that the current administration is a billionaire playground. Reminder that elon musk could directly choose for hundreds of thousands of children to die this year by taking away their food and medicine, because he wanted to. Billionaires also got unimaginably generous treatment at the same time, worth much more than all of that food and medicine.

    It’s more an amalgam of cooperatively evil assholes, most of whom have an absurd amount of money for some reason, but yeah, billionaires are a good chunk of why whole groups are being funded to spend all day every day trying to kneecap educational efforts, or painting academics as evil satanists who are corrupting your children with science.


  • Peanut@sopuli.xyz to 196@lemmy.blahaj.zone · Mom rule · 3 points · edited · 6 months ago

    a lot of this is novel, and only now being properly understood by comparing multiple expert perspectives, which allows more confidence in certain weightings of old ideas. some old ideas are hard to quickly correct, because people don’t like digging out parts of their current model for making sense of the world.

    to be fair, that is mechanically connected to the same drives that run fear and everything else based on how we contextualize the ‘surprise’ we feel when the world doesn’t match our model.

    most of the work on surprisal runs in the karl friston direction, i.e. predictive processing. active inference is a very good thing to study, because it teaches how we make sense of the world as a bunch of cells working together, in varied and often novel contexts.
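    a toy sketch of the surprisal idea in python (my own illustration, not from friston or the reading below): surprisal is just -log p(observation), and a predictive agent that keeps updating its model is less surprised by the same observation over time.

```python
import math

def surprisal(p):
    """surprisal in bits: the rarer the model thinks an event is,
    the more 'surprising' it is when it actually happens."""
    return -math.log2(p)

def update(belief, observed, rate=0.2):
    """nudge the model's p(event) toward what was actually observed."""
    target = 1.0 if observed else 0.0
    return belief + rate * (target - belief)

belief = 0.1                      # agent thinks rain is unlikely
s_before = surprisal(belief)      # surprisal of seeing rain right now
for _ in range(10):               # ...but it keeps raining
    belief = update(belief, True)
s_after = surprisal(belief)       # same observation, much less surprise
```

    the point being: the same event carries less ‘surprise’ once the model has absorbed it, which is the loop the agent is always running.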

    for more technical reading

    https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind

    free textbook from MIT Press, although it’s a couple years old now.

    https://pubmed.ncbi.nlm.nih.gov/38667857/

    this is one of my favourite current takes. basically anything around mahault albarracin, friston, or michael levin right now is great for a technical framing of things.

    levin is a good source if you like the cancer analogy

    https://pubmed.ncbi.nlm.nih.gov/33961843/

    although this writeup was from a few years back. he is constantly interacting with experts from different fields on youtube, and there’s a lot to be learned just hearing their conversations and sense-making. some of the most amazing empirical results in recent experiments are coming out of michael levin’s work. he keeps a summary of his broader message up to date, if anyone wants to know tufts university for something other than the government bagging students.

    more lighthearted and mainstream,

    algospeak by adam aleksic,

    Godel Escher Bach/i am a strange loop by douglas hofstadter,

    for understanding language and complexity.

    extra shortform,

    https://www.youtube.com/@theforestjar/videos

    the forest jar is often dismissed because of the art and dry delivery, but the topics are fantastic, comparing the perspectives of different ‘thought tools,’ or representational lenses, to convey a greater and more nuanced picture.

    a lot of knowledge is just understanding how cults work to sustain their current model of the world in the face of critique.

    some things are just general concepts that need to be better collected and talked about together, like the motte and bailey, and how cults, or people like jordan peterson, will confabulate pockets of faux-expertise complexity (kind of the same way AI will confabulate in a way that sounds like it makes sense) while actually just diverting and distracting so they don’t need to deal with the dissonance in question. someone actually framed it well in his jubilee appearance, and it’s hard to call out if the surrounding people aren’t familiar.

    that being said, any pocket can take all of your time if you let it, so we need to do better at creating cultures of cooperatively and intentionally interacting with this material. people with more social talent would be valuable here. etc.

    also, artists should already be working with scientists to help communicate the truth of current understanding, better than dishonest clickbait journal headlines.

    hopefully some good resources here, unless you’re looking for something else more specific.


  • Peanut@sopuli.xyz to 196@lemmy.blahaj.zone · Mom rule · 9 points · 6 months ago

    TLDR: cooperation, solidarity, and understanding diverse perspectives, which is a large part of what ‘learning’ is, if you’ve ever studied complex intelligent systems. which you should, because we want to resolve dissonance between different model perspectives without losing the ability to communicate, or becoming hostile. we also need to deal with the non-communicative parasitic patterns, like cults, which are like organisms that get ‘smarter’ and more ‘able’ at large scale, at least in the sense of growing, goodhart’s-law style, until the non-comprehended environment dies along with the host.

    Expanded:

    learning about learning is important. the scientific method, bayesian probabilistic weighting, how words work, history and diverse expert consensus, bias, etc.

    all very important, and should be the main class in school at this point. if you don’t learn how to learn, you might just find a hole and build complexity within confabulations until nobody knows what you’re talking about, and you can no longer check your beliefs against the diverse representations of others.
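    a minimal sketch of the ‘bayesian probabilistic weighting’ bit (my own toy example, names and numbers made up): two hypotheses about a coin, and each observation re-weights them by how well they predicted it.

```python
def bayes_update(priors, likelihoods):
    """posterior ∝ prior × likelihood, renormalized to sum to 1."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

beliefs = {"fair": 0.5, "biased": 0.5}   # start with no preference
p_heads = {"fair": 0.5, "biased": 0.9}   # what each model predicts

for flip in "HHHHHH":                    # six heads in a row
    like = {h: p_heads[h] if flip == "H" else 1.0 - p_heads[h]
            for h in beliefs}
    beliefs = bayes_update(beliefs, like)
```

    after six heads the ‘biased’ hypothesis carries roughly 97% of the weight, but a tails would shift the weights back, which is the whole point of weighting instead of committing.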

    this stacks with learning how to build tools to resist social momentum, and the tactics often used to stop progressive memes from gaining a foothold: astroturfing, reframing, dividing.

    a story for framing

    a bunch of atheists want to stop your cult dogma from being pushed into schools? get a bunch of feminists mad at the ones protesting to end male genital mutilation, because ‘female genital mutilation is worse’ or some other terrible strawman that re-frames their actual position. then get some 4channers or joe rogan chuds/rhetoric in there so you have more evidence of bad actors. it would be funny if a bunch of media groups came into existence and then died doing nothing but this kind of ragebait. nice. now both sides have legitimate grievances being dismissed by the other side while they attack each other. now to push some more stuff for governments/schools to weaken people’s ability to comprehend the world or communicate.

    when they get mad at rich people, let them waste all their energy screaming at buildings. uh oh, anti-fascists and minority groups are rising up? better frame them on the news like an apocalypse of a bunch of idiots, rather than address the constant struggles and suffering people experience, because we want to be pandas and don’t want to adapt to new environments…

    and i like to constantly point out that mainstream journals will even note that the black american community was generally russian propaganda target #1. get your enemy to attack themselves, and you can pay some idiot to idiot his way into office through a bunch of social manipulation tactics, like the ones being described here.

    and nobody is educated enough to interpret the complexity when sometimes there’s a little uncomfortable truth everywhere. it’s easier to pretend none of us see the uncomfortable mistakes of the flawed models currently being used to represent our reality.

    imagine being rich and doing whatever you want, and then spending a little of your hoard to ensure others can’t do what you don’t want them to do. that’s largely what’s going on. by changing what people are able to interact with, you can hack a lot of minds if you’ve got the money and power. this hackability comes from social energy minimization via creating more simplified shared heuristic representations, so we can more easily predict each other. (for a deep dive, see friston’s free energy principle/active inference, and the epistemically focused followup by mahault albarracin.)
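    a toy picture of that ‘simplified shared representations make us easier to predict’ idea (my own sketch, not from the papers): two agents drift toward a shared convention, and the error in predicting each other shrinks every round.

```python
# two agents hold different private representations of the same thing.
# adopting a shared heuristic = each round they each shift a bit toward
# the other, and mutual prediction error drops geometrically.
a, b = 0.0, 10.0
errors = []
for _ in range(20):
    errors.append(abs(a - b))                    # mutual prediction error
    a, b = a + 0.25 * (b - a), b + 0.25 * (a - b)
```

    each step halves the gap. the flip side: whoever controls the shared convention controls what everyone finds easy to predict, which is the hackability being described.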

    that being said, it’s not just christo-fascist think tanks like prager-u and the heritage foundation, but a diverse set of differently-abled groups working together to support the sustenance of their non-progressive models. kind of a “we don’t fuck with each other, as long as we target the tribe that wants to force us to think” deal between a bunch of idiotic patterns that continue existing like parasitic cultural organisms.

    so… we gotta fight it like one. like cells and organs of a body working together to fight a new parasite, using the intelligence and tools we have. understanding that the whole of humanity is our ‘body,’ and we’re fighting a cancer that uses noise/stress to disable cells and take them over.

    we need to build structures that won’t just get noticed and hijacked before they can become effective, and to understand how to de-escalate and reframe the thought process of someone who is currently stuck in cult-style non-communication traps where they can’t update their model anymore. we also need to follow it up so that the inevitability of growing diversity can lead to better forms of understanding the world and communicating, building a robust self-healing body, rather than trying to self-segregate and isolate until a parasite decides to rampage.

    hope that makes sense!



  • Peanut@sopuli.xyz to 196@lemmy.blahaj.zone · Mom rule · 22 points · 6 months ago

    You mean the solution to this problem is not for everyone to become a solipsistic asshole?

    It’s to fix the system and culture that encourages it?

    But then i can’t feel justified abusing people around me to take everything i can from them and climb a rung on the socio-economic ladder.

    What do you mean cancer kills the body? That’s the brain’s problem to figure out. Definitely not confused cells being coerced into an ignorant and totally destructive culture.

    Snide overload, but you really hit the nail on the head, and I think humans should cooperatively be able to do better than dumb cells. The excuses i see for being solipsistic in this freakish and epistemically destructive way are both disgusting and disappointing.


  • Peanut@sopuli.xyz to Science Memes@mander.xyz · 🐇 🐇 🐇 · 5 points · 8 months ago

    Bayesian analysis of complex intelligent systems via friston’s free energy principle and active inference? Or machine learning?

    Personally love the stuff circling Michael Levin at tufts university. I could also imagine there’s a lot of unique model building in different biological/ecological niches.


  • i think it’s a framing issue, and AI development is catching a lot of flak for the general failures of our current socio-economic hierarchy. also, people have been shouting “super intelligence or bust” for decades now. i just keep watching it get better much more quickly than most people’s estimates, and i understand the implications of that. i do appreciate discouraging idiot business people from shunting AI into everything that doesn’t need it, because it’s a buzzword, or because they can use it to exploit something. some likely just used it as an excuse to fire people, but again, that’s not actually the AI’s fault. that is this shitty system. i guess my issue is people keep framing this as “AI bad” instead of “corpos bad”.

    if the loom was never invented, we would still live in an oppressive society sliding towards fascism. people tend to miss the forest for the trees when looking at tech tools politically. also people are blind to the environment, which is often more important than the thing itself. and the loom is still useful.

    compression and polysemy, growing your dimensions of understanding in a high-dimensional environment that is itself changing shape, comprehension growing with the erasure of your blindspots. collective intelligence (and how diversity helps cover more blindspots). predictive processing (and how we should embrace lack of confidence, but understand the strength of proper weighting for predictions, even when a single blindspot can shift the entire landscape, making no framework flawless or perfectly reliable). and understanding that everything we know is just the best map of the territory we’ve figured out so far. if you want to judge how subtle but in-your-face blindspots can be, look up how to test your literal blindspot; you just need 30 seconds and a paper with two small dots to see how blind we are to our blindspots. etc.

    more than fighting the new tools we can use, we need to claim them, and the rest of the world, away from those who ensure that all tools will only exist to exploit us.

    am i shouting to the void? wasting the breath of my digits? will humanity ever learn to stop acting like dumb angry monkeys?


  • let’s make another article completely misrepresenting opinions/trajectories and the general state of things, because we know it’ll sell, and it will get the ignorant fighting with those who actually have an idea of what’s going on, because they saw in an article that AI was eating the pets.

    please find media sources that actually seek to inform, rather than to provoke or instigate confusion and division through misrepresentation and disinformation.

    these days you can’t even try to fix a category error introduced by the media without getting cussed out and blocked from aggregator sites, because you ‘support the evil thing’ that the article said was evil and that everyone in the group hates, without even an attempt to understand the context, or what part of the thing is even being discussed.

    also, can we talk more about breaking up the big companies so they don’t have a hold on the technology, rather than getting mad at everyone who interacts with modern technology?

    it legit feels as bad as fighting rightwing misinformation about migrant workers and trans people.

    just make people mad, and teach them that communication is a waste of energy.
    we need to learn how to tell who is informing rather than obfuscating, through a track record of accuracy, and consensus with other experts from diverse perspectives, not by building tribes upon who agrees with us. and don’t blame experts for not also learning how to apply a novel and virtually impossible level of compression when explaining their complex expertise, when you don’t even want to learn a word or concept. it’s like being asked to describe how cameras work, and then getting called an idiot because some analogy used can be imagined in a less useful context that doesn’t apply 1:1 with the complex subject being summarized.

    outside of that, find better sources of information. fuck this communication disabling ragebait.

    because now, just having a history of rebutting this garbage gets you dismissed; a history of interacting with the topic on this platform is apparently a good enough vibe check to justify not even attempting understanding and interaction.

    TLDR: the quality of the articles and conversation on this subject is so generally ill-informed that it hurts, and is obviously trying to craft environments of angry engagement rather than informing.

    also i wonder if anyone will actually engage with this topic rather than get angry, cuss me out, and not hear a single thing being communicated.


  • Or maybe the solution is in dissolving the socio-economic class hierarchy, which can only exist as an epistemic paperclip maximizer, rather than also kneecapping useful technology.

    I feel much of the critique and repulsion comes from people without much knowledge of either art/art history or AI, nor of the problems and history of socio-economic policies.

    Monkeys just want to be angry and throw poop at the things they don’t understand. No conversation, no nuance, and no understanding of how such behaviours roll out the red carpet for continued ‘elite’ abuses that shape our every aspect of life.

    The revulsion is justified, but misdirected. Stop blaming technology for the problems of the system, and start going after the system that is the problem.



  • That argument was to be had with Apple twenty years ago as they built their walled garden, which intentionally frustrates people into going all-in on Apple. We still can’t get anyone to care about dark patterns/deceptive design, or about Disney attacking the creative commons it parasitically grew out of. AI isn’t and has never been the real issue. It just absorbs all the hate the corpos should be getting as they use it, along with every other tool at their disposal, to slowly fuck us into subservience. Honestly, AI is teaching us the importance of diverse perspectives in intelligent systems, and the dangers of overfitting, which exist in our own brains and social/economic systems.

    Same issue, different social ecosystem being hoarded by the wealthy.
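    On the overfitting point: a toy contrast (my own illustration, nothing to do with Apple or Disney) between a model that memorizes its data and one that learns the rule. Both look perfect on what they’ve seen; only one has anything to say about a new input.

```python
train = {1: 2, 2: 4, 3: 6, 4: 8}      # underlying rule: y = 2 * x

memorizer = dict(train)                # overfit: stores every exact pair
def rule(x):                           # generalizes the pattern instead
    return 2 * x

# both are flawless on seen data...
train_ok = all(memorizer[x] == rule(x) == y for x, y in train.items())

# ...but only the rule handles something unseen.
rule_pred = rule(10)                   # 20
memo_pred = memorizer.get(10)          # None: memorization has no opinion
```

    Perfect scores on familiar inputs tell you nothing about which kind of model you’re dealing with, which is the danger.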



  • The main issue though is the economic system, not the technology.

    My hope is that it shakes things up fast enough that they can’t boil the frog, and something actually changes.

    Having capable AI is a more blatantly valid excuse to demand a change in economic balance and redistribution. The only alternative would be to destroy all technology and return to monkey. I’d rather we just fix the system, so that technological advancements don’t seem negative because the wealthy have already hoarded all the gains of every new technology for the past handful of decades.

    Such power is discreetly weaponized through propaganda, influencing, and economic reorganizing to ensure the equilibrium stays until the world is burned to ash, in sacrifice to the lifestyle of the confidently selfish.

    I mean, we could have just rejected the loom. I don’t think we’d actually be better off, but I believe some of the technological gains should have been less hoardable by the existing elite. It’s almost like they used wealth to prevent any gains from slipping away to the poor. Fixing the issue before it got this bad was the proper answer. Now people don’t even want to consider that option, or they say it’s too difficult, so we should just destroy the loom.

    There is a markov blanket around the perpetuating lifestyle of modern aristocrats, obviously capable of surviving every perturbation. Every gain our society has made has made that reality more true, entirely due to where new power is distributed. People are afraid of AI turning into a paperclip maximizer, but that’s already what happened to our abstracted social reality. Maximums being maximized and minimums being minimized in the complex chaotic system of billions of people leads to an inevitable increase in the accumulation of power and wealth wherever it has already been gathered. Unless we can dissolve the political and social barriers maintaining this trend, we will be stuck with our suffering regardless of whether we develop new technology or don’t.
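    That accumulation dynamic can be sketched as a toy rich-get-richer simulation (my own illustration, parameters arbitrary): every new unit of wealth lands on an agent with probability proportional to what they already hold.

```python
import random

random.seed(0)                         # deterministic toy run

agents = [1.0] * 100                   # everyone starts equal: 1% each
for _ in range(5000):
    # new wealth attaches preferentially to existing wealth
    i = random.choices(range(len(agents)), weights=agents)[0]
    agents[i] += 1.0

top_share = max(agents) / sum(agents)  # well above the equal 1% share
```

    No agent is smarter or more deserving in this toy; the concentration falls out of the attachment rule alone.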

    Although it doesn’t really matter where you are or what system you’re in right now. Odds are there is a set of rich assholes working as hard as possible to see that you are kept from any piece of the pie that would destabilize the status quo.

    I’m hoping AI is drastic enough that the actual problem isn’t ignored.



  • I conflate these things because they come from the same intentional source. I associate the copyright-chasing lawyers with the brands that own them; it is just a more generalized example.

    Also, an intern who can give you a song’s lyrics was trained on that data. Any sufficiently advanced future system is largely the same, unless it is just accessing a database or index, like web searching.

    Copyright itself is already a terrible mess that largely serves brands who can afford lawyers to harass or contest infringements. This is especially apparent after companies like Disney have all but murdered the public domain as a concept. See the mickey mouse protection act, as well as other related legislation.

    This snowballs into an economy where the Disney company, and similarly benefited brands can hold on to ancient copyrights, and use their standing value to own and control the development and markets of new intellectual properties.

    Now, a neural net trained on copyrighted material can reference that memory, at least as accurately as an intern pulling from memory, unless it is accessing a database to pull the information. To me, suing on that basis ultimately follows logic that would dictate we have copyrighted material removed from our own stochastic memory, since it treats high-dimensional informational storage as a form of copyright infringement whenever anyone instigates the effort to draw on that information.

    Ultimately, I believe our current system of copyright is entirely incompatible with future technologies, and could lead to some scary arguments and actions from the overbearing oligarchy. To argue in favour of these actions is to argue never to let artificial intelligence learn as humans do. Given our need for this technology to survive the near future as a species, or at least to minimize excessive human suffering, I think the ultimate cost of pandering to these companies may be indescribably horrid.


  • Music publishers sue happy in the face of any new technological development? You don’t say.

    If an intern gives you some song lyrics on demand, do they sue the parents?

    Do we develop all future A.I. technology only when it can completely eschew copyrighted material from its comprehension?

    "I am sorry, I’m not allowed to refer to the brand name you are brandishing. Please buy our brand allowance package #35 for any action or communication regarding this brand content. "

    I dream of a future when we think of the benefit of humanity over the maintenance of our owners’ authoritarian control.


  • Might have to edit this after I’ve actually slept.

    Human emotion and human-style intelligence are not the whole of the realm of emotion and intelligence. I define intelligence and sentience on different scales: I consider intelligence the extent of capable utility and function, and emotion just a different set of utilities and functions within a larger intelligent system. Human-style intelligence requires human-style emotion. I consider gpt an intelligence, a calculator an intelligence, and a stomach an intelligence. I believe intelligence can be preconscious or unconscious; that is, it can exist apart from any functional system complex enough for emergent qualia and sentience. Emotions are one part of this system, exclusive to adaptation within the historic human evolutionary environment. I think you might be underestimating the alien nature of abstract intelligences.

    I’m not sure why you are so confident in this statement; you still haven’t given any actual reason for the belief. You are presenting it as consensus, so there should be a very clear reason why no successful, considerably intelligent function exists without human-style emotion.

    You have also not defined your interpretation of what intelligence is; you’ve only denied that any function untied to human emotion could be an intelligent system.

    If we had a system that could flawlessly complete françois chollet’s abstraction and reasoning corpus, would you suggest it is connected to specifically human emotional traits due to its success? Or is that still not intelligence if it still lacks emotion?

    You said neural function is not intelligence. But would you also exclude non-neural informational systems, such as collectives of cooperating cells?

    Are you suggesting the real-time ability to preserve contextual information is tied to emotion? Sense interpretation? Spatial mapping with attention? You have me at a loss.

    Even though your stomach cells’ interactions are an advanced function, are they completely devoid of any intelligent behaviour? Then shouldn’t the cells fail to cooperate and dissolve into a non-functioning system? Again, are we only including higher introspective cognitive function? Although you can have emotionally reactive systems without that. At what evolutionary stage do you switch from an environmental reaction to an intelligent system? The moment you start calling it emotion? Qualia?

    I’m lacking the entire basis of your conviction. You still have not made any reference to any aspect of neuroscience, psychology, or even philosophy that explains your reasoning. I’ve seen the opinion out there, but not in strict form, nor as the consensus you seem to suggest.

    You still have not shown why any functional system capable of addressing complex tasks is distinct from intelligence without human-style emotion. Do you not believe in swarm intelligence? Or, again, do you define intelligence by fully conscious, sentient, and emotional experience? At that point you’re just defining intelligence as emotional experience, completely independent from the ability to solve complex problems, complete tasks, or make decisions with outcomes reducing prediction error. At which point we could have completely unintelligent robots capable of doing science and completing complex tasks beyond human capability.

    At which point, I see no use in your interpretation of intelligence.


  • What aspect of intelligence? The calculative intelligence in a calculator? The basic environmental response we see in an amoeba? Are you saying that every single piece of evidence shows a causal relationship between every neuronal function and our exact human emotional experience? Are you suggesting gpt has emotions because it is capable of certain intelligent tasks? Are you specifically tying emotion to abstraction and reasoning beyond gpt?

    I’ve not seen any evidence suggesting what you are suggesting, and I do not understand what you are referencing or how you are defining the causal relationship between intelligence and emotion.

    I also did not say that the system will have nothing resembling the abstract notion of emotion; I’m just noting the specific reasons human emotions developed as they have, and I would consider individual emotions a unique form of intelligence serving its own function.

    There is no reason to assume the anthropomorphic emotional inclinations that you are assuming. I also do not agree with your assertions of consensus that all intelligent function is tied specifically to the human emotional experience.

    TLDR: what?