This is actually extremely difficult to study
I’m an anarcho-communist; all states are evil.
Your local herpetology guy.
Feel free to AMA about picking a pet/reptiles in general, I have a lot of recommendations for that!
A fork wouldn’t help anything at all; the problem is that nobody is working on the patches, not that the devs won’t accept them.
It’s literally the worst distro, https://github.com/arindas/manjarno
EndeavourOS is fundamentally better in every way: everything Manjaro adds makes Arch worse, and everything good they have comes from Arch.
I use keyd for this.
For a graphical installer, there are already plenty of those for Arch that aren’t Manjaro and don’t fuck up your system like Manjaro does.
As for a package manager frontend… pamac is awful; just use Octopi or any TUI.
There was no need to make a whole new distro for those two things, and those two things aren’t even well polished. They should’ve just made those tools for Arch and called it a day.
Manjaro 100%
Everything they add to Arch makes Arch worse. Everything good about Manjaro comes from Arch.
Not in the long run. This is absolutely a temporary issue due to the beta quality of Lemmy.
Yes, but we would refer to our consciousness as an emergent property of our brain.
And we’re trying to build artificial brains.
We also understand the math underlying it. Humans designed and constructed it; we know exactly what it is capable of and what it does.
This is false. Read about their emergent properties. We have no way of knowing when emergent properties appear; we just notice them.
Except calculators aren’t models capable of understanding language that appear to become more and more capable as they grow. It’s nothing like that.
It literally can’t worry about its own existence; it can’t worry about anything because it has no thoughts or feelings. Adding computational power will not miraculously change that.
Who cares? This has no real-world practical use case. Its thoughts are what it says; it doesn’t have a hidden layer of thoughts, which is quite frankly a feature to me. Whether it’s conscious or not has nothing to do with its level of functionality.
And (from what I’ve seen) they get things wrong with extreme regularity, increasingly so as things diverge from the training data. I wouldn’t say they’re a “stochastic parrot,” but they don’t seem to be much better when things need to be correct… and again, based on my (admittedly limited) understanding of their design, I don’t anticipate this technology (at least without some kind of augmented approach that can reason about the substance) overcoming that.
Keep in mind, you’re talking about a rudimentary, introductory version of this. My argument is that we don’t know what will happen when they’ve scaled up. We know for certain that hallucinations become less frequent as the model size increases (see the statistics on GPT-3 vs. GPT-4 hallucinations); perhaps they only occur because the models haven’t met a critical size yet? We don’t know.
There’s so much we don’t know.
That’s missing the forest for the trees. Of course an AI isn’t going to go fishing. However, I should be able to assert some facts about fishing and it should be able to reason based on those assertions. E.g. a child can work off of facts presented about fishing: “fish are hard to catch in muddy water” -> “the water is muddy, does that impact my chances of catching a bluegill?” -> “yes, it does, bluegill are fish, and fish don’t like muddy water” (a toy version of that chain is sketched below).
https://blog.research.google/2022/05/language-models-perform-reasoning-via.html
They do this already, albeit imperfectly, but again, this is like, a baby LLM.
And just to prove it:
https://chat.openai.com/share/54455afb-3eb8-4b7f-8fcc-e144a48b6798
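
To make that fact-chaining concrete, here’s a toy sketch of the inference described above. This is my own illustration, not something from either link; the fact encoding and the harder_to_catch helper are made up purely for demonstration.

```python
# Toy illustration of chaining asserted facts:
# "fish are hard to catch in muddy water" + "bluegill are fish" + "the water is muddy"
# -> catching a bluegill is harder right now.
facts = {
    ("hard_to_catch_in", "fish", "muddy water"),
    ("is_a", "bluegill", "fish"),
    ("condition", "water", "muddy"),
}

def harder_to_catch(species: str) -> bool:
    """Chain the asserted facts to answer the bluegill question."""
    is_fish = ("is_a", species, "fish") in facts
    water_is_muddy = ("condition", "water", "muddy") in facts
    fish_hard_in_mud = ("hard_to_catch_in", "fish", "muddy water") in facts
    return is_fish and water_is_muddy and fish_hard_in_mud

print(harder_to_catch("bluegill"))  # True: the muddy water does hurt my chances
```

The point isn’t the code, it’s the shape of the reasoning: a few asserted facts chained into a conclusion, which is the kind of thing the linked chat is meant to show the LLM doing in plain language.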
You’re assuming I’m saying something that I’m not, and then arguing with that instead of my actual claim.
I’m saying we don’t know for sure what they will be able to do when they’re scaled up. That’s the end of my assertion. I don’t have to prove that they will suddenly come alive; I’m not claiming they will. I’m just claiming we don’t know what will happen when they’re scaled, and they seem to have emergent properties as they scale up. Nobody has devised a way of predicting which emergent properties appear when; nobody has made any progress whatsoever on knowing what scaling up accomplishes.
Can they reason? Yes, but poorly right now. Will that get better? Who knows.
The end of my claim is that we don’t know what’ll happen when they scale up, and that you can’t just write it off like you are.
If you want proof that they reason, see the research article I linked. If they can do that in the rudimentary form we’ve created in so little time, we can’t write off the possibility that they will scale.
Whether or not they reason LIKE HUMANS is irrelevant if they can do the job.
And I’m not anthropomorphizing them without reason; there aren’t existing terms for this. What would you call this behavior of answering questions significantly better when asked to fully explain its reasoning? I would say it is taking the easiest option that still meets the qualifications of what it is requested to do, following the path of least resistance; I don’t have a better word for this than laziness.
Furthermore, predictive power is just another way of achieving reasoning; better predictive power IS better reasoning, because you can’t predict well without reasoning.
I’m not guessing. When I say it’s a difference of kind, I really did mean that. There is no cognition here, and we know enough about cognition to say that LLMs are not performing anything like it.
We do not know that; I challenge you to find a source for it. In fact, I’ve seen sources showing the opposite: they seem to reason in tokens. For example, LLMs perform significantly better at tasks when asked to give a step-by-step reasoned explanation, which indicates that they are doing a form of reasoning, and that their reasoning is limited by what I have no better term for than laziness.
https://blog.research.google/2022/05/language-models-perform-reasoning-via.html
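
For what “asked to give a step-by-step reasoned explanation” looks like in practice, here’s a minimal sketch of the comparison. It’s my own illustration, not code from the linked paper; it assumes the official openai Python package (1.x client), an OPENAI_API_KEY in the environment, and the model name is just a placeholder.

```python
# Compare a direct prompt with an "explain step by step" prompt on the same question.
# Assumes: `pip install openai` (1.x client) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
question = "A farmer has 17 sheep. All but 9 run away. How many sheep are left?"

def ask(prompt: str) -> str:
    # One-shot chat completion; the model name is a placeholder, any chat model works here.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print("Direct:\n", ask(question))
print("Step by step:\n", ask(question + "\nExplain your reasoning step by step before giving the final answer."))
```

Same model, same question; the only difference is asking it to show its work, and that gap is what the chain-of-thought paper linked above is measuring.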
If I teach a real AI about fishing, it should be able to reason about fishing and it shouldn’t need to have read a supplementary knowledge of mankind to do it.
This is a faulty assumption.
In order for you to learn about fishing, you had to learn a shitload about the world. Babies don’t come out of the womb able to do such tasks; there is a shitload of prerequisite knowledge required to fish, and it’s unfair to expect an AI to do this without that prerequisite knowledge.
Furthermore, LLMs have been shown to do many things that aren’t in their training data, so the notion that they’re stochastic parrots is also false.
You’re guessing; you don’t actually know that for sure. It seems intuitively correct, but we simply do not know enough about cognition to make that assumption.
Perhaps our ability to reason exclusively comes from our ability to predict, and by scaling up the ability to predict, we become more and more able to reason.
These are guesses; all we have right now are guesses. You can say “it doesn’t reason” and “it’s just autocorrect” all you want, but if that were the case, why did scaling it up eventually enable it to perform basic math? Why did scaling it up significantly improve its ability to problem-solve (GPT-3 vs. GPT-4)? There are so many unknowns in this field; just saying “nah, can’t be, it works differently from us” doesn’t mean it can’t do the same things as us given enough scale.
We don’t know that for sure yet. We saw a lot of emergent intelligent properties appear as we scaled up, and we’re nowhere near done scaling LLMs. I’m not saying it will be solved, just that we don’t know one way or the other yet.
This isn’t a reason federation doesn’t work at all; that would imply a fundamental issue with federation. This is why focusing on performance instead of mod tools and content-filtering help doesn’t work; the same thing would’ve happened to a massive centralized service without proper mod tools.
If there’s not going to be federation via ActivityPub, I will not continue to use Beehaw at all, so this was very unfortunate to read.
I use F2FS on SSDs and ext4 on HDDs.
I don’t see the need for snapshots; I back up externally.