The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.


  • Is it? [coherent]

    Yes, when it comes to the relevant info. The anaphoric references, though, are all over the place; he, her, she, man*, they all refer to the same fossil.

    *not quite an anaphoric reference, I know. I’m still treating it as one.

    I can only really guess whether they’re talking about one or two subjects here.

    It’s clearly one. Dated to be six years old, of unknown sex, nicknamed “Tina”.

    Why does it show someone cared for the mother as well?

    This does not show a lack of coherence. Instead it shows the same thing as the “is it?” from your comment: assuming that a piece of info is clear from context, when it isn’t. [This happens all the time.]

    That said, here’s my guess (I’ll repeat for emphasis: this is a guess): I think this shows they cared for the mother because, without that care, the child would’ve died way, way earlier.

    That all reads like bad AI writing to me.

    I genuinely don’t think so.

    Modern LLMs typically don’t leave sentence fragments like “on the territory of modern Spain. Years ago.” They’re consistent with anaphoric references, even when those references don’t make sense in the real world. And they don’t screw up prepositions, like swapping “in” for “on”. All of those errors are typically human.

    On the other hand, LLMs fail hard at the discursive level. They don’t know the topic (in this case, the fossil). That error, at least, is not present here.

    Based on that, I think a better explanation for why this text is so poorly written is “CBA”: the author couldn’t be arsed to review it. I myself wrote a lot of shit like this when drunk, sleepy, or in a rush.

    I’ll go a step further and say that the author likely speaks more than one language, and they were copying this stuff from some site in another language that has grammatical gender. I’m saying this because it explains why the anaphoric references are all over the place.





  • So Mint can perform the same role as a tablet

    Yeah, you could argue that Mint allows that laptop to perform the same role as a tablet; it’s at most used for simple image editing, web browsing, and listening to music through the SMB network (from my computer, because hers has practically no storage).

    Without a Linux distro the other options would be to “perform” as electronic junk or virus breeding grounds.

    I keep seeing these posts and comments, trying to convince people This Is The Year of The Linux Desktop.

    Drop the strawman. That is neither what the author of the article said, nor what I said.

    The rest of your comment boils down to you noisily beating that strawman to death, and can be safely disregarded as such.


  • To reinforce the author’s views, with my own experience:

    I’ve been using Linux for, like, 20 years? Back then I dual-booted it with XP, and my first two distros (Mandriva and Kurumin) have long since been discontinued. I remember LILO.

    So I’m probably a programmer, right? …nope, my degrees are in Linguistics and Chemistry. And Linux didn’t make me into a programmer either; the most I can do is pull out a 10-line bash script with some web searching.

    So this “Linux is for programmers” myth didn’t even apply to the 00s, let alone now.

    You need a minimum of 8GB of RAM and a fairly recent CPU to do any kind of professional work at a non-jittery pace [in Windows]. This means that if you want to have a secondary PC or laptop, you’ll need to pay a premium for that too.

    Relevant detail: Microsoft’s obsession with generative models, plus its eagerness to shove its wares down your throat, will likely make this worse. (You don’t use Copilot? Or Recall? Who cares? It’ll be installed by default, running in the background~)

    Linux, on the other hand, can easily boot up on a 10-year-old laptop with just 2GB of RAM, and work fine. This makes it the perfect OS for my secondary devices that I can carry places without worrying about accidental damage.

    My mum is using a fossil like this. It has 4GB or so; it’s a bit slow but it works with an updated Mint, even if it wouldn’t with Windows 10.

    Sure, you can delay an update [in Windows], but it’s just for five weeks.

    I gave the link a check… what a pain. For reference, in Linux Mint, MATE edition:

    That’s it. You click a button. It’s probably the same deal in other desktop environments.


    Yeah, it’s actually good. People use it even for trivial stuff nowadays; and you don’t need a Pix key to send money, only to receive it. (And as long as your bank allows you to check your account through an actual computer, you don’t need a cell phone either.)

    Perhaps the only flaw is shared with the Asian QR codes - scams are a bit of a problem; you could, for example, tell someone the transaction will be for one value and generate a code demanding a bigger one. But I feel like that’s less an issue with the system and more with the customer, given that the system shows you who you’re sending money to, and how much, before confirmation.

    I’m not informed on Tikkie and Klarna, besides one being Dutch and the other Swedish. How do they work?


    Brazil ended up with a third system: Pix. It boils down to the following:

    • The money receiver sends the payer either a “key” or a QR code.
    • The payer opens their bank’s app and uses it to either paste the key or scan the QR code.
    • The payer defines the value, if the code is not dynamic (more on that later).
    • The payer confirms the transaction, and an electronic voucher is issued.

    The “key” in question can be your cell phone number, your natural/legal person registry number (CPF/CNPJ), an e-mail address, or even a random number. You can have up to five of them.

    Regarding dynamic codes, it’s also possible to generate a key or QR code that applies to a single transaction. Then the value to be paid is already included.
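    Just to make the flow above concrete, here’s a tiny Python sketch. To be clear, this is not the real Pix API or the BR Code format; every name in it (Charge, pay, the example keys) is made up purely for illustration.

    ```python
    from dataclasses import dataclass
    from decimal import Decimal
    from typing import Optional

    # Toy model of a Pix charge. Purely illustrative; none of these names
    # come from any real Pix / BR Code specification.
    @dataclass
    class Charge:
        receiver_key: str          # phone number, registry number, e-mail, or random key
        amount: Optional[Decimal]  # fixed value for dynamic codes, None for static ones

    def pay(charge: Charge, amount_typed: Optional[Decimal] = None) -> str:
        """Payer side: paste the key or scan the QR code, confirm, get a voucher."""
        if charge.amount is not None:
            amount = charge.amount      # dynamic code: the value is already embedded
        elif amount_typed is not None:
            amount = amount_typed       # static code: the payer defines the value
        else:
            raise ValueError("a static code needs an amount from the payer")
        # the bank app would show the receiver and the amount here, before confirmation
        return f"voucher: paid R$ {amount} to {charge.receiver_key}"

    # Static key, payer types the value:
    print(pay(Charge("maria@example.com", None), Decimal("25.00")))
    # Dynamic code, value already included:
    print(pay(Charge("+55 11 91234-5678", Decimal("120.00"))))
    ```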

    Frankly the system surprised me. It’s actually good and practical; and that’s coming from someone who’s highly suspicious of anything coming from the federal government, and who hates cell phones. [insert old man screaming at clouds meme]



  • With two exceptions*, the names are from Roman mythology. So I’d expect the new planet to get a definitive name from the same template. (Please be Janus. It’s the gate of the solar system!)

    *Uranus is from Greek mythology, with no good Latin equivalent. Terra is trickier; you could argue that it fits the template for Latin and the Romance languages, but most other languages simply use local words for soil, with no connection to the goddess. (Who is also called Tellus, to add to the confusion.)


    That’s a great read.

    Those muppets (the alt right talking about antiquity) are a dime a dozen. You see a lot of them on 4chan, too. They look at the past with a “the grass was greener” mindset, cherry-picking stuff to justify their political bullshit, without a single iota of critical thinking.

    And they usually suck at understanding the past, as their cherry-picking doesn’t allow them to get a picture of how and why things happened. They obsess over the Roman Empire and Sparta, but when you talk about the Republic or Athens they go into “lalala I’m not listening lalala” mode - because both contradict their discourse of “we need strong rule, like people in the past, to fight against degeneracy”.

    They’ll also often screech if you mention why Octavian adopted the title of “imperator” (emperor) instead of “rex” (king). Because guess what: once they acknowledge why people in Republican Rome saw kings with disdain (kingdom = primitive system and breeding ground for tyranny), all their political discourse goes down the drain, so Octavian had to “sell” his stupid idea under a different name.

    Don’t tell them about the Aurelian Moors, by the way. Or Caracalla’s family background. Or do tell them, if you enjoy seeing them screech.


  • Thank you for the info. That’s… sad, really.

    Any further than this and they’ll call you a conspiracy theorist.

    I’m probably one now - it was inevitable that I’d connect GNOME’s obtuseness with Red Hat violating the GPL. It sounds a lot like IBM trying to make its own operating system, lacking the means to do so, and exploiting open source to do it for them.



  • Do you mind if I address this comment alongside your other reply? Both are directly connected.

    I was about to disagree, but that’s actually really interesting. Could you expand on that?

    If you want to lie without getting caught, your public submission should have neither the hallucinations nor the stylistic issues associated with “made by AI”. To do so, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.

    In other words, to lie without getting caught you’re getting rid of what makes the output problematic in the first place. The problem was never people using AI to do the “heavy lifting” and increase their productivity by 50%; it was people increasing their output by 900%, submitting ten really shitty pics or paragraphs that look a lot like someone else’s work, instead of one decent and original piece. Those are the ones who’d get caught, because they’re doing what you called “dumb” (and I agree): not proofreading their output.

    Regarding code, from your other comment: note that some Linux distros and *BSDs have banned AI submissions, like Gentoo and NetBSD. I believe it’s the same deal as with news or art.