AIs are wonderful. They provoke wonder.
AIs are marvellous. They cause marvels.
AIs are fantastic. They create fantasies.
AIs are glamorous. They project glamour.
AIs are enchanting. They weave enchantment.
AIs are terrific. They beget terror.
The thing about words is that meanings can twist just like a snake, and if you want to find snakes look for them behind words that have changed their meaning.
No one ever said AIs are nice.
AIs are bad.
Is this from Terry Pratchett?
Yep, from Lords and Ladies
GNU STP.
If cats looked like frogs we’d realize what nasty, cruel little bastards they are. Style. That’s what people remember.
Edit: FYI this is from Terry Pratchett’s Lords and Ladies, the same book that OP paraphrased
Puking on the carpet, dropping dead things at your feet, licking at you, drawing your blood with sharp claws. Imagine a long slimy toad-lizard with those sharp claws, behaving like that.
@[email protected]
Someone is a Pratchett fan.
Senpai remembered me :3
AIs are a tool, at least currently, and depend on the way you use them like any other tool. Future AGI would be a whole other dangerous can of paperclips, but we don't have that yet.
Congrats on being one of today’s lucky 10,000!
I never said I didn't recognise the quote from Terry Pratchett's Lords and Ladies (which, incidentally, shares many interesting parallels with The Witcher - namely, the elves being evil/not good, the elves having space-bending powers, the elves coming from another world whose portal is opened by a young girl, and the elves being associated with snow/frost. Like, I'm not saying Andrzej Sapkowski was definitely inspired by it, but it's an interesting coincidence at least.)
Edit: I didn’t click your link, for your information
This is the internet. No one cares if you know something or not. You don’t have to be defensive about it.
GNU Sir pTerry.
AI at this stage is just a tool. This might change one day, but today is not that day. Blame the user, not the tool.
AI and ML were being used to assist in scientific research long before ChatGPT or Stable Diffusion hit the mainstream news cycle. AIs can be used to predict all sorts of outcomes, including ones relevant to climate, weather, and even medical treatment. The university I work for even has a funded PhD program looking at using AI algorithms to detect cancer better; I found out because one of my friends is applying for it.
The research I am doing with AI is not quite as important as that, but it could shape the future of both cyber security and education, as I am looking at using LLMs for teaching cyber security students about ethical hacking and security. Do people also use LLMs to hack businesses or government organisations and cause mayhem? Quite probably, and they definitely will in the future. That doesn't mean the tool itself is bad, just that some people will inevitably abuse it.
Not all of this stuff is run by private businesses either. A lot of work is done by open source devs improving publicly available AI and ML models in their spare time. Likewise, some of this stuff is publicly funded through universities like mine. There are people way better than me out there using AIs for all sorts of good things, including stopping hackers, curing patients, teaching the next generation, and monitoring climate change. Some of them have been doing it for years.
I was just making a clever reference
The problem is that some people, like me, won't get that reference and will instead take the post as a sincere claim that AIs are universally bad. A lot of people already think this way, and it's hard to know who believes what.
The problem is that people selling LLMs keep calling them AI, and people keep buying their bullshit.
AI isn’t necessarily bad. LLMs are.
LLMs have legitimate uses today even if they are currently somewhat limited. In the future they will have more legitimate and illegitimate uses. The capabilities of current LLMs are often oversold though, which leads to a lot of this resentment.
Edit: also LLMs very much are AI (specifically ANI) and ML. It’s literally a form of deep learning. It’s not AGI, but nobody with half a brain ever claimed it was.
No they don't. The only thing they can be somewhat reliable for is autocomplete, and the slight improvement in quality doesn't compensate for the massive increase in cost.
In the future they will have more legitimate and illegitimate uses

No. Thanks to LLM peddlers being excessively greedy and saturating the internet with LLM-generated garbage, newly trained models will be poisoned and will only get worse with every iteration.
The capabilities of current LLMs are often oversold

LLMs have only one capability: to produce the most statistically likely token after a given chain of tokens, according to their model.
Future LLMs will still only have this capability, but since their models will have been trained on LLM-generated garbage, their results will quickly diverge from anything even remotely intelligible.
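(For the curious: that "single capability" is easy to see for yourself. Below is a minimal sketch, assuming the Hugging Face transformers library and the small, publicly available GPT-2 model; the prompt is just my own example. It prints the model's five most likely next tokens.)

```python
# Minimal sketch: what "most statistically likely next token" means.
# Assumes the Hugging Face `transformers` library and the small GPT-2
# model; any causal LM would do.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The thing about words is that meanings can twist just like a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The model's entire output is a probability distribution over the
# next token; everything else is built by sampling from it repeatedly.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {p.item():.3f}")
```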
This is false. Anyone who has used these tools for long enough can tell you this is false.
LLMs have been used to write computer code, craft malware, and even semi-independently hack systems with the support of other pieces of software. They can even grade students' work and give feedback, though it's unclear how accurate that will be. As someone who actually researches the use of both LLMs and other forms of AI, I can tell you that you are severely underestimating their current capabilities, never mind what they will be able to do in the future.
I also don't know how you came to the conclusion that hardware performance is always an issue, given that LLM model sizes vary immensely, as do their performance requirements. There are LLMs that can run, and run well, on an average laptop or even a smartphone. It honestly makes me think you have never heard of the LLaMA family of models, including TinyLlama, or similar projects.
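(To make the "runs on a laptop" point concrete, here is a hedged sketch assuming the llama-cpp-python bindings; the GGUF file name is a placeholder for whatever quantised TinyLlama build you download, not a path from this thread.)

```python
# Sketch: running a small quantised model on ordinary consumer hardware.
# Assumes the `llama-cpp-python` package; the model path is a placeholder
# for a locally downloaded TinyLlama GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="tinyllama-1.1b-chat.Q4_K_M.gguf", n_ctx=2048)

out = llm("Explain what a port scan is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```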
Future LLMs will still only have this capability, but since their models will have been trained on LLM-generated garbage, their results will quickly diverge from anything even remotely intelligible

You can filter the data you get from the internet down to websites archived before LLMs were even invented as a concept. This is trivial to do for some data sets. Some data sets used for this training were already created without any LLM output (think about how the first LLM was trained).
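(The filtering idea is simple enough to sketch. The record format below is hypothetical, just to show the shape of it; a real pipeline would read archive metadata such as Common Crawl snapshot dates.)

```python
# Sketch of date-based filtering: keep only documents archived before
# LLM output started appearing on the web. The record format here is
# hypothetical; real pipelines would read e.g. WARC crawl metadata.
from datetime import datetime

CUTOFF = datetime(2019, 1, 1)  # roughly pre-GPT-2; choose your own line

def keep(record: dict) -> bool:
    """Keep a document only if it was crawled before the cutoff."""
    return datetime.fromisoformat(record["crawl_date"]) < CUTOFF

corpus = [
    {"crawl_date": "2014-06-01", "text": "an old forum post"},
    {"crawl_date": "2023-03-15", "text": "possibly synthetic text"},
]
clean = [r for r in corpus if keep(r)]
print(len(clean))  # -> 1
```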
Sources:
Clearly, based on your responses, you don’t think AI/LLMs are universally bad. And anyone who is that easily swayed by what is essentially a clever shitpost likely also thinks the earth is flat and birds aren’t real.
You know. Morons.
I appreciate it <3
Oh, thank you. I forgot. Sometimes I can’t remember.