Not sure if this is the best community to post in; please let me know if there’s a more appropriate one. AFAIK Aii@programming.dev is meant for news and articles only.
I don't hate AI; in fact I think it could be very useful. But I can't help but notice that its critics are mostly correct and its proponents are a bunch of fucking morons.
I hate "A.I." because it's not A.I.; it's an if statement stapled to a dictionary.
Also because I can’t write the short name of Albert without people thinking I’m talking about A.I.
Fr. Allen Iverson is in shambles on brand recognition.
Al uses AI because Al is AI
As far as I am concerned I am going to do the opposite of anything Sam Altman says. He is the ultimate snake oil salesman. He sold this trash to Microsoft, which I guess these days is pretty on brand.
There is no “AI”.
That deception is the main ingredient in the snake oil.
I don't understand the desire to argue against the term being used here when it fits both the common and academic usages of "AI"
There is no autonomy. It’s just algorithmic data blending, and we don’t actually know how it works. It would be far better described as virtual intelligence than artificial intelligence.
Does it run on something that’s modelled on a neural net? Then it’s AI by definition.
I think you’re confusing AI with “AGI”.
Whose definition?
why not both B)
see: previous discussion
Most arguments people make against AI are in my opinion actually arguments against capitalism. Honestly, I agree with all of them, too. Ecological impact? A result of the extractive logic of capitalism. Stagnant wages, unemployment, and economic dismay for regular working people? Gains from AI being extracted by the wealthy elite. The fear shouldn’t be in the technology itself, but in the system that puts profit at all costs over people.
Data theft? Data should be a public good where authors are guaranteed a dignified life (decoupled from the sale of their labor).
Enshittification, AI overview being shoved down all our throats? Tactics used to maximize profits tricking us into believing AI products are useful.
AI is just a tool like anything else. What's the saying again? "AI doesn't kill people, capitalism kills people"?
I do AI research for climate and other things and it’s absolutely widely used for so many amazing things that objectively improve the world. It’s the gross profit-above-all incentives that have ruined “AI” (in quotes because the general public sees AI as chatbots and funny pictures, when it’s so much more).
The quotes are because "AI" doesn't exist. There are many programs and algorithms being used in a variety of ways. But none of them are "intelligent".
There is literally no intelligence in a climate model. It’s just data + statistics + compute. Please stop participating in the pseudo-scientific grift.
The quotes are because "AI" doesn't exist. There are many programs and algorithms being used in a variety of ways. But none of them are "intelligent".
And this is where you show your ignorance. You're using the colloquial definition of intelligence and applying it incorrectly.
By definition, a worm has intelligence. The academic, or biological, definition of intelligence is the ability to make decisions based on a set of available information. It doesn’t mean that something is “smart”, which is how you’re using it.
“Artificial Intelligence” is a specific definition we typically apply to an algorithm that’s been modelled after the real world structure and behaviour of neurons and how they process signals. We take large amounts of data to train it and it “learns” and “remembers” those specific things. Then when we ask it to process new data it can make an “intelligent” decision on what comes next. That’s how you use the word correctly.
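To make that concrete, here's a deliberately toy sketch (nothing like a production model, just the bare "train on examples, then decide on new data" loop described above): a single artificial neuron learning a decision rule.

```python
# Toy sketch: one artificial "neuron" that learns a decision rule from
# example data, then makes a decision about new input.

def step(x):
    return 1 if x > 0 else 0

# Training data: (inputs, expected decision). Target behaviour: output 1 only if both inputs are 1.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1

# "Learning": nudge the weights whenever the neuron decides wrong.
for _ in range(20):
    for (x1, x2), target in examples:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - out
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

# "Inference": decide on data using what was learned.
print(step(weights[0] * 1 + weights[1] * 1 + bias))  # -> 1
print(step(weights[0] * 1 + weights[1] * 0 + bias))  # -> 0
```

Scale that idea up by billions of weights and you get the models people are arguing about; the mechanism of "learning" and "deciding" is the same.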
Your ignorance didn’t make you right.
lol ok buddy you definitely know more than me
FWIW I think you’re conflating AGI with AI, maybe learn up a little
The term AGI had to be coined because the things they called AI weren’t actually AI. Artificial Intelligence originates from science fiction. It has no strict definition in computer science!
Maybe you learn up a little. Go read Isaac Asimov
lol Again, you definitely know more than me
I always get such a kick reading comments from extremely overly confident people who know nothing about a topic that I’m an expert in, it’s really just peak social media entertainment
Are you talking about AI or LLM branded as LLM?
Actual AI is accurate and efficient because it is designed for specific tasks. Unlike LLMs, which are just fancy autocomplete.
Unlike LLMs, which are just fancy autocomplete.
You might keep hearing people say this, but that doesn’t make it true (and it isn’t true).
LLMs are part of AI, so I think you’re maybe confused. You can say anything is just fancy anything, that doesn’t really hold any weight. You are familiar with autocomplete, so you try to contextualize LLMs in your narrow understanding of this tech. That’s fine, but you should actually read up because the whole field is really neat.
Literally, LLMs are extensions of the techniques developed for autocomplete in phones. There’s a direct lineage. Same fundamental mathematics under the hood, but given a humongous scope.
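If that sounds abstract, here's roughly what phone-style autocomplete boils down to (a toy word-level sketch, not any real implementation): count which word tends to follow which, then suggest the most likely follower. An LLM replaces the raw counts with a learned neural model over a vastly longer context, but the task is still "predict the next token".

```python
# Toy phone-style autocomplete: count which word follows which,
# then suggest the most common follower.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(word):
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # -> 'cat' (most frequent word after 'the' in this corpus)
```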
LLMs are extensions of the techniques developed for autocomplete in phones. There’s a direct lineage
That’s not true.
Even LLMs are useful for coding, if you keep them in their autocomplete lane instead of expecting them to think for you
Just don't pay a capitalist for it; a tiny, power-efficient model that runs on your own PC is more than enough.
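For example, something like this works fine (a minimal sketch assuming you run Ollama locally with some small code model pulled; the model name below is just an example, and any other local runner has an equivalent API):

```python
# Ask a locally running model for a code completion, no cloud involved.
import json
import urllib.request

payload = {
    "model": "qwen2.5-coder:1.5b",  # example; use whatever small model you have pulled
    "prompt": "# Python: function that reverses a string\ndef reverse_string(s):",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```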
Yes technology can be useful but that doesn’t make it “intelligent.”
Seriously why are people still promoting auto-complete as “AI” at this point in time? It’s laughable.
Actual AI doesn’t exist
FTFY.
Data theft? Data should be a public good where authors are guaranteed a dignified life (decoupled from the sale of their labor).
I’ve seen it said somewhere that, with the advent of AI, society has to embrace UBI or perish, and while that’s an exaggeration it does basically get the point across.
I don’t think that AI is as disruptive as the steam engine, or the automatic loom, or the tractor. Yes, some people will lose their jobs (plenty of people have already) but the amount of work that can be done which will benefit society is near infinite. And if it weren’t, then we could all just work 5% fewer hours to make space for 5% unemployment reduction. Unemployment only exists in our current system to threaten the employed with.
You might be right about the relative impact of AI alone, but there are like a dozen different problems threatening the job market all at once. Added up, I do think we are heading towards a future where we have to start rethinking how our society handles employment.
A world where robots do most of the hard work for us ought to be a utopia, but as you say, capitalism uses unemployment as a threat. If you can’t get a job, you starve and die. That has to change in a world where we’ll have far more people than jobs.
And I don't think it's as simple as just having us all work fewer hours - every technological advancement that was supposed to lead to shorter working hours instead only ever led to those at the top pocketing the surplus labor.
Yes, I 100% agree with you. The ‘working less’ solution was just meant as a simple thought exercise to show that with even a relatively small change, we could eliminate this huge problem. Thus the fact that the system works in this way is not an accident.
Por que no los dos?
Because “AI” doesn’t exist. Hating something that doesn’t exist is just playing into the grift.
Because AI - in a very broad sense - is useful.
Machine Learning and the training and use of targeted, specialized inferential models is useful. LLMs and generative content models are not.
What! LLMs are extremely useful. They can already:
- Funnel wealth to the richest people
- Create fake money to trade around
- Deplete the world of natural resources
- Make sure consumers cannot buy computer hardware
- Poison the wells of online spaces with garbage content that takes 2s to generate and 2 minutes to read
Let's not forget about traditional AI, which has served us well for so long that we stopped thinking of it as AI.
What?
As in, I agree with your point. I just want to give a shoutout to the non-ML-based AI.
In the strictest sense of the technical definition: all of what you are describing are algorithmic approaches that are only colloquially referred to as “AI”. Artificial Intelligence is still science fiction. “AI” as it’s being marketed and sold today is categorical snake oil. We are nowhere even close to having a Star Trek ship-wide computer with anything even approaching reliable, reproducible, and safe outputs and capabilities that are fit for purpose - much less anything even remotely akin to a Soong-type Android.
In the strictest sense there is no technical definition because it all depends on what is “intelligence”, which isn’t something we have an easy definition for. A thermostat learning when you want which temperature based on usage stats can absolutely fulfill some definitions of intelligence (perceiving information and adapting behaviour as a result), and is orders of magnitude less complex than neural networks.
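To make the thermostat example concrete, here's a toy sketch (purely illustrative, not any real product): it "perceives" your manual adjustments and "adapts" its behaviour by predicting your preferred setpoint per hour of the day, no neural network in sight.

```python
# Toy "adaptive thermostat": remember manual adjustments per hour of day,
# then predict the preferred setpoint for a given hour by averaging them.
from collections import defaultdict

class AdaptiveThermostat:
    def __init__(self, default=20.0):
        self.default = default
        self.history = defaultdict(list)  # hour of day -> temps the user chose

    def record_adjustment(self, hour, temp):
        """User manually set `temp` at `hour`; remember it."""
        self.history[hour].append(temp)

    def setpoint(self, hour):
        """Predict the preferred temperature for `hour`."""
        temps = self.history.get(hour)
        return sum(temps) / len(temps) if temps else self.default

t = AdaptiveThermostat()
t.record_adjustment(7, 21.5)   # warmer in the morning
t.record_adjustment(7, 22.0)
t.record_adjustment(23, 17.0)  # cooler at night
print(t.setpoint(7))   # -> 21.75
print(t.setpoint(23))  # -> 17.0
print(t.setpoint(14))  # -> 20.0 (no data yet, fall back to the default)
```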
algorithmic approaches that are only colloquially referred to as “AI”. Artificial Intelligence is still science fiction
That’s why this joke definition of AI is still the best: “AI is whatever hasn’t been done yet.”
I have forgotten all working definitions of AI that CS professors gave except for this one 🙃
Putting aside that “AI” doesn’t exist…
For whom is it useful for? For what?
Under capitalism “usefulness” often means the destruction of humanity and the planet.
Example: The Role of AI in Israel’s Genocidal Campaign Against Palestinians
I am still waiting for evidence of that. Tried it for a while for general questions and for coding and the results were at best meh, and most of all it was not faster than traditional search.
Even so, if it was really useful, it would still not be worth the fact that it is based on stolen data and the impact to the environment.
AI is a super broad field that encompasses so many tech. It is not limited to the whatever the tech CEOs are pushing.
In this comment section alone, we see a couple examples of AI used in practical ways.
On a more personal level, surely you’d have played video games before? If you had to face any monster / bot opponents / etc, those are all considered AI. Depending on the game, stages / maps / environments may be procedurally generated - using AI techniques!
There are many more examples (e.g. pathfinding in map apps, translation apps); it's just that we are all so familiar with them that we stopped thinking of them as AI.
So there is plenty of evidence of AI's usefulness.
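For the pathfinding case, the unglamorous core is something like this (a toy breadth-first search on a grid; real map apps and game bots use fancier variants like A*, but the idea is the same):

```python
# Minimal grid pathfinding (BFS): route from S to G around the '#' walls.
from collections import deque

grid = [
    "S..#.",
    ".#.#.",
    ".#...",
    ".#.#G",
]

def find(ch):
    for r, row in enumerate(grid):
        if ch in row:
            return (r, row.index(ch))

start, goal = find("S"), find("G")
queue = deque([start])
came_from = {start: None}

while queue:
    r, c = queue.popleft()
    if (r, c) == goal:
        break
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                and grid[nr][nc] != "#" and (nr, nc) not in came_from):
            came_from[(nr, nc)] = (r, c)
            queue.append((nr, nc))

# Walk back from the goal to recover the route.
path, node = [], goal
while node is not None:
    path.append(node)
    node = came_from[node]
print(path[::-1])  # list of (row, col) steps from S to G
```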
Langton’s ant can procedurally generate things, if you set it up right. Would you call that AI?
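For reference, here is the entire "algorithm" (a quick sketch of the standard two rules): no learning, no model, yet it procedurally generates surprisingly complex patterns, including an endless "highway" after roughly ten thousand steps.

```python
# Langton's ant: on a white cell turn right, on a black cell turn left;
# flip the cell's colour and step forward. That's all of it.
black = set()       # coordinates of black cells; everything else is white
x, y = 0, 0
dx, dy = 0, 1       # facing "up" (y increases upward)

for _ in range(11000):
    if (x, y) in black:
        dx, dy = -dy, dx      # turn left
        black.remove((x, y))
    else:
        dx, dy = dy, -dx      # turn right
        black.add((x, y))
    x, y = x + dx, y + dy

print(len(black), "black cells after 11000 steps")
```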
As for enemies in gaming, it got called that because game makers wanted to give the appearance of intelligence in enemy encounters. Aspirationally cribbing a word from sci-fi. It could just as accurately have been called “puppet behavior”… more accurately, really.
The point is “AI” is not a useful word. A bunch of different disciplines across computing all use it to describe different things, each trying to cash in on the cultural associations of a term that comes from fiction.
deleted by creator
I think what people are struggling to articulate is that, the way AI gets thrown around now, it’s basically being used as a replacement for the word “algorithm”.
It's obfuscating the truth that this is all (relatively) comprehensible mathematics. Even the black box stuff. Just because the programmer doesn't know each step the end program takes doesn't mean they don't know the principles behind how it was made, or didn't make deliberate choices to shape the outcome.
There's some very neat mathematics, yes, and an utterly staggering amount of data and hardware. But at the end of the day it's still just a (large) algorithm. Calling it AI is dubious at best, and con-artistry at worst.
Fair enough. I was using the new colloquial definition of AI, which actually means LLMs specifically.
I think the broader AI, which includes ML and all your other examples, is indeed very useful.
I mean, I find the tech fascinating and probably would like it, except that I hate the way it was created, the way it is peddled, the things it is used for, the companies who use it, the way it “talks”, the impact it has had on society, the impact it has on the environment, the way it is monetised, and the companies who own it.
And all that makes it difficult to “just appreciate the tech”
i was a vocal synth nerd before i was a fedi/foss nerd. we’ve been doing ai since before the ai bubble, and i think vocal synths are a good example of ethical ai.
vocal synths are still a creative tool where you compose the music, lyrics and expression yourself, but the ai engine makes the voice more realistic sounding. you purchase “voice banks” which are effectively training data for a single voice and this voice bank comes from a “voice provider” who is a paid singer that will record samples for the vocal synth engine. a lot of voice providers request to have the voice bank “characterized” to sound different from themselves, and the vocal synth company will do so. compare KAF to KAFU CEVIO.
this is a process based entirely on consent, something openai and the rest of them lack, they just send out an army of scrapers to take anything and everything they can get their hands on, consent be damned.
actually speaking of KAF, i was excited because KAFU was coming to synth v, since i don't have CEVIO (and don't speak japanese). but unfortunately, KAFU SV was cancelled, most likely because the synth v ai engine made her sound too much like herself and they couldn't modify the voice bank to sound different enough. at least, that's the prevailing theory.
I hate "AI" because it's been built on the forced exploitation of untold millions of artists and creative laborers, without even so much as consent, let alone compensation…
Sounds to me like you hate capitalism, not some ones and zeros
I do hate unregulated capitalism.
But that’s not the only problem, even people in the non-profit space, as well as the supposed “communists” of the CCP in China are using and abusing machine learning techniques for the purposes of surveillance, oppression and exploitation.
It’s not the technology’s fault, obviously. But at that point this becomes a bullshit “guns don’t kill people, people kill people” argument.
You hate this narrow use of AI in the commercial space. AI is so much larger and is used in many more amazing things that actually improve humanity than just making funny pictures and chatbots to squeeze more profit out of consumers. I know this because I've researched AI for climate for a long time now.
forced exploitation of untold millions of artists and creative laborers, without even so much as consent, let alone compensation…
In this case, is it AI that you truly hate?
I think this comment said it best.
I’m neutral-positive toward local AI, not so much toward Clawd-style agents impersonating humans on the web
With openclaw and moltbook recently, the threat of them taking many white-collar jobs has shaken me to the core. My job may be gone in the next few years, and I do AI research directly…
In other words automation
“AI is whatever hasn’t been done yet.”
Grifter theory right there. Trying to retro-fit “AI” onto every past technology.
The point is that “AI” has never existed. It becomes more and more obvious as grifters pump out more and more fake “AI”.
please let me know if there’s a more appropriate one.
!fuck_ai@lemmy.world maybe
Nah those guys hate (gen)AI because it’s (gen)AI, or for other reasons that are ultimately intrinsic to the tech such as the intellectual property aspect.
I agree that it’s foolish to hate tech per se. I think lots of people wind up promoting the grift of “AI” through misguided opposition.
But that's not everybody in that comm. People hate "AI" for a variety of reasons.