counterpoint

Does it 🎯 creativity?
Do you want to save people’s lives by diagnosing respiratory diseases months before even the most competent doctors could, by analysing x-rays and CT scans?
Then yes, you need AI.
AI has its legitimate uses, and blindly treating it like the devil incarnate only hurts the many, many valid criticisms of AI.
Compassionate fucking Buddha do people—sorry, not people: tech nerds—ever not comprehend the notion of “context” and “common parlance” and such.
Here’s a thought: fuck the fuck right fucking off from a group literally named “fuck AI” if your fucking fee-fees are so fucking fragile you can’t fucking not be a fucking dork, M’kay?
Buh-bye, bozo.
I’m not OP but I guarantee you this meme is about generative AI and not the machine learning applications you’re referring to
Yeah true, but I’m really proud of the work my team does and very invested in it, so I will always come to its defence when people say AI in general is bad, even if they probably mean genAI.
It’s not just companies pushing “AI” hype that are dishonest, the term itself is dishonest: artificial neural networks today simply do not meet any reasonable definition of intelligence, and they won’t anytime soon.
Cheers to you for doing useful work, but why not call it something more accurate like computer vision or medical image computing?
spoiler
i guess maybe because calling things “AI” gets them funded? 😭
We’ve been using the term AI for a very long time now, and it’s a very generic term that covers a bunch of technologies. We’ve been talking about enemies in video games having AI for decades, and people don’t feel the need to correct that.
That is mostly because, at the time, in the limited context of video games, it was clear that AI was used to describe the behaviour of non-playable entities.
And also, I don’t think video game terms were ever taken to have scientific accuracy (at least I hope not) or, more importantly, ever tried to imply that these entities exhibited “intelligence”.
Now an entire subfield of statistics is being called AI, by virtue of the fact that we often abuse language when talking about computers or code (something Dijkstra was vehemently against in his fantastic note about teaching compsci). I don’t know why statisticians felt the need to hype up gradient descent by calling it “learning”, but here we are.
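For what it’s worth, the “learning” being objected to here really is just iterative optimisation. A minimal sketch in plain Python (a made-up 1-D least-squares example, purely illustrative) of what gradient descent actually does:

```python
# Fit y = w * x by gradient descent on mean squared error.
# "Learning" here is nothing more than repeatedly nudging w
# downhill along the gradient of the loss.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated by the true w = 2

w = 0.0    # initial guess
lr = 0.01  # learning rate (step size)

for _ in range(1000):
    # gradient of mean((w*x - y)^2) with respect to w
    grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # converges toward 2.0
```

No understanding, no intent: a loop that shrinks an error term. Whether that deserves the word “learning” is exactly the dispute in this thread.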
Now I know I am caricaturing, but the point I am trying to make is this: the cat is out of the bag. “AI” is not just an academic term anymore; it has been wilfully used to raise money and to sell anything and everything, and the unfortunate effect is that for a lot of people, AI = mostly LLMs. And I’d say that among these people, a number, me included, would like the term dropped entirely, both because of that abuse AND because of the suggestion that these systems exhibit intelligence.
I get that it sucks for you and you feel attacked if you do anything that has to do with machine learning or deep learning, but again, context is important, and this comm is pretty clearly against the slop generators (and the term AI altogether for the reasons mentioned), not necessarily all modern tools of statistical analysis and pattern recognition.
Eh, AI is a useful term to describe the subset of computer science that encompasses these more advanced processes such as machine learning, computer vision, LLMs, generative models, etc.
Arguing whether it’s “true intelligence” or not is just unproductive and pointless, like getting mad that almond milk isn’t really milk.
I concede there was a window of time where that was true. Too early, and AI had nothing to do with the current methods and meant a more symbolic approach. Too late (i.e. now), and the term has been pushed so hard by corpos (because the superficial semantic reading “AI” => “intelligence” makes for marketing hype) that for a lot of people, whether you like it or not, it means “ChatGPT”, and that has alienated them.
Unproductive? Probably (and for sure in the context of your work). Pointless? Absolutely disagree when companies force that vision of intelligence on us. This has a social impact.
Now maybe this is pointless to you, and that’s ok, but it’s not to a lot of other people.
Ok. Brb I’m going to go classify these 8.5M data points into 6 categories by hand.
Ah yes, something traditional programming could never achieve, automatically sorting data into buckets. It’s easy to forget how we went from clay tablets to ChatGPT with nothing in-between.
Ok go ahead. I’ll give you some labeled example data, and you can use traditional programming to sort these assorted text strings into the “supports premise”, “rejects premise”, and “neither supports nor rejects” buckets. Once traditional programming does that without BERT or some other classifier, I’ll put in a rec for the full data and you can come work at my company running a data science division.
Let’s go. There’s more to AI/ML than ChatGPT.
sorry I ain’t working for someone that spends their time calling people retarded in a tiny anti-AI sub because they don’t draw a bright enough line between different flavors of AI for your personal tastes. Thanks for the offer though, but I’ll have to politely refuse to prove to you the fact that programmatic sentiment analysis existed long before people used AI for it, and it worked just fine.
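For the record, the pre-ML approach being gestured at here is usually a lexicon rule system. A minimal sketch in plain Python (the word lists are toy examples I made up, not a real sentiment lexicon) of the kind of keyword-based stance classifier that predates neural models:

```python
# Toy lexicon-based classifier: counts stance-bearing keywords and
# buckets a string into the three categories from the thread above.
# Illustrative only; real systems used much larger curated lexicons.

SUPPORT_WORDS = {"agree", "true", "correct", "exactly", "yes"}
REJECT_WORDS = {"disagree", "false", "wrong", "nonsense", "no"}

def classify(text: str) -> str:
    words = [w.strip(".,!?") for w in text.lower().split()]
    support = sum(w in SUPPORT_WORDS for w in words)
    reject = sum(w in REJECT_WORDS for w in words)
    if support > reject:
        return "supports premise"
    if reject > support:
        return "rejects premise"
    return "neither supports nor rejects"

print(classify("Yes, exactly, I agree."))  # supports premise
print(classify("No, that is wrong."))      # rejects premise
```

Whether this kind of thing “worked just fine” at the scale and subtlety of 8.5M real-world strings is, of course, exactly what the two commenters are fighting about.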
Well that’s good. I was worried I was going to have to tell you that job offers on Lemmy aren’t serious, but you cleared that bar with a herculean bound! I’ll take your “refuse to prove” as “can’t meaningfully prove” without loss of generality and go about my day. It’s been real.
But what if I really want to take a shortcut into creating “art”, or making software? And what if I find thinking to be really tiring?
Please, I need to automate every aspect of my life because a lifetime of corporate propaganda convinced me that all that matters is “productivity” and “measurable outcomes”. Let me use AI, please.
Here’s a helpful flowchart to answer your question:

I don’t understand that nerd “flowchart” shit, I’ll ask ChatGPT to explain that to me.
Thanks anyways.
this is what gpt said:
It’s a joke flowchart.
Here’s what it’s “saying”:
- The question in the oval is “Do I need AI?”
- Regardless of any conditions or inputs, the flowchart leads directly to the answer “No.”
So the meaning is:
👉 According to this chart, the answer to “Do I need AI?” is always no.
It’s a humorous oversimplification suggesting that people often ask whether they need AI, but the chart claims the answer is straightforward and always negative.





