Do you want to save people’s lives by diagnosing respiratory diseases months before even the most competent doctors could, by analysing X-rays and CT scans?
Then yes, you need AI.
AI has its legitimate uses, and blindly treating it like the devil incarnate only hurts the many, many valid criticisms of AI.
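For what it’s worth, the kind of AI meant here is supervised image classification, not a chatbot. A rough sketch of the shape of such a model in PyTorch (hypothetical layer sizes and class labels, purely illustrative, not anyone’s actual diagnostic pipeline):

```python
import torch
import torch.nn as nn

# Toy convolutional classifier: a grayscale scan in, class scores out.
# Purely illustrative; real diagnostic models are far larger and clinically validated.
class ScanClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel X-ray / CT slice
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # collapse to one feature vector per image
        )
        self.classifier = nn.Linear(32, num_classes)      # e.g. healthy / pneumonia / other

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = ScanClassifier()
fake_batch = torch.randn(4, 1, 128, 128)      # four fake 128x128 scans
probs = model(fake_batch).softmax(dim=-1)     # per-class probabilities
print(probs.shape)                            # torch.Size([4, 3])
```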
Compassionate fucking Buddha do people—sorry, not people: tech nerds—ever not comprehend the notion of “context” and “common parlance” and such.
Here’s a thought: fuck the fuck right fucking off from a group literally named “fuck AI” if your fucking fee-fees are so fucking fragile you can’t fucking not be a fucking dork, M’kay?
Buh-bye, bozo.
I’m not OP, but I guarantee you this meme is about generative AI, not the machine learning applications you’re referring to.
Yeah, true, but I’m really proud of the work my team does and very invested in it, so I’ll always come to its defence when people say AI in general is bad, even if they probably mean genAI.
It’s not just the companies pushing “AI” hype that are dishonest; the term itself is: artificial neural networks today simply do not meet any reasonable definition of intelligence, and they won’t anytime soon.
Cheers to you for doing useful work, but why not call it something more accurate like computer vision or medical image computing?
Spoiler: i guess maybe because calling things “AI” gets them funded? 😭
We’ve been using the term AI for a very long time now, and it’s a very generic term that covers a bunch of technologies. We’ve been talking about enemies in video games having AI for decades, and nobody feels the need to correct that.
That is mostly because, at the time, in the narrow context of video games, it was clear that AI was used to describe the behaviour of non-playable entities.
And also, I don’t think video game terms were ever taken to have scientific accuracy (at least I hope not) or, more importantly, were ever meant to imply that these entities exhibited “intelligence”.
Now an entire subfield of statistics is being called AI, largely because we tend to abuse language when talking about computers or code (something that Dijkstra was vehemently against in this fantastic note about teaching compsci). I don’t know why statisticians felt the need to hype up gradient descent by calling it “learning”, but here we are.
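To be concrete about what that “learning” actually is under the hood: iterative numerical optimisation, nothing more mystical. A toy sketch (made-up numbers, a single parameter, purely illustrative):

```python
# Fit y ≈ w * x by gradient descent on the mean squared error.
# This is the whole trick behind "learning": nudge a parameter downhill.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]   # roughly y = 2x, with a bit of noise

w = 0.0      # initial guess for the single parameter
lr = 0.01    # learning rate (step size)

for _ in range(1000):
    # gradient of 0.5 * mean((w*x - y)^2) with respect to w
    grad = sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad            # one step against the gradient

print(f"fitted w ≈ {w:.3f}")  # converges to roughly 2.0
```

The model “learns” in the sense that a thermostat “decides”: it mechanically follows a gradient until the error stops shrinking.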
Now I know I am caricaturing, but the point I am trying to make is that the cat is out of the bag: “AI” is no longer just an academic term, it has been willfully used to raise money and to sell anything and everything, and the unfortunate effect is that for a lot of people, AI mostly means LLMs. And I’d say that among these people a number, me included, would like the term to be dropped entirely, both because of that abuse and because, in that context, it suggests these systems exhibit intelligence.
I get that it sucks for you, and that you feel attacked if you do anything that has to do with machine learning or deep learning, but again, context is important: this comm is pretty clearly against the slop generators (and against the term AI altogether, for the reasons mentioned), not necessarily against all modern tools of statistical analysis and pattern recognition.
Eh, AI is a useful term for the subset of computer science that encompasses these more advanced techniques: machine learning, computer vision, LLMs, generative models, etc.
Arguing whether it’s “true intelligence” or not is just unproductive and pointless, like getting mad that almond milk isn’t really milk.
I concede there was a window of time where that was true. Too early, and AI had nothing to do with the current methods, favouring a more symbolic approach instead. Too late (i.e. now), and the term has been pushed so hard by corpos (because the superficial reading “AI” => “intelligence” makes for great marketing hype) that for a lot of people it means, whether you like it or not, “ChatGPT”, and that has alienated them.
Unproductive? Probably (and for sure in the context of your work). Pointless? Absolutely disagree when companies force that vision of intelligence on us. This has a social impact.
Now maybe this is pointless to you, and that’s ok, but it’s not to a lot of other people.