It’s true, I fear AGI, not the current state of AI if it were to remain frozen and not improve at all. I am also not terribly afraid of climate change if the climate were to remain fixed at this point. Sure, we have lots of forest fires, and people are dying of heat, but it could get much worse.
I think maybe the root of our disagreement is that we’re appraising the current state of AI differently. I’m looking at AI now vs AI five years ago and seeing an orders-of-magnitude increase in how powerful it is – still not as good as a human, but no longer negligible – but you’re looking at both of these and rounding them to zero, calling it snake oil. Perhaps, in the Gartner hype cycle, you’re in the trough of disillusionment?
I don’t want to be a shill for big AI here, but I reject the idea that AI in its current state is useless (though I would agree it’s overhyped and probably detrimental to society overall). It’s capable of doing a lot of trivial labour that previously wasn’t automatable, including coding tasks and graphics. It can’t do any of this with great reliability, or anywhere near as well as a human expert, and it’s much worse in some areas than others (AI-written news articles are much worse than useless, for instance) – but it’s still turning out to be a productivity benefit (read: reduction in jobs) for those who know how to use it to its strengths. I think the “snake oil” aspect is when lay-people use it expecting it to be reliable or as good as a human – which is basically how big tech is pitching it.

Everyone here is either on the side of hating big AI companies or hating IP law. I proudly hate both.