polite leftists make more leftists

☞ 🇨🇦 (it’s a bit of a fixer-upper eh) ☜

more leftists make revolution

  • 2 Posts
  • 6 Comments
Joined 2 years ago
Cake day: March 2nd, 2024


  • It’s true, I fear AGI, not the current state of AI if it were to remain frozen and never improve. Likewise, I wouldn’t be terribly afraid of climate change if the climate were to remain fixed at this point. Sure, we have lots of forest fires, and people are dying of heat, but it could get much worse.

    I think maybe the root of our disagreement is that we’re appraising the current state of AI differently. I’m looking at AI now vs AI five years ago and seeing an orders-of-magnitude increase in how powerful it is – still not as good as a human, but no longer negligible – but you’re looking at both of these and rounding them to zero, calling it snake oil. Perhaps, in the Gartner hype cycle, you’re in the trough of disillusionment?

    I don’t want to be a shill for big AI here, but I reject the idea that AI in its current state is useless (though I would agree it’s overhyped and probably detrimental to society overall). It’s capable of doing a lot of trivial labour that previously was not automatable, including coding tasks and graphics. It can’t do that work with great reliability, or anywhere near as well as a human expert, and it’s much worse in some areas than others (AI-written news articles are much worse than useless, for instance), but it’s still turning out to be a productivity benefit (read: reduction in jobs) for those who know how to use it to its strengths. I think the “snake oil” aspect is when lay people use it expecting it to be reliable or as good as a human – which is basically how big tech is pitching it.


  • I think we’re looking at this from completely different angles if you are "hope"ful that AI will improve.

    Also, you’re looking at AI completely wrong if you’re analyzing its performance on traditional CS problems in terms of time complexity. Nobody credible is hoping that AI is going to be solving NP problems just by feeding the problem into its context window like a quarter into a vending machine.





  • I have a question about whether I’m allowed to be here. Here’s my stance:

    • As a CS person, I find the algorithms that run AI intellectually interesting.
    • The way AI is being used by society, to put people out of work and replace human-made work with slop, is very, very bad. (I support the SAG-AFTRA strikers.)
    • I think it’s especially bad that artists are being put out of work, and that big companies think copyright should protect Mickey Mouse but not starving artists. This is where most of my “fuck AI” drive comes from.
    • I think AI could eventually be made to not be slop, with some future new algorithms, but we’re not there yet. It’s not theoretically impossible or anything, though. Doomer opinion: if AI actually becomes as powerful as the boosters say it will, it could be an x-risk for humanity. I’m not saying that to make AI sound really impressive; I just think we need to be cautious about it.
    • Currently, LLMs are sometimes useful, but only in the right context and when used properly. For instance, AI is pretty good at NLP. It’s also useful for explaining opaque C++ error messages. It’s not useful for ghiblification or summarizing search results or whatever it is Altman is trying to peddle. It might be able to help with protein folding and other pharmaceutical research.
    • The biggest reason AI is bad is that capitalism is bad. Without capitalism, we could focus on the actually good uses of AI.