

Agreed. It’s a damn shame, because there are a lot of interesting ideas at play, lost in the miasma of geopolitics and acceleration. If only we could slow down, build/scale probabilistic hardware, and carefully test and validate which problems are best suited to each mode of compute.
But nah, let’s be rapacious, anthropomorphize “AI” into an awe-inspiring entity, and thrust it into all the orifices.






Right. Productivity tools, like AI or bloated frameworks, can both lead to mountains of slop. I don’t reactively take issue with AI, especially if it eventually produces better work, but I will always choose the more transparent approach. I take umbrage at deceit and will stay away from systems that seem to be careening toward manipulation and more hierarchical, gatekeeping bullshit.
We all need to contend with the possibility that these intelligent systems become far better than most people and adapt accordingly. It doesn’t mean we have to sacrifice our FOSS ideals, if that applies to you.
Though, I completely understand the reactivity, as we watch many people’s life trajectories become financially irrelevant, and the oligarchs prime the population for a return to manual labor, while dangling that utopian carrot.