



I’m afraid what you wrote was not purely what you state here, and to the extent it was, poorly so.
I don’t think I’m the one here for whom the message is lost.


Ha, ok. Well I think your response is rude, meaningless, petulant, and a waste of space.
But I’m really glad we have people like you on social media to remind us what it’s all about.


Many, myself included, may not like to hear this, but I think it’s the bitter truth.
For better or worse, the majority like this technology. AI companies have stuck the landing in a sales sense.
For those who find it cringey or offensive or whatever, we may have to get used to being black sheep (even more).


Huh, didn’t know. Thanks!


Techno feudalism … seems plain and simple to me.
Our independent value and sustainability is no longer a given.
In a monopolised AI world (and how can it be anything other than a big tech monopoly) … you give yourself over, as training data, in exchange for permission to survive … and rely on the AI trained on your data.
Let’s be real … big tech cornered us over the past couple of decades. And now they’re trying to grab us by the balls. It’s happening fast. And most don’t have the philosophical agility to keep up with the implications.


From what I’m seeing, soooo many are naive to this dynamic. They think of it like it’s the latest nifty app and not the directed disruption of the labour market that it is.
Almost like thinking and social awareness have already been outsourced to big tech’s social media empire and this is just the next step.


I think they mean in parallel, as in the government steps in and regulates with guarantees etc, not that these reforms would come from the AI itself.


I keep saying that AI is the death of the Internet as we know it. It’s just no longer the same thing at all.
Completely flipping our assumptions and questioning everything we do on it should probably be the default stance.


Also, Siri, Alexa and Cortana were seen as “intelligent” at the time, as well (or were supposed to be seen, depending on who you ask).
Intelligent for the time, sure, but were they ever pitched as doing more than a secretary who never encroaches on or gets involved with your actual job and cognitive skills? Because that’s the divide being enforced: women for the menial, dumb tasks and men for the serious, difficult, actually valuable and important stuff.


Not blaming anyone, this is social commentary.
But like the neutral “it” is right there.
In a world that’s both charged around gender and pronoun usage, and focused on the nature and value of LLMs … I think it’s weird that there isn’t more common pushback enforcing the non-human neutral, for the simple reason that it’s an objective fact amidst a swampy pool of (mis-)information synthesis.
A little like the Bechdel test, I feel like it’s the casualness and indifference around this gender bias (at least at the moment) that’s interesting and telling.


Couldn’t help but notice the casual gendering of Claude to “he” as well.
Someone somewhere made the important observation not long ago that computer assistants tended to be gendered female when more like a secretary (Siri and Alexa) but now that AIs are “intelligent” and powerful … Claude now has to be a male.
Especially weird (and telling?) when it is objectively gender neutral as it’s not human.


Most notable part for me in the article was not the AI stuff … but that Atlassian has never been profitable.
Not surprising for a tech company. But for one as big and kinda foundational in the service it provides … I found it surprising. Imagine if MS or Apple or Google were never profitable and companies were just entirely reliant on their services!
Couple that with how little love anyone has for Jira/Confluence … and yeah … good luck with that, Atlassian.


I think it’s a great lesson in how good people can create and tolerate bad systems …
… how a bunch of clever and thoughtful people (academics) can walk into creating a dumb system which they simultaneously hate or disagree with, and don’t know how to effectively change or fix.
Worth studying IMO as a case study on these general problems. My understanding is that a manipulative capitalist kicked it off by appealing to academics’ egos, creating increasingly specialised and likely redundant journals (i.e. more subscriptions). And of course most academics know it’s dumb, but have no sense of collective action. And so humanity just stumbles along doing dumb shit.


I mean, it makes sense that it’s addictive, right?
I also suspect it’s one of those things that just naturally splits people. For some, the addictiveness and appeal just don’t make sense. For others it’s irresistible.
It’s part of the reason why I’m so doomer on the state of things, from a generally anti-AI/sceptical perspective. There’s just something compulsive that this kind of tool triggers in many people.


I mean kinda, yea … “brainfuck but good actually” is probably a succinct way of putting the idea.


I tried to go through the tutorial a year or so ago.
I can’t recall exactly where, but there’s a point at which doing something normal/trivial in an imperative language requires all sorts of weirdness in Uiua. They try to sell it as especially logical, but to me they came off as completely in a cult.
It’s this section, IIRC: https://www.uiua.org/tutorial/More Argument Manipulation#-planet-notation-
When they declare:
“And there you have it! A readable syntax juggling lots of values without any names!”
for
×⊃(+⊙⋅⋅∘|-⊃⋅⋅∘(×⋅⊙⋅∘)) 1 2 3 4
Which, if you can’t tell, is equivalent to
f(a,b,c,x) = (a+x)(bx-c)
With arguments 1, 2, 3, 4.
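For comparison, here is that same function written out in plain Python (my own translation of the formula above, not anything from the Uiua tutorial):

```python
# f(a, b, c, x) = (a + x) * (b * x - c)
# The formula the Uiua expression above is said to compute.
def f(a, b, c, x):
    return (a + x) * (b * x - c)

# With the tutorial's arguments 1, 2, 3, 4:
print(f(1, 2, 3, 4))  # (1 + 4) * (2 * 4 - 3) = 5 * 5 = 25
```

Four named arguments and one line of arithmetic, versus a string of modifiers juggling the stack into place.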
I wanted to like this, and have always wanted to learn APL or J (clear influences). But I couldn’t take them seriously after that.
The way I look at it, it’s either going to need some kind of collapse or we’ll all soon live in a techno-feudalist dystopia.
This is where I’m at. And I’m now thinking that techno-feudalism is where we are headed (and already are, TBH). I’ve just seen too many people exhibit gross acceptance of basically this destiny/outcome, to the point that the logical conclusion is that the groundwork for the transition was successfully laid decades ago.
I don’t want to be too doomer, but I fear the complacency we, or many, may have: the lack of willingness to dwell on what world we want for each other, the lack of values and conversations about them, the consumerism and doom-scrolling ©opium. Including, I’m sorry to say, presuming a collapse/reset is guaranteed. We may just end up serfs (again) because Facebook and Google were just too convenient in 2010!
How sure are you that the collapse is coming? Personally, I’m seeing people embrace this stuff without caring too much.
I’m starting to think that if there’s a bubble, it’s deeper than big tech. And if there’s a collapse, it may not be of the industry but of things many of us hold dear. I’m starting to think that sitting back and waiting for the collapse may be completely the wrong move, one many of us will regret.
And it’s what’s happening here too. AI is just corporate control and monopolisation with new tricks.
A shallow response I know, but … what the hell timeline is this!!??