• 10 Posts
  • 10 Comments
Joined 3 years ago
Cake day: June 10th, 2023




  • I don’t feel like LLMs are conscious, and I act accordingly as though they aren’t, but I do wonder about the confidence with which the notion can be totally dismissed. Assuming that they are conscious seems like a leap, but since we don’t really know exactly what consciousness is, it’s difficult to decide rigorously what does and doesn’t belong in the category. The usual explanation for why LLMs are not conscious, and indeed what I usually say myself, is something like your “they just output probability based on current context”, or some variation of “they’re just guessing the next word”. But is that definitely nothing like what we ourselves do and then call consciousness? And even if it is definitively quite unlike anything we do, does that dissimilarity alone suffice to declare LLMs not conscious? Is ours the only possible example of consciousness, or is the process that drives LLM behaviour possibly just another form of it, another way of arriving at it?

    There’s evidently something that triggers an instinctual categorising: most wouldn’t classify a rock as conscious, and would find my suggestion that “maybe it’s just consciousness in another form than ours” a pretty weak way to assert that it is. Then again, there’s quite a long way between a literal rock and these models, running on specific rocks arranged in a particular way, producing text in a way that’s really similar to the human beings we all collectively tend to agree are conscious. Is being able to summarise the mechanisms that underpin the behaviour whose output or manifestation looks like consciousness enough, on its own, to explain why it definitely isn’t consciousness? Because what if our endeavours to understand consciousness, and to find a biological basis for it in ourselves, bear fruit, and we can explain deterministically how brains and human consciousness work?

    In that case we could, if not totally predict human behaviour deterministically, then at least give a similarly good summary of how we produce those behaviours that look like consciousness. Would we at that point declare that human beings are not conscious either, or would we need a new basis upon which to exclude these current machine approximations of it?

    I always felt that things such as the Chinese Room thought experiment didn’t adequately deal with what I was driving at in the previous paragraph, and it seems to me that dismissals of machine consciousness on the grounds that LLMs are just statistical models that don’t know what they are doing miss a similar point. Are we sure that we ourselves are not mechanistically following complicated rules, just as neural networks and LLMs are, and that this is simply what the experience of consciousness actually is: an unconscious execution of rulesets? Before the current crop of technology renewed interest in these questions, when it all seemed a lot more theoretical and perennially decades off, I was comfortable with this uncomfortable thought. Now that we actually have these impressive models that have people wondering about the topic, I seem to be skewing more skeptical and less generous about ascribing consciousness. Suddenly the Chinese Room thought experiment, as a counter to whether these conscious-looking LLMs are really conscious, looks more convincing, but that’s not because of any new or better understanding on my part. I seem to be just shifting the goalposts when faced with something that does a better job of looking conscious than any technology I’d seen previously.




  • I’m always somewhat confused by this. I haven’t tried Linux since 2009, so maybe I just need to try it some more to appreciate what people mean by this. I’d say it was “fun” in so far as it was nice to have a challenge for a little while, but that was more incidental to it making my computer a useful machine for me. If it’s a better operating system that does its job efficiently and without problems, shouldn’t it be sort of… invisible, then? How can it be fun? I use my computer to do stuff, so for me an operating system is only noticeable to the extent that it is bad; if it isn’t bad, I won’t really be aware of it.



  • I think it carries some rhetorical weight. ICE is a political paramilitary organisation, and since it serves no legitimate civic or legal function outside of that purpose, it’s entirely wrapped up IN the politics behind it and the attempts to invoke some legitimacy through them. It is because of that political support from some of the populace that such a force can exist and do what it does without uniform discontent and disapproval from the population suffering under its activities. Those particular segments subscribing to those particular politics have, as part of their wider constellation of beliefs, admiration for and an idealistic appreciation of the traditional military and those who are or have been a part of it (as long as they’re quiet and don’t say anything inconvenient).

    When even serving or former military personnel get victimised, it does make it look at least a little bit worse to a wider range of people than it otherwise might. ICE supporters will certainly have ideological defences and rationalisations for this, and they’ll surely rapidly disown military personnel who don’t toe the line and attempt to discredit them, but it is at least a little bit inconvenient having to do this, compared to just cheering on the oppression as a righteous attack on undesirable elements, as any other victims would be considered.


  • You know, as with a lot of these tech advances that impinge upon privacy and put us at risk in the name of profit, the buy-in, the thing they’re offering in exchange, IS actually pretty worthwhile. This is extremely useful. It’s such a shame that all this cool Star Trek shit I would have been giddy about as a kid has been realised, but at a sinister and often hidden cost.

    Is there any way this can be done on local hardware? Would it achieve the same level of accuracy and sophistication in the progress notes? Because if this can be offered to the therapists who wanted it enough in the first place that they either knowingly or unwittingly sacrificed their patients’ privacy for it, maybe they can be given an alternative.