
  • There’s a real vs. theoretical distinction. Turing machines are defined as having unbounded memory (an infinite tape). Running out of memory is a big issue in practice: it prevents real computers from solving problems that Turing machines would be able to solve.
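    A toy sketch of that gap (my own illustration; `count_down` is just a made-up example): this function is trivially computable on an idealized machine with unbounded memory, but a real machine’s finite stack gives up long before the math does.

    ```python
    import sys

    # Keep the interpreter's recursion guard modest so the limit is visible.
    sys.setrecursionlimit(10_000)

    def count_down(n):
        # Trivially computable in principle: just count down to zero.
        return 0 if n == 0 else count_down(n - 1)

    print(count_down(5_000))   # fine within the configured limit
    # count_down(10**9)        # RecursionError: a real machine runs out of stack
    ```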

    The halting problem, a bunch of problems involving prime numbers, and a bunch of other weird math problems are all things that can’t be solved with Turing machines. They can all sort of be solved in some circumstances (e.g. a TM can correctly classify many programs as either halting or not halting, but there are always edge cases it can’t figure out, even with infinite memory). The sketch below shows one way primes and halting tie together.
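    As a concrete example (my own sketch; `goldbach_search` is a made-up name): the program below halts iff Goldbach’s conjecture (every even number > 2 is the sum of two primes) is false, so a general halting decider would settle the conjecture.

    ```python
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    def goldbach_search():
        n = 4
        while True:
            if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
                return n   # counterexample found: the search halts
            n += 2         # conjecture holds so far: keep searching, maybe forever

    # Don't actually call it: as far as anyone has checked, it never halts.
    ```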

    From what I remember, most researchers believe that human brains are Turing complete. I’m not aware of any class of problem that humans can solve but that we think is unsolvable by a sufficiently large computer.

    You’re right that quantum computers are Turing complete. They’re just the closest practical thing I could think of to something beyond it. They often let you knock down the big-O relative to regular computers (see the sketch below). That was my point, though: we can describe something that goes beyond TC (like “it can solve the halting problem”), but there don’t seem to be any real examples of one.
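    For a sense of the big-O improvement (illustrative numbers only, my own sketch): Grover’s algorithm searches an unstructured list of N items with on the order of √N oracle queries, vs N classically.

    ```python
    import math

    for n in (10**3, 10**6, 10**9):
        classical = n                                    # worst case: inspect every item
        grover = math.ceil(math.pi / 4 * math.sqrt(n))   # ~ (pi/4) * sqrt(N) queries
        print(f"N={n:>10}: classical ~{classical:>10}, Grover ~{grover}")
    ```

    That’s a complexity improvement, not new computability.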



  • Feelings are certainly real. That doesn’t mean they provide any evidence beyond the existence of the feeling itself. The standard thought experiment here is dreams. In a dream, everything I feel can be completely convincing, and I have no way to know it’s a hallucination. Once I wake up, that reality becomes clear, and I know that the feelings I was 100% certain of a few moments ago were false. That suggests that even complete certainty in our feelings is not indicative of underlying truth.

    The extra-dimension thing is a bit tricky. The standard 3+1 are widely accepted. There are several conjectures that involve more dimensions, but we haven’t found evidence to support them. All of those are still physical dimensions. They generally fall into two categories: testable and not testable.

    The non-testability is why everyone looks down on string theorists. Their models “explain” everything by piling on more and more dimensions, but none of it is testable.

    Since none of the dimensions above 4 are measurable, I’m much more comfortable believing they don’t exist than that they do. I don’t see why it would make sense to fill a void of non-knowledge with arbitrary guesses. I don’t see a problem in not knowing if it’s possible for AIs (or humans) to be conscious.



  • I’m not talking about a precise definition of consciousness; I’m talking about a consistent one. Without a definition, you can’t argue that an AI, a human, a dog, or a squid has consciousness. You can proclaim it, but you can’t back it up.

    The problem is that I have more than a basic understanding of how an LLM works. I’ve written NNs from scratch, and I know that we model perceptrons loosely after neurons (see the sketch below).
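    A from-scratch perceptron, as a minimal sketch of what that modeling looks like (my own illustration): a weighted sum plus a threshold, loosely analogous to a neuron “firing” once stimulation crosses a threshold.

    ```python
    def predict(weights, bias, inputs):
        # Weighted sum of inputs plus bias, through a step activation.
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if activation > 0 else 0

    def train(data, epochs=25, lr=0.1):
        # Classic perceptron learning rule: nudge weights toward the right answer.
        weights = [0.0] * len(data[0][0])
        bias = 0.0
        for _ in range(epochs):
            for inputs, target in data:
                error = target - predict(weights, bias, inputs)
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # Learn the AND function.
    and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b = train(and_data)
    print([predict(w, b, x) for x, _ in and_data])   # expect [0, 0, 0, 1]
    ```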

    Researchers know that there are differences between the two, and we can generally model away any given difference (many researchers do exactly that; see the sketch below). But no researcher, scientist, or philosopher can tell you what critical property neurons may have that enables consciousness. Nobody actually knows, and people who claim to know are just making stuff up.
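    For example (my own sketch, with illustrative rather than physiological parameters), a leaky integrate-and-fire model adds a property that plain perceptrons lack: spiking behavior over time.

    ```python
    def simulate_lif(input_current, threshold=1.0, leak=0.1, dt=1.0):
        potential = 0.0
        spikes = []
        for t, current in enumerate(input_current):
            # Membrane potential integrates the input and leaks toward rest.
            potential += dt * (current - leak * potential)
            if potential >= threshold:
                spikes.append(t)   # the neuron "fires"
                potential = 0.0    # and resets after the spike
        return spikes

    print(simulate_lif([0.3] * 20))   # steady input -> periodic spiking
    ```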