• 1 Post
  • 6 Comments
Joined 1 year ago
Cake day: November 30th, 2024


  • There isn’t a mystery; there are just physicists being mystical for no justified reason.

    There is no evidence that quantum systems exist in two states until you look. Physicists just endlessly gaslight each other into believing that a clearly statistical theory, one that describes a stochastic process and thus only gives you probability distributions for the results, somehow has nothing to do with probability at all and only “collapses” into the probabilities when you look. It’s all a grand delusion.

    The probabilities are always there from the get-go. If you separate the quantum state into its real and imaginary parts, then translate to polar form, you will see that it contains two degrees of freedom: one is a vector of real-valued probabilities for the current configuration of the system, and the other is a real-valued vector of relational phases between the objects in the system.

    The latter evolves deterministically whereas the former evolves stochastically, and the stochastic evolution of the former can be influenced by the deterministic state of the latter.

    The update rule for a classical probability distribution is:

    • p⃗′ = Γp⃗

    Where p⃗ is your probability distribution and Γ is a stochastic matrix.

    The update rule for a quantum computer can literally just be expressed as:

    • p⃗′ = Γp⃗ + c⃗

    Where the additional c⃗ is a coherence term derived from a function of φ⃗, where φ⃗ is the deterministically evolving vector of relational phases. Quantum computers are not magic; they are just bits that evolve according to a modified stochastic rule, one that deviates from classical stochastic processes by the non-linear coherence term with its dependence upon the deterministic evolution of φ⃗, requiring you to keep track of both φ⃗ and p⃗. (Both are real-valued! Imaginary numbers aren’t magic either!)
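This split is easy to check numerically for a single qubit. In the sketch below (my own illustration, with the Hadamard gate as an arbitrary example unitary), Γ is built from the |U_ij|² and c⃗ from the cross terms of the Born rule, and the two pieces together reproduce the exact quantum probabilities:

```python
import numpy as np

# One qubit: Born probabilities after a unitary U split into a classical
# stochastic part Gamma @ p plus a phase-dependent coherence term c.
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate (example choice)

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # the |+> state
p = np.abs(psi) ** 2                                # probability vector p
Gamma = np.abs(U) ** 2                              # stochastic matrix of |U_ij|^2

# Coherence term: the interference cross terms of |U @ psi|^2. It depends
# only on psi_0 * conj(psi_1), i.e. on the relative phase.
cross = psi[0] * np.conj(psi[1])
c = 2 * np.real(U[:, 0] * np.conj(U[:, 1]) * cross)

p_new = np.abs(U @ psi) ** 2
print(np.allclose(p_new, Gamma @ p + c))   # True
print(p_new)                               # [1. 0.]: the interference does the work
```

Note that Γp⃗ alone would give the 50/50 classical answer; c⃗ is what sharpens it to a certain outcome.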

    There is no physical “collapse” of anything when you make a measurement. Since p⃗ is a probability distribution then you can perform a Bayesian knowledge update on it when you make a measurement, using Bayes’ theorem. Nothing is mysterious about that. That’s literally all it is. The quantum state ψ is complex-valued, meaning it represents two degrees of freedom in the system, one of those degrees of freedom is p⃗=|ψ|², and the other is arg(ψ)=φ⃗. When you perform a measurement, you ONLY have to update the degree of freedom associated with p⃗. You don’t have to touch the other, and this fact is guaranteed by U(1) gauge symmetry.
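Since p⃗ is an ordinary probability distribution, the measurement update is just Bayes’ theorem. A minimal sketch, with the detector error rate chosen arbitrarily for illustration:

```python
import numpy as np

# Measurement as a Bayesian knowledge update on the probability vector p.
# The 10% detector error rate is an assumption made up for this toy example.
prior = np.array([0.5, 0.5])        # p over configurations {0, 1}
likelihood = np.array([0.1, 0.9])   # P(detector reads "1" | configuration)

posterior = likelihood * prior      # Bayes' theorem...
posterior /= posterior.sum()        # ...then normalize
print(posterior)                    # [0.1 0.9]
```

Nothing here touches the phase degree of freedom; only p⃗ is conditioned on the outcome.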

    There is only a mystery if you delude yourself into believing that quantum mechanics is not just a non-classical probabilistic theory. If you just accept it from the get-go, then the only difficult question is how classical stochastic dynamics arises from quantum stochastic dynamics on macroscopic scales, but this is already solved via decoherence. You can prove that the “+ c⃗” becomes less relevant on macroscopic scales.
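The decoherence claim can be illustrated numerically: averaging the qubit coherence term over a randomized relative phase drives it to zero, leaving only the classical stochastic rule. A toy sketch (my own, using the Hadamard as the example unitary):

```python
import numpy as np

# Decoherence as phase randomization: average the coherence term over a
# uniformly random relative phase and it washes out, so p' ~ Gamma @ p.
rng = np.random.default_rng(0)
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def coherence(phi):
    """Interference term for psi = (|0> + e^{i phi}|1>)/sqrt(2)."""
    cross = 0.5 * np.exp(-1j * phi)            # psi_0 * conj(psi_1)
    return 2 * np.real(U[:, 0] * np.conj(U[:, 1]) * cross)

phases = rng.uniform(0, 2 * np.pi, 100_000)
avg_c = np.mean([coherence(phi) for phi in phases], axis=0)
print(np.abs(avg_c).max())   # near 0: coherence gone, dynamics look classical
```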

    But if you delude yourself into believing that quantum mechanics is not a stochastic theory at all, when it clearly is, then decoherence doesn’t suffice, because you would make the mistake of interpreting the entirety of ψ, both of its degrees of freedom, as physical, meaning you would be interpreting the probability distribution p⃗ as physical. If you interpret a probability distribution as physical, then you end up interpreting the branching paths in the probability tree as physically branching paths, even though that’s clearly not what we observe, as we only ever observe a single outcome, and decoherence does not get you to a single outcome, only a classical distribution of outcomes.

    This transformation of p⃗ into a physical object is then “resolved” either by proposing p⃗ “collapses” down into a definite outcome when you look at it based on values of p⃗, or by claiming that the observer themselves physically branches as well into a multiverse. You end up with two absurdities because p⃗ is not a physical object. It’s a probability distribution of the system’s configuration.

    The mass delusion that quantum mechanics has nothing to do with statistics or probability theory has some of its origins in Bell’s theorem, where Bell proved that it is impossible for the system to have an ontic state, when you’re not looking at it, that is compatible with special relativity; only the measurement readouts, obtained by marginalizing out everything you aren’t looking at, are compatible with special relativity.

    This led physicists to then argue for dropping anything from the model that is not the measurement readouts, so the only ontic states are the measurement readouts and ψ. They thus have to claim ψ is the physical state of the system when you are not looking or else their theory no longer has objective reality in it at all.

    But this argument fails for a simple reason. When special relativity was first introduced by Einstein in 1905, it was mathematically equivalent to a theory without relativity proposed by Hendrik Lorentz in 1904. Hence, we know for a fact that a theory does not need to be relativistic to make all the same empirical predictions as relativity. Thus, you can get around Bell’s theorem and build a model with ontic states in a similar way and make all the same predictions as relativistic quantum mechanics, such as Hrvoje Nikolic’s model.

    I don’t actually advocate for such models, but their existence demonstrates a very simple fact: a universe where particles have ontic states when you’re not looking at them is perfectly compatible with a universe that is relativistic when you marginalize out those ontic states and only look at measurement readouts. If the dynamics are stochastic, then this also explains why we do not include the ontic states in the model: not because they don’t exist, but because the stochastic dynamics prevents you from tracking them in the model.

    Hence, you don’t need the ontic states in the model to admit that they still exist in physical reality. The particles do have real values in the real world when you’re not looking. The moon does exist when you’re not looking at it. The cat is either dead or alive, not both at the same time, before you open the box. It is just that the theory is a stochastic theory, a probabilistic theory, which does not track those states, only probability distributions for those states.

    We’ve known this for ages, but people are obsessed with using quantum mechanics as a springboard for their own mysticism, and so they want to pretend it is “mysterious” to justify their beliefs in multiverses, some special role for “consciousness,” or what-have-you. If you just accept the bloody obvious reality that the theory gives you a statistical distribution because it is a statistical theory, at least in part (φ⃗ evolves deterministically), then decoherence is the end of the story. A mystery only seems to remain after decoherence if you rejected from the start that the theory was statistical.

    Although, the physicist Jacob Barandes has pointed out that the theory can be interpreted as a purely statistical theory if you drop the Markov assumption. The deterministic property of the system, φ⃗, then becomes what he calls a “hidden Markov memory,” not a real physical property. The Markov assumption is the idea that the dynamics of a system depend solely on its present state. A non-Markovian system is one where there is also dependence upon its past states. A “hidden Markov model” is one where you include the past state as a hidden memory in the present to make the model appear Markovian. Barandes proves that you can interpret this deterministic property as just one of these hidden Markov memories, and you can drop it by fitting the system’s statistical dynamics to non-Markovian stochastic laws.
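The bookkeeping trick here is standard for classical stochastic processes too. A toy example (my own, not Barandes’s): a bit whose flip probability depends on its previous value is non-Markovian on the visible bit alone, but becomes Markovian once you enlarge the state to carry the previous value along as a memory:

```python
import numpy as np

# Toy classical illustration of "hidden memory" bookkeeping: a process
# that is non-Markovian on its visible bit becomes Markovian once the
# previous bit is folded into the state. Probabilities are made up.
rng = np.random.default_rng(1)

def step(current, previous):
    # The transition depends on the PAST state too -> non-Markovian in
    # `current` alone.
    p_flip = 0.9 if current == previous else 0.1
    return 1 - current if rng.random() < p_flip else current

# Markovianize by enlarging the state to the pair (current, previous):
state = (0, 0)
for _ in range(10):
    nxt = step(*state)
    state = (nxt, state[0])   # the memory rides along inside the state
print(state)                  # evolution is Markovian on the pair
```

Barandes’s move runs this in reverse: treat φ⃗ as such a memory and drop it by allowing the visible dynamics to be non-Markovian.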



  • Quantum mechanics is more weird than that. It’s not accurate to say things can be in two states at once, like a cat that is both dead and alive at the same time, or a qubit that is both 0 and 1 at the same time. If that were true, then the qubit’s mathematical description when in a superposition of states would be |0>+|1>, but it is not; it is a|0>+b|1>, where the coefficients (a and b) are neither 0 nor 1, and the coefficients cannot just be ignored in a physical interpretation, as they are necessary for the system’s dynamics.

    You talk about it being “half” a cat, so you might think the coefficients should be interpreted as proportions, but proportions are such that 0≤x≤1 and ∑x=1. But in quantum mechanics, the coefficients can be negative and even imaginary, and do not have to sum to 1. You can have 1/√2|0>-i/√2|1> as a valid superposition of states for a qubit. It does not make sense to interpret -i/√2 as a “half,” so you cannot meaningfully interpret the coefficients as a proportion.
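This is easy to demonstrate numerically: two superpositions with identical |a|² and |b|² (so identical “proportions”) but different phases evolve into different statistics under the same gate, so the coefficients carry physical information that no proportion can capture. A sketch (my own illustration):

```python
import numpy as np

# Two superpositions with the SAME squared coefficients (both 50/50 on a
# direct measurement) but different phases:
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)            # Hadamard gate

plus  = np.array([1,  1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
other = np.array([1, -1j], dtype=complex) / np.sqrt(2)  # (|0> - i|1>)/sqrt(2)

print(np.abs(plus) ** 2, np.abs(other) ** 2)   # both [0.5 0.5]

# ...yet after the same gate the statistics differ, so the phase matters
# and the coefficients are not mere proportions:
print(np.abs(H @ plus) ** 2)    # [1. 0.]
print(np.abs(H @ other) ** 2)   # [0.5 0.5]
```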

    Trying to actually interpret these quantum states ontologically is a nightmare and personally I recommend against even trying, as you will just confuse yourself, and any time you think you come up with something that makes sense, you will later find that it is wrong.


  • The point Bell tried to make in his “Against ‘Measurement’” article is that when you say “we start including atomic scale things we might as well just include everything up to and including the cat,” you have to place the line somewhere, sometimes called the “Heisenberg cut,” and where you place the line has empirically different implications, so wherever you choose to draw the line must necessarily constitute a different theory.

    Deutsch also published a paper “Quantum theory as a universal physical theory” where he proves that drawing a line at all must constitute a different theory from quantum mechanics because it will necessarily make different empirical predictions than orthodox quantum theory.

    A simple analogy: let’s say I claim the vial counts as an observer. The vial is simple enough that I might be able to fully model it in quantum mechanics. A complete quantum mechanical model would consist of a quantum state in Hilbert space that can only evolve through physical interactions that are all described by unitary operators, and all unitary operators are reversible. So there is no possible interaction between the atom and the vial that could lead to a non-reversible “collapse.”

    Hence, if I genuinely had a complete model of the vial and could isolate it, I could subject it to an interaction with the cesium atom, and orthodox quantum mechanics would describe this using reversible unitary operators. If you claim it is an observer that causes a collapse, then the interaction would not be reversible. So I could then follow it up with an interaction corresponding to the Hermitian transpose of the operator describing the first interaction, which should reverse it.

    Orthodox quantum theory would predict that the reversal should succeed while your theory with observer-vials would not, and so it would ultimately predict a different statistical distribution if I tried to measure the system after that interaction. Where you choose to draw the Heisenberg cut must necessarily make different predictions around that cut.
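The reversal argument can be made concrete with a toy model, using a CNOT as the atom–vial interaction (my own choice of gate; the vial bit records whether the atom decayed). Applying the Hermitian transpose undoes the unitary interaction exactly, while inserting a projective “collapse” in between breaks the reversal:

```python
import numpy as np

# Atom-vial "measurement" modeled as a CNOT (a toy choice of interaction):
# the vial qubit copies the atom qubit. Basis order: |atom, vial>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

atom = np.array([1, 1], dtype=complex) / np.sqrt(2)  # atom in superposition
vial = np.array([1, 0], dtype=complex)               # vial in |0>
psi = np.kron(atom, vial)

entangled = CNOT @ psi                # the vial "records" the atom's state
restored = CNOT.conj().T @ entangled  # Hermitian transpose undoes it
print(np.allclose(restored, psi))     # True: unitary evolution is reversible

# If the vial were a collapsing observer, a projection would occur instead:
P = np.diag([1, 0, 1, 0]).astype(complex)  # project vial onto |0> ("intact")
collapsed = P @ entangled
collapsed /= np.linalg.norm(collapsed)
print(np.allclose(CNOT.conj().T @ collapsed, psi))  # False: not reversible
```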

    This is why there is so much debate over interpretation of quantum mechanics, because drawing a line feels necessary, but drawing one at all breaks the symmetry of the theory. So, either the theory is wrong, or how we think about nature is wrong.



  • pcalau12i@lemmygrad.ml to Science Memes@mander.xyz: I’m good, thanks

    My main issue with Many Worlds is that it is always superfluous.

    We know that the exponential complexity of the quantum state cannot be explained by saying every outcome simply occurs in another branch. That would make it mathematically equivalent to an ensemble, and ensembles can be decomposed into large collections of simple deterministic systems with only linear complexity. If that were how reality worked, quantum mechanics would be unnecessary. The theory could be reduced to classical statistical mechanics.

    A quantum superposition, such as an electron being spin up and spin down, is not an electron doing both in some proportions. If it were, it would again be equivalent to an ensemble and fully describable using classical probability theory. If the quantum state has any ontology at all, it cannot merely represent particles doing multiple things at once. It must be something else, a distinct beable that either influences particles, as in pilot wave theories, or gives rise to them, as in collapse models.
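The superposition-versus-ensemble distinction can be checked directly with density matrices: the pure state |+⟩⟨+| and the 50/50 classical mixture agree in the computational basis but give different statistics in a rotated basis, so they are not the same object. A sketch (my own illustration):

```python
import numpy as np

# A superposition is not an ensemble: compare the two density matrices.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_super = np.outer(plus, plus.conj())          # pure state |+><+|
rho_mix = np.diag([0.5, 0.5]).astype(complex)    # 50/50 classical ensemble

# Identical statistics in the computational basis:
print(np.real(np.diag(rho_super)), np.real(np.diag(rho_mix)))  # [0.5 0.5] both

# Different statistics after rotating with a Hadamard (H is Hermitian,
# so H @ rho @ H is the rotated density matrix):
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(np.real(np.diag(H @ rho_super @ H)))   # [1. 0.]
print(np.real(np.diag(H @ rho_mix @ H)))     # [0.5 0.5]
```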

    Some Many Worlds advocates eventually concede this, but then argue that particles never really existed and are only subjective illusions, while the quantum state alone is real. Calling something a subjective illusion does not remove the need for explanation. Hallucinations are still physical processes with physical causes. You can explain them by analyzing the brain and its interactions.

    Likewise, you still need a physical explanation for how the illusion of particles arises. Any such explanation ends up equivalent to explaining how real particles arise, and once you do that, Many Worlds becomes unnecessary. You can always replace the multiverse with a single universe by making the process stochastic instead of deterministic.

    The crucial point is that we know a particle in a superposition of states cannot be a particle in multiple states at the same time. That is mathematically impossible, and if that were what it was, it could be reduced to a classical description! Any interpretation which relies on thinking the quantum state represents an ensemble, i.e. that it represents things “taking all possible paths” or “in multiple states at once,” is just confused as to the mathematics, as this is not what the mathematics says.

    I go into this in more detail here: https://medium.com/p/f67aacb622d5