An experiment to (dis)prove the strong physical Church-Turing Thesis (foundations of probability, digital physics and Laplacian determinism 2)

There seems to be a pervasive role of `information’ in probability, entropy and hence in physics. But the precise nature of this role escapes me, I’m afraid. I may have said before somewhere in this thread that I do not put too much faith in the classical model of probability as described by Laplace (see the previous post, which shows that Laplace stated similar doubts himself).

One reason for this is an argument/experiment related to digital physics which, I believe, has not received enough attention. I equate the term `digital physics’ with the strong physical Church-Turing thesis PCTT+: `Every real number produced by Nature is a computable real number’ (the Universe is a computer).

The argument/experiment runs like this:

1. Denote by [0,1]_{_{\rm REC}} the recursive unit interval, that is, the set of computable reals in [0,1]. We can effectively construct coverings of [0,1]_{_{\rm REC}} which classically have arbitrarily small Lebesgue measure. In fact, for any n we can give a countable sequence of intervals (S_{n,m})_{m\in \mathbb{N}} such that [0,1]_{_{\rm REC}} is covered by (S_{n,m})_{m\in \mathbb{N}}, and such that the sum of the lengths of the intervals (S_{n,m})_{m\in \mathbb{N}} does not exceed 2^{-n} (see [Bridges & Richman 1987], Varieties of Constructive Mathematics, Ch. 3, Thm. 4.1; the coverings are not constructively measurable because the measure limit cannot be achieved constructively, but this does not affect the probability argument). A sketch of one such construction is given after this list.

2. Flipping a coin indefinitely yields a (for practical purposes potentially infinite) sequence x\in\{0,1\}^{\mathbb{N}}, which we can read as the binary expansion of a real number in [0,1]. Let \mu denote the standard Lebesgue measure, and let A\subseteq [0,1] be Lebesgue measurable. Then in classical probability theory the probability that x lies in A equals \mu(A) (assuming the coin is `fair’, which yields the uniform probability distribution on [0,1]; see the second sketch below for the simple bookkeeping involved).

3. Let H0 be the hypothesis `the real world is non-computable’ (popularly speaking), and let H1 be PCTT+ (mentioned above). Then, taking the test size \alpha to be 2^{-40}, we can start constructing (S_{40,m})_{m\in \mathbb{N}}. Notice that S_{40}=\bigcup_{m\in \mathbb{N}} S_{40,m} has Lebesgue measure less than 2^{-40}. H0 is meant to be interpreted mathematically as: classical mathematics is a correct description of physics, the Lebesgue measure of the non-computable reals in [0,1] equals 1, and the uniform probability distribution applies to a coin-flip-randomly produced real in [0,1].

4. Therefore the probability that x lies in S_{40} is less than 2^{-40}. If we ever discover an m such that x lies in the interval S_{40,m}, then according to the rules of hypothesis testing I think we would have to reject H0 and accept H1, that is, PCTT+ (see the third sketch below for how such a discovery could be organized).

5. Even if the uniform probability distribution does not hold perfectly, the argument above still goes through. Any reasonable probability distribution (according to H0) will be given by a continuous density function f on [0,1]; such a density is bounded, say by C, so a set A of Lebesgue measure \mu(A) has probability at most C\cdot\mu(A). Positive Lebesgue measure and positive probability of set membership therefore correspond uniformly, and the test size changes only by the constant factor C.
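
To make step 1 concrete, here is a minimal Python sketch of the classical `singular cover’ idea behind such coverings. It is an illustration only, not the construction of [Bridges & Richman 1987]; run_program is a hypothetical stand-in for a dovetailed simulation of the m-th program, returning a rational 2^{-(n+m+2)}-approximation of the real that program computes, if it produces one within the given budget.

```python
from fractions import Fraction

def singular_cover(n, run_program, max_m, budget):
    """Sketch: intervals (S_{n,m}) covering the computable reals in [0,1].

    If program m computes a real x_m, it eventually outputs a rational
    q_m with |x_m - q_m| <= 2^-(n+m+2); we cover x_m by the interval
    (q_m - 2^-(n+m+2), q_m + 2^-(n+m+2)) of length 2^-(n+m+1).  The
    total length is at most sum_m 2^-(n+m+1) = 2^-n, yet every
    computable real in [0,1] is covered, being computed by some m.
    """
    for m in range(max_m):
        # run_program is hypothetical: simulate program m for `budget`
        # steps, asking for a 2^-(n+m+2)-approximation; None if no output.
        q = run_program(m, n + m + 2, budget)
        if q is not None:
            r = Fraction(1, 2 ** (n + m + 2))
            yield (q - r, q + r)
```

The bookkeeping in step 2 is equally simple: after finitely many flips b_1, ..., b_k, the real x = 0.b_1 b_2 ... (binary) is pinned down to a dyadic interval of length 2^{-k}. A minimal sketch in exact rational arithmetic:

```python
from fractions import Fraction

def flips_to_interval(bits):
    """Return the dyadic interval [lo, lo + 2^-k) known to contain
    x = 0.b_1 b_2 ... after the k coin flips in `bits`."""
    lo = Fraction(0)
    for k, b in enumerate(bits, start=1):
        lo += Fraction(b, 2 ** k)
    return lo, lo + Fraction(1, 2 ** len(bits))

# Example: flips 1, 0, 1 pin x down to [5/8, 3/4).
print(flips_to_interval([1, 0, 1]))
```

Steps 3 and 4 can then be organized as a dovetailing loop: alternately extend x by one flip and enumerate one more interval, and report success as soon as the dyadic interval known to contain x lies entirely inside some enumerated S_{40,m}. A hedged sketch, in which flip() and the interval iterator are supplied by the experimenter:

```python
from fractions import Fraction

def run_experiment(flip, intervals, rounds):
    """Reject H0 if x provably falls into some S_{40,m}.

    flip()    -- produces the next coin flip (0 or 1)
    intervals -- iterator yielding the S_{40,m} as pairs of Fractions
    rounds    -- how long we are willing to keep dovetailing
    """
    lo, width, seen = Fraction(0), Fraction(1), []
    for _ in range(rounds):
        width /= 2
        lo += flip() * width            # x now lies in [lo, lo + width)
        seen.append(next(intervals))    # enumerate one more S_{40,m}
        for m, (a, b) in enumerate(seen):
            if a <= lo and lo + width <= b:
                return m                # x is certainly in S_{40,m}: reject H0
    return None                         # no detection within this budget
```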

This seems to me a legitimate scientific experiment, and one which can actually be carried out. An interesting form would be to have people add their own coin flips to the sequence x. I am really curious what the implications are, but several aspects of this experiment remain unclear to me.

I’ve been trying to attract attention to the possibility of carrying out this experiment, so far rather unsuccessfully. Perhaps someone will point out a fallacy in the reasoning; otherwise I think it should be carried out.

Still, there is a snag of course. Assuming H1, that is PCTT+, we are `sure’ to see x fall in some S_{40,m}…but how long would we have to wait for the right m to crop up?

This question then becomes the subject of the reverse hypothesis test: assuming H1, can we determine M\in \mathbb{N} such that with probability less than 2^{-40} we do not see x fall into any S_{40,m} for m\leq M?

If so, we could also use the experiment to disprove PCTT+.

Finally, if we should in this way somehow `prove’ PCTT+, what remains of the standard scientific method of statistical hypothesis testing?

All these questions were raised in my paper `On the foundations of constructive mathematics — especially in relation to the theory of continuous functions’ (2005, circulated as a preprint since 2001).

I have yet to receive an answer… so here is another invitation to comment. Don’t hesitate to point out where I go wrong.

Notice that a similar experiment can be done for the rational numbers (which also have Lebesgue measure zero; see the sketch below). I’m confident that such an experiment would not statistically yield that all reals are rational, but the reverse question remains interesting. These reverse questions were the motivation for the thread on `drawing a natural number at random’. This type of question is heavily entropy-related, I feel, and I will discuss this in the next post.
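
For the record, the analogous cover of the rationals in [0,1] is easy to write down explicitly. A minimal sketch, again with total length at most 2^{-n}:

```python
from fractions import Fraction
from itertools import count

def rational_cover(n):
    """Yield intervals covering every rational in [0,1], with total
    length at most 2^-n: the k-th enumerated rational gets an
    interval of length 2^-(n+k+1)."""
    k = 0
    for den in count(1):
        for num in range(den + 1):
            q = Fraction(num, den)   # duplicates (e.g. 1/2 = 2/4) are harmless
            r = Fraction(1, 2 ** (n + k + 2))
            yield (q - r, q + r)
            k += 1
```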

Finally, at this moment I consider PCTT+ the best scientific formulation of Laplacian determinism, which explains the title of these posts.
