## All bets are off, to disprove the strong physical Church-Turing Thesis (foundations of probability, digital physics and Laplacian determinism 3)

(continued from previous post)

Let H0 be the hypothesis ‘the real world is non-computable’ (popularly speaking, see the previous post), and H1 be PCTT+ (also see the previous post).

For comparison we introduce the hypothesis H2: ‘the real world produces only rational (real) numbers’.

H2 is assumed to have been the original world view of the ancient Greek mathematicians (the Pythagoreans), before their discovery that $\sqrt{2}$ is irrational (which is ‘rumoured’ to have caused a shock, but I cannot find a reliable historical reference for this).

The rational numbers in $[0,1]$ have Lebesgue measure $0$, so we can start constructing $(T_{40,m})_{m\in \mathbb{N}}$ such that $T_{40}=\bigcup_{m\in \mathbb{N}} T_{40,m}$ has Lebesgue measure less than $2^{-40}$, and such that $T_{40}$ contains all rational numbers in $[0,1]$.
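To make the construction of $(T_{40,m})_{m\in \mathbb{N}}$ concrete, here is a minimal Python sketch (the names `rationals_in_unit_interval` and `T40` are just for this illustration): it enumerates the rationals in $[0,1]$ and covers the $m$-th one with an open interval of length $2^{-42-m}$, so the total length of all intervals is $2^{-41}$, which is less than $2^{-40}$.

```python
from fractions import Fraction

def rationals_in_unit_interval():
    """Enumerate the rationals in [0,1] without repetition: q_0, q_1, ..."""
    yield Fraction(0)
    yield Fraction(1)
    d = 2
    while True:
        for n in range(1, d):
            q = Fraction(n, d)
            if q.denominator == d:      # skip duplicates like 2/4 = 1/2
                yield q
        d += 1

def T40(m):
    """The m-th covering interval: an open interval of length 2^(-42-m)
    around the m-th rational q_m."""
    gen = rationals_in_unit_interval()
    for _ in range(m):
        next(gen)
    q = next(gen)
    r = Fraction(1, 2**(43 + m))        # half-length of the interval
    return (q - r, q + r)

# total length of all intervals: sum over m of 2^(-42-m) = 2^(-41) < 2^(-40)
```

Any other enumeration of the rationals would do just as well; only the shrinking interval lengths matter for the measure bound.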

If we then take our coin-flip-randomly produced $x\in [0,1]$, I personally don’t think that we will encounter an $m\in\mathbb{N}$ for which we see that $T_{40,m}$ contains $x$.

This opinion is supported by the fact that we can easily construct a non-rational number…at least in theory. Take for instance $e$, the base of the natural logarithm, which equals $\sum_{n\in\mathbb{N}}\frac{1}{n!}$. We can in fact construct $T_{40}$ in such a way that $T_{40}$ does not contain $e$, and we assume this to be the case here.

On the one hand, this does not depend on infinity, since we can simply look at approximations of $e$. We construct $T_{40}$ such that for any $m\in\mathbb{N}$ the $(2m+2)^{\rm th}$ binary approximation to $e$ is positively apart from $T_{40,m}$. On the other hand, any finite approximation to $e$ is still rational…and so we can only construct $e$ as an irrational number in the sense described above.

With regard to the existence of non-computable reals, the situation in my humble opinion is very different. We cannot construct a non-computable real, as a result of the Church-Turing Thesis (which I have no reason to doubt). Any construction of a real which we recognize as such will consist of a finite set of construction instructions…in other words, a Turing machine.

So to make a reasonable case for the existence of non-computable reals, we are forced to turn to Nature. In the previous post, we flipped our coin to produce a random $x$ in $[0,1]$. We argued that finding $m\in\mathbb{N}$ for which $S_{40,m}$ contains $x$ would force us to reject the hypothesis H0 (‘the real world is non-computable’).

So what result in this coin-tossing experiment could force us to reject H1, the strong physical Church-Turing thesis (PCTT+, ‘the universe is a computer’)?

To be able to reject H1 in the scientific hypothesis-testing way, we should first assume H1. [This might pose a fundamental problem, because if we really assume H1, then our perception of probability might change, and we might have to revise the standard scientific hypothesis-testing way which seems to be silently based on H0. But we will for the moment assume that the scientific method itself needs no amendment under H1.]

Under H1, $x$ has to fall in some $S_{40,m}$. Failure to do so even when we let $m\in\mathbb{N}$ grow very large might indicate that H1 is false. For scientific proof we need some number $M\in\mathbb{N}$ such that (under H1) the probability that $x$ is not in $\bigcup_{m\in \mathbb{N}, m\leq M} S_{40,m}$ is less than $2^{-40}$.

This reverse probability has had me puzzled for some time, and sent me on the quest for a probability distribution on the natural numbers. In the thread ‘drawing a natural number at random’ I argued that some indication could be taken from Benford’s law, and for discrete cases from Zipf’s law. Anyway, very tentatively, the result of this thread was to consider relative chances only. If for $1\leq n,m \in \mathbb{N}$ we denote the relative Benford chance of drawing $n$ vs. drawing $m$ by $\frac{P_B(n)}{P_B(m)}$, then we find that $\frac{P_B(n)}{P_B(m)} = \frac{\log{\frac{n+1}{n}}}{\log{\frac{m+1}{m}}}$. The relative Zipf chance of drawing $n$ vs. drawing $m$ would be given by $\frac{P_Z(n)}{P_Z(m)} = \frac{m}{n}$.
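These two relative chances are trivial to compute; a small Python sketch (the names `rel_benford` and `rel_zipf` are mine):

```python
import math

def rel_benford(n, m):
    """Relative Benford chance P_B(n)/P_B(m) = log((n+1)/n) / log((m+1)/m)."""
    return math.log((n + 1) / n) / math.log((m + 1) / m)

def rel_zipf(n, m):
    """Relative Zipf chance P_Z(n)/P_Z(m) = m/n."""
    return m / n
```

For instance, `rel_benford(1, 2)` equals $\log 2 / \log\frac{3}{2} \approx 1.71$: drawing a $1$ is about 1.7 times as likely as drawing a $2$. Note that both relative chances are transitive, as relative chances should be.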

In both cases, the relevant density function is $f(x)=\frac{1}{x}$. The important feature of this distribution is twofold:

1) The smaller natural numbers are heavily favoured over the larger. (‘Low entropy’.)

2) There is no $M\in\mathbb{N}$ such that even the relative probability of drawing an $m\in\mathbb{N}$ larger than $M$ becomes less than $2^{-40}$. (Because $\log x$ tends to infinity.)
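Point 2) can be illustrated numerically: for the density $f(x)=\frac{1}{x}$, the mass of the interval $(M, 2M]$ equals $\log 2$ no matter how large $M$ is, so beyond any $M$ there remain infinitely many blocks of equal weight. A rough numerical-integration sketch in Python (the function name `mass` is mine):

```python
import math

def mass(a, b, steps=10**5):
    """Midpoint-rule approximation of the integral of f(x) = 1/x over [a, b]."""
    h = (b - a) / steps
    return sum(h / (a + (i + 0.5) * h) for i in range(steps))

# mass(M, 2M) = log 2 for every M: the tail beyond M never becomes
# negligible relative to any fixed initial segment
```

So no test size, however generous, yields a cut-off $M$ from these relative chances alone.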

Fools rush in where angels fear to tread. I know, and so let me fall squarely in the first category. Yet this train of thought might provoke some smarter people to come up with better answers, so I will just continue. I do not believe these relative chances can simply be applied here; there are too many unknowns and assumptions. But it cannot do harm to try to get some feel for the reverse probability needed to disprove H1.

For this tentative argument then, disregarding some technical coding issues, we consider (under H1) our coin-flip random $x$ to equal some computable $x_s$ computed by a Turing machine with random number $s\in\mathbb{N}$, drawn from some extremely large urn with low entropy (favouring the smaller natural numbers).

Even with this favouring of the smaller natural numbers, we still cannot begin to indicate $M\in\mathbb{N}$ such that (under H1) the probability that $x$ is not in $\bigcup_{m\in \mathbb{N}, m\leq M} S_{40,m}$ is less than $2^{-40}$. Perhaps if we knew the size of the urn (which in this case would seem to be the universe itself) we could say something more definite about $M$. But all things considered, it seems to me that $M$ could easily be astronomically large, far larger than our limited computational resources can ever handle.

In other words: all bets are off, to disprove H1.

And so, if H1 is true, it could very well take our coin-flip experiment an astronomically long time to find this out.

But I still think the experiment worthwhile to perform.

## An experiment to (dis)prove the strong physical Church-Turing Thesis (foundations of probability, digital physics and Laplacian determinism 2)

There seems to be a pervasive role of ‘information’ in probability, entropy and hence in physics. But the precise nature of this role escapes me, I’m afraid. I may have said before somewhere in this thread that I do not put too much faith in the classical model of probability as described by Laplace (see the previous post, showing Laplace stated similar doubts himself).

One reason for this is an argument/experiment related to digital physics which has not received enough attention, I believe. I equate the term ‘digital physics’ with the strong physical Church-Turing thesis PCTT+: ‘Every real number produced by Nature is a computable real number’ (the Universe is a computer).

The argument/experiment runs like this:

1. Denote with $[0,1]_{_{\rm REC}}$ the recursive unit interval, that is, the set of computable reals in $[0,1]$. We can effectively construct coverings of $[0,1]_{_{\rm REC}}$ which classically have arbitrarily small Lebesgue measure. In fact, for any $n$ we can give a countable sequence of intervals $(S_{n,m})_{m\in \mathbb{N}}$ such that $[0,1]_{_{\rm REC}}$ is covered by $(S_{n,m})_{m\in \mathbb{N}}$, and such that the sum of the lengths of the intervals $(S_{n,m})_{m\in \mathbb{N}}$ does not exceed $2^{-n}$. (See [Bridges&Richman1987], Varieties of Constructive Mathematics, Ch. 3, Thm. 4.1; the coverings are not constructively measurable because the measure limit cannot be achieved constructively, but this doesn’t affect the probability argument.)

2. Flipping a coin indefinitely yields a (for practical purposes potentially infinite) sequence $x\in\{0,1\}^{\mathbb{N}}$, which we can see as a binary real number in $[0,1]$. Let $\mu$ denote the standard Lebesgue measure, and let $A\subseteq [0,1]$ be Lebesgue measurable. Then in classical probability theory the probability that $x$ is in $A$ equals $\mu(A)$. (Assuming the coin is ‘fair’, which leads to a uniform probability distribution on $[0,1]$.)

3. Let H0 be the hypothesis ‘the real world is non-computable’ (popularly speaking), and H1 be PCTT+ (mentioned above). Then, letting the test size $\alpha$ be $2^{-40}$, we can start constructing $(S_{40,m})_{m\in \mathbb{N}}$. Notice that $S_{40}=\bigcup_{m\in \mathbb{N}} S_{40,m}$ has Lebesgue measure less than $2^{-40}$. H0 is meant to be interpreted mathematically as: classical mathematics is a correct description of physics, the Lebesgue measure of the non-computable reals in $[0,1]$ equals $1$, and the uniform probability distribution applies to a coin-flip-randomly produced real in $[0,1]$.

4. Therefore the probability that $x$ is in $S_{40}$ is less than $2^{-40}$. If we ever discover an $m$ such that $x$ is in the interval $S_{40,m}$, then according to the rules of hypothesis testing I think we would have to discard H0, and accept H1, that is PCTT+.

5. Even if the uniform probability distribution is not perfectly satisfied, the above argument still obtains. Any reasonable probability distribution function (according to H0) will be uniformly continuous on $[0,1]$, yielding a uniform correspondence between positive Lebesgue measure and positive probability of set membership.
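The bookkeeping behind step 4 is elementary: after finitely many coin flips we know $x$ only up to an interval $[s, s+2^{-n}]$, and membership of an open interval can be certified from that finite information. A minimal Python sketch (assuming rational interval endpoints; the name `certified_in` is mine):

```python
from fractions import Fraction

def certified_in(bits, a, b):
    """Decide from finitely many coin flips whether x = 0.bits... certainly
    lies in the open interval (a, b). With n flips, x is pinned down only
    to the interval [s, s + 2^-n] where s is the partial binary sum."""
    s = Fraction(0)
    for k, bit in enumerate(bits, start=1):
        s += Fraction(bit, 2**k)
    lo, hi = s, s + Fraction(1, 2**len(bits))
    return a < lo and hi < b
```

For example, the flips $0,1,0,1,0,1$ (the start of $\frac{1}{3}$) already certify membership of $(\frac{1}{4},\frac{1}{2})$, whereas the first two flips alone do not.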

This seems to me a legitimate scientific experiment, which can be carried out. An interesting form would be to have people add their flips of a coin to the sequence $x$. I am really curious what the implications are. But several aspects of this experiment remain unclear to me.

I’ve been trying to attract attention to the possibility of carrying out this experiment, so far rather unsuccessfully. Perhaps someone will point out a fallacy in the reasoning, otherwise I think it should be carried out.

Still, there is a snag of course. Assuming H1, that is PCTT+, we are ‘sure’ to see $x$ fall in some $S_{40,m}$…but how long would we have to wait for the right $m$ to crop up?

This question then becomes the subject of the reverse hypothesis test: assuming H1, can we determine $M\in \mathbb{N}$ such that with probability less than $2^{-40}$ we do not see $x$ fall into any $S_{40,m}$ for $m\leq M$?

If so we could use the experiment also to disprove PCTT+.

Finally, if we should in this way somehow ‘prove’ PCTT+, what remains of the standard scientific method of statistical hypothesis testing?

All these questions were raised in my paper ‘On the foundations of constructive mathematics — especially in relation to the theory of continuous functions’ (2005, circulated as a preprint since 2001).

I have yet to receive an answer…so here another invitation to comment. Don’t hesitate to point out where I go wrong.

Notice that a similar experiment can be done for the rational numbers (also of zero Lebesgue measure). I’m confident that such an experiment would not statistically yield that all reals are rational, but the reverse question remains interesting. These reverse questions were the motivation for the thread on ‘drawing a natural number at random’. This type of question is heavily entropy-related, I feel, and I will discuss this in the next post.

Finally, at this moment I consider PCTT+ the best scientific formulation of Laplacian determinism, which explains the title of these posts.

## Foundations of probability, digital physics and Laplacian determinism

In this thread of posts from 2012, the possibility of drawing a natural number at random was discussed. In the previous post I rediscussed an entropy-related solution giving relative chances. This solution also explains Benford’s law.

In the 2012 thread, I was working on two fundamental questions, the first of which

QUESTION 1   Is our physical world finite or infinite?

was treated to some degree of satisfaction. But its relation to the second question still needs exposition. So let me try to continue the thread here by returning to:

QUESTION 2   What is the role of information in probability theory?

In my (math) freshman’s course on probability theory, this question was not raised. Foundations of probability were in fact ignored even in my specialization area: foundations of mathematics. Understandable from a mathematical point of view perhaps…but not from a broader foundational viewpoint which includes physics. I simply have to repeat what I wrote in an earlier post:

(It is easy to illustrate the basic problem here; not so easy perhaps to demonstrate why it has such relevance.) Suppose we draw a marble from a vase filled with equal amounts of blue and white marbles. What is the chance that we draw a blue marble?

In any high-school exam, I would advise you to answer: 50%. In 98% of university exams I would advise the same answer. Put together that makes … just kidding. The problem here is that any additional information can drastically alter our perception of the probability/chance of drawing a blue marble. In the most dramatic case, imagine that the person drawing the marble can actually feel the difference between the two types of marbles, and therefore already knows which colour marble she has drawn. For her, the chance of drawing a blue marble is either 100% or 0%. For us, who knows? Perhaps some of us can tell just by the way she frowns what type of marble she has drawn…?

It boils down to the question: what do we mean by the word ‘chance’? I quote from Wikipedia:

The first person known to have seen the need for a clear definition of probability was Laplace.[citation needed] As late as 1814 he stated:

The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.

— Pierre-Simon Laplace, A Philosophical Essay on Probabilities[4]

This description is what would ultimately provide the classical definition of probability.

One easily sees, however, that this ‘definition’ avoids the main issue. Laplace did not always avoid this main issue, however:

Laplace([1776a]; OC, VIII, 145):

Before going further, it is important to pin down the sense of the words chance and probability. We look upon a thing as the effect of chance when we see nothing regular in it, nothing that manifests design, and when furthermore we are ignorant of the causes that brought it about. Thus, chance has no reality in itself. It is nothing but a term for expressing our ignorance of the way in which the various aspects of a phenomenon are interconnected and related to the rest of nature.

and:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

—Pierre Simon Laplace, A Philosophical Essay on Probabilities[37]

In the meantime I came across some work by Albert Tarantola, and this work is really heartening! In a seminal paper ‘Inverse problems = quest for information’ (together with Bernard Valette), Tarantola already states that we should consider any probability distribution as an information state (subjective even) regarding the phenomenon under study, and vice versa: every information state can be described by a probability distribution function on some appropriate model space.

Now we’re talking!

To my further surprise: Tarantola describes the quandary that measure density functions like $f(x)= \frac{1}{x}$ cause (the integral diverges) and offers exactly the same solution: look at relative probabilities of events, instead of absolute probabilities. To top it off, Tarantola emphasizes that the measure density function $f(x)=\frac{1}{x}$ plays a very important role in inverse problems…

So now I need to study this all, in order also to join these ideas to the perspective of digital physics and Laplacian determinism.

(to be continued)

## An entropy-related derivation of Benford’s law

In this thread of posts from 2012, the possibility of drawing a natural number at random was discussed. A solution offering relative chances was given, and stated to be in accordance with Benford’s law.

Then, due to unforeseen circumstances, the thread remained unfinished. In fact it stopped precisely at the point for which I had started the thread in the first place :-).

I wish to return to this thread, but it seems worthwhile to repeat the result mentioned above in a different wording. Why? Well, searching on “Benford’s law” I didn’t find any comparable entropy-related derivation (perhaps I should say motivation) in the literature/internet. And perhaps more telling, Theodore Hill (when deriving Benford’s law from considering a ‘random’ mix of distributions, see ‘A Statistical Derivation of the Significant-Digit Law’) explicitly mentions the difficulty of drawing a natural number at random, if the sum probability must be 1, which would seem to exclude the density function $\frac{1}{x}$.

So the trick of turning to relative chances $\frac{P(n)}{P(m)}$ seems new [except it isn’t, see the next post], and it does yield Benford’s law. The entropy-related motivation for these relative chances also seems to be new. The (relative) chances involved in drawing a natural number at random will play a role in the discussion to come (on Laplacian determinism, digital physics and foundations of probability theory). But first let us explicitly derive Benford’s law from these chances:

Solution to ‘drawing a natural number at random’:

* We can only assign relative chances, and the role of the natural number $0$ remains mysterious.

* For $1\leq n,m \in \mathbb{N}$ let’s denote the relative chance of drawing $n$ vs. drawing $m$ by: $\frac{P(n)}{P(m)}$.

* For $1\leq n,m \in \mathbb{N}$, we find that $\frac{P(n)}{P(m)} = \frac{\log{\frac{n+1}{n}}}{\log{\frac{m+1}{m}}}$

(* An alternative ‘discrete’ or ‘Zipfian’ case $P_{\rm discrete}$ can perhaps be formulated, yielding: for $1\leq n,m \in \mathbb{N}$, we find that $\frac{P_{\rm discrete}(n)}{P_{\rm discrete}(m)} = \frac{m}{n}$.)

The entropy-related motivation for these chances can be found in this thread of posts from 2012.

Now to arrive at Benford’s law, we consider the first digit of a number $N$ in base 10 (the argument is base-independent though) to be drawn at random according to our entropy-related chances. We then see that the relative chance of drawing a 1 compared to drawing a 2 equals $\frac{\log 2}{\log\frac{3}{2}}$ which in turn equals $\frac{^{10}\log 2}{^{10}\log\frac{3}{2}}$. Since the sum of the probabilities of drawing 1, 2,…,9 equals 1, one finds that the chance of drawing $i$ as first digit for $N$ equals $^{10}\log\frac{i+1}{i}$.
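The resulting first-digit probabilities are easy to verify numerically (a two-line Python check):

```python
import math

# first-digit Benford probabilities: P(i) = log10((i+1)/i) for i = 1..9
first_digit = [math.log10((i + 1) / i) for i in range(1, 10)]
# they sum to log10(10) = 1, and P(1) = log10(2), roughly 0.301
```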

A second-digit law can be derived similarly, for example adding the relative chances for 11, 21, 31, 41, 51, 61, 71, 81, 91 vs. the sum of the relative chances for 12, 22, 32, 42, 52, 62, 72, 82, 92 to arrive at the relative chance of drawing a 1 as second digit vs. the chance of drawing a 2 as second digit. And so on for third-digit laws etc.

This shows that the second-digit distribution is more uniform than the first-digit distribution, and that each next-digit law is more uniform than its predecessor, which fits with our entropy argument.
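The increasing uniformity can be checked by direct computation. The following Python sketch (the function name `digit_law` is mine) sums the relative chances $\log_{10}\frac{n+1}{n}$ over all $p$-digit numbers $n$, grouped by their $p$-th digit; the spread of the resulting distribution indeed shrinks with each further digit position.

```python
import math

def digit_law(p):
    """Distribution of the p-th significant digit, from the relative
    chances: sum the weights log10((n+1)/n) over all p-digit numbers n,
    grouped by the digit of n in position p (the last digit of a
    p-digit number). For p = 1 the digit 0 gets weight 0."""
    probs = {d: 0.0 for d in range(10)}
    for n in range(10**(p - 1), 10**p):
        probs[n % 10] += math.log10((n + 1) / n)
    return probs
```

Each `digit_law(p)` sums to $\log_{10}(10^p/10^{p-1}) = 1$ automatically, and the gap between the most and least likely digit narrows as $p$ grows.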

In the next post the thread on information and probability will be continued.

## Is Cantor space the injective continuous image of Baire space?

Is Cantor space the injective continuous image of Baire space? This is an intriguing question, since its answer depends on which axioms one adopts.

First of all, in classical mathematics (CLASS), the answer is yes. However, in intuitionistic mathematics (INT) the answer is no. Then again, in recursive mathematics (RUSS) the answer is a strong yes, since in RUSS Baire space and Cantor space are homeomorphic.

For CLASS I’ve tried to find references to the above question in publication databases and with Google, but I came up short. Many texts prove that any uncountable Polish space $P$ contains an at most countable subset $D$ such that $P\setminus D$ is the continuous injective image of Baire space. It is easy to show this for Cantor space, but what if we drop $D$ altogether? Well, it is not so difficult to constructively define a continuous injective function from Baire space to Cantor space which in CLASS is surjective (whereas in INT surjectivity can be proven to fail for all such functions). I would be surprised if this has not been done before, but like I said I cannot find any references. Therefore let’s call it a theorem:

Theorem (in CLASS) Cantor space is the injective continuous image of Baire space.

Proof: We constructively define the desired injective continuous function $f$, using induction. $f$ will send the zero-sequence $0,0,\ldots$ to itself. The $f$-image of other sequences starting out with $n$ will be branched off from the zero-sequence at an ‘appropriate height’.

To this end, we inductively define $f'$ on finite sequences of natural numbers. $\underline{0}m$ denotes the sequence $0,...,0$ of length $m$. For finite sequences $a, b\in \mathbb{N}^{\star}$ we let $a\star b$ denote the concatenation. For any $\alpha\in\mathcal{N}$ let $\underline{\alpha}n$ denote the finite sequence formed by the first $n$ values of $\alpha$ (for $n=0$ this is the empty sequence).

Let $g$ be the bijection from $\{(n,m)\mid n,m\in\mathbb{N}, n,m >0\}$ to $\mathbb{N}$ given by $g(n,m)=2^{n-1}\cdot(2m-1)-1$. Then for $m>0$ we have $2m-2=\min(\{g(n,m)\mid n\in\mathbb{N}, n>0\})$.
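This bijection is just the unique factorization of $g(n,m)+1$ as a power of $2$ times an odd number; a quick Python check:

```python
def g(n, m):
    """g(n, m) = 2^(n-1) * (2m - 1) - 1, a bijection from pairs of
    positive integers to the natural numbers (unique factorization of
    g(n, m) + 1 into a power of 2 times an odd number)."""
    return 2**(n - 1) * (2 * m - 1) - 1
```

For instance, the values $g(n,m)$ with $g(n,m) < 64$ hit each of $0,\ldots,63$ exactly once, and indeed $\min_n g(n,m) = g(1,m) = 2m-2$.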

For $n>0$ put $f'(n)= \underline{0}(2n-2)\star 1$. For $n,m>0$ put $f'(\underline{0}m\star n)= \underline{0}(2\cdot g(n,m)+1)\star 1$. For $m>0$ put $f'(\underline{0}m)=\underline{0}(2m-2)$.

For induction, let $a\in\mathbb{N}^{\star}$ be a finite sequence not ending with $0$ and suppose $f'(a)$ has been defined. Then for $n>0$ put $f'(a\star n)= f'(a)\star\underline{0}(2n-2)\star 1$. For $n,m>0$ put $f'(a\star\underline{0}m\star n)= f'(a)\star\underline{0}(2\cdot g(n,m)+1)\star 1$. For $m>0$ put $f'(a\star\underline{0}m)=f'(a)\star\underline{0}(2m-2)$.

Finally, for $\alpha\in\mathcal{N}$ let $f(\alpha)=\lim_{n\rightarrow\infty} f'(\underline{\alpha}n)$. It is easy to see that $f$ is as required. (End of proof).
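As a sanity check, the finite-sequence map $f'$ can be implemented directly from its inductive definition. A Python sketch (function names are mine): each block $\underline{0}m\star n$ contributes a uniquely decodable block of the form $\underline{0}j\star 1$ ($j$ even when $m=0$, odd otherwise), so on finite sequences ending in a nonzero value the map is injective.

```python
def g(n, m):
    """The bijection g(n, m) = 2^(n-1) * (2m - 1) - 1 from the proof."""
    return 2**(n - 1) * (2 * m - 1) - 1

def f_prime(a):
    """Image of a finite sequence of naturals under f': a bare nonzero n
    contributes 0^(2n-2) * 1, a block 0^m * n (m > 0) contributes
    0^(2g(n,m)+1) * 1, and a trailing block 0^m contributes 0^(2m-2)."""
    out, i = [], 0
    while i < len(a):
        m = 0
        while i < len(a) and a[i] == 0:   # count leading zeros of this block
            m += 1
            i += 1
        if i < len(a):                     # block ends with a nonzero value n
            n = a[i]
            i += 1
            if m == 0:
                out += [0] * (2 * n - 2) + [1]
            else:
                out += [0] * (2 * g(n, m) + 1) + [1]
        else:                              # the sequence ends in zeros
            out += [0] * (2 * m - 2)
    return out
```

The images are binary sequences, as required, and distinct sequences (not ending in $0$) get distinct images.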

Clearly, even in CLASS the inverse of $f$ is not continuous (otherwise we would also have that Baire space is homeomorphic to Cantor space!). This clarifies why the constructively defined $f$ fails to be surjective in INT and RUSS, even though in INT and RUSS we cannot indicate $\alpha$ in $\mathcal{C}$ such that $f(\beta)\#\alpha$ for all $\beta\in\mathcal{N}$.

Consider the recursive sequence $\alpha = 0,0,\ldots$ given by $\alpha(n)=0$ if there is no block of 99 consecutive 9’s in the first $n$ digits of the decimal expansion of $\pi$, and $\alpha(n)=1$ otherwise. We see that $\alpha$ is in $\mathcal{C}$, but with current knowledge of $\pi$ we cannot determine any $\beta\in\mathcal{N}$ such that $f(\beta)=\alpha$ (go ahead and try…:-)).

In INT we can easily prove:

Theorem: (INT) There is no continuous injective surjection from Baire space to Cantor space.

Proof: By AC11 such a surjection has a continuous inverse, which contradicts the Fan Theorem. (End of proof)

Now in recursive mathematics (RUSS) the Fan Theorem does not hold, and Cantor space has an infinite cover of open subsets which does not contain a finite cover of Cantor space. This enables one to define a recursive homeomorphism $k$ from Baire space to Cantor space.

Interesting symmetry, since in CLASS and INT $k$ fails to be surjective, although this time in INT we cannot even indicate $\alpha$ in $\mathcal{C}$ for which we cannot find $\beta\in\mathcal{N}$ such that $k(\beta)=\alpha$. (In CLASS we ‘can’ indicate such an $\alpha$, but this is necessarily vague: any sharp indication is necessarily recursive!) So in CLASS and INT one relies on (the intuition behind) the axioms for the statement: not all sequences of natural numbers are given by a recursive rule.

This intuition can be questioned, see my paper ‘On the foundations of constructive mathematics — especially in relation to the theory of continuous functions’ (2005), and the book Natural Topology (2012).

But this post is just for fun. I wonder what happens if under $f$ (see above) we pull back the compact topology of Cantor space to Baire space…probably not very interesting but let me ponder on it.

## Addenda and errata for Natural Topology (2nd ed.)

[Never my finest moments: discovering flaws in what I tried so hard to create as a perfect piece of work...:-)]

I’m rereading the second edition of Natural Topology (available at http://www.fwaaldijk.nl/natural-topology.pdf, as well as on arXiv), and naturally I am spotting some omissions, typos and even errors.

I will make a list of these in this post, and I will update this list until I replace the second edition with a third (which then in turn I fear will still be in need of a similar list, but one hopes for improvement along the line).

The most important change to be made is related to theorem 1.2.2. The given proof in A.3.1 of this (beautiful) theorem is partly deficient, because it fails to take into account the strict requirement (i) in definition 1.1.2 of ‘morphism’. At the time I saw no need to relax this requirement, since everything seemed to work smoothly, and in all ‘regular’ situations this requirement is fulfilled. So I opted for some form of aesthetic optimality.

In hindsight, I should have noted that the requirement (i), which is phrased for dots, is too restrictive in a pointwise setting. Thankfully the remedy is easy: simply replace this with the slightly less restrictive pointwise phrasing (see below), and all is well. No need even to change any other wordings, in proofs or elsewhere.

But in almost all relevant situations, the requirement (i) is easily met. I would like to mention this in an elegant way, without making a separate distinctive definition of, say, ‘morfism’ (to be pondered on). The other addenda and errata are all very minor, so far.

Errata:

“Let $(\mathcal{V},\mathcal{T}_{\#_1})$ and $(\mathcal{W},\mathcal{T}_{\#_2})$ be two natural spaces, with corresponding pre-natural spaces $(V,\#_1, \preceq_1)$ and $(W,\#_2, \preceq_2)$. Let $f$ be a function from $V$ to $W$. Then $f$ is called a refinement morphism (notation: $\preceq$-morphism) from $(\mathcal{V},\mathcal{T}_{\#_1})$ to $(\mathcal{W},\mathcal{T}_{\#_2})$ iff for all $a,b\in V$ and all $p=p_0,p_1,\ldots,\ q=q_0,q_1,\ldots\in\mathcal{V}$:

(i) $f(p)=_{\rm D}\ f(p_0), f(p_1), \ldots$ is in $\mathcal{W}$ (‘points go to points’)

(ii) $f(p)\#_2 f(q)$ implies $p\#_1 q$.

(iii) $a\preceq_1 b$ implies $f(a)\preceq_2 f(b)$ (this is an immediate consequence of (i))

As indicated in (i) above we will write $f$ also for the induced function from $\mathcal{V}$ to $\mathcal{W}$. The reader may check that (iii) follows from (i). By (i), a $\preceq$-morphism $f$ from $(\mathcal{V},\mathcal{T}_{\#_1})$ to $(\mathcal{W},\mathcal{T}_{\#_2})$ respects the apartness/equivalence relations on points, but not necessarily on dots since $f(a)\#_2 f(b)$ does not necessarily imply $a\#_1 b$ for $a,b\in V$. This stronger condition however in practice obtains very frequently.”

If $f(a)\#_2 f(b)$, then by (ii) we know that $x\#_1 y$ for all $x\prec a, y\prec b$ in $\mathcal{V}$. Therefore, if necessary, we could ‘update’ the apartness to ensure $a\#_1 b$…but we cannot guarantee that this is simultaneously possible for all similar pairs of dots $c,d$ in $V$.

However, most spaces $(\mathcal{V},\mathcal{T}_{\#})$ naturally carry an apartness on dots such that if $x\# y$ for all $x\prec a, y\prec b$ in $\mathcal{V}$, then $a\# b$. In this situation, (ii) of the definition becomes equivalent to (ii’): $f(a)\#_2 f(b)$ implies $a\#_1 b$ for all $a,b\in V$. (This (ii’) is part of the original definition 1.2.2, which should be replaced by the above definition.)

(OO) It should be noted that $(\mathcal{V}^{\wr\wr}, \mathcal{T}_{\#})$ is $\preceq$-isomorphic to $(\mathcal{V}^{\wr}, \mathcal{T}_{\#})$. This means that it always suffices to look at $\mathcal{V}^{\wr}$.

(OOO) It should be noted that all natural spaces ‘are’ spreads already, when looking at their set of points. This is another (perhaps easier) way of seeing that any natural space is spreadlike. Let $(\mathcal{V},\mathcal{T})$ be a natural space with corresponding pre-natural space $(V,\#, \preceq)$. Assume $h$ is an enumeration of $\{(a,b)\in V\times V\mid a\# b\}$. To create a point $x=x_0, x_1, \ldots$ in $\mathcal{V}$, one can start with any basic dot $a$ as $x_0$. Then one chooses $m_0\in \mathbb{N}$, and for the next $m_0$ values of $x$ one is free to choose basic dots $x_{(m_0)}\preceq\ldots\preceq x_0$, but at stage $m_0+1$ we must choose for $x_{(m_0+1)}$ a basic dot $c$ such that $c$ is apart from at least one of the constituents of $h(0)$. Then one chooses $m_1\in \mathbb{N}$, etc. Therefore we see that, if one disregards the partial order, we did not create any new structure outside of Baire space. And there is no problem as to whether our points are sets (in contrast to formal topology, where there seems to be a problem in general whether the points of a formal space form a set).