An experiment to (dis)prove the strong physical Church-Turing Thesis (foundations of probability, digital physics and Laplacian determinism 2)

There seems to be a pervasive role of `information’ in probability, entropy and hence in physics. But the precise nature of this role escapes me, I’m afraid. I may have said before somewhere in this thread that I do not put too much faith in the classical model of probability as described by Laplace (see previous post, showing Laplace stated similar doubts himself).

One reason for this is an argument/experiment related to digital physics which has not received enough attention, I believe. I equate the term `digital physics’ with the strong physical Church-Turing thesis PCTT+: `Every real number produced by Nature is a computable real number‘ (the Universe is a computer).

The argument/experiment runs like this:

1. Denote by [0,1]_{_{\rm REC}} the recursive unit interval, that is, the set of computable reals in [0,1]. We can effectively construct coverings of [0,1]_{_{\rm REC}} which classically have arbitrarily small Lebesgue measure. In fact, for any n we can give a countable sequence of intervals (S_{n,m})_{m\in \mathbb{N}} such that [0,1]_{_{\rm REC}} is covered by (S_{n,m})_{m\in \mathbb{N}}, and such that the sum of the lengths of the intervals (S_{n,m})_{m\in \mathbb{N}} does not exceed 2^{-n} (see [Bridges&Richman1987] Varieties of Constructive Mathematics Ch. 3, thm. 4.1; the coverings are not constructively measurable because the measure limit cannot be achieved constructively, but this does not affect the probability argument).

2. Flipping a coin indefinitely yields a (for practical purposes potentially infinite) sequence x\in\{0,1\}^{\mathbb{N}}, which we can view as the binary expansion of a real number in [0,1]. Let \mu denote the standard Lebesgue measure, and let A\subseteq [0,1] be Lebesgue measurable. Then in classical probability theory the probability that x lies in A equals \mu(A) (assuming the coin is `fair’, which gives the uniform probability distribution on [0,1]).

3. Let H0 be the hypothesis `the real world is non-computable’ (popularly speaking), and let H1 be PCTT+ (mentioned above). Then, taking the test size \alpha to be 2^{-40}, we can start constructing (S_{40,m})_{m\in \mathbb{N}}. Notice that S_{40}=\bigcup_{m\in \mathbb{N}} S_{40,m} has Lebesgue measure less than 2^{-40}. H0 is meant to be interpreted mathematically as: classical mathematics is a correct description of physics, the Lebesgue measure of the non-computable reals in [0,1] equals 1, and the uniform probability distribution applies to a coin-flip-randomly produced real in [0,1].

4. Therefore the probability that x lies in S_{40} is less than 2^{-40}. If we ever discover an m such that x lies in the interval S_{40,m}, then according to the rules of hypothesis testing I think we would have to reject H0, and accept H1, that is PCTT+ (a computational sketch of this monitoring procedure follows after point 5 below).

5. Even if the uniform probability distribution is not perfectly satisfied, the above argument still goes through. Any reasonable probability distribution function (according to H0) will be uniformly continuous on [0,1], yielding a uniform correspondence between positive Lebesgue measure and positive probability of set membership.
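To make steps 1–4 concrete, here is a minimal computational sketch in Python. It is only an illustration of the bookkeeping: the function names are mine, and computable_reals() is a placeholder assumption standing in for the effective enumeration from which the genuine covering of [Bridges&Richman1987] is built. The sketch shows how the intervals S_{40,m} are kept below total length 2^{-40}, and how one would monitor whether the growing coin-flip sequence x has already landed inside one of them.

```python
from fractions import Fraction

N = 40  # test size alpha = 2**-N

def computable_reals():
    """Placeholder enumeration r_0, r_1, ... of computable reals in [0,1].
    In the genuine construction these come effectively from an enumeration
    of algorithms; here we only illustrate the measure bookkeeping."""
    m = 0
    while True:
        yield Fraction(m % 1000, 1000)  # dummy values, assumption only
        m += 1

def covering(n):
    """Yield intervals S_{n,m} = (r_m - eps_m, r_m + eps_m) whose lengths
    2**-(n+m+1) sum to at most 2**-n."""
    for m, r in enumerate(computable_reals()):
        eps = Fraction(1, 2 ** (n + m + 2))
        yield (r - eps, r + eps)

def prefix_interval(flips):
    """The coin flips seen so far pin x down to a dyadic interval [a, b]."""
    a = sum(bit * Fraction(1, 2 ** (i + 1)) for i, bit in enumerate(flips))
    return a, a + Fraction(1, 2 ** len(flips))

def test(flips, stages):
    """Return m if the prefix interval of x already lies inside S_{N,m} for
    some m < stages; finding such an m would reject H0 at level 2**-N."""
    lo, hi = prefix_interval(flips)
    for m, (left, right) in enumerate(covering(N)):
        if m >= stages:
            return None
        if left < lo and hi < right:
            return m
    return None

# usage: feed in the actual coin-flip record and raise `stages' over time
print(test([0, 1, 1, 0, 1, 0, 0, 1], stages=1000))
```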

This seems to me a legitimate scientific experiment, which can be carried out. An interesting form would be to have people add their flips of a coin to the sequence x. I am really curious what the implications are. But several aspects of this experiment remain unclear to me.

I’ve been trying to attract attention to the possibility of carrying out this experiment, so far rather unsuccessfully. Perhaps someone will point out a fallacy in the reasoning, otherwise I think it should be carried out.

Still, there is a snag of course. Assuming H1, that is PCTT+, we are `sure’ to see x fall in some S_{40,m}…but how long would we have to wait for the right m to crop up?

This question then becomes the subject of the reverse hypothesis test: assuming H1, can we determine M\in \mathbb{N} such that with probability less than 2^{-40} we do not see x fall into any S_{40,m} for m\leq M?

If so, we could also use the experiment to disprove PCTT+.

Finally, if we should in this way somehow `prove’ PCTT+, what remains of the standard scientific method of statistical hypothesis testing?

All these questions were raised in my paper `On the foundations of constructive mathematics — especially in relation to the theory of continuous functions‘ (2005, circulated as preprint since 2001).

I have yet to receive an answer…so here another invitation to comment. Don’t hesitate to point out where I go wrong.

Notice that a similar experiment can be done for the rational numbers (also of zero Lebesgue measure). I’m confident that such an experiment would not statistically yield that all reals are rational, but the reverse question remains interesting. These reverse questions were the motivation for the thread on `drawing a natural number at random’. This type of question is heavily entropy-related, I feel, and I will discuss this in the next post.

Finally, at this moment I consider PCTT+ the best scientific formulation of Laplacian determinism, which explains the title of these posts.


Foundations of probability, digital physics and Laplacian determinism

In this thread of posts from 2012, the possibility of drawing a natural number at random was discussed. In the previous post I rediscussed an entropy-related solution giving relative chances. This solution also explains Benford’s law.

In the 2012 thread, I was working on two fundamental questions, the first of which

QUESTION 1   Is our physical world finite or infinite?

was treated to some degree of satisfaction. But its relation to the second question still needs exposition. So let me try to continue the thread here by returning to:

QUESTION 2   What is the role of information in probability theory?

In my (math) freshman course on probability theory, this question was not raised. Foundations of probability were in fact ignored even in my specialization area: foundations of mathematics. Understandable from a mathematical point of view perhaps…but not from a broader foundational viewpoint which includes physics. I simply have to repeat what I wrote in an earlier post:

(Easy to illustrate the basic problem here, not so easy perhaps to demonstrate why it has such relevance.) Suppose we draw a marble from a vase filled with equal numbers of blue and white marbles. What is the chance that we draw a blue marble?

In any high-school exam, I would advise you to answer: 50%. In 98% of university exams I would advise the same answer. Put together that makes … just kidding. The problem here is that any additional information can drastically alter our perception of the probability/chance of drawing a blue marble. In the most dramatic case, imagine that the person drawing the marble can actually feel the difference between the two types of marbles, and therefore already knows which colour marble she has drawn. For her, the chance of drawing a blue marble is either 100% or 0%. For us, who knows? Perhaps some of us can tell just by the way she frowns what type of marble she has drawn…?

It boils down to the question: what do we mean by the word `chance’? I quote from Wikipedia:

The first person known to have seen the need for a clear definition of probability was Laplace.[citation needed] As late as 1814 he stated:

The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.

— Pierre-Simon Laplace, A Philosophical Essay on Probabilities[4]

This description is what would ultimately provide the classical definition of probability.

One easily sees, however, that this `definition’ avoids the main issue. Yet Laplace did not always avoid it:

Laplace([1776a]; OC, VIII, 145):

Before going further, it is important to pin down the sense of the words chance and probability. We look upon a thing as the effect of chance when we see nothing regular in it, nothing that manifests design, and when furthermore we are ignorant of the causes that brought it about. Thus, chance has no reality in itself. It is nothing but a term for expressing our ignorance of the way in which the various aspects of a phenomenon are interconnected and related to the rest of nature.

and:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

—Pierre Simon Laplace, A Philosophical Essay on Probabilities[37]

In the meantime I came across some work by Albert Tarantola, and this work is really heartening! In a seminal paper, Inverse problems = quest for information (written together with Bernard Valette), Tarantola already states that we should consider any probability distribution as an information state (a subjective one, even) regarding the phenomenon under study, and vice versa: every information state can be described by a probability distribution function on some appropriate model space.

Now we’re talking!

To my further surprise, Tarantola describes the quandary caused by measure density functions like f(x)= \frac{1}{x} (the integral diverges) and offers exactly the same solution: look at relative probabilities of events instead of absolute probabilities. To top it off, Tarantola emphasizes that the measure density function f(x)=\frac{1}{x} plays a very important role in inverse problems…
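To see (in my reading of Tarantola) why the divergent density f(x)=\frac{1}{x} is not fatal, here is a tiny Python illustration (the function name is mine): absolute probabilities cannot be normalized, but ratios of event weights are perfectly determinate, and they are scale invariant.

```python
from math import log

def weight(a, b):
    """Unnormalized weight of the event [a, b] under the density f(x) = 1/x,
    i.e. the integral of 1/x from a to b (for 0 < a <= b)."""
    return log(b / a)

# The total integral of 1/x diverges, so absolute probabilities are undefined,
# but relative probabilities of events make perfect sense:
print(weight(1, 2) / weight(2, 3))      # about 1.71: [1,2] vs. [2,3]
print(weight(10, 20) / weight(20, 30))  # the same ratio: 1/x is scale invariant
```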

So now I need to study this all, in order also to join these ideas to the perspective of digital physics and Laplacian determinism.

(to be continued)


An entropy-related derivation of Benford’s law

In this thread of posts from 2012, the possibility of drawing a natural number at random was discussed. A solution offering relative chances was given, and stated to be in accordance with Benford’s law.

Then, due to unforeseen circumstances, the thread remained unfinished. In fact it stopped precisely at the point for which I had started the thread in the first place :-).

I wish to return to this thread, but it seems worthwhile to repeat the result mentioned above in a different wording. Why? Well, searching on “Benford’s law” I didn’t find any comparable entropy-related derivation (perhaps I should say motivation) in the literature/internet. And perhaps more tellingly, Theodore Hill (when deriving Benford’s law from considering a `random’ mix of distributions, see A Statistical Derivation of the Significant-Digit Law) explicitly mentions the difficulty of drawing a natural number at random if the total probability must be 1, which would seem to exclude the density function \frac{1}{x}.

So the trick to turn to relative chances \frac{P(n)}{P(m)} seems new [except it isn’t, see the next post], and it does yield Benford’s law. The entropy-related motivation for these relative chances also seems to be new. The (relative) chances involved in drawing a natural number at random will play a role in the discussion to come (on Laplacian determinism, digital physics and foundations of probability theory). But first let us explicitly derive Benford’s law from these chances:

Solution to `drawing a natural number at random’:

* We can only assign relative chances, and the role of the natural number 0 remains mysterious.

* For 1\leq n,m \in \mathbb{N} let’s denote the relative chance of drawing n vs. drawing m by: \frac{P(n)}{P(m)}.

* For 1\leq n,m \in \mathbb{N}, we find that \frac{P(n)}{P(m)} = \frac{\log{\frac{n+1}{n}}}{\log{\frac{m+1}{m}}}

(* An alternative `discrete’ or `Zipfian’ case P_{\rm discrete} can perhaps be formulated, yielding: for 1\leq n,m \in \mathbb{N}, we find that \frac{P_{\rm discrete}(n)}{P_{\rm discrete}(m)} = \frac{m}{n}.)

The entropy-related motivation for these chances can be found in this thread of posts from 2012.

Now to arrive at Benford’s law, we consider the first digit of a number N in base 10 (the argument is base-independent though) to be drawn at random according to our entropy-related chances. We then see that the relative chance of drawing a 1 compared to drawing a 2 equals \frac{\log 2}{\log\frac{3}{2}}, which in turn equals \frac{^{10}\log 2}{^{10}\log\frac{3}{2}}. Since the probabilities of drawing 1, 2,…, 9 as first digit must sum to 1, and \sum_{i=1}^{9}\log\frac{i+1}{i}=\log 10, one finds that the chance of drawing i as first digit of N equals ^{10}\log\frac{i+1}{i}.
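A quick numerical check of this derivation (in Python; the names are mine): normalizing the relative-chance weights \log\frac{i+1}{i} over the nine possible first digits reproduces exactly the Benford probabilities ^{10}\log\frac{i+1}{i}.

```python
from math import log, log10

# relative-chance weights for first digits 1..9: P(i)/P(j) = log((i+1)/i) / log((j+1)/j)
weights = {i: log((i + 1) / i) for i in range(1, 10)}
total = sum(weights.values())  # equals log(10), so normalization is possible

for i in range(1, 10):
    normalized = weights[i] / total      # chance of i as first digit
    benford = log10((i + 1) / i)         # Benford's first-digit law
    print(i, round(normalized, 4), round(benford, 4))
# the two columns agree: 1 -> 0.3010, 2 -> 0.1761, ..., 9 -> 0.0458
```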

A second-digit law can be derived similarly: for example, add the relative chances for 11, 21, 31, 41, 51, 61, 71, 81, 91 and compare this with the sum of the relative chances for 12, 22, 32, 42, 52, 62, 72, 82, 92, to arrive at the relative chance of drawing a 1 as second digit vs. drawing a 2 as second digit. And so on for third-digit laws etc.

This shows that the second-digit distribution is more uniform than the first-digit distribution, and that each next-digit law is more uniform than its predecessor, which fits with our entropy argument.
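For those who want to see this worked out, here is a small sketch (again Python, names mine) of the second-digit law computed exactly as described above, together with a comparison of the spread (largest minus smallest probability) of the first- and second-digit distributions.

```python
from math import log

def first_digit_law():
    w = [log((i + 1) / i) for i in range(1, 10)]
    s = sum(w)
    return [x / s for x in w]

def second_digit_law():
    # for second digit d, add the relative chances of 1d, 2d, ..., 9d
    w = [sum(log((10 * k + d + 1) / (10 * k + d)) for k in range(1, 10))
         for d in range(10)]
    s = sum(w)
    return [x / s for x in w]

for name, law in [("first digit ", first_digit_law()),
                  ("second digit", second_digit_law())]:
    print(name, [round(p, 4) for p in law],
          "spread:", round(max(law) - min(law), 4))
# first-digit spread is about 0.2553, second-digit spread about 0.0347,
# so the second-digit law is indeed much closer to uniform
```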

In the next post the thread on information and probability will be continued.


Is Cantor space the injective continuous image of Baire space?

Is Cantor space the injective continuous image of Baire space? This is an intriguing question, since its answer depends on which axioms one adopts.

First of all, in classical mathematics (CLASS), the answer is yes. However, in intuitionistic mathematics (INT) the answer is no. Then again, in recursive mathematics (RUSS) the answer is a strong yes, since in RUSS Baire space and Cantor space are homeomorphic.

For CLASS I’ve tried to find references to the above question in publication databases and with Google, but I came up short. Many texts prove that any uncountable Polish space P contains an at most countable subset D such that P\setminus D is the continuous injective image of Baire space. It is easy to show this for Cantor space, but what if we drop D altogether? Well, it is not so difficult to constructively define a continuous injective function from Baire space to Cantor space which in CLASS is surjective (whereas in INT surjectivity can be proven to fail for all such functions). I would be surprised if this has not been done before, but like I said I cannot find any references. Therefore let’s call it a theorem:

Theorem (in CLASS) Cantor space is the injective continuous image of Baire space.

Proof: We constructively define the desired injective continuous function f, using induction. f will send the zero-sequence 0,0,... to itself. The f-image of other sequences starting out with n will be branched off from the zero-sequence at appropriate `height’.

To this end, we inductively define f' on finite sequences of natural numbers. \underline{0}m denotes the sequence 0,...,0 of length m. For finite sequences a, b\in \mathbb{N}^{\star} we let a\star b denote the concatenation. For any \alpha\in\mathcal{N} let \underline{\alpha}n denote the finite sequence formed by the first n values of \alpha (for n=0 this is the empty sequence).

Let g be the bijection from \{(n,m)\mid n,m\in\mathbb{N}, n,m >0\} to \mathbb{N} given by g(n,m)=2^{n-1}\cdot(2m-1)-1. Then for m>0 we have 2m-2=\min(\{g(n,m)\mid n\in\mathbb{N}, n>0\}).

For n>0 put f'(n)= \underline{0}(2n-2)\star 1. For n,m>0 put f'(\underline{0}m\star n)= \underline{0}(2\cdot g(n,m)+1)\star 1. For m>0 put f'(\underline{0}m)=\underline{0}(2m-2).

For induction, let a\in\mathbb{N}^{\star} be a finite sequence not ending with 0 and suppose f'(a) has been defined. Then for n>0 put f'(a\star n)= f'(a)\star\underline{0}(2n-2)\star 1. For n,m>0 put f'(a\star\underline{0}m\star n)= f'(a)\star\underline{0}(2\cdot g(n,m)+1)\star 1. For m>0 put f'(a\star\underline{0}m)=f'(a)\star\underline{0}(2m-2).

Finally, for \alpha\in\mathcal{N} let f(\alpha)=\lim_{n\rightarrow\infty} f'(\underline{\alpha}n). It is easy to see that f is as required. (End of proof).
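For concreteness, here is a small Python rendering of the finite approximations f' (the code and the block-by-block reading of the definition are mine; g and the clauses for f' follow the proof above). Extending a finite sequence a extends f'(a), and the image consists of 0s and 1s only, so the limit f(\alpha) is indeed a point of Cantor space.

```python
def g(n, m):
    """The bijection from {(n, m) : n, m > 0} onto N: g(n, m) = 2^(n-1) * (2m - 1) - 1."""
    return 2 ** (n - 1) * (2 * m - 1) - 1

def f_prime(a):
    """f'(a) for a finite sequence a of naturals, per the inductive clauses:
    a nonzero entry n preceded (since the previous nonzero entry) by m zeros
    contributes the block 0^(2n-2) 1 if m = 0, and 0^(2*g(n,m)+1) 1 if m > 0;
    a trailing run of m zeros contributes 0^(2m-2)."""
    out, zeros = [], 0
    for v in a:
        if v == 0:
            zeros += 1
        elif zeros == 0:
            out += [0] * (2 * v - 2) + [1]
        else:
            out += [0] * (2 * g(v, zeros) + 1) + [1]
            zeros = 0
    if zeros > 0:
        out += [0] * (2 * zeros - 2)
    return out

print(f_prime([1, 1, 1]))     # [1, 1, 1]
print(f_prime([3]))           # [0, 0, 0, 0, 1]
print(f_prime([0, 0, 2]))     # 0^(2*g(2,2)+1) = 0^11, followed by a 1
print(f_prime([2, 0, 0, 0]))  # [0, 1] followed by 0^4
```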

Clearly, even in CLASS the inverse of f is not continuous (otherwise we would also have that Baire space is homeomorphic to Cantor space!). This clarifies why the constructively defined f fails to be surjective in INT and RUSS, even though in INT and RUSS we cannot indicate \alpha in \mathcal{C} such that f(\beta)\#\alpha for all \beta\in\mathcal{N}.

Consider the recursive sequence \alpha = 0,0,... given by \alpha(n)=0 if there is no block of 99 consecutive 9’s in the first n digits of the decimal expansion of \pi, and \alpha(n)=1 otherwise. We see that \alpha is in \mathcal{C}, but with current knowledge of \pi we cannot determine any \beta\in\mathcal{N} such that f(\beta)=\alpha (go ahead and try…:-)).

In INT we can easily prove:

Theorem: (INT) There is no continuous injective surjection from Baire space to Cantor space.

Proof: By AC11 such a surjection has a continuous inverse, which contradicts the Fan Theorem. (End of proof)

Now in recursive mathematics (RUSS) the Fan Theorem does not hold, and Cantor space has an infinite cover of open subsets which has no finite subcover. This enables one to define a recursive homeomorphism k from Baire space to Cantor space.

An interesting symmetry, since in CLASS and INT k fails to be surjective, although this time in INT we cannot even indicate an \alpha in \mathcal{C} for which we cannot find \beta\in\mathcal{N} such that k(\beta)=\alpha. (In CLASS we `can’ indicate such an \alpha, but only vaguely, since any sharp indication is necessarily recursive!) So in CLASS and INT one relies on (the intuition behind) the axioms for the statement: not all sequences of natural numbers are given by a recursive rule.

This intuition can be questioned, see my paper `On the foundations of constructive mathematics — especially in relation to the theory of continuous functions‘ (2005), and the book Natural Topology (2012).

But this post is just for fun. I wonder what happens if under f (see above) we pull back the compact topology of Cantor space to Baire space…probably not very interesting but let me ponder on it.


Addenda and errata for Natural Topology (2nd ed.)

[Never my finest moments: discovering flaws in what I tried so hard to create as a perfect piece of work…:-)]

I’m rereading the second edition of Natural Topology (available at http://www.fwaaldijk.nl/natural-topology.pdf, as well as on arXiv), and naturally I am spotting some omissions, typos and even errors.

I will make a list of these in this post, and I will update this list until I replace the second edition with a third (which then in turn I fear will still be in need of a similar list, but one hopes for improvement along the line).

The most important change to be made is related to theorem 1.2.2. The given proof in A.3.1 of this (beautiful) theorem is partly deficient, because it fails to take into account the strict requirement (i) in definition 1.1.2. of `morphism’. At the time I saw no need to relax this requirement, since everything seemed to work smoothly, and in all `regular’ situations this requirement is fulfilled. So I opted for some form of aesthetic optimality.

In hindsight, I should have noted that the requirement (i), which is phrased for dots, is too restrictive in a pointwise setting. Thankfully the remedy is easy: simply replace this with the slightly less restrictive pointwise phrasing (see below), and all is well. No need even to change any other wordings, in proofs or elsewhere.

But in almost all relevant situations, the requirement (i) is easily met. I would like to mention this in an elegant way, without making a separate distinctive definition of say `morfism’ (to be pondered on). The other addenda and errata are all very minor, so far.

Errata:

(X) Definition 1.1.2. should read:

“Let (\mathcal{V},\mathcal{T}_{\#_1}) and (\mathcal{W},\mathcal{T}_{\#_2}) be two natural spaces, with corresponding pre-natural spaces (V,\#_1, \preceq_1) and (W,\#_2, \preceq_2). Let f be a function from V to W. Then f is called a refinement morphism (notation: \preceq-morphism) from (\mathcal{V},\mathcal{T}_{\#_1}) to (\mathcal{W},\mathcal{T}_{\#_2}) iff for all a,b\in V and all p=p_0,p_1,\ldots,\ q=q_0,q_1,\ldots\in\mathcal{V}:

(i) f(p)=_{\rm D}\ f(p_0), f(p_1), \ldots is in \mathcal{W} (`points go to points’)

(ii) f(p)\#_2 f(q) implies p\#_1 q.

(iii) a\preceq_1 b implies f(a)\preceq_2 f(b) (this is an immediate consequence of (i))

As indicated in (i) above we will write f also for the induced function from \mathcal{V} to \mathcal{W}. The reader may check that (iii) follows from (i). By (i), a \preceq-morphism f from (\mathcal{V},\mathcal{T}_{\#_1}) to (\mathcal{W},\mathcal{T}_{\#_2}) respects the apartness/equivalence relations on points, but not necessarily on dots, since f(a)\#_2 f(b) does not necessarily imply a\#_1 b for a,b\in V. In practice, however, this stronger condition holds very frequently.”

Addenda:

(O)(in the terminology of erratum (X) above):

If f(a)\#_2 f(b), then by (ii) we know that x\#_1 y for all x\prec a, y\prec b in \mathcal{V}. Therefore, if necessary we could `update’ the apartness to ensure a\#_1 b…but we cannot guarantee that this is simultaneously possible for all similar pairs of dots c,d in V.

However, most spaces (\mathcal{V},\mathcal{T}_{\#}) naturally carry an apartness on dots such that if x\# y for all x\prec a, y\prec b in \mathcal{V}, then a\# b. In this situation, (ii) of the definition becomes equivalent to (ii’): f(a)\#_2 f(b) implies a\#_1 b for all a,b\in V. (This (ii’) is part of the original definition 1.1.2., which should be replaced by the above definition.)

(OO) It should be noted that (\mathcal{V}^{\wr\wr}, \mathcal{T}_{\#}) is \preceq-isomorphic to (\mathcal{V}^{\wr}, \mathcal{T}_{\#}). This means that it always suffices to look at \mathcal{V}^{\wr}.

(OOO) It should be noted that all natural spaces ‘are’ spreads already, when looking at their set of points. This is another (perhaps easier) way of seeing that any natural space is spreadlike. Let (\mathcal{V},\mathcal{T}) be a natural space with corresponding pre-natural space (V,\#, \preceq). Assume h is an enumeration of \{(a,b)\in V\times V\mid a\# b\}. To create a point x=x_0, x_1, ... in \mathcal{V}, one can start with any basic dot a as x_0. Then one chooses m_0\in \mathbb{N}, and for the next m_0 values of x one is free to choose basic dots x_{m_0}\preceq\ldots\preceq x_0, but at stage m_0+1 one must choose for x_{m_0+1} a basic dot c which is apart from at least one of the constituents of h(0). Then one chooses m_1\in \mathbb{N}, etc. Therefore we see that, if one disregards the partial order, we did not create any new structure outside of Baire space. And there is no problem as to whether our points are sets (in contrast to formal topology, where there seems to be a problem in general whether the points of a formal space form a set).


We live through maps (1a): Chapter one – Wir machen uns Bilder der Welt

[continued from the previous post, this series of posts is a complete translation of `Philosophy Paper, written by F.A. Waaldijk, student of mathematics, student number 8327661, in the year 1991′]

[—–fifth post in the translation—–]

Chapter one

Wir machen uns Bilder der Welt[1]; why, how?

Why people feel the need to express themselves, to express what goes on in their world, is not entirely clear. Some say that it is because people wish to grasp[2] their experiences, to give themselves more grip on their world. And, so they say, the only way to do so is to condense those experiences, that world, for instance in a stone sculpture, or a painting, or in words. A good painting is a good painting because it is a condensation of what you experience, what you think, what you feel. A condensation which is comprehensible at least, because it leaves out everything which in daily life makes understanding impossible.

For example, who really understands well how a city like Nijmegen functions? Who knows really well what all happens in this city, what patterns, what developments, in the social field, in the economic field, in the technological field, in nature? And how do these fields interrelate? As soon as you try to think about this seriously you start to get dizzy. It is simply too complicated. In practice it boils down to this: you occupy yourself with a small piece, and there are others who occupy themselves with the glueing together of all those small pieces, etc. In this way the city is administered. But nobody understands the totality.

When I bicycle through Nijmegen, it strikes me that I know the town primarily in the following way: I know how I must cycle from one place to another, if I want to do it as fast as possible, or if I want to encounter as little car traffic as possible, or if I want to buy some bread on the way, or… It appears there are a large number of routes in my head, from which I can choose, all according to my mood and my need. It is however not the case that I hold those routes in my head in all detail, I simply know enough to be able to make a decision on each junction, from the Berg en Dalseweg left on the Corduwenerstraat, cross the Hengstdalseweg, go up, go right on the Postweg and then on the corner with the Broerdijk there is a baker whose name I do not mention because ve would not give more than five guilders for this form of advertising, which is laughable.

When I follow such a route in my head, I only ever know small pieces (crossroads, some bends, some hills, etc.) which I glue together. And those small pieces I only know from a cycling and pedestrian perspective, by car my knowledge of Nijmegen is far more limited. Sometimes it is hard to glue those small pieces together well. Suppose I ask you for the shortest way (by bike) from the Waalkade to the Radboud hospital such that you encounter three bakers and three hairdressers (in that sequence) on the way?

Put differently, my knowledge of Nijmegen consists of an enormous collection of (small) maps. When I wish to go from one place to another, I glue a number of these maps together as well as they will allow, for as long as it takes to end up with something that promises to be a good route. But these maps are not always of the same character: some are meant for the bicycle, others for bus and foot, some indicate the elevation differences, others the probability of flat tires (bottle bank), etcetera, etcetera. And maybe the most important thing is that these maps are strongly imprecise; they are rough approximations, rough condensations which at least are comprehensible since they leave out almost everything which in the real situation makes comprehension impossible.

When glueing together my small maps it sometimes happens that these imprecisions add up, and that the resulting route is not the right (or the best) one. Sometimes while cycling I find that things are just a little different from what I thought, and that I should have taken the Groesbeeksedwarsweg after all. This then gives me a new small map, which in this situation is more accurate than the small maps I already had, but which in another situation might hinder me again. All in all I would get to know Nijmegen better and better, were it not for the fact that Nijmegen itself also changes. Therefore my maps age, and I have to keep on providing old maps with a stamp `still reasonably valid’ or a stamp `really no longer valid’, and sometimes a stamp `hooray! valid again’. Apart from that, I must also continually make new maps.

You may say, reader: why don’t you buy a city map, and every five years a new one? Good question. I’m too lazy for it, I suppose. But, so I can state in my defense, for a city map the same thing holds as for my little maps. It remains a rough approximation, a rough condensation; I will always keep encountering surprises. A city map, true to its name [in Dutch: `cityflatground’], does not indicate elevation differences. And it doesn’t tell me either at which baker’s you can buy tasty bread and at which baker’s you can buy tasty almond paste pastries, where to find the prettiest catalpa and where the Japanese cherry in bloom.

After this lengthy introduction I feel strengthened to write down the main thesis of this paper:

main thesis:

The mythos, the sum total of all our knowledge, all our understanding of the world, consists of maps. These maps are small and very divergent in character. Some are rational in nature, some emotional, some differ otherwise. Some are obsolete, some are new, some are fragrant, others visual, some verbal. Some are about spring, others about Wittgenstein. Very importantly, all these maps are imprecise, rough approximations, rough condensations, rough simplifications.

Now what happens if in a certain situation I ask myself the question: how should I behave?, or equivalently: what should I do? This means I’m asking for an acceptable route, preferably a good route, which leads me from today to tomorrow, from this situation to the next. To find that route I consult my maps of the situation in which I happen to be, and I try to glue a number of these small maps together.

This is complicated by the fact that my maps are not only imprecise, and strongly divergent in character, but also often contradictory. In a certain sense it therefore is convenient to not have too many maps at your disposal. Another disadvantage of having many maps, is that at a certain point you occupy yourself exclusively with the maps, and no longer with the world itself. When was the last time, reader, that you took the time to brush your fingers over a pine cone, to look at it, smell it, throw it in the air, play football with it, put it in your mouth to see what it tastes like?

For example, someone offers a dog a lit cigarette. Most likely the dog will reject this offer, trusting its nose (this stinks) and its fear of fire. A human being of, say, thirteen summers has a more difficult job of it; vir maps could be somewhat like this: it stinks, it is not allowed by my mom and dad, that’s just what makes it exciting, it’s bad for your lungs, if I refuse I won’t belong, etc. By the time ve has taken a decision, the cigarette has burnt out (well…in a manner of speaking).

[—–to be continued in the next post—–]


1. freely after Wittgenstein ({3}, 1-1.21 ; 2.063-2.12)

2. try to take `grasp’ as literally as you can [translated from the Dutch `begrijpen’ which means `grasp’ and `understand’ at the same time]
