A conceptual model for human image recognition: combining passive memory with active imagination

When I was in college (in the ’80s), the question of why humans outperform computers at image recognition was already receiving some attention. At the time an idea came to me, and it still seems relevant enough to write down. No doubt similar ideas have been put forward, but some repetition will do no harm.

In this post I describe a simple conceptual model of how human image recognition could work, given the obvious limitations of human memory capacity compared to computers. A key observation is that although we are good at passive recognition, our active visual memory seems very limited. We do not seem to store entire images in our memory. If we are asked to visualize objects, faces, or scenes in our mind, we usually find it very hard to produce truly `detailed’ mental imagery.

The model also offers an explanation for the déjà vu phenomenon.

Recent developments and background
Nowadays there are areas of image recognition / classification in which computers outperform humans, so the question has evolved a bit. But in the general field of image recognition, the feeling is still that humans are generally better than computers…so far.

Stanford University (Andrej Karpathy with Fei-Fei Li) in collaboration with Google has recently announced a significant improvement in artificial-intelligence image recognition (New York Times article November 2014, see here for the Stanford technical paper).

Even more recently, Amirhossein Farzmahdi et al. (at the Institute for Research on Fundamental Sciences in Tehran) published a paper on neural-network-based face-recognition software (review; for the paper on arXiv see here), derived from studies of primate brains in relation to face recognition. Although still not nearly as good as humans, the software at least shows traits similar to human face-recognition performance.

Holistic face-processing seems to be the human way (`hotly debated yet highly supported’ according to the abstract of the above paper), and neuroscience describes specialized areas in the brain for face recognition.

A conceptual model for human image recognition
Enough background. On to the conceptual model promised in the title. A main question to me in college was:

How can one devise a recognition machinery which does not take up enormous memory?

A key observation seems to be that although we are good at passive recognition, our active visual memory is very limited. We do not store entire images in our memory. If we are asked to visualize objects, faces, or scenes in our mind, we find it very hard to produce truly `detailed’ mental imagery.

Nonetheless, given some time, we can come up with more and more details. And of course we are extremely good at passive recognition, even if the face we see has been altered by lighting, aging, facial hair, you name it. But can we always immediately put a name to a face? No, we can’t. We often struggle: `… I’m sure I know this person from somewhere, but was it high school? Some holiday? The deli near my previous job? …’

And then, slowly, we can enhance our recognition by going down such paths: imagining the person a bit younger perhaps, or with a shovel, or in this deli in an employee’s uniform…until we hit on a strong sense of recognition and say: `Hey Nancy, wow, I almost didn’t recognize you with those sunglasses and short hair, it’s been a long time.’

This leads to the following conceptual model. Possibly, our image recognition uses two components: one-dimensional passive recognition and more-dimensional active imagination.

The first component is one-dimensional passive recognition. By this I mean that visual data is generally not stored, but memory-processed in such a way that when similar visual data are observed, a sense of recognition is triggered. One-dimensional: from 0 (no recognition at all) to 1 (sure sense of recognition).

So when we observe, say, a face, our brain does not store actual `pixels’, but instead creates some sort of tripwire. Or better still: a collection of tripwires. These tripwires then give off a signal when a similar face is observed. The more similarity, the stronger the signal (which produces the sensation: `hey, I’ve seen this face before (or close)’).

Then the second component comes into play: more-dimensional active imagination. By this I mean an active imaging, which changes components of the observed image, with the express purpose of amplifying the tripwire signal (the passive recognition sense). Suppose I look at the face before me, imagine it without a beard, and the tripwire signal gets stronger…then I am one step closer to recognizing the face. Next I picture this person at my old college, but the signal gets weaker…so next I search my job history…and I hit a stronger signal at my third job (I still don’t know who it is, but I am getting closer)…etc.

In this way, without storing large `files’, it should be possible to reach high levels of passive recognition. This does depend on creating very good tripwires and on having a good active imagination. Such a system would favour `holistic’ recognition (in agreement with scientific findings), because details are not stored separately.
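To make the two components a bit more tangible, here is a toy sketch in Python. Everything in it is hypothetical and merely illustrative: the `tripwires’ are random hyperplanes over a feature vector, the stored memory is only their on/off pattern (no `pixels’), and `active imagination’ is a greedy search over small edits that tries to amplify the passive signal:

```python
import random

random.seed(0)
DIM, N_TRIPWIRES = 16, 64

# A "tripwire" is a random hyperplane: it fires (bit 1) when the
# feature vector lies on its positive side.  Only these N_TRIPWIRES
# bits are stored per remembered face, never the image itself.
tripwires = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_TRIPWIRES)]

def signature(features):
    return [int(sum(w * f for w, f in zip(tw, features)) > 0) for tw in tripwires]

def recognition(features, stored_sig):
    """Passive recognition: fraction of tripwires that agree, from 0 to 1."""
    sig = signature(features)
    return sum(a == b for a, b in zip(sig, stored_sig)) / N_TRIPWIRES

# Memorize a face as its tripwire signature only (64 bits here).
nancy = [random.gauss(0, 1) for _ in range(DIM)]
stored = signature(nancy)

# Later: Nancy with "sunglasses and short hair" (perturbed features).
altered = [f + random.gauss(0, 0.8) for f in nancy]
base = recognition(altered, stored)

# Active imagination: try small candidate edits and keep only those
# that amplify the passive recognition signal.
best, best_score = altered, base
for _ in range(200):
    i = random.randrange(DIM)
    candidate = list(best)
    candidate[i] += random.gauss(0, 0.5)
    s = recognition(candidate, stored)
    if s > best_score:
        best, best_score = candidate, s

print(f"signal before imagining: {base:.2f}, after: {best_score:.2f}")
```

Note how little is stored per remembered face, and how the imagined edits can only raise, never lower, the best recognition score, mirroring the `getting closer’ sensation described above.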

That’s almost all for now. In the recent news on image recognition software I haven’t seen the idea of `active imaging to enhance passive recognition’ come up (but that doesn’t mean it is not used). Oh, and finally: how does this model explain déjà vu?

Well, that is really easy. According to the model, déjà vu occurs when a tripwire is falsely yet strongly triggered. The brain is flooded with a strong sense of recognition which has no basis in any factual previous experience. If you have ever experienced déjà vu, you will likely do so again :-). If it concerns a situation (`I’ve been in this exact situation before’) you could try to see whether you can predict what will happen next. According to this model you can’t, but the feeling of recognition will still only slowly die away.

[Update 17 Feb:]
In this recent article on face detection, what I call a `tripwire’ is called a `detector’, and a series of tripwires is called a `detector cascade’.


Collective Intelligence seems a bigger threat than Artificial Intelligence

Recently both Stephen Hawking and Bill Gates have voiced their concern over Artificial Intelligence (AI), warning that AI could possibly become a threat to humanity in the future.

This prompts me to (finally) write down some thoughts on Collective Intelligence (CI), which is also sometimes referred to as swarm intelligence or hive intelligence (hive mind) when not dealing with humans. CI refers to the idea that humans can create a hive mind – even unknowingly. (As a primer you could read Collective Intelligence in Humans: A Literature Review by Juho Salminen.)

Of course a fundamental question in regard to hive intelligences is: does an intelligent hive have self-awareness? Somehow we `always’ associate intelligence with self-awareness, but to me this might well be because we have a hard time picturing intelligences which differ from ours. However, even if a CI made up of humans had self-awareness, those humans would be unlikely to be aware of it. Do ants know that their ant hill is intelligent?

To me it seems likely that CI is already a reality. In this view there already exist non-human intelligences which are stronger than human intelligence. Consider any large human organization (corporation, religion, country, …) and ask whether it displays signs of hive intelligence (such as those seen in ant hills):

  • Large human organizations (LHOs) have a strong tendency to self-preservation.
  • LHOs compete fiercely for resources.
  • LHOs are largely independent of the individuals of which they are comprised. Anyone is replaceable, although some replacements impact more than others.
  • LHOs learn and adapt. They retain memories. They have active long-term strategies as well as surviving tactics.
  • The individuals which help form the LHO are usually quite differentiated according to the tasks they perform. The factory worker is unlikely to be able to come up with marketing and sales strategies; and, vice versa, the marketing and sales analyst is unlikely to be able to craft the product to be sold.
  • Communication `internal’ to the LHO is usually quite different from communication with other LHOs. There are secrets, there are barriers, there is misunderstanding, there is difference in speed and informality of communication.
  • Internal efficiency is a key driving force in the development of LHOs. There is a continuous pressure to perform more efficiently. This pressure comes from the fierce competition for resources, and any LHO which does not adapt quickly enough, efficiently enough, will be swept aside and dismantled (devoured) by those who do.
  • There is pressure on individuals to conform to the `code’ or `identity’ of the LHO to which they belong.

If any of the above rings true to you, then I can continue to where I was headed:

CI poses a bigger threat to humans than AI.

Why? Let’s see. Have you lately had any thoughts similar to:

  • I am on a treadmill, we are all on a treadmill. Fast is seldom fast enough. Good is only good enough for a very short time.
  • If I don’t conform to `the norm’ I will be cast aside, left behind, ridiculed, ignored.
  • If I were completely free and independent of income concerns, I would do things very differently.
  • If I were completely free and independent of social concerns, I would do things very differently.
  • I have to live up to the expectations of a) employer b) peers c) family d) friends e) society f) myself …
  • I have to keep up with the latest developments. New technology, social platforms, new hypes and raves, the news, I have to be up-to-date.
  • I have to communicate, participate in networks, just in order to get by socially and professionally.
  • I have to profile myself, promote myself, market myself, advertise myself, prove myself more and more. Just doing my job does not cut it anymore. To administrators, to peers, it is important that I am innovative, pushing borders, and pushing myself to new `heights’.
  • I have to be seen as a responsible-enough member of society. Law-abiding and not amoral.
  • I have to find money for a) my project b) my research c) my prototype d) my dream … In order to raise this money I need to convince people that a), b), c), d)… is more worthy than those of others.

I can go on like this, but I hope my point is clear. Most of us are being `forced’ by various LHOs to conform more and more to role patterns that are beneficial to these LHOs but possibly detrimental to us.

The ant hill only cares about having enough able workers and soldiers to survive and hopefully thrive and expand. It does not care about what kind of life these workers and soldiers lead.

Moreover, if ants stray too far from the ant hill and pick up too many strange smells, they are no longer recognized as `own’ and thus become prone to attack from the other ants. To me this mirrors the increasing difficulty for individuality in our society.

In the past decades it has become more and more difficult to operate on an individual basis. The individual voice is slowly being drowned out. Non-conformity becomes harder. The worth of our endeavours is increasingly measured in terms of the social response to them: citation counts, Facebook likes, number of followers, and … money. Money is an easily underestimated factor in the workings of CI, but it is the natural `reward’ for any CI’s exertion. It can easily be compared to packets of sugar for the ant hill.

Modern ICT has tremendously increased the capabilities for CIs to expand rapidly. Which is why I expect to see the above effects crystallize more clearly in the near future.

So, to recap, I believe we are already seeing Collective Intelligences at work, influencing our lives more heavily than we would like. Personally, I can only hope that we are capable of preventing CIs from taking over completely, but to be honest I doubt it.

And if it ever came to a contest between AI and CI, my money would be on the latter…

[Update 16 Feb:]
Thanks to Toby Bartels for pointing out on Google+ that CI and AI can be seen more compellingly as two sides of the same coin:

“I’m not sure that there’s much difference. An artificial general intelligence (that is, the sort of artificial intelligence that worries people, as opposed to specialized expert systems) is unlikely to be developed by an individual in a garage. It’ll be developed by a corporation (or worse, a military), and it will work against us regardless of whether it stays in or escapes its box.”


The arrow of time (4): entropy and reversibility, pictured in designs

So…I do not see any convincing answers in physics to the basic question `what is time?’. To wrap up this complicated subject for now, I will show some half-designs for the IMAPP symposium (which I did not elaborate on, since the Francesco del Cossa design that I showed earlier was clearly superior), and reformulate earlier questions on time and entropy.

The first question is already hard to formulate without lapsing into inaccuracy as well as absurdity. But here goes anyway: suppose we have two situations / configurations S1 and S2 of a closed system U [for Universe] such that S1 is exactly the same as S2 in every conceivable non-time-related way (particles, waves, constellations, … down to every last photon). Then I would say that

A: Within U there is no time difference between S1 and S2, in other words they are also time identical.
B: Therefore time in U corresponds to (some measure of) the difference between configurations of U.

Hence my earlier `formula’: ΔTime ≈ ΔEntropy.

Thus the question of (ir)reversibility, known as the `arrow of time’, could in my eyes well be a tautology. To see this, consider the statement: `we cannot lower the entropy of the system U when going forward in time’. When ΔTime `equals’ ΔEntropy, this more or less becomes equivalent to: `we cannot lower the entropy of the system U when the entropy is increasing’…

Also, it raises the question whether time is a `local’ phenomenon (non-uniform). One half-design that I made for the IMAPP symposium centered around this entropy idea:

arrow of time, entropy

(click for enlargement, you might notice different `reversal arrows’ which I added pictorially, to express the questions surrounding this subject)

Next, in my eyes the question of causality and reversibility is intimately connected to our own consciousness. We seem to experience things exclusively in the present, but! we do not even know what `experience’ and `the present’ mean. Anything we experience stems from neurons firing in our brain; anything we see/hear/sense in this way has a time lag compared to the stimulus which provoked our senses…

Somehow we retain memories (unreliable!) of past events, and we experience time as moving forward, probably because our consciousness is hardwired that way. See Immanuel Kant’s Kritik der reinen Vernunft; we quote from Wikipedia:

Kant proposed a “Copernican Revolution” in philosophy. Kant argued that our experiences are structured by necessary features of our minds. In his view, the mind shapes and structures experience so that, on an abstract level, all human experience shares certain essential structural features. Among other things, Kant believed that the concepts of space and time are integral to all human experience, as are our concepts of cause and effect.[3] One important consequence of this view is that one never has direct experience of things, the so-called noumenal world, and that what we do experience is the phenomenal world as conveyed by our senses. These claims summarize Kant’s views upon the subject–object problem.

In my humble and ignorant opinion, Kantian philosophy is not eclipsed by Einstein’s relativity and its concomitant spacetime. A real discussion of relativistic spacetime is beyond both me and the scope of this series of blog posts, but perhaps it is relevant to notice that causality in relativistic spacetime hinges on `light cones’ (image by Stib):

World_line.svg

When it comes to reversibility and the arrow of time, the Kantian crux seems to me: what do we mean by the word `causality’?

If our consciousness were hardwired `the other way round’, could we not perceive reality as follows: a billiard ball rolling `kauses’ a billiard cue to hit the ball, which in turn `kauses’ a billiard player to appear at the billiard table, etc. etc.

With this in mind I made the following Escherian half-design:

arrow of time, Escherian style, frank waaldijk
(click for enlargement)

and since I found this half-design too sterile, I also made another one, based more on entropy and human experience (which is approximate, vague, sketchy):

arrow of time, Escherian sketch style, frank waaldijk
(click for enlargement)

This last design was a close contender (but lost to the Del Cossa design, see the first posts in this series); however, it lacked the depiction of a human interaction, intervention, … I also tried yet another half-design before finally picking the Del Cossa design:

Centauresse Vezelay, arrow of time, frank waaldijk

(click for enlargement, the arrow shooter comes from a centaur sculpture in the Basilique Sainte-Marie-Madeleine of Vézelay, the original photo was taken by Vassil)

In the end, the Del Cossa design, apart from its visual strength, had another interesting feature which proved decisive: the golden circle held by its arrow-bearing protagonist. To me this circle symbolized both mathematics and two other conundrums of time: can there be a first moment in time? Is time circular (another way of looking at reversibility)?

arrow of time, frank waaldijk
(click for enlargement, almost final design, just the sponsors omitted)

Hope you enjoyed this cross-over between science, philosophy and graphical design!

Postscript: if you’ve come this far, then the following very recent article should interest you: new quantum theory could explain the flow of time. It seems that every few years or so, a new insight into `the arrow of time’ is claimed…which in a way illustrates how hard the problems surrounding time really are.

Entropy, entanglement, energy dispersal… they all start with an E, so perhaps I could just pimp up my `formula’ thus: ΔT = H(ΔE), where H is some appropriate function (multiplication by a constant would be nice, but is probably far too simplistic).

Isn’t cosmology just the most marvelous religion? The really amazing part of physics to me is that we actually succeed in increasing our capabilities to manipulate Nature, even when (in my eyes) we remain largely ignorant of the real mechanisms at work. On the other hand, I’m highly pessimistic about whether this increase in manipulative activities will be beneficial to humanity, and life on earth in general.

[End of series]


The arrow of time (3): what is it, actually?

No, but honestly: what is meant by the term `the arrow of time’? I cannot get my head around it, unfortunately. This seemingly puts me in an unenviable minority position, not helped by my obvious ignorance of relevant theories in modern physics.

My problem is this: If we do not know at all what time is, then how can we determine that there is an arrow of time?

As an illustration, let us look at the second law of thermodynamics, as related to the arrow of time via entropy (quote from wikipedia: entropy (arrow of time)):

Entropy is the only quantity in the physical sciences (apart from certain rare interactions in particle physics; see below) that requires a particular direction for time, sometimes called an arrow of time. As one goes “forward” in time, the second law of thermodynamics says, the entropy of an isolated system can increase, but not decrease. Hence, from one perspective, entropy measurement is a way of distinguishing the past from the future. However in thermodynamic systems that are not closed, entropy can decrease with time: many systems, including living systems, reduce local entropy at the expense of an environmental increase, resulting in a net increase in entropy. Examples of such systems and phenomena include the formation of typical crystals, the workings of a refrigerator and living organisms.

Entropy, like temperature, is an abstract concept, yet, like temperature, everyone has an intuitive sense of the effects of entropy. Watching a movie, it is usually easy to determine whether it is being run forward or in reverse. When run in reverse, broken glasses spontaneously reassemble, smoke goes down a chimney, wood “unburns”, cooling the environment and ice “unmelts” warming the environment. No physical laws are broken in the reverse movie except the second law of thermodynamics, which reflects the time-asymmetry of entropy. An intuitive understanding of the irreversibility of certain physical phenomena (and subsequent creation of entropy) allows one to make this determination.

By contrast, all physical processes occurring at the microscopic level, such as mechanics, do not pick out an arrow of time. Going forward in time, an atom might move to the left, whereas going backward in time the same atom might move to the right; the behavior of the atom is not qualitatively different in either case. It would, however, be an astronomically improbable event if a macroscopic amount of gas that originally filled a container evenly spontaneously shrunk to occupy only half the container.
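To get a feel for `astronomically improbable’ in that last example, here is a small back-of-the-envelope sketch (my own toy illustration, not taken from the quoted article): if each of N gas molecules independently ends up in either half of the container, the chance that all of them sit in one given half is 2^{-N}.

```python
from math import log10

def log10_prob_all_in_one_half(n_particles):
    """log10 of the probability 2**-N that all N molecules occupy one given half."""
    return -n_particles * log10(2)

# From a handful of particles up to roughly a mole of gas:
for n in (10, 100, 6.022e23):
    print(f"N = {n:.3g}: probability about 10^({log10_prob_all_in_one_half(n):.3g})")
```

For a mole of gas the exponent is of the order of minus 10^23, which is what makes the reversed movie so easy to spot.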

In previous posts I already conjectured a similar, perhaps slightly more radical, view on time and entropy: ΔTime ≈ ΔEntropy.

The above conjecture is vague and needs improvement; the gist is that entropy and time are cut from the same cloth. But as I said, I am rather hampered by my overwhelming ignorance of modern physics.

Still, it seems to me that what is usually called `the arrow of time’ depends on an experimentally unconfirmed, idealized view of time as an independent and qualitatively different dimension (an `objective clock’, or perhaps a `subjective clock’, which runs independently of other physical processes/attributes, at least on the nanoscale; precisely here might lie the difficulty in reconciling quantum mechanics with general relativity).

At this point one should read the Stanford Encyclopedia of Philosophy entry on time, which hopefully helps to clarify what I’m trying to say.

[I also think I remember a vivid related portrayal of time by Kurt Vonnegut in Slaughterhouse-Five (I read it over 30 years ago, so inaccuracy is inevitable). As I remember it, time is similar to the other dimensions, which leads to all things existing in four equivalent dimensions…only our human consciousness is like a train which moves in a certain fixed direction. And from that train we can only look out through a very narrow window (the present), hence we see the landscape pass us by in a more or less linear time fashion, moment after moment. If we were able to break free from the train, our sensation of time would be radically changed.]

If I may, let me put forward an aphorism which I discovered through my telescope on a meteorite made of antimatter :-). It perhaps illustrates my thoughts on a possible reversal of time: namely that time could well be a phenomenon produced by our consciousness, in other words an anthropomorphic artefact:

We anti-time humans have no memory to speak of, alas!, and can only rely on our often patchy foresight of our future


The arrow of time (2): Francesco del Cossa

[continued from the previous post:]

 

arrow of time, frank waaldijk
(click for enlargement)
 

The Arrow of Time (almost final design)

The basis of the design is formed by a detail from the fresco `Allegory of March: the Triumph of Minerva’ by Francesco del Cossa (approx. 1430–1477).

 

francesco_del_cossa_triumph-of-minerva-det

 

francesco_del_cossa_triumph-of-minerva

 

Of course this is not meant to be an art blog, but I nonetheless like to point out sources and alterations. You can see (if you are so inclined) that I did some work in Photoshop and Illustrator to revitalize the fresco detail sufficiently for use on an A0 poster.

The visual strength of the design (to which I referred in the previous post) stems, for me, from multiple factors. One of these is the rather superficial clue of the arrow held by the central figure…but a more content-related factor is the wear and tear of the fresco itself. We see that it is old because of the wear and the damage, which is the physical manifestation of time (increase in entropy; whether this supports the idea of an arrow of time will be discussed in later posts).

Finally, the mysterious circle held in the other hand signifies to me a combination of mathematics, infinity, divinity…and mystery, of course.

Other designs that I made were sometimes more intriguing, but they lacked this direct intuitive appeal.

(to be continued)


The arrow of time (1)

Three years ago I was asked to design a poster for the symposium `The Arrow of Time’, organized by IMAPP (Institute for Mathematics, Astrophysics and Particle Physics).

I would like to show the final design that I made, and also some other partial designs. Perhaps more relevant to this blog’s purpose, I also would like to pose the question: does the arrow of time really exist? Let me start however with the poster:

 

arrow of time, frank waaldijk
(click for enlargement)
 

The Arrow of Time (almost final design, since for the final design I had to add the logos of the sponsors, which is seldom an improvement [yet I managed to avoid real disruption])

I chose this design over other designs (which sometimes were stronger conceptually) because of its visual strength.

(To be continued)


All bets are off, to disprove the strong physical Church-Turing Thesis (foundations of probability, digital physics and Laplacian determinism 3)

(continued from previous post)

Let H0 be the hypothesis: `the real world is non-computable’ (popularly speaking, see previous post), and H1 be PCTT+ (also see the previous post).

For comparison we introduce the hypothesis H2: `the real world produces only rational (real) numbers’.

H2 is assumed to have been the original world view of the ancient Greek mathematicians (the Pythagoreans), before their discovery that \sqrt{2} is irrational (which is `rumoured’ to have caused a shock, but I cannot find a reliable historical reference for this).

The rational numbers in [0,1] have Lebesgue measure 0, so we can start constructing (T_{40,m})_{m\in \mathbb{N}} such that T_{40}=\bigcup_{m\in \mathbb{N}} T_{40,m} has Lebesgue measure less than 2^{-40}, and such that T_{40} contains all rational numbers in [0,1].
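As a concrete (and entirely illustrative) sketch of such a construction: enumerate the rationals in [0,1] and cover the m-th one by an interval of length 2^{-(42+m)}, so that the total measure is at most 2^{-41} < 2^{-40}. The Python below does this and, for lack of a true irrational in floating point, tests a high-precision rational proxy for e-2; both the choice of widths and the enumeration order are my own, not part of the original argument.

```python
from fractions import Fraction
from math import e

def rationals_in_unit_interval():
    """Enumerate the rationals in [0,1]: 0, 1, 1/2, 1/3, 2/3, 1/4, 3/4, ..."""
    yield Fraction(0)
    yield Fraction(1)
    q = 2
    while True:
        for p in range(1, q):
            if Fraction(p, q).denominator == q:  # skip non-reduced duplicates like 2/4
                yield Fraction(p, q)
        q += 1

def cover_piece(m, r):
    """T_{40,m}: an open interval of length 2**-(42+m) around the m-th rational r."""
    half = Fraction(1, 2 ** (43 + m))
    return (r - half, r + half)

# Total measure of the cover is at most sum_m 2**-(42+m) = 2**-41 < 2**-40.
assert sum(Fraction(1, 2 ** (42 + m)) for m in range(2000)) < Fraction(1, 2 ** 40)

# A stand-in for an irrational point: a high-precision rational proxy for e - 2.
x = Fraction(e - 2).limit_denominator(10 ** 15)

gen = rationals_in_unit_interval()
hits = 0
for m in range(2000):
    lo, hi = cover_piece(m, next(gen))
    if lo < x < hi:
        hits += 1
print("cover pieces containing the proxy for e - 2:", hits)  # prints 0
```

The intervals shrink so fast that the proxy for e-2 stays well clear of every piece, in line with the expectation voiced below that a `truly irrational’ x never turns up inside T_{40}.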

If we then take our coin-flip-randomly produced x\in [0,1], I personally don’t think that we will encounter an m\in\mathbb{N} for which we see that T_{40,m} contains x.

This opinion is supported by the fact that we can easily construct a non-rational number…at least in theory. Take for instance e, the base of the natural logarithm, which equals \Sigma_{n\in\mathbb{N}}\frac{1}{n!}. We can in fact construct T_{40} in such a way that it does not contain e, and we assume this to be the case here.

On the one hand, this does not depend on infinity, since we can simply look at approximations of e: we construct T_{40} such that for any m\in\mathbb{N} the (2m+2)-th binary approximation to e is positively apart from T_{40,m}. On the other hand, any finite approximation to e is still rational…and so we can only construct e as an irrational number in the sense described above.

With regard to the existence of non-computable reals, the situation is in my humble opinion very different. We cannot construct a non-computable real, as a result of the Church-Turing Thesis (which I have no reason to doubt). Any construction of a real which we recognize as such will consist of a finite set of construction instructions…in other words, a Turing machine.

So to make a reasonable case for the existence of non-computable reals, we are forced to turn to Nature. In the previous post, we flipped our coin to produce a random x in [0,1]. We argued that finding m\in\mathbb{N} for which S_{40,m} contains x would force us to reject the hypothesis H0 (`the real world is non-computable’).

So what result in this coin-tossing experiment could force us to reject H1, the strong physical Church-Turing thesis (PCTT+, `the universe is a computer’)?

To be able to reject H1 in the scientific hypothesis-testing way, we should first assume H1. [This might pose a fundamental problem: if we really assume H1, then our perception of probability might change, and we might have to revise the standard hypothesis-testing method, which seems to be silently based on H0. But for the moment we will assume that the scientific method itself needs no amendment under H1.]

Under H1, x has to fall in some S_{40,m}. Failure to do so, even as we let m\in\mathbb{N} grow very large, might indicate that H1 is false. For scientific proof we would need some number M\in\mathbb{N} such that (under H1) the probability that x is not in \bigcup_{m\in \mathbb{N}, m<M} S_{40,m} is less than 2^{-40}.

This reverse probability has had me puzzled for some time, and sent me on a quest for a probability distribution on the natural numbers. In the thread `drawing a natural number at random’ I argued that some indication could be taken from Benford’s law, and for discrete cases from Zipf’s law. Anyway, very tentatively, the result of that thread was to consider relative chances only. If for 1\leq n,m \in \mathbb{N} we denote the relative Benford chance of drawing n vs. drawing m by \frac{P_B(n)}{P_B(m)}, then we find that \frac{P_B(n)}{P_B(m)} = \frac{\log{\frac{n+1}{n}}}{\log{\frac{m+1}{m}}}. The relative Zipf chance of drawing n vs. drawing m is given by \frac{P_Z(n)}{P_Z(m)} = \frac{m}{n}.

In both cases, the relevant density function is f(x)=\frac{1}{x}. The important feature of this distribution is twofold:

1) The smaller natural numbers are heavily favoured over the larger. (`Low entropy’).

2) There is no M\in\mathbb{N} such that the relative probability of drawing some m larger than M becomes less than 2^{-40} (because \log x tends to infinity).
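Both features are easy to check numerically. The sketch below (my own illustration) computes the relative chances from the formulas above, and uses the telescoping identity \sum_{k=1}^{M-1}\log\frac{k+1}{k}=\log M to show why the accumulated `mass’ grows without bound, so that no cutoff M ever makes the tail negligible:

```python
from math import log, isclose

def benford_relative(n, m):
    """Relative Benford chance P_B(n)/P_B(m) = log((n+1)/n) / log((m+1)/m)."""
    return log((n + 1) / n) / log((m + 1) / m)

def zipf_relative(n, m):
    """Relative Zipf chance P_Z(n)/P_Z(m) = m/n."""
    return m / n

# Feature 1: the smaller natural numbers are heavily favoured.
print(benford_relative(1, 9))  # log 2 / log(10/9), roughly 6.58
print(zipf_relative(1, 9))     # 9.0

# Feature 2: the accumulated 'Benford mass' of 1..M-1 telescopes to log M,
# which grows without bound, so the tail beyond any M never becomes negligible.
for M in (10, 1000, 10 ** 6):
    mass = sum(log((k + 1) / k) for k in range(1, M))
    assert isclose(mass, log(M))
```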

Fools rush in where angels fear to tread. I know, so let me fall squarely in the first category. Yet this train of thought might provoke some smarter people to come up with better answers, so I will just continue. I do not believe these relative chances can simply be applied here; there are too many unknowns and assumptions. But it cannot do harm to try and get some feel for the reverse probability needed to disprove H1.

For this tentative argument then, disregarding some technical coding issues, we consider (under H1) our coin-flip random x to equal some computable x_s, computed by the Turing machine with number s\in\mathbb{N}, drawn from some extremely large urn with low entropy (favouring the smaller natural numbers).

Even with this favouring of the smaller natural numbers, we still cannot begin to indicate an M\in\mathbb{N} such that (under H1) the probability that x is not in \bigcup_{m\in \mathbb{N}, m<M} S_{40,m} is less than 2^{-40}. Perhaps if we knew the size of the urn (which in this case would seem to be the universe itself) we could say something more definite about M. But all things considered, it seems to me that M could easily be astronomically large, far larger than our limited computational resources can ever handle.

In other words: all bets are off, to disprove H1.

And so also, if H1 is true, it could very well take our coin-flip experiment astronomically long to find this out.

But I still think the experiment worthwhile to perform.

