Collective Intelligence seems a bigger threat than Artificial Intelligence

Recently, both Stephen Hawking and Bill Gates have voiced concern over Artificial Intelligence (AI), warning that AI may become a threat to humanity in the future.

This prompts me to (finally) write down some thoughts on Collective Intelligence (CI), which is also sometimes referred to as swarm intelligence or hive intelligence (hive mind) when not dealing with humans. CI refers to the idea that humans can create a hive mind, even unknowingly. (As a primer, you could read Collective Intelligence in Humans: A Literature Review by Juho Salminen.)

Of course a fundamental question regarding hive intelligences is: does an intelligent hive have self-awareness? Somehow we "always" associate intelligence with self-awareness, but to me this may well be because we have a hard time picturing intelligences that differ from our own. However, even if a CI made up of humans had self-awareness, those humans would be unlikely to be aware of it. Do ants know that their ant hill is intelligent?

To me it seems likely that CI is already a reality. In this view, there already exist non-human intelligences that are stronger than human intelligence. Consider any large human organization (corporation, religion, country, …) and consider whether it displays signs of hive intelligence (such as those seen in ant hills):

  • Large human organizations (LHOs) have a strong tendency to self-preservation.
  • LHOs compete fiercely for resources.
  • LHOs are largely independent of the individuals of which they are comprised. Anyone is replaceable, although some replacements have more impact than others.
  • LHOs learn and adapt. They retain memories. They have active long-term strategies as well as short-term survival tactics.
  • The individuals who help form the LHO are usually quite differentiated according to the tasks they perform. The factory worker is unlikely to be able to come up with marketing and sales strategies; conversely, the marketing and sales analyst is unlikely to be able to craft the product being sold.
  • Communication "internal" to the LHO is usually quite different from communication with other LHOs. There are secrets, there are barriers, there is misunderstanding, and there are differences in the speed and formality of communication.
  • Internal efficiency is a key driving force in the development of LHOs. There is a continuous pressure to perform more efficiently. This pressure comes from the fierce competition for resources, and any LHO which does not adapt quickly enough, efficiently enough, will be swept aside and dismantled (devoured) by those who do.
  • There is pressure on individuals to conform to the "code" or "identity" of the LHO to which they belong.

If any of the above rings true to you, then I can continue to where I was headed:

CI poses a bigger threat to humans than AI.

Why? Let’s see. Have you lately had any thoughts similar to:

  • I am on a treadmill, we are all on a treadmill. Fast is seldom fast enough. Good is only good enough for a very short time.
  • If I don’t conform to "the norm" I will be cast aside, left behind, ridiculed, ignored.
  • If I were completely free and independent of income concerns, I would do things very differently.
  • If I were completely free and independent of social concerns, I would do things very differently.
  • I have to live up to the expectations of a) employer b) peers c) family d) friends e) society f) myself …
  • I have to keep up with the latest developments. New technology, social platforms, new hypes and raves, the news, I have to be up-to-date.
  • I have to communicate, participate in networks, just in order to get by socially and professionally.
  • I have to profile myself, promote myself, market myself, advertise myself, prove myself more and more. Just doing my job does not cut it anymore. To administrators, to peers, it is important that I am innovative, pushing borders, and pushing myself to new "heights".
  • I have to be seen as a responsible-enough member of society. Law-abiding and not immoral.
  • I have to find money for a) my project b) my research c) my prototype d) my dream … In order to raise this money, I need to convince people that my a), b), c), or d) is more worthy than those of others.

I can go on like this, but I hope my point is clear. Most of us are being "forced" by various LHOs to conform more and more to role patterns that are beneficial to these LHOs but possibly detrimental to us.

The ant hill only cares about having enough able workers and soldiers to survive and hopefully thrive and expand. It does not care about what kind of life these workers and soldiers lead.

Moreover, if ants stray too far from the ant hill and pick up too many strange smells, they are no longer recognized as "own" and thus become prone to attack by the other ants. To me this mirrors the increasing difficulty of maintaining individuality in our society.

In the past decades, it has become more and more difficult to operate on an individual basis. The individual voice is slowly being drowned out. Non-conformity becomes harder. The worth of our endeavours is increasingly measured in terms of the social response to them: citation counts, Facebook likes, numbers of followers, and … money. Money is an easily underestimated factor in the workings of CI, but it is the natural "reward" for any CI’s exertion. It can easily be compared to packets of sugar for the ant hill.

Modern ICT has tremendously increased the capability of CIs to expand rapidly, which is why I expect to see the above effects crystallize more clearly in the near future.

So, to recap, I believe we are already seeing Collective Intelligences at work, influencing our lives more heavily than we would like. Personally, I can only hope that we are capable of preventing CIs from taking over completely, but to be honest I doubt it.

And if it ever came to a contest between AI and CI, my money would be on the latter…

[Update 16 Feb:]
Thanks to Toby Bartels for pointing out on Google+ that CI and AI can be seen more compellingly as two sides of the same coin:

“I’m not sure that there’s much difference. An artificial general intelligence (that is, the sort of artificial intelligence that worries people, as opposed to specialized expert systems) is unlikely to be developed by an individual in a garage. It’ll be developed by a corporation (or worse, a military), and it will work against us regardless of whether it stays in or escapes its box.”

About fwaaldijk

mathematician (foundations & topology in constructive mathematics) and visual artist
