The Government of Intelligent Systems

DANIEL INNERARITY

 

The main task of the government of a knowledge society is to create the conditions that enable collective intelligence. Systematizing intelligence and governing through intelligent systems should be the priority for governments, institutions and organizations at every level. Governing complex environments, confronting risks, anticipating the future, managing uncertainty, guaranteeing sustainability and structuring responsibility all oblige us to think holistically and to configure intelligent systems (technologies, procedures, rules, protocols, etc.). Only through such patterns of collective intelligence is it possible to face a future that is no longer the peaceful continuation of the past but an opaque reality full of opportunities and, by the same token, pregnant with potential risks that are hard to identify. The same principle of intelligent government should rule the way we relate to our technological devices, so that we can face up to the new instances of ignorance that a complex society obliges us to manage.

 

The Nature of Collective Intelligence

To understand what a system of collective intelligence is, it may be illustrative to recall the thought experiment proposed by Robert Geyer and Samir Rihani (2010, 188):

  1. What would happen if the governors of the Bank of England were replaced by a room full of monkeys?
  2. What would happen if Great Britain were to copy Norway’s educational system exactly?
  3. What would happen if a super-medicine were invented that suppressed all the symptoms of the common cold (or of our students’ hangovers)?

If one had to respond quickly to these questions, immediate intuition would lead to the following assertions:

  1. The British economy would collapse.
  2. Educational results would improve, since Norway’s educational system is far better ranked than the United Kingdom’s.
  3. It would be a marvelous advance for personal health, since the patient would feel much better.

However, as soon as we are able to reflect a little and overcome the automatism of these answers, looking at things instead from the perspective of the complexity of systems, the answers start to look very different.

  1. The government of monkeys would make manifest exactly to what point we are governed more by systems than by people, with checks, balances and counterbalances, and so the monkeys would do less harm than might be supposed.
  2. The transfer of an educational system to another country would not be as successful as all that. There is of course much to be learned from the best practices of others, but the success of a system as complex as education depends a great deal on factors that are not automatically transplantable.
  3. Being healthy is not the same thing as feeling well, and removing bothersome symptoms is equivalent to depriving oneself of signals and learning mechanisms that are precisely at the service of our health, understood as something more valuable than a mere absence of ill-being at any given moment.

This experiment is interesting because the automatism of our initial responses shows how indebted we are to a way of thinking centered on individuals and leaders, on the short term and on a lack of attention to the systemic conditions in which our actions take place. We still think of government as a heroic action by individuals instead of understanding that it is a matter of configuring intelligent systems. This is proof of what Luhmann called “the flight toward the subject” (1997, 1016), when political action degrades into a competition among persons, their programs, their good (or bad) intentions or their moral example. That is why we speak of leadership with such personalized connotations. Public attention is concerned principally with the personal qualities of those who govern us, and we are more worried about discovering guilty parties than about repairing poor structural designs.

Any attempt to place the focus on human beings when identifying and confronting our problems, based on the theory that the human being is more important than anything else from the perspective of either the personal properties of the leader or the rational choice of the individual voter, brings with it an undervaluation of the systemic properties of social complexity. The main problems humanity faces today are conflicts engendered by an interdependent and concatenated system, ones to which its individual components are blind: unsustainability, financial risk, and those problems in general caused by a long chain of individual behaviors that are not detrimental in themselves, but are in disordered aggregate. It is therefore not so much a question of modifying individual behaviors as of configuring their interaction properly, and that is precisely the task which goes by the name of collective intelligence. Much more is gained by improving procedures than by improving the people in charge of them. We should not expect so much of the virtues of those forming part of a complex system, nor should we greatly fear their vices. What should really concern us is whether their interconnection is well organized, and what kinds of rules, processes and structures configure that interdependence.

Societies are well governed when it is systems synthesizing a collective intelligence (rules, norms and procedures) that govern them, not when they have especially able people leading them. We could make do without intelligent people, but not without intelligent systems, or, as it is otherwise generally put, a society is well governed when it stands up to periods under bad governors. These two hundred years of democracy have configured precisely an institutional constellation in which a set of experiences has crystallized into structures, processes and rules (especially constitutions) that provide democracy with a high degree of systemic intelligence, an intelligence which is not in individuals but in the components constituting the system. In a way, this makes the democratic system independent of the specific people who act in it, and even of those who direct it, and so resistant to the faults and weaknesses of individual players. That is why democracy must be considered as something which functions with the average voter and politician, for it survives only if the very intelligence of the system compensates for the mediocrity of the players, including the chance arrival of a government of monkeys.

 

The Double Risk of Technologies

One example of the configuration of our collective intelligence is to be seen in the way we design our technological artifacts. I am referring less to their sophistication than to how we identify their future risks and protect ourselves from them. Now, one of the paradoxes of our technologies is that they have to contend with two contradictory risks: the risk they will cease to heed those who direct them, and the risk they will heed them too much. To go by this distinction, some accidents would therefore be due to impotence and others to omnipotence. We are more anxious about the latter than the former. It is more disturbing to be at the mercy of men than of machines.

The first type of risk is the more evident. Complex systems usually function automatically, since we could have no sophisticated technology otherwise, but this autonomy often comes at the price of ungovernability, when the very systems we have configured escape from our hands and turn against us. World literature abounds in fantasies, some highly realistic, of creations that acquire a life of their own and rebel against their makers, from Faustus and Frankenstein to the general characterization of today’s world as one flying out of control (Giddens 1999). When we consider the specific problems of contemporary society, we find a great many examples of this lack of control, perhaps the most devastating being the difficulty of governing financial markets. When, for example, we affirm that something is not sustainable, we are saying that we were able to set it functioning, but we cannot guarantee that it will function in the future in accordance with the intentions that justified its implementation. In short, it could collapse. For an everyday example, we might also consider how far our relations with the technology we use have been modified. We have grown accustomed to using devices whose logic we are ignorant of, so that hardly anyone now knows how they work or is able to mend them. Even the specialist we turn to replaces parts rather than performing repairs. When something goes wrong, it does so irreparably.

The automatic pilot is a very good example of the paradox that emerges when we ask who is actually in charge. A pilot thinks he flies the plane, but from this point of view the truth is just the opposite: the pilot starts up the system, but it is immediately thereafter the machine that prescribes the pilot’s actions down to the smallest detail, until finally doing without him altogether. The pilot has to adapt to the logic of the flight. A system is intelligent when it can even disobey certain absurd orders. Nobody in their right mind would object to this kind of automation, since it provides us with an enormous number of devices that make our lives easier and sometimes literally safer.
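
To make this concrete, here is a minimal sketch, in Python and with invented names and limits, of the kind of “envelope protection” logic the paragraph describes: the machine honors the pilot’s command only within a safe flight envelope and quietly overrules anything outside it. Real flight-control laws are of course far more elaborate; this is an illustration of the principle, not an account of any actual system.

    # Hypothetical illustration of a system that "disobeys absurd orders":
    # a commanded pitch is honored only within a safe flight envelope.

    SAFE_PITCH_RANGE = (-15.0, 25.0)  # invented limits, in degrees

    def apply_pitch_command(commanded_pitch: float) -> float:
        """Return the pitch the aircraft will actually fly."""
        low, high = SAFE_PITCH_RANGE
        # Inside the envelope the pilot's will prevails; outside it,
        # the order is clipped to the nearest safe value.
        return max(low, min(high, commanded_pitch))

    print(apply_pitch_command(10.0))  # -> 10.0 (the pilot is obeyed)
    print(apply_pitch_command(40.0))  # -> 25.0 (the system overrules)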

The other great risk is that technologies will be excessively subject to the control of those who run them. There are accidents and catastrophes that are caused by an excess of power in the hands of those running a technological system, not a lack of it. One thinks of railway accidents due to excess speed, in which no device prevented the driver from surpassing the critical limit, as in the train crash at Angrois on July 24, 2013. The most dramatic case was that of the suicidal Germanwings pilot who crashed a plane into the French Alps on March 24, 2015. In both cases, the disaster was caused by the excessive power of a man over an artifact that was insufficiently intelligent, since it allowed the individual in charge free rein over the speed of the vehicle, or even the liberty to crash it into a mountainside, with all the alarms going off but no device obliging him to rectify his course. There are many systems that are intelligent precisely because they are able to oppose the express will of those running them. The sophistication of governing devices is brought about through systems that prevent governors from doing what they like, from constitutional limits in politics to automatic braking systems for car drivers.
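
The railway case suggests what the “insufficiently intelligent” artifact was missing. Here is a minimal sketch, again in Python with invented names and limits, of the speed-supervision principle that modern train-protection systems implement: above the critical limit the system does not merely sound an alarm but brakes on its own authority. It is a sketch of the principle only, not of any real system.

    # Hypothetical speed supervision: above the critical limit the system
    # overrides the driver instead of merely warning him.

    CRITICAL_SPEED = 80.0  # invented limit for a track segment, in km/h

    def effective_throttle(current_speed: float, driver_throttle: float) -> float:
        """Return the throttle actually applied (negative means braking)."""
        if current_speed > CRITICAL_SPEED:
            return -1.0           # emergency braking, whatever the driver asks
        return driver_throttle    # below the limit, the driver's will prevails

    print(effective_throttle(95.0, 0.8))  # -> -1.0 (the system takes over)
    print(effective_throttle(60.0, 0.8))  # -> 0.8 (the driver stays in charge)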

I shall put this somewhat provocatively: the paradox of any intelligent system is that it does not permit us to do whatever we want. Let us take a few examples. A constitution resembles nothing so much as a set of prohibitions and restrictions. It even makes itself hard to modify, laying down procedural conditions and qualified majorities in order to guarantee that no such changes will be implemented on a whim or sanctioned by only a narrow majority. The ABS braking system prevents us, in a moment of panic, from braking as hard as we want to, which would endanger the stability of the car and end up doing us more harm than not braking. Even fear is an instinct that protects us from ourselves. In this respect, we might recall the story of the patient whose brain damage prevented him from experiencing certain emotions such as fear. This allowed him to do some things better than other people, such as driving on icy roads, since he avoided the natural reaction of braking when the car skidded (Damasio 2005, 193). Anyone is free to buy all the financial products they want (and can afford, of course), but the experience of the economic crisis has made us establish more exacting conditions for purchasing them, obliging credit institutions to ensure that buyers have the necessary solvency and knowledge to acquire a product that is not free of risk. In some way, systemic intelligence has configured a series of protocols so that people cannot do as they please when especially dangerous artifacts are involved, whether a vehicle or a financial product. Indeed, there is a flourishing market in what we might without exaggeration call “the protection of people against themselves,” such as the “behavioral apps” which advise, urge and monitor us. Human beings do not always wish to do as they desire, and such self-restriction is a source of reasonable forms of behavior.
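
The ABS example is the mirror image of the train example: there the system brakes against the operator’s will, here it releases the brake against it. A minimal sketch, with an invented slip threshold, of the principle by which the system withholds part of the braking force we demand in a panic:

    # Hypothetical ABS logic: when wheel slip signals imminent locking,
    # the system applies less braking force than the driver requests.

    LOCKING_SLIP = 0.2  # invented slip ratio beyond which the wheels lock

    def braking_force(requested: float, wheel_slip: float) -> float:
        """Return the braking force actually applied (0.0 to 1.0)."""
        if wheel_slip > LOCKING_SLIP:
            return requested * 0.5  # release pressure so the tires regain grip
        return requested

    print(braking_force(1.0, wheel_slip=0.35))  # -> 0.5 (panic braking restrained)
    print(braking_force(0.6, wheel_slip=0.05))  # -> 0.6 (normal braking honored)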

We can therefore say without fear of contradiction that systems of government, from the most modest technology to the most sophisticated political proceedings, are more intelligent insofar as they can resist the obstinacy of those who govern. That is what Adam Smith, Karl Marx and others tried to teach us: that social systems have their own dynamic which acts independently of the will of individual players. All of human progress is at stake in that difficult balance between permitting the human will to govern events and at the same time preventing arbitrariness.

The Germanwings crash perhaps occurred because this reflection on the dangers posed by those in charge of a technological device had disappeared from view as a consequence of the defense against terrorism, which tends to consider the enemy as someone located, literally and metaphorically, outside. It should be recalled that the pilot flying the aircraft began his maneuver to crash into the Alps at a moment when he had been left on his own. Neither the other pilot nor the rest of the crew were able to get into the locked cockpit once his suicidal intentions became apparent. Our security protocols have grown more sophisticated since 9/11 with outside enemies in mind, not inside ones: an intruding terrorist, not a mad pilot. That, among other reasons, is why it was possible to lock the aircraft’s cockpit from the inside, and why the door was armored. The whole paradox of the affair lies in how to cope with the risks presented by our own security measures, and how to avoid excessive protection.

An intelligent system is, so to speak, a system that protects us not only from others but also from ourselves. It is configured after the experience of the dangers we are capable of generating for ourselves, and against the atavism of believing that our worst enemy is always someone other than ourselves. To act with this kind of counter-intuitive intelligence, it is necessary to have realized, for example, that a society is not threatened so much by nuclear weapons in the hands of an enemy as by its own nuclear power plants, and far less by the biological weapons of the enemy than by certain experiments of its own scientific system. It is not menaced by the invasion of foreign troops but by its own organized crime and the drug demand of its own addicts, and not by the famine and death caused by war but by the disability and death caused by its traffic accidents (Willke 2014, 60). What makes it most difficult for plural societies to decide their destinies freely is not so much an external impediment as a lack of agreement at their very heart. The solution does not lie with individuals, I would conclude, but in improving the systems that protect us against people and against our own mistakes, madness and malice.

 

An Enlightenment of Ignorance

In an intelligent system for the purpose of governing today’s complex environments, two fundamental experiences are crystallized. One is that knowledge is more important than norms, and the other is that what has to be managed is ignorance rather than knowledge.

Let us begin with the importance of cognitive regulation for governing. Government, when understood as something normative rather than cognitive, is too rigid, retrospective and slow to be effective in complex and dynamic knowledge societies. Beyond a normative perspective suited to simple and stable constellations, other knowledge-linked resources are also necessary, such as expert knowledge that translates itself into regulation, the ability to argue and convince, and the possibility of collective learning. While the first Enlightenment revolved around the acquisition of knowledge for individual and social progress, the second Enlightenment should aim at a broader level of knowledge: at the intelligence of organizations and institutions, and at organized forms of collective intelligence. For organizations, constructing collective intelligence means that learning no longer takes place simply through evolution or mere adaptation, but must be systematically organized into sensible processes of knowledge management.

Just as decisive as the generation of knowledge, however, is understanding the function that ignorance fulfills in a knowledge society, and why ignorance is important for the acquisition and reproduction of knowledge, as well as for the emergence and transformation of institutions. A knowledge society is one whose collective intelligence consists of prudently and rationally managing the ignorance in which we are obliged to act, which makes it, ultimately, a society of unknowledge. We might put this less dramatically by affirming that it is a society where we have no option but to learn to go about things with incomplete knowledge. One fundamental aspect of collective ignorance is the question of “systemic ignorance” (Willke 2002, 29): social risks, futures and constellations of players in which too many events are related to too many other events, overwhelming individual players’ capacity for taking decisions.

Whereas the dominant methods used to combat ignorance in other times consisted of trying to eliminate it, we may assume today that there is an irreducible dimension to ignorance, and we must therefore understand it, tolerate it and even make use of it and consider it as a resource (Smithson 1989; Wehling 2006). One example of this is the fact that the risk entailed in “trust in the knowledge of others” in a knowledge society has become a key issue (Krohn 2003, 99). The knowledge society may be characterized precisely as one which has to learn to manage this ignorance.

The boundaries between knowing and not knowing are not unquestionable, self-evident or stable. In many cases it remains an open question how much can still be known, what can no longer be known, and what will never be known. This is not the typical discourse of Kantian humility, which confesses how little we know and how limited human knowledge is. It is even more imprecise than that “specified ignorance” of which Merton wrote. I am referring to weak forms of knowledge, to things supposed or feared, of which we do not know precisely what is unknown, or to what extent.

The appeal to unknown unknowns, which lie beyond scientifically established hypotheses of risk, has become a powerful and controversial argument in social debates on new research and technologies. It is of course still important to broaden the horizons of expectation and relevance so as to glimpse the unknown spaces we were previously unable to see, and so proceed towards the discovery of the “ignorance we are ignorant of.” But this aspiration should not make us fall into the illusory trap of believing that the problem of the unknown unknown can be resolved in the traditional way, that is, dissolved completely for the sake of more and better knowledge. Even where the relevance of the unknown unknown has been expressly recognized, it is still not known what is unknown, or whether anything decisive is unknown. Knowledge societies must get used to the idea that they will always face the question of the unknown unknown, and that they will never be in a position to know whether, and to what extent, these unknown unknowns are relevant to the decisions they must necessarily confront.

From now on, our great dilemmas are going to hinge on decision-making under ignorance (Collingridge 1980). Such decision-making requires new forms of justification, legitimization and observation of consequences. How can we protect ourselves against threats when, by definition, we do not know what to do about them? And how can justice be done to the plurality of perceptions of the unknown if we are ignorant of the magnitude and relevance of what is not known? How much ignorance can we permit ourselves without unleashing uncontrollable menaces? What ignorance should we regard as relevant, and how much can we dismiss as inoffensive? What balance between control and chance is tolerable from the point of view of responsibility? Is the unknown a license for taking action or just the opposite, a warning that maximum precautions must be taken?
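
Decision theory gives these questions a classical formal shape. As a minimal sketch, with a payoff table invented purely for illustration, here is how the standard criteria for choice under ignorance, where no probabilities over the states of the world are available, trade precaution off against opportunity:

    # Choice under ignorance: two actions, two possible states of the world,
    # and no probabilities over the states. Payoffs are invented.

    payoffs = {
        "deploy the technology":   [9, -10],  # payoff if benign, if harmful
        "restrict the technology": [2, 1],
    }

    # Maximin (precautionary): choose the action whose worst case is least bad.
    maximin = max(payoffs, key=lambda action: min(payoffs[action]))

    # Minimax regret: choose the action minimizing the largest missed payoff.
    best_per_state = [max(vals[i] for vals in payoffs.values()) for i in range(2)]
    regret = {action: max(best - got for best, got in zip(best_per_state, vals))
              for action, vals in payoffs.items()}
    minimax_regret = min(regret, key=regret.get)

    print(maximin)         # -> 'restrict the technology' (worst case 1 vs. -10)
    print(minimax_regret)  # -> 'restrict the technology' (regret 7 vs. 11)

Which criterion is appropriate, and how precautionary to be, is exactly the kind of controversy that, as the next paragraph argues, belongs at the heart of democratic debate rather than inside expert systems.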

These are the deep reasons why a knowledge democracy is not governed by expert systems but through the integration of those expert systems into broader procedures of government, ones that necessarily include decision-making in areas where ignorance is irreducible. Our principal democratic controversies revolve precisely around the amount of ignorance we can permit ourselves, how we can reduce it with forecasting systems, and what risks it is opportune to assume. The challenge facing us is that of learning to manage these uncertainties, which can never be eliminated completely, and to transform them into calculable risks and learning opportunities. Contemporary societies must develop not only competence in solving problems but also the capacity to react suitably to the unexpected.

While the first Enlightenment aspired towards clarity and exactitude, the second has to make do with unfathomability, inexactitude and uncertainty. The first Enlightenment assumed there was nothing problematic in the aggregation of rational components, whereas the situation now is that the convergence of parts (of individual interests, of interdependent systems) all too often gives rise to an irrational totality: bodies of knowledge do not accumulate but generate confusion, interests are not aggregated but neutralize one another, the increase of information enhances not the transparency but the opacity of the whole, and decisions, even if individually rational, trigger fatal consequences. What theory and praxis of government respond to this new constellation? The government of intelligent systems might well be an appropriate name for this new challenge.

 

BIOGRAPHY

DANIEL INNERARITY

A tenured lecturer in political and social philosophy, Ikerbasque Researcher at the University of the Basque Country, and director of the Institute for Democratic Governance, he has also been a guest lecturer at various universities, including recent spells at the Robert Schuman Centre for Advanced Studies at the European University Institute in Florence, Georgetown University and the London School of Economics. He is director of associated studies for the Fondation Maison des Sciences de l’Homme in Paris. His latest books include La política en tiempos de indignación, La democracia del conocimiento (winner of the 2012 Euskadi Prize for essays), La humanidad amenazada: gobernar los riesgos globales (with Javier Solana), La sociedad invisible (winner of the 2004 Espasa Prize for essays), and La transformación de la política (winner of the 2003 National Literature Prize, essay section). He contributes regularly to El País, El Correo, Diario Vasco and Claves de razón práctica. In 2013 he was awarded the Príncipe de Viana Prize for Culture by the Regional Government of Navarre. The French magazine Le Nouvel Observateur included him on a list of the world’s twenty-five great thinkers.

References

Collingridge, David. 1980. The Social Control of Technology. New York: St. Martin’s Press.

Damásio, António. 2005. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Avon Books.

Geyer, Robert, and Samir Rihani. 2010. Complexity and Public Policy: A New Approach to 21st Century Politics, Policy and Society. London: Routledge.

Giddens, Anthony. 1999. Runaway World: How Globalization Is Reshaping Our Lives. London: Routledge.

Haldane, Andrew. 2012. “The Dog and the Frisbee.” Speech given at the Federal Reserve Bank of Kansas City economic policy symposium, Jackson Hole, Wyoming, August 31, 2012. http://www.bis.org/search/?q=haldane+dog+and+frisbee.

Krohn, Wolfgang. 2003. “Das Risiko des (Nicht-)Wissens. Zum Funktionswandel der Wissenschaft in der Wissensgesellschaft.” In Wissenschaft in der Wissensgesellschaft, edited by Stefan Böschen and Ingo Schulz-Schaeffer, 87–118. Wiesbaden: Westdeutscher Verlag.

Luhmann, Niklas. 1997. Die Gesellschaft der Gesellschaft. Frankfurt: Suhrkamp.

Smithson, Michael. 1989. Ignorance and Uncertainty: Emerging Paradigms. New York: Springer.

Wehling, Peter. 2006. Im Schatten des Wissens? Perspektiven der Soziologie des Nichtwissens. Konstanz: UVK Verlagsgesellschaft.

Willke, Helmut. 2002. Dystopia. Studien zur Krisis des Wissens in der modernen Gesellschaft. Frankfurt: Suhrkamp.

———. 2014. Regieren: Politische Steuerung komplexer Gesellschaften. Wiesbaden: Springer.
