Friday, April 3, 2015

The Machine of the (social aspiration) gaps

Machines are on the verge of providing us with capabilities we can hardly begin to imagine. Yet before we can actually take advantage of them, we must eliminate the side effects of using machines at the unprecedented scale that began with the First Machine Age in the late 18th century, also known as the Age of Western Enlightenment. Once we manage to do so, instead of the apocalyptic scenarios of singularity, we may discover the essence of our own humanity in a way never seen before: as a machine-human continuum, with us realizing that with great (machine) power comes great responsibility!

Motto: “Ignorance, the root and stem of all evil.” (Plato)

In his pamphlet 21st Century Enlightenment, Matthew Taylor, CEO of the RSA (Royal Society for the Encouragement of Arts, Manufactures and Commerce), referred to the social aspiration gap as “the gap between the kind of future to which most people in a moderate, reasonably cohesive society aspire and our trajectory relying on current modes of thought and behavior.”

He asserts: “…this gap can be said to comprise three dimensions; three ways in which tomorrow’s citizens need, in aggregate, to be different to today’s.

First, citizens need to be more engaged, by which I mean more willing to appreciate the choices society faces, to get involved in those choices, to give permission to their leaders to make the right decisions for all of us for the long term, and to recognize how their own behavior shapes those choices.

Second, with the cost of labor intensive public services bound to rise, citizens need to be more self-sufficient and resourceful. Whether it is looking after our health, investing in our education, saving for our retirement or setting up our own business, we need to be comfortable with managing our own lives and confident about taking initiative.

Third, we need to be more pro-social, behaving in ways which strengthen society, contributing to what the writer on social capital, David Halpern, calls the hidden wealth of nations; our capacity for trust, caring and co-operation.”

He continues, suggesting that the “gap is less one of recognition and more one of intent. We seem to see that things need to be different, and that this has implications for us all, while responding to the empty promise that change can be achieved without challenging any of our assumptions and behaviors. A poll commissioned by the 2020 Public Services Trust at the RSA found that voters tended to condemn politicians for failing to tell the truth about the current deficit while at the same time demanding to be protected from any service cuts or tax rises.”

In my interpretation, recognition implies something more than the intuition that something is not right; we can start by asking: How can we be so delusional and lack intent? What makes us so disengaged (socially and politically) and isolates us from the wealth of our inner and social resources, which would dramatically increase our autonomy?

There is wide consensus that, historically, people have never had it so good, even in light of the pockets of political instability and war across the globe. There is also consensus that this has been made possible by the momentous changes brought about by Western Enlightenment (the market economy, modern state institutions and science/technology), which effectively served the notions of human autonomy, universality and humanistic purpose. The close relationship between these notions created a world in which progress towards the humanistic values of the Enlightenment seemed linear, at least until the end of the 20th century, when Francis Fukuyama proclaimed the End of History.

Yet, as Mr. Taylor asserts, we haven’t accomplished what the authors of the Enlightenment envisioned.

The reason for this seems to be related to the progress of science and technology, to the efficiency-chasing and inequality-creating logic of the market economy, as well as to our success in liberating ourselves (some more than others) from various existential constraints that were characteristic of humanity’s earlier history. But most importantly, it is related to our psychological (in)ability to cope with all of these achievements, and to their effects on how we view ourselves and our social environment (communities) and on how they drive our intentions.

The First Machine Age triggered automation at such a large scale that, while humanity by and large lived better (less constrained) and longer, it also became disengaged due to the specialization and bureaucracies that fueled the underlying revolution in efficiency. Specialization created parallel worlds of "soft" humanities and "hard" science/technology, which in turn generated the illusion that humanity can exist separately from machines, which exist simply to deal with tedious everyday tasks. While work has become more productive and efficient than ever before, it has also become less engaging because it is more automated.

Consequently, in place of the former existential physical constraints we used to be confronted with, we have created new ones, this time mental ones! This is the price we, as representatives of humanity as a whole, are paying for more efficiency, the market economy and the modern state. This cost remained hidden for quite some time (or was articulated in an erroneous context, as in the case of Karl Marx, who provided a wrong solution, communism, adding even more costs on top of the existing ones).

Kant was among those who recognized the danger of Enlightenment values turning into a dogma of their own, forgetting the limited and contingent nature of human rationality. Michel Foucault says of Kant’s own description of enlightenment: “It has to be conceived as attitude, an ethos and a philosophical life in which the critique of what we are is at one and the same time the historical analysis of the limits that are imposed on us and an experiment with the possibility of going beyond them.”

Enter the Second Machine Age. Machines have added another dimension to their capabilities (leveraging/duplicating our innate abilities, but this time in the mental space) and, at least in theory, we now have the possibility of removing the mental constraints from our lives as well. Unfortunately, this hasn't happened yet, for at least four identifiable reasons:

First, in the transitional phase, computers simply mimicked First Machine Age (process-driven, rational) efficiency. Today, they do so even more efficiently, and thus additional “human” value is lost (e.g., doctors’ paper medical records, which were invaluable for professional and educational purposes, have been lost due to digitization). This “transitional phase effect” has been observed before, when electric engines replaced steam engines (initially, factory design followed the earlier pattern, in which the big steam engine was located in the center of the assembly floor).

Second, because we are still not articulating the mental constraints (or our ignorance), we remain subject to our natural inclinations and social environment, and to the fact that technology and tools are not designed around human needs and experiences. The “quantified self” concept, for example, often distorts the significance of the data we hoard about the “self”: instead of complementing the mental picture we have of ourselves, it may actually replace it (the lure of “easy numbers”). As philosopher Bruno Latour points out, “Matters of fact are only very partial and, I would argue, very polemical, very political renderings of matters of concern and only a subset of what could also be called states of affairs”. Cognitive computing is now available, but only as a tool for specialized domains, instead of integrating knowledge and disseminating it.

Third, because science has led us to the conclusion that human nature is ultimately controlled by our mind. Yet as science brings more and more examples of how incredibly flexible our brain is (just look at a TED talk by David Eagleman), we also see more and more evidence that our nature has stayed essentially the same throughout the millennia of human civilization, evidence that led philosopher N. N. Taleb to formulate the following conjecture in a recent post on Facebook: “Any "discovery" in the "soft" sciences related to human nature that is not wrong should be found in the ancients, and, if not there, it would be wrong.”

Last, but not least, because the path of least resistance (and therefore the more efficient one) ultimately leads to the conclusion that it is unnecessary to formulate intentions (of any kind). The data available about people allow solutions to be formulated (by ever more efficient businesses and governments) even before people have the possibility to articulate a need or intention!

What Mark Lilla says about current political thinking, “Our hubris is to think that we no longer have to think hard or pay attention or look for connections, that all we have to do is stick to our “democratic values” and economic models and faith in the individual and all will be well”, is echoed by Michael Sandel on the role of markets in our lives: “Our reluctance to bring competing conceptions of the good life into political debate has not only impoverished our public discourse; it has also left us ill equipped to contend with the growing role and reach of markets in our lives.” It is echoed, too, by Evgeny Morozov on the current digital infrastructure (known under the collective name “the Internet”), which is assuming an increasingly significant role in exposing us to both politics and markets: “If the public debate is any indication, the finality of "the Internet" — the belief that it's the ultimate technology and the ultimate network — has been widely accepted. It's Silicon Valley's own version of the end of history: just as capitalism-driven liberal democracy in Francis Fukuyama's controversial account remains the only game in town, so does the capitalism-driven "Internet". It, the logic goes, is a precious gift from the gods that humanity should never abandon or tinker with. Thus, while "the Internet" might disrupt everything, it itself should never be disrupted. It's here to stay — and we'd better work around it, discover its real nature, accept its features as given, learn its lessons, and refurbish our world accordingly. If it sounds like a religion, it's because it is.”

The direct consequence is that people are operating in autopilot mode, know less about themselves than computing-enabled businesses and governments do, and don’t even counterbalance this phenomenon by formulating intentions of their own (who has time to do so when there is so much going on on Netflix and Facebook?)!

The indirect consequence is that, while we continue to satisfy most of our personal needs (and at the same time gradually become more ignorant about other needs), we are becoming disengaged socially (if we cannot even formulate personal intentions, we are even less likely to formulate collective or shared intentions) and politically (in our “digital Disneyland”, politics sounds very unfamiliar and problematic). Technology and socio-economic status determine an individual’s “cognitive island”, leading to ever-increasing inequality and ever-decreasing social cohesion.

…and ultimately, there is the prospect of technological Singularity, the prospect that the exponentially increasing capabilities of technology will hopelessly outpace human capabilities (although, as we can see, this has already happened!). Both its economic and social consequences need to be addressed to close the social aspiration gap and to ensure that society can flourish.

Our aim should be a society that is aware of its machine-enhanced capabilities and is able to engender intentions that foster both personal and collective well-being.

To achieve this, society should start learning about how human nature can maximize the benefits of machines, and machines should be “incentivized” to get busy with “matters of concern” that truly matter to people and to society as a whole instead of focusing on dry “matters of fact” that are keeping us locked up inside our own ignorance.