The Space Between Us: An Approach to Collaborative Innovation



ARTICLE | By David Norris

Abstract

We live in extraordinary times. Every generation has thought so, but ours is in fact the first to confront existential crises of such magnitude that humanity might be considered an endangered species. Throughout our history on this planet we have met critical challenges through innovation born of flashes of insight, and exactly that is what is needed now more than ever. Through experience and observation we know that collaboration does inspire and support innovation, but we do not really know why or how. We have familiar aphorisms that point to the phenomenon, such as “two heads are better than one”; but while we accept this as true, it does not account for the phenomenon itself. In this article I will propose an approach toward explaining it, as well as several key questions underlying that approach.

1. The Storytelling Animal

The human being is often referred to as the storytelling animal. Not only did we once tell stories to the other members of the clan gathered around the fire, we still tell stories about the Universe, about the God or Goddess we believe created it, and also about ourselves. And sometimes we get so lost in our own stories that we no longer know they are stories. We then relate to each other and to ourselves as characters in the narrative, and to the narrative itself as if it described an independently existing, fixed reality. When that happens we become captives of the story and feel more like the victims of a world apparently out of control than the inventors of that world. In fact, some observers have even expressed the belief that we are already past the point of no return: not just the decline of the West that Oswald Spengler prophesied, but conceivably the end of life as we know it for all of us.

Innovation requires the ability to question the prevailing reality, which in turn requires that we are able to recognize ourselves as the source of the narrative underlying it, or at the very least as the source of the agreement that keeps it in place. In other words, innovation implies, first of all, that we are able to distinguish reality as a story and ourselves as the storytellers. This is just as true whether one lives in a scientific materialistic society or in an indigenous animistic one. If innovation requires the ability to question the prevailing reality, then the competence to engage in that kind of questioning requires the ability to collaborate. Yes, it is true that there always have been a few geniuses who seemed able to manage it alone in the solitude of their library or laboratory, but the questioning now needed is far too large and complex for a purely individual effort. Moreover, even these apparent lone geniuses did not actually act alone. They were in contact with the work of contemporaries through reading their publications or through correspondence as well as sometimes even through direct conversations. None of them operated in a total vacuum.

In the last few decades there has been quite a bit of inquiry into the nature of insight. For example, in a study by brain researchers Edward Bowden, Mark Jung-Beeman and their team, the results of which were published in July 2005 in “Trends in Cognitive Sciences”, the classic “aha” moment of insight was found to be associated with a surge of activity in the right hemisphere of the brain. Closer examination, however, showed that this surge was made possible by a reduction in left-hemisphere activity, which would normally inhibit right-hemisphere functioning. Why is this important or even relevant? Because it is the left hemisphere that tells us who we are, which is to say, that continually reminds us of our identity and of its place in the story. It is the right hemisphere, which operates more intuitively and holistically, that produces the moment of insight by being freed up to make connections that are otherwise not obvious. Thus, not only is the facticity of the prevailing reality anchored in the activity of the brain, but so also is the ability to question that reality. This theme is developed further by Iain McGilchrist in his brilliant book, “The Master and His Emissary: The Divided Brain and the Making of the Western World.” McGilchrist explores how the two hemispheres interact with each other to shape our experience of reality. The central point of the book is that there has been a palace coup, so to speak: the left hemisphere, which ought to operate in the service of the right hemisphere, has instead taken over. The emissary has become the master. As a result, to a large degree the left brain’s logic and rationality inhibit the right brain’s natural and inherent capacity for connection and insight. In other words, the prevailing reality is maintained in existence because the left brain prevents the right brain from effectively questioning it.
Here is how McGilchrist puts it: “Insight is…a perception of the previous incongruity of one’s assumptions, which links it to the right hemisphere’s capacity for detecting an anomaly.” And because of this brain activity that suppresses insight, we humans are often blind to our role in maintaining our perception of reality and simultaneously in denial of our blindness. This would certainly explain the fact that we can see and understand our perilous situation as a species yet seem largely unable to take effective action with regard to it.

“The 18th century, and particularly the second half of it, was the high point in the development of individuality.”

It has often been noted by cognitive development experts that very young children do not play with each other so much as they play next to each other. That is to say, they are not yet capable of collaboration. It takes a significant leap in cognitive development for that ability to become available, and clearly many humans, regardless of their age, are not very good at it. That is, adults mostly do not work together so much as they work next to each other. When two or more people are genuinely collaborating there is a diminishment in their attachment to their own identity, which is to say, to the story they are telling themselves about themselves and about their world. In fact, it is not unusual for people to report that during particularly creative occasions they “forgot themselves” or “lost track of time.” In other words, there appears to be a direct correlation between successful collaborative innovation and a period of looser attachment to one’s identity. There is a shift from being focused on my point-of-view as it interacts with your point-of-view to being attentive instead to the space between us. While I may still have a point-of-view, I am no longer so fixated on it and am more open to new and unpredictable input from the surrounding environment. This will, of course, include another person’s point-of-view but may also go beyond it. Suddenly, I can access not only my collaborator’s thinking, but can also see connections between things I may have learned or thought about in the past and what people sometimes refer to as the “Zeitgeist” or simply “ideas in the air.” Freed from a tight connection to my identity, the space between us, rather than my own mind, becomes the workbench where collaboration actually occurs.

2. The Age of Enlightenment

Let me take this a step further by first taking a step back. At the present time in our human history, particularly in the West, we have come to know ourselves primarily as individuals. In fact, we pride ourselves on this as a sign of our progress as a species. It was not always so. In earlier periods of history people knew themselves primarily by virtue of their place in a social hierarchy and as members of a tribe or extended family or a social class or a craft guild or as belonging to an estate ruled by a nobleman. Individuality existed, of course, but as a secondary or ancillary attribute.

Most historians locate the start (or at least the flowering) of this development in the Enlightenment period of the 18th century, although there are certainly traces of a burgeoning individuality as early as the late Middle Ages. In his well-known letter describing his ascent of Mont Ventoux, written about 1350, the Italian poet Francesco Petrarch (1304-1374) described his ecstatic experience of reaching the summit, gazing at the landscape spread out before him and discovering three-dimensional space. This may sound bizarre to anyone living in the 21st century, for whom it might seem that three-dimensionality has always been a feature of reality. But consider that medieval paintings do not portray three-dimensionality; they depict a flat, two-dimensional world. Petrarch’s account presents the discovery of perspective, and inherent in perspective is the existence of two points in space: a “vanishing point” on the horizon and a point-of-view located in the observer. Later, René Descartes (1596-1650) declared the observer’s point of view to be separate from his body and gave it a name, the “res cogitans” (the thinking thing). Still later, John Locke (1632-1704) made self-awareness (or thinking about oneself) the defining feature of human identity and thus reified even further the self as an object of thought located at a point somewhere in mental space. This not only made human identity more objectively real, but in doing so, it also enhanced the possibility of greater human agency through the prospect of reasoned thinking to control both the inner and the outer worlds.
Immanuel Kant’s (1724-1804) “categorical imperative” then accorded this objectified self a dignity as well as autonomy by making the following maxim the basis of all moral action: “So act as to treat humanity, whether in your own person or in another, always as an end, and never as only a means.” And building on all of this, the framers of the American Republic, who had read and were very much influenced by the Enlightenment philosophers, granted each individual person “certain unalienable rights,” which were enshrined in 1776 in the Declaration of Independence. “Life, liberty and the pursuit of happiness” became not only self-evident truths, but also human birthrights. In 1789, inspired in part by the American example, the French made their own revolution and produced the Declaration of the Rights of Man and of the Citizen. Individuality now not only had an objective existence in reality, its existence and even its right to flourish were protected by social contract.

“In our zeal for freedom and autonomy we have lost our connectedness not only to each other but also to nature and to life itself.”

I think it is fair to say that the 18th century, and particularly the second half of it, was the high point in the development of individuality. It was a time of remarkable ferment and creativity and blossomed into a celebration not only of individuality but also of the individual’s power to reason. However, I believe it is also fair to say that the bright light of reason, which shone so brilliantly during this period, began to lose its luster over the following centuries. Today, at the start of the 21st century, the shadow side of the Enlightenment has shown itself. The glorious autonomy of individuality has devolved into a depressing isolation and estrangement. The light of reason, which the Enlightenment philosophers were so sure would lead to universal peace, freedom and happiness, has been eclipsed by the dark of reason, which has led, among other misfortunes, to the development of nuclear weapons, the prospect of environmental collapse and, perhaps most debilitating of all, the sense of anomie at being cut off from any kind of meaningful relationship to our world. In our zeal for freedom and autonomy we have lost our connectedness not only to each other but also to nature and to life itself. Splendid individuality has become “The Loneliness of the Long Distance Runner” and the buried fear and rage of the reclusive teenagers depicted in “Bowling for Columbine”. In fact, in his 2018 book “Loneliness—the Unrecognized Sickness,” Manfred Spitzer, a leading psychiatrist and neuroscientist, claims that loneliness is actually now the leading cause of death in the West, though this is hidden because loneliness kills indirectly, by making people much more susceptible to cancer, heart attack, stroke, depression and even dementia.

However, to be clear, I am not suggesting that we throw out the remarkable achievements of the Enlightenment. Having a looser relationship to one’s own identity should in no way be understood as a denial of individuality. On the contrary, it should be seen as an enhancement and a further development of the original intentions of those 18th century philosophers. Their dream was of a free thinking human being capable of being guided by reason towards ethical action. To the extent that one can have a point-of-view rather than be trapped in it, one is autonomous and free even from one’s own inherited and in part socially constructed opinions.

On the other hand, to be caught up in one’s point-of-view is, of course, the essence of an identity; it is to be trapped in one’s own history and condemned to live out one’s life as a character in a story limited by the plotline of one’s autobiography. And what is true at the level of the individual is also true at every larger level of human social structure: relationship, family, organization, community and society.

The fact is, we need collaborative innovation at each of these levels if we are to meet the current challenges of being human. Yet partly as a consequence of these very challenges, people at every level, so far mostly in the West but increasingly everywhere, have become so polarized that genuine and productive dialogue has become increasingly strained, if not impossible.

Certainly, we remain quite good at innovation with regard to our technology and our business models, but hardly at all with regard to how we conceive of ourselves. In fact, we seem to have lost interest in the oldest and most productive questions of our species: “Who are we?” and “Why are we here?”

3. Beyond Identity

It is once again essential to ask the newest variations of the oldest questions: If I am not an identity, then what am I? If I am not the main character of a story, then who am I? If I am not located in a point-of-view, then where am I?

Jean Gebser (1905-1973), one of the greatest almost unknown geniuses of the 20th century, saw in the scientific, philosophical and artistic breakthroughs of his time the birth of a new consciousness, which he described as “a-rational” and “a-perspectival.” By this he meant a consciousness unattached to any point-of-view. As examples: while the consciousness of the Enlightenment period was based on a clear Newtonian/Cartesian separation between the inner and outer world-spaces, the findings of quantum physics call that separation into question; Picasso drew the human figure in “Les Demoiselles d’Avignon” from so many points-of-view that the concept of point-of-view itself is no longer applicable; Rainer Maria Rilke’s poetry transcended the subject/object basis of language to create a luminous world appearing unattached to any point-of-view; and Akira Kurosawa made the film “Rashomon,” staging the same event from so many different points-of-view that the notion of point-of-view itself becomes the main character of the story. Moreover, modern neuroscience and postmodern philosophy have so meticulously deconstructed the myth of an objective individuality that all that remains of it is a superstition on a par with the belief our ancestors once had in the Divine Right of Kings. In short, believing in the objective existence of a “me” with a point-of-view just because there are thoughts, feelings and body sensations is like believing in a thunder god just because there are loud noises during a storm. Perhaps without realizing or understanding the full consequences of it, we are now in the midst of outgrowing our fixation with our own identity and, with it, the familiar ways of connecting to one another and to our world. This is both good news and bad. It is bad because it leaves us feeling unmoored in an unrecognizable world beyond the conventional understanding of identity.
It is, however, also good news in that only by entering such a profound field of not knowing can we find the necessary power of innovation, which by definition lies beyond what we thought we knew.

What actually happens in a moment of successful collaborative innovation, which I have alluded to throughout this article, is a letting go of one’s attachment to one’s sense of self as an object with a point-of-view long enough to interact with one or more people who are doing the same. This allows for a leap from defensive debate to cooperative exploration, which often can lead to insight. It seems to me that we are now entering a time in our history when it is once again essential to ask the newest variations of the oldest questions: If I am not an identity, then what am I? If I am not the main character of a story, then who am I? If I am not located in a point-of-view, then where am I?

About the Author(s)

David Norris
International Consultant, Facilitator and Coach