Considerations for the Future of Thought
Our machines should be nothing more than tools for extending the powers of the human beings who use them.
- Thomas Watson Jr., longtime CEO of IBM
I recently attended an introduction to IBM’s Watson. The presentation hit a snag when the presenter got hung up on demonstrating that Watson could tell the sex of a person from their picture. The presenter had chosen a picture of Caitlyn Jenner (formerly Bruce Jenner). Beyond the problem of forcing a binary onto this highly gray area, it speaks to a much larger problem: AIs are being programmed to make the same categorical mistakes and carve the same distresses into our system of thought. What a missed opportunity for AI, and one that will cost IBM its stake in the future of the planet, because it has given up its willingness to transform the future of thought.
I am not the first to claim that life is simply a movie we make with our brain. That our eyes are merely a camera and our brain simply a cutting room for the film that animates our actions. On and on this process repeats, and we have barely an awareness of it. No one has ever shown us how to make this movie, and all the things we have ever built have simply been a process of adding complexity and trying to understand it. Even as we undertake this pinnacle of human intelligence, which is to replicate it, we seek not to transform this process but only to speed it up and deliver it outside the body.
Perhaps we should look to film to understand how this movie-making process works. It is as if we are simply making a digital version of Man with a Movie Camera, a film about watching a movie in which you learn how to watch a movie and how to make one. It fully unpacks how to introduce a new medium to the world, and a similar process must occur with thought. As Donna Haraway says, “It matters which stories tell stories, which concepts think concepts. Mathematically, visually, and narratively, it matters which figures figure figures, which systems systematize systems.” Our thoughts of the thinker’s thoughts matter. Therefore, we must consider that we have been thinking of artificial intelligence all wrong.
We need to teach AI to think better in order to teach ourselves to think better.
Instead, we have been making them in our flawed image, which does nothing to progress the human race and only highlights our cancerous anthropocentricity. We make artificial intelligence in our image, as many believe God made us in his image. What unfolds is a technological messiah complex, one played out exquisitely in Ex Machina and one continued by the disciples of Ray Kurzweil at the temple of singularity.
Perhaps this is the root of our problem: our need to see thinking as something inherent and not as the first form of technics. There was perhaps no thought before we first had a thought and recognized it as such. Now we are quite possibly able to extract that thinking process and embed it in a mechanical being, only to have it embedded back into us by some contact lenses through which we would be able to see and think better about the world. Again, Man with a Movie Camera is ripe for comparison.
Our brain makes a movie of the world we see. But we have never been fully shown how to make that movie, so we make it flawed and haphazard, only to the best of our ego’s ability. Our giving names to the things we perceive somehow creates an artificial notion of them. Friedrich Nietzsche says, “We believe that when we speak of trees, colors, snow, and flowers, we have knowledge of these things themselves, and yet we possess only metaphors of things which in no way correspond to the original entities.” Are we not somehow artificially intelligent ourselves?
One of the classic criticisms of AI is that we do not understand how we think, so how can we hope to model it? As Nietzsche asked, “What do human beings really know about themselves? Are they even capable of perceiving themselves in their entirety just once...?” How audacious are we to believe our knowledge of the human and of human experience is complete enough that we could in some way replicate it? While this is possibly a fair criticism from a programmability sense, I do not think it goes far enough. Not only do we fail to understand how we think well enough to model it, but a quick look at the history of our thought reveals just how flawed it has been. We may be the only conscious species, by our own definition, but how we have collectively acted and thought has at times been terrible. If we are to model machines after our world, we are hastening our own destruction.
Vilem Flusser illustrates this aptly in his essay, “The Ground We Tread.” Flusser claims that the “Western project” is the formation of apparatus, wherein everyone is turned into a cog in the machine. Furthermore, this process reached its ultimate state in the form of Auschwitz, “when for the first time in the history of humanity, an apparatus was put into operation that was programmed with the most advanced techniques available, which realized the objectification of man, together with the functional collaboration of man.” The concentration camp realized man as a mechanistic piece so fully that it was able to carry out genocide without remorse.
We’ve acted in the collective best interest in some of the worst ways. Flusser predicts the modern day when he says, “The ultimate objectification of the Jews into ashes...will be followed by less brutal objectifications, such as the robotization of society.” The way we are building them, AIs will only be as smart as we are collectively. As Flusser reminds us, collectively humanity has done some awful things.
Another slogan of IBM’s is “Cognitive Business is Here.” Of course this is saying: welcome to the cognitive economy, where one is able to farm out cognition. Let us not forget that IBM is in the business of offering services, which in this case is the service of thought. The more thought Watson is able to do, the less one has to pay someone to do it. We have to remember, as Jaron Lanier reminds us, that behind every AI are all the people who trained it, mostly unknowingly. I was recently enlisted to help train a new AI called Iris. In order to train Iris, a person needs simply to sit and interact with it and tell Iris whether it has made the right decision or not. You are simply helping it learn to pattern-make. In a sense Iris is just a dam for human thought. Just as when we see the river we see the dam and the standing reserve of energy of the river, now when we see the person we see the AI and the standing reserve of cognition. We truly have at last “enframed” ourselves.
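The kind of training loop described above can be sketched in miniature. Iris’s actual internals are not public, so the following is only a toy illustration of learning from right/wrong human feedback, with all names and numbers invented for the example.

```python
import random

# Toy sketch of human-in-the-loop training: the trainer judges each
# decision, and the system shifts its preference weights accordingly.
# Purely illustrative; the real Iris's implementation is unknown.
random.seed(0)

weights = {"option_a": 1.0, "option_b": 1.0}

def decide():
    """Pick an option with probability proportional to its weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for option, w in weights.items():
        r -= w
        if r <= 0:
            return option
    return option

def give_feedback(option, correct):
    """The human trainer's judgment nudges the chosen option's weight."""
    weights[option] *= 1.2 if correct else 0.8

# Simulate a trainer who considers option_b the right answer.
for _ in range(200):
    choice = decide()
    give_feedback(choice, correct=(choice == "option_b"))

print(weights["option_b"] > weights["option_a"])  # the pattern is learned
```

The point of the sketch is that the “intelligence” here is nothing but accumulated human judgments, which is exactly the damming of thought the paragraph describes.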
This brings us to the classic worry of AI: that it is here to steal everyone’s jobs, from the cab driver to the doctor. The same IBM presenter said that one day the picture recognition of Watson could be such that a doctor could determine from facial features whether or not someone had Down syndrome or even autism. Again, the lack of spectral thinking is astounding, but it hints at an even darker scenario: one in which a person texts Watson a picture of their child to find out if it has cognitive issues. Yes, cognitive business is here, and it is going to drive progress by identifying the disabled at face value… If you thought Flusser’s comparison of a robotic world to Auschwitz was unfair, you had better think again.
So how then should we make our artificial intelligences? To answer this, we have to ask ourselves: how do we want to see the world? Because we have to consider how we are going to have an AI think. Why would we want an AI to emulate our thinking process when we can imbue it with a far richer ability to pattern-make and network? This is not a new challenge. Rosi Braidotti proposed it in The Posthuman:
“We need to devise new social, ethical and discursive schemes of subject formation to match the profound transformations we are undergoing. That means that we need to learn to think differently about ourselves. I take the posthuman predicament as an opportunity to empower the pursuit of alternative schemes of thought, knowledge and self-representation.”
We have a valuable opportunity to radically transform the human process of thinking. How we make AI a transformational experience is by making its way of thinking transformational.
First of all, we should teach our AIs to adopt the view of Donna Haraway, who implores us to “make kin.” She says,
“We have a mammalian job to do, with our biotic and abiotic sym-poietic collaborators, co-laborers. We need to make kin sym-chthonically, sym-poetically. Who and whatever we are, we need to make-with—become-with, compose-with—the earth-bound.”
Who will these AIs work for? The easy response is humans, but perhaps that is not the right answer. Can we not program them to take a more holistic view of the planetary situation? Taking the less anthropocentric view, why not program them to give equal footing to everything? Let us also consider that as we teach an artificial intelligence to differentiate between things, we can also teach it to classify them as one. Yes, the bird is different from the cat, but they are both living beings residing alongside the rock on planet earth. Are they different? Perhaps only in physicality, but not in worth. Perhaps our worth is only a measure of “compost.”
Haraway evokes Kim Stanley Robinson and his book 2312. Looking to 2312 for an answer, we find the character of Swan. Swan has changed her very nature: she has spliced her genes with those of songbirds, ingested alien life forms, and even has an AI implanted in her. She runs with the wolves, feeling a calling to them. Swan seems to be the physical form of Roberto Esposito’s words: “The animal - in the human, of the human - means above all multiplicity, plurality, assemblage with what surrounds us and with what always dwells inside us.”
This idea of the other must be eradicated by understanding our interconnectivity. As Esposito says, “A future horizon that belongs neither to the human order, nor the animal order, but rather to the still hazy silhouette of their crossing.” Haraway’s closing remark reminds us just how much we have taken advantage of the other in our world: “No species, not even our own arrogant one pretending to be good individuals in so-called modern Western scripts, acts alone; assemblages of organic species and of abiotic actors make history, the evolutionary kind and the other kinds too.” Of course we must answer for the vast travesties we have wreaked upon the world, but perhaps we do so by paying our respects forward.
Massive mapping projects have been undertaken in the past to attempt to chart these huge actor-networks. But perhaps these mappings are only a beginning. They simply provide a template for a way of thinking; the mapping of past actor-networks allows a looking forward.
A prime example is to consider how dams have wreaked havoc on the natural world. Look at the disaster of the Aswan dam, which combined global trade, sugar cane, sweet blood, and mosquitos to cause a regional disaster. No talk of dams would be complete without a consideration of Heidegger’s enframing. Again, we lack the ability to see that this happens all at once, or that it is behind everything we touch and hold. We no longer see the river as a natural formation but instead as a resource to be extracted. We see river and we see dam, let alone mosquito, sugar cane, or death. Algorithmically devised and maintained actor-networks could perhaps be a beginning to a deeper understanding of our planetary interconnectedness.
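What an “algorithmically maintained” actor-network might mean can be pictured with a toy graph of the Aswan case. Every node and edge below is an illustrative assumption drawn from the narrative above, not empirical data; the point is only that once the network is explicit, a machine can trace consequences we fail to see.

```python
# Toy actor-network: heterogeneous actors (trade, crops, infrastructure,
# insects) linked by influence. All names and links are illustrative.
network = {
    "global trade":   ["sugar cane"],
    "sugar cane":     ["irrigation"],
    "Aswan dam":      ["Nile river", "irrigation"],
    "irrigation":     ["standing water"],
    "standing water": ["mosquito"],
    "mosquito":       ["disease"],
}

def reachable(graph, start):
    """Return every actor reachable from `start` by following influences."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Tracing influence outward from the dam surfaces consequences
# far beyond engineering: standing water, mosquito, disease.
print(sorted(reachable(network, "Aswan dam")))
```

Even this trivial traversal shows the move from “we see river, we see dam” to seeing mosquito and death at once; a live, data-fed version of such a graph is what the paragraph above gestures toward.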
At this point it is becoming rather obvious that I firmly believe an aggressive mapping of the world is not only possible, it is necessary. This is all due, perhaps frighteningly so, to technology. I would answer DeLanda’s question of whether “computers have emerged as windows onto this world” by saying they most certainly have. My argument becomes that we must make technology a better window. To do so, Manuel DeLanda becomes a shining light.
An aggressive mapping of actor-networks coupled with an algorithmic understanding would result in a densely layered network with levels of its own requiring deeper understanding. These maps would then require an understanding of the flows within them and the structures that govern them. As DeLanda says, “the work of Deleuze and Guattari is exemplary for the creation of such maps: they show how our lives may be viewed as a composite of rigid structures, supple structures, and finally, ‘lines of flight.’” Adding this new element to our networks of actors, layering on a new concept of stratification, would, I imagine, reveal deep insight into the nature of our world. I say imagine because I am not sure I am capable of fathoming exactly what the weaving of actor-network theory and the machinic phylum would look like. I picture it perhaps in the same way as the wave-particle duality of matter: a stiffer network of physical connection that sits encased in a conceptual flow.
“We must create stratometers of every kind--mathematical and nonmathematical--and get to work mapping the attractors that define our local destinies and the bifurcations that could allow us to modify these destinies. And though this is undoubtedly an enterprise fraught with dangers, we can derive some comfort from the hints of the machinic phylum that have recently become visible to us and seem to indicate there may be ways of evading our currently doomed environmental destiny.”
I think we should heed DeLanda’s call to action, with AI being the best tool to do so.
In order to make this entire new system of knowledge work, may I propose we reconsider where things are in both time and space. AI can give us a more dynamic idea of location, one not limited by static physicality or temporality. To help us, Ben Bratton has given us the concept of Deep Address, which he defines as:
“a concept suggesting the identification of objects and events with a granularity that far surpasses the scales at which humans perceive physical space and duration...as for all of the things and events beyond natural perception, the design program of deep address considers how it is that some as yet unspecifiable means to sense their presence and to identify them according to a generic addressing matrix, and even to network them one to the other would crack open the world’s information portraiture well beyond the normal scale of Internet of Things.”
When Bratton uses the word “deep,” he means deep into the molecular level: being able to give any phenomenon, regardless of size, a location in space. I suggest that we expand upon this and consider deep as also a term for temporality. A deep address would suggest a location not just of the physical now but also of the previous and the possible: an ever-changing concept of time and space driven out of a live actor-network theory. It may not even be possible to conceive of what this would look like, just as diagramming the concept of stratification seems daunting. Perhaps it is something that must be grown like a culture in a laboratory.
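One very loose way to picture an address that spans the previous and the possible is a record keyed by both spatial and temporal extent. Everything below, from the field names to the sample values, is my own speculative sketch and not Bratton’s specification.

```python
from dataclasses import dataclass, field

@dataclass
class DeepAddress:
    """Speculative sketch of a temporally 'deep' address: it locates a
    phenomenon in space at some scale, and in time across past and
    possible states, not just the physical now. All fields are
    illustrative assumptions, not Bratton's own scheme."""
    position: tuple                               # spatial coordinates
    scale_m: float                                # characteristic size in meters
    past: list = field(default_factory=list)      # previous states
    possible: list = field(default_factory=list)  # projected states

# A river addressed not only as it is, but as it was and could be.
nile = DeepAddress(position=(23.97, 32.88), scale_m=6.6e6,
                   past=["free-flowing river"],
                   possible=["dammed reservoir", "restored flow"])
print(nile.past + ["present"] + nile.possible)
```

The design choice worth noting is that the address itself, not some external database, carries the phenomenon’s history and futures, which is what makes it “ever-changing” in the sense the paragraph suggests.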
Perhaps it would be best to refer to this network not as a social network, or an inhuman network, but simply as a mapping of intelligence: an attempt to say that all things have intelligence; that we live in an ever-meshed universe of intelligences that are at times intelligible to the human. Our idea is to bring these all to a scale, both physically and temporally, where they can be understood, not fully but more robustly, and with insights made actionable. The scale we think is most universal is that of data, both qualitative and quantitative. We propose an open portal into the intelligence of our world: a way for us to be in touch with that which we cannot hope to comprehend in full complexity except by simple means through math and visualization. This design proposal echoes that of Isabelle Stengers’s Cosmopolitical Proposal when she asks,
“As for the cosmopolitical perspective, its question is twofold. How to design the political scene in a way that actively protects it from the fiction that ‘humans of good will decide in the name of the general interest’? How to turn the virus of the river into a cause for thinking? But also how to design in such a way that collective thinking has to proceed ‘in the presence of’ those who would otherwise be likely to be disqualified as having idiotically nothing to propose, hindering the emergent ‘common account’?”
One could easily end a discussion of artificial intelligence here, but I think it is important to further unpack how an AI of this nature becomes a vehicle to transform the planet. My theory of change is that transformative experiences become embedded into our culture; slowly it simply becomes “cool” to be a better person. We need better tools to help us become better people. Ultimately, that is why I believe an empathetically oriented artificial intelligence built as a vehicle for planetary awareness becomes this transformative experience. Nothing scales change like a cultural transformation.
Bernard Stiegler reminds us that “technics evolves more quickly than culture.” What if we designed more empathic technics? Would we have a better culture? Continuing this train of thought, perhaps more darkly, Flusser says, “it has currently become impossible to engage ourselves in the ‘progress of culture.’ As doing so would be to engage ourselves in our own annihilation. We have lost faith in our culture, in the ground we tread. That is: we have lost faith in ourselves. It is this hollow vibration that follows our steps toward the future.” Culture is a technological byproduct; an afterthought.
IBM’s business model has of course long been that of service. They stopped making computers and began offering a service. But of course the most successful services are those that are experiences. People go to Starbucks not for the service of having coffee made for them, but for the experience of “Starbucks.” IBM continues their folly with Watson. They promote Watson as a service, something humanity has been awarded for its enduring progress.
Of course, if AIs are built not simply to replace human labor but instead to expand human thought, then they are to be more than a service. They will be a transformation.
Rapidly AI will go from service to transformation; in fact, experience will likely be an afterthought. The experience will almost become inherently spiritual, like when Braidotti describes the circling of witches: “The efficacy of the ritual is therefore not the manifestation of a Goddess who might inspire the answer but that of a presence that transforms each protagonist’s relations with his or her own knowledge, hopes, fears, and memories, and allows the whole to generate what each one would have been unable to produce separately.” We will likely have to go back and consider just how one experiences such a transformative understanding of the world.
Which of course brings us back to Man with a Movie Camera. IBM’s Watson lacks a revealing of a deeper human experience. It does not transform how we think. But it could. Man with a Movie Camera transformed its viewers by letting them understand a new medium. They were taught how to consume and make while they were consuming. In doing so they were taught a new way to communicate. This is what the model of the AI experience should be: one in which we are taught a new way to think and to think about thinking, and shown how our new ability to think has revealed an entirely new and better process of thought. If we are to truly make it into the next century, we must adopt this model for new media as well. How do we bolster our understanding by understanding new ways to understand?
Perspective is an invaluable experience. In 1968, during Apollo 8, astronaut William Anders snapped a picture of the earth as it crested over the lunar horizon. The now famous picture, entitled Earthrise, has gone on to be a cultural phenomenon. As Michael McCarthy recently wrote:
“Earthrise and its taking was without doubt one of the most profound events in the history of human culture, for at this moment we truly saw ourselves from a distance for the first time; and the Earth in its surrounding dark emptiness not only seemed infinitely beautiful, it seemed infinitely fragile.”
Perhaps in the same way as Earthrise, we will be able to see not only our infinite beauty and fragility, but also our infinite connectedness. After all, “Gaia is just symbiosis as seen from space.” Perhaps all-pervasive empathy requires a similar perspective, one only possible through artificial intelligence.