As humans, we like to think we’re pretty smart. And, for the most part, that’s a pretty smart thing to think. After all, put us up against any of our fellow meat-machines in the animal kingdom and it would be hard to place us anywhere other than at the top of that particular intellectual food-chain (I mean, sure, dolphins are pretty bright, but when you swim in the same place you poop, how smart can you really be?).
When it comes right down to it, even the dimmest bulbs among the human species make any other known life-form look like a sad excuse for a candlestick.
However, bring computers into the conversation, and all of a sudden, things start to look a lot more like a light show…
Imagining other minds
If you’ve happened across any sort of technology news in recent years, chances are, you’ve heard somebody, somewhere, mention something about A.I. (short for artificial intelligence, in case you never watched the then over-hyped but currently underrated 2001 Spielberg/Kubrick flick by that name). And probably, if you’re like most people, upon hearing the term A.I., the first image that flashed into your mind was R2-D2 and C-3PO, Rosie from The Jetsons, or a leather jacket-wearing Arnold Schwarzenegger grunting quasi-English catchphrases as The Terminator (more on A.I. as existential threat in a little bit…).
These contemporary fictional A.I. representatives are well-embedded in the popular unconscious, but the idea of thinking machines is actually much older than one might readily imagine. Think, for instance, of Dr. Frankenstein and his lab-created artificial man from Mary Shelley’s famous 1818 novel, or the golem, a clay humanoid brought to life by divine magic in historical Jewish literature. Or, older still, consider the Greek god Hephaestus, who, in classical myth, was said to have created golden automata to help him work in his forge—perhaps taking the phrase “making new friends” a bit too literally.
Basically, as long as we’ve been making stuff, we’ve been imagining how that stuff might one day find a way to join us at the adult table of intelligent thought. So far, we’ve had to make do with our imagined creations as dinnermates, but if recent technological advances are any indication, it might not be long until machines get their own seat at the table.
The technology of intelligence: evolved
A.I. technology has seen something of an explosion in recent times. While we’ve had autonomous factory robots for decades now, these days, A.I. technology continues to become more advanced and more subtly widespread than ever before.
Within just the last five years, we’ve seen self-driving cars begin to roll out onto our city streets, witnessed internet bots swarm Twitter and Facebook feeds (incidentally becoming Neo-Nazis in the process), and watched Siri and Alexa take up residence in our digital devices as our own personal genies-in-a-bottle, helping us with everything from finding out when we booked our latest nose-hair waxing appointment to where best to pick up a spread of locally-sourced, gluten/lactose/cruelty-free avocado-salad sandwiches.
As technology pushes forward, we can expect more and more human tasks to fall under the purview of A.I. systems, especially as we find ways to mimic our own cognitive compositions in machines.
One of the most game-changing advances to come to the field of machine intelligence has been the development of artificial neural networks (ANNs). Inspired by the biological neural networks that constitute animal brains, these computer systems learn to complete tasks by considering examples, generally without task-specific programming.
For example, one such system might be trained in image recognition. This training would consist of learning to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using those analytic results to identify cats in other images. (No word yet if they can auto-un-tag photos of drunken escapades before they make it to your boss’s Facebook timeline but, surely, that’s on the way.)
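To make that training loop concrete, here’s a toy sketch of the idea: a single artificial neuron learning a “cat”/“no cat” label from two made-up per-image features. (Everything here—the features, the data, the labels—is invented for illustration; real image classifiers use many layers of such neurons and millions of labeled examples.)

```python
import math

# Toy "images": two hand-picked features per image (say, ear-pointiness and
# whisker-density), each paired with a 1/0 label for "cat"/"no cat".
examples = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.9), 1),
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.2), 0),
]

def predict(weights, bias, features):
    # One artificial neuron: a weighted sum squashed into a 0..1 "cat-ness" score.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

def train(examples, epochs=2000, lr=0.5):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in examples:
            p = predict(weights, bias, features)
            err = p - label  # gradient of the log-loss for this example
            weights = [w - lr * err * x for w, x in zip(weights, features)]
            bias -= lr * err
    return weights, bias

weights, bias = train(examples)
print(predict(weights, bias, (0.85, 0.85)))  # high score: looks like a cat
print(predict(weights, bias, (0.15, 0.15)))  # low score: no cat here
```

The network is never told *what* a cat is; it just nudges its weights, example by example, until its scores line up with the labels—which is, in miniature, the whole trick.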
The application potential for this type of software is virtually (pun intended) unlimited.
One use of said applications became rather famous in March 2016, when Google DeepMind’s software, AlphaGo, defeated go world champion Lee Sedol 4-1 in a five-game match—building on, and wholly surpassing, IBM Deep Blue’s achievement in defeating chess world champion Garry Kasparov in 1997 (to drive home the gulf in difficulty between the two tasks: chess has a state-space complexity of roughly 10^47 and a game-tree complexity of around 10^123, while go’s game tree clocks in at about 10^360. There are roughly 10^81 atoms in the entire observable universe. So yeah, this is a non-trivial accomplishment.)
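Those astronomical figures come from simple branching arithmetic: a rough game-tree estimate is just (average legal moves per turn) raised to the power of (average game length). A quick back-of-envelope sketch, using commonly cited ballpark values for each game (the exact numbers vary by source):

```python
import math

def tree_size_exponent(branching_factor, game_length):
    # Base-10 exponent of branching_factor ** game_length,
    # i.e. the "10^x" size of the game tree.
    return game_length * math.log10(branching_factor)

# Ballpark figures: chess ~35 legal moves per turn over ~80 plies;
# go ~250 legal moves per turn over ~150 moves.
chess = tree_size_exponent(35, 80)
go = tree_size_exponent(250, 150)

print(f"chess ~ 10^{chess:.0f}")  # on the order of 10^124 by this crude estimate
print(f"go    ~ 10^{go:.0f}")     # on the order of 10^360
```

A modest-looking bump in branching factor, compounded over a longer game, is what turns a merely absurd number into a truly ludicrous one.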
To date, these systems have also been used for superior natural resource management, data mining, e-mail filtering, and financial decision-making through automated trading systems. They’ve also been used for black-box engineering in fields such as geoscience, hydrology, ocean modelling, and geomorphology. Within these fields, such artificial neural networks have been able to design and build solutions that worked better than any human engineers could achieve. The real kicker? That “black-box” qualifier means that the system’s programmers don’t have a clue how the networks are doing it. As world-renowned infomercial magnate Ron Popeil would encourage, they just set it and forget it.
The technology is impressive. In fact, it’s impressive enough that the mere fact of its capabilities may soon force us to ask those questions that tangle with the deepest roots of the human existential tree—to proffer a few: what does this technology mean for humanity? How will A.I. alter our place in the world as individuals? As a species? Are we in danger of losing our seat at the big brain table or of becoming those barely flickering candles next to the floodlight of computerized intelligent systems? Where do we even go from here? The list of worthy questions is a long one, to say the least.
Working with A.I.: displacement or destruction?
While it’s difficult to come up with many concrete answers to these questions, at a minimum, it’s pretty clear that this sort of technology will present a radical challenge to our current societal modes of being in the coming years, decades, and probably millennia (supposing we dodge both catastrophic global climate change and the Cold War redux—with special guest star Best Korea—that we’re currently in the middle of).
For one, I don’t think it’s a stretch to say that AI technology will disrupt the state of human work in a significant way. The question is, how will we handle that disruption?
Many blue-collar workers have been living under the threat of technological dislocation for more than 30 years now. During this time, the professional class has generally responded to the plights of these workers with a variant of “re-train” or “work harder”, all while the path to middle-class life has narrowed to a fraction of its former width. Now, what’s going to happen when these same professionals start seeing their own jobs in the crosshairs of technological displacement?
In case you missed it, ANNs are being used to make more accurate medical diagnoses and to engineer better structures than any human minds working in those fields have produced. Surely, it’s one thing to replace a “mere parts-assembler”, but when doctors, lawyers, stock brokers, and—here’s some irony—programmers start being put out of work in favour of these upstart computer intelligences, what are plans B, C, and D-through-Z for human utility?
By the way, you artists out there—don’t get too cozy. Neural networks are now being trained to create artwork and music of their own, to provocative results. The next great musical genius might be composing in 0’s and 1’s rather than in ¾ time and semitones.
Of course, the most common refrain when this is pointed out is to hark back to the industrial revolution, or to the early 20th century, when the automobile came onto the scene and people started to sound the death-knell for the wheel-making industry. Ah, but the wheel-makers adapted to the change in technology, and soon their wheels were rolling under Model T’s rather than behind Clydesdale steeds. Displacement, not destruction—they might say—has been the abiding principle in the history of human labour.
But the mistake people make when presenting such an example is to identify us with the wheel-makers in the scenario, when, in actuality, we’ve got more in common with the horses. Think about it; as humans, we’re able to offer one of two things as our labour: the physical or the mental. What other domains do we turn to once those options have been exhausted?
Towards an intelligent future
In light of this situation, we must consider what type of future we would like to bring to fruition. One option could find us entrenched in a new hyper-gilded-age, where A.I. technology brings about vast wealth for the fortunate few owners of the machines, leaving all but those at the tippy-top of the human pyramid to fight for the scraps left in the wake of the new technological epoch. Or, we might consider ways to make the distribution more equitable. Perhaps implement a machine tax? Or individually-assigned human-labour machine proxies who generate income from their work output, alleviating the necessity of human toil? Or maybe just stick us all in The Matrix and be done with it?
Whichever route we follow, we’re going to have to get creative. Our current model isn’t going to hold up to the changes. Indeed, in many ways, it’s barely holding up as it is (or would you prefer to characterize the things going on in the world today as comfortingly boring?).
Not to mention the double-edged threat-and-promise that a “superintelligent” system could pose. Picture a networked intelligence, super-human in every field of its capacity, able to do the equivalent of 20,000 years of human thought in a single second. And then imagine that intelligence gets loose on the internet. (If you thought Nazi social media bots were a problem, you ain’t seen nothin’ yet.)
It might sound like the premise of a generic 1980s sci-fi movie, but real-life academics and technology leaders are taking this possibility with surprising seriousness; Elon Musk, for one, warned that building such an A.I. system could amount to “summoning the demon”, as he put it. (I’ll let you fill this beat with your own personal nightmare interpretations of that statement.)
The only thing more frightening than the possibility that a superintelligent A.I. system could go wrong and turn against the fleshy hand of its makers is the prospect of not inventing such a system at all. One need only consider the number of lives that could be saved from all manner of natural disaster and tragedy through the use of such technology to realize that intentionally delaying its creation would, in effect, rack up an ever-rising death-tally for each day—or even hour—that we fail to bring the technology on-line.
Beyond all of that, how do we even begin to broach what such a system could mean for our understanding of consciousness—both our own and in the universe at large?
Within this question, we find another miscellany of discomforting possibilities. We could end up creating artificial consciousness before we’ve even figured out how consciousness works within ourselves. Or give rise to an intelligence magnitudes more capable than our own without consciousness coming along for the ride at all.
In other words: we could turn the lights on but find nobody home.
To ride that analogy back to an earlier light metaphor: we’re talking about an intelligence capable of supernova bright-light intensity but nonetheless without any capacity to hold the experiential effects of that brilliance—a null qualia whose incandescence nonetheless drowns out any other pallid sparks within its sphere.
So while this future might currently lie distant on the horizon, the difficulty of the questions it poses means it’s probably not a bad idea to start thinking ahead to them as soon as we can—perhaps while we’re still the ones doing the thinking.