T-Minus AI: Humanity’s Countdown to Artificial Intelligence and the New Pursuit of Global Power

By Michael Kanaan, Aug/2020 (270 p.)


This was a great book because it educates the reader on a complex topic without being dry or unnecessarily long. The author succeeds in weaving the complex building blocks of AI into the history of human intelligence – from the invention of language in Chapter 2, to automatically generated texts in Chapter 16.  Amazon’s description of the book is pretty solid, so I included it below, after my Highlighted Passages – where I also added a number of interesting links to videos and articles.  As the US Air Force Captain explains in this interview, his book is organized in three parts:  Part 1 provides context, Part 2 defines AI, and Part 3 addresses its implications.

Captain Kanaan has a talent for explaining complex concepts in brief and basic terms. Among the many explanations, analogies, and insights in the book, his description of cloud computing in Chapter 8 was precious.  “Cloud computing is just a means of accessing all aspects of computing services over the internet,” he explains.  “Regardless of their locations, cloud computing simply allows us to instantaneously draw upon the strength, software, and information of other computers —which are inevitably more powerful, equipped, and versatile than our own.”  But it was his review of neural networks in Chapter 9 that I would have to call my favorite, not only because it brought back good memories, but because it showed how far the field has come since it first gained my attention in the late 1980s.

In Part 3 of the book Captain Kanaan worries out loud about the geopolitical implications of AI.  He contends that the Chinese, Russians, and Saudis do not follow the doctrines of democracy and are therefore not to be trusted with the power that AI affords.  With typical military paranoia, he exposes several threats and wrongdoings committed by these sovereign nations, including invasions of privacy, genocide, misinformation, and torture.  While less critical and alarmist, Kai-Fu Lee’s excellent book, AI Superpowers: China, Silicon Valley, and the New World Order, reaches a similar conclusion from the Chinese perspective:  that a global technology dominance race is underway.  The bottom line is that the geopolitical agenda speaks loudly these days, making antitrust and privacy regulation seem like sideshows.  In fact, Captain Kanaan never even mentions antitrust or the DOJ in his book – although he does point to the winner-take-all nature of the technology.

Because I quickly became a fan of Captain Michael Kanaan and his Twitter feed, I excused him for mildly succumbing to some of the same biases he warns us about in his book. For instance, it caught my eye that in Chapter 8, he claimed “IBM, Oracle, and Google are the other main American companies offering cloud computing.” Why would he have downgraded Google to “other” status and listed it behind IBM and Oracle?  Any Google (or Bing) search will show Google as number 3.  As of Q2-2020, Amazon is said to have about one third of the cloud infrastructure service market, followed by Microsoft at 18% and Google at 9%.  While IBM/Red Hat do indeed participate in this market and show up in some of the industry charts, they are not that significant and are not growing as fast (i.e. losing share).  Former Google CEO Eric Schmidt is quoted praising Captain Kanaan on the cover of the book (he also shows up on his LinkedIn page) – so I couldn’t help but wonder if Eric noticed the slip.  It certainly is not because of the Alphabetical order (pun intended).  But judging by these promotional videos featuring the young Captain Kanaan touting Red Hat technology – one does not need a neural network to figure out why IBM even made it to that list.

The concluding words of the book were fascinating if only because they were self-generated by a deep neural network called GPT-2.  As Captain Kanaan explains:  “Before closing this book, I thought it would be an appropriate experiment to informally test GPT-2 myself. On the very first page of this book, in the short Author’s Note just before the Prologue, I wrote: Our focus now must be to openly address the current realities of AI to ensure, as well as we can, that it is implemented only in ways consistent with fundamental human dignities . . . and only for purposes consistent with democratic ideals, liberties, and laws. At this point in your reading, I trust you know how convinced I am of those words. They seemed a perfect choice to test GPT-2, an appropriate sample to see what kind of “continuation” the program would produce. When I typed and submitted them into the program, its generator displayed a response almost immediately. The words the algorithm created, on its own and in less time than it took to lift my fingers from the keyboard, are shown as the epigraph at the start of this chapter. They’re so cogent to the entirety of this book that they bear repeating. So, here they are. This is from an algorithm familiar with eight million web pages, but prompted only by my 43 words: “Our job is now to convince the public in particular that using AI to achieve these aims is a necessary and desirable part of our society, but we cannot afford to do so unless we know how it will best be used and when. But in the end, the future demands we make moral decisions as we begin to build a world that is truly safe and sustainable, one where humans and AI can truly coexist together.”

So in closing, this was an excellent book on a very relevant and timely topic, one for which easy-to-read books are not readily available.  As such, I would highly recommend it.

Best regards,


Highlighted Passages:


“Mike Kanaan is an influential new voice in the field of AI, and his thoughts paint an insightful perspective. A thought-provoking read.”—ERIC SCHMIDT, former CEO and executive chairman of Google

“This is one of the best books I’ve read on AI.”—ADAM GRANT, New York Times bestselling author of Originals and Give and Take

Author’s Note

The countdown to artificial intelligence (AI) is over.

Prologue: Out of the Dark

As the Air Force lead officer for artificial intelligence and machine learning, I’d been reporting directly to Jamieson for over two years. The briefing that morning was to discuss the commitments we’d just received from two of Silicon Valley’s most prominent AI companies. After months of collective effort, the new agreements were significant steps forward. They were also crucial proof that the long history of cooperation between the American public and private sectors could reasonably be expected to continue.

“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” – Vladimir Putin – September 2017

In the months that followed [late 2017], Putin’s now infamous few sentences proved impactful across continents, industries, and governments. His comments provided the additional, final push that accelerated the planet’s sense of seriousness about AI and propelled most everyone into a higher gear forward.

Only a month earlier, China had released a massive three-part strategy aimed at achieving very clear benchmarks of advances in AI. First, by 2020, China planned to match the highest levels of AI technology and application capabilities in the US or anywhere else in the world. Second, by 2025, they intend to capture a verifiable lead over all countries in the development and production of core AI technologies, including voice- and visual-recognition systems. Last, by 2030, China intends to dominantly lead all countries in all aspects and related fields of AI. To be the sole leader, the world’s unquestioned and controlling epicenter of AI. Period. That is China’s declared national plan.

Part 1: The Evolution of Intelligence: From a Bang to a Byte

In the age of artificial intelligence, second place will be of an ever-diminishing and distant value.

Chapter 1: Setting the Stage

Most conversations about artificial intelligence, whether in auditoriums, offices, or coffee shops, either begin or end with one or more of the following questions:

1.​ What exactly is AI?

2.​ What aspects of our lives will be changed by it?

3.​ Which of those changes will be beneficial and which of them harmful?

4.​ Where do the nations of the world stand in relation to one another, especially China and Russia?

5.​ And, what can we do to ensure that AI is only used in legal, moral, and ethical ways?

But, when it comes to their scientific portrayals of artificial intelligence, our most popular authors and screenwriters have too often generated an array of exotic fears by focusing our attention on distant, dystopian possibilities instead of present-day realities. Science fiction that depicts AI usually aligns a computer’s intelligence with consciousness, and then frightens us by portraying future worlds in which AI isn’t only conscious, but also evil-minded and intent, self-motivated even, to overtake and destroy us.

Chapter 2: In the Beginning . . .

To put the slow pace of our intellectual innovations into perspective, we didn’t invent the wheel until 5,000 to 6,000 years ago. Think about that. Over the entire timeline of Homo sapiens’ existence, and as the most advanced and only remaining of all human species, it took 194,000 of our 200,000 total years on Earth to finally piece together the idea and method of putting a round object to a constructive, locomotive use.

…little more than 5,000 years ago, the ancient Sumerians (in modern day Iraq) first began reducing their verbal language to writing. This was a tremendously important step forward in the use of language—it enabled true collective learning.

From the moment alphabet-based written languages took hold and began spreading throughout different civilizations, a new game was truly on. Human learning, human capability, and human dominion over nature began to expand at an unprecedented rate.

Chapter 3: Too Many Numbers to Count

A common thought experiment using the concept of time is to imagine counting to the number 1,000—not an unreasonable task, although I suspect few of us have actually done it. In any event, if you start with the number one and count one additional number every second, without stopping, it will take you almost 17 minutes to reach 1,000. That’s not tough to calculate. Just divide 1,000 by 60 (the number of seconds in a minute). And while it might take a bit longer than you’d have guessed, it’s easy to see, and you’re probably thinking, “OK, sure, that sounds about right.” But what if you want to continue counting from one thousand to one million? How much longer would that take? This time, the answer’s more likely to surprise you. If you continue counting, again by adding one new number every second and again without stopping (not even to speak, eat, drink, or sleep), it will take more than 277 hours, or more than 11½ continuous, uninterrupted days. And if you want to be more realistic in going about the task, by only doing your counting during the course of your eight-hour workdays, then the job of counting to a million would take you almost two months of a full-time work schedule to complete—and that’s without time off for lunch. And what about a billion? That’s a number we now hear all the time. It can’t be that much more than a million, right? Wrong. If you want to count to one billion at one-second intervals, you’ll unfortunately have to spend most of your adult life at the task, because it will take almost 32 years of continuous, nonstop counting to get there. And to count to one trillion, another number that we’re beginning to hear and use more frequently? Well, that would take you almost 32,000 years.
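Reviewer’s note: the counting arithmetic above checks out, and it is easy to verify with a few lines of Python (a quick back-of-the-envelope sketch using the passage’s own one-number-per-second assumption, not something from the book):

```python
# Verify the book's "count one number per second" arithmetic.
SECONDS_PER_HOUR = 3600
SECONDS_PER_DAY = 24 * SECONDS_PER_HOUR
SECONDS_PER_YEAR = 365 * SECONDS_PER_DAY

minutes_to_thousand = 1_000 / 60                          # ~16.7 minutes ("almost 17")
days_to_million = 1_000_000 / SECONDS_PER_DAY             # ~11.6 days ("more than 11½")
years_to_billion = 1_000_000_000 / SECONDS_PER_YEAR       # ~31.7 years ("almost 32")
years_to_trillion = 1_000_000_000_000 / SECONDS_PER_YEAR  # ~31,710 years ("almost 32,000")

print(f"{minutes_to_thousand:.1f} min, {days_to_million:.1f} days, "
      f"{years_to_billion:.1f} years, {years_to_trillion:,.0f} years")
```

Every figure the author cites lands within rounding of these results.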

In today’s world of computing, the kinds of numbers that exceed our natural comprehension have become commonplace. Scientists, computer designers, software programmers, and even consumers encounter them every day. They explain our universe, our products, and even ourselves. They also explain artificial intelligence.

Chapter 4: Secret Origins of Modern Computing

So, what does Mexico’s independence from Spain combined with Texas’s subsequent independence from Mexico, America’s resulting annexation of Texas, and the American acquisition of more than half of Mexico’s territory at the end of the Mexican-American War possibly have to do with computer technology and the eventual creation of artificial intelligence? Fast-forward seven decades, all the way to the other side of the Atlantic, and the connections unfold. … In an effort to take advantage of lingering animosities, the German secretary of foreign affairs, Arthur Zimmermann, sent a coded telegram on January 19, 1917, to Germany’s ambassador in Mexico instructing him to offer an alliance and financial support to Mexico if it would agree to invade America should the US enter the war. In pertinent part, the telegram read: We intend to begin on the first of February unrestricted submarine warfare. We shall endeavor in spite of this to keep the United States of America neutral. In the event of this not succeeding, we make Mexico a proposal of alliance on the following basis: make war together, make peace together, generous financial support and an understanding on our part that Mexico is to reconquer the lost territory in Texas, New Mexico, and Arizona . . . Signed, Zimmermann.  See Figure 4.1. Though little is made of it in history books, Zimmermann’s telegram was one of the most significant strategic missteps in military history. It not only failed, but completely backfired. Unknown to the Germans, the British had been intercepting their military signals and communications for years. When the Zimmermann telegram was sent, the Royal Navy’s code-breaking operation intercepted it, deciphered it, and, a little more than a month later, turned it over to the American embassy in London. On February 26, 1917, American president Woodrow Wilson first learned of the telegram.
At that point, the German submarine campaign in the North Atlantic had already begun, and American cargo ships were sinking just as the telegram foreshadowed. Although Wilson was already strategizing a military response, many Americans and members of Congress were still strongly opposed to entering the war. But the Zimmermann telegram was Wilson’s ticket to change public opinion. He presented it to Congress and instructed the State Department to openly release its contents to the American media.

…it’s estimated that the information the Allies acquired through Turing’s Bombe, along with other work accomplished at Bletchley, shortened the war by at least two years and saved millions of lives.

Chapter 5: Unifying the Languages of Men and Machines

“Mathematical science shows what is. It is the language of unseen relations between things. But to use and apply that language, we must be able fully to appreciate, to feel, to seize the unseen, the unconscious.” —Ada Lovelace, 1815–1852 English Mathematician and Computing Theorist

Science fiction aside, no computer programming language will ever be any person’s native or natural language. But no one can dispute that in today’s computer-driven age of information, common computer languages like Java, JavaScript, C, C++, Python, Swift, and PHP are extraordinarily useful and powerful skills to possess—so much so that they’re now becoming accepted in various places as the equivalents of true secondary languages. … High-level computer programming languages, on the other hand—such as Python, Java, C++, and C—are much easier to write, but they rely upon other interpreter programs (called compilers) to convert their high-level code into the machine’s underlying binary code.
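Reviewer’s note: Python itself offers a neat way to glimpse the gap the author describes between high-level code and the instructions a machine actually runs. The standard-library `dis` module disassembles Python source into interpreter bytecode – not the underlying binary machine code, but the same idea one level up:

```python
import dis

# Compile a trivial high-level expression, then disassemble it into the
# stack-machine instructions the Python interpreter actually executes.
code = compile("a + b", "<example>", "eval")
dis.dis(code)  # prints low-level opcodes such as LOAD_NAME and a binary-add instruction
```

The one-line expression `a + b` expands into several primitive load/add/return steps, which is exactly the translation work the passage says compilers and interpreters perform on our behalf.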

Chapter 6: Consciousness and Other Brain-to-Computer Comparisons

“It’s ridiculous to live 100 years and only be able to remember 30 million bytes. You know, less than a compact disc. The human condition is really becoming more obsolete every minute.” —Marvin Minsky, 1927–2016 Cofounder of MIT AI Laboratory

While the scopes of the tasks that narrow AI programs can accomplish are broadening, computers are still not anywhere close to accomplishing the general intelligence, multitask potential and performance parameters of which humans are capable. General artificial intelligence, as we’ll discuss in Chapter 9, is still far off, if ever.

Cognitive and neuroscientists say we’ve learned more about the actual physiology of the brain in the last ten years alone than we had known in all of our prior history. Even so, much remains a complete mystery.

Moreover, until recently, our experiences and traditional ways of thinking told us that consciousness was a prerequisite for intelligence, and that the latter could only arise from the former. We never knew of anything intelligent that wasn’t also conscious. Those two phenomena seemed always, intuitively and actually, to go together. It’s therefore understandable that science fiction seems to always put the two together. In most fictional equations, intelligence always equals consciousness. It’s reasonable, then, for people to feel uneasy about, and suspicious of, machines that can learn, especially when they can do so on their own and without any continuing programming or specific oversight from us. But, in the new world ahead of us, we have to put our unease aside, right along with our old notions that intelligence always requires or results in consciousness. It doesn’t.

In his book The Future of the Mind, Kaku suggests that we should think of consciousness as a progressive collection of ever-increasing factors that animals use to determine and measure their place in both space and time, and in order to accomplish certain goals. Kaku proposes that there are three fundamentally distinguishable levels of consciousness. Level one is a creature’s singular ability to understand its position in space. In other words, it’s the ability to be aware of one’s own spatial existence with respect to the existence of others. This is the most minimal, basic level of consciousness, and it emanates from the oldest, most prehistoric part of the brain—the hindbrain or reptilian brain. A lizard, for example, can be said to have level-one consciousness because it is aware of its own space in relation to the space of the other animals upon which it preys. … Level-two consciousness is an animal’s ability to understand its position with respect to others—not only contrary to them, but also in concerted accord with them. This level of consciousness flows from the later-evolved center regions of the brain, the cerebellum, and involves emotions and an awareness of social hierarchy, protocol, deference, respect, and even courtesies. Kaku describes this level of consciousness as the monkey brain—the ability to understand and abide by social hierarchy and order within groups and communities of animals. In humans, this level of consciousness develops during the early stages of our socialization, when young children learn from their parents and others to abide by the rules of their homes and communities, to act socially responsible, and to show respect and tolerance for others. Level-three consciousness is the ability to not only understand our social place in space with respect to the places of others, but also understand our place in time—to have an understanding of both yesterday and tomorrow. 
In Kaku’s view, the level-three ability to reflect, consider, plan, and anticipate the future involves unique attributes of consciousness that only humans possess. He asserts, and most others would agree, that this level results from the part of the brain that most recently evolved, which is the outer and forwardmost part that sits right behind our foreheads—the prefrontal region of the neocortex. As we discussed in Chapter 2, this is where mankind’s higher thinking resides, including such skills as theorizing and strategizing.

…flowers, which likewise fall short of level-one consciousness, can nonetheless be said to have a few additional perceptive elements beyond that of a thermostat. In addition to temperature, a flower can also sense humidity, soil quality, and the angle of sunlight. The Venus flytrap, a carnivorous plant, takes things even a step further than a flower. Beyond those things that any plant can sense, the flytrap is also able to detect the presence of an insect or spider on the blades of its leaves. When it does, it folds in on itself to capture and then digest the prey. Its mechanisms are so highly specialized that it can even distinguish between living prey and nonliving stimuli, such as falling raindrops. Yet, Little Shop of Horrors aside, most of us wouldn’t think for an instant that a Venus flytrap has anything that even remotely approaches an animal’s level of overall consciousness. And we’d be right. Nonetheless, the flytrap does have some elements of awareness that are, at least generally speaking, foundational components of consciousness.

But, with machines now capable of completing those and many other intelligent goals without us, consciousness is no longer a necessary element. Just because computers can be programmed to accomplish such tasks on their own, and even learn while doing so, it doesn’t mean that they’ll one day just spontaneously develop consciousness. In this new world of ours, intelligence and consciousness are not interdependent.

There’s a misconception, which has now become common myth, that we use only 10 to 20 percent of our brain. In truth, we use virtually all of it—and most of our brain is active most of the time. Brain scans show that no matter what we are doing or thinking, all areas remain relatively active and none are ever completely dormant or shut down. Even when we’re sleeping, all parts of our brain show at least some levels of readiness and interactive activity. For all of its complexity and despite the continual engagement of all of its parts, the human brain is extremely efficient in energy consumption. Requiring only 20 watts, which is barely enough to light a dim incandescent lightbulb, it’s about 50 million times more efficient than any of today’s computers of even remotely comparable capacity. That’s fortunate for us, because we can only produce a certain amount of energy from the volume of food we’re capable of eating on any given day. Still, despite its efficiency compared to its total energy consumption, our brain does demand more energy than any of our other organs. Although it only weighs about 2 percent of our total body mass, it requires almost 20 percent of the total energy we generate. At that rate, the brain consumes 10 times its pro rata share of our available energy, or approximately 500 of the 2,400 calories we consume on an average day.
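Reviewer’s note: the brain-energy figures in that passage are internally consistent, as a quick check shows (all numbers taken from the passage itself):

```python
# Check the passage's brain-energy arithmetic.
daily_calories = 2_400       # average daily intake, per the passage
brain_energy_share = 0.20    # brain uses ~20% of our energy...
brain_mass_share = 0.02      # ...while weighing only ~2% of body mass

brain_calories = daily_calories * brain_energy_share       # 480, i.e. "approximately 500"
pro_rata_multiple = brain_energy_share / brain_mass_share  # 10x its share by mass

print(f"{brain_calories:.0f} calories, {pro_rata_multiple:.0f}x pro rata")
```

So the “10 times its pro rata share” and “approximately 500 of the 2,400 calories” claims follow directly from the 2-percent-mass, 20-percent-energy figures.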

Just imagine the intellect and insight any one of us would have if we could immediately analyze all of the information and experiences stored not only in our own mind, but in the minds of all our human colleagues. Machines that can access all of the world’s retrievable information, instantaneously analyze and learn from it, and then provide us with new answers, strategies, and solutions are changing the realities of our existence and allowing us to accomplish far more than humans alone ever could.

Part 2: Twenty-First-Century Computing and AI: Power, Patterns, and Predictions

Chapter 7: Games Matter

Games fall generally into one of four categories. There are games of chance—like roulette, lotto, and dice—where luck is the only determiner of outcome. There are games of pure intellect—like crossword puzzles, brainteasers, checkers, and chess—in which we match our knowledge, skills, and strategies against a set standard or an opponent. There are games of pure physical aptitude—like individual track-and-field events—where success is determined only by the participant’s own physical attributes. And, finally, composing the largest category of all are games that combine two or more of those elements. Most team and sporting events fall into the last category, where we test our skills and strategies against a combination of elements outside our control—like the skills of an individual opponent, the aptitude of an adversarial team, the luck of the draw, and even the bounce of the ball.

Regardless of its simple rules, Go is incredibly complex. At most points in the game, the best players can’t even begin to calculate in their minds all the moves that might mathematically be the next best.

Consistent with a habit of underestimating AI developments that continues even today, most everyone, including the majority of AI researchers, believed machine learning technology was at least ten years away from succeeding at the challenge in store for DeepMind. Sedol himself said before the match, “I am confident about the match. I believe that human intuition is still too advanced for AI to have caught up.” Played at Seoul’s Four Seasons Hotel, the first game was broadcast live and watched by an estimated 80 million people worldwide, 60 million in China alone. From the very start, it was clear that things weren’t as people had expected or hoped. Beginning with the first move, AlphaGo played with an apparent creativity so different from conventional human playing styles that its moves seemed random at times, even outright wrong. But its new strategies quickly proved powerfully, almost intuitively, effective. In the words of one commentator, “No matter how complex the situation, AlphaGo plays as if it knows everything already.” Sedol was surprised and immediately confused by the unexpected level and style of AlphaGo’s play. He lost the first game in an overwhelming defeat. The world was shocked, especially the large community of Go players and fans throughout Asia. Worse, they were completely, culturally unprepared for the loss. In a single game between man and machine, centuries of methodically developed and highly respected theories of how Go ought to be played had been dismantled by the machine’s drastically new strategies. The second game went no better. Sedol lost soundly again. In the press conference afterward, he was clearly shaken by the incredible pressure upon him. The internet and media were alive with reactions, and most observers empathized with Sedol’s plight. There seemed something very unsettling and sad, frightening even, about a machine that could find new and devastatingly effective strategies on its own . . . 
especially when those strategies were far different from anything the best human players, over centuries of time, had ever even thought to consider. After another loss in Game 3, things had become entirely hopeless for Sedol. Having already mathematically lost the match, he publicly apologized for being powerless against AlphaGo.

The match ended with another win for AlphaGo, resulting in a final tally of 4–1 in favor of AI over humanity. At the end of it all, the style of AlphaGo’s play upset centuries of accepted wisdom. The huge community of Go players and enthusiasts throughout Asia collectively reflected on what had occurred . . . and ultimately found inspiration. Their game had been played for thousands of years and was symbolic of their cultural way of life. It represented the wonders of the human mind and our unique, human way of mastering the challenges of the world around us. Yet, the game had just been creatively mastered by a machine. The community realized, though, that this was an opportunity for them to see things, perhaps all things, in a new light. After the match, Sedol said it most poignantly: “What surprised me the most was that AlphaGo showed us that moves humans may have thought are creative were actually conventional.” Fan Hui, the European champion who had been the first to lose to AlphaGo, also reflected, “Maybe AlphaGo can just show humans something we never discovered. Maybe it’s beautiful.”

Today’s video games are incredibly complex, particularly those that are played online in multiplayer formats. Billion-dollar companies routinely grow from them and, since the early 2010s, professional teams from across the world have been competing in matches and tournaments organized around them. Top professional players earn multimillion-dollar contracts and the commercial values of teams now rival traditional professional sports franchises.

The bots had learned—in all of their prior games against only themselves and lesser competition—to pursue strategies just sufficient to win. By definition, only a small margin is necessary to achieve victory, and therefore only a slight advantage was all that ever mattered to the bots as they learned and developed their initial strategies. Because there was no advantage to winning by a large margin, the algorithms behind the bots had never discerned a strategic advantage to gaining a larger lead than necessary to ensure victory. And so, once they fell behind, they had difficulty devising the kind of coordinated response necessary to make up for large deficits. The bots had simply never been in that situation before. In April 2019, the OpenAI Five again challenged a top professional team of Dota 2 players, who this time were the reigning world champions. Although the match was scheduled for three games, the artificially intelligent team of virtual bots quickly proved that they had learned the value of crushing their opponent as quickly and decisively as possible. They so overwhelmed the human players in the first two games of the match that the event organizers didn’t even bother staging the third game.

According to Tencent, when TSTARBOT2 was engaged to oversee game play, it won 90 percent of the time.

Regardless of whether an algorithm is playing a board game like Go, manipulating a video game like StarCraft II, or directing a self-driving car to operate safely within the traffic around it, everything AI applications are capable of doing depends upon their ability to obtain and analyze the necessary indicators of the situations they’re tasked to solve. In essence, the quality of their performance depends upon the quality and completeness of the information available to them.

Chapter 8: A Deluge of Data

Our ability to learn requires our capacity to acquire data and to analyze it. Without data, intelligence just isn’t possible, not at any level—and not in any animal, individual, or computer.

With the entirety of the learning process dependent upon it, there are two fundamental truths about data. First, while the laws of quantum mechanics insist information about the past is never actually lost, for us humans, it most surely can be. Data, or certainly our ability to capture it, doesn’t naturally last forever. It’s fleeting. Unless someone or something is present to observe and somehow record the data emanating from an event when or as it occurs, then the data that describes the occurrence tends to disappear. Think of the tree that falls in the woods with no one there to observe it. Does it make a sound? Other than in the most esoteric of philosophic conversations, the answer is yes—for a falling tree certainly creates sound waves. But there’s a slightly different and much more pertinent question to ask. Can we ever know exactly what it sounded like? To that question, the answer is no, not unless it was recorded in some way or unless some other evidence or effect of the sound exists that we can later still detect and measure—and from which we can recreate the sound. In other words, unless an event was observed when it occurred, or unless the event leaves a physical or measurable, residual trace we can later discover, the data of the event—as far as we humans are concerned—is lost. … The second truth about data is that there is simply too much of it for us to fully perceive, collect, and manage on our own. The capacity of the human brain is enormous. But, as we’ve seen, there’s a practical limit to the amount of data that any person or any group of people can ever obtain, let alone meaningfully process or share.

In 1989, an English software engineer named Tim Berners-Lee proposed a solution. While working at the European Organization for Nuclear Research (CERN), Berners-Lee wrote a paper called “Information Management: A Proposal,” in which he suggested that connected computers could share information using a newly emerging technology called hypertext—which is text displayed on a computer or other device in hyperlinks that, with a simple click, immediately connect the user to other text, documents, and internet locations. Hypertext is so fundamentally common to us now that we don’t even think of it. But when first introduced, it changed everything about the practical functionality of the internet.

Digital data is now commonly referred to as “the new oil.” Although the phrase has become cliché, it’s essentially accurate. Even before data is structured or manipulated in any way, the variety and amounts of it that we generate—about ourselves, our families, our communities, and even our cultures—are of immense and powerful commercial and political value. But, just like oil, it’s also spread unevenly across geographies and nations. China is the most populated country on Earth, with more than 1.4 billion people. India is a close second with over 1.35 billion, and the US is in a very distant third place with a total population of fewer than 330 million people—which is less than a quarter of the population of either China or India, and only 4.4 percent of the world’s total population. … Regionally, Asia represents 63 percent of the globe’s populace, Africa 16 percent, Latin America 9 percent, and Europe 7.5 percent.

An impressive 90 percent of the US population is connected to the internet, which amounts to about 300 million people. By contrast, less than 60 percent of China’s population is currently digitally connected, but even that smaller percentage of its population nonetheless equates to more than 800 million people, almost three times the number of Americans. In fact, of all people using the internet globally, 49 percent of them are Chinese and Southeast Asian, 17 percent are European, 11 percent are African, and just over 10 percent are Latin American. Only 8 percent of all internet traffic originates from the US and, because of the rising number of users from other countries, the US percentage will only continue to decrease.

In 1996, a small group of Compaq Computer executives proposed a theoretical structure for the concept and gave it a name. In a document distributed only internally at the company’s offices outside of Houston, they plotted the future of the internet and imagined a day when software and storage capacity would be disbursed and shared throughout the web. They called it “cloud computing.” By the early 2000s, the concept of the cloud was taking shape in the real world. Contrary to what many people think, there’s nothing mysterious about it. Cloud computing is just a means of accessing all aspects of computing services over the internet—including servers, storage, databases, software, security, and even artificial intelligence. In practice, the cloud is nothing more than a network, albeit vast, that allows you to utilize other computers’ hardware and software. Most of us use cloud computing all the time without even realizing it. When we use our laptops, iPhones, or any other devices to type a Google search query, our personal device doesn’t really have much to do with finding the information we’re looking for. It’s only acting as a messenger that communicates our search terms to an array of other computers somewhere out in the world. Those computers then use their own programs and databases to determine the results and send them back to us. It all occurs at incredible speed, and, for all we know, the real work that identified the information we requested may have been performed by computers five miles away or on the other side of the planet. Regardless of their locations, cloud computing simply allows us to instantaneously draw upon the strength, software, and information of other computers—which are inevitably more powerful, equipped, and versatile than our own. In addition to Google Search, other examples of cloud services that we frequently use include web-based email and cloud backup services for our phones and other computer devices. 
Even Netflix uses cloud computing to facilitate their video streaming service, and the cloud has likewise become the primary delivery method for the majority of apps now available, particularly from companies that offer their applications free of charge or for subscription fees over the internet, rather than as stand-alone products that require full downloads. The infrastructures required for cloud services are provided primarily by a handful of major commercial cloud providers. The largest two are Microsoft Azure and Amazon Web Services (AWS)—the latter of which launched the first public cloud service in 2006 as a way of turning its unused computer power into commercial revenue. Each of those providers now generates close to $30 billion per year from their cloud services. IBM, Oracle, and Google are the other main American companies offering cloud computing, and Alibaba is the principal Chinese provider of cloud services. These companies manage networks of huge, secure data centers that are usually spread over broad geographic regions where they house the infrastructures that power and store the data, systems, and software necessary to operate their clouds.
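To make the "device as messenger" idea concrete, here is a minimal Python sketch of the division of labor Kanaan describes: the local device does no real work, it only forwards the query, and the lookup happens on the remote side. The `REMOTE_INDEX` data and function names are hypothetical illustrations, not any provider's actual API; in a real system the relay would be an HTTP request across the internet rather than a local function call.

```python
# Stand-in for the index held in a cloud provider's data centers.
# (Hypothetical data for illustration only.)
REMOTE_INDEX = {
    "cloud computing": "computing services delivered over the internet",
    "5g": "fifth generation of wireless cellular technology",
}

def remote_search(query: str) -> str:
    """Runs 'in the cloud': the heavy lifting happens here."""
    return REMOTE_INDEX.get(query.lower(), "no results")

def device_search(query: str) -> str:
    """Runs on the local device: it merely relays the request and
    returns whatever the remote computers send back."""
    return remote_search(query)  # in reality, a network request

print(device_search("Cloud Computing"))
```

The point of the sketch is the asymmetry: everything the "device" knows how to do is contained in one pass-through line, while the data and the logic live elsewhere.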

Cloud operations can be public, private, or hybrid. Public clouds allow all users to share space and time on the cloud and to access it through unrestricted means. Public is the usual cloud format for individual and personal cloud computing, but many companies also opt to use public cloud infrastructures for their internet email systems and for employees who share documents using Google Drive. Private clouds work technologically the same as their public counterparts, except they service a single company and require authorized access to use the network. They can be managed either exclusively by the user company or by one of the major cloud providers on the company’s behalf. Either way, a private cloud is usually fully integrated with the company’s existing infrastructure, network of users, and databases . . . and can span countries and continents just as a public cloud can. Oftentimes, companies have needs that lie somewhere between public and private clouds, so they opt instead for a hybrid cloud, which, just as the name implies, provides elements of each that usually reflect the different levels of security and corporate control required for various cloud-based purposes and activities.

As a result, security is actually a compelling reason to use cloud-based systems rather than avoid them.

There’s a completely reasonable argument that we do in fact pay for what we use by giving those who provide such services consumer and behavioral information about us that they can then use for their own commercial benefits—to generate revenue, for instance, through targeted advertising campaigns. As can rightly be said, “If you don’t think you’re paying for it, that just means you’re not the customer—you’re the product.”

Chapter 9: Mimicking the Mind

“If you wish to make an apple pie from scratch, you must first invent the universe.” —Carl Sagan, 1934–1996 Astronomer, Astrophysicist, Author Cosmos, TV Adaptation

In 2012, machine learning finally proved effective when a major breakthrough in computer vision capabilities occurred that changed the attitudes of most naysayers.

For a computer, data is the equivalent of experience. So, the more data that a machine learning system processes, the better it becomes.

In the earliest days, neural networks were shallow, usually consisting of only a few layers: an input layer, a middle or hidden layer, and an output layer. Data is fed into the input layer, analyzed, and weighed as it progresses through the hidden layer, and it’s then forwarded to the output layer as a measured result. Nowadays, the process works generally the same, but the frameworks of the networks often include many middle, hidden layers, sometimes thousands. These are called deep neural networks, or deep-learning systems.

Through a process called backpropagation, the results of any measurements—at any points in the process—can even be fed back to prior layers over and over again to continually adjust the weights and measurements based on the overall dynamics of the evolving analysis.
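Kanaan's description of layers and backpropagation can be sketched in a few dozen lines of plain Python. Below is a toy input-hidden-output network whose output error is repeatedly fed back through the layers to adjust the weights, in exactly the spirit described above. The layer sizes, learning rate, and the simple OR task are my own illustrative choices, not the book's.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Layer sizes: 2 inputs -> 2 hidden units -> 1 output.
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input->hidden weights
b_h = [0.0, 0.0]
w_ho = [random.uniform(-1, 1) for _ in range(2)]                      # hidden->output weights
b_o = 0.0
lr = 0.5

# Training data: logical OR, labeled with the correct outputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def forward(x):
    """One pass from the input layer, through the hidden layer, to the output."""
    h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2)) + b_h[j]) for j in range(2)]
    o = sigmoid(sum(w_ho[j] * h[j] for j in range(2)) + b_o)
    return h, o

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

loss_before = loss()
for _ in range(2000):
    for x, y in data:
        h, o = forward(x)
        # Backpropagation: push the output error back through the layers,
        # nudging each weight in the direction that reduces the error.
        d_o = (o - y) * o * (1 - o)                                   # output-layer gradient
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]   # hidden-layer gradients
        for j in range(2):
            w_ho[j] -= lr * d_o * h[j]
            b_h[j] -= lr * d_h[j]
            for i in range(2):
                w_ih[j][i] -= lr * d_h[j] * x[i]
        b_o -= lr * d_o
loss_after = loss()
print(f"squared error before: {loss_before:.3f}, after: {loss_after:.3f}")
```

Deep-learning systems are this same loop scaled up: many more hidden layers, millions of weights, and far more data, but the feed-forward-then-feed-back rhythm is identical.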

If a neural net could describe “how” or “why” it came to the conclusion it did, the description wouldn’t be much different from what we would say about our own assessments of information streaming into our own brains. We would probably say something like, “Well, I thought about it and just concluded what I did—based, I imagine, on all of my prior experiences and the information that was newly available to me.” Similarly, a machine learning system would say that it assessed the information, made its calculations, and predicted that its output was accurate, or most probable. In both cases, although we might have no idea of the precise measurements occurring deep at the evaluative levels, at the end of the process we feel confident that we’ve made sense of the data, and that we can likewise feel confident of whatever we’ve concluded.

In supervised learning systems, the training data is first labeled by humans with the correct classification or output value. … For unsupervised machine learning algorithms, the training data isn’t classified or labeled in any manner before being fed into the system. Instead, the system analyzes the data without any prior guidance or specific goal. Its task is more generally to discover, on its own, any similarities or distinct and recurring differences within the data so it can be grouped or consolidated according to those qualities. In these systems, the machine learning application is essentially being asked to find unifying or distinguishing characteristics within the data from which categories can then be determined and labeled. This kind of approach is frequently used to explore data in order to find hidden commonalities within very broad sets of complex and varied data. It’s often referred to as cluster analysis, and is routinely used for market research—where, for advertising and other strategies, it’s extremely valuable to find common characteristics, behaviors, and habits within otherwise broad bands of prospective consumers whose similarities aren’t readily apparent. Reinforcement learning is similar to unsupervised learning in that the training data isn’t labeled. But when the system draws a conclusion about the data or acts upon it in some way, the outcome is graded, or rewarded, and the algorithm accordingly learns what actions are most valued.  … Convolutional neural networks are the most commonly used network for computer vision programs or any machine learning applications that require the system to recognize images or shapes.  … Finally, generative adversarial networks, GANs, are composed of two separate, deep neural networks designed to work against one another. 
One of the neural networks, called the generator, creates new data that the other network, called the discriminator, evaluates in order to determine if it is indistinguishable from other data in the training set, in which case it is considered authentic.
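The cluster analysis idea is easy to demonstrate. A k-means loop alternates between assigning each unlabeled point to its nearest centroid and moving each centroid to the mean of its assigned points, until natural groupings emerge with no labels ever provided. The one-dimensional "customer spending" data below is a hypothetical illustration of the market-research use Kanaan mentions.

```python
def kmeans_1d(points, k=2, iters=25):
    """Minimal 1-D k-means: returns final centroids and cluster assignments."""
    lo, hi = min(points), max(points)
    # Naive initialization: spread the centroids evenly from min to max.
    centroids = [lo + i * (hi - lo) / max(k - 1, 1) for i in range(k)]
    assignments = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        assignments = [min(range(k), key=lambda j: abs(p - centroids[j]))
                       for p in points]
        # Update step: each centroid moves to the mean of its members.
        for j in range(k):
            members = [p for p, a in zip(points, assignments) if a == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, assignments

spending = [12, 15, 11, 14, 96, 102, 99]   # two hidden groups, no labels given
centroids, groups = kmeans_1d(spending)
print(centroids, groups)
```

The algorithm discovers the low-spending and high-spending groups on its own, which is precisely what makes unsupervised approaches valuable when the categories aren't known in advance.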

The following is a list of tasks at which machine learning applications have already proven well suited. Despite its length, the list is far from complete.

Aerospace and aeronautics—research and design
Agriculture—crop management and control
Authenticity verification—visual, voice, and data
Aviation—logistics, routing, and traffic control
Biometrics—assessment and prescriptives
Climate—analysis and prediction
Computer hardware design and engineering
Computer vision
Crime analysis and detection
Customer relations—proactive management
Customer service—call centers and response
Cybersecurity
Data analysis—for any use
Disaster response, recovery, and resupply
Disease detection and contact tracing
Disinformation and deepfake detection
DNA sequencing and classification
Due diligence research
Economics—analysis and prediction
Education—curricula, content, and proficiencies
Emergency detection and response
Energy—control, creation, efficiency, and optimization
Entertainment—preferences, creation, and delivery
Environmental impact and conservation analyses
Finances—personal, business
Financial services—market analysis, trading
Food—processing, preservation, distribution
Forest fire prediction, control, and containment
Fraud detection—online, identity, credit
Game creation and competitive play
Handwriting recognition
Healthcare—plan management and diagnostics
Image processing
Information retrieval and data mining
Insurance underwriting
Internet fraud detection
Language translation—verbal and written
Law enforcement
Legal—research, analysis, and writing
Logistics—supply and distribution
Manufacturing—facilities and processes
Market analysis
Marketing strategies
Media—customer preferences and content
Medical—research, diagnosis, and treatment
Military—all aspects, like any other enterprise
National security
Natural language processing
Navigation—land, air, and water
News verification—authenticity and fact checking
Online advertising
Performance analysis—materials, products, people
Personnel—assessment and optimization
Pharmaceutical—research and development
Politics—analysis, polling, and messaging
Products—design, manufacturing, and assembly
Proving mathematical theorems
Quality control—products and services
Retail—inventory and pricing
Robotics—spatial assessment and locomotion
Scientific research—all branches
Search engine optimization
Security—premises, personal, and virtual
Self-driving, self-flying, self-sailing vehicles
Shipping—logistics, sorting, and handling
Social media—networking, implementation, analysis
Software design and engineering
Space exploration
Speech recognition
Telecommunications—service and efficiencies
Transportation—all types, all facets
Vaccines—research, development, and delivery
Weather—analysis and prediction
Wildlife research, assessment, and conservation

Unlike humans, machine learning applications are not capable of applying strategies, knowledge, or skills acquired in one area to another. They’re therefore called narrow AI. … Narrow AI is very strong, efficient, and quite capable at its purposed job. It’s just incompetent at anything beyond it. … AGI, also known as strong AI, is the type of hypothetical artificial intelligence that could operate beyond a single domain of information or task orientation, and that could perform successfully at any intellectual task just as well as a human. While we’ve been hypothesizing that AGI is right around the corner for many decades, the more we’ve accomplished in the science of AI, and the more we’ve come to appreciate the deepest mysteries of the human brain, the more we’ve realized just how hard it would be for a machine to become capable of achieving anything other than specifically oriented tasks. Humans may not be able to process data as fast or as comprehensively as computers, but we can think abstractly and across purposes. We can plan and sometimes even intuit the solutions to problems, at a general level, without even analyzing the details. This is what we sometimes characterize as common sense, which is something computers simply don’t have—and that no currently known technology can give them. For the foreseeable future, AI will not be able to create or solve anything from something that isn’t there, from data it doesn’t specifically have, or for something it hasn’t specifically been designed and trained to do. AI has no intuitive or transferrable abilities—and, for now, it’s not going to acquire those abilities. Machine learning technology just doesn’t allow for it. That’s not to say, unequivocally, that general AI will never occur. But it would require a new breakthrough and an entirely different technological approach than those described in the previous pages. Whether such a breakthrough occurs at some point in the future remains, like all things, to be seen. 
But it’s not on the visible horizon.

Chapter 10: Bias in the Machine

It’s quite common for human biases to be reflected in our data and, when they are, it stands to reason that any analyses, strategies, or predictions based on that data will be biased as well. Worse, if decisions are made or actions are taken based on biased analyses, then the underlying biases will of course perpetuate, and possibly ingrain, historical or cultural inequities even deeper into our lives.

Microsoft’s engineers designed the chatbot to learn from the speech patterns and content of the human responses to its tweets. Consistent with Tay’s machine learning algorithm, it quickly recognized patterns in the onslaught of conversational input it received. Unfortunately, people are . . . well, people are who they are. Their tweets back to Tay were filled with intentionally racial and sexually biased slurs. The chatbot, unable to discern the impropriety of such speech, emulated the input and started to reply, very efficiently, in kind. In only a matter of hours, the algorithm’s singular ability to learn—only from the data it obtained and the patterns it assessed—caused it to devolve from an unbiased machine chatbot to a frighteningly prejudiced and outspoken technological monster, tweeting racial and xenophobic slurs of every kind imaginable. I won’t repeat any of them here, but a Google search of “Tay’s tweets” will pull up a compilation if you’re interested.
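The failure mode is easy to reproduce in miniature. The sketch below is my own toy illustration, not Microsoft's actual Tay algorithm: a bot that "learns" only by recording which words follow which in its training text. Its replies can only ever echo the patterns it was fed, so skewed input yields skewed output.

```python
import random

def train(corpus):
    """Build a word -> possible-next-words table from raw text."""
    model = {}
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def reply(model, seed_word, length=5, rng=random.Random(0)):
    """Generate a reply by repeatedly sampling a word that followed
    the previous one somewhere in the training text."""
    out = [seed_word]
    for _ in range(length - 1):
        options = model.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

polite = train("have a nice day have a great day")
print(reply(polite, "have"))  # the bot can only echo its training text
```

Swap the polite corpus for an onslaught of slurs and the same code produces slurs: the algorithm has no notion of propriety, only of the patterns in its data, which is the whole point of the Tay story.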

Chapter 11: From Robots to Bots

“I believe that robots should only have faces if they truly need them.” —Donald Norman, Director, The Design Lab at the University of California, San Diego

Despite the confusion, it’s important to understand the differences between machines, robots, and bots—especially because of the different ways artificial intelligence and machine learning applications can be utilized in each.  … Robots, then, are machines that have at least some minimal level of autonomous functionality enabled by some type of computer or information processor.

The kinds of robots we’ve talked about so far, regardless of how big or small, all have physical, mechanical structures of one sort or another. But another kind of robot, without any physical form or material existence at all, also exists in the digital, virtual world of software and the internet. The most commonly recognizable of these virtual bots is the chatbot, which is a computer program designed to impersonate humans and simulate human conversation, either in writing, text, or voice.

Virtual bots can also be used, however, for invasive and malicious purposes . . . as malware, computer viruses, or cyberattack agents. Without any physical presence, they’re of course more difficult to identify and defend against than a traditional physical attack would be. They’re the invisible intruders against which cybersecurity efforts are usually directed. One of the difficult realities of malicious internet bots is that they’re intentionally designed to go unnoticed and remain hidden. They can lurk within the vast array of algorithms and code that make up the internet, and they can also lurk within a single network, or even an individual computer or software system. Worse, they usually hide behind file names and functions that are similar or identical to regular, necessary files, making them extremely difficult to recognize.

Part 3: The Sovereign State of AI: Technology’s Impact on the Global Balance

Chapter 12: Moments That Awaken Nations

The launch of Sputnik proved those concerns warranted. It wasn’t the satellite in orbit Americans feared; it was the Russian rocket that put it there. If the Soviet launch vehicle could carry 184 pounds of machinery into space, it could likely carry a nuclear warhead as well. And if that warhead was directed to reenter the atmosphere over North America, then the Soviets could presumably unleash its payload anywhere in the United States they chose.

Just after the Explorer 1 launch, Eisenhower created the Advanced Research Projects Agency (ARPA) to collaborate with academic, industry, and government partners in order to formulate, expand, and fund science and technology R&D projects. The agency’s name was later changed to the Defense Advanced Research Projects Agency (DARPA). As we’ve discussed in prior chapters, DARPA went on to be the leading catalyst behind a long list of technologies now enabling the world, including computer networking, the internet, robotics, and self-driving cars. Eisenhower also proposed to Congress the creation of a civilian National Aeronautics and Space Administration (NASA) to oversee the US space program. By mid-June 1958, both houses of Congress had passed versions of a NASA bill. They were quickly consolidated and Eisenhower signed NASA into law on July 29, 1958. Within two months, the nation’s new space agency was up and running.

Because of America’s historic response to the Soviet Union’s first satellite launch, similar occasions—events that cause nations to suddenly realize they must work urgently to bridge or surpass a gap that’s arisen between them and a competitor—are now commonly called Sputnik moments.

First, though, in September 2013, Xi unveiled a new Chinese program for foreign infrastructure and economic initiatives throughout Asia, Europe, the Middle East, and Africa. Called the Belt and Road Initiative (BRI), the policy is historically unparalleled. It’s designed to build a unified market of international trade, economic reliance, and cultural exchange broadly similar in function and value to the Silk Road trade routes that connected the Far East to Europe and the Middle East from antiquity to the fifteenth century. … intended to make China the dominant global power in high-tech manufacturing by providing government subsidies to further mobilize state-controlled enterprises and encourage the acquisition of intellectual property from around the globe. Essentially, the plan is a cohesive effort to move China away from being the world’s foremost provider of cheap labor and manufactured goods to become the world’s foremost producer of new, high-value products in the pharmaceutical, automotive, aerospace, semiconductor, telecommunications, and robotics fields.

In July 2017, only two months after Ke Jie’s loss to AlphaGo, the State Council of China released a landmark new plan for government-sponsored, statewide development of artificial intelligence. Titled the “Next (New) Generation Artificial Intelligence Development Plan,” China’s massive three-part program laid out the steps necessary to accomplish specific benchmarks by maximizing the country’s productive forces, national economy, and national competitiveness. The express purpose of the plan is to create an innovative new type of nation, led by science and global technological power, to achieve what Xi calls “the great rejuvenation of the Chinese nation.” First, by 2020, the plan spells out China’s intent to equal the most globally advanced levels of AI technology and application capabilities in the US or anywhere else in the world. … Second, by 2025, the Chinese intend to capture a verifiable lead over the US and all other countries in the development and production of all core AI technologies, while at the same time making them the structural strength of China’s ongoing industrial and economic transformation. The AI industry will enter into the global high-end value chain. This new-generation will be widely used in intelligent manufacturing, intelligent medicine, intelligent city, intelligent agriculture, national defense construction, and other fields . . . Last, by 2030, China intends to lead the world in all aspects of AI. [B]y 2030, China’s AI theories, technologies, and applications should achieve world-leading levels, making China the world’s primary AI innovation center, achieving visible results in intelligent economy and intelligent society applications, and laying an important foundation for becoming a leading innovation-style nation and an economic power. 
As we’ll see in the next chapter, the Chinese government has already taken significant steps to accomplish its national AI objectives, and it has done so in ways most Westerners don’t yet realize and will find hard to fathom.

Chapter 13: China’s Expanding Sphere

… it is important to acknowledge a political reality. Through processes of free elections, by design—if not always perfect execution—democratic systems of government must eventually respond and account to the majority will of their people, or at least to the will of the people’s elected representatives. The same isn’t true in nondemocratic or authoritarian countries where citizens have no real voice or vote. That’s a definitional reality. Again, democracy doesn’t always play out perfectly, but it at least allows free speech, open conversation, informed debate, and peaceful protest. Most importantly, democracy allows for multiple parties and political opposition. Authoritarian governments do not, at least not to any meaningful or ultimately effective degree. Also, throughout all of the following pages, it is not my purpose to denigrate the people or population of any nation, nor to suggest that the morals, ethics, or integrities of any population are better or worse than another. Populations should not and cannot be stereotyped, nor should anyone speak to the mind-set of others or generalize about a culture they’ve not experienced themselves. Governments and administrations, however, along with their policies and practices, can be characterized and ought to be criticized when the circumstances warrant.

Throughout Mao’s rule, the controlling Communist Party relied heavily on mass surveillance to ensure the political and social conformity of its people. Before the development of technology, social control was accomplished primarily through harsh government retribution against anyone suspected of anti-party attitudes or ideas. Throughout Mao’s rule, perceived violations of Communist Party doctrine were handled swiftly and severely by the central and local governments. Police and military repressions and mass executions of the Chinese people were commonplace. Tens of millions were killed, and tens of millions more were sent to forced labor camps where an uncountable number of additional Chinese citizens perished under brutal conditions. Even outside of the camps, forced suicides and widespread famine were commonplace.

With the advance of twenty-first-century technology, the watchful eye of the Communist Party’s authority has become even more penetrating. Digital methods of censorship, surveillance, and social control have become unavoidable, integral parts of Chinese society. Those methods provide the Communist Party, which essentially is the state, with powerful eyes, ears, and influence over most aspects of its citizens’ lives. Again, and as stated, I am not criticizing the Chinese people themselves, nor suggesting that China is entirely alone in surveilling its population. The extent and unchecked degree to which China is doing so, however, is far beyond any Western notions of national security or local crime control rationales for doing so.  …  Tracking physical activities through cameras, however, is only the beginning. China’s influence and control also invasively extend to people’s use of the internet and to their personal digital devices. China’s internet and digital market is controlled primarily by three corporate technology giants—Baidu, Alibaba, and Tencent (collectively referred to as “BAT”).  … As of 2019, Tencent, Alibaba, and Baidu ranked as the third, fifth, and eighth largest internet companies, respectively, in the world. Combined, their power and range are colossal—particularly with respect to AI. It is currently estimated that more than half of all Chinese companies that are in any way involved in AI research, development, or manufacturing have ownership or funding ties that relate directly back to one of the three.  Regardless of the formal structure of their ownership, Chinese companies are subject to a mandated and direct influence from the Communist Party. Its largest enterprises, including the large tech giants Baidu, Alibaba, and Tencent, are required to have Communist Party committees within their organizations.

Westerners often mistakenly assume that the content they can access on the internet is essentially the same as what’s available to residents of other countries. But that’s entirely untrue, and China’s control of its internet is one of the most glaring examples.  … Beyond censoring and monitoring the internet, China also surveils its masses by collecting data from their personal devices—most notably their mobile devices and the apps they rely upon to conduct their daily affairs. Since 2015, China has been developing a “social credit system” powered by AI that is expected to be a unified, fully operational umbrella covering all 1.4 billion of its people by 2022. The system is meant to collect all forms of digital data in order to calculate the “social trustworthiness” of individual citizens, and then reward or punish them by either allowing or restricting various opportunities and entitlements based on their scores. The formal and publicly stated aim of the system is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.” An additional party slogan for the system is “Once discredited, limited everywhere.” The analogies to George Orwell’s novel 1984, and its themes of government overreach and Big Brother’s regimentation of social behavior are hard to deny. … As a result, Chinese citizens can find themselves blacklisted or otherwise restricted from renting cars, buying train or airplane tickets, obtaining favorable loan rates, acquiring insurance, purchasing real estate or otherwise obtaining affordable housing, making financial investments, and even attending preferred schools or qualifying for certain jobs and career opportunities.

While some contend that China’s use of digital and AI technologies shouldn’t be criticized—and that Xi’s government is entitled to their applications as somehow culturally appropriate and politically acceptable—widespread reports of what’s transpiring in China’s largest western region argue otherwise. Well over 90 percent of mainland China’s population is composed of the Han Chinese. All Han share a deeply rooted, common genetic ancestry tracing back to ancient civilizations that originally inhabited a single region along the Yellow River in northern China. Throughout most of China’s recorded history, the Han Chinese have been the culturally dominant majority. … Most Uighurs are Muslim. Due to their cultural differences from the Chinese majority, frictions with the Communist Party and central government have existed for many decades. … In recent years, there have been worldwide accusations and consensus that China is guilty of extreme human rights abuses against the Uighurs. It’s now commonly reported that China is detaining between one and two million Uighurs, or 10 to 20 percent of all Uighur people, in more than a hundred detention camps in Xinjiang. Most of those detained aren’t accused of any crimes, and very few records or information are even publicly available.

Also, consistent with the Communist Party and central government’s approach elsewhere in the country, officials are using a scoring system to determine when, or if, those detained will be released. One document specifically instructs officials to tell inquiring family members that their own behavior could compromise their detained relatives’ scores. Specifically, authorities are advised to say: “Family members, including you, must abide by the state’s laws and rules and not believe or spread rumors. Only then can you add points for your family member, and after a period of assessment they can leave the school if they meet course completion standards.”

While CEIEC is an admitted, wholly state-owned enterprise, Huawei’s specific ownership is more of a mystery. Officially, the company claims to be 99 percent owned by its employees, their interests purportedly flowing indirectly through a labor union. To date, though, outside experts haven’t been able to clarify the true structure. What is clear, however, is that Huawei is linked to China’s party-state in ways even more direct than those of most Chinese enterprises, and the government’s intelligence agencies undoubtedly have leverage and influence over the company’s decisions, activities, and data—a disconcerting reality given that Huawei has exported telecommunication infrastructures, equipment, and related consumer electronics to more than 150 countries around the world. The next great change in digital technology and capability will come in the form of 5G technologies, in which Huawei is an industry leader. The term 5G stands for the “fifth generation” of wireless cellular technology, which won’t just be an improvement over 3G and 4G capabilities; it will be a transformation. Engineered to operate using millimeter radio waves as signals, 5G networks will transform the internet with a broadband capacity perhaps 100 times the capacity of current 4G networks, and with network response times that will be 10 to 100 times faster than 4G. While a clean, uninterrupted connection to a 4G network produces response times of about 45 milliseconds, a 5G network will produce response times possibly less than 1 millisecond, which is 300 to 400 times faster than the blink of an eye.

At the time of this writing, 5G is available in very few locations around the globe. But the transition from 4G is well underway, and, consistent with the Belt and Road Initiative, Huawei is aggressively marketing its ability to provide 5G core infrastructures and consumer devices to countries and regions throughout the world. … In the US, the Department of Defense has banned sales of Huawei products on military bases, the Federal Communications Commission (FCC) has proposed rules that would formally prevent any American telecom company from using Huawei equipment, and various other legislation is being proposed to protect the country’s infrastructure from the risks Huawei might cause.

Militarily, China doesn’t approach the size, power, or sophistication of the US and its allies, which lead by a wide margin on the ground, in the air, and at sea. But China views AI technology as its opportunity to leapfrog certain phases of weapons development to bridge the gap between it and the US. In October 2018, the deputy director of the General Office of China’s Central Military Commission confirmed the Chinese military’s vision for AI when he characterized China’s overall goal as an effort to “narrow the gap between the Chinese military and global advanced powers” by taking advantage of the “ongoing military revolution . . . centered on information technology and intelligent technology.”

Chapter 14:  Russian Disruption

During his years in office—a period that has now spanned the American presidential administrations of Clinton, Bush, Obama, and Trump—Putin has greatly reduced the country’s poverty percentage, lowered its personal and corporate tax rates, increased wages, and enhanced the country’s consumption and general standard of living. All of Putin’s reforms have resulted in a middle class that’s grown by tremendous numbers since he first took office. As a former KGB agent and onetime head of the KGB’s successor agency, the Russian Federal Security Service (FSB), Putin entered office with deep connections and alliances to the nation’s controlling intelligence and military agencies. Although Russia is constitutionally structured as a multiparty democracy, under Putin’s leadership it’s in fact something far different—better described as a bureaucratic autocracy. Any political opposition that threatens Putin’s standing is routinely suppressed, as are any unfavorable domestic press or media reports.

Although he downplays his financial holdings at every opportunity, many experts believe Putin has become one of the world’s wealthiest individuals since taking power—with a fortune spread across a wide range of secretly held Russian oil, natural gas, real estate, and other corporate interests.

As a consequence of that background and the ongoing international trade and related difficulties confronting Russia, the country is dramatically far behind both the US and China in AI investments, research facilities, expert talent, and development capabilities. Simply put, Russia doesn’t have the available funding—from either domestic or foreign sources—and is without the technological infrastructure and expertise required to match the level of AI efforts and accomplishments taking place in other parts of the world.

Realizing their economic obstacles, Putin’s defense agencies and military designers are aggressively putting machine learning technologies to their most immediately accomplishable and impactful uses—electronic warfare (EW) and robotic weapons. … The second AI track the Kremlin is focused on is domestic and international propaganda, surveillance, and disinformation. Since Putin first became president, mandates to control and manipulate information have been key components of his policies. Now, some 20 years later, Putin’s administration is still intent on accomplishing its own form of domestic digital authoritarianism. The government’s control of traditional and digital media sources and its repression of independent media outlets have increased under Putin’s reign. There are more reporters in Russian prisons now than at any point since the fall of Soviet Russia. Digital surveillance and social control strategies have been enhanced. Russian social and political speech is monitored carefully, especially for those considered activists or political adversaries, and Putin is now looking to create an independent, sovereign internet that will be fully controlled by the Kremlin and shield all of Russia from vast amounts of outside information, akin to the Great Firewall of China. Russia’s System of Operative Search Measures (SORM) was first created in 1995 and requires all Russian telecommunications and internet providers to install hardware provided by the FSB that gives it the ability to monitor Russian phone calls, emails, texts, and web browsing activities. Five years later, during Putin’s first week in office, he expanded the SORM’s reach by allowing a number of additional Russian security agencies apart from the FSB to gather SORM information from Russian citizens and foreign visitors.

It is a standard doctrine of Russian military strategy to conduct information warfare (“informatsionaya voyna”) to interfere in the politics and operations of its foreign adversaries through cyber and other operations. The 2010 “Military Doctrine of the Russian Federation” specifically says such measures are taken, “to achieve political objectives without the utilization of military force.” This is nothing new. Information warfare is a long-held Russian military concept that goes back to the earliest days of the Cold War. As General Valery Gerasimov, the chief of the general staff of the armed forces of Russia, publicly acknowledged as recently as March 2019, the Russian government and military consider it a simple reality of international power and politics that they should, and do, conduct information and propaganda campaigns, including political interference, as an integral part of their regular national defense strategies. Even the most recently published “Military Doctrine of the Russian Federation” (2015) expressly states that one feature of modern military conflict is “exerting simultaneous pressure on the enemy throughout the enemy’s territory in the global information space.” Further, and perhaps most pertinent, Russian military doctrine does not differentiate between times of war and times of peace with respect to strategic noncombat measures waged against adversaries . . . and, by any objective and informed account, Russia considers any country of significant global standing that is not its formal ally to be its adversary. Russian information warfare tactics don’t have the absolute goal of convincing foreign populations that disinformation and lies are necessarily the truth. Instead, Putin’s Russia considers it strategically sufficient just to plant seeds of confusion, doubt, and disruption in the populations of foreign adversaries. The goal, first and foremost, is to internally polarize populations.
Leading political theorists have long recognized disinformation as a basic tenet of governments with totalitarian orientations. [AA NOTE:  The Russian term for such military deception strategies is maskirovka.]

“A people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can do what you please.” — Twentieth-century German American philosopher Hannah Arendt

With respect to the latter, in 2019, an entirely new and dangerous category of AI disinformation technology began to emerge called deepfakes. Using machine learning techniques, a deepfake is a video and/or audio clip that shows individuals appearing to do or say things that, in actuality, were never done or said—essentially creating events that never occurred. The danger of such technology can’t be overstated, and its potential to sow discord by adversely affecting public impression, opinion, and politics is significant.

To what extent the Russian efforts materially influenced the actual outcome of the 2016 US election is, for purposes of this conversation, irrelevant. What should alarm every American citizen, in fact every world citizen, is that intelligence agencies across the globe had little doubt that Russia would continue those interference strategies in the future.

Chapter 15: Democratic Ideals in an AI World

Just over half the world now has systems of government that can fairly be characterized as democratic, but the proliferation of democracy, even with the US as a principal model, has only occurred in the last 75 years. In 1945, at the close of World War II, there were only 12 democratic governments. Now, approximately 100 of the 195 states recognized by the United Nations are democracies in structure and overall ideology.

The DoD will itself be a significant developer and user of AI technologies in years going forward. As the mandated national defender of American rights and dignities, it will also be the country’s primary protector in the face of foreign AI. In response to the global technological changes so rapidly occurring, along with the world’s apparent return to an era of aggressive, strategic competition, the DoD is now taking meaningful steps to ensure the ethical design and use of AI, both domestically and abroad.

Khashoggi’s death and its aftermath reminded many in the Western world of Saudi ruthlessness, and that much of what the Saudi government appears to stand for is antithetical to democratic ideals of human dignities and freedoms—particularly free speech and free press.

On a separate, but directly related note, a Saudi mobile app available from both Apple and Google merits discussion. “Absher” (roughly translated as “Yes, Sir” or “Yes, done”) is a product of the Saudi Interior Ministry that gives Saudi Arabian men the ability to exercise their guardian rights over women by tracking their locations and blocking their ability to travel, conduct financial transactions, and even obtain certain medical procedures. To a country’s leadership that considers it culturally appropriate and legally acceptable to discriminate and control the rights of women, this type of app is a perfectly acceptable and socially efficient tool of AI. As is clear from earlier pages, it should come as no surprise that countries and cultures—in fact, any country or culture—will use AI in ways they deem morally and legally acceptable. While the Absher app is but one example, it highlights an imperative question for private enterprise that develops AI under the freedoms provided by democratic principles. That question is whether companies should participate in or enable oppressive uses of their commercial technologies by countries with vastly contrary cultural and moral codes. These types of issues deserve transparent debate, and a cooperative and consistent approach from democratic governments and their private institutions alike.

Chapter 16: A Computer’s Conclusion

Our job is now to convince the public in particular that using AI to achieve these aims is a necessary and desirable part of our society, but we cannot afford to do so unless we know how it will best be used and when. But in the end, the future demands we make moral decisions as we begin to build a world that is truly safe and sustainable, one where humans and AI can truly coexist together. — GPT-2 (1558 Model) An OpenAI Language-Generating Neural Network

GPT-2 is a large-scale, unsupervised machine learning application created by the American nonprofit organization OpenAI.  An acronym for Generative Pre-Training, Version 2, the GPT-2 application was trained on a data set of eight million web pages and designed as an intricately deep neural network capable of weighing 1.5 billion parameters. Its narrow task is to generate humanlike, written language responses to submissions of text, or “prompts,” in the form of either a proposed continuation of the prompt, or a response if the prompt was submitted in the form of a question. In essence, GPT-2’s function is to create additional words that are: (1) consistent with the patterns and content of new text submissions, and (2) based on patterns the program has discerned from its immense training set of internet information.
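The core idea behind GPT-2’s continuations — predict a plausible next word from patterns observed in training text, then repeat — can be sketched in miniature. The toy below is my own illustration, not GPT-2 itself (which uses a deep transformer network with 1.5 billion parameters, not lookup tables): it “trains” a simple bigram model on a tiny made-up corpus, then extends a prompt one word at a time.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def continue_prompt(model, prompt, n_words=5, seed=0):
    """Extend the prompt by repeatedly sampling a word that followed
    the previous word somewhere in the training corpus."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no observed continuation; stop generating
        out.append(rng.choice(candidates))
    return " ".join(out)

# A deliberately tiny, invented corpus for demonstration.
corpus = ("we must use AI responsibly . AI must serve democratic ideals . "
          "democratic ideals must guide AI policy")
model = train_bigram_model(corpus)
print(continue_prompt(model, "AI must"))
```

GPT-2 does the same thing at an incomparably larger scale: instead of remembering which word followed which in a toy sentence, it weighs 1.5 billion learned parameters over eight million web pages, which is why its continuations read as coherent prose rather than word salad.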

Before closing this book, I thought it would be an appropriate experiment to informally test GPT-2 myself. On the very first page of this book, in the short Author’s Note just before the Prologue, I wrote: Our focus now must be to openly address the current realities of AI to ensure, as well as we can, that it is implemented only in ways consistent with fundamental human dignities . . . and only for purposes consistent with democratic ideals, liberties, and laws. At this point in your reading, I trust you know how convinced I am of those words. They seemed a perfect choice to test GPT-2, an appropriate sample to see what kind of “continuation” the program would produce. When I typed and submitted them into the program, its generator displayed a response almost immediately. The words the algorithm created, on its own and in less time than it took to lift my fingers from the keyboard, are shown as the epigraph at the start of this chapter. They’re so cogent to the entirety of this book that they bear repeating. So, here they are. This is from an algorithm familiar with eight million web pages, but prompted only by my 43 words: “Our job is now to convince the public in particular that using AI to achieve these aims is a necessary and desirable part of our society, but we cannot afford to do so unless we know how it will best be used and when. But in the end, the future demands we make moral decisions as we begin to build a world that is truly safe and sustainable, one where humans and AI can truly coexist together.” Impressive? I think so. And I couldn’t have said it better myself. In fact, in an instant and in those two sentences, an artificially intelligent program captured the essence of what I’ve endeavored to make clear through the previous 15 chapters.

What I do know, however, is that we’re now at an inflection point in the history of the human race. What we do with respect to AI will impact our present, our future, and perhaps our eventual destiny. The strengths of free nations and democratically represented people are, and will always be, their ability to work cooperatively together in order to preserve their individual liberties and ways of life. This is no time to distance ourselves, to be passive or distracted.

If this book contributes in any way to a better understanding of AI and an enhanced appreciation of its significance, then I’ll have accomplished my mission. It’s time for another awakening, a public awareness, and a conscientious consensus. Those who one day look back upon these times should not be left wishing our eyes had been more open.


Onward then to my publisher, Glenn Yeffeth of BenBella Books, who was also an extraordinary connection. He was enthusiastic from the start, but most importantly allowed me the freedom to write the book my way. Whereas others might have constrained or altered my structure, thinking it too broad in scope, Glenn took the risk of letting me follow my original vision—trusting that I could sweep from A to Z in some reasoned and lucid manner. Authors often complain, at least to one another, that their publisher took control of their style, their structure, or even their title—yes, publishers have the final say on many more aspects of a finished book than readers have reason to know. But that was never the case with Glenn and his team. Their contributions and insight were immensely constructive and creative, but also, always, cooperative and deferential.


Amazon Book Description

Late in 2017, the conversation about the global impact of artificial intelligence (AI) changed forever. China delivered a bold message when it released a national plan to dominate all aspects of AI across the planet. Within weeks, Russia’s Vladimir Putin raised the stakes by declaring AI the future for all humankind, and proclaiming that, “Whoever becomes the leader in this sphere will become the ruler of the world.”  The race was on. Consistent with their unique national agendas, countries throughout the world began plotting their paths and hurrying their pace. Now, not long after, the race has become a sprint. Despite everything at risk, for most of us AI remains shrouded by a cloud of mystery and misunderstanding. Hidden behind complex technical terms and confused even further by extravagant depictions in science fiction, the realities of AI and its profound implications are hard to decipher, but no less crucial to understand. In T-Minus AI: Humanity’s Countdown to Artificial Intelligence and the New Pursuit of Global Power, author Michael Kanaan explains the realities of AI from a human-oriented perspective that’s easy to comprehend. A recognized national expert and the U.S. Air Force’s first Chairperson for Artificial Intelligence, Kanaan weaves a compelling new view on our history of innovation and technology to masterfully explain what each of us should know about modern computing, AI, and machine learning. Kanaan also illuminates the global implications of AI by highlighting the cultural and national vulnerabilities already exposed and the pressing issues now squarely on the table. AI has already become China’s all-purpose tool to impose authoritarian influence around the world. Russia, playing catch up, is weaponizing AI through its military systems and now infamous, aggressive efforts to disrupt democracy by whatever disinformation means possible. 
America and like-minded nations are awakening to these new realities, and the paths they’re electing to follow echo loudly, in most cases, the political foundations and moral imperatives upon which they were formed. As we march toward a future far different than ever imagined, T-Minus AI is fascinating and critically well-timed. It leaves the fiction behind, paints the alarming implications of AI for what they actually are, and calls for unified action to protect fundamental human rights and dignities for all.

About the Author

Since February 2020, Michael Kanaan has been the Director of Operations at the US Air Force/MIT Artificial Intelligence program in Boston.  Before that, he chaired an Air Force cross-functional team charged with integrating AI initiatives.  He directed all AI and machine learning activities on behalf of the Deputy Director of Air Force Intelligence, who oversees a staff of 30,000 with an annual budget of $55 billion. Following his graduation from the U.S. Air Force Academy in 2011, Kanaan was the Officer in Charge of a $75 million hyperspectral mission at the National Air and Space Intelligence Center, and then the Assistant Director of Operations for the 417-member Geospatial Intelligence Squadron. He was also the National Intelligence Community Information Technology Enterprise Lead for an 1,800-member enterprise responsible for data discovery, intelligence analysis, and targeting development against ISIS. In addition to receiving several awards and distinctions as an Air Force officer, Kanaan was named to Forbes’ 2019 “30 Under 30” list.  He also teaches a machine learning course at the MIT Sloan School of Management and in May 2020, joined the AI Education Project as an Advisory Board Member.

Google Research:

Michael Kanaan’s Twitter feed:  Shares lots of interesting information and insights.  Crazy about books.

On August 5, 2020, Kanaan did a 48-minute interview with Dr. Rollan Roberts, who is an expert on cybersecurity and author of multiple books. He explains that the purpose of his book is to educate and inspire debate about the difficult decisions that need to be made regarding AI – particularly regarding regulation, enforcement, and education. He questions whether governments are doing enough to bring AI into the humanities.  What are we doing for our schools?  AI is not all tech.  There are biases in the machines.  It’s an integral part of the human experience.

On August 28, 2020, Kanaan did a 48-minute Book Talk interview with the Center for Strategic & International Studies (CSIS), where he explains the structure of the book.  He believes in starting with analogies – to weave the AI narrative into the human experience – for context and to demystify AI.  Part 1 provides this context, Part 2 defines AI, and Part 3 addresses the biases, dangers, and implications.

Putin says the nation that leads in AI will be the ruler of the world – The Verge Article – September 4, 2017.
