<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Science &amp; Technology Archives &#8226; VII Capital Management</title>
	<atom:link href="https://www.vii-llc.com/category/book-review/science-and-technology/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.vii-llc.com/category/book-review/science-and-technology/</link>
	<description>VII Capital Management</description>
	<lastBuildDate>Thu, 01 Dec 2022 11:10:09 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.vii-llc.com/wp-content/uploads/2022/11/cropped-6f94f8c41f42372788c92971b752317f-32x32.jpg</url>
	<title>Science &amp; Technology Archives &#8226; VII Capital Management</title>
	<link>https://www.vii-llc.com/category/book-review/science-and-technology/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>T-Minus AI: Humanity’s Countdown to Artificial Intelligence and the New Pursuit of Global Power</title>
		<link>https://www.vii-llc.com/2020/11/25/t-minus-ai-humanitys-countdown-to-artificial-intelligence-and-the-new-pursuit-of-global-power/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=t-minus-ai-humanitys-countdown-to-artificial-intelligence-and-the-new-pursuit-of-global-power</link>
		
		<dc:creator><![CDATA[Adriano Almeida]]></dc:creator>
		<pubDate>Wed, 25 Nov 2020 18:45:54 +0000</pubDate>
				<category><![CDATA[Book Review]]></category>
		<category><![CDATA[Science & Technology]]></category>
		<guid isPermaLink="false">https://www.vii-llc.com/?p=1568</guid>

					<description><![CDATA[<p>By Michael Kanaan, Aug/2020(270p.) &#160; This was a great book because it educated on a complex topic without being dry or unnecessarily long. The author succeeds in weaving the complex...</p>
<p>The post <a href="https://www.vii-llc.com/2020/11/25/t-minus-ai-humanitys-countdown-to-artificial-intelligence-and-the-new-pursuit-of-global-power/">T-Minus AI: Humanity’s Countdown to Artificial Intelligence and the New Pursuit of Global Power</a> appeared first on <a href="https://www.vii-llc.com">VII Capital Management</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h5 style="text-align: left;"><span style="text-decoration: underline;"><em>By Michael Kanaan, Aug/2020(270p.)</em></span></h5>
<p>&nbsp;</p>
<p style="text-align: justify;">This was a great book because it educated the reader on a complex topic without being dry or unnecessarily long. The author succeeds in weaving the complex building blocks of AI into the history of human intelligence – from the invention of language in Chapter 2, to automatically generated texts in Chapter 16.  Amazon’s description of the book is pretty solid, so I included it below, after my <em>Highlighted Passages –</em> where I also added a number of interesting links to videos and articles.  As the US Air Force Captain explains in this <a href="https://www.youtube.com/watch?v=tNGTq_NajJc" target="_blank" rel="noopener noreferrer">interview</a>, his book is organized in three parts:  Part 1 provides context, Part 2 defines AI, and Part 3 addresses its implications.</p>
<p><img fetchpriority="high" decoding="async" class="aligncenter size-full wp-image-1569" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-1-of.jpg" alt="" width="468" height="346" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-1-of.jpg 468w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-1-of-300x222.jpg 300w" sizes="(max-width: 468px) 100vw, 468px" /></p>
<p style="text-align: justify;">Captain Kanaan has a talent for explaining complex concepts in brief and basic terms. Among the many explanations, analogies, and insights in the book, his description of <em>cloud computing</em> in Chapter 8 was precious.  “<em>Cloud computing is just a means of accessing all aspects of computing services over the internet,”</em> he explains.  “<em>Regardless of their locations, cloud computing simply allows us to instantaneously draw upon the strength, software, and information of other computers —which are inevitably more powerful, equipped, and versatile than our own</em>.”  But it was his review of <em>neural networks</em> in Chapter 9 that I would have to call my favorite, not only because it brought back <a href="https://commons.erau.edu/cgi/viewcontent.cgi?referer=http://scholar.google.com/&amp;httpsredir=1&amp;article=1005&amp;context=db-theses" target="_blank" rel="noopener noreferrer">good memories</a>, but because it showed how far the field has come since it first gained my attention in the late 1980s.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-1570" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-2-of.jpg" alt="" width="409" height="306" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-2-of.jpg 409w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-2-of-300x224.jpg 300w" sizes="(max-width: 409px) 100vw, 409px" /></p>
<p style="text-align: justify;">In Part 3 of the book, Captain Kanaan worries out loud about the geopolitical implications of AI.  He contends that the Chinese, Russians, and Saudis do not follow the doctrines of democracy and are therefore not to be trusted with the power that AI affords.  With typical military paranoia, he exposes several threats and wrongdoings committed by these sovereign nations, including invasions of privacy, genocide, misinformation, and torture.  While less critical and alarmist, Kai-Fu Lee’s excellent book, <a href="https://www.thriftbooks.com/w/ai-superpowers-china-silicon-valley-and-the-new-world-order_kai-fu-lee/18643789/item/36767680/?mkwid=%7cdc&amp;pcrid=448939279362&amp;pkw=&amp;pmt=&amp;slid=&amp;plc=&amp;pgrid=104167485813&amp;ptaid=pla-894501118442&amp;gclid=CjwKCAiAnvj9BRA4EiwAuUMDf61z8s0XZzIyoj8WGv8M0vP_7fDPoYLBcGOzSbhkMuhF9VzzVfZt_BoCglgQAvD_BwE#idiq=36767680&amp;edition=19843150" target="_blank" rel="noopener noreferrer">AI Superpowers: China, Silicon Valley, and the New World Order</a>, reaches a similar conclusion from the Chinese perspective: that a global technology dominance race is underway.  The bottom line is that the geopolitical agenda speaks loudly these days, making antitrust and privacy regulation seem like sideshows.  In fact, Captain Kanaan never even mentions antitrust or the DOJ in his book – although he does point to the <em>winner-take-all</em> nature of the technology.</p>
<p style="text-align: justify;">Because I quickly became a fan of Captain Michael Kanaan and <a href="https://twitter.com/MichaelJKanaan?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor" target="_blank" rel="noopener noreferrer">his Twitter feed</a>, I excused him for mildly succumbing to some of the same biases he warns us about in his book. For instance, it caught my eye that in Chapter 8, he claimed “<em>IBM, Oracle, and Google are the <strong>other</strong> main American companies offering cloud computing.” </em>Why would he have downgraded Google to “other” status and listed it behind IBM and Oracle?  Any Google (or Bing) search will show Google as number 3.  As of Q2-2020, Amazon <a href="https://www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers/" target="_blank" rel="noopener noreferrer">is said</a> to have about one third of the cloud infrastructure service market, followed by Microsoft at 18% and Google at 9%.  While IBM/Red Hat do indeed participate in this market and show up in some of the industry charts, they are not that significant and are not growing as fast (i.e. losing share).  Former Google CEO Eric Schmidt is quoted praising Captain Kanaan on the cover of the book (he also shows up on <a href="https://www.linkedin.com/in/michaeljkanaan?challengeId=AQHm7nzYveJqgwAAAXX_kZ9q6med2-zhJkF5YCybAwbwrM50PElR3rcF9gjFCJiVJOUKoBepwefZQZwAvfNmYheHonRqzBLipQ&amp;submissionId=60175db5-ebc2-4a16-0cfe-f99aa0de4844" target="_blank" rel="noopener noreferrer">his LinkedIn page</a>) – so I couldn’t help but wonder if Eric noticed the slip.  It certainly is not because of the Alphabetical order (pun intended).  
But judging by these <a href="https://www.fedscoop.com/champions-of-digital-transformation/capt-michael-kanaan" target="_blank" rel="noopener noreferrer">promotional videos</a> featuring the young Captain Kanaan touting Red Hat technology – one does not need a neural network to figure out why IBM even made it to that list.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-1571" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-3-of.jpg" alt="" width="496" height="289" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-3-of.jpg 496w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-3-of-300x175.jpg 300w" sizes="(max-width: 496px) 100vw, 496px" /></p>
<p style="text-align: justify;">The concluding words of the book were fascinating if only because they were self-generated by a deep neural network called GPT-2.  As Captain Kanaan explains:  “<em>Before closing this book, I thought it would be an appropriate experiment to informally test GPT-2 myself. On the very first page of this book, in the short Author’s Note just before the Prologue, I wrote: Our focus now must be to openly address the current realities of AI to ensure, as well as we can, that it is implemented only in ways consistent with fundamental human dignities . . . and only for purposes consistent with democratic ideals, liberties, and laws. At this point in your reading, I trust you know how convinced I am of those words. They seemed a perfect choice to test GPT-2, an appropriate sample to see what kind of “continuation” the program would produce. When I typed and submitted them into the program, its generator displayed a response almost immediately. The words the algorithm created, on its own and in less time than it took to lift my fingers from the keyboard, are shown as the epigraph at the start of this chapter. They’re so cogent to the entirety of this book that they bear repeating. So, here they are. This is from an algorithm familiar with eight million web pages, but prompted only by my 43 words: <strong>“Our job is now to convince the public in particular that using AI to achieve these aims is a necessary and desirable part of our society, but we cannot afford to do so unless we know how it will best be used and when. But in the end, the future demands we make moral decisions as we begin to build a world that is truly safe and sustainable, one where humans and AI can truly coexist together.” </strong></em></p>
<p style="text-align: justify;">So in closing, this was an excellent book on a very relevant and timely topic on which easy-to-read books are scarce.  As such, I would highly recommend it.</p>
<p>Best regards,</p>
<p>Adriano</p>
<hr />
<h5><em><span style="text-decoration: underline;">Highlighted Passages</span>:</em></h5>
<p style="text-align: justify;"><strong><em>Cover</em></strong></p>
<p style="text-align: justify;"><em>“Mike Kanaan is an influential new voice in the field of AI, and his thoughts paint an insightful perspective. A thought-provoking read.”—ERIC SCHMIDT, former CEO and executive chairman of Google</em></p>
<p style="text-align: justify;"><em>“This is one of the best books I’ve read on AI.”—ADAM GRANT, New York Times bestselling author of Originals and Give and Take</em></p>
<p style="text-align: justify;"><strong><em>Author’s Note</em></strong></p>
<p style="text-align: justify;"><em>The countdown to artificial intelligence (AI) is over.</em></p>
<p style="text-align: justify;"><strong><em>Prologue: Out of the Dark</em></strong></p>
<p style="text-align: justify;"><em>As the <u>Air Force lead officer for artificial intelligence and machine learning</u>, I’d been reporting directly to Jamieson for over two years. The briefing that morning was to discuss the commitments we’d just received from two of Silicon Valley’s most prominent AI companies. After months of collective effort, the new agreements were significant steps forward. They were also crucial proof that the long history of cooperation between the American public and private sectors could reasonably be expected to continue.</em></p>
<p style="text-align: justify;"><em>“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. <u>Whoever becomes the leader in this sphere will become the ruler of the world</u>.” – Vladimir Putin – September 2017</em></p>
<p style="text-align: justify;"><em>In the months that followed [late 2017], Putin’s now infamous few sentences proved impactful across continents, industries, and governments. His comments provided the additional, final push that accelerated the planet’s sense of seriousness about AI and propelled most everyone into a higher gear forward.</em></p>
<p style="text-align: justify;"><em>Only a month earlier, China had released a massive three-part strategy aimed at achieving very clear benchmarks of advances in AI. First, <u>by 2020, China planned to match the highest levels of AI technology</u> and application capabilities in the US or anywhere else in the world. Second, <u>by 2025, they intend to capture a verifiable lead</u> over all countries in the development and production of core AI technologies, including voice- and visual-recognition systems. Last, <u>by 2030, China intends to dominantly lead all countries in all aspects and related fields of AI.3 To be the sole leader, the world’s unquestioned and controlling epicenter of AI</u>. Period. That is China’s declared national plan.</em></p>
<p style="text-align: justify;"><strong><em>Part 1: The Evolution of Intelligence: From a Bang to a Byte</em></strong></p>
<p style="text-align: justify;"><em>In the age of artificial intelligence, second place will be of an ever-diminishing and distant value.</em></p>
<p style="text-align: justify;"><strong><em>Chapter 1: Setting the Stage</em></strong></p>
<p style="text-align: justify;"><em>Most conversations about artificial intelligence, whether in auditoriums, offices, or coffee shops, either begin or end with one or more of the following questions: </em></p>
<p style="text-align: justify;"><em>1.​ What exactly is AI? </em></p>
<p style="text-align: justify;"><em>2.​ What aspects of our lives will be changed by it? </em></p>
<p style="text-align: justify;"><em>3.​ Which of those changes will be beneficial and which of them harmful? </em></p>
<p style="text-align: justify;"><em>4.​ Where do the nations of the world stand in relation to one another, especially China and Russia? </em></p>
<p style="text-align: justify;"><em>5.​ And, <u>what can we do to ensure that AI is only used in legal, moral, and ethical ways</u>?</em></p>
<p style="text-align: justify;"><em>But, when it comes to their scientific portrayals of artificial intelligence, our most popular authors and screenwriters have too often generated an array of exotic fears by focusing our attention on distant, dystopian possibilities instead of present-day realities. <u>Science fiction that depicts AI usually aligns a computer’s intelligence with consciousness, and then frightens us by portraying future worlds in which AI isn’t only conscious, but also evil-minded and intent, self-motivated even, to overtake and destroy us</u>.</em></p>
<p style="text-align: justify;"><strong><em>Chapter 2: In the Beginning . . . </em></strong></p>
<p style="text-align: justify;"><em>To put the slow pace of our intellectual innovations into perspective, we didn’t invent the wheel until 5,000 to 6,000 years ago. Think about that. Over the entire timeline of Homo sapiens’ existence, and as the most advanced and only remaining of all human species<u>, it took 194,000 of our 200,000 total years on Earth to finally piece together the idea and method of putting a round object to a constructive, locomotive use</u>.</em></p>
<p style="text-align: justify;"><em>…little more than 5,000 years ago, the <u>ancient Sumerians (in modern day Iraq) first began reducing their verbal language to writing</u>. This was a tremendously important step forward in the use of language—it enabled true collective learning.</em></p>
<p style="text-align: justify;"><em>From the moment alphabet-based written languages took hold and began spreading throughout different civilizations, a new game was truly on. <u>Human learning, human capability, and human dominion over nature began to expand at an unprecedented rate</u>.</em></p>
<p style="text-align: justify;"><strong><em>Chapter 3: Too Many Numbers to Count</em></strong></p>
<p style="text-align: justify;"><em>A common thought experiment using the concept of time is to imagine counting to the number 1,000—not an unreasonable task, although I suspect few of us have actually done it. In any event, <u>if you start with the number one and count one additional number every second, without stopping, it will take you almost 17 minutes to reach 1,000.</u> That’s not tough to calculate. Just divide 1,000 by 60 (the number of seconds in a minute). And while it might take a bit longer than you’d have guessed, it’s easy to see, and you’re probably thinking, “OK, sure, that sounds about right.” <strong>But what if you want to continue counting from one thousand to one million? How much longer would that take?</strong> This time, the answer’s more likely to surprise you. If you continue counting, again by adding one new number every second and again without stopping (not even to speak, eat, drink, or sleep), it will take more than 277 hours, or more than <strong>11½ continuous, uninterrupted days</strong>. And if you want to be more realistic in going about the task, by only doing your counting during the course of your eight-hour workdays, then the job of counting to a million would take you almost two months of a full-time work schedule to complete—and that’s without time off for lunch. And what about a billion? That’s a number we now hear all the time. It can’t be that much more than a million, right? Wrong. <u>If you want to count to one billion at one-second intervals, you’ll unfortunately have to spend most of your adult life at the task, because it will take almost 32 years of continuous, nonstop counting to get there</u>. And to count to one trillion, another number that we’re beginning to hear and use more frequently? Well, that would take you almost 32,000 years.</em></p>
<p style="text-align: justify;"><em>In today’s world of computing, <u>the kinds of numbers that exceed our natural comprehension have become commonplace</u>. Scientists, computer designers, software programmers, and even consumers encounter them every day. They explain our universe, our products, and even ourselves. They also explain artificial intelligence.</em></p>
<p style="text-align: justify;"><strong><em>Chapter 4: Secret Origins of Modern Computing</em></strong></p>
<p style="text-align: justify;"><em>So, what does Mexico’s independence from Spain combined with Texas’s subsequent independence from Mexico, America’s resulting annexation of Texas, and the American acquisition of more than half of Mexico’s territory at the end of the Mexican-American War possibly have to do with computer technology and the eventual creation of artificial intelligence? Fast-forward seven decades, all the way to the other side of the Atlantic, and the connections unfold. … In an effort to take advantage of lingering animosities, the <u>German secretary of foreign affairs, Arthur Zimmermann, sent a coded telegram on January 19, 1917, to Germany’s ambassador in Mexico instructing him to offer an alliance and financial support to Mexico if it would agree to invade America should the US enter the war</u>. In pertinent part, the telegram read: We intend to begin on the first of February unrestricted submarine warfare. We shall endeavor in spite of this to keep the United States of America neutral. In the event of this not succeeding, we make Mexico a proposal of alliance on the following basis: make war together, make peace together, generous financial support and an understanding on our part that Mexico is to reconquer the lost territory in Texas, New Mexico, and Arizona . . . Signed, Zimmermann.  See Figure 4.1. Though little is made of it in history books, Zimmermann’s telegram was one of the most significant strategic missteps in military history. It not only failed, but completely backfired. Unknown to the Germans, the British had been intercepting their military signals and communications for years. When the Zimmermann telegram was sent, the Royal Navy’s code-breaking operation intercepted it, deciphered it, and, a little more than a month later, turned it over to the American embassy in London.7 On February 26, 1917, American president Woodrow Wilson first learned of the telegram. 
At that point, the German submarine campaign in the North Atlantic had already begun, and American cargo ships were sinking just as the telegram foreshadowed. Although Wilson was already strategizing a military response, many Americans and members of Congress were still strongly opposed to entering the war. But the Zimmermann telegram was Wilson’s ticket to change public opinion. He presented it to Congress and instructed the State Department to openly release its contents to the American media.</em></p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-1573" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-4-of.jpg" alt="" width="458" height="283" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-4-of.jpg 393w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-4-of-300x185.jpg 300w" sizes="(max-width: 458px) 100vw, 458px" /></p>
<p style="text-align: justify;"><em>…it’s estimated that <u>the information the Allies acquired through Turing’s Bombe, along with other work accomplished at Bletchley,</u><u> shortened the war by at least two years and saved millions of lives</u>.</em></p>
<p style="text-align: justify;"><strong><em>Chapter 5: Unifying the Languages of Men and Machines</em></strong></p>
<p style="text-align: justify;"><em>“Mathematical science shows what is. It is <u>the language of unseen relations between things</u>. But to use and apply that language, we must be able fully to appreciate, to feel, to seize the unseen, the unconscious.” —<strong>Ada Lovelace, 1815–1852</strong> English Mathematician and Computing Theorist</em></p>
<p style="text-align: justify;"><em>Science fiction aside, no computer programming language will ever be any person’s native or natural language. But no one can dispute that in today’s computer-driven age of information, <u>common computer languages like Java, JavaScript, C, C++, Python, Swift, and PHP are extraordinarily useful and powerful skills to possess—so much so that they’re now becoming accepted in various places as the equivalents of true secondary languages</u>. … High-level computer programming languages, on the other hand—such as <strong>Python</strong>, Java, C++, and C—are <strong>much easier to write</strong>, <u>but they rely upon other interpreter programs (called compilers) to convert their high-level code into the machine’s underlying binary code</u>.</em></p>
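<p style="text-align: justify;">For readers who want to see that high-level-to-low-level translation in miniature: Python’s standard-library <em>dis</em> module shows the lower-level bytecode instructions that the interpreter compiles a readable function down to. (Bytecode rather than native machine code, so this is an illustrative sketch of the idea, not the exact compiler pipeline the book describes.)</p>

```python
import dis

def add(a, b):
    """A one-line, human-readable high-level function."""
    return a + b

# Disassemble it to see the machine-oriented instructions underneath.
dis.dis(add)  # prints opcodes such as LOAD_FAST and a RETURN instruction
```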
<p style="text-align: justify;"><strong><em>Chapter 6: Consciousness and Other Brain-to-Computer Comparisons</em></strong></p>
<p style="text-align: justify;"><em>“It’s ridiculous to live 100 years and only be able to remember 30 million bytes. You know, less than a compact disc. <u>The human condition is really becoming more obsolete every minute</u>.” —<a href="https://en.wikipedia.org/wiki/Marvin_Minsky" target="_blank" rel="noopener noreferrer">Marvin Minsky</a>, 1927–2016 Cofounder of MIT AI Laboratory</em></p>
<p style="text-align: justify;"><em>While the scopes of the tasks that narrow AI programs can accomplish are broadening, <u>computers are still not anywhere close to accomplishing the general intelligence, multitask potential and performance parameters of which humans are capable</u>. General artificial intelligence, as we’ll discuss in Chapter 9, is still far off, if ever.</em></p>
<p style="text-align: justify;"><em>Cognitive and neuroscientists say <u>we’ve learned more about the actual physiology of the brain in the last ten years alone than we had known in all of our prior history</u>. Even so, much remains a complete mystery.</em></p>
<p style="text-align: justify;"><em>Moreover, until recently, our experiences and traditional ways of thinking told us that consciousness was a prerequisite for intelligence, and that the latter could only arise from the former. <u>We never knew of anything intelligent that wasn’t also conscious</u>. Those two phenomena seemed always, intuitively and actually, to go together. It’s therefore understandable that science fiction seems to always put the two together. <u>In most fictional equations, intelligence always equals consciousness</u>. It’s reasonable, then, for people to feel uneasy about, and suspicious of, machines that can learn, especially when they can do so on their own and without any continuing programming or specific oversight from us. But, in the new world ahead of us, <strong>we have to put our unease aside</strong>, <u>right along with our old notions that intelligence always requires or results in consciousness. It doesn’t</u>.</em></p>
<p style="text-align: justify;"><em>In his book <a href="https://www.amazon.com/Future-Mind-Scientific-Understand-Enhance-ebook/dp/B00EX4E258/ref=sr_1_2?dchild=1&amp;keywords=future+of+the+mind&amp;qid=1606295724&amp;s=digital-text&amp;sr=1-2" target="_blank" rel="noopener noreferrer">The Future of the Mind</a>, Kaku suggests that we should think of consciousness as a progressive collection of ever-increasing factors that animals use to determine and measure their place in both space and time, and in order to accomplish certain goals. Kaku proposes that there are <strong>three fundamentally distinguishable levels of consciousness</strong>. <strong><u>Level one</u></strong><u> is a creature’s singular ability to understand its position in space</u>. In other words, it’s the ability to be aware of one’s own spatial existence with respect to the existence of others. This is the most minimal, basic level of consciousness, and it emanates from the oldest, most prehistoric part of the brain—the hindbrain or reptilian brain. <u>A lizard, for example, can be said to have level-one consciousness</u> because it is aware of its own space in relation to the space of the other animals upon which it preys. … <strong><u>Level-two</u></strong><u> consciousness is an animal’s ability to understand its position with respect to others</u>—not only contrary to them, but also in concerted accord with them. This level of consciousness flows from the later-evolved center regions of the brain, the cerebellum, and involves emotions and an awareness of social hierarchy, protocol, deference, respect, and even courtesies. Kaku describes this level of consciousness as <u>the monkey brain</u>—the ability to <u>understand and abide by social hierarchy</u> and order within groups and communities of animals. 
In humans, this level of consciousness develops during the early stages of our socialization, when young children learn from their parents and others to abide by the rules of their homes and communities, to act socially responsible, and to show respect and tolerance for others. Level-three consciousness is the ability to not only understand our social place in space with respect to the places of others, but also understand our place in time—to have an understanding of both yesterday and tomorrow. In Kaku’s view, the <strong><u>level-three</u></strong> <u>ability to reflect, consider, plan, and anticipate the future</u> involves unique attributes of consciousness that <strong>only humans possess</strong>. He asserts, and most others would agree, that <u>this level results from the part of the brain that most recently evolved, which is the outer and forwardmost part that sits right behind our foreheads—the prefrontal region of the neocortex</u>. As we discussed in Chapter 2, this is where mankind’s higher thinking resides, including such skills as <strong>theorizing </strong>and <strong>strategizing</strong>.</em></p>
<p style="text-align: justify;"><em>…<u>flowers, which likewise fall short of level-one consciousness</u>, can nonetheless be said to have a few additional perceptive elements beyond that of a thermostat. In addition to temperature, <u>a flower can also sense humidity, soil quality, and the angle of sunlight. The Venus flytrap, a carnivorous plant, takes things even a step further than a flower</u>. Beyond those things that any plant can sense, the flytrap is also able to detect the presence of an insect or spider on the blades of its leaves. When it does, it folds in on itself to capture and then digest the prey. Its mechanisms are so highly specialized that it can even distinguish between living prey and nonliving stimuli, such as falling raindrops. Yet, Little Shop of Horrors aside, <u>most of us wouldn’t think for an instant that a Venus flytrap has anything that even remotely approaches an animal’s level of overall consciousness</u>. And we’d be right. Nonetheless, the flytrap does have some elements of awareness that are, at least generally speaking, foundational components of consciousness.</em></p>
<p style="text-align: justify;"><em>But, with machines now capable of completing those and many other intelligent goals without us, <u>consciousness is no longer a necessary element</u>. Just because computers can be programmed to accomplish such tasks on their own, and even learn while doing so, it doesn’t mean that they’ll one day just spontaneously develop consciousness. In this new world of ours, <strong>intelligence and consciousness are not interdependent</strong>.</em></p>
<p style="text-align: justify;"><em><u>There’s a misconception, which has now become common myth, that we use only 10 to 20 percent of our brain. In truth, we use virtually all of it—and most of our brain is active most of the time</u>. Brain scans show that no matter what we are doing or thinking, all areas remain relatively active and none are ever completely dormant or shut down. Even when we’re sleeping, all parts of our brain show at least some levels of readiness and interactive activity.10 For all of its complexity and despite the continual engagement of all of its parts<u>, the human brain is extremely efficient in energy consumption. Requiring only 20 watts, which is barely enough to light a dim incandescent lightbulb, it’s about 50 million times more efficient than any of today’s computers of even remotely comparable capacity</u>. That’s fortunate for us, because we can only produce a certain amount of energy from the volume of food we’re capable of eating on any given day. Still, despite its efficiency compared to its total energy consumption, <u>our brain does demand more energy than any of our other organs. Although it only weighs about 2 percent of our total body mass, it requires almost 20 percent of the total energy we generate. At that rate, the brain consumes 10 times its pro rata share of our available energy, or approximately 500 of the 2,400 calories we consume on an average day</u>.</em></p>
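<p style="text-align: justify;">The proportions in that passage are internally consistent, as two lines of arithmetic confirm (a sketch using the book’s own figures; the 2,400-calorie daily intake is the book’s assumption, not a universal constant):</p>

```python
# The book's figures: brain is ~2% of body mass but uses ~20% of daily energy.
body_mass_share = 0.02
energy_share = 0.20
daily_calories = 2400  # the book's assumed average daily intake

# 20% / 2% = 10x the brain's pro rata share of our energy budget.
pro_rata_multiple = energy_share / body_mass_share
# 20% of 2,400 calories is 480 -- roughly the "500" the book cites.
brain_calories = energy_share * daily_calories

print(pro_rata_multiple, brain_calories)
```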
<p style="text-align: justify;"><em><u>Just imagine the intellect and insight any one of us would have if we could immediately analyze all of the information and experiences stored not only in our own mind, but in the minds of all our human colleagues</u>. Machines that can access all of the world’s retrievable information, instantaneously analyze and learn from it, and then provide us with new answers, strategies, and solutions are changing the realities of our existence and allowing us to <u>accomplish far more than humans alone ever could</u>.</em></p>
<p style="text-align: justify;"><strong><em>Part 2: Twenty-First-Century Computing and AI: <u>Power</u>, <u>Patterns</u>, and <u>Predictions</u></em></strong></p>
<p style="text-align: justify;"><strong><em>Chapter 7: Games Matter</em></strong></p>
<p style="text-align: justify;"><em><u>Games fall generally into one of <strong>four categories</strong>.</u> There are games of chance—like roulette, lotto, and dice—where luck is the only determiner of outcome. There are games of pure intellect—like crossword puzzles, brainteasers, checkers, and chess—in which we match our knowledge, skills, and strategies against a set standard or an opponent. There are games of pure physical aptitude—like individual track-and-field events—where success is determined only by the participant’s own physical attributes. And, finally, composing the largest category of all are games that combine two or more of those elements. Most team and sporting events fall into the last category, where we test our skills and strategies against a combination of elements outside our control—like the skills of an individual opponent, the aptitude of an adversarial team, the luck of the draw, and even the bounce of the ball.</em></p>
<p style="text-align: justify;"><em>Regardless of its simple rules, <a href="https://en.wikipedia.org/wiki/Go_(game)" target="_blank" rel="noopener noreferrer">Go</a> is incredibly complex. At most points in the game, <u>the best players can’t even begin to calculate in their minds all the moves that might mathematically be the next best</u>.</em></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1574" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-5-of.jpg" alt="" width="175" height="281" /></p>
<p style="text-align: justify;"><em>Consistent with a habit of underestimating AI developments that continues even today, most everyone, including the majority of AI researchers, believed machine learning technology was at least ten years away from succeeding at the challenge in store for <strong>DeepMind</strong>. <a href="https://en.wikipedia.org/wiki/Lee_Sedol" target="_blank" rel="noopener noreferrer">Sedol</a> himself said before the match, “I am confident about the match. I believe that human intuition is still too advanced for AI to have caught up.” <u>Played at Seoul’s Four Seasons Hotel, the first game was broadcast live and watched by an estimated 80 million people worldwide, 60 million in China alone</u>. From the very start, it was clear that things weren’t as people had expected or hoped. <u>Beginning with the first move, AlphaGo played with an apparent creativity so different from conventional human playing styles that its moves seemed random at times, even outright wrong</u>. But its new strategies quickly proved powerfully, almost intuitively, effective. In the words of one commentator, “No matter how complex the situation, <strong>AlphaGo plays as if it knows everything already</strong>.” Sedol was surprised and immediately confused by the unexpected level and style of AlphaGo’s play. He lost the first game in an overwhelming defeat. The world was shocked, especially the large community of Go players and fans throughout Asia. Worse, <u>they were completely, culturally unprepared for the loss</u>. In a single game between man and machine, centuries of methodically developed and highly respected theories of how Go ought to be played had been dismantled by the machine’s drastically new strategies. The second game went no better. Sedol lost soundly again. In the press conference afterward, he was clearly shaken by the incredible pressure upon him. The internet and media were alive with reactions, and most observers empathized with Sedol’s plight. 
There seemed something very unsettling and sad, frightening even, about a machine that could find new and devastatingly effective strategies on its own . . . <u>especially when those strategies were far different from anything the best human players, over centuries of time, had ever even thought to consider</u>. After another loss in Game 3, things had become entirely hopeless for Sedol. Having already mathematically lost the match, <u>he publicly apologized for being powerless against AlphaGo</u>.</em></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1575" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-6-of.jpg" alt="" width="192" height="273" /></p>
<p style="text-align: justify;"><em>The match ended with another win for AlphaGo, resulting in a <u>final tally of 4–1 in favor of AI over humanity</u>. At the end of it all, the style of AlphaGo’s play upset centuries of accepted wisdom. The huge community of Go players and enthusiasts throughout Asia collectively reflected on what had occurred . . . and ultimately found inspiration. Their game had been played for thousands of years and was symbolic of their cultural way of life. It represented the wonders of the human mind and our unique, human way of mastering the challenges of the world around us. Yet, the game had just been creatively mastered by a machine. The community realized, though, that this was an opportunity for them to see things, perhaps all things, in a new light. After the match, Sedol said it most poignantly: “What surprised me the most was that AlphaGo showed us that moves humans may have thought are creative were actually conventional.” <a href="https://en.wikipedia.org/wiki/Fan_Hui" target="_blank" rel="noopener noreferrer">Fan Hui</a>, the European champion who had been the first to lose to AlphaGo, also reflected, “<u>Maybe AlphaGo can just show humans something we never discovered. Maybe it’s beautiful</u>.”</em></p>
<p style="text-align: justify;"><em>Today’s video games are incredibly complex, particularly those that are played online in multiplayer formats. Billion-dollar companies routinely grow from them and, since the early 2010s, professional teams from across the world have been competing in matches and tournaments organized around them. <u>Top professional players earn multimillion-dollar contracts</u> and the commercial values of teams now rival traditional professional sports franchises.</em></p>
<p style="text-align: justify;"><em>The bots had learned—in all of their prior games against only themselves and lesser competition—to pursue strategies just sufficient to win. By definition, only a small margin is necessary to achieve victory, and therefore only a slight advantage was all that ever mattered to the bots as they learned and developed their initial strategies. <u>Because there was no advantage to winning by a large margin, the algorithms behind the bots had never discerned a strategic advantage to gaining a larger lead than necessary to ensure victory.</u> And so, <u>once they fell behind, they had difficulty devising the kind of coordinated response necessary to make up for large deficits</u>. The bots had simply never been in that situation before. In April 2019, <u>the OpenAI Five <strong>again </strong>challenged a top professional team of Dota 2 players</u>, who this time were the reigning world champions. Although the match was scheduled for three games, <u>the artificially intelligent team of virtual bots quickly proved that they had learned the value of crushing their opponent as quickly and decisively as possible. They so overwhelmed the human players in the first two games of the match that the event organizers didn’t even bother staging the third game</u>.</em></p>
<p style="text-align: justify;"><em>According to <strong>Tencent</strong>, when <a href="https://medium.com/syncedreview/tencent-tstarbots-defeat-starcraft-iis-powerful-builtin-ai-in-the-full-game-ee3d76519419" target="_blank" rel="noopener noreferrer">TSTARBOT2</a> was engaged to oversee game play, it<u> won 90 percent of the time.</u></em></p>
<p style="text-align: justify;"><em>Regardless of whether an algorithm is playing a board game like Go, manipulating a video game like StarCraft II, or directing a self-driving car to operate safely within the traffic around it, everything AI applications are capable of doing depends upon their ability to obtain and analyze the necessary indicators of the situations they’re tasked to solve. In essence, <u>the quality of their performance depends upon the quality and completeness of the information available to them</u>.</em></p>
<p style="text-align: justify;"><strong><em>Chapter 8: A Deluge of Data</em></strong></p>
<p style="text-align: justify;"><em>Our ability to learn requires our capacity to acquire data and to analyze it. Without data, intelligence just isn’t possible, not at any level—and not in any animal, individual, or computer.</em></p>
<p style="text-align: justify;"><em>With the entirety of the learning process dependent upon it, there are <u>two fundamental truths about data</u>. <strong>First</strong>, while the laws of quantum mechanics insist information about the past is never actually lost, for us humans, it most surely can be. Data, or certainly our ability to capture it, doesn’t naturally last forever. It’s fleeting. <u>Unless someone or something is present to observe and somehow record the data emanating from an event when or as it occurs, then the data that describes the occurrence tends to disappear. Think of the tree that falls in the woods with no one there to observe it.</u> Does it make a sound? Other than in the most esoteric of philosophic conversations, the answer is yes—for a falling tree certainly creates sound waves. But there’s a slightly different and much more pertinent question to ask. Can we ever know exactly what it sounded like? To that question, the answer is no, not unless it was recorded in some way or unless some other evidence or effect of the sound exists that we can later still detect and measure—and from which we can recreate the sound. In other words, <u>unless an event was observed when it occurred, or unless the event leaves a physical or measurable, residual trace we can later discover, the data of the event—as far as we humans are concerned—is lost</u>. … The <strong>second</strong> truth about data is that <u>there is simply too much of it for us to fully perceive, collect, and manage on our own</u>. The capacity of the human brain is enormous. But, as we’ve seen, <u>there’s a practical limit to the amount of data that any person or any group of people can ever obtain, let alone meaningfully process or share</u>.</em></p>
<p style="text-align: justify;"><em>In 1989, an English software engineer named <a href="https://en.wikipedia.org/wiki/Tim_Berners-Lee" target="_blank" rel="noopener noreferrer">Tim Berners-Lee</a> proposed a solution. While working at the European Organization for Nuclear Research (CERN), Berners-Lee wrote a paper called “<a href="https://www.w3.org/History/1989/proposal.html" target="_blank" rel="noopener noreferrer">Information Management: A Proposal</a>,” in which he suggested that connected computers could share information using a newly emerging technology called hypertext—which is text displayed on a computer or other device in hyperlinks that, with a simple click, immediately connect the user to other text, documents, and internet locations. <strong>Hypertext </strong>is so fundamentally common to us now that we don’t even think of it. But when first introduced, it changed everything about the practical functionality of the internet.</em></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1576" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-7-of.jpg" alt="" width="201" height="282" /></p>
<p style="text-align: justify;"><em>Digital data is now commonly referred to as “the new oil.” Although the phrase has become cliché, it’s essentially accurate. Even before data is structured or manipulated in any way, the variety and amounts of it that we generate—about ourselves, our families, our communities, and even our cultures—are of immense and powerful commercial and political value. <u>But, just like oil, it’s also spread unevenly across geographies and nations</u>. China is the most populated country on Earth, with more than 1.4 billion people. India is a close second with over 1.35 billion, and <u>the US is in a very distant third place with a total population of fewer than 330 million people—which is less than a quarter of the population of either China or India, and only 4.4 percent of the world’s total population</u>. … Regionally, <strong>Asia represents 63 percent of the globe’s populace</strong>, Africa 16 percent, Latin America 9 percent, and Europe 7.5 percent.</em></p>
<p style="text-align: justify;"><em>An impressive <u>90 percent of the US population is connected to the internet</u>, which amounts to about 300 million people. By contrast, <u>less than 60 percent of China’s population is currently digitally connected, but even that smaller percentage of its population nonetheless equates to more than 800 million people, almost three times the number of Americans</u>. In fact, <u>of all people using the internet globally, 49 percent of them are Chinese and Southeast Asian</u>, 17 percent are European, 11 percent are African, and just over 10 percent are Latin American. <u>Only 8 percent of all internet traffic originates from the US</u> and, because of the rising number of users from other countries, the <u>US percentage will only continue to decrease</u>.</em></p>
<p style="text-align: justify;"><strong><em>In 1996,</em></strong><em> a small group of <u><a href="https://www.technologyreview.com/2011/10/31/257406/who-coined-cloud-computing/" target="_blank" rel="noopener noreferrer">Compaq Computer</a> </u>executives proposed a theoretical structure for the concept <u>and gave it a name</u>. In a document distributed only internally at the company’s offices outside of Houston, they plotted the future of the internet and imagined a day when software and storage capacity would be dispersed and shared throughout the web. <strong>They called it “cloud computing.”</strong> By the early 2000s, the concept of the cloud was taking shape in the real world. <strong>Contrary to what many people think, there’s nothing mysterious about it.</strong> <u>Cloud computing is just a means of accessing all aspects of computing services over the internet—including servers, storage, databases, software, security, and even artificial intelligence. In practice, the cloud is nothing more than a network, albeit vast, that allows you to utilize other computers’ hardware and software. Most of us use cloud computing all the time without even realizing it. When we use our laptops, iPhones, or any other devices to type a Google search query, our personal device doesn’t really have much to do with finding the information we’re looking for. It’s only acting as a messenger that communicates our search terms to an array of other computers somewhere out in the world. Those computers then use their own programs and databases to determine the results and send them back to us. It all occurs at incredible speed, and, for all we know, the real work that identified the information we requested may have been performed by computers five miles away or on the other side of the planet. 
Regardless of their locations, cloud computing simply allows us to instantaneously draw upon the strength, software, and information of other computers—which are inevitably more powerful, equipped, and versatile than our own. In addition to Google Search, other examples of cloud services that we frequently use include web-based email and cloud backup services for our phones and other computer devices. Even Netflix uses cloud computing to facilitate their video streaming service, and the cloud has likewise become the primary delivery method for the majority of apps now available, particularly from companies that offer their applications free of charge or for subscription fees over the internet, rather than as stand-alone products that require full downloads. The infrastructures required for cloud services are provided primarily by a handful of major commercial cloud providers. The largest two are Microsoft Azure and Amazon Web Services (AWS)—the latter of which launched the first public cloud service in 2006 as a way of turning its unused computer power into commercial revenue. Each of those providers now generates close to $30 billion per year from their cloud services. IBM, Oracle, and Google are the other main American companies offering cloud computing, and Alibaba is the principal Chinese provider of cloud services. These companies manage networks of huge, secure data centers that are usually spread over broad geographic regions where they house the infrastructures that power and store the data, systems, and software necessary to operate their clouds</u>.</em></p>
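The "messenger" model the passage describes can be sketched with a toy client and server. This is not any real cloud API: both ends run on localhost, and the dataset, function names, and port handling are invented for illustration. The point is only that the local device sends the query while the "remote" machine holds the data and does the work.

```python
# A toy sketch of the client/server split described above: the local device
# acts only as a messenger, while the "remote" computer holds the data and
# does the searching. Both ends run on localhost; the dataset is invented.
import socket
import threading

# Data that lives on the server side only -- the client never holds it.
DOCUMENTS = {
    "go": "Go is an ancient board game of territorial strategy.",
    "cloud": "The cloud is a network of remote computers you borrow.",
}

def serve_one(server: socket.socket) -> None:
    """Accept a single query, look it up server-side, and send the answer."""
    conn, _ = server.accept()
    with conn:
        query = conn.recv(1024).decode().strip().lower()
        conn.sendall(DOCUMENTS.get(query, "no result").encode())

def remote_search(query: str, port: int) -> str:
    """The local device's whole job: send the terms, receive the results."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(query.encode())
        return conn.recv(1024).decode()

server = socket.socket()
server.bind(("127.0.0.1", 0))          # bind to any free local port
server.listen(1)
port = server.getsockname()[1]

worker = threading.Thread(target=serve_one, args=(server,))
worker.start()
result = remote_search("go", port)
worker.join()
server.close()
print(result)
```

Whether the server runs in a thread next door or in a data center on another continent, the client code is identical, which is the practical meaning of "the cloud" in the passage.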
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1577" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-8-of.jpg" alt="" width="355" height="289" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-8-of.jpg 355w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-8-of-300x244.jpg 300w" sizes="(max-width: 355px) 100vw, 355px" /></p>
<p style="text-align: justify;"><em>Cloud operations can be <strong>public</strong>, <strong>private</strong>, or <strong>hybrid</strong>. Public clouds allow all users to share space and time on the cloud and to access it through unrestricted means. <u>Public is the usual cloud format for individual and personal cloud computing</u>, but many companies also opt to use public cloud infrastructures for their internet email systems and for employees who share documents using Google Drive. <u>Private clouds work technologically the same as their public counterparts, except they service a single company and require authorized access to use the network</u>. They can be managed either exclusively by the user company or by one of the major cloud providers on the company’s behalf. Either way, a private cloud is usually fully integrated with the company’s existing infrastructure, network of users, and databases . . . and can span countries and continents just as a public cloud can. Oftentimes, companies have needs that lie somewhere between public and private clouds, so they opt instead for a <u>hybrid cloud, which, just as the name implies, provides elements of each</u> that usually reflect the different levels of security and corporate control required for various cloud-based purposes and activities.</em></p>
<p style="text-align: justify;"><em>As a result, <strong><u>security </u></strong><u>is actually a compelling reason to use cloud-based systems rather than avoid them</u>.</em></p>
<p style="text-align: justify;"><em>There’s a completely reasonable argument that <u>we do in fact pay for what we use by giving those who provide such services consumer and behavioral information about us</u> that they can then use for their own commercial benefits—to generate revenue, for instance, through targeted advertising campaigns. As can rightly be said, “If you don’t think you’re paying for it, that just means you’re not the customer—you’re the product.”</em></p>
<p style="text-align: justify;"><strong><em>Chapter 9: Mimicking the Mind</em></strong></p>
<p style="text-align: justify;"><em>“If you wish to make an apple pie from scratch, you must first invent the universe.” —<a href="https://en.wikipedia.org/wiki/Carl_Sagan" target="_blank" rel="noopener noreferrer">Carl Sagan</a> (1934–1996), astronomer, astrophysicist, and author, from the TV adaptation of Cosmos</em></p>
<p style="text-align: justify;"><em>In 2012, machine learning finally proved effective when a major breakthrough in computer vision capabilities occurred that changed the attitudes of most naysayers.</em></p>
<p style="text-align: justify;"><em><u>For a computer, data is the equivalent of experience</u>. So, the more data that a machine learning system processes, the better it becomes.</em></p>
<p style="text-align: justify;"><em>In the earliest days, neural networks were shallow, usually consisting of only a few layers: an <strong>input layer</strong>, a middle or <strong>hidden layer</strong>, and an <strong>output layer</strong>. Data is fed into the input layer, analyzed, and weighed as it progresses through the hidden layer, and it’s then forwarded to the output layer as a measured result. Nowadays, the process works generally the same, but the frameworks of the networks often include many middle, hidden layers, sometimes thousands. These are called <u>deep neural networks</u>, or <strong>deep-learning</strong> systems.</em></p>
<p style="text-align: justify;"><em>Through a process called <strong>backpropagation</strong>, the results of any measurements—at any points in the process—can even be fed back to prior layers over and over again to continually adjust the weights and measurements based on the overall dynamics of the evolving analysis.</em></p>
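The layer structure and backpropagation loop just described can be sketched in a few dozen lines. Everything specific here (the XOR-style dataset, the 4-unit hidden layer, the 0.5 learning rate) is an illustrative choice, not code from the book:

```python
# A minimal input -> hidden -> output network trained by backpropagation.
# Layer sizes, learning rate, and data are invented for illustration.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

X = [[0, 0], [0, 1], [1, 0], [1, 1]]   # input-layer values
y = [0, 1, 1, 0]                       # desired outputs (XOR)

H = 4                                  # hidden-layer width
W1 = [[random.uniform(-1, 1) for _ in range(H)] for _ in range(2)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = random.uniform(-1, 1)

def forward(x):
    """One pass through the layers: input -> hidden -> output."""
    hidden = [sigmoid(x[0] * W1[0][j] + x[1] * W1[1][j] + b1[j]) for j in range(H)]
    out = sigmoid(sum(hidden[j] * W2[j] for j in range(H)) + b2)
    return hidden, out

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, y)) / len(X)

initial_loss = mean_loss()
for _ in range(5000):
    for x, t in zip(X, y):
        hidden, out = forward(x)
        # Backpropagation: the output error is fed back toward the earlier
        # layers, and every weight is nudged to reduce that error.
        d_out = (out - t) * out * (1 - out)
        for j in range(H):
            d_hidden = d_out * W2[j] * hidden[j] * (1 - hidden[j])
            W2[j] -= 0.5 * hidden[j] * d_out
            b1[j] -= 0.5 * d_hidden
            W1[0][j] -= 0.5 * x[0] * d_hidden
            W1[1][j] -= 0.5 * x[1] * d_hidden
        b2 -= 0.5 * d_out
final_loss = mean_loss()
```

Running the loop repeatedly drives the loss down; a modern "deep" network differs from this sketch mainly in having many more hidden layers and weights, not in the mechanism.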
<p style="text-align: justify;"><em><u>If a neural net could describe “how” or “why” it came to the conclusion it did, the description wouldn’t be much different from what we would say about our own assessments of information streaming into our own brains. We would probably say something like, “Well, I thought about it and just concluded what I did—based, I imagine, on all of my prior experiences and the information that was newly available to me.”</u> Similarly, a machine learning system would say that it assessed the information, made its calculations, and predicted that its output was accurate, or most probable. In both cases, although we might have no idea of the precise measurements occurring deep at the evaluative levels, at the end of the process we feel confident that we’ve made sense of the data, and that we can likewise feel confident of whatever we’ve concluded.</em></p>
<p style="text-align: justify;"><em>In <strong>supervised learning</strong> systems<u>, the training data is first labeled by humans with the correct classification or output value</u>. … For <strong>unsupervised machine learning </strong>algorithms, the training <u>data isn’t classified or labeled in any manner</u> before being fed into the system. Instead, the system analyzes the data without any prior guidance or specific goal. Its task is more generally to discover, on its own, any similarities or distinct and recurring differences within the data so it can be grouped or consolidated according to those qualities. In these systems, the machine learning application is essentially being asked to find unifying or distinguishing characteristics within the data from which categories can then be determined and labeled. <u>This kind of approach is frequently used to explore data in order to find hidden commonalities</u> within very broad sets of complex and varied data. It’s often referred to as <strong>cluster analysis</strong>, and is routinely used for <u>market research</u>—where, for advertising and other strategies, it’s extremely valuable to <u>find common characteristics, behaviors, and habits within otherwise broad bands of prospective consumers</u> whose similarities aren’t readily apparent. <strong>Reinforcement learning</strong> is similar to unsupervised learning in that the training data isn’t labeled. But when the system draws a conclusion about the data or acts upon it in some way<u>, the outcome is graded, or rewarded</u>, and the algorithm accordingly learns what actions are most valued.  … <strong>Convolutional neural networks</strong> are the most commonly used network for computer vision programs or any machine learning applications that require the system to recognize images or shapes.  … Finally, <strong>generative adversarial networks, GANs</strong>, are composed of two separate, deep neural networks designed to <u>work against one another</u>. 
One of the neural networks, called the <strong>generator</strong>, creates new data that the other network, called the <strong>discriminator</strong>, evaluates in order to determine if it is indistinguishable from other data in the training set, in which case it is considered authentic.</em></p>
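The cluster-analysis approach mentioned above (grouping unlabeled data purely by similarity, with no human-provided labels) can be sketched with plain k-means. The two-group "consumer" data and the choice of k = 2 are invented for illustration:

```python
# A minimal sketch of cluster analysis: unlabeled 2-D points are grouped
# purely by distance, with no labels supplied in advance. Plain k-means.
import math

def kmeans(points, k, iterations=20):
    """Group unlabeled 2-D points into k clusters around moving centroids."""
    centroids = points[:k]                        # naive initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two loose groups of "consumers" described by two behavioral measurements.
points = [(1, 1), (1.5, 2), (2, 1.2), (8, 8), (8.5, 9), (9, 8.2)]
centroids, clusters = kmeans(points, k=2)
```

The algorithm discovers the two groupings on its own, which is exactly the market-research use the passage describes: finding common characteristics within otherwise undifferentiated data.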
<p style="text-align: justify;"><em>The following is a list of tasks at which machine learning applications have already proven well suited. Despite its length, the list is far from complete. </em></p>
<p style="text-align: justify;"><em>Aerospace and aeronautics—research and design<br />
</em><em>Agriculture—crop management and control<br />
</em><em>Authenticity verification—visual, voice, and data<br />
</em><em>Aviation—logistics, routing, and traffic control<br />
</em><em>Biometrics—assessment and prescriptives<br />
</em><em>Climate—analysis and prediction<br />
</em><em>Computer hardware design and engineering<br />
</em><em>Computer vision<br />
</em><em>Counterterrorism<br />
</em><em>Crime analysis and detection<br />
</em><em>Customer relations—proactive management<br />
</em><em>Customer service—call centers and response<br />
</em><em>Cybersecurity<br />
</em><em>Data analysis—for any use<br />
</em><em>Disaster response, recovery, and resupply<br />
</em><em>Disease detection and contact tracing<br />
</em><em><u>Disinformation and deepfake detection<br />
</u></em><em>DNA sequencing and classification<br />
</em><em><u>Due diligence research</u><br />
</em><em>Economics—analysis and prediction<br />
</em><em>Education—curricula, content, and proficiencies<br />
</em><em>Emergency detection and response<br />
</em><em>Energy—control, creation, efficiency, and optimization<br />
</em><em>Entertainment—preferences, creation, and delivery<br />
</em><em>Environmental impact and conservation analyses<br />
</em><em><u>Finances—personal, business</u><br />
</em><em><u>Financial services—market analysis, trading</u><br />
</em><em>Food—processing, preservation, distribution<br />
</em><em>Forest fire prediction, control, and containment<br />
</em><em>Fraud detection—online, identity, credit<br />
</em><em>Game creation and competitive play<br />
</em><em>Handwriting recognition<br />
</em><em>Harvesting—agriculture<br />
</em><em>Healthcare—plan management and diagnostics<br />
</em><em>Image processing<br />
</em><em>Information retrieval and data mining<br />
</em><em>Insurance underwriting<br />
</em><em>Internet fraud detection<br />
</em><em>Language translation—verbal and written<br />
</em><em>Law enforcement<br />
</em><em>Legal—research, analysis, and writing<br />
</em><em>Logistics—supply and distribution<br />
</em><em>Manufacturing—facilities and processes<br />
</em><em>Market analysis<br />
</em><em>Marketing strategies<br />
</em><em>Media—customer preferences and content<br />
</em><em><u>Medical—research, diagnosis, and treatment<br />
</u></em><em>Military—all aspects, like any other enterprise<br />
</em><em>National security<br />
</em><em>Natural language processing<br />
</em><em>Navigation—land, air, and water<br />
</em><em><u>News verification—authenticity and fact checking<br />
</u></em><em><u>Online advertising<br />
</u></em><em>Performance analysis—materials, products, people<br />
</em><em>Personnel—assessment and optimization<br />
</em><em><u>Pharmaceutical—research and development</u><br />
</em><em>Politics—analysis, polling, and messaging<br />
</em><em>Products—design, manufacturing, and assembly<br />
</em><em>Proving mathematical theorems<br />
</em><em>Quality control—products and services<br />
</em><em>Retail—inventory and pricing<br />
</em><em>Robotics—spatial assessment and locomotion<br />
</em><em>Scientific research—all branches<br />
</em><em>Search engine optimization<br />
</em><em>Security—premises, personal, and virtual<br />
</em><em><u>Self-driving, self-flying, self-sailing vehicles</u><br />
</em><em>Shipping—logistics, sorting, and handling<br />
</em><em>Social media—networking, implementation, analysis<br />
</em><em>Software design and engineering<br />
</em><em>Space exploration<br />
</em><em>Speech recognition<br />
</em><em>Telecommunications—service and efficiencies<br />
</em><em>Transportation—all types, all facets<br />
</em><em>Vaccines—research, development, and delivery<br />
</em><em>Weather—analysis and prediction<br />
</em><em>Wildlife research, assessment, and conservation</em></p>
<p style="text-align: justify;"><em>Unlike humans, machine learning applications are not capable of applying strategies, knowledge, or skills acquired in one area to another. They’re therefore called <strong>narrow AI</strong>. … Narrow AI is very strong, efficient, and quite capable at its purposed job. It’s just incompetent at anything beyond it. … AGI, also known as <strong>strong AI</strong>, is the type of hypothetical artificial intelligence that could operate beyond a single domain of information or task orientation, and that could perform successfully at any intellectual task just as well as a human. While we’ve been hypothesizing that AGI is right around the corner for many decades, the more we’ve accomplished in the science of AI, and the more we’ve come to appreciate the deepest mysteries of the human brain, <u>the more we’ve realized just how hard it would be for a machine to become capable of achieving anything other than specifically oriented tasks</u>. Humans may not be able to process data as fast or as comprehensively as computers, but we can think abstractly and across purposes. We can plan and sometimes even intuit the solutions to problems, at a general level, without even analyzing the details. This is what we sometimes characterize as common sense, which is something computers simply don’t have—and that no currently known technology can give them. For the foreseeable future, AI will not be able to create or solve anything from something that isn’t there, from data it doesn’t specifically have, or for something it hasn’t specifically been designed and trained to do<u>. AI has no intuitive or transferrable abilities</u>—and, for now, it’s not going to acquire those abilities. Machine learning technology just doesn’t allow for it. That’s not to say, unequivocally, that general AI will never occur. But it would require a new breakthrough and an entirely different technological approach than those described in the previous pages. 
<u>Whether such a breakthrough occurs at some point in the future remains, like all things, to be seen. But it’s not on the visible horizon</u>.</em></p>
<p style="text-align: justify;"><strong><em>Chapter 10: Bias in the Machine</em></strong></p>
<p style="text-align: justify;"><em>It’s quite common for <u>human biases to be reflected in our data and, when they are, it stands to reason that any analyses, strategies, or predictions based on that data will be biased as well</u>. Worse, if decisions are made or actions are taken based on biased analyses, then the underlying biases will of course perpetuate, and possibly ingrain, historical or cultural inequities even deeper into our lives.</em></p>
<p style="text-align: justify;"><em>Microsoft’s engineers designed the chatbot to learn from the speech patterns and content of the human responses to its tweets. Consistent with Tay’s machine learning algorithm, it quickly recognized patterns in the onslaught of conversational input it received. Unfortunately, people are . . . well, people are who they are. Their tweets back to Tay were filled with intentionally racial and sexually biased slurs. The chatbot, unable to discern the impropriety of such speech, emulated the input and started to reply, very efficiently, in kind. In only a matter of hours, the algorithm’s singular ability to learn—only from the data it obtained and the patterns it assessed—caused it to devolve from an unbiased machine chatbot to a frighteningly prejudiced and outspoken technological monster<u>, tweeting racial and xenophobic slurs of every kind imaginable</u>. I won’t repeat any of them here, but a Google search of “<a href="https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist" target="_blank" rel="noopener noreferrer">Tay’s tweets</a>” will pull up a compilation if you’re interested.</em></p>
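The dynamic behind the Tay episode is mechanical, not mysterious: a model that learns only from the text it is fed can only reproduce the patterns that text contains. A toy bigram "chatbot" makes the point; the skewed training corpus here is invented for illustration and has nothing to do with Tay's actual data.

```python
# A toy bigram "chatbot": it learns which word follows which in its training
# text, then generates replies by sampling those observed continuations.
# If the training data is skewed, the output is skewed -- it has no other
# source of language. The corpus below is invented for illustration.
import random
from collections import defaultdict

def train(corpus):
    """Record which word follows which in the training sentences."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def reply(model, start, length=4):
    """Generate by repeatedly sampling a word the training data saw next."""
    words = [start]
    for _ in range(length):
        options = model.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# A skewed training set: nearly every sentence about "group" is negative.
corpus = ["group is bad", "group is awful", "weather is nice"]
model = train(corpus)

random.seed(0)
print(reply(model, "group"))   # can only ever emit what the corpus contained
```

The bot cannot "discern the impropriety" of its output for the same reason Tay could not: nothing in the learning mechanism evaluates the training data, it only imitates it.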
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1579" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-9-of.jpg" alt="" width="374" height="340" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-9-of.jpg 374w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-9-of-300x273.jpg 300w" sizes="(max-width: 374px) 100vw, 374px" /></p>
<p style="text-align: justify;"><strong><em>Chapter 11: From Robots to Bots</em></strong></p>
<p style="text-align: justify;"><em>“I believe that robots should only have faces if they truly need them.” —<a href="https://cogsci.ucsd.edu/~norman/" target="_blank" rel="noopener noreferrer">Donald Norman</a>, Director, The Design Lab at the University of California, San Diego</em></p>
<p style="text-align: justify;"><em>Despite the confusion, it’s important to understand the differences between <strong>machines</strong>, <strong>robots</strong>, and <strong>bots</strong>—especially because of the different ways artificial intelligence and machine learning applications can be utilized in each.  … <u>Robots, then, are machines that have at least some minimal level of autonomous functionality enabled by some type of computer or information processor</u>.</em></p>
<p style="text-align: justify;"><em>The kinds of robots we’ve talked about so far, regardless of how big or small, all have physical, mechanical structures of one sort or another. But <u>another kind of robot, without any physical form or material existence at all, also exists in the digital, virtual world of software and the internet. The most commonly recognizable of these virtual bots is the chatbot</u>, which is a computer program designed to impersonate humans and simulate human conversation, either in writing, text, or voice.</em></p>
<p style="text-align: justify;"><em><u>Virtual bots can also be used, however, for invasive and malicious purposes</u> . . . as malware, computer viruses, or cyberattack agents. Without any physical presence, they’re of course more difficult to identify and defend against than a traditional physical attack would be. They’re the invisible intruders against which cybersecurity efforts are usually directed. One of the difficult realities of malicious internet bots is that they’re intentionally designed to go unnoticed and remain hidden. <u>They can lurk within the vast array of algorithms and code that make up the internet</u>, and they can also lurk within a single network, or even an individual computer or software system. Worse, they usually hide behind file names and functions that are similar or identical to regular, necessary files, making them extremely difficult to recognize.</em></p>
<p style="text-align: justify;"><strong><em>Part 3: The Sovereign State of AI: Technology’s Impact on the Global Balance</em></strong></p>
<p style="text-align: justify;"><strong><em>Chapter 12: Moments That Awaken Nations</em></strong></p>
<p style="text-align: justify;"><em>The launch of Sputnik proved those concerns warranted. It wasn’t the satellite in orbit Americans feared; it was the Russian rocket that put it there. If the Soviet launch vehicle could carry 184 pounds of machinery into space, it could likely carry a nuclear warhead as well. And if that warhead was directed to reenter the atmosphere over North America, then the Soviets could presumably unleash its payload anywhere in the United States they chose.</em></p>
<p style="text-align: justify;"><em>Just after the Explorer 1 launch, Eisenhower created the Advanced Research Projects Agency (ARPA) to collaborate with academic, industry, and government partners in order to formulate, expand, and fund science and technology R&amp;D projects. The agency’s name was later changed to the Defense Advanced Research Projects Agency (DARPA). As we’ve discussed in prior chapters, <u>DARPA went on to be the leading catalyst behind a long list of technologies now enabling the world, including computer networking, the internet, robotics, and self-driving cars</u>. Eisenhower also proposed to Congress the creation of a civilian National Aeronautics and Space Administration (NASA) to oversee the US space program. By mid-June 1958, both houses of Congress had passed versions of a NASA bill. They were quickly consolidated and Eisenhower signed NASA into law on July 29, 1958. Within two months, the nation’s new space agency was up and running.</em></p>
<p style="text-align: justify;"><em>Because of <u>America’s historic response to the Soviet Union’s first satellite launch,</u> similar occasions—events that cause nations to suddenly realize they must work urgently to bridge or surpass a gap that’s arisen between them and a competitor—are now commonly called <a href="https://www.space.com/10437-sputnik-moment.html" target="_blank" rel="noopener noreferrer">Sputnik moments</a>.</em></p>
<p style="text-align: justify;"><em>First, though, <u>in September 2013</u>, Xi unveiled a new Chinese program for foreign infrastructure and economic initiatives throughout Asia, Europe, the Middle East, and Africa. Called the <u>Belt and Road Initiative (BRI)</u>, the policy is historically unparalleled. It’s designed to build a unified market of international trade, economic reliance, and cultural exchange broadly similar in function and value to the Silk Road trade routes that connected the Far East to Europe and the Middle East from antiquity to the fifteenth century. … intended to <u>make China the dominant global power in high-tech manufacturing by providing government subsidies to further mobilize state-controlled enterprises and encourage the acquisition of intellectual property from around the globe</u>. Essentially, the plan is a cohesive effort to move China away from being the world’s foremost provider of cheap labor and manufactured goods to become the world’s foremost producer of new, high-value products in the pharmaceutical, automotive, aerospace, semiconductor, telecommunications, and robotics fields.</em></p>
<p style="text-align: justify;"><em><u>In July 2017</u>, <strong>only two months after Ke Jie’s loss to AlphaGo</strong>, the State Council of China released a landmark <strong>new plan</strong> for government-sponsored, statewide development of artificial intelligence. Titled the “<a href="https://www.newamerica.org/cybersecurity-initiative/digichina/blog/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/" target="_blank" rel="noopener noreferrer">Next (New) Generation Artificial Intelligence Development Plan</a>,” China’s massive <u>three-part program</u> laid out the steps necessary to accomplish specific benchmarks by maximizing the country’s productive forces, national economy, and national competitiveness. The express purpose of the plan is to create an innovative new type of nation, led by science and global technological power, to achieve what Xi calls “the great rejuvenation of the Chinese nation.” First, <strong>by 2020</strong>, the plan spells out <u>China’s intent to equal the most globally advanced levels of AI technology </u>and application capabilities in the US or anywhere else in the world. … Second, <strong>by 2025</strong>, the Chinese intend to capture a verifiable lead over the US and all other countries in the development and production of all core AI technologies, while at the same time making them the structural strength of China’s ongoing industrial and economic transformation. The AI industry will enter into the global high-end value chain. This new-generation will be widely used in <u>intelligent manufacturing, intelligent medicine, intelligent city, intelligent agriculture, national defense construction, and other fields</u> . . . Last, <strong>by 2030</strong>, China intends to <u>lead the world in all aspects of AI</u>. 
[B]y 2030, China’s AI theories, technologies, and applications should <u>achieve world-leading levels</u>, making China the world’s primary AI innovation center, achieving visible results in intelligent economy and intelligent society applications, and laying an important foundation for becoming a leading innovation-style nation and an economic power. As we’ll see in the next chapter, the <u>Chinese government has already taken significant steps to accomplish its national AI objectives</u>, and it has done so in ways most <strong>Westerners don’t yet realize and will find hard to fathom</strong>.</em></p>
<p style="text-align: justify;"><strong><em>Chapter 13: China’s Expanding Sphere</em></strong></p>
<p style="text-align: justify;"><em>… it is important to acknowledge a political reality. Through processes of <u>free elections</u>, by design—if not always perfect execution—<u>democratic systems of government must eventually respond and account to the majority will of their people</u>, or at least to the will of the people’s elected representatives. The same isn’t true in nondemocratic or authoritarian countries where citizens have no real voice or vote. That’s a definitional reality. Again, <strong>democracy</strong> doesn’t always play out perfectly, but <u>it at least allows free speech, open conversation, informed debate, and peaceful protest</u>. Most importantly, democracy allows for multiple parties and political opposition. <u>Authoritarian governments do not</u>, at least not to any meaningful or ultimately effective degree. Also, throughout all of the following pages, it is not my purpose to denigrate the people or population of any nation, nor to suggest that the morals, ethics, or integrities of any population are better or worse than another. Populations should not and cannot be stereotyped, nor should anyone speak to the mind-set of others or generalize about a culture they’ve not experienced themselves. Governments and administrations, however, along with their policies and practices, can be characterized and ought to be criticized when the circumstances warrant.</em></p>
<p style="text-align: justify;"><em><u>Throughout Mao’s rule, the controlling Communist Party relied heavily on mass</u> surveillance to ensure the political and social conformity of its people. Before the development of technology, social control was accomplished primarily through harsh government retribution against anyone suspected of anti-party attitudes or ideas. Throughout Mao’s rule, perceived violations of Communist Party doctrine were handled swiftly and severely by the central and local governments. Police and military repressions and <u>mass executions of the Chinese people were commonplace</u>. Tens of millions were killed, and tens of millions more were sent to forced labor camps where an uncountable number of additional Chinese citizens perished under brutal conditions. Even outside of the camps, <u>forced suicides and widespread famine were commonplace</u>.</em></p>
<p style="text-align: justify;"><em>With the advance of twenty-first-century technology, the watchful eye of the Communist Party’s authority has become even more penetrating. <u>Digital methods of censorship, surveillance, and social control have become unavoidable, integral parts of Chinese society</u>. Those methods provide the Communist Party, which essentially is the state, with powerful eyes, ears, and influence over most aspects of its citizens’ lives. Again, and as stated<u>, I am not criticizing the Chinese people themselves, nor suggesting that China is entirely alone in surveilling its population. The extent and unchecked degree to which China is doing so, however, is far beyond any Western notions of national security or local crime control rationales for doing so</u>.  …  Tracking physical activities through cameras, however, is only the beginning. China’s influence and control also invasively extend to people’s use of the internet and to their personal digital devices. China’s internet and digital market is controlled primarily by three corporate technology giants—<u>Baidu, Alibaba, and Tencent</u> (collectively referred to as “BAT”).  … <u>As of 2019, Tencent, Alibaba, and Baidu ranked as the third, fifth, and eighth largest internet companies, respectively, in the world</u>. Combined, their power and range are colossal—particularly with respect to AI. It is currently estimated that more than half of all Chinese companies that are in any way involved in AI research, development, or manufacturing have ownership or funding ties that relate directly back to one of the three.  Regardless of the formal structure of their ownership, <u>Chinese companies are subject to a mandated and direct influence from the Communist Party</u>. Its largest enterprises, including the large tech giants Baidu, Alibaba, and Tencent, are <u>required to have Communist Party committees within their organizations</u>.</em></p>
<p style="text-align: justify;"><em>Westerners often mistakenly assume that the content they can access on the internet is essentially the same as what’s available to residents of other countries. But that’s entirely untrue, and China’s control of its internet is one of the most glaring examples.  … Beyond censoring and monitoring the internet, China also surveils its masses by collecting data from their personal devices—most notably their mobile devices and the apps they rely upon to conduct their daily affairs. Since 2015, China has been developing a “<a href="https://www.wired.co.uk/article/china-social-credit-system-explained" target="_blank" rel="noopener noreferrer">social credit system</a>” powered by AI that is expected to be a unified, fully operational umbrella covering all 1.4 billion of its people by 2022. <u>The system is meant to collect all forms of digital data in order to calculate the “social trustworthiness” of individual citizens</u>, and then reward or punish them by either allowing or restricting various opportunities and entitlements based on their scores. The formal and publicly stated aim of the system is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.” An additional party slogan for the system is “Once discredited, limited everywhere.” The analogies to George Orwell’s novel 1984, and its themes of government overreach and <u>Big Brother’s</u> regimentation of social behavior are hard to deny. … As a result, <u>Chinese citizens can find themselves blacklisted or otherwise restricted</u> from renting cars, buying train or airplane tickets, obtaining favorable loan rates, acquiring insurance, purchasing real estate or otherwise obtaining affordable housing, making financial investments, and even attending preferred schools or qualifying for certain jobs and career opportunities.</em></p>
<p style="text-align: justify;"><em>While some contend that China’s use of digital and AI technologies shouldn’t be criticized—and that Xi’s government is entitled to their applications as somehow culturally appropriate and politically acceptable—widespread reports of what’s transpiring in China’s largest western region argue otherwise. Well over 90 percent of mainland China’s population is composed of the <strong>Han</strong> Chinese. All Han share a deeply rooted, common genetic ancestry tracing back to ancient civilizations that originally inhabited a single region along the Yellow River in northern China. Throughout most of China’s recorded history, the Han Chinese have been the culturally dominant majority. … Most <strong>Uighurs </strong>are Muslim. Due to their cultural differences from the Chinese majority, frictions with the Communist Party and central government have existed for many decades. … In recent years, there have been worldwide accusations and consensus that China is guilty of extreme human rights abuses against the Uighurs. It’s now commonly reported that <u>China is detaining between one and two million Uighurs, or 10 to 20 percent of all Uighur people, in more than a hundred detention camps in Xinjiang</u>. Most of those detained aren’t accused of any crimes, and very few records or information are even publicly available.</em></p>
<p style="text-align: justify;"><em>Also, consistent with the Communist Party and central government’s approach elsewhere in the country, <u>officials are using a scoring system to determine when, or if, those detained will be released.</u> One document specifically instructs officials to tell inquiring family members that their own behavior could compromise their detained relatives’ scores. Specifically, authorities are advised to say: “Family members, including you, must abide by the state’s laws and rules and not believe or spread rumors. Only then can you add points for your family member, and after a period of assessment they can leave the school if they meet course completion standards.”</em></p>
<p style="text-align: justify;"><em>While CEIEC is an acknowledged, wholly state-owned enterprise, <u>Huawei’s specific ownership is more of a mystery. Officially, the company claims to be 99 percent owned by its employees, their interests purportedly flowing indirectly through a labor union. To date, though, outside experts haven’t been able to clarify the true structure</u>. What is clear, however, is that <u>Huawei is linked to China’s party-state in ways even more direct than those of most Chinese enterprises, and the government’s intelligence agencies undoubtedly have leverage and influence over the company’s decisions, activities, and data—a disconcerting reality given that Huawei has exported telecommunication infrastructures, equipment, and related consumer electronics to more than 150 countries around the world</u>. The next great change in digital technology and capability will come in the form of 5G technologies, in which Huawei is an industry leader. The term 5G stands for the “fifth generation” of wireless cellular technology, which won’t just be an improvement over 3G and 4G capabilities; it will be a transformation. Engineered to operate using millimeter radio waves as signals, 5G networks will transform the internet with a broadband capacity perhaps 100 times the capacity of current 4G networks, and with network response times that will be 10 to 100 times faster than 4G. While a clean, uninterrupted connection to a 4G network produces response times of about 45 milliseconds, a <u>5G network will produce response times possibly less than 1 millisecond, which is 300 to 400 times faster than the blink of an eye</u>.</em></p>
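[AA NOTE: A quick arithmetic check of the latency figures quoted above. The 300 to 400 ms blink duration is the commonly cited range and is taken here as an assumption.]

```python
# Sanity-check the quoted latency ratios.
latency_4g_ms = 45   # typical 4G round-trip response time (per the text)
latency_5g_ms = 1    # projected 5G response time (per the text)
blink_ms_low, blink_ms_high = 300, 400  # assumed blink-of-an-eye duration

print(latency_4g_ms / latency_5g_ms)   # 45.0  -> 5G ~45x faster than 4G
print(blink_ms_low / latency_5g_ms)    # 300.0 -> "300x faster than a blink"
print(blink_ms_high / latency_5g_ms)   # 400.0 -> "400x faster than a blink"
```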
<p style="text-align: justify;"><em><u>At the time of this writing, 5G is available in very few locations around the globe</u>. But the transition from 4G is well underway, and, consistent with the Belt and Road Initiative, <u>Huawei is aggressively marketing its ability to provide 5G</u> core infrastructures and consumer devices to countries and regions throughout the world. … <u>In the US, the Department of Defense has banned sales of Huawei</u> products on military bases, the Federal Communications Commission (FCC) has proposed rules that would formally prevent any American telecom company from using Huawei equipment, and various other legislation is being proposed to protect the country’s infrastructure from the risks Huawei might cause.</em></p>
<p style="text-align: justify;"><em>Militarily, China doesn’t approach the size, power, or sophistication of the US and its allies, which lead by a wide margin on the ground, in the air, and at sea. But China views AI technology as its opportunity to leapfrog certain phases of weapons development to bridge the gap between it and the US. In October 2018, the deputy director of the General Office of China’s Central Military Commission confirmed the Chinese military’s vision for AI when he characterized China’s overall goal as an effort to “<u>narrow the gap between the Chinese military and global advanced powers</u>” by taking advantage of the “ongoing military revolution . . . centered on information technology and intelligent technology.”</em></p>
<p style="text-align: justify;"><strong><em>Chapter 14: Russian Disruption</em></strong></p>
<p style="text-align: justify;"><em>During his years in office—a period that has now spanned the American presidential administrations of Clinton, Bush, Obama, and Trump—<u>Putin has greatly reduced the country’s poverty percentage</u>, lowered its personal and corporate tax rates, increased wages, and enhanced the country’s consumption and general standard of living. All of Putin’s reforms have resulted in a middle class that’s grown by tremendous numbers since he first took office. As a former KGB agent and onetime head of the KGB’s successor agency, the Russian Federal Security Service (FSB), Putin entered office with deep connections and alliances to the nation’s controlling intelligence and military agencies. Although Russia is constitutionally structured as a multiparty democracy, under Putin’s leadership it’s in fact something far different. <u>The country is better described as a bureaucratic autocracy, in which any political opposition that threatens Putin’s standing is routinely suppressed</u>, as are any unfavorable domestic press or media reports.</em></p>
<p style="text-align: justify;"><em>Although he downplays his financial holdings at every opportunity, <u>many experts believe Putin has become one of the world’s wealthiest individuals</u> since taking power—with a fortune spread across a wide range of secretly held Russian oil, natural gas, real estate, and other corporate interests.</em></p>
<p style="text-align: justify;"><em>As a consequence of that background and the ongoing international trade and related difficulties confronting Russia, <u>the country is dramatically far behind both the US and China in AI investments,</u> research facilities, expert talent, and development capabilities. Simply put, <u>Russia doesn’t have the available funding</u>—from either domestic or foreign sources—and is without the technological infrastructure and expertise required to match the level of AI efforts and accomplishments taking place in other parts of the world.</em></p>
<p style="text-align: justify;"><em><u>Realizing their economic obstacles, Putin’s defense agencies and military designers are aggressively putting machine learning technologies to their most immediately accomplishable and impactful uses</u>—electronic warfare (EW) and robotic weapons. … The second AI track the Kremlin is focused on is <u>domestic and international propaganda, surveillance, and <strong>disinformation</strong></u>. Since Putin first became president, mandates to control and manipulate information have been key components of his policies. Now, some 20 years later, Putin’s administration is still intent on accomplishing its own form of domestic digital authoritarianism. The government’s control of traditional and digital media sources and its repression of independent media outlets have increased under Putin’s reign. <u>There are more reporters in Russian prisons now than at any point since the fall of Soviet Russia</u>. Digital surveillance and social control strategies have been enhanced. Russian social and political speech is monitored carefully, especially for those considered activists or political adversaries, and Putin is now looking to create an independent, sovereign internet that will be fully controlled by the Kremlin and shield all of Russia from vast amounts of outside information, akin to the Great Firewall of China. Russia’s System of Operative Search Measures (SORM) was first created in 1995 and requires all Russian telecommunications and internet providers to install hardware provided by the FSB that gives it the ability to monitor Russian phone calls, emails, texts, and web browsing activities. Five years later, during Putin’s first week in office, he expanded the SORM’s reach by allowing a number of additional Russian security agencies apart from the FSB to gather SORM information from Russian citizens and foreign visitors.</em></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1580" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-10-of.jpg" alt="" width="562" height="336" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-10-of.jpg 562w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-10-of-300x179.jpg 300w" sizes="(max-width: 562px) 100vw, 562px" /></p>
<p style="text-align: justify;"><em>It is a standard doctrine of Russian military strategy to conduct information warfare (“<a href="https://www.brookings.edu/research/weapons-of-the-weak-russia-and-ai-driven-asymmetric-warfare/" target="_blank" rel="noopener noreferrer">informatsionaya voyna</a>”) to interfere in the politics and operations of its foreign adversaries through cyber and other operations. The 2010 “<a href="https://carnegieendowment.org/files/2010russia_military_doctrine.pdf" target="_blank" rel="noopener noreferrer">Military Doctrine of the Russian Federation</a>” specifically says such measures are taken, “to <u>achieve political objectives without the utilization of military force</u>.” This is nothing new. Information warfare is a long-held Russian military concept that goes back to the earliest days of the Cold War. As <a href="https://en.wikipedia.org/wiki/Valery_Gerasimov" target="_blank" rel="noopener noreferrer">General Valery Gerasimov</a>, the chief of the general staff of the armed forces of Russia, publicly acknowledged as recently as March 2019, the Russian government and military consider it a simple reality of international power and politics that they should, and do, conduct information and propaganda campaigns, including political interference, as an integral part of their regular national defense strategies. Even the most recently published “Military Doctrine of the Russian Federation” (2015) expressly states that one feature of modern military conflict is “exerting simultaneous pressure on the enemy throughout the enemy’s territory in the global information space.” Further, and perhaps most pertinent, Russian military doctrine does not differentiate between times of war and times of peace with respect to strategic noncombat measures waged against adversaries . . . and, by any objective and informed account, Russia considers any country of significant global standing that is not its formal ally to be its adversary. 
Russian information warfare tactics don’t have the absolute goal of convincing foreign populations that disinformation and lies are necessarily the truth. Instead, Putin’s Russia considers it strategically sufficient just to plant seeds of confusion, doubt, and disruption in the populations of foreign adversaries. The goal, first and foremost, is to internally polarize populations. Leading political theorists have long recognized disinformation as a basic tenet of governments with totalitarian orientations. [AA NOTE: The Russian term for these military deception strategies is <a href="https://en.wikipedia.org/wiki/Russian_military_deception#:~:text=Russian%20military%20deception%2C%20sometimes%20known,camouflage%20to%20denial%20and%20deception." target="_blank" rel="noopener noreferrer">Maskirovka</a>.]</em></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1581" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-11-of.jpg" alt="" width="223" height="347" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-11-of.jpg 223w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-11-of-193x300.jpg 193w" sizes="(max-width: 223px) 100vw, 223px" /></p>
<p style="text-align: justify;"><em>“<u>A people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can do what you please</u>.” —Twentieth-century German American philosopher <a href="https://en.wikipedia.org/wiki/Hannah_Arendt" target="_blank" rel="noopener noreferrer">Hannah Arendt</a></em></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1582" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-12-of.jpg" alt="" width="225" height="236" /></p>
<p style="text-align: justify;"><em>With respect to the latter, in 2019, an entirely new and dangerous category of AI disinformation technology began to emerge called <strong>deepfakes</strong>. Using machine learning techniques, <u>a deepfake is a video and/ or audio clip that shows individuals appearing to do or say things that, in actuality, were never done or said—essentially creating events that never occurred</u>. The danger of such technology can’t be overstated, and its potential to sow discord by adversely affecting public impression, opinion, and politics is significant.</em></p>
<p style="text-align: justify;"><em>To what extent the Russian efforts materially influenced the actual outcome of the 2016 US election is, for purposes of this conversation, irrelevant. <u>What should alarm every American citizen, in fact every world citizen, is that intelligence agencies across the globe had little doubt that Russia would continue those interference strategies in the future</u>.</em></p>
<p style="text-align: justify;"><strong><em>Chapter 15: Democratic Ideals in an AI World</em></strong></p>
<p style="text-align: justify;"><em>Just over half the world now has systems of government that can fairly be characterized as democratic, but the proliferation of democracy, even with the US as a principal model, has only occurred in the last 75 years. <u>In 1945, at the close of World War II, there were only 12 democratic governments. Now, approximately 100 of the 195 states recognized by the United Nations are democracies in structure and overall ideology</u>.</em></p>
<p style="text-align: justify;"><em>The DoD will itself be a significant developer and user of AI technologies in years going forward. As the mandated national defender of American rights and dignities, it will also be the country’s primary protector in the face of foreign AI. In response to the global technological changes so rapidly occurring, along with the world’s apparent return to an era of aggressive, strategic competition, <u>the DoD is now taking meaningful steps to ensure the ethical design and use of AI, both domestically and abroad</u>.</em></p>
<p style="text-align: justify;"><em>Khashoggi’s death and its aftermath reminded many in the Western world of Saudi ruthlessness, and that much of what the Saudi government appears to stand for is antithetical to democratic ideals of human dignities and freedoms—particularly free speech and free press.</em></p>
<p style="text-align: justify;"><em>On a separate, but directly related note, a Saudi mobile app available from both Apple and Google merits discussion. “<a href="https://www.hrw.org/news/2019/05/06/saudi-arabias-absher-app-controlling-womens-travel-while-offering-government" target="_blank" rel="noopener noreferrer">Absher</a>” (roughly translated as “Yes, Sir” or “Yes, done”) is a product of the Saudi Interior Ministry that gives Saudi Arabian men the ability to exercise their guardian rights over women by tracking their locations and blocking their ability to travel, conduct financial transactions, and even obtain certain medical procedures. To a country’s leadership that considers it culturally appropriate and legally acceptable to discriminate and control the rights of women, this type of app is a perfectly acceptable and socially efficient tool of AI. As is clear from earlier pages, it should come as no surprise that countries and cultures—in fact, any country or culture—will use AI in ways they deem morally and legally acceptable. While the Absher app is but one example, it highlights an imperative question for private enterprise that develops AI under the freedoms provided by democratic principles. That question is whether companies should participate in or enable oppressive uses of their commercial technologies by countries with vastly contrary cultural and moral codes. These types of issues deserve transparent debate, and a cooperative and consistent approach from democratic governments and their private institutions alike.</em></p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1583" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-13-of.jpg" alt="" width="396" height="221" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-13-of.jpg 396w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-13-of-300x167.jpg 300w" sizes="(max-width: 396px) 100vw, 396px" /></p>
<p style="text-align: justify;"><strong><em>Chapter 16. A Computer&#8217;s Conclusion</em></strong></p>
<p style="text-align: justify;"><em>Our job is now to convince the public in particular that using AI to achieve these aims is a necessary and desirable part of our society, but we cannot afford to do so unless we know how it will best be used and when. But in the end, the future demands we make moral decisions as we begin to build a world that is truly safe and sustainable, one where humans and AI can truly coexist together. — <a href="https://openai.com/blog/gpt-2-1-5b-release/" target="_blank" rel="noopener noreferrer">GPT-2</a> (1558 Model) An OpenAI Language-Generating Neural Network</em></p>
<p style="text-align: justify;"><em><u>GPT-2 is a large-scale, unsupervised machine learning application created by the American nonprofit organization, OpenAI</u>.  An acronym for <strong>Generative Pre-Training, Version 2</strong>, the GPT-2 application was <u>trained on a data set of eight million web pages and designed as an intricately deep neural network capable of weighing 1.5 billion parameters</u>. Its narrow task is to generate humanlike, written language responses to submissions of text, or “prompts,” that it generates in the form of either a proposed continuation of the prompt, or a response if the prompt was submitted in the form of a question. In essence, GPT-2’s function is to create additional words that are: (1) consistent with the patterns and content of new text submissions, and (2) based on patterns the program has discerned from its immense training set of internet information.</em></p>
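<p style="text-align: justify;">As a rough sense of scale for that 1.5-billion-parameter figure: GPT-2&#8217;s largest release (the &#8220;1558M&#8221; model quoted in the epigraph above) used 48 transformer layers, a hidden size of 1,600, a 50,257-token vocabulary, and a 1,024-token context window. The Python sketch below is purely illustrative (not OpenAI&#8217;s code) and simply tallies the parameters those published hyperparameters imply:</p>

```python
# Back-of-envelope GPT-2 XL parameter count from its published hyperparameters.
# Weight tying and minor implementation details are ignored, so this is an
# illustrative estimate, not an official figure.
def gpt2_param_count(n_layer=48, d=1600, vocab=50257, ctx=1024):
    per_block = (
        3 * d * d + 3 * d    # attention: fused Q, K, V projections (+ biases)
        + d * d + d          # attention output projection (+ bias)
        + 4 * d * d + 4 * d  # MLP up-projection, d -> 4d (+ bias)
        + 4 * d * d + d      # MLP down-projection, 4d -> d (+ bias)
        + 2 * (2 * d)        # two layer norms (scale and shift each)
    )
    embeddings = vocab * d + ctx * d  # token + positional embeddings
    final_ln = 2 * d                  # final layer norm
    return n_layer * per_block + embeddings + final_ln

print(f"{gpt2_param_count() / 1e6:.0f}M parameters")  # prints "1558M parameters"
```

<p style="text-align: justify;">The total works out to roughly 1.56 billion, consistent with the &#8220;1558M&#8221; label OpenAI gave this model.</p>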
<p style="text-align: justify;"><em>Before closing this book, I thought it would be an appropriate experiment to <u>informally test GPT-2 myself</u>. On the very first page of this book, in the short Author’s Note just before the Prologue, I wrote: Our focus now must be to openly address the current realities of AI to ensure, as well as we can, that it is implemented only in ways consistent with fundamental human dignities . . . and only for purposes consistent with democratic ideals, liberties, and laws. At this point in your reading, I trust you know how convinced I am of those words. They seemed a perfect choice to test GPT-2, an appropriate sample to see what kind of “continuation” the program would produce. When I typed and submitted them into the program, its generator displayed a response almost immediately. The words the algorithm created, on its own and in less time than it took to lift my fingers from the keyboard, are shown as the epigraph at the start of this chapter. They’re so cogent to the entirety of this book that they bear repeating. So, here they are. This is from an algorithm familiar with eight million web pages, but <u>prompted only by my 43 words</u>: “<strong>Our job is now to convince the public in particular that using AI to achieve these aims is a necessary and desirable part of our society, but we cannot afford to do so unless we know how it will best be used and when. But in the end, the future demands we make moral decisions as we begin to build a world that is truly safe and sustainable, one where humans and AI can truly coexist together</strong>.” Impressive? I think so. And <u>I couldn’t have said it better myself</u>. In fact, in an instant and in those two sentences, an artificially intelligent program <u>captured the essence of what I’ve endeavored to make clear through the previous 15 chapters</u>.</em></p>
<p style="text-align: justify;"><em>What I do know, however, is that we’re now at an <strong>inflection point</strong> in the history of the human race. What we do with respect to AI will impact our present, our future, and perhaps our eventual destiny. The strengths of free nations and democratically represented people are, and will always be, their ability to work cooperatively together in order to preserve their individual liberties and ways of life. This is no time to distance ourselves, to be passive or distracted.</em></p>
<p style="text-align: justify;"><em>If this book contributes in any way to a <u>better understanding of AI and an enhanced appreciation of its significance</u>, then I’ll have accomplished my mission. It’s time for another awakening, a public awareness, and a conscientious consensus. Those who one day look back upon these times should not be left wishing our eyes had been more open.</em></p>
<p style="text-align: justify;"><strong><em>Acknowledgments</em></strong></p>
<p style="text-align: justify;"><em>Onward then to my publisher, Glenn Yeffeth of BenBella Books, who was also an extraordinary connection. He was enthusiastic from the start, but most importantly allowed me the freedom to write the book my way. Whereas others might have constrained or altered <u>my structure, thinking it too broad in scope, Glenn took the risk of letting me follow my original vision</u>—trusting that I could sweep from A to Z in some reasoned and lucid manner. Authors often complain, at least to one another, that their publisher took control of their style, their structure, or even their title—yes, publishers have the final say on many more aspects of a finished book than readers have reason to know. But that was never the case with Glenn and his team. Their contributions and insight were immensely constructive and creative, but also, always, cooperative and deferential.</em></p>
<p style="text-align: justify;"><em> </em></p>
<p style="text-align: justify;"><strong><u>Amazon Book Description</u></strong></p>
<p style="text-align: justify;">Late in 2017, the conversation about the global impact of artificial intelligence (AI) changed forever. China delivered a bold message when it released a national plan to dominate all aspects of AI across the planet. Within weeks, Russia&#8217;s Vladimir Putin raised the stakes by declaring AI the future for all humankind, and proclaiming that, &#8220;Whoever becomes the leader in this sphere will become the ruler of the world.&#8221;  The race was on. Consistent with their unique national agendas, countries throughout the world began plotting their paths and hurrying their pace. Now, not long after, the race has become a sprint. Despite everything at risk, for most of us AI remains shrouded by a cloud of mystery and misunderstanding. Hidden behind complex technical terms and confused even further by extravagant depictions in science fiction, the realities of AI and its profound implications are hard to decipher, but no less crucial to understand. In T-Minus AI: Humanity&#8217;s Countdown to Artificial Intelligence and the New Pursuit of Global Power, author Michael Kanaan explains the realities of AI from a human-oriented perspective that&#8217;s easy to comprehend. A recognized national expert and the U.S. Air Force&#8217;s first Chairperson for Artificial Intelligence, Kanaan weaves a compelling new view on our history of innovation and technology to masterfully explain what each of us should know about modern computing, AI, and machine learning. Kanaan also illuminates the global implications of AI by highlighting the cultural and national vulnerabilities already exposed and the pressing issues now squarely on the table. AI has already become China&#8217;s all-purpose tool to impose authoritarian influence around the world. Russia, playing catch up, is weaponizing AI through its military systems and now infamous, aggressive efforts to disrupt democracy by whatever disinformation means possible. 
America and like-minded nations are awakening to these new realities, and the paths they&#8217;re electing to follow echo loudly, in most cases, the political foundations and moral imperatives upon which they were formed. As we march toward a future far different than ever imagined, T-Minus AI is fascinating and critically well-timed. It leaves the fiction behind, paints the alarming implications of AI for what they actually are, and calls for unified action to protect fundamental human rights and dignities for all.</p>
<p style="text-align: justify;"><strong><u>About the Author</u></strong></p>
<p style="text-align: justify;">Since February 2020, Michael Kanaan has been the Director of Operations at the US Air Force/MIT Artificial Intelligence program in Boston.  Before that, he chaired an Air Force cross-functional team charged with integrating AI initiatives.  He directed all AI and machine learning activities on behalf of the Deputy Director of Air Force Intelligence, who oversees a staff of 30,000 with an annual budget of $55 billion. Following his graduation from the U.S. Air Force Academy in 2011, Kanaan was the Officer in Charge of a $75 million hyperspectral mission at the National Air and Space Intelligence Center, and then the Assistant Director of Operations for the 417-member Geospatial Intelligence Squadron. He was also the National Intelligence Community Information Technology Enterprise Lead for an 1,800-member enterprise responsible for data discovery, intelligence analysis, and targeting development against ISIS. In addition to receiving several awards and distinctions as an Air Force officer, Kanaan was named to Forbes&#8217; 2019 <a href="https://www.forbes.com/profile/michael-kanaan/?sh=3c6707d68645" target="_blank" rel="noopener noreferrer">30 Under 30</a> list.  He also teaches a machine learning course at the MIT Sloan School of Management and, in May 2020, joined <a href="https://aiedu.org/" target="_blank" rel="noopener noreferrer">the AI Education Project</a> as an Advisory Board Member.</p>
<p style="text-align: justify;"><strong><u><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1584" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-14-of.jpg" alt="" width="233" height="257" /></u></strong></p>
<p style="text-align: justify;"><strong><em><u>Google Research</u></em></strong>:</p>
<p style="text-align: justify;">Michael Kanaan’s <a href="https://twitter.com/michaeljkanaan?lang=en" target="_blank" rel="noopener noreferrer">Twitter feed</a>:  He shares lots of interesting information and insights, and is crazy about books.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1585" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-15-of.jpg" alt="" width="483" height="380" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-15-of.jpg 483w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-15-of-300x236.jpg 300w" sizes="(max-width: 483px) 100vw, 483px" /></p>
<p style="text-align: justify;">On August 5, 2020, Kanaan did a 48-minute <a href="https://www.youtube.com/watch?v=cEYM9X6twwU" target="_blank" rel="noopener noreferrer">interview with Dr. Rollan Roberts</a>, an expert on cybersecurity and the author of multiple books. Kanaan explains that the purpose of his book is to educate and to inspire debate about the difficult decisions that need to be made regarding AI – particularly regulation, enforcement, and education. He questions whether governments are doing enough to bring AI into the humanities.  What are we doing for our schools?  AI is not all tech.  There are biases in the machines.  It&#8217;s an integral part of the human experience.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1586" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-16-of.jpg" alt="" width="577" height="401" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-16-of.jpg 577w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-16-of-300x208.jpg 300w" sizes="(max-width: 577px) 100vw, 577px" /></p>
<p style="text-align: justify;">On August 28, 2020, Kanaan did a 48-minute <a href="https://www.youtube.com/watch?v=tNGTq_NajJc" target="_blank" rel="noopener noreferrer">Book Talk Interview</a> with the Center for Strategic &amp; International Studies (CSIS), where he explains the structure of the book.  He believes in starting with analogies – weaving the AI narrative into the human experience – to provide context and demystify AI.  Part 1 provides this context, Part 2 defines AI, and Part 3 addresses the biases, dangers, and implications.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1587" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-17-of.jpg" alt="" width="582" height="370" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-17-of.jpg 582w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-17-of-300x191.jpg 300w" sizes="(max-width: 582px) 100vw, 582px" /></p>
<p style="text-align: justify;"><a href="https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world" target="_blank" rel="noopener noreferrer">Putin says the nation that leads in AI will be the ruler of the world</a> – The Verge Article – September 4, 2017.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-1589" src="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-18-of.jpg" alt="" width="411" height="504" srcset="https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-18-of.jpg 411w, https://www.vii-llc.com/wp-content/uploads/2020/11/Idea-Hub-Book-Reviews-T-Minus-AI-18-of-245x300.jpg 245w" sizes="(max-width: 411px) 100vw, 411px" /></p>
<p>The post <a href="https://www.vii-llc.com/2020/11/25/t-minus-ai-humanitys-countdown-to-artificial-intelligence-and-the-new-pursuit-of-global-power/">T-Minus AI: Humanity’s Countdown to Artificial Intelligence and the New Pursuit of Global Power</a> appeared first on <a href="https://www.vii-llc.com">VII Capital Management</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>More from Less: The Surprising Story of How We Learned to Prosper Using Fewer Resources &#8211; and What Happens Next</title>
		<link>https://www.vii-llc.com/2019/12/09/more-from-less-the-surprising-story-of-how-we-learned-to-prosper-using-fewer-resources-and-what-happens-next/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=more-from-less-the-surprising-story-of-how-we-learned-to-prosper-using-fewer-resources-and-what-happens-next</link>
		
		<dc:creator><![CDATA[Adriano Almeida]]></dc:creator>
		<pubDate>Mon, 09 Dec 2019 18:34:07 +0000</pubDate>
				<category><![CDATA[Book Review]]></category>
		<category><![CDATA[Science & Technology]]></category>
		<guid isPermaLink="false">http://mia.art.br/victori/?p=418</guid>

					<description><![CDATA[<p>By Andrew McAfee, Aug/2019 (352p.) &#160; This book is “fascinating and deeply encouraging.”  It offers important and relevant information in a convincing and well-organized manner – that will have a...</p>
<p>The post <a href="https://www.vii-llc.com/2019/12/09/more-from-less-the-surprising-story-of-how-we-learned-to-prosper-using-fewer-resources-and-what-happens-next/">More from Less: The Surprising Story of How We Learned to Prosper Using Fewer Resources &#8211; and What Happens Next</a> appeared first on <a href="https://www.vii-llc.com">VII Capital Management</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h5><span style="text-decoration: underline;"><em>By Andrew McAfee, Aug/2019 (352p.)</em></span></h5>
<p>&nbsp;</p>
<p style="text-align: justify;">This book is “<em>fascinating and deeply encouraging</em>.”  It offers important and relevant information in a convincing and well-organized manner that will have a <u>much larger and longer-lasting impact</u> than most of the other things we read about on a daily basis.  <strong>Andrew McAfee</strong> was co-author of <a href="https://www.amazon.com/Second-Machine-Age-Prosperity-Technologies-dp-0393239357/dp/0393239357/ref=mt_hardcover?_encoding=UTF8&amp;me=&amp;qid=1575893344">The Second Machine Age</a>, which was another <em>must read</em> when it came out in January 2014, because it anticipated the dominance of Tech that is undeniable today.  He is a renowned scientist who has been, and likely will continue to be, ahead of the curve in understanding and explaining how technology is changing the world. His other book, <a href="https://www.amazon.com/Machine-Platform-Crowd-Harnessing-Digital-dp-0393254291/dp/0393254291/ref=mt_hardcover?_encoding=UTF8&amp;me=&amp;qid=1575882911">Machine, Platform, Crowd</a>, made less of an impact on me when it came out in June 2017.  I think that is because Geoffrey Parker’s book, <a href="https://www.amazon.com/Platform-Revolution-Networked-Markets-Transforming/dp/0393249131/ref=tmm_hrd_swatch_0?_encoding=UTF8&amp;qid=1575883060&amp;sr=1-1">Platform Revolution</a>, had already done a fantastic job of explaining the same subject more than a year earlier.  But this latest book, <a href="https://www.amazon.com/More-Less-Surprising-Learned-Resources-ebook/dp/B07P5GPMTY/ref=sr_1_1?keywords=more+from+less&amp;qid=1575894446&amp;s=books&amp;sr=1-1">More from Less</a>, which came out in <strong>October 2019</strong>, is fresh, and will likely prove to be the most influential as well – given the heated debate over how to best regulate big tech, save the environment, and construct better societies/economies.</p>
<p><a href="http://ide.mit.edu/about-us/people/andrew-mcafee"><img loading="lazy" decoding="async" class="aligncenter wp-image-904 size-full" src="https://www.vii-llc.com/wp-content/uploads/2019/12/Idea-Hub-Book-Reviews-More-from-less-McAfee.jpg" alt="" width="322" height="322" srcset="https://www.vii-llc.com/wp-content/uploads/2019/12/Idea-Hub-Book-Reviews-More-from-less-McAfee.jpg 322w, https://www.vii-llc.com/wp-content/uploads/2019/12/Idea-Hub-Book-Reviews-More-from-less-McAfee-300x300.jpg 300w, https://www.vii-llc.com/wp-content/uploads/2019/12/Idea-Hub-Book-Reviews-More-from-less-McAfee-150x150.jpg 150w, https://www.vii-llc.com/wp-content/uploads/2019/12/Idea-Hub-Book-Reviews-More-from-less-McAfee-100x100.jpg 100w, https://www.vii-llc.com/wp-content/uploads/2019/12/Idea-Hub-Book-Reviews-More-from-less-McAfee-140x140.jpg 140w" sizes="(max-width: 322px) 100vw, 322px" /></a></p>
<p style="text-align: justify;">Below I share most of the highlights that I exported from our Kindle.  There are some great quotes in there, such as: “<em>If you want to be good to the environment, stay away from it.</em>” and “<em>Like innovation itself, technologies are combinatorial; most of them are combinations or recombinations of existing things,</em>” and “<em>Talent is equally distributed; opportunity is not,</em>” and “<em>Predicting exactly how technological progress will unfold is much like predicting the weather: feasible in the short term, but impossible over a longer time,</em>” and “<em>we must make Nature worthless,</em>” and “<em>economies run and grow on ideas.</em>”</p>
<p style="text-align: justify;"><strong>Chapter 12</strong> was the most important, in my opinion, because it is where McAfee explains the implications of technology driving concentration and leading to a world of <em>superstars</em> and <em>zombies. </em> He observes that “<em><u>a few companies</u></em><em> have become much more productive and have started paying much higher salaries (two developments that are closely related), while the rest have seen near-stagnant productivity and pay</em>,” before concluding that “<em>when stock ownership is closely held by a relatively small group of people and share prices increase, members of that group become much wealthier than everyone else</em>.”  Finding the stocks of these few <em>outstanding companies</em> that make us “much wealthier than everyone else” is the reason Victori Capital exists.</p>
<p style="text-align: justify;">So in conclusion:  This book is a <em>must read</em>.</p>
<p>Best,</p>
<p>Adriano</p>
<hr />
<p>&nbsp;</p>
<h5><em><span style="text-decoration: underline;">Highlighted Passages</span>:</em></h5>
<p>&nbsp;</p>
<p style="text-align: justify;"><strong>Introduction: Readme</strong></p>
<p style="text-align: justify;"><em>We invented the computer, the Internet, and a suite of other digital technologies that let us dematerialize our consumption: over time they allowed us to consume more and more while taking less and less from the planet. This happened because digital technologies offered the cost savings that come from substituting bits for atoms, and the intense cost pressures of capitalism caused companies to accept this offer over and over. <u>Think, for example, how many devices have been replaced by your smartphone</u>.</em></p>
<p style="text-align: justify;"><em><u>I call tech progress, capitalism, public awareness, and responsive government the “four horsemen of the optimist.”</u> When all four are in place, countries can improve both the human condition and the state of nature. When the four horsemen don’t all ride together, people and the environment suffer.  … They stand in sharp contrast to the <strong>Four Horsemen of the Apocalypse</strong> portrayed in the New Testament’s book of Revelation, which are commonly interpreted as <strong>war, famine, pestilence</strong>, and <strong>death</strong>.</em><em> </em></p>
<p style="text-align: justify;"><em>“</em><a href="https://thebreakthrough.org/journal/issue-5/the-return-of-nature"><em>The Return of Nature: How Technology Liberates the Environment</em></a><em>,” published in 2015 in the Breakthrough Journal. When I encountered that headline, I had to click on it, <u>which led me to one of the most interesting things I’d ever read</u>.</em></p>
<p style="text-align: justify;"><em>What caused <strong>dematerialization</strong> to take over? … <strong>capitalism</strong> is a big part of my explanation.</em></p>
<p style="text-align: justify;"><em>So just about any reader will probably initially feel that something in this book is wrong. Again, <u>I just ask that you approach the book’s ideas with an open mind</u>. I hope you’ll believe that I’m arguing in good faith. My intention here is not to write a polemic or start a flame war. I’m not trying to troll or dunk on anyone (in other words, <u>I’m not trying to provoke anyone into losing their temper or to demonstrate my superiority</u>). I’m just trying to highlight a phenomenon that I find <strong><u>fascinating and deeply encouraging</u></strong><u>,</u> explain how it came about, and discuss its implications. I hope you’ll come along for the journey.</em></p>
<p style="text-align: justify;"><strong>Chapter 1: All the Malthusian Millennia</strong></p>
<p style="text-align: justify;"><em>In one sense, this is entirely fair. As we’ll see, the gloomy predictions that Malthus made right at the end of the eighteenth century have proved to be so wrong that they deserve a special designation. But in another sense we’re being too hard on the good reverend. Most discussions of his work overlook that while Malthus was badly wrong about the future, he was broadly correct about the past.</em></p>
<p style="text-align: justify;"><em>Malthus is best known for An Essay on the Principle of Population, published in 1798 &#8230; Malthus pointed out, correctly, that human populations grow rapidly if no force acts to reduce them. If a couple has two children, each of whom has two children, and this process keeps repeating, then the original couple’s total number of descendants will double with each generation from two to four, then eight, then sixteen, and so on. People can do only two things to retard this exponential (or “geometric”) growth in numbers: not have children, or die.</em><strong><em> </em></strong></p>
<p style="text-align: justify;"><em>The economic historian Gregory Clark put these two types of evidence together and provided my favorite view of what life was like in England over six centuries prior to the publication of Malthus’s Essay. <u>It’s not a pretty picture</u>.</em></p>
<p style="text-align: justify;"><em>Researchers have also found Malthusian vibrations in the populations of Sweden, Italy, and other European countries over the same period.</em></p>
<p style="text-align: justify;"><em>Between the time we Homo sapiens left our African cradle over one hundred thousand years ago and the dawn of the Industrial Era in the late eighteenth century, we lived in a Malthusian world. We covered the planet, yet didn’t conquer it.</em></p>
<p style="text-align: justify;"><em>Ten thousand years ago, about 5 million people were on the planet. As we moved into new regions and improved our technologies, that number increased along a steady but shallow exponential curve, reaching almost 190 million people by the time of Christ. Agriculture allowed higher population densities, so as farming spread, human population growth accelerated in the Common Era.</em><strong><em> </em></strong></p>
<p style="text-align: justify;"><em>By the year 1800, just about a billion of us were on the planet. That sounds like a big number, but when compared to the inhabitable area of the earth, it starts to look small. <u>If all the world’s people were spread out evenly around the planet’s inhabitable land in 1800, everyone would have had almost sixteen acres—an area about as large as nine World Cup soccer fields—to himself or herself. We would not have been able to hear each other, even by shouting</u>.</em></p>
<p style="text-align: justify;"><strong>Chapter 2: Power over the Earth: The Industrial Era</strong></p>
<p style="text-align: justify;"><em>The title of William Rosen’s book about the history of steam power is apt; it was The Most Powerful Idea in the World.</em></p>
<p style="text-align: justify;"><em>Steam changed the course of humanity not by helping to plow farms, but instead by helping to fertilize them.</em></p>
<p style="text-align: justify;"><em>The line connecting population and average prosperity (wages, in other words) zooms off upward and to the right at the start of the nineteenth century and rarely again changes course. England’s Malthusian oscillations and vibrations fade into a small corner of the past. Population and Prosperity in England, 1200–2000</em></p>
<p style="text-align: justify;"><em>Eventually, they could afford them. In 1935, the English social reformer B. Seebohm Rowntree found the working classes in York were eating much the same diets as their employers, a huge change from what he had found during a similar 1899 survey. Even during the depths of the Depression, Rowntree observed that poor families could afford roast beef and fish each once a week, and sausages or other animal protein two more times.</em></p>
<p style="text-align: justify;"><em>Charles Dickens’s A Christmas Carol, published in 1843, mentions apples, pears, oranges, and lemons as seasonal treats, but not bananas. Refrigerated steamships eventually shrank the time and distance between tropical plantations and northern Europe. In 1898 more than 650,000 bunches of bananas, each bearing as many as a hundred pieces of fruit, were exported from the Canary Islands.</em><strong><em> </em></strong></p>
<p style="text-align: justify;"><em>In 1885, Daimler and his colleague Wilhelm Maybach demonstrated their Petroleum Reitwagen, a clunky motorcycle-like machine that was the world’s first vehicle powered by internal combustion. There would be many more of them, more than a few built by the company that became Daimler-Benz, the home of Mercedes.</em></p>
<p style="text-align: justify;"><em>Electric power started small, got big, then shrank again. In 1837 the Vermont blacksmith and tinkerer Thomas Davenport received a US patent for an “Improvement in Propelling Machinery by Magnetism and Electro- Magnetism.” We now call such devices for propelling machinery motors. Unfortunately for Davenport, the batteries of his time were too primitive to supply the electrical energy his device needed, and power lines, utilities, and the grid did not yet exist. Davenport was apparently bankrupt when he died in 1851.</em></p>
<p style="text-align: justify;"><em>About half a century after Davenport’s patent was granted, Thomas Edison, Nikola Tesla, and others made use of an electric motor running in reverse—it could be used to convert mechanical energy (from falling water or expanding steam) into electrical energy. When used in this way, a motor becomes a generator. The electricity could then be conducted over wires to one or more distant motors. </em></p>
<p style="text-align: justify;"><em><u>The belts were often made of leather, and factories needed so many of them that in 1850 leather manufacturing was America’s fifth-largest industry</u>.</em></p>
<p style="text-align: justify;"><em>To some, indoor plumbing might not seem a profound enough innovation to stand alongside electricity and internal combustion. <u>A flush toilet and water on demand out of a tap</u> are certainly convenient, but are they fundamentally important to the story of twentieth-century growth? <u>They absolutely are</u>. Health researchers David Cutler and Grant Miller estimate that the availability of clean water explains fully half of the total decline in the overall US mortality rate between 1900 and 1936, and 75 percent of the decline in infant mortality. Historian Harvey Green calls the technologies of widespread clean water “likely the most important public health intervention of the twentieth century.”</em></p>
<p style="text-align: justify;"><em>The breakthroughs of the Industrial Era—technological, scientific, institutional, and intellectual—created a virtuous cycle of increasing human population and prosperity. <u>It took over two hundred thousand years for the global population of Homo sapiens to hit 1 billion. It only took 125 years to add the next billion, a milestone that was reached in 1928. And the timescales kept getting shorter. Subsequent billions were added in thirty-one, fifteen, twelve, and eleven years</u>. </em></p>
<p style="text-align: justify;"><em>The battles over the Corn Laws led the politician <u>James Wilson, who was in favor of free trade, to found <strong>The Economist</strong></u>. It’s still published today and is one of my favorite magazines (even though it calls itself a newspaper).</em></p>
<p style="text-align: justify;"><strong><em> </em></strong></p>
<p style="text-align: justify;"><strong>Chapter 3: Industrial Errors</strong><strong> </strong></p>
<p style="text-align: justify;"><em>People as Property It has been acceptable in many societies throughout history for people to own other people, especially if they come from a different ethnic group, religion, or tribe. The cognitive scientist Steven Pinker writes that <u>sentiment toward slavery began to change in the late 1700s with the rise of humanism</u>, or the belief that “the universal capacity of a person to suffer and flourish… call[s] on our moral concern.” As Pinker writes in his book </em><a href="https://www.amazon.com/Enlightenment-Now-Science-Humanism-Progress/dp/0525427570/ref=tmm_hrd_swatch_0?_encoding=UTF8&amp;qid=1575891070&amp;sr=8-1"><em>Enlightenment Now</em></a><em>, “The Enlightenment is sometimes called the Humanitarian Revolution, because it led to the abolition of barbaric practices [such as slavery] that had been commonplace across civilizations for millennia.”</em></p>
<p style="text-align: justify;"><em>Many industrialists had no compunction about putting children to work. <u>A 1788 survey in England and Scotland, for example, found that approximately <strong>two-thirds</strong> of all employees in nearly 150 cotton mills were children</u>. </em></p>
<p style="text-align: justify;"><em>“<u>Industrial coal use explains roughly one-third of the urban mortality penalty observed during [the] period [1851–60].” Among British men born in the 1890s, those from the most coal-intensive parts of the country were, on average, nearly an inch shorter as adults than those who grew up with the cleanest air</u>. This gap was twice as large as that between children of white-collar and working-class families.</em></p>
<p style="text-align: justify;"><em><u>No animals better represent the voracious, nearly all-consuming appetite of the Industrial Era than the North American bison and the whale</u>. … The population of the North American bison herd completely collapsed in the second half of the nineteenth century. <u>Yellowstone National Park, established in 1872, served on paper as the only refuge from the remorseless hunting</u>. However, <u>poaching inside the park was rampant</u>. By 1894, the <u>Yellowstone herd numbered only twenty-five animals</u>.</em></p>
<p style="text-align: justify;"><em><u>Norwegian inventions were critical in industrializing the whale hunt. The first was the harpoon cannon, </u>which Svend Foyn refined and mounted on the bow of powered ships and chase boats. <u>The second innovation was the factory ship</u>, designed by the whale gunner Petter Sørlle, which acted as a giant carving board for the animals’ carcasses. These two technologies <u>made it much easier and more profitable to hunt rorquals such as the blue, fin, and humpback whales</u>.  … <u>In 1900, as many as a quarter of a million blue whales may have lived</u> in the Southern Ocean. <u>By 1989, about five hundred remained</u>. These animals were used mainly to make margarine, soap, lubricants, and explosives (the glycerin in whale blubber can be used to make nitroglycerin)—<u>all products that could easily have been made with other ingredients</u>.</em></p>
<p style="text-align: justify;"><strong>Chapter 4: Earth Day and Its Debates</strong></p>
<p style="text-align: justify;"><em>The biologist Paul Ehrlich became the most popular exponent of this view. In his bestselling 1968 book, The Population Bomb, Ehrlich laid out a scenario that made Malthus look like a sunny optimist. Early editions of the book began, “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now. At this late date nothing can prevent a substantial increase in the world death rate.”</em></p>
<p style="text-align: justify;"><em>The MIT team discussed the results of their work in the <u>bestselling 1972 book The Limits to Growth</u>. They found that even under their most optimistic scenarios about resource abundance, the known global reserves of aluminum, copper, natural gas, petroleum, and gold would all be exhausted within fifty-five years if population and the economy were both allowed to grow without constraint. Absent constraints, their model showed that the world’s population would suffer a sudden and sharp collapse well before the end of the twenty-first century as resources vanished and economies around the planet ground to a halt.</em></p>
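<p style="text-align: justify;">The arithmetic behind such exhaustion forecasts is a geometric series: if a reserve would last S years at today's consumption, but consumption grows at annual rate r, the reserve runs out after roughly ln(rS + 1)/r years. A minimal sketch of that calculation (the function name and example numbers are illustrative, not taken from the book):</p>

```python
import math

def exponential_index(static_index_years: float, growth_rate: float) -> float:
    """Years until a fixed reserve is exhausted when annual consumption
    grows exponentially. `static_index_years` is reserves divided by
    current annual consumption; `growth_rate` is e.g. 0.04 for 4 percent."""
    return math.log(growth_rate * static_index_years + 1.0) / growth_rate

# A reserve that would last 100 years at constant consumption is gone
# in about 40 years if consumption grows 4 percent a year.
print(round(exponential_index(100, 0.04), 1))  # → 40.2
```

<p style="text-align: justify;">This compression of a century of reserves into a few decades is why the model's horizons looked so short under "unconstrained growth" assumptions.</p>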
<p style="text-align: justify;"><em>Some observers felt that Limits to Growth’s estimates of energy reserves were, if anything, too optimistic. The ecologist <u>Kenneth Watt predicted in 1970, “By the year 2000, if present trends continue, we will be using up crude oil at such a rate… that there won’t be any more crude oil</u>. You’ll drive up to the pump and say, ‘Fill ’er up, buddy,’ and he’ll say, ‘I am very sorry, there isn’t any.’”</em></p>
<p style="text-align: justify;"><strong><em>Paul Ehrlich</em></strong><em> agreed, writing in 1975, “<u>Giving society cheap, abundant energy at this point would be the moral equivalent of giving an idiot child a machine gun</u>. With cheap, abundant energy, the attempt clearly would be made to pave, develop, industrialize, and exploit every last bit of the planet.”</em></p>
<p style="text-align: justify;"><em>The extraordinarily tight relationship between the size of the economy and the amount of energy used led many researchers to think that the two were essentially equivalent—that if you could measure the amount of energy a society used, you’d have an excellent idea of how large, prosperous, and advanced it was. This line of study was kicked off with a series of articles in Scientific American in 1971, including geologist <u>Earl Cook’s “Flow of Energy in an Industrial Society</u>.”</em></p>
<p style="text-align: justify;"><em>Cook concluded, “Indefinite growth in energy consumption, as in human population, is simply not possible … Making the changes will call for hard political decisions… democratic societies are not noted for their ability to take the long view in making decisions.”</em></p>
<p style="text-align: justify;"><em>Around <strong>Earth Day</strong>, it seemed as if we might not survive the twentieth century. To give an idea of the prevailing mood, beliefs, and predictions of the mainstream environmental movement around Earth Day, here are a set of quotes from 1970 that I find representative. <u>They read to me like dispatches from a society in a panic attack</u>. Senator Gaylord Nelson wrote in Look magazine, “Dr. S. Dillon Ripley, secretary of the Smithsonian Institution, believes that in 25 years, somewhere between <u>75 and 80 percent of all the species of living animals will be extinct</u>.” Pete Gunter, a North Texas State University professor, wrote, “Demographers agree almost unanimously on the following grim timetable: <u>by 1975 widespread famines</u> will begin in India; these will spread by 1990 to include all of India, Pakistan, China and the Near East, Africa. By the year 2000, or conceivably sooner, South and Central America will exist under famine conditions.… <u>By the year 2000, thirty years from now, the entire world, with the exception of Western Europe, North America, and Australia, will be in famine</u>.”</em></p>
<p style="text-align: justify;"><em>Their first evidence-based claim was that many of the bad things confidently predicted by the environmental movement—chronic food shortages and famines; irreversible ecosystem collapses; mass species die-offs; crippling shortages of natural resources; and so on—kept on not happening. <u>Instead, some of the things that were supposed to get much worse kept getting better</u>.</em></p>
<p style="text-align: justify;"><em>This had not always been his view. <u>In the late 1960s <strong>Julian Simon</strong> had, like Ehrlich, written about the dangers of unchecked population growth. But the continued improvement in human standards of living and lack of environmental catastrophes caused him to change his mind. Simon eventually became a strong optimist because he came to have faith. Not in divine providence, but in <strong>human ingenuity</strong>.</u> Population and economic growth bring with them challenges, but Simon argued that people are actually quite good at meeting challenges. We learn about the world via science, invent new tools and technologies, create institutions such as democracy and the rule of law, and do many other things that let us solve problems and create a better future. </em></p>
<p style="text-align: justify;"><em><u>Simon offered the following terms</u>: <strong>Paul Ehrlich</strong> could pick any resources he liked. He could also pick the time frame for the bet, as long as it was at least a year. If at the end of the chosen time frame the real price of the resources had risen, then Simon would pay Ehrlich the amount of the rise. If prices had fallen, Ehrlich would pay Simon. Ehrlich accepted. He picked a decade for the duration of the bet and chose five resources: copper, chromium, nickel, tin, and tungsten. He virtually “bought” $200 of each on September 29, 1980, and waited for their prices to rise in the following years. They didn’t. The real price of all five metals had fallen by late September of 1990. Chromium declined by only a bit, from $3.90 per pound to $3.70, but the others became much cheaper. The price of tin, for example, collapsed from $8.72 per pound to $3.88. The overall value of Ehrlich’s $1,000 resource portfolio declined by more than half. <u>In October of 1990 he mailed Simon a check for $576.06</u>.</em></p>
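<p style="text-align: justify;">The settlement mechanics of the bet can be sketched in a few lines. Only the chromium and tin prices are quoted above, so the example below covers just those two positions; it illustrates the payout rule, not a reconstruction of the full $576.06 check.</p>

```python
def bet_payout(positions):
    """Simon-Ehrlich settlement rule: each position is a tuple of
    (stake, start_price, end_price) in inflation-adjusted dollars.
    A positive result is what Ehrlich owes Simon (real prices fell);
    a negative result is what Simon would have owed Ehrlich."""
    total = 0.0
    for stake, start_price, end_price in positions:
        quantity = stake / start_price         # units "bought" up front
        total += stake - quantity * end_price  # real-value loss (or gain)
    return round(total, 2)

# The two prices quoted above, with $200 staked on each metal:
print(bet_payout([(200.0, 3.90, 3.70),    # chromium: slight decline
                  (200.0, 8.72, 3.88)]))  # tin: price collapse
# → 121.27
```

<p style="text-align: justify;">Tin alone accounts for over $110 of that figure, which shows why the overall portfolio lost more than half its real value even though chromium barely moved.</p>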
<p style="text-align: justify;"><strong>Chapter 5: The Dematerialization Surprise</strong></p>
<p style="text-align: justify;"><em>This was unexpected, to put it mildly. As Ausubel wrote, “The reversal in use of some of the materials so surprised me that Iddo Wernick, Paul Waggoner, and I undertook <u>a detailed study of the use of 100 commodities in the United States from 1900 to 2010</u>.… Of the 100 commodities, we found that 36 have peaked in absolute use… Another 53 commodities have peaked relative to the size of the economy, though not yet absolutely. <u>Most of them now seem poised to fall</u>.”</em></p>
<p style="text-align: justify;"><em>“Evidence presented in this paper supports a hypothesis that the United Kingdom began to reduce its consumption of physical resources in the early years of the last decade, well before the economic slowdown that started in 2008.”</em></p>
<p style="text-align: justify;"><em>Goodall was eloquent about the significance of the <strong>dematerialization</strong> of the United States or United Kingdom: “If correct, this finding is important. <u>It suggests that economic growth in a mature economy does not necessarily increase the pressure on the world’s reserves of natural resources and on its physical environment</u>. An advanced country may be able to decouple economic growth and increasing volumes of material goods consumed. <u>A sustainable economy does not necessarily have to be a no-growth economy</u>.”</em></p>
<p style="text-align: justify;"><em>This graph clearly shows that a huge decoupling has taken place. <u>Throughout the twentieth century up to the time of Earth Day</u>, consumption of metals in America grew just about in lockstep with the overall economy. In the years since Earth Day, the economy has continued to grow pretty steadily, but <strong>consumption of metals</strong> has reversed course and is now decreasing. We’re now getting more “economy” from less metal year after year. We’ll see a similar great reversal in the use of many other resources.</em></p>
<p style="text-align: justify;"><em><u><img loading="lazy" decoding="async" class="aligncenter wp-image-645 size-full" src="https://www.vii-llc.com/wp-content/uploads/2020/08/Idea-Hub-Book-Reviews-More-from-less-Image-1.jpg" alt="" width="461" height="285" srcset="https://www.vii-llc.com/wp-content/uploads/2020/08/Idea-Hub-Book-Reviews-More-from-less-Image-1.jpg 461w, https://www.vii-llc.com/wp-content/uploads/2020/08/Idea-Hub-Book-Reviews-More-from-less-Image-1-300x185.jpg 300w" sizes="(max-width: 461px) 100vw, 461px" /></u></em></p>
<p style="text-align: justify;"><em><u>American <strong>consumption of plastics</strong></u>, which is not tracked by the USGS, is an exception to the overall trend of dematerialization. Outside of recessions, the United States continues to use more plastic year after year in the form of trash bags, water bottles, food packaging, toys, outdoor furniture, and countless other products. But in recent years, there has been an important slowdown. According to the Plastics Industry Trade Association, <u>between 1970 and the start of the Great Recession in 2007 American plastic use grew at a rate of about 5.2 percent per year</u>. This was <u>more than 60 percent faster than the country’s GDP grew over the same period</u>. But a very different pattern has emerged in the years since the recession ended. The <u>growth in plastic consumption has slowed down greatly, to less than 2.0 percent per year between 2009 and 2015</u>. This is almost 14 percent slower than GDP growth over the same period. So <u>while America is not yet post-peak in its use of plastic, it’s quickly closing in on this milestone.</u></em></p>
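<p style="text-align: justify;">A quick sketch of what "more than 60 percent faster" means in compounded terms. The GDP growth rate below is an assumption chosen for illustration (the excerpt gives only the relative comparison, not the underlying GDP figure):</p>

```python
# Growth rates: plastic use is from the excerpt; the GDP rate is an
# assumed value consistent with "more than 60 percent faster" (5.2/3.2 = 1.625).
plastic_rate = 0.052  # plastic use, ~5.2% per year, 1970-2007
gdp_rate = 0.032      # assumed GDP growth over the same stretch
years = 37            # 1970 through 2007

print(round(plastic_rate / gdp_rate - 1, 3))  # → 0.625, i.e. 62.5% faster
print(round((1 + plastic_rate) ** years, 1))  # plastic use multiplied ~6.5x
print(round((1 + gdp_rate) ** years, 1))      # GDP multiplied ~3.2x
```

<p style="text-align: justify;">Compounded over nearly four decades, a few points of extra annual growth roughly doubles the cumulative multiple, which is why the post-2009 slowdown matters so much for reaching peak plastic.</p>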
<p style="text-align: justify;"><em>Finally, let’s look at total <strong>energy consumption</strong> combined with <strong>greenhouse gas emissions</strong>, which are the most harmful side effect of generating energy from fossil fuels. (The chart below plots US Real GDP and Total Energy Consumption, 1800–2017.) I was surprised to learn that <u>total American energy use in 2017 was down almost 2 percent from its 2008 peak</u>, especially since our <u>economy grew by more than 15 percent between those two years</u>. <u>Greenhouse gas emissions have gone down even more quickly than has total energy use</u>. This is largely because we have in recent years been using less coal and more natural gas to generate electricity (a switch we’ll examine in chapter 7), and natural gas produces 50–60 percent less carbon per kilowatt hour than coal does.</em></p>
<p><em><img loading="lazy" decoding="async" class="aligncenter wp-image-646 size-full" src="https://www.vii-llc.com/wp-content/uploads/2020/08/Idea-Hub-Book-Reviews-More-from-less-Image-2.jpg" alt="" width="460" height="306" srcset="https://www.vii-llc.com/wp-content/uploads/2020/08/Idea-Hub-Book-Reviews-More-from-less-Image-2.jpg 460w, https://www.vii-llc.com/wp-content/uploads/2020/08/Idea-Hub-Book-Reviews-More-from-less-Image-2-300x200.jpg 300w" sizes="(max-width: 460px) 100vw, 460px" /></em></p>
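<p style="text-align: justify;">A back-of-the-envelope sketch of why fuel switching cuts emissions faster than it cuts energy use. The emission factors below (roughly 1.0 kg of CO2 per kWh for coal, 0.45 for gas) are assumed typical values, consistent with the "50–60 percent less carbon" figure above:</p>

```python
# Approximate CO2 per kWh of electricity; these factors are assumed
# typical values, consistent with gas emitting 50-60 percent less than coal.
COAL_KG_CO2_PER_KWH = 1.0
GAS_KG_CO2_PER_KWH = 0.45

def emissions_kg(coal_kwh: float, gas_kwh: float) -> float:
    """Total CO2 (kg) for a given mix of coal and gas generation."""
    return coal_kwh * COAL_KG_CO2_PER_KWH + gas_kwh * GAS_KG_CO2_PER_KWH

# Shift half of 100 kWh of coal generation to gas: the same energy is
# delivered, but emissions fall by 27.5 percent.
before = emissions_kg(coal_kwh=100, gas_kwh=0)
after = emissions_kg(coal_kwh=50, gas_kwh=50)
print(round(1 - after / before, 3))  # → 0.275
```

<p style="text-align: justify;">Energy delivered is unchanged while emissions drop sharply, which is how emissions can fall faster than total energy use.</p>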
<p style="text-align: justify;"><em>The conclusion from this set of graphs is clear: <u>a great reversal of our Industrial Age habits is taking place</u>.  </em></p>
<p style="text-align: justify;"><em>Developing countries, especially fast-growing ones such as India and <strong><u>China</u></strong><u>, are probably not yet dematerializing</u>. But <u>I predict that they will start getting more from less of at least some resources in the not-too-distant future</u>. </em></p>
<p style="text-align: justify;"><strong>Chapter 6: CRIB Notes</strong></p>
<p style="text-align: justify;"><em><u>Real GDP of the United States grew by an average of 3.2 percent per year between the end of World War II and Earth Day</u>. <u>From 1971 to 2017, it grew by an annual average of 2.8 percent</u>.</em></p>
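<p style="text-align: justify;">The gap between those two averages compounds. A small sketch of the implied doubling times (the comparison itself is an illustration, not from the text):</p>

```python
import math

def doubling_time_years(annual_rate: float) -> float:
    """Years for an economy to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

print(round(doubling_time_years(0.032), 1))  # 1946-1970 pace → 22.0 years
print(round(doubling_time_years(0.028), 1))  # 1971-2017 pace → 25.1 years
```

<p style="text-align: justify;">So the post-Earth Day economy still doubles roughly every quarter century, only a few years more slowly than before, even as its resource use shrinks.</p>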
<p style="text-align: justify;"><em><u>America’s population increased by an average of 1.5 percent a year from 1946 to 1970, and by 1 percent annually from 1971 to 2016</u>.</em></p>
<p style="text-align: justify;"><strong><em>…</em></strong><em>we don’t make them the same way we used to. <u>We now make them using fewer resources</u>. </em></p>
<p style="text-align: justify;"><em>So it seems most likely to me that we’d use less metal overall in a hypothetical zero-recycling economy than we do in our actual <u>enthusiastic-about-scrap-metal-recycling economy</u>. This does not mean that I think metal recycling is bad. I think it’s great, since it gives us cheaper metal products and reduces total greenhouse gas emissions (since it takes much less energy to obtain metal from scrap than from ore). <u>But <strong>recycling</strong>, whatever its merits, is not part of the dematerialization story</u>. It’s a different story.</em></p>
<p style="text-align: justify;"><em>The <u>back-to-the-land movement</u> is a fascinating chapter in the history of American environmentalism, but a largely insignificant one. There were simply never enough homesteaders and others who turned away from modern, technologically sophisticated life to make much of a difference. Which is a good thing for the environment.  Going back to the land might have been widely discussed, but it was comparatively rarely practiced. We should be thankful for this because homesteading is not great for the environment, for two reasons. First, <u>small-scale farming is less efficient</u> in its use of resources than massive, industrialized, mechanized agriculture. Second, rural life is less environmentally friendly than urban or suburban dwelling. As economist Edward Glaeser summarizes, “<u>If you want to be good to the environment, stay away from it</u>. Move to high-rise apartments surrounded by plenty of concrete&#8230;”</em></p>
<p style="text-align: justify;"><em>But after reading Limits to Growth, A Blueprint for Survival, and other books limning the looming dangers of unchecked population expansion, the missile scientist <strong>Song Jian</strong> came to believe that even faster birth rate reductions were required. He became the architect of the new policy, the main effect of which was to <u>limit ethnic Han Chinese families to a single child</u>. Exceptions to this restriction included giving some couples the right to a second child if their first was a girl, but the one-child policy soon became a central fact of Chinese family life.</em></p>
<p style="text-align: justify;"><em>In 1970, the same year as the original Earth Day festival, the United States established the federal Environmental Protection Agency and made major amendments to 1963’s Clean Air Act. This was the start of a cascade of laws and regulations aimed at reducing pollution and other environmental harms. <u>These have worked amazingly well</u>. </em></p>
<p style="text-align: justify;"><strong>Chapter 7: What Causes Dematerialization? Markets and Marvels</strong></p>
<p style="text-align: justify;"><em>A November 2007 cover story in Forbes magazine touted that the Finnish mobile phone maker Nokia had over a billion customers around the world and asked, “Can anyone catch the cell phone king?” Yes. Apple sold more than a billion iPhones within a decade of its June 2007 launch and became the most valuable publicly traded company in history. Nokia, meanwhile, sold its mobile phone business to Microsoft in 2013 for $7.2 billion to get “more combined muscle to truly break through with consumers,” as the Finnish company’s CEO Stephen Elop said at the time of the deal. It didn’t work. Microsoft sold what remained of Nokia’s mobile phone business and brand to a subsidiary of the Taiwanese electronics manufacturer Foxconn for $350 million in May of 2016. Radio Shack filed for bankruptcy in 2015, and again in 2017.</em></p>
<p style="text-align: justify;"><em><u>Also in 2007</u> the US Government Accountability Office (GAO), a federal agency known as “the congressional watchdog,” published a report with an admirably explanatory title: “Crude Oil: Uncertainty about Future Oil Supply Makes It Important to Develop a Strategy for Addressing a Peak and Decline in Oil Production.” <u>It took seriously the idea of “peak oil,”</u> a phrase coined in 1956 by <strong>M. King Hubbert</strong>, a geologist working for Shell Oil. </em></p>
<p style="text-align: justify;"><em><u>Thanks to fracking, US crude oil production almost doubled between 2007 and 2017,</u> when it approached the benchmark of 10 million barrels per day. By September of 2018 America had surpassed Saudi Arabia to become the world’s largest producer of oil. American <u>natural gas production, which had been essentially flat since the mid-1970s, jumped by nearly 43 percent between 2007 and 2017</u>.</em></p>
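<p style="text-align: justify;">"Almost doubled" over a decade implies a compound growth rate of roughly 7 percent a year. A quick sketch (the excerpt does not give exact barrel counts, so a clean doubling is assumed):</p>

```python
# "Almost doubled" over ten years, normalized to 1.0 -> 2.0 since the
# excerpt does not give exact production figures.
start, end, years = 1.0, 2.0, 10
cagr = (end / start) ** (1 / years) - 1
print(round(cagr * 100, 1))  # → 7.2 percent per year
```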
<p style="text-align: justify;"><em>We do want more all the time, but not more resources. Alfred Marshall was right, but William Jevons was wrong. <u>Our wants and desires keep growing, evidently without end, and therefore so do our economies</u>. But our use of the earth’s resources does not. </em></p>
<p style="text-align: justify;"><em>When fracking made natural gas much cheaper, total demand for coal in the United States went down even though coal’s price decreased.</em></p>
<p style="text-align: justify;"><em>If we use more renewable energy, we’ll be replacing coal, gas, oil, and uranium with photons from the sun (solar power) and the movement of air (wind power) and water (hydroelectric power) on the earth. <u>All three of these types of power are also among dematerialization’s champions, since they use up essentially no resources once they’re up and running</u>.</em></p>
<p style="text-align: justify;"><em>Neither the fracking revolution nor the world-changing impact of the iPhone’s introduction was well understood in advance<u>. Both continued to be underestimated even after they occurred. The iPhone was introduced in June of 2007, with no shortage of fanfare from Apple and Steve Jobs. Yet several months later the cover of Forbes was still asking if anyone could catch Nokia</u>.</em></p>
<p style="text-align: justify;"><em><u>As the Second Machine Age progresses, dematerialization accelerates</u>. Erik and I coined the phrase Second Machine Age to draw a contrast with the Industrial Era, which as we’ve seen transformed the planet by allowing us to overcome the limitations of muscle power. </em></p>
<p style="text-align: justify;"><em><u>Hardware, software, and networks let us slim, swap, optimize, and evaporate. I contend that they’re the best tools we’ve ever invented for letting us tread more lightly on our planet</u>.</em></p>
<p style="text-align: justify;"><em><u>Like knowledge itself, technologies accumulate</u>. … <u>Like innovation itself, technologies are combinatorial; most of them are combinations or recombinations of existing things</u>. This implies that the number of potentially powerful new technologies increases over time because the number of available building blocks does.</em></p>
<p style="text-align: justify;"><em>For our purposes, <u>capitalism is a way to come up with goods and services and get them to people</u>. Every society that doesn’t want its people to starve or die of exposure has to accomplish this task; capitalism is simply one approach to doing it.</em></p>
<p style="text-align: justify;"><em>The phrase most closely associated with capitalism is <strong>voluntary exchange</strong>. People can’t be forced to buy specific products, take a certain job, or move across the country. Companies don’t have to sell themselves if they don’t want to.</em></p>
<p style="text-align: justify;"><em>The biggest difference between rich and poor countries might be whether laws are clearly and consistently enforced. <u>Poorer countries don’t lack laws; they often have extensive legal codes. What’s in short supply is justice for all</u>. </em></p>
<p style="text-align: justify;"><a href="https://www.amazon.com/Limits-Growth-Project-Predicament-Mankind/dp/0876632223/ref=tmm_hrd_swatch_0?_encoding=UTF8&amp;qid=1575892193&amp;sr=8-2"><em>The Limits to Growth</em></a><em>, published in 1972, assumed that exponential consumption would continue, and that the planet would run out of gold within twenty-nine years of 1972; silver within forty-two years; copper and petroleum within fifty; and aluminum within fifty-five. … Known aluminum reserves are almost twenty-five times what they were in the early 1970s. How could these predictions about resource availability, which were taken seriously when they were released, have been so wrong? <u>Because the Limits to Growth team pretty clearly underestimated both <strong>dematerialization</strong> and the endless search for new reserves</u>.</em></p>
<p style="text-align: justify;"><strong>Chapter 8: Adam Smith Said That: A Few Words About Capitalism</strong></p>
<p style="text-align: justify;"><em>The <u>impulse to acquisition, pursuit of gain, of money, of the greatest possible amount of money, has in itself nothing to do with capitalism</u>. This impulse exists and has existed among waiters, physicians, coachmen, artists, prostitutes, dishonest officials, soldiers, nobles, crusaders, gamblers, and beggars. One may say that it has been common to all sorts and conditions of men at all times and in all countries of the earth, wherever the objective possibility of it is or has been given. It should be taught in the kindergarten of cultural history that this naïve idea of capitalism must be given up once and for all. —<u>Max Weber, The Protestant Ethic and the Spirit of Capitalism, 1905</u></em></p>
<p style="text-align: justify;"><em><u>Capitalism is not popular these days</u>. In a 2016 survey, <u>a majority of Americans between the ages of nineteen and twenty-eight said that they didn’t support it</u>; in a follow-up survey, capitalism found majority support only among Americans over fifty.</em></p>
<p style="text-align: justify;"><em>So, with Adam Smith as a guide, let’s look at three valid critiques of capitalism, and three invalid ones. First, the valid criticisms: Capitalism is selfish. Yes, it absolutely is. But as Smith points out, this is a good thing. … <u>Self-interest is not a flaw in capitalism, it’s a central feature</u>. </em></p>
<p style="text-align: justify;"><em>Capitalism is unequal. Without question, it is. As Smith observed, “Wherever there is great property, there is great inequality.” … For now, I just want to note how insightful Smith was about <u>one of inequality’s most serious consequences: a feeling of not belonging and not participating, of being shut out of larger communities</u>.</em></p>
<p style="text-align: justify;"><em>Smith saw that government had a role to play in making sure that competitors don’t become cronies—close friends who collude to all get rich together by simultaneously raising prices. </em></p>
<p style="text-align: justify;"><em>Capitalism will cause great prosperity to blossom, <u>but only in a properly tended garden</u>. <u>Laws and courts are needed to protect the rights, property, and contracts of society’s weaker members</u>; violence and the threat of violence can’t be tolerated; and taxes are necessary even though they’re unwelcome. </em></p>
<p style="text-align: justify;"><em>Social is fine. Socialism is a catastrophe.</em></p>
<p style="text-align: justify;"><em>Hayek used this insight to shoot down the idea of socialism in 1977: <u>“I’ve always doubted that the socialists had a leg to stand on intellectually.</u>… Once you begin to understand that prices are an instrument of communication and guidance which embody more information than we directly have, the whole idea that you can bring about the same order… by simple direction falls to the ground.… I think that intellectually there is just nothing left of socialism.”</em></p>
<p style="text-align: justify;"><em>“In Venezuela, there is no war, nor strike. What’s left of the oil industry is crumbling on its own” because of incompetence and corruption. The IMF predicted that Venezuelan inflation could hit 13,000 percent in 2018, but this estimate proved far too conservative. By November of that year, the annual inflation rate was 1,290,000 percent. Three months later, the IMF estimated that it was 10 million percent.</em></p>
<p style="text-align: justify;"><em>UK prime minister Margaret Thatcher famously observed in 1976, “<u>The trouble with socialism is that eventually you run out of other people’s money</u>.” … The Problem with Capitalism Is That There Isn’t Enough of It.</em></p>
<p style="text-align: justify;"><strong>Chapter 9: What Else Is Needed? People and Policies</strong></p>
<p style="text-align: justify;"><em>In <u>November of 2017</u>, during an annual period of heavy pollution, the air in Delhi turned bad enough to cause traffic accidents. As was the case in Donora almost seventy years earlier, <u>drivers couldn’t see one another through the gray haze. Schools were finally shut down, but not as a result of any coordinated government action. The shutdown happened only because the deputy chief minister of Delhi State saw children vomiting out the windows of their school bus</u>.</em></p>
<p style="text-align: justify;"><em>Someone who has studied the bet between Julian Simon and Paul Ehrlich might respond that whatever our feelings and beliefs toward animals, <u>there’s no reason to be too concerned about extinctions</u>. </em></p>
<p style="text-align: justify;"><em>I hope we’ll save the elephants, too. Africa had an estimated <u>26 million</u> elephants when Europeans started exploring and exploiting the continent in the 1500s. Our fondness for ivory products and hunting trophies reduced the population to about 10 million by 1913 and 1.3 million by 1979. Weak enforcement of antipoaching laws, large-scale illegal trade in ivory, and rapidly rising incomes in China (the world’s largest ivory market) continued to drive down populations; the comprehensive Great Elephant Census, completed in 2016, <u>counted just over 350,000 animals across the continent</u>.</em></p>
<p style="text-align: justify;"><em><u>Forbidding GMOs is bad not only for the environment but also for people</u>. This is probably easiest to see in the case of golden rice, a strain of rice genetically modified to produce beta-carotene, a precursor to vitamin A. Vitamin A is critically important for young children, yet many Asian and African infants weaned on rice gruel don’t get enough of it. UNICEF estimates that approximately half a million children become blind each year because of <strong>vitamin A deficiency</strong>, half of whom die within a year of losing their sight. In total, the deficiency is thought to cause more than a million deaths annually.</em></p>
<p style="text-align: justify;"><em>Under Trump, the federal government was responsive neither to the best available evidence on climate change nor to the will of its people. It was instead apparently <u>guided by Trump’s belief that “the concept of global warming was created by and for the Chinese in order to make U.S. manufacturing non-competitive,” as he tweeted in 2012</u>.</em></p>
<p style="text-align: justify;"><em>A critic might respond that <u>caring about the environment is a luxury that only rich countries can afford</u>. While there’s some truth to this, it dodges a fundamental question: <u>Why did some countries become rich, but not others</u>?</em></p>
<p style="text-align: justify;"><em>In my view the best answer to this question comes from the work of the economist Daron Acemoglu and political scientist James Robinson, summarized in their book <strong>Why Nations Fail</strong>. They argue that the <u>differences between rich countries and poor ones</u> &#8230; stem from <u>differences in their institutions</u>.</em></p>
<p style="text-align: justify;"><em>The author and self-described “rational optimist” <strong>Matt Ridley</strong> makes a stark comparison: “<u>A car today emits less pollution traveling at full speed than a parked car did from leaks in 1970</u>.”</em></p>
<p style="text-align: justify;"><em>The only thing worse than the scale and timing of the Soviet whale hunt was its utter pointlessness. <u>Russians have never had a taste for whale meat</u>. So while Japanese whalers turned 90 percent of the whales they harpooned into products, <u>Soviet crews only took the animals’ blubber (approximately 30 percent of their weight) and tossed the rest of the carcass back into the sea</u>. In The Truth About Soviet Whaling: A Memoir, Berzin documented how the USSR’s extensive and unresponsive economic planning bureaucracy doomed so many whales. Whaling was considered part of the fisheries industry, and fishing ships were evaluated not on market demand for their catch—<u>Soviet central planners loudly and proudly rejected market signals such as supply, demand, and price as valid elements of an economy—but instead on gross tonnage, or the total weight of whales killed. So plans for growth in USSR fisheries were simply plans for more and more dead whales, no matter their use</u>. </em>[Note: another example of <em>Goodhart’s Law</em> in practice]</p>
<p style="text-align: justify;"><strong>Chapter 10:  The Global Gallop of the Four Horsemen</strong></p>
<p style="text-align: justify;"><em>People with access to a device such as this can do much more than just communicate. They can also compute and access the substantial portion of humanity’s accumulated knowledge that’s now available for free on the Internet. These are powerful capabilities, reserved until recently for the global elite. As the author and entrepreneur Peter Diamandis observed in 2012, “Right now, a Maasai warrior on a mobile phone in the middle of Kenya has better mobile communications than the president did twenty-five years ago. If he’s on a smartphone using Google, he has access to more information than the US president did just fifteen years ago.”</em><em> </em></p>
<p style="text-align: justify;"><em>“1991… deserves its spot in the annals of economic history alongside December 1978, when China’s Communist Party approved the opening up of its economy, or even May 1846, when Britain voted to repeal the Corn Laws.” India’s 840 million people quickly found themselves operating in a transformed economic environment—one with a great deal less central planning and more free-market entry, competition, and voluntary exchange.</em></p>
<p style="text-align: justify;"><em>Between 1978 and 1991, then, more than 2.1 billion people—about 40 percent of the world’s 1990 population—began living within substantially more capitalist economic systems. This is certainly the largest and fastest shift toward economic freedom that the world has ever seen. It’s even bigger and more abrupt than the adoption of communism by the Soviet Union and China, which unfolded over the more than three decades between Lenin’s 1917 Bolshevik Revolution and the final victory of Mao’s army in 1949.</em><em> </em></p>
<p style="text-align: justify;"><em>As Evans writes, “<u>When Tele São Paulo was privatized</u>, with Telefónica buying it, there was a waiting list of 7 million lines, out of a population of 20 million.… As well as the 7 million people waiting for a line, it was routine for your number to be swapped with someone else, just because.” <u>Telebrás also appeared to be padding its payrolls by more than a bit</u>: “Telefónica worked out <u>there wasn’t enough room in the headquarters building for all the people listed as working there to physically fit</u>.” Meanwhile, an estimated <u>45 percent of São Paulo’s businesses didn’t have a telephone line</u>. </em></p>
<p style="text-align: justify;"><em>Economist Max Roser calculates that <u>in 1988 41.4 percent of humanity lived in a democracy</u>. <u>Within eighteen years that figure increased almost 40 percent </u>… Although autocracies still governed more than 23 percent of the global population in 2015, there are fewer and fewer of them over time. And as Roser says, “It is worth pointing out that <u>four out of five people in the world that live in an autocracy live in China.</u>”</em></p>
<p style="text-align: justify;"><em> </em></p>
<p style="text-align: justify;"><strong>Chapter 11:  Getting So Much Better</strong></p>
<p style="text-align: justify;"><em>Once you have these tools, you can’t not use them.… You can delete the clichéd image from your brain of supplicant impoverished people not having control of their own lives. That’s not true. —Bono, TED Talk, 2013</em><em> </em></p>
<p style="text-align: justify;"><em>Max Roser’s </em><a href="https://ourworldindata.org/"><em>Our World in Data</em></a><em> is one of my favorite websites, for two reasons. The first is that it contains a lot of valuable information. The second is that it tells an invaluable story—an optimistic and hopeful one.</em></p>
<p style="text-align: justify;"><em>Why isn’t the good news sinking in? A few factors are at work. One is our basic human “<strong>negativity bias</strong>”: bad news makes a bigger impression on us and stays with us longer than does neutral or good news. Another factor is that the <u>press tends to emphasize sensationalistic news, which is often negative</u>. Journalism’s jaded motto is “If it bleeds, it leads.” One other important factor, I think, was identified by the British philosopher John Stuart Mill in an <u>1828 speech</u>: “I have observed that not the man who hopes when others despair, but <u>the man who despairs when others hope, is admired by a large class of persons as a sage</u>.” In many elite circles and publications <strong>negativity seems to be a sign of seriousness and rigor</strong>, while <u>optimism and positivity seem naive and under-informed</u>. Simon, Rosling, Pinker, Roser, and others have pushed back against this institutional negativity bias. They’ve done work that is both rigorous and positive. In fact, they’ve shown that doing rigorous work—looking systematically at the best available evidence—often compels you to be positive about many things because <u>the evidence is so encouraging</u>.</em></p>
<p style="text-align: justify;"><em><u>I’m not trying to make the case that things today are good enough. Because they’re certainly not</u>. The world has too many poor, hungry, and sick people. Too many children are malnourished and uneducated. Too many people, despite the laws on the books, are forced into indentured servitude and slavery. We continue to pump greenhouse gases into the atmosphere, dump plastic into the oceans, kill rare animals, cut down tropical forests, and otherwise befoul our planet. But we can document improvements without saying or implying that everything’s okay now. We should document the improvements because they tell us something critically important: what we’re doing is working and therefore we should keep doing it instead of contemplating huge course changes.</em><em> </em></p>
<p style="text-align: justify;"><em>… documented extinctions are relatively rare (with about 530 recorded within the past five hundred years) and appear to have <u>slowed down in recent decades</u>; for example, <u>no marine creatures have been recorded as extinct in the past fifty years</u>.</em></p>
<p style="text-align: justify;"><em>Ocean overfishing is a classic example of the “<strong>tragedy of the commons</strong>,” an unhappy phenomenon named in a 1968 Science article by the ecologist Garrett Hardin. Hardin defined a commons as a shared resource, such as a pasture or a body of water, that is available to many but owned by none. That open access sounds great but has a big problem: everyone has ample incentive to exploit the commons (by grazing cows on the pasture or taking fish from the water), but <u>because no one owns it, no one has the incentive to protect or sustain it</u>. So the strong tendency is for everyone to do the economically rational thing, which is to try to exploit it before it’s stripped bare. As they do this, they help strip it bare.</em></p>
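<p style="text-align: justify;">Hardin’s incentive trap can be sketched in a toy simulation (all parameters below are invented for illustration; none come from the book): a shared fish stock regrows logistically each year, and whether it persists or collapses depends only on total harvest pressure.</p>

```python
# Toy "tragedy of the commons" simulation (illustrative parameters only).
# A shared fish stock regrows logistically each year; every fisher then
# takes a fixed catch. Restraint leaves a stable stock; the individually
# "rational" larger catch strips the commons bare.

def simulate(n_fishers, catch_per_fisher, years=50,
             stock=1000.0, capacity=1000.0, growth=0.3):
    """Return the remaining stock after `years` of harvesting."""
    for _ in range(years):
        stock += growth * stock * (1 - stock / capacity)  # logistic regrowth
        stock = max(0.0, stock - n_fishers * catch_per_fisher)
    return stock

restrained = simulate(n_fishers=10, catch_per_fisher=5)   # within regrowth
exploited  = simulate(n_fishers=10, catch_per_fisher=12)  # beyond regrowth

print(f"stock after restraint:    {restrained:.0f}")  # settles near 790
print(f"stock after exploitation: {exploited:.0f}")   # collapses to 0
```

<p style="text-align: justify;">The point of the sketch is Hardin’s: each fisher’s extra catch is individually profitable right up until the shared stock is gone, which is why no one has an incentive to stop first.</p>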
<p style="text-align: justify;"><em><u>Parks and other protected areas made up only 4 percent of global land area in 1985, but by 2015, this figure had almost quadrupled, to 15.4 percent. At the end of 2017, 5.3 percent of the earth’s oceans were similarly protected</u>.  … Declaring that a piece of land or water is a park isn’t the only way to help out its living things. We can also interact with them less, which seems to suit them just fine. <u>For example, both the demilitarized zone between North and South Korea and the exclusion zone around the still-radioactive Chernobyl nuclear plant in Ukraine have seen animals thrive because humans are absent</u>.</em></p>
<p style="text-align: justify;"><em>We often see this pattern: richer countries have turned the corner, lessening their overall planetary footprint and reversing previous environmental harms, while poorer ones have not yet. This isn’t because poor people are indifferent to the environment. Instead, it’s because <u>poorer countries tend to have weaker institutions and less responsive governments</u>.</em></p>
<p style="text-align: justify;"><em><u>For the first time since the start of the Industrial Era, our planet is getting greener, not browner.</u> Since 2003, large-scale reforestation in Russia and China, growth in African and Australian savannas, and slowing tropical deforestation have combined to increase the amount of carbon-storing vegetation on Earth.</em></p>
<p style="text-align: justify;"><em>The planet’s most worrisome environmental issue is <u>global warming</u>. Sustainability scientist <strong>Kim Nicholas</strong> has beautifully summarized key points about climate change on a sign she takes to marches and rallies. Titled “</em><a href="http://www.kimnicholas.com/climate-change.html"><em>Climate Science 101</em></a><em>,” it reads: <u>It’s Warming. It’s Us. We’re Sure. It’s Bad. We Can Fix It.</u></em></p>
<p style="text-align: justify;"><em>We Can Fix It because It’s Pollution, and we know how to deal with that negative externality. … Worldwide, over 20 percent of greenhouse gas emissions come from industry, 6 percent from buildings, 14 percent from transportation, 24 percent from agriculture, and 25 percent from electricity and heat production. </em></p>
<p style="text-align: justify;"><em>As India’s Indira Gandhi said in 1972 at the United Nation’s first conference on the environment, “<u>Poverty is the biggest polluter</u>.” <u>So as poverty declines, so, too, will pollution</u>.</em></p>
<p style="text-align: justify;"><em><u>In 1999, 1.76 billion people were living in extreme poverty. Just sixteen years later, this number had declined by 60 percent, to 705 million</u>.</em></p>
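<p style="text-align: justify;">The quoted decline checks out as simple arithmetic:</p>

```python
# Check the quoted poverty figures: 1.76 billion people in extreme poverty
# vs. 705 million sixteen years later.
before, after = 1.76e9, 705e6
decline = 1 - after / before
print(f"decline: {decline:.0%}")  # -> decline: 60%
```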
<p style="text-align: justify;"><em>… <u>in 1986 fewer than half of the world’s teenagers were in school; at present, more than 75 percent are</u>.</em></p>
<p style="text-align: justify;"><em>…global <strong>life expectancy</strong> was about 28.5 years in 1800. Over the next 150 years, that number increased by 20 years. Then, <u>in the years between 1950 and 2015, it increased by 25 more</u>.</em></p>
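<p style="text-align: justify;">Tallying the quoted gains (the book’s figures, simply added up) puts global life expectancy near 48.5 years by 1950 and roughly 73.5 years by 2015:</p>

```python
# Running tally of the life-expectancy gains quoted above.
le_1800 = 28.5
le_1950 = le_1800 + 20   # gain over the next 150 years
le_2015 = le_1950 + 25   # further gain between 1950 and 2015
print(le_1950, le_2015)  # -> 48.5 73.5
```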
<p style="text-align: justify;"><em>Today, we still have desperately poor regions, failed states, and the decimations of war. But <u>in no region today is the child mortality rate higher than the world’s average rate was in 1998</u>.</em></p>
<p>&nbsp;</p>
<p style="text-align: justify;"><strong>Chapter 12:  Powers of Concentration</strong></p>
<p style="text-align: justify;"><em>Lewis Dijkstra of the European Commission (EC) concluded, “Everything you heard about urbanization is wrong.” Dijkstra and his colleagues found that by 2015 the world was already 84 percent urbanized, and that contrary to previous estimates Asia, Africa, and Oceania were already more urbanized than both North America and Europe.</em></p>
<p style="text-align: justify;"><em>After standardizing what urban meant and looking at the entire planet using satellite data, a very different picture emerged of where people live. We’re not on our way to becoming a city-dwelling species—we’re already largely there.</em></p>
<p style="text-align: justify;"><em>The <strong>service industries</strong>, meanwhile, continued to need ever more people. “Service industry” is a category so broad as to be almost meaningless: it includes everything from investment banking to software programming to dry cleaning to dog walking. Most service industries do have two important things in common: <u>many of their jobs have been harder to automate</u> (no dog-walking robot is commercially available yet, as far as I know), and they <u>rely heavily on in-person interactions</u>. </em></p>
<p style="text-align: justify;"><em><u>The US presidential election of 2016 provided a vivid illustration of <strong>concentration</strong></u>—both of the country’s population and its economy. Despite winning almost 3 million more votes than Republican candidate Donald Trump, Democratic candidate Hillary Clinton won a majority of the vote in fewer than five hundred counties. These counties, however, together generated 64 percent of the country’s economy. The more than twenty-five hundred counties won by Trump were responsible for only a bit more than a third of the American economy.</em></p>
<p style="text-align: justify;"><em>The American board game <u>Monopoly traces its roots to 1903</u>, when the Landlord’s Game was invented by Elizabeth Magie to illustrate the <strong>problems of concentrating land ownership</strong>. The game eventually became quite popular. It was played frequently and earnestly during the 1970s by a group of students at the University of Chicago. When one asked the free-market-loving (and Nobel Prize–winning) economist <strong>Milton Friedman</strong> to sign the player’s Monopoly set, Friedman obliged, but <u>also wrote “down with” before the game’s title</u>.</em></p>
<p style="text-align: justify;"><em>… industrial concentration is rising globally not because of a decline in capitalism and tech progress, but rather because of an increase in them. <u>Recent tech progress is so profound that it’s changing the nature of competition</u>, and this change is reflected in the concentration evidence. <u>So it’s not that competition is decreasing (due to bad government policies, weak antitrust enforcement, or other causes) and a new crop of lazy monopolists is being grown. Instead, technology-fueled competition is fierce, and a new generation of sophisticated leading firms is being forged</u>. <u>A few companies have become much more productive and have started paying much higher salaries (two developments that are closely related), while the rest have seen near-stagnant productivity and pay</u>.</em></p>
<p style="text-align: justify;"><em>Van Reenen writes, “Many of the patterns are consistent with a… view where many industries have become ‘winner take most/all’ due to globalization and new technologies rather than a generalized weakening of competition due to relaxed antitrust rules or rising regulation.” Erik Brynjolfsson and I agree with this view. We argued in 2008 that concentration was increasing because of tech progress, and that it would continue to do so.</em><em> </em></p>
<p style="text-align: justify;"><em>So a period of broad, deep, and fast tech progress such as we’re experiencing during this Second Machine Age should be expected to generate both <strong>superstars </strong>and<strong> zombies</strong> in industries around the world. </em></p>
<p style="text-align: justify;"><em><u>Most Americans, however, don’t own stock in Amazon. Or in any other company.</u> Economist Edward Wolff found that, in 2016, 50.7 percent of US households owned no stocks at all, either directly or in retirement accounts. So all stock market wealth is concentrated in less than half of America’s households. Even within this group, there’s a lot of concentration: Wolff found that the top 10 percent of US households owned 84 percent of total stock market wealth in 2016. <u>High concentration here means high inequality; when stock ownership is closely held by a relatively small group of people and share prices increase, members of that group become much wealthier than everyone else.</u></em></p>
<p style="text-align: justify;"><strong>A Capital under Attack</strong></p>
<p style="text-align: justify;"><em>A social scientist would say that what Mattis has observed is a decline in social capital in the United States and elsewhere. That term, which has been in use since the turn of the twentieth century, is well defined by the sociologist Robert Putnam as “<u>connections among individuals—social networks and the norms of reciprocity and trustworthiness that arise from them</u>.”</em></p>
<p style="text-align: justify;"><em>Between 1958 and 2015, the Pew Research Center found that <u>public trust in the federal government fell from about 73 percent to about 19 percent</u>.</em></p>
<p style="text-align: justify;"><strong>Chapter 13:  Stressed Be the Tie that Binds:  Disconnection</strong></p>
<p style="text-align: justify;"><em>Early in the Second Machine Age, however, Robert Putnam found something very different. He observed a decline in just about all forms of voluntary association, even recreational sports usually played in groups. As the title of his 2000 book put it, we were </em><a href="https://www.amazon.com/Bowling-Alone-Collapse-American-Community/dp/0684832836/ref=tmm_hrd_swatch_0?_encoding=UTF8&amp;qid=1575889109&amp;sr=8-1"><em>Bowling Alone</em></a><em>.</em></p>
<p style="text-align: justify;"><em><u>The US suicide rate rose by 14 percent between 2009 and 2016</u>, when it reached a level not seen since the end of World War II. Overdose deaths have climbed even more quickly. They almost doubled between 2008 and 2017, when more than 72,000 people lost their lives to an overdose. <u>This is far more than the 58,220 American military deaths recorded throughout the Vietnam War</u>.</em></p>
<p style="text-align: justify;"><em>According to the Centers for Disease Control, in 2016, <u>197,000 deaths were related to suicide, alcohol, and drug abuse. This was more than four times the 44,674 people who died from HIV/AIDS at the peak of its epidemic in 1994</u>.</em></p>
<p style="text-align: justify;"><em>Durkheim was adamant that “dropping out of society” (to use an appropriate but unscientific expression) was a primary cause of suicide, and more than a century of accumulated evidence and research provide a great deal of support for this view. In 2018, the World Health Organization found that “a sense of isolation” was strongly associated with suicide risk around the world.</em></p>
<p style="text-align: justify;"><em>As Johann Hari, a writer and researcher on the global “war on drugs” puts it, “The opposite of addiction isn’t sobriety, it’s connection.”</em></p>
<p style="text-align: justify;"><em><u>Recent election results across countries as dissimilar as the United States, Poland, Turkey, Hungary, the Philippines, and Brazil indicate a growing global desire for authoritarian leaders</u>.</em></p>
<p style="text-align: justify;"><em>In Suicide, Durkheim maintained that companies were important institutions for maintaining social capital during the upheavals of the Industrial Era: “The corporation has everything needed to give the individual a setting, to draw him out of his state of moral isolation.” As concentration continues in the Second Machine Age and establishments and jobs in industries such as manufacturing decrease, such settings become fewer. It’s not surprising that suicides would increase as “moral isolation” does.</em><em> </em></p>
<p style="text-align: justify;"><em>While both Al Gore in 2000 and Hillary Clinton in 2016 won the popular vote but didn’t win a majority of electoral college votes (and so did not become president), <u>this outcome also occurred three times in the nineteenth century.</u></em></p>
<p style="text-align: justify;"><em><u> </u></em></p>
<p style="text-align: justify;"><em>As Steven Sloman and Philip Fernbach explain in their book, </em><a href="https://www.amazon.com/Knowledge-Illusion-Never-Think-Alone/dp/039918435X/ref=tmm_hrd_swatch_0?_encoding=UTF8&amp;qid=1575889193&amp;sr=8-1"><em>The Knowledge Illusion</em></a><em>, many people believe that they have a good idea how a flush toilet works, <u>but few can actually explain the mechanisms by which that device carries away waste and refills with water</u>.</em><em> </em></p>
<p style="text-align: justify;"><em>“As people seek out the social settings they prefer—as they choose the group that makes them feel the most comfortable—the nation grows more politically segregated &#8230;” </em></p>
<p style="text-align: justify;"><em> </em></p>
<p style="text-align: justify;"><strong>Chapter 14:  Looking Ahead:  The World Cleanses Itself this Way</strong></p>
<p style="text-align: justify;"><em>“This is my long-run forecast in brief: the material conditions of life will continue to get better for most people, in most countries, most of the time, indefinitely.” —<strong>Julian Simon</strong>, WIRED, 1997</em></p>
<p style="text-align: justify;"><em>The Pythagorean theorem, a design for a steam engine, and a recipe for delicious chocolate chip cookies <u>aren’t ever going to get “used up” no matter how much they’re used</u>.</em></p>
<p style="text-align: justify;"><em>“The most interesting positive implication of the model is that <u>an economy with a larger total stock of human capital will experience faster growth</u>.”</em></p>
<p style="text-align: justify;"><em>But Romer showed that as long as that economy continued to add to its human capital—the overall ability of its people to come up with new technologies and put them to use—it could actually grow faster even as it grew bigger. This is because the stock of useful, nonrivalrous, nonexcludable ideas would keep growing. As Romer convincingly showed, <u>economies run and grow on ideas</u>.</em><em> </em></p>
<p style="text-align: justify;"><em>The operating system that powers most non-Apple smartphones is Android, which is both free to use and freely modifiable. Google’s parent company, Alphabet, developed and released <strong>Android</strong> without even trying to make it excludable; the explicit goal was to make it as widely imitable as possible. This is an example of the broad trend across digital industries of giving away valuable technologies for free.</em></p>
<p style="text-align: justify;"><em>Contributors to efforts such as these have a range of motivations (Alphabet’s goals with Android were far from purely altruistic—among other things, the parent of Google wanted to achieve a quantum leap in mobile phone users around the world, who would avail themselves of <strong>Google Search</strong> and services such as <strong>YouTube</strong>), but they’re all part of the trend of technology without excludability, which is great news for growth.</em></p>
<p style="text-align: justify;"><strong><em>Google’s</em></strong><em> chief economist, Hal Varian, points out that hundreds of <u>millions of how-to videos are viewed every day on YouTube</u>, saying, “<u>We never had a technology before that could educate such a broad group of people anytime on an as-needed basis for free</u>.”</em></p>
<p style="text-align: justify;"><em>Romer’s work leaves me hopeful because it shows that <u>it’s our ability to build <strong>human capital</strong>, rather than chop down forests, dig mines, or burn fossil fuels that drives growth and prosperity</u>.</em></p>
<p style="text-align: justify;"><em>The world still has billions of desperately poor people, but <u>they won’t remain that way</u>. All available evidence strongly suggests that <u>most will become much wealthier in the years and decades ahead</u>. </em></p>
<p style="text-align: justify;"><em>Malthus’s and Jevons’s ideas gave way to Romer’s, and the world will never be the same.</em></p>
<p style="text-align: justify;"><em><u>Predicting exactly how technological progress will unfold is much like predicting the weather: feasible in the short term, but impossible over a longer time</u>. Great uncertainty and complexity prevent precise forecasts about, for example, the computing devices we’ll be using thirty years from now or the dominant types of artificial intelligence in 2050 and beyond.</em></p>
<p style="text-align: justify;"><em>Because <strong>3-D printing</strong> generates virtually no waste and doesn’t require massive molds, it <u>accelerates dematerialization</u>.</em></p>
<p style="text-align: justify;"><em>The same technologies that power today’s small drones can be scaled up to build “air taxis” with as many as eight propellers and no pilot. Such contraptions sound like science fiction today, <u>but they might be carrying us around by midcentury</u>.</em></p>
<p style="text-align: justify;"><em><u>Opposition to genetically modified organisms</u> is fierce in some quarters, but isn’t based on reason or science. This opposition will, one hopes, fade.</em></p>
<p style="text-align: justify;"><em>The battle against global warming isn’t the only battle we need to fight and win during these decades. We also need to fight against pollution of the air, water, and land so that we and the rest of life on Earth can be healthier. We need to reduce our planetary footprint and give land back to nature so that forests can regrow and animals can move back in. We should leave fewer scars on the land from mines, wells, and clear-cut timberland. We should learn to use less energy if generating that energy also generates greenhouse gases and other pollutants. And we need to continue to lift people out of poverty, reduce mortality rates and disease burdens, ensure clean water and sanitation, give more people more education, increase economic opportunities, and improve the human condition in countless other ways.</em></p>
<p style="text-align: justify;"><em>Like Julian Simon in 1980, I’m willing to make monetary bets. <u>Simon believed that prices for natural resources would decline and was willing to put his money behind his predictions</u>. My bets are about quantities rather than prices. I believe that America’s total consumption of most natural resources will go down in the years ahead, and I am willing to put money on it. I’ll also wager that greenhouse gas emissions will decrease in the United States, and that the country’s environmental footprint will shrink in other ways as well.</em></p>
<p style="text-align: justify;"><em>Details about these bets—what data they draw on, how quantities are calculated, how payouts will be handled, and so on—are available at the <u>Long Bets website (longbets.org).</u> This site is also where you can sign up for one or more bets if you’re interested, and confident enough that these predictions are wrong. Each bet can be between $50 and $1,000. I’m putting up $100,000 of my own money.</em></p>
<p style="text-align: justify;"><em> </em></p>
<p style="text-align: justify;"><strong>Chapter 15:  Interventions:  How to be Good</strong></p>
<p style="text-align: justify;"><em><u>Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has</u>. —Attributed to <strong>Margaret Mead</strong> (1901–78)</em></p>
<p style="text-align: justify;"><em>A study published in the Lancet in 2007 found that over the previous fifteen years death rates from pollution were generally hundreds of times lower for nuclear power than for coal, gas, or oil, and that accident rates were also comparatively low for nuclear. As Shellenberger points out, “<u>Nobody died from radiation at Three Mile Island or Fukushima, and fewer than fifty died from Chernobyl in the thirty years since the accident</u>.”</em><em> </em></p>
<p style="text-align: justify;"><em>As the social entrepreneur Leila Janah puts it, “<u>Talent is equally distributed; opportunity is not.</u>”</em></p>
<p style="text-align: justify;"><em>Salesforce has also announced its intentions to move away entirely from fossil fuel energy sources by 2022. Other large technology companies, including Apple, Facebook, and Microsoft, have similar plans. In 2017 Google reached 100 percent renewable energy for all global operations including both their data centers and offices, becoming the world’s largest corporate buyer of renewable power.</em><em> </em></p>
<p style="text-align: justify;"><em>CEOs and other members of the business community don’t need to be encouraged to keep pursuing dematerialization. They’re going to do this anyway.</em></p>
<p style="text-align: justify;"><em><u>We believe things because the people around us believe them</u>, or members of our political tribe do, or members of the opposite political tribe believe the opposite. <u>Many of us believe things because we have an inherently zero-sum perspective</u>: if someone is doing better, it must be because someone else is doing worse. Most of us are more likely to believe things if we hear them enough times, since <u>we have a glitch in our mental hardware to mistake familiarity for truth</u>. Similarly, we believe a lot of things because our <u>innate negativity bias is reinforced by a constant stream of dire headlines</u>, expert predictions of decline and doom, and vivid images of things going wrong.</em></p>
<p style="text-align: justify;"><em>So is adopting a vegan diet, though few seem willing to abandon altogether foods made from animals: <u>in 2018, only 3 percent of Americans identified as vegan</u>. Short of this extreme step, <u>eating less beef and dairy would help reduce greenhouse gases</u>. As Linus Blomqvist of the ecomodernist think tank Breakthrough Institute puts it, “<u>A diet including chicken and pork, but no dairy or beef, has lower greenhouse gas emissions than a vegetarian diet that includes milk and cheese, and almost gets within spitting distance of a vegan diet</u>.” </em><em> </em></p>
<p style="text-align: justify;"><em>For many of us, the strong tendency when we interact with people who have different beliefs and moral foundations is to quickly try to show them why they’re wrong—why their logic is flawed, their evidence is fake news, and their beliefs are unsupportable. This almost never works. <u>It usually just makes other people dig in their heels and hold on to their existing beliefs even more strongly</u>. <strong>A lot of debate and discussion increases disconnection</strong>. A better way is to start by <strong>finding common ground</strong>. The psychologist Jonathan Haidt, whose work has been mentioned in prior pages several times already, highlights that people with both liberal and conservative moral foundations believe deeply that we have a responsibility to care.</em></p>
<p style="text-align: justify;"><strong> </strong></p>
<p style="text-align: justify;"><strong>Conclusion:  Our Next Planet</strong></p>
<p style="text-align: justify;"><em><u>We’ve sent robotic probes to every planet in this solar system. Earth is BY FAR the best one. —Jeff Bezos, Twitter, 2018</u></em></p>
<p style="text-align: justify;"><em>(Because grasslands keep the planet cooler than forests do, our current lack of mammoths and mastodons is bad news for climate change.)</em></p>
<p style="text-align: justify;"><em>Which is, as we’ve seen, to let us and our planet flourish. Jesse Ausubel, the first great scholar of dematerialization, counsels that “<strong>we must make Nature worthless.</strong>” He means, of course, that we should be working to make it economically worthless, so that it’s safe from the voracious attention of capitalism. Then we can enjoy its true worth.</em></p>
<p>The post <a href="https://www.vii-llc.com/2019/12/09/more-from-less-the-surprising-story-of-how-we-learned-to-prosper-using-fewer-resources-and-what-happens-next/">More from Less: The Surprising Story of How We Learned to Prosper Using Fewer Resources &#8211; and What Happens Next</a> appeared first on <a href="https://www.vii-llc.com">VII Capital Management</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Digital Transformation: Survive and Thrive in an Era of Mass Extinction Hardcover</title>
		<link>https://www.vii-llc.com/2019/08/27/digital-transformation-survive-and-thrive-in-an-era-of-mass-extinction-hardcover-july-9-2019/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=digital-transformation-survive-and-thrive-in-an-era-of-mass-extinction-hardcover-july-9-2019</link>
		
		<dc:creator><![CDATA[Adriano Almeida]]></dc:creator>
		<pubDate>Tue, 27 Aug 2019 17:17:31 +0000</pubDate>
				<category><![CDATA[Book Review]]></category>
		<category><![CDATA[Science & Technology]]></category>
		<guid isPermaLink="false">http://mia.art.br/victori/?p=410</guid>

					<description><![CDATA[<p>By Thomas Siebel, Jul/2019 (256p.) &#160; Founder of a salesforce automation software company, Siebel Systems, which was acquired by Oracle in September 2005, Tom Siebel is a credible source on...</p>
<p>The post <a href="https://www.vii-llc.com/2019/08/27/digital-transformation-survive-and-thrive-in-an-era-of-mass-extinction-hardcover-july-9-2019/">Digital Transformation: Survive and Thrive in an Era of Mass Extinction Hardcover</a> appeared first on <a href="https://www.vii-llc.com">VII Capital Management</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h5><span style="text-decoration: underline;"><em>By Thomas Siebel, Jul/2019 (256p.)</em></span></h5>
<p>&nbsp;</p>
<p style="text-align: justify;">Founder of a sales force automation software company, <a href="https://en.wikipedia.org/wiki/Siebel_Systems"><em>Siebel Systems</em></a>, which was acquired by Oracle in September 2005, Tom Siebel is a credible source on digital transformation, which he characterizes as the combination of four grossly overused buzzwords:  <strong><em>cloud</em></strong>, <strong><em>big data</em></strong>, <strong><em>IoT</em></strong>, and <strong><em>AI</em></strong>.  Packed with such buzzwords and often too promotional and optimistic (in my opinion), the book is still very informative and educational: it provides interesting insights and opinions, good historical context, and clarity on a complicated topic.  Written mainly for CEOs, the first half of the book was excellent, but toward the end, unfortunately, it became increasingly prescriptive and promotional.  Siebel&#8217;s new company, <a href="https://c3.ai/">C3.ai</a>, provides software for deploying enterprise AI solutions, mainly for large, capital-intensive companies in the energy, transportation, manufacturing, and utilities space.  The case studies he provides (like CAT, MMM, and DE) were somewhat underwhelming in my opinion.  I am willing to bet that outstanding companies like Intuit, Idexx, Mettler-Toledo, and Accenture, for instance, are well ahead of the reactive dinosaurs he profiles.  It was noteworthy (to me at least) that he never even mentions Accenture, a global leader in digital transformation &#8211; especially since he stresses the importance of the IT consultant and mentions several of Accenture&#8217;s competitors.  He also claimed that Zelle (one of his clients) was more popular in the US than Venmo, citing an article written in 2018 &#8211; but I do not think that was ever true.</p>
<p style="text-align: center;"><img loading="lazy" decoding="async" class="wp-image-899 size-full aligncenter" src="https://www.vii-llc.com/wp-content/uploads/2019/07/Idea-Hub-Book-Reviews-Digital-Transformation-Thomas-Siebel.jpg" alt="" width="420" height="285" srcset="https://www.vii-llc.com/wp-content/uploads/2019/07/Idea-Hub-Book-Reviews-Digital-Transformation-Thomas-Siebel.jpg 420w, https://www.vii-llc.com/wp-content/uploads/2019/07/Idea-Hub-Book-Reviews-Digital-Transformation-Thomas-Siebel-300x204.jpg 300w" sizes="(max-width: 420px) 100vw, 420px" /></p>
<p style="text-align: justify;">In summary, despite its minor shortcomings and potential inaccuracies, the book offers some precious insights on how technology evolves (in spurts), and why the current inflection point is so special.  All told, Siebel makes a convincing case for why one should stay bullish and focused on tech.  He also goes a long way in educating the reader on the industry and the technologies that underpin digital transformation.</p>
<p>Best,</p>
<p>Adriano</p>
<hr />
<p>&nbsp;</p>
<h5><em><span style="text-decoration: underline;">Highlighted Passages</span>:</em></h5>
<p>&nbsp;</p>
<p style="text-align: justify;">His high-level definition of “Digital Transformation”:  “<em>For now, suffice it to say that at the core of digital transformation is the confluence of four profoundly disruptive technologies—cloud computing, big data, the internet of things (IoT), and artificial intelligence (AI).”</em></p>
<p style="text-align: justify;">On <span style="text-decoration: underline;"><strong>Professor Daniel Bell’s early depiction of a <em>Post-Industrial society</em> (from 1973)</strong></span>:  <em>“Bell conceived of this idea before the advent of the personal computer, before the internet as we know it, before email, before the graphical user interface. He predicted that in the coming century, a new social framework would emerge based upon telecommunications that would change social and economic commerce; change the way that knowledge is created and distributed; and change the nature and structure of the workforce. … A post-industrial society is about the delivery of services. It is a game between people. It is powered by information, not muscle power, not mechanical energy:  If an industrial society is defined by the quantity of goods as marking a standard of living, the post-industrial society is defined by the quality of life as measured by the services and amenities—health, education, recreation, and the arts—that are now available for everyone.  The core element is the professional, as he or she is equipped with the education and training to provide the skills necessary to enable the post-industrial society.12 This portends the rise of the intellectual elite—the knowledge worker. Universities become preeminent. A nation’s strength is determined by its scientific capacity.”</em></p>
<p style="text-align: justify;">On the <span style="text-decoration: underline;"><strong>parallel between the evolution of species and technology</strong></span>:  <em>“Punctuated Equilibrium suggests that the absence of fossils is itself data, signaling abrupt bursts of evolutionary change rather than continuous, gradual transformations. According to Gould, change is the exception. Species stay in equilibrium for thousands of generations, changing very little in the grand scheme of things. This equilibrium is punctuated by rapid explosions of diversity, creating countless new species that then settle into the new standard.</em> … <em>The evidence suggests that we are in the midst of an evolutionary punctuation: We are witnessing a mass extinction in the corporate world in the early decades of the 21st century.”</em></p>
<p style="text-align: justify;">On the <span style="text-decoration: underline;"><strong>importance of the CEO (bullish for Accenture)</strong></span>:  “<em>I have witnessed many tech-adoption cycles over the past four decades. With the promise of performance improvements and productivity increases, such innovations were introduced to industry through the IT organization. Over months or years, and after multiple trials and evaluations, each gained the attention of the chief information officer, who was responsible for technology adoption. The CEO was periodically briefed on the cost and result. With 21st-century digital transformation, the adoption cycle has inverted. What I’m seeing now is that, almost invariably, corporate digital transformations are initiated and propelled by the CEO. Visionary CEOs, individually, are the engines of massive change. This is unprecedented in the history of information technology—possibly unprecedented in the history of commerce. Today, CEO-mandated digital transformation drives the company’s roadmap and goals.”</em></p>
<p style="text-align: justify;">On <span style="text-decoration: underline;"><strong>AirBnB and expanding TAMs</strong></span>:  <em>“In other cases, the disruption could create new, ancillary markets. Take Airbnb, for example. When it was first launched, many predicted that Airbnb would completely disrupt the hotel industry as travelers would increasingly choose to stay in private apartments and houses over hotels. But the hotel industry has not crumbled—in fact, it is still thriving. So is Airbnb. Airbnb’s actual impact has been in other areas: reducing the number of homes available in a neighborhood for people to live, potentially driving up the price of rent. Instead of competing with hotels, Airbnb is competing with renters. “Was that Airbnb’s intent? Almost certainly not,” writes journalist Derek Thompson. “But that is the outcome, anyway, and it is a meaningful—even, yes, disruptive—one. Airbnb is a transformative travel business. But most people failed to predict the thing it would transform—for good and bad.” </em></p>
<p style="text-align: justify;">On the <span style="text-decoration: underline;"><strong>significance of intellectual property</strong></span>:  <em>“In his book Dealing with Darwin, Moore uses the example of Tiger Woods to clarify core and context. There is no debate that Tiger Woods’s core business is his golfing, and his context business is marketing. While marketing generates a large amount of money for Woods, there could not even be marketing (the context) without his golfing (the core). Context helps to support and keep the core running, while core is a company’s competitive advantage. The general rule of thumb: Context means outsourcing, while core means intellectual property.”</em></p>
<p style="text-align: justify;">On the <span style="text-decoration: underline;"><strong>war for AI leadership</strong></span>:  <em>“Today the U.S. and China are engaged in a war for AI leadership. The outcome of that contest remains uncertain. China clearly is committed to an ambitious and explicitly stated national strategy to become the global AI leader.”</em></p>
<p style="text-align: justify;">On the <span style="text-decoration: underline;"><strong>growth of cloud computing</strong></span>:  <em>“In my professional experience I have never seen anything like the adoption rate of cloud computing. It is unprecedented. … How did that happen? I am not certain that I can explain the 180-degree turn at global scale in the span of a few years, but there is no question it happened. And as discussed previously, it is clearly reflected in the revenue growth of the leading cloud vendors. It is also clear that corporate leaders are afraid of cloud vendor lock-in. They want to be able to continually negotiate. They want to deploy different applications in clouds from different vendors, and they want to be free to move applications from one cloud vendor to another.”</em></p>
<p>&nbsp;</p>
<p>The post <a href="https://www.vii-llc.com/2019/08/27/digital-transformation-survive-and-thrive-in-an-era-of-mass-extinction-hardcover-july-9-2019/">Digital Transformation: Survive and Thrive in an Era of Mass Extinction Hardcover</a> appeared first on <a href="https://www.vii-llc.com">VII Capital Management</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
