The Divide Between Silicon Valley and Washington Is a National-Security Threat


Personnel work at the Air Force Space Command Network Operations and Security Center at Peterson Air Force Base in Colorado.

Closing the gap between technology leaders and policy makers will require a radically different approach from the defense establishment.

A silent divide is weakening America’s national security, and it has nothing to do with President Donald Trump or party polarization. It’s the growing gulf between the tech community in Silicon Valley and the policy-making community in Washington.

Beyond all the acrimonious headlines, Democrats and Republicans share a growing alarm over the return of great-power conflict. China and Russia are challenging American interests, alliances, and values—through territorial aggression; strong-arm tactics and unfair practices in global trade; cyber theft and information warfare; and massive military buildups in new weapons systems such as Russia’s “Satan 2” nuclear long-range missile, China’s autonomous weapons, and satellite-killing capabilities to destroy our communications and imagery systems in space. Since Trump took office, huge bipartisan majorities in Congress have passed tough sanctions against Russia, sweeping reforms to scrutinize and block Chinese investments in sensitive American technology industries, and record defense-budget increases. You know something’s big when senators like the liberal Ron Wyden and the conservative John Cornyn start agreeing.

In Washington, alarm bells are ringing. Here in Silicon Valley, not so much. “Ask people to finish the sentence, ‘China is a ____ of the United States,’” said the former National Economic Council chairman Keith Hennessey. “Policy makers from both parties are likely to answer with ‘competitor,’ ‘strategic rival,’ or even ‘adversary,’ while Silicon Valley leaders will probably tell you China is a ‘supplier,’ ‘investor,’ and especially ‘potential market.’”

In the past year, Google executives, citing ethical concerns, have canceled an artificial-intelligence project with the Pentagon and refused to even bid on the Defense Department’s Project JEDI, a desperately needed $10 billion IT-improvement program. While stiff-arming Washington, Google has been embracing Beijing, helping the Chinese government develop a more effective censored search engine despite outcries from human-rights groups, American politicians, and, more recently, its own employees. Since the 2016 presidential election, Facebook executives have been apologizing to Congress in public while waging a campaign to deny, delay, and deflect regulation and stifle critics in private.

Former Secretary of Defense Ash Carter, Google’s Eric Schmidt, Amazon’s Jeff Bezos, LinkedIn’s Reid Hoffman, Code for America’s Jen Pahlka, and others have been working hard to bridge the divide, bringing technology innovation to Washington and a sense of national service to the tech industry. But their efforts are nowhere near enough. The rift is real, deep, and a long time coming, because it’s really three divides converging into one.

There is a yawning civil-military relations gap between the protectors and the protected. When World War II ended, veterans could be found in seven out of 10 homes on a typical neighborhood street. Today it’s two. Less than half a percent of the U.S. population serves on active duty. A senior executive from a major Silicon Valley firm recently told us that none of the company’s engineers had ever seen anyone from the military.

It should come as no surprise that when people live and work in separate universes, they tend to develop separate views. The civil-military gap helps explain why many in tech companies harbor deep ethical concerns about helping warfighters kill people and win wars, while many in the defense community harbor deep ethical concerns about what they view as the erosion of patriotism and national service in the tech industry. Each side is left wondering, How can anyone possibly think that way? Asked last week what he would tell engineers at companies like Google and Amazon, Chairman of the Joint Chiefs of Staff General Joseph Dunford said, “Hey, we’re the good guys … It’s inexplicable to me that we wouldn’t have a cooperative relationship with the private sector.”

There’s a training gap between leaders in Washington, who are mostly lawyers struggling to understand recent technological advances, and leaders in Silicon Valley, who are mostly engineers struggling to understand the age-old dynamics of international power politics. Congress has 222 lawyers but just eight engineers. On the Senate Armed Services Committee, it’s even more stark. Of its 25 members, 17 are lawyers and just one is an engineer. (He’s actually the only engineer in the entire Senate.) In the past, policy makers didn’t have to work that hard to understand the essence of breakthrough technologies like the telegraph, the automobile, and nuclear fission. Sure, technology moved faster than policy, but the lag was more manageable. Digital technologies are different, spreading quickly and widely on the internet, with societal effects that are hard to imagine and nearly impossible to contain. Understanding these technologies is far more challenging, and understanding them fast is essential to countering Russia and China.

At the same time, today’s brightest young engineers barely remember 9/11, view the Cold War as ancient history rather than lived experience, and can get computer-science degrees at elite institutions without ever taking a course about cybersecurity or thinking about what is in the national interest. For technologists, technology holds the promise of a brighter future, not the peril of dark possibilities. Their overriding challenge is getting a breakthrough to work, not imagining how it could be used by bad actors in nefarious ways.

The congressional hearings with the Facebook CEO Mark Zuckerberg on April 10 and 11 brought the two perspectives—and the chasm between them—into full view. For the tech community, it was a jaw-dropping moment that revealed just how little members of Congress know about the products and companies that are transforming global politics, commerce, and civil society. Senator Orrin Hatch appeared surprised to learn that Facebook earned the majority of its revenue through ad sales. “How do you sustain a business model in which users don’t pay for your service?” Hatch asked quizzically. “Senator, we run ads,” replied Zuckerberg, his aides grinning behind him. Senator Lindsey Graham asked whether Twitter was the same thing as Facebook. Even Senator Brian Schatz, considered one of Congress’s tech aficionados, didn’t seem to know the difference between social media, email, and encrypted text messaging. As Ash Carter wrote, “All I can say is that I wish members [of Congress] had been as poorly prepared to question me on war and peace in the scores of testimonies I gave as they were when asking Facebook about the public duties of tech companies.”

For the policy-making community, the hearings were a jaw-dropping moment showing just how much naïveté and profits were driving Facebook’s decisions, and just how little Zuckerberg and his team ever considered the possibility that all sorts of bad actors could use their platform in all sorts of very bad ways. In his opening statement, Zuckerberg admitted, “Facebook is an idealistic and optimistic company. For most of our existence, we focused on all of the good that connecting people can do.” Zuckerberg added, “But it’s clear now that we didn’t do enough to prevent these tools from being used for harm.”

The third divide is generational. In Washington, power runs vertically and rests in the hands of gray eminences. In Silicon Valley, power runs horizontally and rests in the hands of wunderkinds and their friends. Steve Jobs was 21 years old when he started Apple with his buddy Steve Wozniak. Bill Gates quit college his junior year to start Microsoft. Zuckerberg launched Facebook from his dorm room as a sophomore. Larry Page and Sergey Brin were old men, starting Google at the age of 25. In the policy world, 30 years of experience usually makes you powerful. In the technical world, 30 years of experience usually makes you obsolete. Policy makers who think college engineering students should be grateful for the opportunity to shadow them and make photocopies during their summers have it all wrong. Interns on Capitol Hill answer phones. Interns at SpaceX launch rockets into orbit. For gray eminences silently lamenting in their Washington corner offices, “Who needs these whiny young Millennials?” the answer is: America does.

It’s hard to overstate just how foreign the worlds of Washington and Silicon Valley have become to each other. At the exact moment that great-power conflict is making a comeback and harnessing technology is the key to victory, Silicon Valley and Washington are experiencing a “policy makers are from Mars, tech leaders are from Venus” moment, with both sides unable to trust or understand each other. Even the dress codes are vexing and perplexing. In the tech industry, adults dress like college kids. Inside the Beltway, college kids dress like adults.

Closing this divide is a national-security imperative. And it requires thinking differently, generating inspiration rather than just regulation, and targeting the leaders of tomorrow, not just the leaders of today.

For starters, the Pentagon needs a messaging overhaul. Stop telling engineering students at top universities, “If you want to make money, go into industry, but if you want a mission bigger than yourself, work for me.” When Admiral Mike Rogers, who led the U.S. Cyber Command and the National Security Agency, gave this standard recruiting pitch to Stanford undergraduates a few years ago, it fell flat. It still does. We recently held a focus group of Stanford computer-science majors. When we tested the message on them, heads started shaking in a Wow, you just don’t get it kind of way. “One of the main reasons people pick companies is they want to do social good,” said Anna Mitchell, a senior. “People would laugh if the government said the only way to be impactful is to work in government.”

For these students and their peers, the desire for impact is real and deep. They believe that they can achieve large-scale change faster and better outside the government than within it. “A message suggesting a dichotomy of working in companies versus helping your country alienates a good portion of people on the fence,” Michael Karr, a Stanford junior, told us. “If you’re working on autonomous vehicles, you could be saving lives by making cars safer.” So what message does work? Giving them opportunities for impact at scale that don’t take a lifetime of moving up the ladder. Deploying the best young engineers against the toughest challenges, early. Telling them what Kevin tells potential recruits: If you do cyber operations for anyone else, you’ll get arrested. If you do them for me in the Air Force, you’ll get a medal.

The Pentagon also needs to create ambassadors, not lifers. More than getting technical experts into government for their entire careers, we need to get more national-security-minded engineers into tech companies. Winning hearts and minds in the tech world starts early, with new college graduates who are more open to new experiences that can last a lifetime. Imagine a Technology Fellows Program like the White House Fellows program, only younger. It would select the 50 most talented American engineering students graduating from college for a prestigious, one-year, high-impact stint in government service, working directly for senior leaders like the Air Force chief of staff, the secretary of defense, or the commander of U.S. forces in the Middle East.

Tech fellows would work on the most important projects and participate in special programs for their cohort to bond and form a lifelong network. “People really care about their cohort,” said Andrew Milich, a Stanford senior specializing in artificial intelligence. Tech fellows could defer company jobs or take a leave of absence, knowing that all the other fellows would be the best in the world who would also be heading back to industry. The goal isn’t for them to stay in government. The goal is for their government experience to stay with them. As one of our students told us, “Everyone has a friend at Google.” Imagine the ripple effects if these friend networks across the tech industry included tech-fellow alumni.

Doing it right won’t be easy. The Tech Fellows Program would have to be high on prestige and low on bureaucracy. Fellows would need flexibility to select projects that align with their values, not just their expertise. As the sophomore Gleb Shevchuk told us, “There has to be a transparent discussion of ethics. The program has to come off as a program that understands the concerns of people who dislike certain things the government is doing.” Google engineers may object to helping the Pentagon improve its targeting algorithms, but they might jump at the chance to defend U.S. satellites from attacks in space.

In addition, the program would have to reduce logistical pain points dramatically. Tech companies compete aggressively on quality-of-life dimensions for their workforce—locating in cities where top talent wants to live, providing free housing and transportation, and offering exciting programs outside of the job. The Tech Fellows Program would need to do the same. The National Security Agency has cutting-edge technological programs that would be a natural fit for tech fellows, but that’s a hard sell. The hot cities for attracting top engineers include Austin, Seattle, San Francisco, New York, and Denver—but not Fort Meade.

In the longer term, the Pentagon needs a radically new civilian talent model. Programs like the Air Force’s Kessel Run and the Defense Digital Service are breaking new ground to bring technology and tech talent into the Pentagon, but these programs are green shoots surrounded by red tape. Will Roper, the assistant secretary of the Air Force for acquisition, technology, and logistics, and someone who is no stranger to innovating inside the Defense Department, would like to see a much more fluid pathway in and out of industry and government. “I would invest to make the term revolving door superlative instead of pejorative,” he told a Georgetown class. “The people that we want are going to be people in industry that will want to come in and help us, and be able to go back out and come back in and help us, [so] that we’re continually refreshing the ideas, the creative thought … and right now we make it damn difficult to get in and out of government.”

These challenges are substantial, but small steps could have a big impact over time. Congress could start by holding hearings with the goal of writing the best proposals into the National Defense Authorization Act this year. And if Congress doesn’t take action, then the Pentagon should, creating a Rapid Capabilities Office dedicated to developing new civilian talent programs, just like it has for developing new technologies.

In 1957, the launch of Sputnik spawned a fear that an underfunded education system had allowed the U.S. to lose its technological advantage to the Soviets. A year after the launch, Congress passed the National Defense Education Act, increasing funding for science, mathematics, and foreign-language education at all levels and allowing for substantially more low-cost student loans. Within a decade, the number of college students in the U.S. had more than doubled, supercharging U.S. breakthroughs in the space race. What our national leaders realized in 1957 is still true today: What people know and how they think are just as important to the nation’s defense as the weapon systems we deploy.

Silicon Valley Is Turning Into Its Own Worst Fear


We asked a group of writers to consider the forces that have shaped our lives in 2017. Here, science fiction writer Ted Chiang looks at capitalism, Silicon Valley, and its fear of superintelligent AI.

This summer, Elon Musk spoke to the National Governors Association and told them that “AI is a fundamental risk to the existence of human civilization.” Doomsayers have been issuing similar warnings for some time, but never before have they commanded so much visibility. Musk isn’t necessarily worried about the rise of a malicious computer like Skynet from The Terminator. Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

In psychology, the term “insight” is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking, and it’s something most humans are capable of but animals are not. And I believe the best test of whether an AI is really engaging in human-level cognition would be for it to demonstrate insight of this kind.

Insight is precisely what Musk’s strawberry-picking AI lacks, as do all the other AIs that destroy humanity in similar doomsday scenarios. I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something almost every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight; we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”

Because corporations lack insight, we expect the government to provide oversight in the form of regulation, but the internet is almost entirely unregulated. Back in 1996, John Perry Barlow published a manifesto saying that the government had no jurisdiction over cyberspace, and in the intervening two decades that notion has served as an axiom to people working in technology. Which leads to another similarity between these civilization-destroying AIs and Silicon Valley tech companies: the lack of external controls. If you suggest to an AI prognosticator that humans would never grant an AI so much autonomy, the response will be that you fundamentally misunderstand the situation, that the idea of an ‘off’ button doesn’t even apply. It’s assumed that the AI’s approach will be “the question isn’t who is going to let me, it’s who is going to stop me,” i.e., the mantra of Ayn Randian libertarianism that is so popular in Silicon Valley.

The ethos of startup culture could serve as a blueprint for civilization-destroying AIs. “Move fast and break things” was once Facebook’s motto; they later changed it to “Move fast with stable infrastructure,” but they were talking about preserving what they had built, not what anyone else had. This attitude of treating the rest of the world as eggs to be broken for one’s own omelet could be the prime directive for an AI bringing about the apocalypse. When Uber wanted more drivers with new cars, its solution was to persuade people with bad credit to take out car loans and then deduct payments directly from their earnings. Uber positioned this as disrupting the auto loan industry, but everyone else recognized it as predatory lending. The whole idea that disruption is something positive instead of negative is a conceit of tech entrepreneurs. If a superintelligent AI were making a funding pitch to an angel investor, converting the surface of the Earth into strawberry fields would be nothing more than a long overdue disruption of global land use policy.

There are industry observers talking about the need for AIs to have a sense of ethics, and some have proposed that we ensure that any superintelligent AIs we create be “friendly,” meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook’s and Amazon’s goals were aligned with the public good. But I shouldn’t be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you’d do during the zombie apocalypse is more fun than thinking about how to mitigate global warming.

There have been some impressive advances in AI recently, like AlphaGo Zero, which became the world’s best Go player in a matter of days purely by playing against itself. But this doesn’t make me worry about the possibility of a superintelligent AI “waking up.” (For one thing, the techniques underlying AlphaGo Zero aren’t useful for tasks in the physical world; we are still a long way from a robot that can walk into your kitchen and cook you some scrambled eggs.) What I’m far more concerned about is the concentration of power in Google, Facebook, and Amazon. They’ve achieved a level of market dominance that is profoundly anticompetitive, but because they operate in a way that doesn’t raise prices for consumers, they don’t meet the traditional criteria for monopolies and so they avoid antitrust scrutiny from the government. We don’t need to worry about Google’s DeepMind research division, we need to worry about the fact that it’s almost impossible to run a business online without using Google’s services.

It’d be tempting to say that fearmongering about superintelligent AI is a deliberate ploy by tech behemoths like Google and Facebook to distract us from what they themselves are doing, which is selling their users’ data to advertisers. If you doubt that’s their goal, ask yourself, why doesn’t Facebook offer a paid version that’s ad free and collects no private information? Most of the apps on your smartphone are available in premium versions that remove the ads; if those developers can manage it, why can’t Facebook? Because Facebook doesn’t want to. Its goal as a company is not to connect you to your friends, it’s to show you ads while making you believe that it’s doing you a favor because the ads are targeted.

So it would make sense if Mark Zuckerberg were issuing the loudest warnings about AI, because pointing to a monster on the horizon would be an effective red herring. But he’s not; he’s actually pretty complacent about AI. The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue. Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted. (Of course, they saw nothing wrong with this strategy when they were the ones engaging in it; it’s only the possibility that someone else might be better at it than they were that gives them cause for concern.)

There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.

Which brings us back to the importance of insight. Sometimes insight arises spontaneously, but many times it doesn’t. People often get carried away in pursuit of some goal, and they may not realize it until it’s pointed out to them, either by their friends and family or by their therapists. Listening to wake-up calls of this sort is considered a sign of mental health.

We need the machines to wake up, not in the sense of computers becoming self-aware, but in the sense of corporations recognizing the consequences of their behavior. Just as a superintelligent AI ought to realize that covering the planet in strawberry fields isn’t actually in its or anyone else’s best interests, companies in Silicon Valley need to realize that increasing market share isn’t a good reason to ignore all other considerations. Individuals often reevaluate their priorities after experiencing a personal wake-up call. What we need is for companies to do the same — not to abandon capitalism completely, just to rethink the way they practice it. We need them to behave better than the AIs they fear and demonstrate a capacity for insight.

Tired? Troubled love life? Try banning the gadgets from the bedroom.


Late-night fiddling with devices stimulates your brain and invades what should be a quiet space. Time to turn off

Two films I watched at the London Film Festival this month jarred with me in an unexpected way. Drinking Buddies and Afternoon Delight are what might be called mumblecore movies – all improvised dialogue and plots that home in on relatively minor events in the emotional lives of their protagonists. I’ll spare you my reviews, but an incidental aspect of these self-consciously naturalistic portrayals of contemporary urban life depressed me. Namely, the proliferation of gadgetry in the bedroom, by which I do not mean sex toys.

In a scene from Drinking Buddies, for example, one half of a couple sits in bed one evening, catching up with her emails on a MacBook, while her boyfriend conducts a text conversation on his smartphone, thus rudely inviting interlopers into their intimate space. Technology similarly seeps into the bedroom in Afternoon Delight, with post-coital stressy business texting rendered as quotidian as brushing your teeth.

There is nothing unusual about this set-up these days – it’s just that these films held a mirror up to a facet of my life that I already didn’t really approve of, and projected it on to a giant screen. My bedside table usually has a phone and an iPad lying on it, as well as paper books; sometimes there’s even a laptop too, although I do try to put that out for the night with the cat – the tiny pulsating “sleep mode” light is just too obviously anathema to actual human sleep.

Is nowhere sacred? Must the ability to text, tweet or post images be at our fingertips while we’re sleeping? The fact that our books, films and alarm clocks often live in the same devices as our various inboxes and social network apps lazily justifies our need to take them to bed with us, but I am not alone in checking my emails, or catching up with current affairs last thing before lights out. I know this is not conducive to proper, satisfying sleep but I do it anyway, and wake up with a headache.

I’m just as bad when I wake up. The first thing I do in the morning is pick up my phone to check the time. Then I compulsively unlock it to “check the weather”. But as soon as my eyes fix on the screen, my attention scatters a thousand different ways, taking me down all sorts of rabbit holes until I finally set it back down, with a twitchy brain and still no idea whether it’s going to rain because it’s the one thing I forgot to check.

Another justification for taking these devices to bed is that there simply isn’t enough time to keep up with the continuous tidal wave of computer-related chores and correspondence, and therefore any quiet moment is fair game for a quick holiday-planning/sock-buying/online-banking session. I wouldn’t be surprised to find that, if the “Top five regrets of the dying” article (which serially returns to the most-read list on this website) were to be updated in 2033, an item about never allowing yourself a break from screen-based life to daydream or properly rest, even when ill in bed, would make an appearance.

The actor Daniel Craig recently credited banning technology from the bedroom with keeping his marriage to Rachel Weisz a happy one. I see his point. Aside from all of this gadgetry allowing friends, colleagues and chores to gatecrash the marital bed, the ill effects of poor sleep on relationships are well documented. One study demonstrating the positive effects of gratitude on overall wellbeing – one that chimed with me – found that poor sleepers were more selfish and less likely to feel gratitude.

Poor sleep, of course, has countless other negative effects on health, happiness and productivity. And insomnia may predict Alzheimer’s. It is not uncommon for people to tweet or update their Facebook status in the middle of the night when they have insomnia. Aside from the brain-scrambling stimulation of the internet, there is evidence that staring at backlit screens keeps brains more alert and suppresses melatonin levels (although the jury’s out on whether it scrambles melatonin production enough to disrupt sleep).

I read this fact in an article reporting that Arianna Huffington, the doyenne of digital publishing herself, has banned phones and computers from her bedroom in the name of a good night’s sleep. This reminded me of how I felt when I read that many senior staff at Silicon Valley behemoths including Apple, eBay and Yahoo send their kids to schools based on the Steiner approach, which ban screens from their classrooms and frown upon their use at home: suckered. Could it be that these guys know better than to get high on their own supply?