Most Compelling Reasons to Get a VPN for Your iPhone and iPad


Between hackers, government entities, and snoopy tech giants, there is no shortage of threats to your online security and privacy. Luckily, there’s a quick and easy solution that can help mitigate some of these threats: a VPN. Find the best VPNs here.

A VPN, or virtual private network, essentially allows you to browse the internet anonymously while keeping your data safe. And you can use them on pretty much all of your smart devices — including your iPhone and iPad. See best VPNs for iPad and iPhone.

 

Top Reasons to Use a VPN on an iOS Device

  1. It’ll keep your data safe. If there’s one thing to remember about using a VPN, it’s this: it’s a simple way to significantly boost your cybersecurity. A VPN encrypts all of your internet traffic between your device and the VPN server, making it far harder for sensitive data to be intercepted.
  2. It allows you to unlock geo-restricted content. VPNs are, arguably, most commonly used to bypass geographic restrictions on content like Netflix shows. Essentially, you’ll be able to access content that’s restricted to a different region.
  3. It increases your privacy. If you don’t want your carrier or internet service provider (ISP) in your business, get a VPN. Because your browsing data is encrypted, ISPs and carriers won’t be able to know what you’re up to on your device.
  4. It lets you use public Wi-Fi safely. Public Wi-Fi is handy — but it’s also incredibly unsecure. Normally, we recommend staying away from anything sensitive when you’re using public Wi-Fi. But a VPN encrypts your data, so you can have peace of mind on that unsecured network.
  5. It’ll help you become anonymous. A VPN will boost your internet privacy — but it’ll also make you more anonymous overall. Advertisers and government agencies alike will be much less likely to connect your browsing history to your identity.
  6. You can access secure content remotely. If you ever need to access a sensitive corporate server while on the road, a VPN will help you establish a secure connection. That way, you aren’t risking your business’s sensitive data.
  7. Get around internet censorship. Similar to geographic restrictions, certain regions around the world will block popular websites like Facebook or YouTube. A VPN can help you get around those “great firewalls” if you’re traveling internationally.
  8. If you torrent, it’ll help. We don’t advocate for doing anything illegal, but if you use torrenting software, a VPN will come in handy. Even users who only download legal torrents will often find their torrenting apps getting throttled.
  9. They often come with bonus features. While they aren’t the main draw, most VPNs also come with additional features, such as built-in firewalls.
  10. There’s no reason not to. If you choose a good-quality VPN with solid performance and a no-log policy, there’s really no downside to using a VPN. The most popular VPNs are also extremely easy to set up and use on your iOS device.

In Other Words, Get One

All of this is to say that using a VPN on your iPhone and iPad is kind of a no-brainer, especially if you value your online privacy and cybersecurity. Just do your research, avoid free VPNs, and get a good-quality VPN from a reputable company. Here are the top VPNs we recommend.

10 amazing and helpful apps that will inspire you


There are millions of mobile applications available to smartphone users today, and that number will only keep growing as it becomes easier to build and deploy apps. Some apps are for amusement, but others are specifically designed to improve the lives of their users or the world at large.


We asked a panel of Young Entrepreneur Council members the following question about some of the most innovative apps they’ve encountered that were created to help people:

What’s one innovative app you’ve seen that’s designed to help people, and what can leaders learn from apps like it?

Their best answers are below:

1. Be My Eyes


Be My Eyes is an app that provides blind and low-vision people with visual assistance by connecting them with volunteers and company representatives. What leaders can learn from this app is that basic tasks, like picking out the correct can of soup, can be challenging and humbling and that asking for help is nothing to be ashamed of. If you don’t have the right answer, someone else will. – Duran Inci, Optimum7

2. Charity Miles

Charity Miles is an app that motivates you to hit your fitness goals by donating money to your favorite charity. For every mile that you run, walk or bike, you earn money from corporate sponsors that is then donated to the charity of your choice. Leaders should take notice and see how they can combine charity with business in a more innovative approach. – Syed Balkhi, WPBeginner
 
3. Chummy

Chummy aims to make the world better by paying it forward. Users can request help for everything from moving furniture to finding a lost pet. This should trigger entrepreneurs to think about achieving socially responsible goals by helping other businesses solve their problems. It may require unique thinking to find solutions that are financially sound and socially conscious. – Blair Thomas, eMerchantBroker

4. Aaptiv

The best app I’ve seen recently is Aaptiv. It’s like having a personal trainer without having to pay the high fees. Aaptiv is a fitness app that has thousands of workouts involving activities like running, using the elliptical and strength training. When you start your workout session, the trainer will give you the exercises to perform via audio. I think leaders can learn that fitness can be done whenever, even with busy schedules. – Jean Ginzburg, JeanGinzburg.com

5. Pacifica

The Pacifica app provides a way for people to deal with daily anxiety and stress. It provides audio lessons to help deal with stress, mood trackers and mindfulness meditations. When work gets too stressful, apps like this can help calm your mind, which is important if you want employees to perform their best. – Chris Christoff, MonsterInsights

6. Speak & Translate 

Speak & Translate is an amazing app that allows you to communicate verbally with people who speak different languages. One of my passions is creating connections, and breaking down the boundary of language entirely with an app is absolutely groundbreaking. This app teaches us as leaders that there are no excuses for not making connections. – Stanley Meytin, True Film Production

7. Voice Access

Voice Access is a new Android accessibility app from Google. It helps people with limited mobility navigate their phones by voice, helping them with opening apps, scrolling, editing text and other common interactions. It’s a well-designed accessibility app, and it should inspire app developers to think about creating user experiences that don’t exclude people with restricted mobility and other disabilities. – Vik Patel, Future Hosting

8. Samaritan

Samaritan gives you the stories behind some of the homeless people you may see every day and allows you to donate money ($1 or more). These funds go toward needed services and expenses like clothing, groceries or gas. Leaders can learn from it because giving back to the community is the right thing to do. – Andrew Schrage, Money Crashers

9. Red Stripe

Red Stripe is an app for people with red-green colorblindness. It uses a smartphone camera to identify these colors and highlights them on the screen with stripe patterns. It’s a simple but useful app that shows how, with a little imagination, the built-in capabilities of mobile devices can be used to enhance people’s lives. – Justin Blanchard, ServerMania Inc.

10. Budge

Budge is an app that lets you challenge your friends in the name of giving to charity. You can challenge a friend to things like quitting smoking, losing weight or running 10 miles, and whoever loses has to make a donation to charity. Leaders can learn from this app how to make giving to charity a fun team activity.

Positive technology


Designing work environments for digital well-being

Digital technology can be a blessing and a curse, both personally and in the workplace. How can organizations make sure the positives outweigh the negatives?

Introduction

A wealth of information creates a poverty of attention. – Herbert Simon1

The transformative impact of technology on the modern workplace is plain to see. Face-to-face meetings have often given way to video conferences, mailrooms to email inboxes, and typewriters and carbon paper to word processors. Technology has also allowed a substantial portion of work—and the workforce—to move beyond the confines of a traditional office.2 It is common for digitally connected professionals to perform some of their work in cafés or shops, at home, even lying by the pool while on “vacation.”

This technological revolution brings with it many obvious benefits. Colleagues can easily communicate across geographies, simultaneously reducing expenses, environmental damage, and bodily wear-and-tear. Open source software, search engines, and online shopping services enable us to summon in a few clicks the tools and information we need to be productive. Online maps, global positioning systems, and real-time translation services help us navigate unfamiliar places and communicate with locals.

But there are downsides to our technology-infused lives. Of particular concern are the engaging—some fear addictive3—aspects of digital technologies, which can sap us of truly finite resources: our time and attention. While companies may benefit from tech-enabled increased productivity in the short term, the blurring of the line between work and life follows a law of diminishing returns. As recent Deloitte research suggests, the value derived from the always-on employee can be undermined by such negative factors as increased cognitive load and diminished employee performance and well-being.4

In short, digital and mobile technologies give—but they also take away. It falls on talent and technology leaders to weigh the efficiencies enabled by always-connected employees against increased demands on scarce time and attention, and longer-term harm to worker productivity, performance, and well-being. Getting the most from technology and people isn’t about simply demanding restraint. It’s about designing digital technologies that facilitate the cultivation of healthy habits of technology use, not addictive behavior. And it’s possible for leaders of organizations to play an active role in designing workplaces that encourage the adoption of healthy technology habits.

The perils of workplace digital technology

Working long, stressful days was once regarded as a characteristic of the proletariat life. Yet today, being “always on” is instead often emblematic of high social status.5 Technology may have physically freed us from our desks, but it has also eliminated natural breaks that would ordinarily take place during the workday. And recent research suggests that this effect is not restricted to the workday. According to the American Psychological Association, 53 percent of Americans work over the weekend, 52 percent work outside designated work hours, and 54 percent work even when sick.6 Flextime, typically viewed as a benefit of technology providing greater freedom, actually leads to more work hours.7 Without tangible interventions, there’s little reason to think this behavior will change anytime soon.

These environmental factors and cultural norms are increasingly compounded by technological design elements—some intentional, others not—that make technology use compulsive and habit-forming, taking on the characteristics of an addiction.

In his recent book, Irresistible, New York University marketing and psychology professor Adam Alter identifies a variety of factors that can contribute to digital addiction.8 In the context of the workplace, many of these factors—summarized in the following section—can enable employee technology addiction.

Metrification and alerts

Digital technologies can quantify previously unquantifiable aspects of our lives, yielding fresh insight into how we spend our time. On a personal level, we can track our steps and count our likes, friends, and followers. At work, we are greeted each morning with dozens of unopened emails and reminders of sequences of meetings. During the day, workers are interrupted by continual streams of emails, texts, and instant messages.

Certainly, many such messages and notifications are necessary and helpful. But many others do little more than distract us from important tasks at hand, undermining productivity rather than enhancing it. In a widely cited study, cognitive scientist Gloria Mark and her colleagues state that people compensate for interruptions by working faster, but this comes at a two-fold price: The individual experiences more stress, frustration, time pressure, and effort.10 Concurrently, the organization often experiences not only decreased employee performance,11 but also, as elaborated in the next section, less optimal business decisions due to the lack of adequate time to sufficiently weigh pros and cons and consider and evaluate viable alternatives.

Specifically, constant streams of messages that must be prioritized in terms of importance can create cognitive scarcity, resulting in a deterioration of the individual’s ability to adequately process information.12 Recent research has found that conditions of scarcity impose a kind of “cognitive tax” on individuals. For example, an experiment that involved focusing low-income persons’ attention on a scenario in which they urgently needed to raise several thousand dollars resulted in the equivalent of a 13-point drop in IQ. (This is similar to the drop in IQ someone would experience after going a night without sleep.) Surprisingly, this phenomenon has similar effects on overloaded individuals who are scarce on a different dimension: time. This raises the concern that digital firehoses of poorly filtered information can hamper our ability to pay attention, make good decisions, and stick to plans. And when we try to compensate for interruptions by working faster, we only get more frustrated and stressed.13

Another cognitive effect of too many alerts and too much unfiltered information is choice overload. Individuals experiencing choice overload often find it difficult to make decisions unless clear environmental cues or default options are established to help guide—nudge—their decision-making.14 Such cues and defaults are examples of what the authors of the 2008 book, Nudge, call choice architecture.15 Absent smart choice architecture, workers often come up with their own rules for prioritizing options and tasks. Such improvised heuristics can vary over time and across individuals, and be inconsistent with roles and performance goals.16

Zero cost for inclusion

Virtual meetings offer organizations many advantages, such as cost savings, knowledge transfer, and team culture-building.17 And employees can benefit from less travel and more telecommuting opportunities. But the very ease with which people can be invited to and accept these meetings (especially many days in advance, when calendars are typically more open) can translate into a disadvantage. Meeting organizers often choose to err on the side of inclusion, minimizing the risk of leaving someone out; and the average worker often chooses to attend for fear of missing out on something important. The all-too-common net result is a day packed with back-to-back meetings, during which much is said, less retained, and even less achieved. This leaves either less time to complete the actual tasks at hand, or more multitasking, which can diminish the quality of the meetings and the overall engagement.

Bottomless bowls

Technology design that removes natural stopping points keeps the user in a state of productive inertia.18 This mind-set often plays a productive role in our work life, enabling us to get into the groove and accomplish task after task without having to decide whether to continue. But flows can also be unproductive, as when we immerse ourselves in an inconsequential task. Who hasn’t lost hours reading low-priority emails simply because they appear one after another? This is perhaps a workplace analog of the “bottomless” design implemented in social media feeds and online entertainment platforms to capture viewers’ attention. The natural default is to continue, not to stop.19

Smart screens and slot machines

Who can resist checking a buzzing mobile device? It could be an email congratulating you on a promotion or a team message about a testing success. Or it could be spam. Yet we’re compelled to check, and technology designers know that—which is why, drawing from the work of psychologist B. F. Skinner, they vary the timing between rewards for particular tasks, a technique that is highly effective and often addictive. This variability of rewards, which Skinner called the “variable-ratio schedule,”20 has been put to ample use in technology design, embodied particularly in the swipe-down-to-refresh design of many mobile applications. In this sense, our devices are metaphorical slot machines, incentivizing us to keep coming back for the big payoff.21 To capitalize on this addictive quality of the element of surprise, many popular social media sites have changed their algorithms to no longer show feeds in chronological order. Instead, each refresh presents a new curation of a tailored feed—incorporating both old and new—with no apparent rhyme or reason for the new ordering.22

Unhealthy use of workplace technology can do more than compromise productivity—it can impair workers’ physical and mental well-being. A few examples establish the point.

Poor sleep: Addiction to technology and the always-on work culture are contributing to a societal dearth of sleep.23 The wakefulness that accompanies engaging in work means we’re less tired during the day, while exposure to blue screen light emitted by mobile devices simultaneously reduces the melatonin required for good sleep. This self-reinforcing loop makes the seven- to nine-hour sleep cycle, considered necessary to avoid a catalogue of negative health outcomes, more difficult to maintain.24

Physical disconnection: Technology is having an even more profound negative effect on social well-being. While it can enable us to engage in relationships across distances and time zones, this sometimes comes at the expense of good old-fashioned face-to-face relationships.25 With devices always demanding our attention, family and friends are often neglected—altering our entire social structure.26 And our connection to social media too can become strong enough to mimic the rewarding sensation caused by cocaine.27

Anxiety and depression: Information overload is not only distracting, but potentially mentally damaging. We live with a finite amount of time and a limitless well of information and choices, often resulting in a phenomenon called FOMO—fear of missing out. With phones and computers constantly alerting us of all the opportunities available, becoming double-booked is not infrequent and can lead to anxiety when the user needs to skip one meeting in favor of another. Viewing others’ social profiles can also affect our mood.28 We see sites filled with users only emphasizing the positives,29 showcasing glamorous vacation and social photos, or news of promotions and other triumphs. Perhaps it’s no wonder we can begin to question whether our lives pale by comparison.

What employers can do

Skeptics of technology addiction often respond: “Just put the phone down.” Yet willpower is not enough. Technology is designed to psychologically stimulate the reward centers of our brain to keep us coming back for more, mimicking the effects of a physical drug addiction.30 Rectifying this will ultimately require that developers and technologists adopt the human-centered approach of designing technologies and work environments that help users overcome—rather than be overcome by—natural human limitations.31

Fortunately, the growing ubiquity of digital technology is matched by the growing prominence of the cognitive and behavioral sciences, accompanied by a burgeoning collection of practical tools for prompting healthy behavior change. Especially significant is the emergence of the field of behavioral science and its applied arm, behavioral “nudges.” Its core insight is that relatively modest, evidence-based tweaks to the environment can lead to outsized changes in behavior and positive outcomes.32 (See the sidebar, “Behavioral science and design application ethics.”) Take one example: placing less nutritious foods in a cafeteria out of direct sight or easy reach. Doing so doesn’t eliminate any options; individuals are still free to choose whatever they want. But the thoughtful placement prompts more nutritious choices and less “mindless eating.”33 Analogous sorts of behavioral design can be applied to our technology-mediated work environments when employers choose both better technologies that have been designed with user well-being in mind, and better workplace environments, social norms, and expectations that positively influence how we use our devices.

Better technology

Track, analyze, and change usage patterns

All of us are now effectively part of the Internet of Things: We leave behind “digital breadcrumbs” as we go about our digitally mediated lives.35 In particular, this happens on the job: Email and calendar metadata are a rich, largely untapped data source, and it is now technologically feasible to collect “affective computing” data from cheap electronic devices that capture data about tone of voice, facial expression, and even how much we sweat during states of stress or excitement.

It is obviously crucial to avoid using such data in invasive, “big brother” ways.36 Still, it is worthwhile to consider using such data to help individuals better understand and regulate their use of technology.37 For instance, smart meters can display individuals’ application usage patterns, highlighting areas of concern. Software is already available to monitor application usage and time spent on various websites; at the enterprise level, other solutions can track the time an employee spends on each application, creating reports that include comparisons to other employees. Such comparison metrics can help workers understand how their efforts compare to those of their colleagues and, when delivered with an appropriately framed message, convey work-hour social norms in an effort to guide decisions and discourage always-on behavior. Such data could also be used to tailor peer comparison messages designed to nudge healthier technology use. Such social proof-based messaging has proven effective in applications ranging from curbing energy use to prompting more timely tax payments.38 For instance, an employee working more than 50 hours a week could be sent a notification informing her that she has been working more than her coworkers, who average around 45 hours of work a week. This nudge could be enough to break her free from the perceived social norm that everyone works a 60-hour week, or prompt her to begin a workload conversation with her manager.39
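To make the idea concrete, here is a minimal Python sketch of such a peer-comparison nudge. It assumes a simple usage log of (employee, daily active hours) pairs; the threshold, message wording, and data format are illustrative assumptions, not any vendor’s actual product.

```python
# Minimal sketch of a peer-comparison "nudge" built from usage logs.
# Names, thresholds, and message wording are illustrative assumptions.

from statistics import mean


def weekly_hours(usage_log):
    """Sum each employee's active hours from (employee, hours_active) rows."""
    totals = {}
    for employee, hours in usage_log:
        totals[employee] = totals.get(employee, 0.0) + hours
    return totals


def nudge_messages(usage_log, threshold_ratio=1.05):
    """Return a social-proof message for anyone noticeably above the team average."""
    totals = weekly_hours(usage_log)
    team_avg = mean(totals.values())
    messages = {}
    for employee, hours in totals.items():
        if hours > team_avg * threshold_ratio:
            messages[employee] = (
                f"You logged {hours:.0f} hours this week; your team averaged "
                f"{team_avg:.0f}. Consider discussing your workload with your manager."
            )
    return messages


if __name__ == "__main__":
    # Hypothetical daily totals for two employees over one work week.
    log = [("ana", 11)] * 5 + [("ben", 9)] * 5
    for message in nudge_messages(log).values():
        print(message)
```

The design choice to compare against a team average rather than a fixed cap mirrors the social-proof framing described above: the message states a norm, not a rule.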

Use AI to promote healthier behavior

Artificial intelligence (AI) can also help us better mediate our interaction with technology, performing tedious “spadework” to free us to focus on higher-level tasks. In particular, AI can be harnessed to help us manage our digital work environments. For example, some email systems now use AI to sort emails into categories, making urgent emails easier to locate and only pushing primary emails to a user’s phone.40 Google has also worked with behavioral economist Dan Ariely to build AI into its calendar application, which can automatically schedule “appointments” for performing tasks that are important but tend to get crowded out by concrete tasks that are urgent in the short term. “Email shows up and says, ‘Answer me,’” Ariely says. “Unfortunately, time for thinking does not do that.”41
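As a toy illustration of the email-triage idea described above (not Gmail’s or any vendor’s actual algorithm), the following Python sketch sorts messages into coarse categories and pushes only the more important ones to a phone; the keyword lists, senders, and category names are assumptions.

```python
# Toy triage sketch -- not any real mail provider's algorithm.
# Keyword lists, senders, and category names are illustrative assumptions.

URGENT_HINTS = ("deadline", "outage", "asap")
BULK_HINTS = ("unsubscribe", "weekly digest", "no-reply")


def categorize(message: dict) -> str:
    """Return 'urgent', 'primary', or 'bulk' for a message with 'sender' and 'body'."""
    body = message["body"].lower()
    sender = message["sender"].lower()
    if any(hint in body for hint in URGENT_HINTS):
        return "urgent"
    if any(hint in body or hint in sender for hint in BULK_HINTS):
        return "bulk"
    return "primary"


def push_to_phone(message: dict) -> bool:
    """Only interrupt the user for urgent or primary mail; hold bulk mail."""
    return categorize(message) != "bulk"


if __name__ == "__main__":
    inbox = [
        {"sender": "no-reply@news.example", "body": "Your weekly digest is here."},
        {"sender": "manager@corp.example", "body": "The deadline moved up, reply asap."},
    ]
    for m in inbox:
        print(m["sender"], "->", categorize(m), "| push:", push_to_phone(m))
```

Production systems use learned models rather than keyword rules, but the interface is the same: classify first, then decide what earns an interruption.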

At the next level, emerging examples include chatbots that can help cut down technology-related negative behaviors. For instance, one such chatbot’s software features a smart filter that can prevent certain applications, such as a social media feed, from refreshing.42 It is possible that AI products can be designed to ameliorate other forms of stress and anxiety on the job. Another AI-enabled chatbot, designed by a team of Stanford University psychologists and computer scientists, can deliver cognitive behavioral therapy (CBT). CBT is often employed as an intervention technique to help individuals identify the factors driving negative thoughts and behaviors and subsequently identify and encourage positive alternative behaviors.43 This technique was covered in recent Deloitte research,44 and has been found to be a solid intervention for improving emotional well-being.45

Encourage productive flows

Employers can build mechanisms into their email and internal systems that incorporate stopping points into applications, nudging users to decide whether to continue an activity. Reminders have proven to be an effective nudge strategy in various contexts.46 Drawing from the consumer realm, some developers have begun to incorporate similar nudging features: when a customer begins to use up another commonly scarce resource, data, many phones will notify the user that they are about to exceed their data limit. These alerts can nudge a user to break out of the flow of data usage and reassess their continued use. Transferring this concept to the work environment could, for instance, take the form of employers nudging employees to disconnect from email while on vacation or outside of work hours.
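A minimal sketch of what such workplace stopping points might look like in code, with assumed working hours and break thresholds (the specific numbers are illustrative, not recommendations):

```python
# Sketch of two stopping-point nudges: hold non-urgent mail outside assumed
# working hours, and prompt a break after a long unbroken session.

from datetime import datetime, time

WORK_START, WORK_END = time(9, 0), time(18, 0)   # assumed working hours
MAX_UNBROKEN_MINUTES = 90                        # assumed session limit


def within_work_hours(now: datetime) -> bool:
    """Weekdays between the assumed start and end of the workday."""
    return now.weekday() < 5 and WORK_START <= now.time() <= WORK_END


def deliver_now(is_urgent: bool, now: datetime) -> bool:
    """Deliver urgent mail immediately; hold everything else until work hours."""
    return is_urgent or within_work_hours(now)


def needs_break_prompt(minutes_since_last_break: int) -> bool:
    """Suggest a natural stopping point after a long continuous stretch."""
    return minutes_since_last_break >= MAX_UNBROKEN_MINUTES


if __name__ == "__main__":
    late_sunday = datetime(2018, 9, 2, 22, 30)             # a Sunday evening
    print(deliver_now(is_urgent=False, now=late_sunday))   # False: deferred
    print(needs_break_prompt(120))                         # True: prompt a break
```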

Technology can likewise be used to maintain positive states of flow, and also as a commitment device to nudge us toward better behaviors.47 For example, the “Flowlight” is a kind of “traffic light” designed to signal to coworkers that a knowledge worker is currently “in the zone” and should not be disturbed. The Flowlight is based on keyboard and mouse usage as well as the user’s instant message status.48 Likewise, Thrive Global has a new app that, when you put it in “thrive” mode, responds to senders that you are thriving and will reply later.49
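The published description suggests a simple heuristic; the following Python sketch imitates a Flowlight-style signal by combining recent input activity with instant-message status. The thresholds and status names are assumptions, not the tool’s actual implementation.

```python
# Rough imitation of a Flowlight-style "traffic light" for coworkers.
# The real tool's algorithm is not reproduced; thresholds are assumptions.

def focus_light(input_events_per_minute: float, im_status: str) -> str:
    """Return 'red' (do not disturb), 'yellow' (busy), or 'green' (available)."""
    if im_status in ("do_not_disturb", "in_meeting"):
        return "red"
    if input_events_per_minute >= 120:   # sustained keyboard/mouse activity
        return "red"
    if input_events_per_minute >= 40:
        return "yellow"
    return "green"


if __name__ == "__main__":
    print(focus_light(150, "available"))  # red: deep in the zone
    print(focus_light(60, "available"))   # yellow: working, interruptible
    print(focus_light(5, "away"))         # green: low activity
```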

Better environments

The aforementioned ideas exemplify various forms of human-centered design applied to workplace technologies. However, as also alluded to, human-centered design can also be applied to work environments. Indeed, nudging can be viewed as human-centered design applied to choice environments.50 Providing information and establishing policies, restrictions, and guidelines are “classical economics”-inspired levers for effecting behavioral change. Smart defaults, commitment devices, social norms, and peer comparisons are examples of “soft touch” choice architecture tools that can be employed to design work environments that are conducive to more productive uses of technology (see figure 1).

Technology and social pressure

Employer policies and cultural norms can mitigate the always-on culture. For example, both policies and organizational cultures can be tuned to discourage employees from communicating with each other via email outside of work hours. This can be complemented with technological default mechanisms that make it logistically harder or impossible to send emails or set up meetings during off hours.

A less heavy-handed but potentially equally powerful persuasive technique is subtly employing the power of peer pressure via social proof. Social proof is premised on the social psychology finding that individuals often use the behavior of others to guide their own actions.51 Social proof has proven effective in a variety of settings ranging from encouraging people to reuse their hotel towels52 to getting them to pay their taxes on time.53 With this in mind, companies could inform employees that sending emails to colleagues during off hours is not the norm and not encouraged. Going one step further, one leading multinational auto corporation uses a hybrid of technology-enabled processes and cultural norms, allowing employees the option of automatically deleting all emails received during vacation, notifying the sender that the message was not received.54 If this seems too radical, another option is offering a day-long vacation extension, allowing employees who have been off for multiple successive days to ease back into work by catching up on email and other non-collaborative tasks. Another simple bit of choice architecture can lighten the load of numerous back-to-back meetings: Setting the default meeting durations to 25 minutes rather than 30 automatically builds in rest periods.

Commitment devices and social support

Research shows that if someone publicly commits to specific steps to achieve a goal, they are more likely to follow through.55 Commitment devices such as pledges are premised on this finding. For example, Johns Hopkins University has created a well-being pledge for its employees. Interested workers are offered a plethora of opportunities and strategies to help increase work-life fit over the course of 30 or 90 days. Once they sign up, they begin to make life changes with the support of their employer. So far, the organization has found this approach successful.56 In addition to the automatic-reply devices we mentioned earlier, another activity that could incorporate a pre-commitment pledge is a “digital detox,” something Deloitte itself employs. This is a seven-day program that involves making small technology-related changes each day.

Regardless of the specific policy or choice architecture intervention, the overarching aim is to rewire the workplace in ways that improve the employee-technology relationship. To be successful, there must be a push from the top down: It is one thing to create a new policy, but quite another for an organization’s leaders to openly display their commitment to it, and communicate its resulting benefits.

A matter of habit

Improving our relationship with technology—both on the job and off—is less a matter of continual exercise of willpower than designing digital technologies and environments to reflect the realities of human psychology. Poorly (or perversely) designed technologies can hijack our attention and lead to technology addiction. But design can also facilitate the cultivation of healthy habits of technology use. Many of our automatic, repeated behaviors are cued by environmental factors.57 People who successfully cultivate positive habits do so less through continual exercises of willpower than by taking the time to redesign their environments in ways that make positive behaviors more effortless and automatic.

Metaphorically, it pays to reimagine and reshape our environments in ways that make healthy habits a downhill rather than an uphill climb. In the workplace, individual employees can play a role in cocreating positive technological environments. But, ultimately, leaders of organizations should play an active role in spearheading such design efforts and taking an evidence-based approach to learning what works, and continually improving on it.

Why Do Computers Use So Much Energy?


It’s possible they could be vastly more efficient, but for that to happen we need to better understand the thermodynamics of computing


Microsoft is currently running an interesting set of hardware experiments. The company is taking a souped-up shipping container stuffed full of computer servers and submerging it in the ocean. The most recent round is taking place near Scotland’s Orkney Islands, and involves a total of 864 standard Microsoft data-center servers. Many people have impugned the rationality of the company that put Seattle on the high-tech map, but seriously—why is Microsoft doing this?

There are several reasons, but one of the most important is that it is far cheaper to keep computer servers cool when they’re on the seafloor. This cooling is not a trivial expense. Precise estimates vary, but currently about 5 percent of all energy consumption in the U.S. goes just to running computers—a huge cost to the economy as a whole. Moreover, all that energy used by those computers ultimately gets converted into heat. This results in a second cost: that of keeping the computers from melting.

These issues don’t only arise in artificial, digital computers. There are many naturally occurring computers, and they, too, require huge amounts of energy. To give a rather pointed example, the human brain is a computer. This particular computer uses some 10–20 percent of all the calories that a human consumes. Think about it: our ancestors on the African savanna had to find 20 percent more food every single day, just to keep that ungrateful blob of pink jelly imperiously perched on their shoulders from having a hissy fit. That need for 20 percent more food is a massive penalty to the reproductive fitness of our ancestors. Is that penalty why intelligence is so rare in the evolutionary record? Nobody knows—and nobody has even had the mathematical tools to ask the question before.

There are other biological computers besides brains, and they too consume large amounts of energy. To give one example, many cellular systems can be viewed as computers. Indeed, the comparison of thermodynamic costs in artificial and cellular computers can be extremely humbling for modern computer engineers. For example, a large fraction of the energy budget of a cell goes to translating RNA into sequences of amino acids (i.e., proteins), in the cell’s ribosome. But the thermodynamic efficiency of this computation—the amount of energy required by a ribosome per elementary operation—is many orders of magnitude superior to the thermodynamic efficiency of our current artificial computers. Are there “tricks” that cells use that we could exploit in our artificial computers? Going back to the previous biological example, are there tricks that human brains use to do their computations that we can exploit in our artificial computers?

More generally, why do computers use so much energy in the first place? What are the fundamental physical laws governing the relationship between the precise computation a system runs and how much energy it requires? Can we make our computers more energy-efficient by redesigning how they implement their algorithms?

These are some of the issues my collaborators and I are grappling with in an ongoing research project at the Santa Fe Institute. We are not the first to investigate these issues; they have been considered, for over a century and a half, using semi-formal reasoning based on what was essentially back-of-the-envelope style analysis rather than rigorous mathematical arguments—since the relevant math wasn’t fully mature at the time.

This earlier work resulted in many important insights, in particular the work in the mid to late 20th century by Rolf Landauer, Charles Bennett, and others.

However, this early work was also limited by the fact that it tried to apply equilibrium statistical physics to analyze the thermodynamics of computers. The problem is that, by definition, an equilibrium system is one whose state never changes. So whatever else they are, computers are definitely nonequilibrium systems.  In fact, they are often very-far-from-equilibrium systems.

Fortunately, completely independent of this early work, there have been some major breakthroughs in the past few decades in the field of nonequilibrium statistical physics (closely related to a field called “stochastic thermodynamics”). These breakthroughs allow us to analyze all kinds of issues concerning how heat, energy, and information get transformed in nonequilibrium systems.

These analyses have provided some astonishing predictions. For example, we can now calculate the (non-zero) probability that a given nanoscale system will violate the second law, reducing its entropy, in a given time interval. (We now understand that the second law does not say that the entropy of a closed system cannot decrease, only that its expected entropy cannot decrease.) There are no controversies here arising from semi-formal reasoning; instead, there are many hundreds of peer-reviewed articles in top journals, a large fraction involving experimental confirmations of theoretical predictions.
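For readers who want the standard formal statement, the key relations here are the fluctuation theorems for total entropy production from stochastic thermodynamics (a sketch of textbook results, not this project’s specific findings):

```latex
\begin{align}
  \bigl\langle e^{-\Delta S_{\mathrm{tot}}/k_B} \bigr\rangle &= 1
    && \text{(integral fluctuation theorem)} \\
  \frac{P(\Delta S_{\mathrm{tot}} = -A)}{P(\Delta S_{\mathrm{tot}} = +A)} &= e^{-A/k_B}
    && \text{(detailed fluctuation theorem)}
\end{align}
```

By Jensen’s inequality, the first relation implies that the expected total entropy production is non-negative, while the second shows that trajectories that reduce entropy by an amount A occur with an exponentially small, but nonzero, probability.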

Now that we have the right tools for the job, we can revisit the entire topic of the thermodynamics of computation in a fully formal manner. This has already been done for bit erasure, the topic of concern to Landauer and others, and we now have a fully formal understanding of the thermodynamic costs in erasing a bit (which turn out to be surprisingly subtle).
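For context, the classic result referred to here is Landauer’s bound, stated below in its textbook form (the more subtle refinements alluded to above are not reproduced): erasing one bit of information in an environment at temperature T requires dissipating at least

```latex
\begin{equation}
  Q_{\min} = k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\ \mathrm{J}
  \quad \text{at } T = 300\ \mathrm{K}.
\end{equation}
```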

However, computer science extends far, far beyond counting the number of bit erasures in a given computation. Thanks to the breakthroughs of nonequilibrium statistical physics, we can now also investigate the rest of computer science from a thermodynamic perspective. For example, moving from bits to circuits, my collaborators and I now have a detailed analysis of the thermodynamic costs of “straight-line circuits.” Surprisingly, this analysis has resulted in novel extensions of information theory. Moreover, in contrast to the kind of analysis pioneered by Landauer, this analysis of the thermodynamic costs of circuits is exact, not just a lower bound.

Conventional computer science is all about trade-offs between the memory resources and number of timesteps needed to perform a given computation. In light of the foregoing, it seems that there might be far more thermodynamic trade-offs in performing a computation than had been appreciated in conventional computer science, involving thermodynamic costs in addition to the costs of memory resources and number of timesteps. Such trade-offs would apply in both artificial and biological computers.

Clearly there is a huge amount to be done to develop this modern “thermodynamics of computation.”

Be on the lookout for a forthcoming book of contributed papers from the SFI Press touching on many of the issues mentioned above. Also, to foster research on this topic we have built a wiki, combining lists of papers, websites, events pages, etc. We highly encourage people to visit it, sign up, and start improving it; the more scientists get involved, from the more fields, the better!

Microsoft reveals ‘xCloud’, the dramatically changed future of the Xbox One


Microsoft has revealed “Project xCloud”, a dramatic new change that it hopes will mark the future of the Xbox.

The project will allow people to play games wherever and whenever they want, on any platform they want, Microsoft said in its announcement.

In practice, it appears to be something like a Netflix or Spotify for games: allowing you to stream a game to your phone while you’re on the move, then pick it up on your PC when you get back in. Microsoft showed a phone popped into an Xbox controller, letting people play as normal on the handset’s small screen while the game is streamed over the internet.

The project is already being tested and the public will be able to get involved from 2019, Microsoft said. It did not say when the full release will arrive.

“The future of gaming is a world where you are empowered to play the games you want, with the people you want, whenever you want, wherever you are, and on any device of your choosing,” wrote Kareem Choudhry, who leads the “gaming cloud” team at Microsoft. “Our vision for the evolution of gaming is similar to music and movies — entertainment should be available on demand and accessible from any screen.”

Microsoft is building the system so that the 3,000 games available for Xbox One today will be available across the cloud, without any more work from developers, Mr Choudhry wrote. That will allow existing Xbox players to game on the move as well as letting developers get their games in front of hundreds of millions of new people, he said.

Game streaming has been taking off in recent months, as internet connections become quick enough to allow people to play detailed games without any powerful hardware in their own house. Technologies such as Blade Shadow allow people to rent a computer that sits somewhere else and is delivered to them over the internet, and Google announced in recent days that people will be able to stream the new Assassin’s Creed game in their browser.

Games are especially difficult to get right since, unlike music or films, they must instantly respond to inputs from the user and be delivered in real time and in full quality, wrote Mr Choudhry. More details will be shared in the coming months about how that will work, he said.