Why Silicon Valley’s Optimization Mindset Sets Us Up for Failure

September 10, 2021
Rob Reich, Mehran Sahami and Jeremy M. Weinstein

In 2013 a Silicon Valley software engineer decided that food is an inconvenience—a pain point in a busy life. Buying food, preparing it, and cleaning up afterwards struck him as an inefficient way to feed himself. And so was born the idea of Soylent, Rob Rhinehart’s meal-replacement powder, described on its website as an “International Complete Nutrition Platform.” Soylent is the logical result of an engineer’s approach to the “problem” of feeding oneself with food: there must be a more efficient solution.

It’s not hard to sense the trouble with this crushingly instrumental approach to nutrition.

Soylent may optimize meeting one’s daily nutritional needs with minimal cost and time investment. But for most people, food is not just a delivery mechanism for one’s nutritional requirements. It brings gustatory pleasure. It provides for social connection. It sustains and transmits cultural identity. A world in which Soylent spells the end of food also spells the degradation of these values.

Maybe you don’t care about Soylent; it’s just another product in the marketplace that no one is required to buy. If tech workers want to economize on time spent grocery shopping, or a busy person faces the choice between grabbing an unhealthy meal at a fast-food joint and bringing along some Soylent, why should anyone complain? In fact, it’s a welcome alternative for some people.

But the story of Soylent is powerful because it reveals the optimization mindset of the technologist. And problems arise when this mindset begins to dominate—when the technologies begin to scale and become universal and unavoidable.

That mindset is inculcated early in the training of technologists. Computer science courses often define the goal of developing an algorithm as providing an optimal solution to a computationally specified problem. And when you look at the world through this mindset, it’s not just computational inefficiencies that annoy. Eventually, optimization becomes a defining orientation to life as well. As one of our colleagues at Stanford tells students, everything in life is an optimization problem.

The desire to optimize can favor some values over others. And the choice of which values to favor, and which to sacrifice, is made by the optimizers, who then impose those values on the rest of us when their creations reach great scale. For example, consider that Facebook’s decisions about how content gets moderated and who loses their account set the rules of expression for more than three billion people on the platform; Google’s choices about which web pages to index determine what information most users of the internet get in response to their searches. The small and anomalous group of human beings at these companies create, tweak, and optimize technology based on their notions of how it ought to be improved. Their vision and their values about technology are remaking our individual lives and societies. As a result, the problems with the optimization mindset have become our problems, too.

A focus on optimization can lead technologists to believe that increasing efficiency is inherently a good thing. There’s something tempting about this view. Given a choice between doing something efficiently or inefficiently, who would choose the slower, more wasteful, more energy-intensive path?

Yet a moment’s reflection reveals other ways of approaching problems. We put speed bumps onto roads near schools to protect children; judges encourage juries to take ample time to deliberate before rendering a verdict; the media holds off on calling an election until all the polls have closed. It’s also obvious that the efficient pursuit of a malicious goal—such as deliberately harming or misinforming people—makes the world worse, not better. The quest to make something more efficient is not an inherently good thing. Everything depends on the goal.

Technologists with a single-minded focus on efficiency frequently take for granted that the goals they pursue are worth pursuing. But in the context of Big Tech, that assumption would have us believe that boosting screen time, increasing click-through rates on ads, promoting purchases of algorithmically recommended items, and maximizing profit are the ultimate outcomes we care about.

The problem here is that goals such as connecting people, increasing human flourishing, or promoting freedom, equality, and democracy are not computationally tractable. Technologists are always on the lookout for quantifiable metrics. Measurable inputs to a model are their lifeblood, and the need to quantify produces a bias toward measuring things that are easy to quantify. But simple metrics can take us further away from the goals we really care about, which may require multiple or more complicated metrics or, more fundamentally, may not lend themselves to straightforward quantification. The result is that technologists frequently substitute what is measurable for what is meaningful. Or, as the old saying goes, “Not everything that counts can be counted, and not everything that can be counted counts.”

There is no shortage of examples of the “bad proxy” phenomenon, but perhaps one of the most illustrative is an episode in Facebook’s history. Facebook Vice President Andrew Bosworth revealed in an internal memo in 2016 how the company pursued growth in the number of people on the platform as the one and only relevant metric for its larger mission of giving people the power to build community and bring the world closer together. “The natural state of the world,” he wrote, “is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.” To accomplish its mission of connecting people, Facebook simplified the task to growing an ever more connected user base. As Bosworth noted: “The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.” But what happens when “connecting people” comes with potential violations of user privacy, greater circulation of hate speech and misinformation, or political polarization that tears at the fabric of our democracy?

The optimization mindset is also prone to the “success disaster.” The issue here is not that the technologist has failed to accomplish something, but rather that their success in solving for one objective has wide-ranging consequences for other things we care about. The realm of worthy ends is vast, and when it comes to world-changing technologies that have implications for fairness, privacy, national security, justice, human autonomy, freedom of expression, and democracy, it’s fair to assume that values conflict in many circumstances. Solutions aren’t so clear-cut and inevitably involve trade-offs among competing values. This is where the optimization mindset can fail us.

Think, for example, of the amazing technological advances in agriculture. Factory farming has dramatically increased agricultural productivity. Where it once took 55 days to raise a chicken before slaughter, it now takes 35, and an estimated 50 billion chickens are killed every year—more than five million every hour of every day. But the success of factory farming has generated terrible consequences for the environment (increases in methane emissions that contribute to climate change), our individual health (greater meat consumption is correlated with heart disease), and public health (greater likelihood of transmission of viruses from animals to humans that could cause a pandemic).

Success disasters abound in Big Tech as well. Facebook, YouTube, and Twitter have succeeded in connecting billions of people in a social network, but now that they have created a digital civic square, they have to grapple with the conflict between freedom of expression and the reality of misinformation and hate speech.

The bottom line is that technology is an explicit amplifier. It requires us to be explicit about the values we want to promote and how we trade off between them, because those values are encoded in some way into the objective functions that are optimized. And it is an amplifier because it can often execute a policy far more efficiently than humans can. For example, with current technology we could produce vehicles that automatically issue speeding tickets whenever the driver exceeds the speed limit—and issue a warrant for the driver’s arrest after enough tickets accumulate. Such a vehicle would be extremely efficient at upholding speed limits. However, this amplification of safety would infringe on the competing values of autonomy (to make our own choices about safe driving speeds and the urgency of a given trip) and privacy (not to have our driving constantly surveilled).

Several years ago, one of us received an invitation to a small dinner. Founders, venture capitalists, researchers at a secretive tech lab, and two professors assembled in the private dining room of a four-star hotel in Silicon Valley. The host—one of the most prominent names in technology—thanked everyone for coming and reminded us of the topic we’d been invited to discuss: “What if a new state were created to maximize science and tech progress powered by commercial models—what would that run like? Utopia? Dystopia?”

The conversation progressed, with enthusiasm around the table for the establishment of a small nation-state dedicated to optimizing the progress of science and technology. Rob raised his hand to speak. “I’m just wondering, would this state be a democracy? What’s the governance structure here?” The response was quick: “Democracy? No. To optimize for science, we need a beneficent technocrat in charge. Democracy is too slow, and it holds science back.”

Adapted from Chapter 1 of System Error: Where Big Tech Went Wrong and How We Can Reboot, published on September 7 by HarperCollins.

 
