Last week we learned about a significant leak of classified material that exposed key details of the Ukrainian war effort and America’s security apparatus. The perpetrator? Not an extremist group or criminal network, but someone we’re more familiar with. A young man who spends too much time online.
Technology and our inability to regulate it have again made things worse. Much worse. The leaker’s preferred platform was Discord, which has been used to share child pornography and coordinate the white supremacist riots in Charlottesville. Discord is not alone. Recently, Instagram assisted the suicide of a young British girl by serving her images of nooses and razor blades. Facebook fueled a mob riot in Myanmar. The list goes on: Teen depression, viral misinformation, widespread distrust of national institutions, polarization, algorithms optimized for rage and radicalization … We’ve discussed this before.
What’s startling about this latest scandal is the banality. A reckless young man trying to gain social status online accidentally shapes world events. Steve Jobs called computers “bicycles for the mind” because they amplify our capabilities dramatically. It’s a nice image. A more apt analogy for many young men, however, is a Kawasaki H2 Mach IV, a motorcycle with far too much power and a rear-biased weight balance that made it an accident waiting to happen. Too much power, not enough balance, and injury is inevitable. Tech has become a bullet bike, a reliable source of disturbing accidents and organ donations.
People have always been stupid, and everyone is stupid some of the time. (Note: Professor Cipolla’s definition is people whose actions are destructive to themselves and to others.) One of society’s functions is to prevent a tragedy of the commons by building safeguards to protect us from our own stupidity. We usually call this “regulation,” a word Reagan and Thatcher made synonymous with bureaucrats and red tape. Yes, Air Traffic Control delays and the DMV are super annoying, but not crashing into another A350 on approach to Heathrow, not suffocating as your throat swells from an allergic reaction, and being able to access the funds in your FTX account are all really awesome.
Sixty years ago the U.S. registered more than 50,000 car crash deaths annually. So we created the National Highway Traffic Safety Administration and charged it with making the roads safer. If you’re under 60, this may be hard to imagine, but not that long ago, many Americans saw seat belts as an assault on their personal liberty — some cut them out of their cars. Democracy bested stupidity, however, and between 1966 and 2021, vehicular death rates in America were halved.
The NHTSA is one of the many boring state and federal agencies critical to a healthy society. Before the Food and Drug Administration, the sale and distribution of food and pharmaceuticals was a free-for-all. The Federal Aviation Administration is the reason your chances of dying in a plane crash are 1 in 3.37 billion. Next time someone tells you they don’t trust government, ask them if they trust cars, food, painkillers, buildings, or airplanes.
The limits on innovation imposed by these agencies — their red tape — are real, and worth it. Millions of us are alive and prospering because we had the foresight and discipline to blunt the sharp end of industrial progress with the guardrails of democratic oversight. Until you open your phone …
The greatest anomaly in the history of U.S. regulation is the place more and more of us spend most of our time: online. A lethal cocktail of complexity, lobbying, cultural worship of tech leaders, and anti-government libertarian screed has rendered tech immune to the basic standards of safety and protection. Lethal is the correct term. Tech comes into the purview of other agencies on occasion. (Though it’s always bitching it’s special and shouldn’t be restrained by the olds at the FTC and DOJ.) And the industry’s blocking efforts have been effective. There is no FDA or SEC for tech, which is America’s largest sector by market capitalization and growing.
The justification for this was the go-to new-economy get-out-of-jail-free word: “innovation.” When tech was nascent and niche, we were smart to err on the side of growth vs. regulation. That movie ended a decade ago. Phones aren’t toys for early adopters, and search and social have moved beyond campuses. We don’t require a license to drive a Big Wheel, but if a Big Wheel 5G went 750 miles per hour we might restrict access, or at least demand airbags.
The go-to narrative for these platforms after every new disaster is the delusion of complexity. And that the Internet is just another communications technology (e.g., the phone, a letter), a reflection of society, and it would be near-impossible to put guardrails in place. Also, sprinkle in some blather re free speech. This is all bullshit. AI can write a Seinfeld script in the voice of Shakespeare. It can scan platforms for words and images associated with risks — they’re already doing it for signals you might be shopping for Crocs. But what’s the incentive for a platform to make the investment in any editorial review? Other than decency and regard for others, that is?
We’ve done a good job stupid-proofing the offline world, but that’s increasingly not where we live, especially younger people who now spend roughly the same amount of time online as they do sleeping. They (we) pass the majority of our waking hours riding in a vehicle with no airbags, licenses, or traffic lights. Plus, there are millions of autonomous vehicles on the road controlled by unknown actors, and they’re prone to running over pedestrians.
Tech is embarking on its next big adventure: artificial intelligence. Which likely means rapid innovation, increased productivity, and another tsunami of unforeseen societal harms. Predicting how AI will tear the fabric of civilization is the new bingo. Humanoid phishing scams that access bank accounts, AI-generated “camera footage” and headlines that make the truth increasingly opaque, rogue AI gods determined to eradicate humanity. Experts agree: All of this is possible.
What’s telling is the technologists’ collective reaction to their own creation. For the first time, they want to slow down. A few weeks ago they wrote a petition calling on AI labs to “immediately pause” all training of the most powerful AI models for six months — hundreds of tech leaders signed the letter. The CEO of OpenAI, Sam Altman, who started the hype spiral with ChatGPT, says he is “scared” of his company’s own algorithms. One AI expert said the six-month moratorium isn’t harsh enough; instead, he says, “shut it all down.” What undermined the credibility of the letter was one signatory, Elon Musk, who’s asked others to pause as he fast-tracks his own AI programs. (As usual, he’s full of shit.)
We should grab this opportunity with both hands. Specifically, both hands on the wheel. Not a “pause,” which, in my view, is a bad idea. (China, Russia, and North Korea won’t pause.) If the guy who just disappeared my blue check hadn’t been kicked out of OpenAI, and controlled it instead, do you believe he’d be advocating for a pause? Better idea: The 78 podcasts that garner more downloads than the Prof G Pod should suspend their programming. You know … just to get our arms around this new, and potentially dangerous, podcast medium.
But we do need to seize this moment, likely brief, when some tech leaders have remembered the virtues of government oversight. We need a serious, sustained, and centralized effort at the federal level, perhaps a new cabinet-level agency, to take the lead in regulating … we can call it AI, because pretty soon AI will be everywhere in tech.
There have been efforts at comprehensive technology regulation we can pick up and carry across the finish line. For example, last year Senator Michael Bennet proposed the Digital Platform Commission Act, which would create a federal body to “provide reasonable oversight and regulation of digital platforms.” In other words: Exactly what we’re talking about. But it’s still stuck at the “introduced” stage. As with any political issue, it needs public support.
What won’t work is fake regulation — when the government issues broad, vague statements about what companies should generally do. That’s what Biden did with crypto, and he’s doing it again with AI. Specifically, his “Blueprint for an AI Bill of Rights,” which is filled with truisms, platitudes, and no laws. Similarly, NIST published its “AI Risk Management Framework.” Again, not laws.
Psychotics and the Homeless
Earlier this month, a tech executive was stabbed to death. Outspoken members of the tech industry immediately speculated the killer was a “psychotic homeless person.” A few days later, an acquaintance of the victim (also a tech entrepreneur) was arrested and charged. Note: The homeless are more likely to be victims of crime than perpetrators.
The above is another variation on a story told repeatedly across an innovation economy where we have incorrectly conflated wealth and innovation with character. A growing vein of the tech community (Venture Catastrophists) deploys weapons of mass distraction and fear to wallpaper over an inconvenient truth: The menace unleashed on America the past two decades isn’t psychotic homeless people or a crime wave, but a tech community whose products depress our teens, polarize our public, and coarsen our discourse … making it less likely we come together and address issues including homelessness and crime. Our failure to regulate this sector, as we have done with every other sector, is stupid.
Life is so rich,
P.P.S. Want to innovate in a way that doesn’t destroy the world? Join me for free on May 16 to discuss How to Have a Breakthrough. Come with questions.