As the AI era unfolds around us, historians reflect on lessons learned from the rollout of the internet and other technological revolutions.
In an essay posted to X on February 10, artificial intelligence entrepreneur Matt Shumer put it bluntly: “I am no longer needed for the actual technical work of my job.”
Shumer’s words, which have racked up 86 million views to date, rattled the nerves of an already-rattled public—and fueled fear of what the future may hold as the AI revolution threatens to disrupt work and upend the economy.
According to historians, anxieties like these have surfaced during all previous technological revolutions, from the assembly line that altered manufacturing to the trains, cars, and airplanes that shortened travel times to the internet that put information at our fingertips.
One notable difference with AI is the unprecedented speed at which the technology is advancing. Newer tools such as Anthropic’s Claude Opus 4.6 enable users to write complex computer code, analyze data, or generate reports in a matter of seconds—or even to carry out several tasks at once through a process called multi-agent teaming.
Below, two political economy historians, Louis Hyman and Angus Burgin, offer perspective on the AI-fueled shift we are experiencing and the concerns it sparks.
Hyman specializes in labor, capitalism, and the changing nature of work in the United States and has authored or edited five books on the history of American capitalism, including Temp: How American Work, American Business, and the American Dream Became Temporary (Viking, 2018). Burgin focuses on intellectual history and the political economy of technology in the US.
The separate conversations, combined and edited for flow and clarity, appear below:
One of the biggest worries with AI is job loss. In past technological revolutions, which workers lost jobs, and how does the current situation differ?
Louis Hyman: Humans have endured a 300-year competition with machines, and I could give you long lists of workers displaced along the way. The Luddites, a group of 19th-century textile workers in England, are perhaps the most famous example. But farmworkers, carriage drivers, elevator operators, toll collectors, typists, travel agents—all of these, and plenty more, were displaced by technology and automation. The US, in fact, used to be a nation of farmers, and now farmers make up a sliver of our population.
Historically, as new technologies and industries emerged, opportunities and jobs arose that workers could train for and take advantage of if they were willing and able. The internet, for instance, led to opportunities in software development, web design, user-experience engineering, and all kinds of positions that people couldn’t predict. Will that happen with AI? We don’t yet know.
What we do know is that labor transitions are tumultuous and chaotic, especially when social legislation is lacking, as is often the case. Most past transitions affected semi-skilled and skilled laborers—the working class. But AI poses risks to the professional class. Professional workers are used to downsizing; they’re not used to being displaced by automation. This will be the first time it happens to people with the most education: college graduates.
Angus, you’re working on a book about the history of the internet. What lessons can we take from the rollout of the internet in the 1990s that relate to the current situation with AI?
Angus Burgin: In the ’90s, private companies and the federal government invested heavily in the technological infrastructure needed for the internet. Millions of miles of fiber optic cable were laid to meet the anticipated demand, with the government offering tax breaks and other incentives—and marketing the internet as a key to supercharging the economy and even pushing forward [the tenets of] democracy. The idea was to skim off the top of that economic growth and redistribute it for things like education, which would be greatly aided by the new and limitless access to information that even rural schools could harness.
The internet was largely viewed by both sides of the political aisle as a positive that would help society. Al Gore called the internet a “bridge to the 21st century,” and investors poured money into internet startups and stocks, creating the dot-com bubble that wasn’t unlike other booms that occurred when new technologies emerged—railroads, cars, radio, for instance—and not unlike what we’re seeing now with AI.
But the money flowing into the internet infrastructure and dot-coms turned out to be a massive overinvestment based on speculation rather than profits—hence the bubble bursting in 2000. Companies like Pets.com ran out of cash. AOL [America Online] merged with Time Warner. And the stock market plummeted.
Could something similar happen with data centers, the technological infrastructure for AI?
Hyman: Yes, though I worry it could be worse. The money invested in fiber optic cable, for instance, was based on predictions about the demand for data. It didn’t initially pay off, but the fiber put in place in the ’90s eventually led to profits as the demand for data rose over the next five to 10 years.
With AI, data centers are being built with huge amounts of GPU (graphics processing unit) power, but GPUs lose value every year. If a data center that costs billions of dollars goes under, those losses will need to be written off. And if GPU alternatives are created to power AI—an effort underway now—then that alone could render today’s data centers obsolete. Should we hold off on building so many data centers? It might make sense to let the technology evolve more.
The internet is now a staple of life and work, but did the early optimism about it turn out to be shortsighted? What can we see in hindsight?
Burgin: The internet gives us access to this unbelievable world of information. If you’re part of a university network, you can have almost instantaneous access to scholarship from all over the world. You can access books out of copyright—and more. But the internet filters information in ways that make it difficult for people to judge whether it’s reliable or unreliable. It heavily incentivizes partial networks of information oriented toward dopamine flows in users’ brains. And so the divide between phenomenal access to information and how that information is used is much greater than a lot of the early advocates anticipated.
Has the internet strengthened our democracy, as the optimists envisioned, or weakened it?
Burgin: The internet was developed as a Department of Defense project, but it grew out of the dial-up, digital bulletin board systems in the ’80s that people used to connect with others who shared common interests—to build community. Now, however, as others have noted, the internet consists of about five major browsers or platforms full of mostly the same information. In other words, it’s run by a small handful of large, semi-monopolistic corporations that often make their money through advertising. And a lot of people didn’t see that coming. Instead, they viewed the internet as fundamentally anti-monopolistic because its information is user-generated, disaggregated, and dynamic—and because so many early internet companies wanted to disrupt entrenched large business interests. It’s interesting to think about the degree to which the lack of regulation on the early internet may be, in part, culpable for the rise of the tech monopolies dominant today.
Would you say it’s critical to regulate new technologies early on?
Burgin: I’m not a policy expert, but a lesson from the internet is that once structures of power are in place—once a company has the capital, the data, and so on—then it becomes extremely difficult to roll them back. So when people say regulation of AI will be hard—and it will be—that can’t become an excuse for doing nothing. Early policy shapes the long-term landscape.
Hyman: An underappreciated fact about life in the US is that rising wages over time are not inevitable. They are something Americans have to choose as social policy, in part by self-organizing and demanding them. It’s a choice over how this plays out.
During the industrial revolution, we spent 100 years fighting a war between labor and capital, and millions of people were left without work—their lives were squandered. The only time the government was proactive was with the New Deal. What can this teach us?
We need to be attentive to reality. We need to have serious conversations grounded in historical analogy to understand what’s different this time around. At a very basic level, the fact that I can run an AI on my computer and don’t need the massive factory that Henry Ford needed to manufacture his cars—that is one big difference. Another is that previous technological revolutions did not really displace or leave people with the most education out of jobs, especially recent college grads. But that is potentially the case with AI.
Do you see the potential for monopolies with AI, given what looks like a laissez-faire approach to regulation at this juncture?
Hyman: There’s an imagination that AI consists only of OpenAI and Anthropic, with capital streaming into those two companies alone and creating a monopoly. But that’s not what’s happening on the ground. In some ways, AI is the most democratic tech revolution we’ve ever experienced. How so? AI enables people to do a lot from their home computers, without needing to invest in a huge data center or cutting-edge proprietary system, and I expect it will only get cheaper and easier to use over time. Many powerful AI platforms cost nothing—Ollama and LM Studio, for example. These are incredible tools that anyone can take advantage of.
This capability is unlike anything we’ve seen before. It undermines the wealthy and the established professions, and it has the potential to destabilize everything.
Typically, whoever has power and control over essential resources accumulates the most wealth—people like Andrew Carnegie, who owned the steel factories that supplied materials for railroads and skyscrapers. Today, the assumption is that this will be the chip-makers—companies like Nvidia. And while I agree that the manufacturing of AI parts will be very monopolistic, it’s important to consider that AI involves two components: training and inference. The training component requires huge amounts of computing power and technical expertise. The inference component, however, involves using the systems—doing or creating something beneficial. I don’t think this part will end up being as monopolistic as previous moments in terms of patents and access to capital and technology.
Any other lessons we can take from history?
Burgin: With telecommunications tools like the telegraph, telephone, and email, people generally viewed these inventions as positives that would help us communicate and live more harmoniously. The idea was that access to communication and information would enable us to become better citizens of the world.
Intuitively, this makes sense, as much as it sounds naïve. But it underscores the importance of thinking long and hard about the unintended and unanticipated consequences of any new technology, rather than just the intended uses. The internet, for instance, killed the business model of local newspapers, which was built on classifieds. Early advocates didn’t foresee that happening, along with other outcomes such as the lack of mediated, fact-checked information. People tend to consume what appeals to their preconceived worldview, rather than encounter information that challenges it. The algorithms of social media networks exacerbate that phenomenon, pushing people into political silos and leading to the polarization we see today—none of which we saw coming.
What are potential unintended consequences of AI? This is a question we need to carefully consider.
Hyman: The idea that there’s a fixed amount of work to do in the world is incorrect. It’s called the lump of labor fallacy. In reality, there’s a nearly infinite amount of work to do in the world. It’s just a matter of whether you can do it at a reasonable price.
AI offers the potential to do stuff that was never possible before. But the technology won’t determine the outcome, just as the steam engine itself didn’t lead to the industrial revolution. Instead, it was the reorganization of people in factories that enabled new technologies to be implemented. In other words, it’s the social reimagination of what the new technology could bring.
Today, the biggest challenge isn’t understanding the backend of AI. It’s not a technical question but rather a political question about power and a cultural question about how we think of and imagine ourselves in organizations. The human capacity for evaluation, curiosity, critical thinking, and humility—these are the capacities needed in the era of AI.
Source: Johns Hopkins University