Historian Mar Hicks on why nothing about AI is inevitable

Some have deemed AI usage an inevitability. Many recent headlines and reports quote technology leaders and business executives who say AI will replace jobs or must be adopted to stay competitive in a variety of industries. There have already been real ramifications for the labor market; some companies have laid off a substantial number of employees to “go all-in” on AI. In response, a number of colleges are creating AI certificate programs, often with backing from the AI companies themselves, so that students can demonstrate at least an “awareness” of the technology to future employers.
Looking at the history of technology, however, these pronouncements about generative AI and work can be better understood as marketing tactics meant to create hype, gain new users, and ultimately deskill jobs. Deskilling reduces the level of competence required to do a job and funnels knowledgeable workers out of their positions, leaving brittle infrastructure behind in their place.
Mar Hicks, a historian of technology at the University of Virginia, researches and teaches the history of technology, computing, and society, and the larger implications of powerful and widespread digital infrastructures. Hicks’ award-winning book, Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, chronicles how Britain lost its early lead in electronic computing as the proportion of women in the field declined, leaving a dearth of workers with the expertise, knowledge, and skills required to operate increasingly complex computer systems. Hicks co-edited Your Computer Is on Fire, a collection of essays examining how technologies that centralize power tend to weaken democracy, and is currently working on two projects: a history of resistance to large hegemonic digital systems, and a book on the dot-com boom and bust.
Fast Company recently spoke with Hicks about hype cycles, how tools get framed as “inevitable,” and the relationship between technology, labor, and power. This interview has been condensed and edited for clarity.
There is a lot of AI hype. Previously, there was blockchain hype. In the late 1990s, there was dot-com hype. How has hype played a role in the adoption of and investment in prior technologies?
The new technologies that we’ve been seeing over the past 20 to 30 years in particular are dependent on a cycle of funding from multiple sources. Notably, sometimes really significant funding from venture capital sources. They almost by necessity have to sell themselves as more than what they feasibly are. They’re selling a fantasy, which in some cases they hope to make real, and in some cases they have no intention of making real.
This fantasy attracts venture capital because venture capital bets on all sorts of different technologies, many of them somewhat outlandish, in hopes of one or a few being billion-dollar technologies. As that cycle has become more tuned and understood by both the investors and those whose companies are being invested in, various ways to game the system have come about.
Sometimes they can be incredibly harmful. One way to game the system is to promise something you know you can’t deliver simply for very short-term gain. Another way of gaming the system, which I would argue is more dangerous, is to promise something that you either know you can’t deliver, or you’re not sure that technology can deliver, but by trying to essentially reengineer society around the technology, reengineer consumer expectations, reengineer user behaviors, you and your company are planning to create an environment—a labor environment, a regulatory environment, a user environment—that will bring that unlikely thing closer to reality.
In doing so, a lot of dangerous things can happen, especially when the attempt to do these things does something like labor arbitrage, where the profit for a particular technology isn’t coming out of the technology’s utility—it’s coming out of the fact that that technology allows employers to either pay far less or nothing for labor that is somehow seen as equivalent, even if it’s much inferior, or to do things like browbeat labor unions through the threat of a certain technology.
We’re seeing that a little bit now with certain technologies that are being marketed not just as labor saving, but as something that can replace even human thought. That might seem very dystopian, but it’s a common thread in the history of technology. With every wave of technologies that automate some part of what we do in the workplace, the benefits are always overpromised and under-delivered.
Hype cycles seem to be centered on tools. Why are tools historically hyped in a way that invisible systems, practices, and knowledge are not?
Hype cycles tend to be centered on visible products and tools because it’s much harder to explain and mobilize excitement around processes and infrastructures and knowledge bases. When you start going into that level of complexity, you automatically start talking about things in ways that are a bit more grounded in reality.
If you’re going to try to hype something up, you’re promising radical new change. You really don’t want to get into the details very much about how this fits with existing processes, labor markets, and even business models. The more you get into those details, the more you get into the weeds about how things might not work, or how things quite obviously don’t make sense. Where the profit is coming from not only can start to look unclear, it can start to look very shortsighted.
This focus on tools comes up again and again in a way that’s both very specific but also kind of hand wavy. We see these tools as a thing that we can understand and grasp onto, literally or metaphorically, but also, the tool stands in for a whole bunch of other things that become unsaid or hidden and are left to people to infer. That puts the person or corporation who’s selling the tool in a very powerful position. If you can drum up hype and get people to think the best of what the possible future outcomes might be without telling them directly, then you don’t have to make as many false promises, even as you’re leading people in a very specific direction.
Technology adoption is frequently framed as inevitable by those advocating for it. Why are technologies framed that way? This seems like a technologically deterministic way of thinking—as if it is predetermined that people will adopt a technology just because it exists.
Framing anything—a technology, a historical movement—as inevitable is really a powerful tool for trying to get people and organizations on your side, ideologically. In some cases, when a very powerful person, organization, or set of social interests says that something is inevitable, it becomes much harder for other people who might not see that inevitability, or who might not want that thing to come to pass, to push back. Instead of just disagreeing on the level of whether something will work well, the discourse is shifted to arguing whether or not it’s inevitable, and how to mitigate the harm if it is.
Once you fall back to the position that the technology may be inevitable as a critic, you’re already arguing from a much weaker position. You’re arguing from a position that assumes the validity of the statement that a technology is just going to come along and there’s nothing that can be done to stop it, to regulate it, to reject it.
This is a technologically deterministic way of thinking, because it produces this idea that technologies shape society when, of course, it’s usually the other way around. It’s society that creates and chooses what those technologies are and should be. Saying a technology is inevitable and that it is going to determine how things historically develop puts so much power in the hands of the people who make the technology.
I think some of the feeling of inevitability with regard to AI comes from the fact that AI features have already been integrated by engineers into many tools that people rely on, and the makers of these technologies do not provide a way to opt out. How inevitable is widespread AI usage?
The only way we can truly answer that question is in hindsight. If it were inevitable, that would mean that people and the governments they have elected to supposedly represent them no longer have a say in the process. Technology corporations are essentially running a massive beta test of generative AI LLMs on the public right now, at low introductory rates or even for free. I think it’s really premature to say that things have to go the way that the people boosting, profiting from, and funding these technologies want them to go.
It’s not inevitable. History doesn’t just happen. People, organizations, and institutions make it happen. There’s always time to change things. Even once a technology or a set of practices becomes entrenched, it can still be changed. That’s real revolution. To say that the technology can become inevitable and sort of entrenched in these simple terms is, let’s just say, a big oversimplification.
How can individuals resist technologically deterministic thinking and AI hype?
There are a couple of things that I would caution people to be on the lookout for. Whenever something is framed as new and exciting, be very wary about just uncritically adopting it or experimenting with it. Likewise, when something is being presented as “free,” even though billions of dollars of investment are going into it and it’s using lots of expensive resources in the form of public utilities like energy or water.
That is sort of a red flag that should cause you to think about not uncritically adopting something into your life, and not even playing around with it with the goal of “checking it out.” That is exactly the behavior and curiosity that companies rely upon to get people hooked, or at least talking about these products, creating positive rhetoric, and generating buzz that helps them spread farther and faster, regardless of their level of utility or readiness.
I would really love to see folks being a little more skeptical of how they use technologies in their own lives, and not saying to themselves, “oh well, it’s here, so nothing I do is going to change that. I guess I’ll just use it anyway.” People do have agency, both as individuals and as larger groups, and I would foreground that if we’re thinking about how we combat technological determinism. I’ve been really heartened to see that so many journalists have changed their approach in the last few years when it comes to pulling back on the breathless, uncritical reporting of new tools and new technologies.
Science and technology reporting frequently focuses on new advances and therefore reporters seek interviews with scientific or technological experts, rather than people who study the broader context. As a historian of science, what do you think is left out when that perspective is not included?
It’s totally reasonable to expect that science and technology journalists will talk to the folks who are experts in a particular technology. But it’s really important to get the context as well, because a technology is only as useful as the ways it gets applied. While these folks are experts in that technology, they are not, just by nature of what they do, going to be experts in its application and social propagation, or the way it is going to impact things economically or politically or educationally. It’s not their job to have very deep or good answers to those things.
Domain experts from those fields need to be brought into the conversation as well, and need to be brought into any reporting on a new technology. We’ve gone through a pretty dangerous metamorphosis since the late 20th century, where anybody who is an expert in computing, or can even just be seen as competent in computing, has been given an intellectual cachet: their opinions are considered more important than those of people who aren’t technological experts, and they are asked questions that they’re not well equipped to answer.
In the most benign case, that means you get poor answers. In the worst case scenario, it means that people who are trying to manipulate public discourse to help their business interests can do that really easily.
I have seen AI compared to the calculator, the loom, the industrial revolution, and mass production, among other things. Are any of these historically accurate comparisons?
I think that certain aspects can be historically accurate, but the way that people cherry-pick which aspects and which technologies to talk about usually has more to do with how they hope things will go than with explaining how things are likely to go.
As a historian, I think it’s important to use examples from the past, but I prefer to see them used in a way that’s a bit more critical. Instead of just saying “AI is like a calculator, it’s just a new tool, get over it,” maybe we should be comparing it to automated looms and automated weaving, and thinking about how that affected labor, and how frame breakers—Luddites—were coming in and trying to get this technology out of their workplaces, not because they were against technology, but because it was a matter of their survival as individuals and as a community.
These historically accurate comparisons are tricky, and I would just say, be wary of anybody who’s giving a historical comparison that they say is going to 100% map onto what’s happening now, especially if they’re doing it to say “get over it, people were afraid of this technology at the start, too.”
AI is purported to boost or augment workers’ skills by automating tasks so that workers can spend less time on them. This reminds me of Ruth Cowan’s More Work for Mother. Do new technologies tend to save time?
Oftentimes, new technologies do not save time, and they often do not function in the ways that they are supposed to or the ways that they’re expected.
One of the throughlines in the history of technology is that big, new infrastructural technologies often make more uncompensated labor. People may have to do more things to essentially shepherd those technologies along and make sure that they don’t break down, or the technologies create new problems that have to be addressed. Even if it seems like it’s saving time for one person, a lot of the time it is creating a ton of work for other people—maybe not in that immediate moment. Maybe people, days, months, even years down the line, have to come in and fix a mess that was created in the past.
In your book Programmed Inequality, you write about how feminized work, work that was “assumed to be rote, deskilled, and best suited to women,” was critical for early computing systems. This work was anything but unskilled. Now we have work that is assumed to be unskilled—and has historically been done by women—being marketed as replaceable by AI: using chatbots to virtually attend or take notes of meetings, to automate tedious tasks like annotating and organizing material, to write emails, reports, code. What do we lose when we let AI do these kinds of tasks rather than letting people accomplish them on their own, if the task is theoretically getting done either way?
In my book, I talk about how early computing, especially in the U.K.—but this was also true in the U.S. to a very large degree—was feminized. In other words, it was done largely by women.
The other thing that the word “feminized” means is work that is seen as—emphasis on seen as—deskilled, and it’s undervalued as a result. It was seen as just another kind of clerical work, or very rote mathematical work, nothing that required any sort of real brilliance, even though it did require a lot of education, skill, and creative thinking to do these early programming jobs at the dawn of the electronic age. Bug-tracking software, tools that help people keep their code neat, or even compilers [did not exist yet]. It was so much more difficult to do these things back then.
When you have this dynamic of hiding the real cost of labor behind automating tools, you’re building incredibly brittle infrastructure. You’re creating a situation where the emphasis is on the tools and the physical technology, and completely hiding, discarding, not paying for, not writing into the budget, all of the labor, knowledge, and expertise that is required to make those tools and systems work.
When you take out that big chunk of infrastructural systems, you take out the know-how. You take out the people and the processes that can make sure that these things are staying on track. And then everybody who relies on that infrastructure is in a dangerous situation. It might fail slowly, or things might be going fine, and then all of a sudden, something catastrophic happens.
What are the potential problems with seeking a technological fix to social or economic issues?
The problems are twofold. The first is that it can only, at best, fix one small part of a larger problem. If you’re seeking a technological fix, you’re fixing only one part of what’s going on. It simply cannot fix the other parts. You need other kinds of solutions for those, working, hopefully cooperatively, with technological change.
I’m not anti-technology by any means. For instance, public health technologies, like the sanitary sewers that most of us in the U.S. and Western Europe only got in the 1800s, have enabled us to live happier and longer lives. But that was a sociopolitical technological fix. Without political buy-in, without states funding these massive infrastructure projects, we would not have those things, and we wouldn’t have the taxpayer bases that fund their maintenance. The technology alone is not enough.
The other problem—which I think is a big problem and is becoming a bigger one right now—is that if you say there’s only a technological fix to something, you cut out all the other stakeholders, and you put tremendous power—decision-making power, economic power, all types of power—in the hands of the people, organizations, or companies that are going to come up with that technology. That’s not democratic at all.
You research the history of the adoption of computing systems, or computerization. How is computerization historically related to power?
It is, in a very direct and material way, related to the power of states, their attempts to conduct warfare, and their attempts to control the essentially human resources of their population through collecting data and manipulating that data.
Many people don’t know or pay attention to that history because they were raised in the era of the personal computer, or the phones, laptops, and wearable technologies that we use today. These are presented to us as personal consumer technologies that are for us when, in fact, if you dig below the surface, even just a little bit, you see how these are hooked into really huge systems, and they function because of these huge systems that are doing many, many more things than just giving us our little communication devices. The adoption of computer systems has both expanded and also paradoxically hidden the power of governments and the companies that they contract with to do things that are in the government’s interest.
Programmed Inequality begins with a short tale from 1959: a (female) computer operator trained (male) new hires with no computing experience, who were promoted into management roles while the operator was demoted to an assistantship below them. This reminded me of what is now occurring with AI, but rather than men displacing women, technology is displacing humans. Specifically, people are choosing to replace people with technology. How do you contextualize this belief that technology is more skilled than people?
The historical context for this myth that technology can fix things in a way that labor can’t is tied up in an effort to centralize power in the hands of people and organizations that are already very powerful.
The reason that we see this very extractive and oftentimes counterproductive pattern with technology adoption is because technology sells the fiction to those who already have a lot of power that this will allow them to wield that power more directly, and, in fact, amass more power. They won’t have to do the messy work of talking to the people who are going to be doing the thing they want them to do. They won’t have to deal with organized labor or even unorganized labor. They won’t have to actually work with people. The reason that’s so burdensome is because people usually have a lot of interesting thoughts and ideas, and have a lot of specific domain knowledge that might slow things down.
If you can sort of ram a technology through, maybe the job is not going to be done very well, but it has this enormous benefit to the people in power that they have either a real or fictional sense that they have ever more control and power, and that can be really dangerous because it’s essentially promoting a type of technological oligarchy.
Is it possible to square the extractive nature of AI with its supposed benefits?
You can’t square that circle, unless you really don’t care about where things are being extracted from. If you’re in one of the communities or classes of people that value is getting extracted from, that’s going to be a problem. You are always going to be giving more than you are getting.
Technologies aren’t a rising tide that lifts all boats, unless we very specifically design them that way. With computing, they have not been designed that way. There’s been a ton of rhetoric, from the government to the news media, trying to present the image that it’s going to be good for everybody and it is going to raise everybody up in a somewhat equal manner. That has not been the case since the beginning, and it is not the case now.
How did we get to this current moment, with tech billionaires wielding unprecedented power over the federal government, from the original rise of computing?
The legacy of feminized labor very broadly, not just in computing, but even in the industrial revolution, tells us an awful lot about how we got here. What we see in the throughline of computing history is how it’s been feminized and then gendered masculine, and how certain types of expertise have been valued while certain people have been cut out. The historical process of computing as a field mirrors some politically and socially regressive trends. While the rest of the United States in the mid to late 20th century and into the early 21st was going in a more progressive direction, these technologies were freezing in amber earlier structures of hierarchy, power, and control. As they scaled up and became embedded into all of our institutions, they necessarily brought that with them.
A lot of people have been warning about this since the beginning, but it didn’t really click with the broad mass of people until more recently because the harms hadn’t been quite so evident for a lot of people in wealthy nations. Now those harms are becoming obvious. Our chickens are coming home to roost, in a sense, and while it isn’t surprising, it’s really destructive and sad to see. These are not things that are either inevitable or irreversible, but they are big problems that we are going to have to tackle in a way that’s much more robust than saying, “what kind of technological fix can we slap on this technological problem?”
Technologies always create more problems that [tech companies and marketers] then want to sell you another technology to fix, and that just continues the cycle of harm. What we need are political, social, economic, and communal responses and solutions, and to have enough power as a society made up of people, not just machines, to get those implemented for all of our good.