Is Technology our Enemy?
One of the best-known “laws” in the history of technology is Kranzberg’s First Law, named after the historian of technology Melvin Kranzberg (1917-1995), which states that “technology is neither good nor bad, nor is it neutral.”
This seemingly enigmatic statement is actually quite profound: it says that technology always has an effect, but that it can be salutary or harmful, depending on other factors. Kranzberg never fully specified what these other factors were, but clearly “good institutions” would be central to the outcome.
Perhaps the closest he came is Kranzberg’s Fourth Law, which states that “nontechnical factors take precedence in technology-policy decisions.”
In their recent book Power and Progress, Daron Acemoglu and Simon Johnson highlight one aspect of this issue: who has agency in deciding whether to adopt a new invention, and do the interests of that decider square with a reasonable social welfare function that reflects all groups affected by the innovation?
The Bite-Back Effect
A different factor, and one that has not received sufficient attention, is what Edward Tenner in his brilliant 1996 book Why Things Bite Back has called technological “bite-back” effects. Bite-back occurs when an innovation has unexpected and unintended effects.
Such consequences are probably negative in most instances, but not invariably so.
Bite-back happens because a new technology is by definition an exploration into the unknown, and so it is impossible to predict precisely whether it will do more, less, or something entirely different from what it was meant to do.
Many inventions have a narrow epistemic base; that is, they work before it is understood why and how they work, and hence unexpected consequences are likely to occur.
Bite-back effects come in many shapes and forms, big and small. Lead additives to gasoline solved the problem of engine knock in cars but were eventually found to be highly toxic. Chlorofluorocarbons were effective propellants for spray cans but proved harmful to the ozone layer. Asbestos was long regarded as very useful in construction until it was recognized as toxic.
Humanity’s age-old war against insects is replete with bite-back effects, most famously of course the familiar tale of DDT. One of the most remarkable examples of all time was the 1912 Haber-Bosch process to synthesize ammonia from atmospheric nitrogen. By the year 2000, half the nutrients supplied by the world’s crops and 40 percent of their proteins could be traced to this nitrogen-fixing process.
It was not suspected until much later that the carefree application of nitrates in agriculture posed a growing threat of nitrogen pollution, which has led to massive algal blooms and the appearance of large “dead zones” in coastal waters.
The mother of all bite-back effects is the epic rise in the use of hydrocarbons in fossil fuels and plastics, the harmful effects of which are now becoming abundantly clear on a planetary scale.
What economists must realize is that if negative bite-back effects are economically significant, our calculations of historical productivity growth are seriously biased upward. These computations have failed to subtract the cost of a scarce input from the revenue, either because its scarcity was not recognized, or because it was not associated with property rights. Hence estimated productivity growth treats a socially costly input as if it were free.
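To see the direction of this bias, consider a minimal growth-accounting sketch; the notation here is illustrative, not drawn from any particular study. Measured total factor productivity (TFP) growth is the output growth left over after priced inputs are netted out:

\[ \widehat{A}_{\text{measured}} = \widehat{Y} - s_K \widehat{K} - s_L \widehat{L}, \]

where hats denote growth rates and \(s_K\) and \(s_L\) are the cost shares of capital and labor. If production also draws down an unpriced input \(E\) (clean air, a stable climate) with shadow cost share \(s_E\), the true residual is

\[ \widehat{A}_{\text{true}} = \widehat{Y} - s_K \widehat{K} - s_L \widehat{L} - s_E \widehat{E}, \]

so conventional estimates overstate productivity growth by \(s_E \widehat{E}\) whenever use of the unpriced input is growing.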
Over twenty-five years ago, The Economist asked rhetorically whether, had the internal combustion engine been charged its full environmental cost from the start, it would ever have been adopted at all.
Typically, when a bite-back problem is recognized, both regulators and the market try to apply a “technological fix”: some way of minimizing the negative externality, or abandoning the technique altogether and finding a substitute that does not suffer from bite-back.
There is nothing automatic about these fixes: the party responsible for the negative externality has no incentive to invest in a technological fix (since the costs are usually spread over a large population) and has to be compelled by regulators or courts; in most cases, the costs end up being borne by taxpayers and consumers.
That fix may have bite-back issues of its own, a further fix may then need to be found, and so on. Such dynamic sequences have not been uncommon in the past and are likely to continue. How bad can bite-back issues get?
Unknown Unknowns
Paraphrasing the infamous Donald Rumsfeld, we can distinguish between known unknowns and unknown unknowns. Climate change is a known unknown: we do not know precisely how bad it will get, but experts can estimate the probability distribution of the effects of greenhouse gases on a range of outcomes. It is unknown where in that distribution the planet will end up, even if we could predict policy responses (which we cannot).
However, we can form reasonable estimates. The possible bite-back effects of other recent macroinventions, such as AI, mRNA technology, or in vitro gametogenesis (which allows any human cell to be converted into stem cells and then into eggs or sperm), remain unknown, because we do not have anything like the epistemic base that would allow us to apply scientific models and produce quantitative estimates of their possible effects.
We simply don’t know enough. The one thing we can say for sure is that climate change, much like chlorofluorocarbons and microplastic pollution of the oceans (and very much unlike asbestos or gasoline laced with tetraethyl lead), is harder to deal with because its effects are planetary rather than localized.
Nation states and municipalities can implement technological fixes to undo local bite-back effects, even if they are expensive (as they often are). Once the effects are global, however, massive problems of international collective action may interfere, even when the technological fixes are available (as they are).
Furthermore, global effects are likely to be far more expensive to fix. The other difficulty with applying technological fixes has to do with the “Red Queen effect.”
Some inventions that solve a problem humanity has long struggled with eventually encounter resistance, thrown up by Mother Nature or by other humans, that undoes their effects: a special case of bite-back.
Harmful living organisms, such as insects and bacteria, build up resistance to the toxic substances we throw at them; cancer cells do the same. Similarly, the evildoers behind cybercrime and malware respond by immunizing their tools to our defenses. Breakthroughs in vaccination technology run into human ignorance and conspiracy theories. Such wars are never quite won. Does this apply to Artificial Intelligence?
Fire With Fire
One fear is that radically new techniques such as AI may degenerate into Red Queen battles, in which nasty autocrats, cybercriminals, and other malevolent agents use them to surveil their citizens, create chaos, or steal money.
Civil society throws up safeguards, which are then circumvented by the bad actors, and so on. It is hard to see precisely how such AI could be an existential threat to the human race comparable to thermonuclear bombs (as some doomsayers predict), but if it irreversibly erodes trust in institutions and sources of information, it may destroy democratic civil society, and that would surely be bad enough.
My conclusion from all this is not that technological progress has been an unmitigated disaster and needs to be halted, but quite the reverse: we need more of it, but also the institutions that Kranzberg had in mind.
Unlike the hapless sorcerer’s apprentice who unleashed forces he could not control and that eventually did him in, humans learn, adapt, and correct. Human ingenuity creates bite-back effects and human ingenuity fixes them.
The solution to unintended technological bite-back is to improve and adjust the techniques causing them, or if necessary to replace them with others that do not have deleterious effects.
This sounds like fighting fire with fire, and in a way it is. In the past, this has worked more often than not. Is this time different? Perhaps not as much as the doomsayers would have us believe.
A technological solution to our current bite-back predicaments seems more likely than a political one. One way or another, it is worth citing Kranzberg’s Sixth Law: “All history is relevant, but the history of technology is the most relevant.”
IEP@BU does not express opinions of its own. The opinions expressed in this publication are those of the authors. Any errors or omissions are the responsibility of the authors.