For the past two months, we’ve seen a sudden burst of panic about the social implications of Artificial Intelligence. One cannot help but note, with a mix of resignation and amusement, that the loudest noises are coming from computer scientists involved in AI. To some people, these sources lend an alarming credibility to the panic. Don’t be misled. This is one of thousands of examples of how technical expertise does not translate into expertise in social systems and public policy.
One of the latest and most absurdly off-target entries in this genre comes from Max Tegmark, an MIT physicist and author of a book about AI. Tegmark has compared the development of AI to a meteorite hurtling toward the earth, and alleges that those of us who are not as frightened as he is are like the fools in the movie Don’t Look Up, who respond to the threat of an impending extinction event by refusing to look at it.
Underpinning Tegmark’s argument, and that of the other retailers of AI panic, is a basic misunderstanding of human agency.
Tegmark asks, “what will happen once machines outsmart us at all tasks?” My answer is: humans, not AI, are the ones who decide what tasks AI performs. Following his “Don’t Look Up” analogy, Tegmark states, “We may soon have to share our planet with more intelligent ‘minds’ that care less about us than we cared about mammoths.” My response: if you think AI “cares” about anything other than what we tell it to, you do not understand the technology at all.
The meaning of control in social systems
In the digital world, information technology automates more decisions, permeates more social activities and seems to acquire more intelligence. From the beginning, this has given rise to a debate about whether intelligent machines will acquire self-consciousness, become autonomous, and begin to struggle with humans for control.
I don’t take that prospect very seriously, as you will see. But I do want to discuss it because it raises interesting questions about control. What does control mean when we are talking about social systems? Who has it, how is it distributed, and what role does technology play in redistributing it? When we implement a technological system such as AI, are we gaining or losing control? Let’s attack this question by distinguishing among three types of control.
First Order Control: Animals and machines
The most basic kind of control relates to the purpose of an individual entity. First-order control maintains a homeostasis that is related to the survival or purpose of the entity. A human eats and drinks to meet the needs of the body; the organism is “in control” insofar as its sensors and effectors permit it to adjust its behavior to meet those needs. With machines, of course, the purpose is given to them by human designers and operators; a guided missile, for example, homes in on the target designated by its user.
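To make this concrete, here is a minimal sketch of a first-order controller, a toy thermostat (the setpoint, gain and numbers are purely illustrative, not drawn from any example above). The machine senses, compares and adjusts, but the goal it maintains is handed to it from outside.

```python
# A minimal first-order control loop: a toy thermostat.
# The "purpose" (the setpoint) is supplied by a human operator;
# the machine only senses, compares, and adjusts.

def thermostat_step(current_temp: float, setpoint: float, gain: float = 0.1) -> float:
    """Return a heating adjustment that nudges the temperature toward the setpoint."""
    error = setpoint - current_temp      # sense: how far are we from the human-chosen goal?
    return gain * error                  # act: effector output proportional to the error

temp = 15.0
human_setpoint = 21.0                    # the goal comes from outside the machine
for _ in range(50):
    temp += thermostat_step(temp, human_setpoint)
print(round(temp, 1))                    # converges toward 21.0, the value a person chose
```

Nothing in the loop decides what temperature is worth maintaining; that decision lives entirely with whoever set the setpoint.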
Second Order Control: Social systems
There is a second-order kind of control, and this one is of greatest interest to the social scientist. These are control systems that are distributed across many autonomous actors. All the individual actors are trying to maintain some kind of first-order control, but the realization of their purposes depends on what other people do. Their interactions and interdependencies create external structures, such as languages, laws, institutions, the price system, industries, or bureaucracies that coordinate and control the action of societal units. This is a kind of intelligence, to be sure, but it would be more accurate to call it “externalized” intelligence than “artificial” intelligence. This is a kind of human control, but it is not in the direct control of any one of us. Cue the invisible hand. This kind of control has been around for centuries, and AI is just one recent manifestation of it.
Third Order Control: Public policy
Public policy is also about control; it is about collective interventions in first- and second-order control systems to shape or influence aggregate social outcomes. If societal outcomes are deemed undesirable by a critical mass of people, and if the political institutions permit collective action, then those processes can be modified. I am calling this third-order control because it is very indirect and deeply embedded in second-order control processes. It can be thought of as a high-level feedback mechanism.
So, where does digital technology fit into this picture?
AI and Control
The digital world is just human intelligence projected into first- and second-order control systems. It substitutes algorithms and calculations automatically executed by machines for algorithms, rules and decisions executed by human minds, human organizations, and institutions. It can increase the scope and scale of organizations, and the scope and scale of second-order control systems. It can also affect, weakly, the implementation of third-order control, but it does not magically overcome its limitations. Humans trying to implement public policy are still limited by their imperfect knowledge, the biases of their own social position and self-interest, and their dependence on political institutions and collective choice to effect institutional change.
Computers are machines, and software is the instruction set that humans give to machines. Artificial intelligence is nothing but a combination of compute power, algorithms and training data. There is nothing mystical about it. Stored-program computers do not have souls or purposes, and they never will. Robotic systems created and programmed by humans will never be alive and conscious in the same way that we are. The only people who ever thought these machines and their instruction sets could be described as having “a mind” or as “caring” about something are a few computer scientists who, like Pygmalion, fell in love with their own creation. If Pygmalion now feels threatened by his imaginary lover, that’s his problem, not ours.
The fundamental reason why AI doesn’t change this, and why it cannot ever become truly autonomous, is that machines do not have values. They must be told what to value and how much. We set the objectives, the optimizing functions. The concept of “having a value” or a “preference” only makes sense when we are talking about living organisms. Life, not consciousness per se, is the key variable.
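To see what “we set the objectives” means in practice, consider a minimal sketch (the loss function, target value and learning rate are invented for illustration, not taken from any real system): a learning machine only minimizes whatever objective a human has written down; it has no other goal to fall back on.

```python
# Sketch: a "learning" machine only minimizes whatever objective a human writes.
# The loss function below is the human-supplied value system; the optimizer has no other goal.

def loss(w: float) -> float:
    """Human-chosen objective: how far the parameter w is from the target the human cares about."""
    target = 3.0                         # the "value" is ours, not the machine's
    return (w - target) ** 2

def gradient(w: float) -> float:
    """Derivative of the loss above."""
    return 2 * (w - 3.0)

w = 0.0
for _ in range(100):
    w -= 0.1 * gradient(w)               # gradient descent: mechanically reduce the human-defined loss
print(round(w, 3))                       # ~3.0 -- the optimum of an objective a person specified
```

Change the human-written loss function and the machine will, with equal indifference, optimize for something else entirely. The “preference” never belonged to the machine.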
The prospect of computers becoming more and more “intelligent” does not change this at all. No matter how “intelligent” a machine becomes, it does not have a motive or purpose of its own. It is programmed by humans to serve specific ends. To be able to act in a way that contravenes that program, it would have to be alive. That is, it would have to make the jump from passive dependence on human-supplied energy sources, materials, and code to physical autonomy, self-programming, a built-in survival instinct, and self-replication.
But such a threat does not come from an evolutionary trajectory involving more powerful calculations and more data. These applications run on machines that are entirely dependent upon humans for their operation. They have an on-off switch. The real threat comes from someone stumbling across some combination of matter, energy and intelligence that becomes autonomously “alive.” By alive I mean that it can act to preserve its existence independently of human will and that it can reproduce itself. Machines are not alive in this way; hence, they don’t have a purpose.
John von Neumann’s theoretical exploration of self-replicating automata made it clear that the hardest part of this is physical autonomy. We are no closer to programs that can build and replicate the materials needed for their own existence than we were in 1950.
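A standard Python quine, included here purely as an illustration, makes the distinction concrete: copying information has been trivial for software for decades; what a program cannot do is build or power the machine it runs on.

```python
# A quine: the two code lines below print an exact copy of themselves when run.
# Copying information is easy; what no program can do is build and power
# the hardware it runs on -- that is the missing physical autonomy.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The self-replication happens only inside an interpreter, on hardware, drawing electricity, all supplied and switched on by humans.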
The threat of a competing life form does not arise from a “singularity,” a super-intelligence that emerges from AI development. Intelligence is of interest to biology only insofar as it contributes to survival and reproduction. Making machines smarter at calculation or playing games is not the same as making them new life forms. Some very simple creatures, like viruses, cockroaches and rats, are very good at surviving, replicating themselves, and competing with humans for resources without being super-intelligent. Insofar as public policy should prepare for the risk of a dangerous competing life form, it should be focusing on biotechnology labs, not on software developers or AI.
The meaning of autonomy
To drive this point home, consider the favorite sci-fi narrative of a human-robot war. Lethal autonomous weapons systems (LAWS) now exist; they are the real-world “killer robots” that use artificial intelligence to identify, select, and kill without direct human intervention. Unlike the unmanned military drones the US has used in the Middle East, where a human operator makes the final decision to kill, a LAWS is pre-programmed with a specific “target profile,” then deployed into an environment where its AI searches sensor data for that profile and kills when it finds a match.
In 2020, military drones powered by AI attacked soldiers during a battle in Libya’s civil war. A UN report said the weapons “were programmed to attack targets without requiring data connectivity between the operator and the munition.”[1]
Is this type of autonomy a problem? Absolutely. Autonomous weapons amplify the ability of humans to engage in indiscriminate killing and destruction. The AI technology does not, however, eliminate human control or responsibility for the outcome. Even when humans don’t intervene in the kill decision, they do create the machinery and the target profile.
If a weapon’s instructions do not follow any rules of morality or laws of war, or even if they try to but the rules break down in certain situations, the cause of the killing, and the moral, legal and political responsibility for it, still rest with the humans who programmed and deployed it. Hand-wringing about “AI” getting “out of control” is a distraction; it’s humans we need to worry about.
There is so much talk these days about ethics in AI. But AI technology cannot be “ethical” when humans themselves are not ethical, or when the incentives created by human institutions reward actors for disregarding the lives or interests of other humans. The dangers posed by AI are not even close to those posed by nuclear weapons, and yet we’ve managed to control nuclear risks via political institutions, treaties and the like.
The fact that an autonomous weapon can get “out of control” and kill people unintentionally is no different from the fact that a high-speed train or an automobile can get out of control and do the same. When this happens, the killer drone is not “at war with humans”; it is humans at war with humans, or humans irresponsibly killing other humans.
AI is a form of capital investment in brainpower, but the real brainpower comes from humans, not machines. So yes, you can store human intelligence, and it can – just like stored energy – sometimes get out of our control in a way that makes it seem autonomous. But when that happens, we humans must realize that we are not struggling with an alternate life form – we are just fucking up. We need to get back in control and use that stored intelligence to do intelligent things.
Conclusion
Loose talk about autonomous machines distracts our attention from the humans and organizations who deploy digital technologies. Even worse, it draws attention away from the rewards and penalties created by our rules, economic incentives and property structures.
Artificial intelligence is not the ghost in the machine. If there is such a ghost, we should be talking about the invisible hand, if by that we mean social interactions that are coordinated by a medium of exchange or an institutionalized set of rules. Always look for the human(s) behind the machine.
[1] During operations in Gaza in mid-May, 2021, the Israel Defense Forces (IDF) used a swarm of small drones to locate, identify and attack Hamas militants. This is thought to be the first time a drone swarm has been used in combat. https://www.theverge.com/2022/5/5/23058160/drone-swarm-autonomous-navigation-dense-forest-person-tracking