some rhetorical questions about technology
Robert Adrian

From: "The City Within", edited by Jeanne Randolph, Banff Centre for the Arts, Banff, 1992.

The theme of "Rhetoric, Utopia and Technology" appears to have been based on the idea that technology, especially the new electronic technology, is not only under the control of the dark forces of the military and trans-national corporations but that it is, itself, an instrument of patriarchal control and power. Moreover, in the guise of the ideology of the "technological ethos", technology has become transparent and now dominates our lives and undermines our critical faculty. The implication is that the rhetoric of technology has usurped the place of the rhetoric of utopia - that late industrial consumer capitalism, represented by its world-dominating technology, claims to be a utopian program itself and that any other utopian visions are redundant or, at best, superfluous. In this reading of "Rhetoric, Utopia and Technology" the task would be to recover both rhetoric and utopia from the clutches of technology and to try to discover a utopian vision not dominated by technology and its rhetoric.

But there is another reading of this analysis which considers the new technology both un-rhetorical and anti-utopian. It assumes that the new technology is entirely different from the manufacturing technology of the past 300 years and that it predicts an entirely different social and cultural infrastructure. In such a situation, appeals to the history, traditions or philosophy of the old culture of the industrial west are meaningless. If a new utopia is to be dreamed it will have to discover a rhetoric which does not contain notions like empowerment, control and domination because none of these concepts, like so much other industrial rhetoric, has any meaning in the horizontality - so distressing from the western standpoint - of electronic culture. This culture, new as it is, can be discerned in the development of intelligent machines. These machines are so young that no one can predict their future shape or form. The only thing that is certain is that, unless a global tragedy occurs, these machines will bring about a completely new kind of society and it is unlikely that those of us who are committed to industrial values will like it very much. So why are we building it?

The following are some rhetorical questions concerning the direction of the development of computer technology which try to resist the paranoia or euphoria usually associated with the subject ...


While final decisions are still made by humans, these decisions are increasingly informed by data provided by machines. However, in day-to-day practical terms many decisions are not only informed by but are actually made by machines, with human beings functioning as evaluators or transmitters of machine decisions.

How far are we prepared to go in delegating decision-making to these machines? How far have we gone? Is there any possible reversal of the process? Was there ever any decision made about delegating decision-making to machines or has it been built into the industrial program right from the beginning - whenever that may have been?


In the battle against hackers, computer crime and industrial espionage, a large part of research and development is being devoted to producing machines which can detect, and protect themselves against, potentially damaging penetration. In order for such protection to be efficient the machine must be able to detect increasingly sophisticated and resourceful intruders - it must be designed to be, in a restricted sense, autonomous. That is, it must be able to decide who or what has access to its memory and programs.

Would this amount to a conscious state, perhaps at the level of an oyster? Will future development in computer security be in the direction of self-programming machines that eliminate the need for (unreliable) human programmers? How long will it be before the machines, in the interests of security and efficiency, take over their own design and construction? How desirable, in human terms, are machines that achieve higher levels of autonomy - or is it already too late to ask?


Machines, even intelligent machines, cannot be thought of as having desires or even intentions aside from those of their human creators. It is equally difficult to imagine a machine with anything like animal, or even vegetable, survival reflexes. However, programs of machine development are so complex, diverse and contradictory that a distinct, coherent human "intention" is also hard to discern. Machine intelligence continues to evolve in leaps of quite amazing magnitude, but this evolution is often in conflict with the avowed intentions of the industrial, military, commercial and entertainment interests that the various (often conflicting) programs are meant to serve.

Could this worldwide research and development program be thought of as constituting the "intention" that machines lack - a hidden or subliminal agenda? Is machine development, in the broadest sense, no longer connected to human needs or, more precisely, to the needs of industrial society? Is it appropriate to suggest that the scientists, engineers and technicians are actually engaged in a program of machine evolution and are, in fact, already working for the machines rather than vice versa?


Hans Moravec (CMU Robotics Lab) has pointed out that intelligent machines have already surpassed human intelligence in abstract thinking even though the human brain is actually a much more powerful thinking device than any conceivable computer. He explains this apparent contradiction by the fact that a large part of human intelligence is dedicated to interaction with the physical world while machine intelligence is devoted almost entirely to abstract "thinking". This implies that it is exactly because they are different from humans that machines are so successful as "thinking" devices. But the environment that artificial intelligence and robotics research inhabits is so biased toward human intelligence that other models of intelligence have little chance of "natural selection" in the AI/Robotics jungle.

Can attempts at the duplication or simulation of human intelligence be considered an early or primitive phase of machine evolution? Is the notion of mobility, useful in designing flexible, intuitive, robotic responses to complex and rapidly changing data, necessary in the long run for electrical/mechanical forms that can easily communicate with each other over long distances via electronic networks? Will our increasing reliance on (and respect for) intelligent machines eventually lead to a new way of thinking about machine intelligence - not as artificial but natural?


Much of the development in electronic equipment in the last 10 years has been in the field of communications and the human interface. This has amounted to a revision of the relationship between human and machine. Recent developments in telephone and radio communications (ISDN, data compression methods, cellular networks etc.) allow very efficient exchanges between digital devices - the machines communicate with each other and the human role is merely to act as an interface between the machines. Even when machines blunder or deliver misleading data, the result is an increased effort to correct and improve the technology - usually by seeking reliability through increased autonomy - and not a rethinking of the appropriateness of the technology itself. The technology itself can no longer be put into question because, although computers are barely 50 years old, the infrastructure of modern business and commerce, government, the military and everyday life in late- or post-industrial societies is unthinkable without them and their integrated communications networks.

Are these systems under the control of their human creators or vice versa? Is it a partnership? Is "control" a meaningful concept in relation to the complexity and power of these systems? What happens to a culture accustomed to authority, dominance and control if power is delegated to, or appropriated by, machines? Can machines, even an autonomous oyster-like computer network, be malicious, corrupt or evil?


The notion of industrial manufacturing - the processing of raw materials into consumer products - is being revolutionised by the concept of nano-technology, which operates at the molecular level. Using a computer-aided scanning-tunnelling microscope, atoms can be manipulated to build functioning molecular tools. These tools, programmed to perform specific functions, are capable of building molecular structures. At the moment they are being used in medicine to treat arterial problems, but it is predicted that within the foreseeable future this technology will be capable of manufacturing quite large and complex objects. Because this technology is entirely dependent on computers it is reasonable to assume that the manufacture of computers and computer components will be one of the priority tasks. In a sense computers would then be in possession of their own system of replication.

Is it possible to imagine a machine embryology? Would this mean that machines would become, in some sense, biological and enter the realm of "nature"? Could bio-machine intelligence, being stationary but integrated in neural communications networks, be likened to plant life ... a kind of intelligent forest or savanna grassland?

Is this an artificial evolution program?

(Originally written during the 10 week workshop "Rhetoric, Utopia and Technology" at the Banff Centre for the Arts, January - March 1992.)