Text written in 2003 with Henrique Garcia Pereira
Applications of collective robotics
Mankind has been intrigued by the possibility of ‘building’ artificial manlike creatures from the earliest times. For the ancient Greeks this possibility was provided by techné, the procedure that Aristotle conceived to create what nature finds impossible to achieve. Hence, under this view, techné sets itself up between nature and humanity as a creative mediation.
But ‘machinism’ became an object of fascination when vitalist conceptions assimilated machines to living beings, and a whole historical ‘bestiary’ of things related to machines stretches back centuries.
In the Hebrew tradition, the well-known Golem is an artificially created being made out of mud. If we move to Athens, the other ‘pillar’ of civilization as we know it, we find the maidens made out of gold, mechanical helpers built by Hephaistos, the Greek god of metalsmiths, as described by Homer.
This was the path taken by Norbert Wiener as he opened up the cybernetic perspective, viewed as “the unified study of organisms and machines” (Wiener, 1948). One line of development linked to this approach gave rise to the familiar ‘classical’ human-like robot, inspired by the von Neumannian self-replicating automata and based on the top-down attitude that was typical of the “Good old-fashioned Artificial Intelligence” (Von Neumann, 1966, Harvey, 2003).
A much more interesting trend – also stemming from the seminal work of Wiener but intended to “take the human factor out of the loop” – emerged in the late 1940s with William Grey Walter, who built turtle-like robots that exhibit “complex social behavior” in responding to each other’s movements and to their environment (Dorf, 1990). This was the starting point for a new behavior-based robotics, abolishing the need for cognition as mediation between perception and plans for action.
This line of research was pursued in the 1980s by Rodney Brooks, who began building six-legged “insect”-like robots at MIT.
It was otherwise with ‘traditional’ AI robotics, where Duchamp’s dictum “there is no solution because there is no problem” may find an appropriate field of application: in fact, there is no such thing as an omnipresent, instantaneous, disembodied, all-seeing eye, able to command the robots from outside, when they have to perform complex, nonrepetitive tasks.
This ‘generation’ of robots was based on Brooks’ novel “subsumption architecture”, which describes the agent as composed of functionally distinct control levels, conceived under a layered approach that allows the addition of new layers of control without the need to change anything in the already existing layers. The aforementioned control levels then act in the environment without supervision by a centralized control and action-planning center, as is instead the case in traditional AI-based robotics, where some kind of ‘bird’s eye view’ always prevailed. Also, no shared representation or low-bandwidth communication system is needed. The most important concept in Brooks’ reactive robots is “situatedness”, which means that the robot’s control mechanism refers directly to the parameters sensed in the world rather than trying to plan using an inner representation of them.
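This layered logic can be conveyed by a minimal Python sketch – purely illustrative, not Brooks’ actual code, with a hypothetical ‘distance’ sensor and threshold – showing how a higher layer overrides a lower one, and how a further layer could be appended without touching the existing ones:

```python
# A minimal sketch of a subsumption-style controller (illustrative only).
# Each layer maps sensor readings to a motor command; layers listed later
# have higher priority and override ('subsume') the ones below, and a new
# layer can be appended without changing the existing ones.

def wander_layer(sensors):
    # Level 0: default behavior, keep moving forward.
    return ("forward", 0.5)

def avoid_layer(sensors):
    # Level 1: turn away whenever an obstacle is sensed close by.
    if sensors["distance"] < 0.2:
        return ("turn", 1.0)
    return None  # no opinion: defer to lower layers

LAYERS = [wander_layer, avoid_layer]  # ordered from lowest to highest priority

def control_step(sensors):
    command = None
    for layer in LAYERS:
        output = layer(sensors)
        if output is not None:  # a higher layer overrides what came before
            command = output
    return command

print(control_step({"distance": 0.1}))  # -> ('turn', 1.0)
print(control_step({"distance": 0.9}))  # -> ('forward', 0.5)
```

The point of the design is incremental extension: adding, say, a light-seeking behavior amounts to appending one more function to the list of layers.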
In the human sciences, the concept of “situatedness” refers to the primacy of context, situation, embodied groundedness, and particular setting, favoring the bottom-up explanations that are termed “rhizomatic” in the lexicon of Deleuze and Guattari.
Linked to this concept is the “embodiment” feature, which corresponds to the fact that each “robot has a physical body and experiences the world directly through the influence of the world on that body” (Brooks, 1991). For a physically embodied robot in the real world, there are a number of key points to take into account, namely:
– Sensors deliver very uncertain values even in a stable world.
– The data delivered by sensors are not direct descriptions of the world, but rather measures of certain variables that are in fact indirect aspects of that world.
– Commands to actuators have very uncertain effects.
Basics of collective robotics
The idea of ‘collective robotics’ appeared in the 1990s from the convergence of the above-described robot architecture developed by Rodney Brooks with a variety of bio-inspired algorithms focused on new programming tools for solving distributed problems. These bio-inspired algorithms stemmed from the seminal work of Christopher Langton, who launched a new avenue of research in AI denoted Artificial Life (aLife), which “allows us to break our accidental limitations to carbon-based life to explore non-biological forms of life” (cf. Langton, 1987).
The well-known collective behavior of ants, bees and other eusocial insects provided the paradigm for the “swarm intelligence” approach of aLife (Bonabeau et al., 1999). This bottom-up approach is based on the assumption that systems composed of a group of simple agents can give rise to complex collective behavior, which depends only on the interaction between those agents and the environment.
Such an interaction may occur when the environment itself is the communication medium and some form of decentralized self-organized pattern ‘emerges’, without being planned by any ‘exterior’ agency.
It can be noted that this kind of behavior corresponds to the concept of ‘multitude’, put forward by Hardt & Negri (2000) in antagonism to the traditional idea of ‘people’. The latter shares with traditional AI some of its teleological characteristics, induced by a putative a priori ‘identity’.
The positive feedback control loop constitutes the basis for this kind of emergent morphogenesis, but randomness and fluctuations in individual ants’ behavior, far from being detrimental, may in fact greatly enhance the system’s ability to explore new behaviors and find new solutions.
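A toy simulation (a sketch, not a calibrated model of ant behavior) makes this point concrete: two equally good branches start with the same ‘pheromone’ level, every passage reinforces the branch chosen, and an initially random bias is amplified into a collective ‘decision’:

```python
import random

# Positive feedback seeded by random fluctuations: ants choose between two
# branches with probability proportional to the pheromone on each, and every
# passage deposits more pheromone on the branch taken.
pheromone = [1.0, 1.0]  # both branches start equally attractive
for ant in range(1000):
    total = pheromone[0] + pheromone[1]
    branch = 0 if random.random() < pheromone[0] / total else 1
    pheromone[branch] += 1.0  # reinforcement of the chosen branch

# Typically very asymmetric, e.g. [900.0, 102.0]; which branch 'wins' is
# itself random -- the fluctuation is precisely what the feedback amplifies.
print(pheromone)
```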
Collective robotics design implies an effort to keep each unit’s resources for computation, sensing and actuation as low as possible, while aiming at a group behavior that is as ‘smart’ as possible.
Industrial-military applications
The word ‘robot’, coined by Karel Čapek in his science fiction play R.U.R. (Rossum’s Universal Robots, 1920), is derived from the Czech “robota”, which refers to forced, tedious labor. Aiming at ‘substituting’ for this type of work done by humans in factories, the first industrial robots appeared in the 1960s as huge, mechanical, hydraulically powered “arms” that soon proliferated on the production lines of automobile manufacturers. This ‘industrial’ goal was then conveyed to collective robotics, where a group of robots is charged with performing a certain task. For a variety of goals, it was shown that, ceteris paribus, a multiple-robot system composed of ‘simple’ agents was more effective than a single ‘sophisticated’ unit (an increase in the total ‘utility’ occurs when collective robotics is applied). For instance, in foraging, a group must pick up objects scattered in the environment. In this application, which is evocative of waste clean-up, harvesting, and search and rescue, collective robotics is obviously the best solution.
Given the tight intermingling between warfare and industrial R&D in the USA, it is not surprising that collective robotic systems were used in the military-police realm for more than a decade before finding economic applications.
The image of the ‘killer robot’, once belonging solely to the world of science fiction, has now spread to real war scenarios as a “machine with predatory capabilities”. This meets military commanders’ dream of eliminating the “human element” from the battlefield.
The Defense Advanced Research Projects Agency of the USA developed tiny reconnaissance robots acting like scouts, which soldiers or commandos could carry on their backs and scatter in any ‘battlefield’ or space occupied by the ‘enemy’, in order to gather information from places where it is not possible or safe to go (Grabowsky et al., 2003).
Following the general trend towards miniaturization and an increase in the total number of robots, enormous networks of small agents made their appearance in January 2004, at a conference devoted to ‘smart dust’, a concept coined by Kristofer Pister to denote a myriad of nodes of a “Physical Internet”. Designed initially for detecting military devices, smart dust spread rapidly to other areas, such as glacier monitoring, fire detection and environmental data capture (Larousserie, 2004). The more power structures intend to put technology at their service, the more they end up spreading it around. As Sadie Plant puts it in a somewhat pompous way: “intelligence is no more on the side of power” (Plant, 1997).
Entertainment applications
Robotics has the distinction of being a branch of science that finds its source in a work of fiction (Asimov’s short story “Runaround”, published in 1942). Hence, the linkage to entertainment lies at the very roots of the discipline. New applications for entertainment robots, such as Sony’s dog robot Aibo, have spurred companies to create a menagerie of robot toys.
As far as collective robotics is concerned, the most stunning application is probably RoboCup Soccer, which has served as a test scenario for the field of multi-agent robotics (Kitano et al., 1997).
Mobile robots with navigational skills have been used in live performances in theme parks and museums (Werger, 1998) and appeared in 2002 on the theater scene, when the MIT Media Lab introduced an “Interactive Robot Theatre” at the SIGGRAPH Emerging Technologies Exhibit (Breazeal et al., 2003). Also, the German pop music band Kraftwerk has been using a group of robots on stage, in order to enhance the medium.
Artistic ‘applications’: conceptual background of the ArtSBot project
When referring to his own masterpiece “O livro do desassossego” (The Book of Disquiet), Fernando Pessoa stressed that its main distinctive feature was “magnificence and unusefulness” (Soares, 1998). In Pessoa’s terms, no kind of ‘utility’ can be ascribed to art.
This is the crucial point to be taken into account when art is produced by mechanical devices, whose goal-directed characteristics have obviously been of paramount importance in their former applications, both in the ‘industrial-military’ and entertainment domains. Hence, a profound détournement must be performed, redirecting such mechanical devices toward the non-purpose of “magnificence and unusefulness”.
It is obvious that any teleological setting, linked to any kind of ‘objective function’ (in the ‘optimization’ jargon), should be banned from the conceptual background behind any ‘artistic’ application of technology.
Simon Penny expresses this view referring to his experience as an ‘Artist in Data Space’: “My device is ‘anti-optimized’ in order to induce the maximum of ‘personality’.” (Penny, 1995).
The same applies, obviously, when collective robotics is thought of as an artistic medium. In fact, this novel application of robotics may be seen as a détournement of Vivien Vienne’s aphorism on architecture – “how to use space in order to waste time” (Vienne, 2000) – by adding “and engendering dreams”. In this context, where the output (art) is not accountable, ‘learning’ makes no sense, because this feature cannot be measured by any kind of ‘performance index’, in contrast with goal-oriented applications.
Also, bio-inspired algorithms with any flavor of ‘fitness’ in neo-Darwinian terms should be carefully avoided, on the grounds of Duchamp’s dictum that “art has no biological source”. Indeed, in contemporary societies where art assumes an ever-increasing importance, there are plenty of artifacts (such as the computer, the Net, etc.) that cannot be accounted for by biological evolution – even including the ‘cultural’ approach based on ‘memes’, elaborated in great detail and in a variety of directions by Susan Blackmore following the original concept introduced by Richard Dawkins as a replicator of cultural information analogous to the gene (Dawkins, 1976, Blackmore, 1999). Memes are a blueprint for the cultural practice of ‘sampling’, of the ‘universal copy and paste’ procedure that emerged from information/communication technologies (Plant, 1997). What makes a meme ‘catchy’ is not any form of ‘fitness’, but a fuzzy characteristic denoted by mètis in ancient Greek thought (a kind of wisdom combining sagacity and ‘flair’, attention and malleability, prevision and simulation, opportunity and practical skills).
The mind/body dualism (which has been floating over Western thought since Descartes) is to be overcome by the ‘horizontal’ synergetic combination of both components, discarding any type of hierarchy – in particular the Cartesian value system, which privileges the abstract and disembodied over the concrete and embodied.
The approach proposed here closely follows the interconnectedness of being and its formal embodiment as inseparable parts of autopoiesis, in Maturana & Varela’s sense. In the visual arts, a similar point is made by Sean Cubitt, when he claims that any contemporary artwork must construct its own local, not presume it. In Cubitt’s words: “the digital art must be material”. This is the paradox that drives all new approaches to the production of ‘concrete’ artworks by means of information technologies.
Along these lines, when conceiving his robots, Rodney Brooks argues that natural beings are products not only of their genes, but of their interaction with the environment, which can be simulated in robots by a stimulus-response system. Hence, elaborate behavior in robots is elicited by wiring sensors directly to actuators, under the above-mentioned approach based on “situatedness” and “embodiment”.
From Langton’s aLife paradigm, the point to be stressed here is ‘life as it could be’, not as it is: no swarm of ‘social insects’ is needed to inspire artistically oriented robots, since the artist, by contrast with insects, is absolutely work-averse.
The building capacity of termites and the cooperative foraging strategies of ants, such as trail recruitment and corpse-gathering, are features that contradict artistic non-purposes.
The idea of process-based or generative art may be grounded on some important aLife rules, under the condition that those rules allow for the autonomous operation of an artifact employed as an artistic device. Indeed, when creators do want to lose control over their creations, what counts is a local interaction of components giving rise to a global outcome that is not explicitly coded in the components, i.e., a whole that is “greater than the sum of its parts”. This is the case that may be made when applying aLife to purposeless unmanned art, where morphic resonance is the key point, adjusted to Derrida’s attitude vis-à-vis the basic element of inscription, the graphic trace, or grapheme, as he terms it.
Morphic resonance is a concept put forward by Rupert Sheldrake, aiming to explain why, once an artificial life form comes into being in a given configuration, it is more likely that the same configuration will occur in the future (Sheldrake, 1981).
This element imposes its own constraints on the production of meaning, which is not molded to the demands of either pre-given objectives or already constituted meaning (Derrida, 1967). In urban theory terms, a city may be seen as a set of “traces”, assembled in space by successive “gestures” (cf. Lefebvre, 1968).
To the best of our knowledge, ArtSBot is the first experiment where collective robotics is applied in the artistic realm.
The project produces artworks by means of the interaction, through the environment, of a group of robots, each carrying two marking pens as a painting device. The algorithm, uploaded to each robot’s microcontroller through a PC serial interface, basically consists of a positive feedback mechanism that leads a robot to reinforce the colors left on the canvas by a previous passage of another robot.
The process is initialized by a random procedure and is stopped by the human feeling that the artwork is ‘complete’.
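The kind of control loop just described can be sketched as follows – an illustrative simplification, not the actual ArtSBot firmware: the grid canvas, the spontaneous-marking probability and the single-cell sensing are all hypothetical:

```python
import random

class PainterRobot:
    # Each robot wanders at random; when its color sensor detects a trace
    # left by a previous passage (of itself or of another robot), it lowers
    # a pen and reinforces that trace -- the positive feedback described above.
    def __init__(self, canvas, x, y):
        self.canvas, self.x, self.y = canvas, x, y

    def step(self):
        # haphazard, derive-like deambulation over the canvas
        self.x = (self.x + random.choice([-1, 0, 1])) % len(self.canvas)
        self.y = (self.y + random.choice([-1, 0, 1])) % len(self.canvas[0])
        if self.canvas[self.x][self.y] > 0:
            self.canvas[self.x][self.y] += 1  # trace sensed: reinforce it
        elif random.random() < 0.05:
            self.canvas[self.x][self.y] = 1   # occasional spontaneous mark

canvas = [[0] * 40 for _ in range(40)]        # blank canvas
robots = [PainterRobot(canvas, random.randrange(40), random.randrange(40))
          for _ in range(6)]                  # random initialization
for _ in range(2000):                         # in the real experiment the loop
    for robot in robots:                      # ends when a human feels the
        robot.step()                          # artwork is 'complete'
```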
In fact, the way the robots evolve in the initial steps of their routes irresistibly evokes the surrealists’ dérive, a haphazard deambulation in a city. Then, as interaction increases, the dérive comes to terms with its détournement, as carried out by the Situationist International in the 1950s.
Situationists criticized surrealist deambulation on the grounds of the exaggerated importance assigned to the unconscious and to chance. In Debord’s words, the dérive “in its infancy would be partly dependent upon chance and would have to accommodate a degree of letting go”.
The situationists’ dérive is a collective art form, an aesthetic operation that has the power to annul the individual components of the artwork, since it is performed in a group. By the same token, the urban space is an “objective passional terrain” rather than merely subjective-unconscious.
The dérive was, in fact, an action hard to fit into the art system, as it consisted in constructing the modes of a “situation” that leaves no sign. But it agrees perfectly with the Dada logic of anti-art.
Indeed, the positive feedback mechanism may be seen as the driving force for revisiting certain spots of the city which were considered particularly appealing in former passages. This corresponds to a psychogeographical study of urban space, with its currents, fixed points and vortexes, which bring to mind the dynamics of chaotic systems.
The identification of emotionally interesting places within a space will (over time) reinforce the distinction that puts them in a privileged situation for the group participating in the dérive experiment. This leads to an ‘emogram’, a map of emotive impressions, which is the analogue, in urban situationist terms, of the final artwork performed by the group of robots.
But, at any instant, there is always room for the unexpected, since the random factor coexists with the above-mentioned cumulative configuration along the entire process.
In contemporary terms, the situationists’ dérive is pursued by several groups. For instance, the Italian Stalker group considers walking a critical tool, and a form of emergence of a certain kind of art that develops an architecture of “situated objects”. This group’s attitude is inspired by the Italian expression “andare a zonzo”, which means to waste time wandering aimlessly. The fractal character of contemporary cities is recognized by Francesco Careri, when experiencing a series of “interruptions and reprises, fragments of the constructed city and unbuilt zones that alternate in a continuous passage from full to empty and back” (Careri, 2003).
Another way of looking at the ArtSBot experiment is inspired by the surrealist cadavre exquis.
The first experiment of this type was performed, in literary terms, in 1925 by Duhamel, Prévert and Tanguy. The first sentence that emerged was “Le cadavre exquis boira le vin nouveau” (“The exquisite corpse will drink the new wine”).
This ‘game’ involved a group of persons who contributed to a collective artwork of which, until the final outcome, they only knew their individual part. When one of the players finishes his ‘work’, the sheet of paper upon which he has drawn is folded in order to hide his contribution, except for a small part, which is the starting point for the next player.
Similarly, in our experiment, each robot does not have the ‘general picture’; it ‘must’ rely on the clue left by a previous passage of another robot.
Definitively giving up the anthropocentric prejudice that underlies the creation of human-like robots, the points retained here from the aLife attitude are stigmergy (in Grassé’s terms), decentralization, autonomy, self-organization, emergence and interaction between agents via the environment.
Since Foucault’s announcement of the “death of the subject” in the 1970s, a decentred self has emerged, driven by the digital revolution. One has to make one’s own history in terms of how it is traversed by the question of the relationship between structures of rationality and the procedures of subjugation linked to them. Hence, it becomes more and more difficult for each person to remain absolutely in agreement with oneself (identity is defined by trajectories). Also, now in Derrida’s terms, this deconstructed subject is a person with no fixed identity, with no fixed principles, and without any base for ethics. But Duchamp had already made the same case, when he answered Cabanne’s question – “What do you believe in?” – with an abrupt “Nothing, of course”. And he added that he did not believe in the word ‘being’, since he viewed it as “a human invention”. In regard to the issue of his own identity drift, he said around 1963 that the notion of anti-art annoyed him because “whether you are anti- or for-, it’s two sides of the same thing” (Cameron, 1992). Moreover, he had momentarily ‘changed his identity’ in 1921, when – as a pioneer of what is nowadays routinely performed on the Net – he asked Man Ray to photograph him as a woman named “Rrose Sélavy”. Also, his famous “Fountain” was sent to the 1917 exhibition of the American Society of Independent Artists by someone called R. Mutt. In fact, Duchamp wrote to his sister: “One of my female friends, under a masculine pseudonym, Richard Mutt, sent in a porcelain urinal as a sculpture” (Duve, 1992).
Also, the case of ‘imitation’ is to be addressed here, leading to complexity via the ‘explosive’ accumulation and recombination of simple unitary actions. The importance of ‘imitation’ in human societies was raised by the often-neglected French sociologist Gabriel Tarde. In Tarde’s approach, what is meant by ‘culture’ stems from the reinforcement of a given stimulus, caused by the imitation of a certain behavior or idea. Tarde’s approach, transposed in contemporary terms as positive feedback, is at the roots of Dawkins’ ‘meme’ concept.
Some kind of positive feedback, coupled with a hint of randomness, is the driving force behind the attitude of the group of painter robots, which produces novelty through unexpected changes in the spatial arrangement of traces on the canvas.
The roots of randomness in art may be found in the technique behind a pictorial practice that appeared in ‘minor’ circles of 16th-century Italian Mannerism – the “pittura a capriccio”. This technique consisted of applying to the canvas, without a referent, quick and successive ink spots, “picked up directly from the artist’s mind”. The whole ‘automatic’ surrealist approach stems from this basic attitude, sometimes adding a light psychoanalytic flavour. Wols used to say that when he began to paint, he did not know what he was going to paint. This turns out to be a veritable methodology, as noted by Fréchuret, 2001, when he writes: “Ne pas savoir ce que le pinceau, la brosse ou le racloir va laisser comme traces devient très précisément l’enjeu d’une technique qui, à l’aveuglette, se constitue peu à peu” (“Not knowing what traces the paintbrush, the brush or the scraper will leave becomes precisely the stake of a technique that, blindly, constitutes itself little by little”).
Since no predefined plan commands the global behavior of the group of robots, this experiment can be interpreted in the light of Lefebvre’s idea that “Topos is prior to logos” (Lefebvre, 1968).
Aesthetic creation is defined here as a set of transformative rules that calls for a vital examination of all stages of the aesthetic production/consumption process, instead of overrating the output (as used to happen when art was considered a ‘matter of taste’).
The case to be made here has nothing to do with ‘evolutionary robotics’. Whilst in that theoretical field the aim is to address hard questions about how life may occur, in the practical realm of the ArtSBot project the aim is to experience ephemeral situations leading to novelty and surprise, where art may occur. Here, no evolutionary perspective is needed and no rank order is admissible.
In contemporary societies, where the artificial prevails, there is no point in replicating the ancient drive of our ancestors to locate food, mate and protect offspring.
Using the concept of ‘network semiotics’, Joel Slayton has demonstrated that complex systems exhibiting an autocatalytic pattern do not show any adaptive tendency towards optimization of resources or ‘efficiency’. Rather, there is a continuous shifting among possible phase alternatives. The network semiotic moves between different states of equilibrium in response to the multitude of uncertain objectives represented as expressions of network applications (Slayton, 1998). Hence, “the purpose for action”, placed by Max Weber at the core of “power”, is being dismissed (or at least dispersed) in post-Fordist societies where complexity prevails (i.e., when we move from the Gutenberg galaxy to the Internet galaxy, in Castells’ terms).
In particular, concerning the old predator/prey problem, it should be noted that if life forms usually act in a predatory manner, conversely, “only living forms would have need not to” (Shanken, 2001). Also, Foucault’s critique of what he names “biopolitics” – a mode of organizing and regulating a population considered as a biological species, in the sense of normalizing large-scale groupings of “docile bodies” – should be taken into account when analogies based on “life as it is” are performed.
Cultural hybridization of science and art
Historical perspective
Despite their apparent opposition, science and art have always been closely intermingled, since they draw their common groundwork from the dominant cultural context of each epoch.
This opposition was constructed under the pressure of the Hellenic ideological divide, whose repercussions go far beyond its remote foundation. In fact, the ‘Greek’ line of thought put science in an elevated ‘philosophical’ domain, while art was relegated to the ‘worthless’ realm of techné.
In the 16th century Renaissance, both activities reached a peak of cross-fertilization, under the influence of Leonardo’s striking polyvalence.
In particular, the multi-talented painter-engineer drew plans for a mechanical man. Moreover, Leonardo da Vinci understood the interest of Aristotle’s camera obscura for artistic purposes, projecting ‘Nature’ onto an artificial plane, which is the basis for representation.
Along the same lines, Francesco Algarotti claimed, in the 18th century, that painters should make use of the camera obscura as scientists apply the microscope or the telescope to grasp “the rules of Nature”. And even when the ‘Enlightenment’ brought the importance of science to its ideological ‘climax’, a certain kind of convergence with art is noticeable, in contrast with commonsensical beliefs. During this period of intense cultural effervescence, scientists searched matter for the signs of ‘the Works of God’, whilst artists, influenced by the sentiments of Romanticism, saw their role as ‘Divine Messengers’. For these apparently opposed ideologies – positivism vs. romanticism – the goal was the same; only the modus faciendi – empirical vs. phenomenological – was different.
When Paul Cézanne, considered by the mainstream as “the father of modern art”, uttered his celebrated dictum that forms in painting should be reduced to basic geometrical elements, he initiated, through his artistic praxis, an important development that led to Cubism. This was designated in Penny, 1995, as the ‘industrialization of vision’.
It is worth noting that this approach ultimately found realization in 3D computer graphics almost a century later. Also, as remarked by Plant, 1997, artificial life pursues the very same goals that Paul Cézanne proposed when he said: “Art is harmony parallel to nature”. In this sense, aLife researchers are artists in Cézanne’s terms, since their goal is not to represent the world, but to ‘render visible’.
Going further in reducing the basic elements of painting to points, the ‘pointillist’ attitude disclosed some features of Wolfgang Köhler’s Gestalt psychology, which came to light in 1929 (Köhler, 2000) following Wertheimer’s work. In fact, a pointillist tableau makes no sense unless it is globally conceived (and perceived).
There is also an analogy with this issue in music. To grasp the meaning of a choral piece, it is not enough to listen to the individual singers one by one: the performers should be listened to as a ‘whole’, given that they modulate their voices and timing in response to one another. In general terms, the neuronal activity of ‘perception’ always implies a combination of sensory systems to form a gestalt (Freeman, 1991).
When photographic reproduction appeared by the middle of the 19th century, a complete rupture with naturalism naturally occurred: there is no point in imitating nature, since the same role is performed by photography in a much more reliable way. This obviously opened the way to abstractionism.
The avant-garde movements that prevailed in the artistic scene until the middle of the 20th century seem in general to be characterized by a deeply ambivalent relationship with science, which became increasingly implicated in wars and capitalistic-bureaucratic ideologies.
Hence, the liberating ‘flavor’ of science is almost eclipsed by the repulsion caused by its dreadful consequences. But, paradoxical as it may seem, Dada, the most radical libertarian artistic anti-war movement, which bitterly accused ‘science’ of being behind the 1914-18 genocide, included among its most influential ‘members’ Marcel Duchamp, an engineer fascinated by physics, non-Euclidean geometries and chess, and his friend Francis Picabia, who praised velocity and the automobile as signs of ‘modern times’, characterized first and foremost by the scientific discoveries of the Industrial Revolution. Duchamp’s ready-mades included, for instance, a bicycle wheel, which can be viewed as a scientific commentary on movement and stability. Also, Picabia’s machinist style is apparent in many of his works.
There is nothing inherent in any creation that makes it ‘art’. Duchamp’s ready-mades are mass-produced objects, selected by the artist and elevated to the realm of ‘art’ by virtue of having been chosen (in Duchamp’s words, they are “a kind of rendez-vous”). The producer of a ready-made underscores neither the object itself nor its manufacturing process, emphasising rather the context in which the object is situated. Whilst a ‘classic artwork’ is still ‘art’ even when withdrawn from the ‘museum’, a ready-made by itself is no more than trash – its ‘merit’ stems from the context (the place where it is put on display).
The surrealists (in some instances) brought intensive randomness to the realm of art. This feature can be considered a constructive contribution, when no psychoanalytic flavor is added. In fact, putting the sewing machine and the umbrella on the operating table works because the viewer recognizes the juxtaposition of the improbable. This can be linked to the contingent character of contemporary scientific models, namely in biology, where chance plays a crucial role, if evolution is seen as a consequence of improbable mutations (in the line of Jacob, Monod, Gould).
The cultural ‘heirs’ of the Dada radical attitude in the artistic revolutionary scene of the 1950s and 1960s – Lettrists and Situationists – maintained the aforementioned ambivalent attitude towards science. In contrast with Debord’s criticism of the “recuperative” capabilities of capitalism in the merchandise realm, another line of thought – represented by Asger Jorn, Giuseppe Pinot-Gallizio and, above all, by Constant Nieuwenhuys – was enthusiastic about the egalitarian content of a wide range of scientific disciplines, mainly mechanics, automation and cybernetics.
In parallel with pointillism, the Lettrists made a case for the role of the ‘letter’ as the basic element of writing. If early Lettrist activity was centred on sound poetry, the emphasis soon shifted to visual art production, maintaining however the ‘letter’ as the basic subject of aesthetic contemplation. One important contribution to contemporary thought made by Isou, an early Lettrist, was the central role that he attributed (since 1948!) to youth culture. Also, Debord’s initial writings of the Lettrist phase emphasise the “removal of substance” – another important topic in contemporary thought – by “extracting the letter from the voice and setting it free”. The radical theorists of the Situationist International (IS), relying upon the creation of emotionally appealing ‘situations’, pursued the Lettrist theoretical effort, putting forward by the middle of the 20th century some important concepts, like dérive, psychogeography and détournement. This attitude was far ahead of its time, except for brief but intense revolutionary flashes. Unsurprisingly, some of the concepts developed decades ago by the situationists are now at the core of a persistent line of thought underlying an egalitarian view of the information/communication technologies. In fact, when the ‘new media’ are scrutinized by the social sciences, a number of collaborative attitudes are spotted, for example among the so-called ‘hackers’, as opposed to the hierarchical structure of the old media. This collaborative strategy, characterized by self-organization, solidarity and gift, finds some of its roots in the IS philosophy. Moreover, it can be noted that what Debord called détournement in 1956 – “any elements, no matter where they are taken from, can serve in making new combinations” – is, on a greater scale, the system by which most human technology develops. Innovations are generally very minor discoveries, resulting from a synthesis of the already known.
In this view, science was prone to liberate man from forced work, ‘leaving space’ for the development of prodigality, collaborative strategies, solidarity and, above all, for the emergence of a ‘gift culture’ based on ‘energy wasting’, as an alternative to the ‘energy conservation’ that was characteristic of the Industrial Revolution. In Georges Bataille’s approach, the surplus that contemporary society has at its disposal is to be “applied in games and spectacles that derive therefrom, or personal luxury” (Bataille, 1967).
This cultural shift was announced by the influential Dutch scholar Johan Huizinga in his 1938 classic “Homo Ludens”, which is an obvious source in situationist literature, as is Marcel Mauss’ anthropological study of the “potlatch” among Northwest American Indians, which corresponds to a redistribution of goods in fierce competitions of generosity. In fact, the situationists called for a society of pleasure, instead of the stoicism and sacrifice of Stalinism or the peer pressure of consumerism. In this regard, the situationists pioneered contemporary awareness of the importance of leisure, instead of labour, as a revolutionary weapon. Therefore, both Guy Debord and Asger Jorn acutely criticized Le Corbusier’s “machine à habiter” and Bauhaus functionalism, condemning their underlying monstrous ideology of a ‘homogeneous’ and ‘massive’ urbanism that opens the doors to procedures of containment, exclusion, surveillance and individual control. These procedures tend to destroy the ‘living’ parts of the city, those indigenous working-class zones disclosed by psychogeographic studies. As an example of the outcome of such studies, they exhibited the map of Paris obtained through the “clustering” of the city. Each cluster corresponds to an “unité d’ambiance”, a locus where the ‘soft’ mutable elements of the city scene coexist with the ‘hard’ architectural structures on which the former are grounded. But, in stark contrast with the idea of a full refusal of any type of art proposed by Debord, Asger Jorn believed in the collective and noncompetitive production of art, viewed obviously not as a high-culture product of capitalistic societies, but as a process giving rise to ‘cultural artifacts’ that are to disappear from museums only to reappear everywhere.
Another development emanating from surrealist automatism can be found in Pollock’s drip-painting technique. Applied onto enormous sheets of canvas spread on the floor, it caught the attention of mathematicians, who analysed the experimental patterns it produced.
The conclusion reached was that those patterns followed Mandelbrot’s fractal models, i.e. shapes found in Nature that repeat themselves at different scales within the same object (cf. Taylor et al., 1999). In the hermeneutics of Pollock’s work, complexity may also be invoked when vortices of concentration of ink spots – roughly, ‘clusters’ – arise. These may be interpreted as the effect of strange attractors, as considered in nonlinear dynamical systems theory.
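The fractal claim can be made concrete by a box-counting estimate of the kind used in such analyses (a minimal sketch only; the procedure actually applied by Taylor et al. to digitized canvases was considerably more careful):

```python
import math

def box_count(points, size):
    # number of grid boxes of the given size that contain at least one point
    boxes = {(int(x // size), int(y // size)) for x, y in points}
    return len(boxes)

def fractal_dimension(points, sizes=(1, 2, 4, 8, 16)):
    # least-squares slope of log(count) against log(1/size)
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(sizes)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    return (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((x - mean_x) ** 2 for x in xs))

# A straight line has dimension ~1, a filled plane ~2; drip-like patterns
# fall somewhere in between.
line = [(t, t) for t in range(256)]
print(round(fractal_dimension(line), 2))  # -> 1.0
```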
The rapid diffusion of systems theory in the 1960s gave rise to a new ‘aesthetical’ attitude, which “puts cybernetics as the theoretical ground for contemporary art” (Penny, 1999). On the other hand, science is now completely immersed in the framework of cultural production, in parallel with its rich tradition of detailed micro-studies directed at experiments, instruments and processes.
Impact of the digital revolution on creativity
Nowadays, the Zeitgeist is dominated by the digital revolution. Simon Penny summarized this state of affairs in a witty contemporary truism: “The list of things to do before locking up the house includes putting out the cat, turning off the oven and backing up the hard drive” (Penny, 1995). The same author makes the point that, surprisingly, such a ubiquitous device as the computer was developed without any influence from the most influential thinkers of the 20th century!
This ‘strange’ fact can be grasped if one reflects on the impossibility, for a classical Western philosophy based on linear ‘progress’, of accounting for any kind of bifurcation.
The Net, which may be considered the most important technological breakthrough since the Gutenberg Revolution, was not ‘predicted’ by any prior ‘science fiction’ authors, who were bound by linear extrapolations of the achievements of their Zeitgeist.
New forms of creativity, both in science and art, are conveyed by the ‘dematerialization’ brought to light by an intensive use of information/communication technologies, which have an immense amplifying power.
The term ‘dematerialization’ was coined by the American critics John Chandler and Lucy Lippard to describe an important characteristic of the artistic movements prevailing in the 1960s. This term, referring loosely to the physical disintegration of the traditional ‘matter’ of art and representing an aesthetic attack on the primacy of painting and sculpture, symbolizes a radical gesture directed against an increasingly overbearing art market, not against ‘material’ in itself (Chandler & Lippard, 1968).
Hence, instead of pursuing their former ‘reductionist’ interpretation of Nature (based on some ‘solvable’ systems of differential equations), scientists are increasingly involved in, and concerned with, complex epistemological questions stemming from the new disciplines of the artificial – namely the Turing Test and John Searle’s Chinese Room Argument – addressing the problem of establishing criteria for judging whether “machines can think”. Obviously, along these lines, limits in space and time are abolished.
The Turing test for intelligent machines consists of putting a human behind one curtain and a computer behind another; if, after five minutes, an interrogator has no better than a 50% chance of distinguishing human from computer, the computer is intelligent. Searle’s Chinese Room consists of considering a computer program for understanding (written) Chinese; carrying only a book containing the instructions of such a program, John Searle goes inside a closed room which has an input and an output slot; whenever a squiggle comes in, Searle looks in the book for what to do and provides an output; even though this room has exactly (over time!) the same (written) language behavior as a native Chinese speaker, there is no ‘understanding’ going on there.
It is worth noting, however, that most dematerialization supporters – who are, in general, ‘genuine’ materialists averse to the rhetoric of ‘transcendence via the Net’ – do not intend to get out of matter; they want to get out of the confining organization of matter that is ideologically shaped into things and organisms by the ancient conservative and traditionalist powers. Here, Derrida’s concern with the materiality of the signifier – challenging the usual disembodied Platonist mathematics – is met in its implications for constructing meaning as an endless process of textual difference.
Nowadays, it can be stated that the arrival of the Net signalled the recommencement of an emancipatory project based on creativity, communication and a ‘gift economy’. A symptom of this new trend, based on the fact that ‘information’ escapes from any zero-sum game, comes from the failure to use the Internet exclusively for commercial ends.
Also, the appearance and expansion of public domain and open source software like Linux is another sign of the possibility of escaping from the empire of the commodity that was supposed to utterly rule human behavior.
Sean Cubitt, in an interview given in January 2003, considers Linux one of the finest artworks of the 20th century. Also, he makes the case that, whenever one is online, one is functioning at the margin (Trace Online Writing Center).
As the process of digital convergence accelerates, divisions between different professions are being broken down, and C. P. Snow’s divide between the “two cultures” is getting somewhat thinner.
The conceptual fluidity that allows nomad ideas to migrate from one field to another is enormously facilitated by the common zeros-and-ones basis on which everything relies. In fact, the more abstract the concept (and electronic media are the apex of abstraction), the greater the number of other concepts that it potentially evokes.
This feature was extensively exploited by Jean-François Lyotard, the celebrated author of the now-classic 1979 book “The Postmodern Condition”, who put into practice his idea of ‘drift’ from one direction to another (and from one subject to another) as curator of the exhibition “Les Immatériaux”, held in 1985 at the Centre Beaubourg in Paris. In this exhibition, science was mixed with art in a variety of ways, within a common underlying digital environment.
In contemporary art, the “code” is the structuring system of the artwork, putting across the paradoxical meaning of a text, which is also a (virtual) machine. The digital technologies give rise to a malleable aesthetics, based on the principle that anything that can be made can be remade.
On the other hand, the sciences of complexity, conveying their concepts of bifurcation, deterministic chaos and strange attractors, brought a new insight to the analysis of creativity, inasmuch as it is no longer plausible to see invention stemming from any essentialist kind of ‘genius’.
In Herbert Simon’s words: “Chaos derives from deterministic dynamic systems that, if their initial conditions are disturbed, may alter their paths radically. Although they are deterministic, their detailed behavior over time is unpredictable, for small perturbations cause large changes in path.” (Simon, 1997). Deleuze & Guattari, 1980, introduced the concept of the “machinic phylum” to refer to the overall set of self-organizing processes in the universe. This notion allows connections to be established between information/communication technologies and autocatalytic processes. In fact, the singularities at the onset of those processes are critical points in the flow of matter and energy that can be represented by the same models that are at the core of “abstract machines”.
Contemporary views on this topic privilege ‘inversion’ over ‘inspiration’.
This means that any creative breakthrough is not ‘random’, but is based on the scrutiny of prior canons and codes, followed by a bifurcation driven by a strange attractor, whose ‘basin of attraction’ prevails over the former mainstream tendency.
The creativity issue was also approached at MIT on the grounds of complexity science (Slayton, 1998). The experiments conducted since the 1980s in the context of emergent conversation (a sort of computer-based ‘brainstorming’) led to the (provisional) conclusion that, since there is no specific objective or purpose guiding the conversational system, a continuous shifting among possible phase alternatives occurs.
This was interpreted in terms of autocatalytic patterns, which are signified only by the computational structure of data. Also, it was clearly remarked that the meshwork grows in “unplanned” directions, evidencing the tendency of subsystems to adapt towards less-than-optimum goals. In this context, since there is no such thing as an ‘objective function’ (in the optimization jargon), it is virtually impossible to determine non-subjectively the relative merits of the diverse combinations.
Moreover, the digital revolution bridged the sharp divide that used to exist between two distinct types of knowledge: ‘knowledge-how’ and ‘knowledge-whether’ – the latter being the ability that the physicist Mike Greenhough has identified as the recognition of something being ‘just right’.
This newcomer concept in the ‘hard’ disciplines, where quantification used to be the nec plus ultra (Rutherford believed that “qualitative is nothing but poor quantitative”), goes along with Bachelard’s ‘approximate knowledge’, which is embedding all branches of science, from Heisenberg to Zadeh. In this context, Keynes summarized the new attitude in science through the acute statement: “It’s better to be roughly right than precisely wrong”.
In fact, the knowledge-whether model had been clearly formulated in 1950 by E. H. Gombrich in his classic “The Story of Art”. The example given by Gombrich to explain the ‘sense of rightness’ refers to a non-expert arranging a bunch of flowers. At a certain point, which would correspond loosely to the ‘optimum’ (in Operations Research jargon), the process stops because ‘it is felt’ that another step (any act of adding, removing, substituting, merging or relocating a flower) would jeopardize the outcome. Such a point of ‘homogenized diversity’ can be interpreted in connectionist terms as the solution of a ‘constraint satisfaction’ problem, where a stabilizing factor induces the satisfaction of a maximum number of constraints, via an extended interaction with our environment and all the cultural and social values it embodies (Page, 2000). Therefore, artists and scientists are nowadays indistinguishable in their attempts to measure, translate, transpose and generally deal with the shock of the ‘new’.
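Gombrich’s flower arranger can be parodied in these connectionist terms by a toy constraint-satisfaction sketch (purely illustrative; the pairwise ‘preferences’ are invented): the process stops exactly when no single act of adding or removing a flower satisfies more constraints than the current arrangement.

```python
import random

random.seed(1)
N = 8
# prefer[i][j] = +1 if flowers i and j 'go well together', -1 otherwise
prefer = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(N)]

def score(arrangement):
    # satisfied minus violated constraints among the chosen flowers
    return sum(prefer[i][j]
               for i in range(N) for j in range(i + 1, N)
               if arrangement[i] and arrangement[j])

arrangement = [random.choice([0, 1]) for _ in range(N)]
improved = True
while improved:  # greedy 'sense of rightness' loop
    improved = False
    for i in range(N):
        candidate = arrangement[:]
        candidate[i] = 1 - candidate[i]  # add or remove one flower
        if score(candidate) > score(arrangement):
            arrangement, improved = candidate, True

# stops at the point where any further step 'would jeopardize the outcome'
print(arrangement, score(arrangement))
```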
‘Robotic Art’: critique of interactivity
The pioneering work of Simon Penny in ‘robotic art’ has been developed since 1989, when the current Director of ACE (Arts Computing Engineering Graduate Program) of the University of California produced Petit Mal, an autonomous robot artwork that “reacted to people”. Even though it avoided anthropomorphism, zoomorphism and biomorphism, the goal of “interaction with humans” made this experience seriously vulnerable, since it is virtually impracticable, at the current stage of technology, to reduce human behavior to algorithmic functions and to represent the response of the system back to the user. Simon Penny’s attitude, however, was important in his critical assessment of the rhetoric surrounding VR, on the grounds of its ultimately antimaterialistic and anti-embodying qualities, metaphorised as dreams of transcendence and delivery from the prison of the flesh (Penny et al., 2000).
On the other hand, the line of thought in which Penny is situated stresses the importance of ‘interaction with the public’ through the promotion of ‘transactional happenings’, where some collaboration of the receiver supposedly occurs. But, despite the myriad of ‘happenings’ of the last 40 years, the disturbing impression arises that nothing is happening in interactive art – an era of hypothermia of artistic interaction with the ‘public’ seems to be overrunning the cultural arena of the last decades.
Ever since the emergence of multimedia performance art in New York in the 1960s, the situationists denounced its ‘spectacular’ character, which leaves the ‘spectacle’ unchanged, without leading to any kind of transformation of consciousness among its ‘participants’. Volume 8 (1963) of “Internationale Situationniste” puts it thus: “happenings are a hash produced by throwing together all the artists’ leftovers”.
This impression may derive from the novel fact that the obligatory link between art and dissent vanished with the 20th-century avant-garde movements, in which such a link was taken for granted.
Permeability between contemporary art, politics and philosophy
Art in the context of massive technocapitalism
Walter Benjamin, in his decisive 1936 essay “The Work of Art in the Age of Mechanical Reproduction” (Benjamin, 1978), anticipates a constellation of characteristics that apply to contemporary ‘works of art’, altering the classic Western aesthetic conception established during the Renaissance:
- The demise of the halo of originality
- A coexistence of many copies of the same image
- An undermining of the concept of the artist as a ‘genius’
- New challenges stimulated through the independence of the artwork from originality
- Democratization of the art marketplace
- Contemporary possibilities for new social meanings of art
Following Benjamin’s central argument, referring to the erosion of the uniqueness and singularity of the work of art, questions of authenticity and origin – which fundamentally grounded the aesthetic experience of modernist contemplation – became completely displaced.
In fact, Dada announced Barthes’s “death of the author” when a group of individuals – Hausmann, Grosz, Baader, Herzfeld – ‘signed’ their artworks under the multiple name of “Christ & Co. Ltd.”. This concept of multiple authorship under a common ‘label’ reappeared in 1994, in Italy, where a set of writers founded a kind of ‘literary factory’ under the collective name of ‘Luther Blissett’, aimed at “building narratives”. Along the same lines, a ‘private firm’ providing ‘narrative services’ in a variety of media was launched in Bologna in 1999.
This ‘firm’, labelled ‘Wu Ming’ (a Chinese ‘logo’ that means “no name”), is a ‘lab of literary design’ that puts the emphasis on ‘brain-work’, the most important post-Fordist ‘production factor’. Furthermore, Wu Ming does not avoid ‘spectacular’ publicity, as was the case with previous avant-garde groups, namely the IS, which considered any concrete democratic acquis or any popular culture achievement as prone to be “recuperated” by capitalism, strengthening by this means the ‘spectacle society’, as coined by Debord.
In regard to another IS concern – the end of copyright – contemporary dissent movements linked to the fight against neoliberal globalization agree completely with the situationist attitude, enlarging it to every existing communication medium, like the Internet. But the practice of sampling the work of others is hardly new. For centuries, artists have plagiarised their predecessors and contemporaries, since all collective endeavors involve a constant process of re-processing (Lautréamont was a notorious supporter of plagiarism). If intellectual property had existed in ancient times – in fact, it stems from Enlightenment individualism – humanity would not be acquainted with the Mahabharata, Sun Tzu, The Odyssey or The Arabian Nights. The information/communication technologies have made the ‘sampling’ practice much easier and more aesthetically pleasing, suppressing, through peer-to-peer networks, the distinction between ‘original’ and ‘copy’.
This has had very deep consequences for contemporary art, since aesthetic experience can no longer be isolated from the social conditions that have made its production, dissemination and reception possible.
The aforementioned view of Walter Benjamin, as well as earlier writings of William Morris, may be seen as an attempt to strategize with respect to the phenomenon of industrial mass production, driven by the ‘antiquated’ techno-capitalism of the 1930s. Aiming to fight this inevitable feature of contemporary societies, that is to say, trying to recover the ‘unique’ characteristics of art “before the Age of Mechanical Reproduction”, some artists took a position against the novel characteristics of contemporary art, assuming the ‘pessimistic’ attitude of Adorno on the homo mechanicus. In regard to the linkage of art and society, the situationist critique may be seen as “the most radical gesture” (Plant, 1992) against the invasion of everyday life (Lefebvre, 1968) by Fordist (mis)conceptions, when industrialism made its appearance in the ‘leisure’ realm. In addition, public skepticism towards the military-industrial complex after May 1968, the Vietnam War, and mounting ecological concerns all contributed to problematizing the artistic use of technology within the context of modern techno-capitalism.
The Cybernetic Serendipity exhibition, held at the ICA, London, in August-October 1968 – which may be viewed as the event that inaugurated the protohistory of computer-based art – also marks a turning point in the linkage of aesthetics to political contest. It is a curious coincidence that the beginning of ‘computer art’ corresponds to the end of the art/politics linkage.
From this turning point onwards, the contemporary individual is no longer tied to any kind of transcendence (neither theological nor political).
Hence, it is very difficult to invent a language that could express, through art, the ethos of this period, while remaining distinct from it.
Moreover, the temporal acceleration of our times abolishes the gap between any ‘subversive’ avant-garde proposition and its social appropriation by the media, publicity and the like. Furthermore, in contrast with previous intellectual pursuits, the achievements of contemporary culture are not marginal disputes between any group of “happy few”, but affect the lives of everybody on the planet. This leads to a generalized aesthetic obsolescence of artworks – these are rapidly and easily ‘transduced’ into some kind of merchandise [see, for instance, Van Gogh’s chair as painted by Van Gogh in 1888 (Tate Gallery), by Clive Barker in 1967 (Paul and Linda McCartney Collection), and by Hewlett-Packard Development Company in 2004 (Amsterdam Airport)].
Nowadays, art is everywhere, as artists always wished (even in commercial products and in decentralized ‘mediatic’ and ‘mobile’ exhibitions that fly from one country to another). What is important in contemporary art has nothing to do with the object that is produced, but with the underlying creative process: indeed, what the artist does is to reprocess ideas ‘extracted’ from his Zeitgeist.
Hence, as Duchamp put it, “everybody can be an artist”, in the vein of the “self-proclamation theory”. As an example of this, the Society of Independent Artists, Inc., founded in 1916 in New York with Duchamp’s involvement, accepted as a member anyone who paid a fee and exhibited an artwork, without indicating how artists are recognized as such – exhibiting was no problem, given the famous rule “No jury, no prizes”.
Since the contemporary world is driven by the information/communication revolution, it is not astonishing that the art of our times has become ‘digital’ (and that ‘digital’ corporations like HP take possession of art, instead of museum curators or private collection owners).
Art and philosophy
Since Kant asked the question “Was ist Aufklärung?” – that is, what was his own actuality – philosophy gained a new dimension. According to Foucault, this new dimension is to tell us who we are in terms of the present we are living in. By the same token, a new relationship between philosophy and contemporary art emerges. It is obvious that contemporary art has nothing to do with ‘truth’. This stands in sharp opposition to Heidegger’s solemn claims, following Foucault’s point of view that truth elicits power, and Derrida’s critique of the ‘transcendent signified’ as a unitary source of truth. Since ‘truth’ is inevitably distorted, Duchamp twists it further for himself, for the fun of it, forgetting some things and selectively modifying or misrecollecting others. For instance, he performs the détournement of Jules Lefebvre’s “La Vérité” by inserting a mannequin feebly holding her lamp aloft inside his “Étant donnés”.
However, having reached the current state of affairs where anything can be “art”, the point is to confront the philosophical questions raised by artistic production. In his 1978 text “La vérité en peinture”, Derrida asks what the reader’s reaction would be to the putative impossibility of putting a frame around artworks. Indeed, the Derridean analysis of art is always focused on the issue of the “frame” (how to bound the space of the oeuvre).
In reality, what is important in contemporary art is the interpretative plane, which gives meaning to artistic objects. As noted by Arthur Danto, contemporary art brings to an end the former search for essence, emphasizing its extensive, rather than intensive, character. Once all its intensive conditions have vanished, “art is now the totality of life” (Danto, 1997). The same seems to apply to hypertext, where extension prevails over intension (Pereira, 2002). The concept of ‘expanded art’, used by Valie Export as a collage extended in time and across multiple spatial and media layers, may also be analysed under this perspective (Valie Export, 2003).
But some decades before, the situationists had already envisaged a society not merely of ‘plenty’ but of outright excess. In particular, Constant – with his New Babylon project, an infinite container for mass-produced environments, fabulous technologies, and endless artistic exchange – puts forward a similar idea: “New Babylon stops nowhere (since the Earth is round); it knows no frontiers (since there are no national economies), nor collectivities (since humanity is fluctuating)”.
In fact, this rhetoric of excess has a magnifying effect that may be spotted at the roots of the ArtSBot project’s positive feedback, but that can hardly be achieved by conventional painting.
As reported by Pierre Cabanne in 1968, Duchamp’s aim consisted above all in “forgetting the hand”, inflating his artistic objects by embedding them in language. Hence, it is not surprising that most artists since then deal not with conventional artworks but with conceptual thinking.
Penny, 1995, claims that the ‘Cultural Software’ of contemporary aesthetics is represented by ‘Conceptual Art’. Indeed, the striking parallels between Conceptual Art and developments in systems theory and computer information processing have been disclosed by several authors (e.g. Shanken, 2001).
For Arthur Danto, the role of philosophy is to remove the artist’s hand from the processes of art, as claimed by Duchamp, who was interested in an entirely cerebral art. For instance, in order to explain Duchamp’s Fountain through discourse, one is not required to ‘see’ the object; a photo does the job.
The roots of such a discourse about art are found in the Greek ekphrasis, the verbal description of artworks. Diderot turned this kind of narrative into a literary canon, aimed at ‘explaining’ to the public (and to art patrons …) the tableaux of his contemporary painters, for instance La Tempête.
The relationship linking the artwork to all discourses that it evokes is also a concern of Derrida, namely in his consideration of the ‘double bind’ brought by the movement of the discourse to the intrinsic immobility of the artwork (Derrida, 1980).
The connection between art and linguistics is an important development in which Antoni Muntadas – who emphasised the importance of ‘translation’ in conceptually artistic terms – pursues Umberto Eco’s approach in Opera Aperta. In fact, Muntadas considers translation as incorporating not only languages but also different media and different technologies. His open work contains different levels of possible interpretation of how languages are transcribed into/onto each other (Muntadas, 2003).
Some views of Kierkegaard and Nietzsche on ‘difference and repetition’, as interpreted by Deleuze, 1994, may be seen as predecessors of contemporary art, considered as an open field of possibilities where critical standards do not rely on any kind of ideology. In fact, those views favor difference over identity, as opposed to the classical philosophical scrutiny of artworks (from Plato to Hegel), based on a uniqueness that stems from the authority of authorship.
In this regard, it is worth stressing the multiplicity of instances produced by ArtSBot, which can be seen as realizations of a random function, showing “how things might be otherwise”. In artistic terms, this means that artworks, being by nature ephemeral and incomplete, share a certain common feature: they are the present as a moment of becoming (“life as it is” turns into “life as it could be”). The emergence of an artwork stems from an organized spontaneity that contains two apparently antagonistic features: the random and the ‘acquis’.
For Duchamp, this corresponds to his paroxysmal search for “alternatives” and, for Constant, this apparent paradox reveals the coherence of the artist that accepts unpredictability.
On the other hand, the concept of the ‘diagram’, as exposed in Deleuze’s reading of Foucault (Deleuze, 1988), is a good framework for understanding, in philosophical terms, the conceptual foundation of contemporary art. In fact, this ‘abstract machine’, as opposed to any transcendent Idea and to any kind of economic infrastructure à la Marx, is the unstable and fluent mapping of a series of relationships that give rise to a new entity by a self-organizing process, leading to a sudden ‘cooperation’ between previously disconnected elements when a critical point is reached.
Given the value Magritte placed on thought, it is not surprising that his work has drawn the attention of philosophers like Foucault since the 1960s (e.g. Foucault, 1968). But the ideas behind Magritte’s interaction with Foucault could become visible only through painting, supporting Foucault’s proposal of ‘art’ as a multiplicity of parts that are both theoretical and practical, against the canonical ‘conceptual art’ that separates theory from practice. Along these lines, it is worth noting that Magritte is a marvelous inventor of forms, for whom a “false true” value in logic can have a profound aesthetic reality in art. This example, taken from an important art practitioner, illustrates the trivial claim that art ‘has nothing to do’ with classical logic. In complexity theory terms, it can be stated that bifurcating states of permutation are transposed into self-similarity. It is also worth noting that contemporary art practice is unimaginable without appealing to the concept of deconstruction à la Derrida. This point reaches its pinnacle with neo-conceptualism, which is no more than a distillation of the deconstructive method.
Recombining art, science and philosophy under a new dissenting perspective – unmanned art
The new dissenting perspective in political terms, which opposes the old idea of a sudden revolution that changes life overnight, is represented for instance by the Italian group called “Tute Bianche”.
This group, inspired by the Zapatista movement, appeared in Milan in 1994 during the ‘anti-globalization’ fights. The name adopted was an ironic response to the epithet of ‘ghosts’ that the Mayor of Milan assigned to squatters. In line with Marcos’ dictum “we don’t know what to do with a vanguard that cannot be reached by anybody”, they intend to be submerged in the ‘multitude’. They mix the ‘imaginary’ with the material basis of political criticism, the underground with the mainstream, high culture with popular culture (which in Latin languages means “made by the people”).
This type of ‘movement’ focuses on breaking all dualistic oppositions (visibility/non-visibility, legality/non-legality, violence/non-violence, before/after the revolution…) by dividing what is united and bringing together what is separated, for the sake of creating strange feelings of nearness and distance.
The ‘culture jam’ that occurs in the ‘anti-globalisation’ groups is a novel characteristic of our times, in sharp contrast with the ‘goal-directed’ avant-gardist attitude, based on some kind of ‘identity’, that prevailed during the fights against Fordist capitalism.
This complex union of stability and chaos may be interpreted as a fractal phenomenon, where strange attractors maintain a certain unity, despite all turbulence.
Along these lines, Perniola, 1994, proposes a porous way of thinking which does not anchor itself within methodologically safe limits. In fact, Perniola puts forward the idea of transit, the trespassing of one thing into another, of one field of knowledge into another, without ever defining the borders between internal and external limits. From the work of Perniola, one can draw the concept of the thing that feels, the thing that plays, and, a fortiori, the thing – the group of robots – that interacts with the environment in an arty way. This line of thought can be derived from the original idea of Asger Jorn that individual creativity cannot be explained purely in terms of psychic phenomena. In his critique of Breton’s surrealism, Jorn made the point that explication is itself a physical act which materializes thought, and so psychic automatism is closely joined to physical automatism (Jorn, 2001).
What is surprising is that this attitude chimes with the fresh approach recently developed by Rodney Brooks in the field of robotics. Conversely, it is worth noting how Brooks’s approach influenced computer-based art in its ‘materialization’ aspect (at least since the 1993 Ars Electronica Conference, cf. Shanken, 2001). In fact, the MIT researcher considers that human nature can be seen to possess the essential characteristics of a machine, even though this idea is usually rejected instinctively in the name of our putative uniqueness, stemming from some kind of “tribal specialness” (Brooks, 2002). In parallel, Derrida criticizes the usual neglect of non-human actors in “Il n’y a pas d’hors-texte” (Derrida, 1967) and, in “Papier Machine”, he shows how a text (Rousseau’s Confessions) can function machinalement, i.e., by itself, emancipated from the author and even cut off from him.
Similarly, William Carlos Williams considers “a poem as a small (or large) machine made of words”.
But Duchamp had already put the same idea into practice in his “Large Glass” (1915-1923). In fact, this is a “machine moved by a proverb – it’s the words that make the machine run” (Suquet, 1992). In his unsigned “Green Box”, Duchamp put together the legend to the “Large Glass”: ninety-four scraps of paper bearing plans, drawings, hastily jotted notes, and freely drawn rough drafts, delivered in bulk.
In the scope of ArtSBot, it can be stated that if an idea becomes a machine that makes the art, then there is no point in imitating Nature; the point is to perceive the “beauty of the idea” (Le Witt, 1967). If a self-referential art that does not care for objects is to be made, then the point is to simulate those artificial features of life (as it could be) that are driven by creativity. And creativity, as Debord put it as early as 1957, is not the capacity for arranging objects and forms; it is the invention of new laws for that arrangement. Hence, the point is no longer to create objects, not even ‘contexts’ in Duchamp’s terms.
Now, in unmanned art, not only does the artwork depend on the idea that generated it, but a complete symbiosis occurs between the artist and the machine.
Human and robot bodies are ultimately related to a common ground: Deleuze & Guattari’s “machinic phylum”. In the late 1960s, Deleuze realized the philosophical implications of three levels of the phase space where man and machines co-evolve: specific trajectories, corresponding to objects in the real world; attractors, corresponding to the long-term tendencies of those objects; and bifurcations, corresponding to spontaneous mutations occurring in those objects (Deleuze & Guattari, 1980).
The human behind the idea is the Symbiotic Artist, the one who brings about the conditions for ‘situations’ to be constructed.
Technical details of the ArtSBot project and its results
Description of each robot
Architecture
The basic architecture of each robot (mbot) contains three components: the sensors, the controller and the actuators. The sensors receive signals from the environment, which are processed by the micro-controller in order to command the actuators, mechanical devices that produce conditional motion.
The sensors
The sensors are of two kinds: those that receive the signal from the key environmental variable chosen, which is color, and those that perceive the proximity of obstacles.
In regard to the ‘color’ sensors, there are two of them in each robot, directed at the floor. They are called RGB sensors because they are only able to distinguish between Red, Green and Blue. Each color sensor is composed of one LED (Light Emitting Diode) for each color. In this case, since at the end of the process it is required to discriminate “bright” from “dark” colors, a fourth LED, directed at white, was added.
The function of each LED is to measure the intensity of reflected light. Given that a surface of a certain color reflects most intensely the light of that same color, each LED captures ‘only’ (in practice, ‘mainly’) the color it is directed at. In each cycle of the sensor, the four LEDs are fired sequentially, and an integration of the corresponding intensity values provides the RGB (and white) evaluation of the surface covered by the sensor. Since there are two sensors of this kind, a further integration is needed before the signal is transmitted to the processor.
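For illustration only, the cycle just described might be sketched as follows in Python (the actual measurement runs on the PIC; read_reflected_intensity is a hypothetical stand-in for firing one LED and measuring the light bounced back):

```python
# Minimal sketch of one color-sensor cycle; names are illustrative.

LEDS = ["red", "green", "blue", "white"]

def read_sensor(read_reflected_intensity):
    """Fire the four LEDs sequentially and return one RGBW reading."""
    return {led: read_reflected_intensity(led) for led in LEDS}

def integrate(left_reading, right_reading):
    """Combine the two floor-facing sensors into a single evaluation
    before the signal is passed on to the processor."""
    return {led: (left_reading[led] + right_reading[led]) / 2
            for led in LEDS}
```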
In regard to the proximity sensors, there are four of them, located at the robot’s front. Each consists of an IR emitter/receptor producing a signal that is proportional to the distance from a white wall. Hence, the bounding barriers of the terrarium where the robots move must be white, as must the robots’ enclosing boxes.
Since sunlight may interfere with the sensors, the robots should operate under low-intensity artificial light. The range of distances perceived by this type of sensor is 1 to 15 cm.
The controller is an on-board PIC 16F876 from Microchip, which reads the signals from the sensors, processes them according to a program, and transmits the result to the actuators. The program is developed in a graphic interface on a PC, consisting of a flowchart where test blocks for sensors and actuators are combined in a given sequence, which can be changed whenever desired. Each test block compares a given variable with a previously defined parameter and executes an “IF…THEN” rule. After compilation, the program is uploaded to the robot through the PC’s serial interface prior to each run.
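Purely as an illustration of the flowchart logic, a single test block might be rendered as below; the names are assumptions, since the actual program is built graphically and compiled for the PIC, not written in Python:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestBlock:
    """One flowchart test block, expressed as an 'IF…THEN' rule."""
    read_variable: Callable[[], float]   # the sensor value to be tested
    parameter: float                     # the previously defined threshold
    then_action: Callable[[], None]      # the actuator command to execute

    def fire(self) -> None:
        # IF the sensed variable exceeds the parameter THEN act.
        if self.read_variable() > self.parameter:
            self.then_action()
```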
The actuators
The actuators consist of two servomotors producing movement by differential traction based on velocity control, and one servomotor for manipulating the two pens. The latter is commanded by a signal analogous to the one sent to the traction motors but, in this case, an angular position control is used. The function of this actuator is to raise or drop each pen, according to the signal provided by the controller.
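The distinction between the two control modes may be sketched as follows; set_velocity and set_angle are hypothetical helpers, since the real servos are driven directly by the PIC:

```python
def drive(left_motor, right_motor, v_left, v_right):
    """Differential traction (velocity control): equal velocities drive
    the robot straight, unequal velocities make it turn."""
    left_motor.set_velocity(v_left)
    right_motor.set_velocity(v_right)

def move_pen(pen_servo, drop, angle_down=90):
    """Pen manipulation (angular position control): rotate the servo to
    a target angle in order to drop or raise the pen."""
    pen_servo.set_angle(angle_down if drop else 0)
```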
Functioning
The chassis consists of an oval 20 x 15 cm platform, moved by 3 wheels and carrying two marking pens. Each robot is 12.5 cm tall, weighs 750 g, and its lifetime on 8 AA-type batteries is approximately 4 hours.
In regard to the programming interface, it contains a special module, called RANDOM, which allows the process to be initialized. This function generates a uniformly distributed random number and compares it with a threshold input as a control parameter. If the random number exceeds the threshold, one pen is dropped and the robot paints a colored trace whose length depends on another control parameter.
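A minimal sketch of the RANDOM module, assuming an 8-bit uniform random value; the threshold is set here so that only 2 of the 256 possible values exceed it, matching the 2/256 pen-drop probability used in the experiments reported below:

```python
import random

def random_module(threshold=253):
    """Return True (drop one pen) when the random byte exceeds the
    threshold input as a control parameter: probability 2/256 here."""
    return random.randrange(256) > threshold
```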
Once a robot ‘sees’ a trace of a given binary color (“dark” or “bright”), the pen of the same color range is dropped and, consequently, that color is accentuated. The movement actuators also react to color in the following manner: when a color is perceived by both RGB sensors, the robot goes straight ahead; when a color is perceived by only one of them, the robot turns towards the direction the color comes from.
This is very similar to the technique Constant applied in his post-1970 work: “I take a color, I make a mark, very light, on the virgin canvas. Anything at all. I throw it down, I wait for it to dry, I look at it, I add a second mark.”
Collective behavior of the robots
Prior to launching any collective experiment, the following procedure is performed:
– Parameterisation of the control program in the graphic interface with the same values, compilation and transmission for each robot
– Calibration of all sensors of each robot in the programming interface
– Provision of fresh batteries for each robot
– Provision of two new pens (one “dark” and one “bright”)
This procedure guarantees that all robots have the same individual behavior, in order to meet the non-hierarchic requirement and to allow for scalability. Autonomy and self-organisation are, obviously, other preconditions assured by this procedure. In regard to how stigmergy is achieved in the experiment, it is worth noting that the robots interact only via the environment: they avoid each other through the effect of the proximity sensors and ‘communicate’ only through the trail left on the canvas by a previous passage. Given that this signal is amplified through the positive feedback mechanism and that no ‘fitness’ function is included in the process, the problem arises of how to stop the experiment. If battery power were infinite, the canvas would be completely full after a certain time.
A similar process is described by Fréchuret, 2001, regarding Constant’s working method: “A mark, a color, the merest hint of form are enough to trigger the creative process. Emerging in the whiteness of the canvas, they give the starting signal and, as if by capillarity, propagate across the support until, soon, they cover it totally.”
In social terms, this situation corresponds paradoxically to Gabriel Tarde’s concept of “absolute sociability”, which consists of “envisaging an urban life so intense that the transmission of a ‘good idea’ to all brains of a given city would be instantaneous”.
Hence, an exterior stopping criterion must be applied: the point previously made about the way we judge an artwork to be ‘just right’ is recalled at this stage, as is the collective consensus that prevails in the art milieu. The way humans perceive a 2D assemblage of colored geometric elements is another issue to be addressed in this regard, from the user’s point of view.
In order to gain insight into the issues discussed above, and to assess how the group of robots reacts to a ‘real’ world (the canvas and its boundaries), a series of experiments was run, dealing with the extensibility issue.
The experiments were all performed under the same conditions, driven by the following combination of rules (introduced through a trial-and-error parameterisation of the programming interface, guided by experience; a code sketch of this rule set follows the list):
– If any of the proximity sensors detects an obstacle nearer than 10 cm, then the robot turns to the side opposite that sensor
– If both RGB sensors read a color, then the pen whose color corresponds to the same range as the average intensities is activated and the robot goes ahead
– If the left RGB sensor reads a color and the right reads white, then the pen whose color corresponds to the same range as the average intensities is activated and the robot turns left
– If the right RGB sensor reads a color and the left reads white, then the pen whose color corresponds to the same range as the average intensities is activated and the robot turns right
– If both RGB sensors read white, then the random module is fired and a pen is activated with probability 2/256
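The following compact sketch restates the five rules; all sensor and actuator helpers (distances_cm, reads_color, drop_pen, ahead, turn) are hypothetical stand-ins, since the real program is a compiled flowchart running on the PIC:

```python
import random

def control_step(robot):
    # Rule 1: obstacle avoidance takes priority over painting.
    for i, d in enumerate(robot.distances_cm()):      # four front sensors
        if d < 10:
            robot.turn("right" if i < 2 else "left")  # away from obstacle
            return
    left = robot.reads_color("left")     # 'dark', 'bright' or None (white)
    right = robot.reads_color("right")
    if left and right:                   # Rule 2: color under both sensors
        robot.drop_pen(left)             # pen in the averaged color range
        robot.ahead()
    elif left:                           # Rule 3: color on the left only
        robot.drop_pen(left)
        robot.turn("left")
    elif right:                          # Rule 4: color on the right only
        robot.drop_pen(right)
        robot.turn("right")
    elif random.randrange(256) > 253:    # Rule 5: RANDOM fires at 2/256
        robot.drop_pen(random.choice(["dark", "bright"]))
```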
The point of view of the ‘user’
From the user’s point of view, the main difference between the ArtSBot experiment and usual artistic practice is that ‘the consumer’ may follow the process of making the final product, from its initial stage onward.
The principles of conceptual art, where the process prevails over the retinal outcome, are met here. Furthermore, those principles are materialized into the physical movement of the set of robots in their process of producing the painting.
Since interaction with the user is a mystifying issue, as seen before, and since no empirically determinable community of receivers can be anticipated, no ‘direct participation’ of the public is allowed in the making of the picture. However, the viewer’s brain perceives the dynamics of the process, from one configuration to another. Instead of trying to ‘tell a story’ by assigning ‘movement’ or ‘sequence’ to a preset spatial image, symbiotic art tells the story of the construction of the image.
Hence, the viewer has a dynamic perception of the artwork being made, shifting from one chaotic attractor to another. Sometimes, a point is reached where surprise and novelty strike the receiver. This means that the configuration presented on the canvas fires a certain gestalt in the user’s brain, in accordance with his or her past experience, background and penchants. At this point, the experiment is stopped at the viewer’s directive, since the figure obtained has met his or her constellation of desires.
If the combination of parameters under which a certain experiment is run does not give rise to any configuration that ‘satisfies’ the user, that ‘instance’ is disregarded and another instance is launched by arbitrarily altering the conditions of the experiment. This corresponds to the Popperian attitude of ‘falsification’, focused on the ‘refusal’ of ‘wrong’ instances, instead of seeking a desirable solution.
In case all configurations constructed according to the procedure described above are refused by a particular viewer, a ‘seed’, chosen by the user, is placed on the canvas. This ‘seed’, a specific figure the viewer is especially fond of, replaces the initializing random process, which is turned off for this experiment. The group of robots, taking the particular seed as a starting point, then ‘exaggerates’ it through their positive feedback mechanism and fuzzifies its form, most likely providing a final product that is unlikely to be refused by the user. Obviously, the seed may be changed at the viewer’s will until the current instance is no longer refused.
The seed may also be placed under a Plexiglas terrarium and removed after the experiment.
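Taken together, the falsification loop and the seed mechanism amount to a simple human-in-the-loop procedure, sketched below under the assumption of hypothetical run_instance, user_accepts and perturb functions:

```python
def run_session(run_instance, user_accepts, perturb, params, seed=None):
    """Run robot experiments until one configuration fires a gestalt in
    the viewer; with a seed, random initialization is turned off."""
    while True:
        canvas = run_instance(params, seed)   # one collective experiment
        if user_accepts(canvas):              # surprise and novelty strike
            return canvas
        if seed is None:
            params = perturb(params)          # Popperian refusal of the
                                              # 'wrong' instance
```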
References
Bataille, G. (1967) La part maudite, Les Éditions de Minuit
Benjamin, W. (1978) The work of art in the age of mechanical reproduction, in Illuminations, Ed. Hannah Arendt, Schocken Books, New York
Blackmore, S. (1999) The meme machine, Oxford University Press
Bonabeau, E., Dorigo, M., Theraulaz, G. (1999) Swarm Intelligence, Oxford University Press
Breazeal, C., Brooks, A., Gray, J., Hancher, M., Kidd, C., Mcbean, J., Stiehl, D., Strickon, J. (2003) Interactive Robot Theatre, Communications of the ACM 46(7), p. 76-95
Brooks, R. A. (1991) Intelligence without Reason, Proc. 12th IJCAI, Ed. Morgan Kauffmann, San Mateo, California
Brooks, R. A. (2002) Flesh and machines: how robots will change us, Pantheon Books
Cameron, E. (1992) Given, in The definitively unfinished Marcel Duchamp, ed. Thierry de Duve, MIT
Careri, F. (2001) New Babylon. Le nomadisme et le dépassement de l’architecture, in Constant, une retrospective, Musée Picasso Antibes
Careri, F. (2003) Walkscapes. Walking as an aesthetic experience, Editorial Gustavo Gil, Barcelona
Chandler, J., Lippard, L. (1968) The dematerialization of Art, Art International, February 1968
Dagen, P. (2001) Constant: une politique de la peinture, in Constant, une retrospective, Musée Picasso Antibes
Danto, A. (1997) After the end of art: contemporary art and the pale of history, Princeton University Press
Dawkins, R. (1976) The selfish gene, Oxford University Press
Deleuze, G. (1988) Foucault, University of Minnesota Press
Deleuze, G. (1994) Difference and repetition, Columbia University Press
Deleuze, G., and Guattari, F. (1980) Mille Plateaux, Éditions de Minuit
Dorf, R. (1990) Encyclopaedia of Robotics: Applications and Automation, Wiley-Interscience
Derrida, J. (1967) De la grammatologie, Minuit
Derrida, J. (1980) La carte postale, Flammarion
Duve, T. (1992) Given the Richard Mutt case, in The definitively unfinished Marcel Duchamp, Ed. Terry de Duve, MIT
Foucault, M. (1968) ‘Ceci n’est pas une pipe’, Les Cahiers du chemin, nº 2, pp. 79-105
Fréchuret, M. (2001) Via le monde, in Constant, une retrospective, Musée Picasso Antibes
Freeman, W. (1991) The physiology of perception, Scientific American, February 1991, p. 34-41
Gibbs, W. (2004) A new race of robots, Scientific American, March 2004, p. 30-39
Grabowski, R., Navarro-Serment, L. E., Khosla, P. K. (2003) An army of small robots, Scientific American, November 2003, p. 43-47
Grassé, P. P. (1959) La reconstruction du nid et les coordinations inter-individuelles chez Bellicositermes natalensis et Cubitermes sp. La théorie de la stigmergie: Essai d’interprétation du comportement des termites constructeurs, Ins. Soc., 6, p. 41-48
Hardt, M. and Negri, A. (2000) Empire, Harvard University Press
Harvey, I. (2003) Robótica evolucionária, nada, nº1, Lisboa, p. 44-49
Jorn, A. (2001) Discours aux pingouins et autres écrits, Ed. École Nationale Supérieure des Beaux-arts de Paris
Kitano, H., et al. (1997) The Robocup 1997 synthetic agents challenge, in Proceedings of the 8th Symposium on Robotics Research, Nagoya, Japan
Köhler, W. (2000) La psychologie de la forme, Gallimard
Langton, C. (1987) Proceedings of Artificial Life, Addison-Wesley
Larousserie, D. (2004) Graines d’intelligence, Sciences et Avenir, March 2004, p. 80-83
Lefebvre, H. (1968) La vie quotidienne dans le monde moderne, Gallimard, Paris.
Le Witt, S. (1967) Paragraphs on Conceptual Art, Stiles and Selz, Theories and documents: 8 25, New York
Liggett, H. (2003) Urban Encounters, University of Minnesota Press
Moura, L. (2001) Swarm Paintings: Non-human Art, Architopia, Art Architecture Science, Cascais Biennial, Cascais
Muntadas, A. (2003) On Translation, Catalogue edited by the Fundació Museu d’Art Contemporani de Barcelona
Page, M. (2000) Creativity and the making of Contemporary Art, in Strange and Charmed, Ed. Siân Ede, Gulbenkian, Lisboa
Penny, S. (1995) Consumer culture and the Technological Imperative: The Artist in Data Space, in Critical Issues in Electronic Media, Ed. Simon Penny, Suny Press
Penny, S. (1999) Systems Aesthetics + Cyborg Art: The Legacy of Jack Burnham, Sculpture, February 1999, Vol. 18, Nº 1
Penny, S., Smith, J., Sengers, P., Bernhard, A., Shulte, J. (2000) Traces: Embodied Immersive Interaction with semi-autonomous avatars, CALD, Carnegie Mellon University
Pereira, H.G. (2002) Apologia do hipertexto na deriva do texto, Difel, Lisboa
Perniola, M. (1994) Il sex appeal dell’inorganico, Einaudi, Torino
Plant, S. (1997) Zeros+Ones, Fourth Estate, London
Plant, S. (1992) The most radical gesture: The Situationist International in a Postmodern Age, Routledge
Sadler, S. (1999) The situationist city, MIT Press
Shanken, E. (2001) Art in the information age: Technology and conceptual art, in Invisible College: Reconsidering “Conceptual Art”, Ed. Michael Corris, Cambridge UP
Sheldrake, R. (1981) A new science of life: the hypothesis of formative causation, Blond & Briggs, London
Simon, H. (1997) The Sciences of the Artificial, MIT Press
Slayton, J. (1998) Re=purpose of Information: Art as Network, Switch, v. 4, n. 2
Soares, B. (1998) O livro do desassossego, Richard Zenith (ed.) Assírio & Alvim, Lisboa
Suquet, J. (1992) Possible, in The definitively unfinished Marcel Duchamp, ed. Thierry de Duve, MIT
Taylor, R. P., Micolich, A. P., Jones, D. (1999) Fractal analysis of Pollock’s drip paintings, Nature 399 (June 3), p. 422
Valie Export (2003) Catalogue of the exhibition held in Sevilla, January-April 2004, Éditions de l’Oeil, Paris
Vienne, V. (2000) Confessions of a closet situationist, Communication Arts, March/April 2000
Vila-Matas, E. (2003) El descarriado por la soledad, Letras Libres, March 2003, p. 54
Von Neumann, J. (1966) Theory of self-reproducing automata, ed. A. W. Burks, University of Illinois Press
Werger (1998) Profile of a winner: Brandeis University and Ullanta Performance Robotics’ “Robotic Love Triangle”, AI Magazine 19(3), p. 35-38
Wiener, N. (1948) Cybernetics; or, Control and Communication in the Animal and the Machine, MIT Press