With my book on the contemporary limits and transformations of humanity coming out next month, I had the chance this week to talk with John Elder of The Sunday Age about the future possibility of rights for robots. John’s article came out today in The Sunday Age and the Sydney Morning Herald with the title “What happens when your robot gets ambitious?”.

In the course of a stimulating conversation with John I argued that one of the main reasons our society finds the question of robot rights so hard — and so scary — to answer today is that we moderns are still suffering from a Cartesian hangover that makes us see the world as divided into the two categories of “subjects” (human beings) and “objects” (everything else); we load all agency and power onto the subject side of the equation, with the result that everything non-human is thought to be passive and inert (readers of Latour’s We Have Never Been Modern will find themselves on familiar ground here, as will those conversant with Michel Serres’s discussions of subject and object in The Parasite and elsewhere). If robots were to have rights in such a way of thinking, it would mean that they had crossed over the subject-object abyss and become “one of us”, or even perhaps been made “in our image”.

The problem with this view, though, is that the two-speed gearbox of subject and object is really not up to the task of parsing out the variegated and complex ways we relate to technology (including robots) today, never mind in the future. I argue that we need something more sophisticated than the all-or-nothing subject-object dyad if we are to do justice to the ways in which humans interact with increasingly sophisticated and humanoid robots, and with technology more generally.

Hollywood blockbusters aside, it’s not a question of “humans versus robots”: we humans are ourselves irreducibly technological beings. Strip away from a human being all technology and technique (the building of dwellings, cultivation of crops, language, social customs, rituals, religions and symbols, tools, art, complex social groups…) and what you are left with is no longer a human being. As Michel Serres is fond of saying (see the YouTube clip below), everyone carrying a laptop today is like Saint Denis walking around with his head under his arm: we outsource significant quantities of our cognitive processing to technology, as well as much of our manual work to tools, chemical compounds and engines. That is not some alien technological intrusion into a pristine and untroubled non-technological humanity; it is who, as human beings, we are, who we always have been, and who we will be in the future, no doubt with ever more sophisticated ways of building technology into our existence. Technology in general and robots in particular do not threaten our humanity; without it (and them) we would not be human to begin with.

What about the question of robot consciousness, though? Well, it’s certainly an important question, but we make a grave error if we assume that it is the only, or even the salient, question in the public debate about any eventual robot rights. I argue that there’s more to the question of robot rights than whether robots are conscious or not, for the good reason that there is more to human rights than the fact that we humans are conscious. Our finitude and neediness (to take just one set of examples) also irreducibly inform the discourse of human rights, and it is unclear how limiting factors like the need for rest and recreation, or having a family (or even oneself) to support, would pertain to robots. The cry of the Australian trades unions in the 1850s was “8 hours labour, 8 hours recreation, 8 hours rest”, a demand that reflects not only human consciousness but human finitude and the web of relationships into which human beings are born.

If not consciousness, then what about capacity? Well, if we define robots’ status or access to rights by what they can do (think rationally, use language, beat humans at board games…) then we are, at least implicitly, consenting to making one capacity or a suite of capacities the shibboleth of human rights too, and in the new book I argue that this “capacity approach” is a dangerous position to hold. We shouldn’t make human capacities the gatekeepers of moral equality or of the right to have rights, because exceptions can always be found to whatever capacity is chosen and it is often some of the weakest and most vulnerable who are left outside the circle of human rights if entry is granted on the basis of this or that capacity. On this basis, capacity should not be our yardstick for assessing robot rights either. It is much too blunt an instrument.
