Interview with Leemon McHenry about his forthcoming book The Event Universe

Recently I interviewed Leemon McHenry about his book The Event Universe: The Revisionary Metaphysics of Alfred North Whitehead, the newest addition to the Crosscurrents series. The book is available for pre-order and scheduled for publication in July. Leemon’s other books can be found here.


Chris Watkin: You write in the preface that The Event Universe has been a long time in the making, and indeed it is clear that the book is the fruit of sustained and persistent reflection dating back to the mid 1990s. What set you on the journey to writing The Event Universe, and is it the same book you had in mind from the beginning?

Leemon McHenry: The Event Universe is pretty much what I had in mind from the time I began to think about what problems I wanted to tackle. I didn’t think I’d ever get around to writing this book, but here it is finally in 2015. Actually it was the late 1980s when I began to think about a book on an event ontology. One of Whitehead’s students, Victor Lowe at Johns Hopkins University, was writing the second volume of Whitehead’s intellectual biography and he figured that he wouldn’t be able to finish it in his lifetime. He asked me to write two chapters of the biography on Whitehead’s philosophy of physics, and it was this project that mainly motivated my interest in the theory of events as a unifying concept for physics. As I explored this theory over the years, I found that Whitehead’s contemporaries at Cambridge, Bertrand Russell and C. D. Broad, followed Whitehead’s lead and proposed their own versions of an event ontology. They were all in agreement about the ontological impact of Einstein’s Special and General Theories of Relativity and thought that the Aristotelian view of substance could no longer serve as a foundation of physics.

In the 1990s, I began to correspond with W. V. Quine about his ontology of events. Quine wrote his PhD thesis with Whitehead at Harvard in the 1930s. While he was mainly focused on the logic of Whitehead and Russell’s Principia Mathematica, when he later formulated his metaphysics he advanced an event ontology as well.   About the same time I met the physicist, Henry Stapp, who was working out the details of Whitehead’s event theory for quantum mechanics and its unification with relativity theory. Stapp’s work figures prominently in one of my later chapters where I attempt to update Whitehead’s theory in light of contemporary physics.


Could you briefly summarise which contemporary debates The Event Universe intervenes in, and how it moves those debates on?

Whiteheadians tend to be highly specialized and only talk with one another. One of my aims has been to open a wider dialogue such that Whitehead’s ideas might enter the mainstream of analytic philosophy. To this end, I have compared his theories with those of Russell and Quine and contrasted his ideas with Strawson and Davidson. In fact, Strawson’s distinction between descriptive and revisionary metaphysics is the focal point for the main debates in my book.

First, there is the ontological status of events in the substance vs. event debate. I am challenging the long-standing Aristotelian tradition in which events are dependent on substances and instead arguing that events are basic.

Second, and closely related to this debate, is the question of whether ordinary language or advancing science is the most satisfactory basis for establishing an ontology. I defend what Strawson called “revisionary metaphysics” as opposed to the approach he preferred, “descriptive metaphysics.”

So, the battle lines are drawn between the revisionary and the descriptive metaphysicians: it is Whitehead, Russell and Quine vs. Aristotle, Strawson and Davidson.

Third, if it is science that provides the ontological foundation, as Whitehead, Russell and Quine argue, then the next question is what ontology would provide the most fruitful approach to what I call “The Big Problem,” namely, the unification of modern physics. Whitehead’s event ontology was originally proposed as a unifying concept for physics and several of his followers have seen this as a plausible solution to what is the most pressing problem of theoretical physics, namely, the unification of Einstein’s theory of relativity with quantum mechanics. Here I examine Whitehead’s theory in the context of the contemporary search for Grand Unified Theories and a Theory of Everything, beginning with Einstein’s Unified Field Theory and ending with Stapp’s modification of Whitehead, Einstein and Heisenberg.

Fourth, as I explore the unification of relativity theory and quantum mechanics, I focus attention on the problem of how to conceive of the nature of time. Since relativity theory and quantum mechanics present two incompatible views about the nature of time, both theories must be modified. I examine three major theories in the philosophy of time – eternalism, presentism and the growing block universe – and argue that the growing block view is the most plausible solution. The growing block universe also turns out to be the most plausible interpretation of Whitehead’s attempt to defend the asymmetry of time against the eternalistic conceptions of Einstein’s theory of relativity.

So, in the end, I am arguing for an ontology of events as a unifying concept for modern physics following Whitehead, Russell and Quine, but I am also updating this theory in light of recent developments in physics and cosmology.


The physics you engage in the book dates from the latter decades of the 20th century up to the present day (string theory and branes), but your main philosophical reference is Alfred North Whitehead, whose Process and Reality was published in 1929, from the Gifford Lectures given in 1927-8. Can Whitehead’s thought still hold its own in a dialogue with 21st-century physics?

If one accepts the idea that process is a fundamental metaphysical principle of modern physics, then “yes,” Whitehead can hold his own, but as I put it in my book, this is a broad metaphysical framework, of which physics fills in the details. There is no question that Whitehead’s understanding of the physics was limited to what was known in 1928, but what is extraordinary about Whitehead was his ability to generalize from the advances of modern physics in formulating a theory that unifies the fragmentary ideas into one comprehensive system. It is that general metaphysics that still speaks to 21st-century physics.


In the conclusion you say that “aside from the philosophically-minded physicists influenced by Whitehead, his unified theory has not had much impact on the course of theorizing in twentieth and twenty-first century physics and cosmology”. Why do you think that is?

Physicists do not usually think in the categories that philosophers invent for them. Very few physicists will have read much philosophy in any detail and even fewer will have read Whitehead. Moreover, Whitehead did not help his case by formulating a language to express the dynamics of reality in terms of “actual occasions,” “eternal objects,” “prehensions,” “concrescence,” and the like. In other words, the sheer difficulty in reading Whitehead has contributed to his lack of influence. I hope that I have made Whitehead more accessible in my book by examining his ideas in the context of Maxwell’s electromagnetic theory, Einstein’s theory of relativity and the search for a quantum ontology.


On a naïve or common-sense reading, many might agree with Peter Strawson’s position in Individuals, which you sketch in Chapter 2: substances are more fundamental than events because substances can exist without events (the mug on my desk is just sitting there doing nothing), whereas events cannot exist without substances (there must be a mug for the mug to break or fall to the floor). In the book you pick apart this view very carefully and systematically. Could you briefly sketch in what main ways you think this common-sense position is misleading?

I trace this position to Aristotle, who held the view that grammar is the guide to ontology. Aristotle’s methodology treats substances as paradigm subjects; events and properties function secondarily as verbs, adverbs and adjectives. Whitehead, however, thought that the primacy of substance in the Western tradition was due to the historical accident of the subject-predicate structure of Greek and the dominance of Aristotelian logic that encouraged a conception of substance as basic. Quine followed with his own powerful criticism of this position. In other words, there is nothing special about the conceptual scheme enshrined in ordinary language. It has strong pragmatic justification, but as a metaphysical foundation for modern science, it fails to do justice to the empirical evidence.

Your example of the mug is typical of how an Aristotelian would view the relation between substances and events and indeed this is consistent with our common sense notions. But the kinds of events that Whitehead, Russell and Quine have in mind are those that would serve as a foundation for modern physics. Here the concept of energy in particle physics or the point-events that serve as the building blocks for the four-dimensional theory of relativity are basic.


In chapter 1 you suggest that “since physics is the most basic and comprehensive of all the physical sciences, enquiry into the nature of reality should begin here and not with pure armchair speculation or linguistic analysis.” Can the decision to privilege basicality and comprehensiveness itself be anything other than either an armchair decision or a question-begging self-justification of physics?

Well, you have to start somewhere in building foundational principles unless you are theoretically opposed to the project of fundamental ontology or metaphysics more generally (as I believe is the case with the post-modern relativists or the deconstructionists). This is perhaps a debate for another book since I don’t really take up these views. In proposing a “naturalized metaphysics” in the fashion of Whitehead, Russell and Quine, my adversaries are those philosophers who think that metaphysics gets at the nature of reality independently of science. In my view, these philosophers are guilty of treating ignorance of science as a virtue.


In chapter 5 you enter into a discussion of multiverse theories. Could you explain why an event ontology takes you down that path?

There are two reasons for this. First, some physicists and cosmologists have proposed multiverse theories as a path to unification. In other words, the Theory of Everything is sought in the broadest possible theorizing about the Superspace in which our universe was born and will possibly die in the distant future. While Whitehead is seldom acknowledged as a very early proponent of multiverse theory, his theory of cosmic epochs is remarkably close to some of the theories proposed today. Second, these “cosmic epochs,” as he calls the separate universes within the multiverse, are explained in his mereological theory. The succession of big bangs and big crunches in the oscillation model of modern cosmology would be explained in Whitehead’s metaphysics as epochs or extremely large space-time units that are, after all, extremely large events.  So, if the event ontology is offered as a unifying concept for theoretical physics, at the very smallest, microscopic level, quantum events make up the first level of physical reality and at the very largest, macroscopic level, cosmic epochs appear to be the largest events, each with its own big bang and big crunch.


What’s so dangerous about the Copenhagen interpretation of quantum theory?

The Copenhagen interpretation has been enormously successful as a theory of quantum mechanics. I don’t think I would characterize the Copenhagen interpretation as “dangerous,” but in so far as our goal is a comprehensive, unified theory in physics, the Copenhagen interpretation has been the main obstacle in our attempt to achieve this goal. The basic problem, as I argue in The Event Universe, is that its instrumentalism is incompatible with the realism of relativity theory.


In the final chapter you spell out the importance of an event ontology for rethinking the mind-body problem, perception and causation, free will, personal identity and moral agency. You also claim that “as with all revisionary metaphysics, everything changes and nothing changes”. How radical do you deem the move to an event ontology to be in terms of its implications beyond theoretical physics or theoretical metaphysics?

Since most philosophers assume a substance-property metaphysics as basic to making sense of the world, the suggestion that events are basic and substance can be eliminated is radical. Everything changes from a theoretical point of view, but nothing changes with respect to our ordinary, everyday thinking about the world. This is perhaps a bit of an overstatement. As I explore the philosophical implications of an event ontology in my last chapter, there are significant differences for conceiving solutions to the mind-body problem, perception and personal identity. When I said “nothing changes,” I had in mind Berkeley’s statement that he was not denying that ordinary things like ships, shoes and sealing wax really exist; what he was challenging was what philosophers call “corporeal substance.”


In the conclusion you make the prospect of a unified theory contingent on the caveat “if we can make the very plausible assumption that nature itself is unified”. Can that ever be more than a plausible assumption?

I have not given this much thought in my book, but now that you mention it, I think it is more than a very plausible assumption that nature is unified. The history of physics has repeatedly demonstrated the success of unified theories, for example, Newton’s unification of celestial and terrestrial motions or Maxwell’s unification of electricity and magnetism. That would suggest some sort of correspondence between the nature of reality and our theoretical pursuits. The unification of nature is corroborated by observation and experiment, but not confirmed. This is an inductive argument: the most successful and long-lasting theories in physics have been unified theories, and on that basis we have good reason to believe that future successful theories will be unified as well.

I thank you for the opportunity to address these challenging questions.

Interview with Wahida Khandker about her forthcoming book Philosophy, Animality and the Life Sciences

I am delighted that Crosscurrents will be publishing Wahida Khandker’s new book Philosophy, Animality and the Life Sciences in July 2014. The book is a study of pathological concepts of animal life in Continental philosophy from Bergson to Haraway.

Here is the blurb:

Amongst contemporary debates about our relation to non-human animals, our use of them for scientific research remains a hugely contentious issue, and one that many Continental philosophical engagements with ‘the animal question’ have (rightly) been accused of shying away from.  On the other hand, traditional moral philosophy has been limited to the demarcation of living beings either within or outside of our circle of moral consideration.  Can Continental approaches to the categories of animality and organic life help us to reconsider our treatment of non-human animals?  This book looks at the philosophical assumptions underpinning these debates by following the historical and philosophical development of the concept of ‘pathological life’ as a means of understanding organic life as a whole.  It explores the significance of this across philosophy and the life sciences through the work of a number of key thinkers of life and process, from Henri Bergson to Donna Haraway, and argues that the concept of pathological life plays a pivotal role in contemporary  reconfigurations of the human-animal distinction.

Wahida has also kindly agreed to answer some questions about the book.

A great deal has been written on animality and the human/animal distinction in recent years. What does Philosophy, Animality and the Life Sciences say that is unique in this area?

I think the uniqueness of this book is in its sustained philosophical study, within a particular strand of the Continental tradition (the neo-vitalist strand running from Bergson to Canguilhem and Foucault), of the key problem that defines ‘critical animal studies’: the human-animal distinction and how its definition and development impacts on our treatment of non-human animals.  Much of the work that is currently taking place, as interesting and valuable as it is, tends to only touch on philosophical concepts and problems such as the nature of subjectivity, the concepts of time or process, and epistemological questions concerning the content and limits of conscious experience.  Through thinkers such as Henri Bergson and Alfred North Whitehead, I try to show how such questions serve as the foundation for enquiries into our relationships with other species on our planet.  Of course, such an approach already exists in environmental ethics where ontological questions underpin theories on the interconnectedness of living and non-living species, and can help to promote care for the environment by underlining the fragile interdependency of individual organisms and ecosystems.  But I wanted to follow this ontological approach in order to tackle the problem of animal rights, looking to thinkers such as Jacques Derrida and Donna Haraway for particular formulations of interspecific and transgenic forms of communication between living organisms, and what this might reveal about our attitudes towards the use of animals for food, labour and experimentation. 


You place a particular emphasis on the concept of “pathological life”. Could you sketch the significance of this term for your project?

My project grew out of a broader interest in the concept of organic life that philosophers such as Henri Bergson are well-known for discussing. As one branch of this broader enquiry, in the writings of the historian of science, Georges Canguilhem, and following him, Michel Foucault, it is claimed that over the course of the nineteenth century the concept of pathological life supplanted the prevailing vitalist theories of an animating principle as the ‘cause’ of living processes. The essential point is that the idea that some ‘breath of life’ or external motive force drives a living thing is simply redundant in the scientific study of organic functions. Rather, life is innately pathological insofar as its efforts can be defined as the attempt to resist disease and death. What is interesting in the histories of philosophy, science and medicine is that the line between life and death—the normal and the pathological—is a mobile boundary. What was once considered pathological or pertaining to disease, at one point in history, is later considered ‘normal’ (e.g., consider the changes in our attitudes towards different mental health conditions and disability). Thus, I consider the implications of this important shift in the scientific and medical understanding of life for the nature of the human-animal distinction. My contention is that the moving boundary between the normal and the pathological has not simply improved our understanding of disease and our ability to stave it off, but it has also facilitated the continual re-constitution of the animal as the outside, inferior, or pathological form of the human.


In the first chapter you criticise the persistence of the Great Chain of Being in our thinking about life and the natural world. To what extent do you think that the Great Chain of Being still controls our discourse about animals today?

It is there in most discussions about the rearing and consumption of animals for food, but this is usually a lazy or unthinking attitude towards meat-eating: human beings are superior, and other animals are there for us to consume.  It is ‘natural’.  Other animals’ capacities tend to be defined as weakened versions of our own.  They lack reason, their feelings of pain or suffering are less intense, and so on.  In fact, such attitudes are also reflected in theories of animal consciousness which tend to be dominated by the assumption that there is a relatively perfected form of consciousness located in the species named Homo sapiens sapiens.  All other forms of consciousness can be categorised as less developed manifestations of conscious life.

In the book you evoke the “fallacy of evolutionary thinking”. Could you explain what you mean by that?

When we think of the evolution of living things, we tend to assume that species that emerged later are naturally better: they are, after all, the products of the Darwinian principle of natural selection according to which weaker or disadvantageous features of a species have been weeded out over time, leaving a fitter organism with characteristics that are advantageous for its survival in its particular environment.  The fallacy lies in a misunderstanding of the role of time.  The evolution of species is not a single process of perfection of life, but rather a continual explosion of diverse forms over the entire course of evolutionary history, which means that species existing today are just as susceptible to selective pressures as species that existed several million years ago.  The same could be said of any lineage we might trace, be it of a species of animal, or a particular concept in the history of ideas.  ‘Later’ does not necessarily mean ‘better’.

You want to resist the tendency to anthropomorphise animals, but you argue for the animalising of the human subject. Why do you reject the first but accept the second?

Attempts to anthropomorphise animals usually come with attempts to fit them into an ‘acceptable’ frame of one kind or another: such as the definition of consciousness (can they reason?), or the circle of moral consideration (can they suffer?).  The problem with such attempts lies precisely in their exclusivity.  The Great Apes are thought to share many traits with human beings and, it is argued, should be accorded similar rights.  Therefore we should not subject chimpanzees to painful experiments or long periods of confinement in laboratories.  However, rats do not possess these traits; therefore, it is acceptable to experiment upon them.

Despite how it sounds, the animalising of the human is not an attempt to derogate human life (indeed, one would only think this on the assumption that animals are our inferiors). It is rather an attempt to consider certain traits that we hold to be exclusively or eminently human as equally characteristic of other species. For example, forms of language and tool use are now recognised in other species. The other key distinction is between humans as ‘agents’, and animals as ‘patients’. What happens if we think about animals as participants in the networks of relations that we form with them (in farms, laboratories, and our homes), rather than simply as passive recipients, subject to our will and actions upon them?


In the book you seem gently to take Donna Haraway to task for not coming out against animal testing. Is the argument of the book intended to draw the reader towards any particular conclusions on this and other current social issues?

I wanted to see if I could write a book at least in part from the stance of an ‘animal advocate.’  When I discuss historical or contemporary instances of animal experimentation, I am immediately concerned with the accompanying attitudes towards it (morally, scientifically/biologically, etc.).  Insofar as I identify the lab-animal-human relation as one of violence, not a productive form of subjectivity (Haraway ponders the duality of the status of laboratory animals), the book does lean towards an ‘abolitionist’ view of the use of animals in scientific research. 


It is sometimes suggested that the status of the animal will be the next big civil rights issue for our society (some argue it already is). Where do you see things going in the years ahead?

There have already been major shifts, at least in certain countries, to the benefit of animals, such as the EU ban on cosmetics testing. Other factors, not related directly to the animal rights movement, have also brought changes in our attitudes towards intensive farming: the problems of obesity in humans, and the spread of disease in farmed animals, have encouraged a wider appreciation of what goes into our food. I do not think it is a matter of there being a straightforward or enlightened trajectory towards the abolition of animal experimentation, since animal activism remains a marginal endeavour (as opposed to environmentalism, which has become more mainstream over the last two decades or so). I think with the burgeoning interest in critical animal studies, it will be interesting to see what effect, if any, this has on the role that universities play in the perpetuation of animal research as the norm. I would like to think that any intensification of research (and intensification of funding for ‘impactful’ research into major diseases such as cancer and heart disease) that uses animals will be matched by an increase in scrutiny by both academics and the public of such practices.