Showing posts with label philosophy.

Wednesday, January 11, 2017

What does it take to be a post-philosopher?


I admit from the outset that my question and my answer are totally rigged, in the sense that I am going to name particular qualities that I happen to possess personally, in a measure supposed to be sufficient. There are also answers where I realize I am pulling the blanket a little toward my own side. I leave them in nevertheless, because they are pertinent. I would point out, however, that they could be replaced by slightly different answers.

• It is better to have a little background in conventional philosophy, if only to avoid being a total philistine who is ignorant even of the terminology of Aristotle and of all those who followed him.

• Be aware of the ordinary discussions about free will.

• Be able to express oneself in English. Curiously, I consider that the German language adds nothing obligatory to the candidate's qualities. That is a way of saying that, if I thought one had to master German in order to be a post-philosopher, I would have made an effort to learn it.

• Be a connoisseur of Rilke and an admirer of Malte Laurids Brigge.

• Above all, one should have a few basic notions of quantum physics.

• Know a little about what DNA is.

• Know Richard Dawkins and atheism.

• One should know computer programming fairly well, along with the challenges of what is sometimes called artificial intelligence.

• One should be able to type properly on a computer. The content of what I am proposing could hardly be transmitted by somebody who relies solely on texting on an iPad. For that matter, never in my life have I attempted such an operation. I remain incredibly old-style!
There are no doubt flaws in my propositions. But, even if they contained only a gram of seriousness, not to mention truth, one would have to admit that most of the candidates known as "intellectuals" stand light-years away.

Sunday, December 4, 2016

Damages of death

The following blog post is dedicated to friends who have suffered—recently or less recently—from the death of loved ones. Unfortunately I'm aware of an unavoidable problem in my reasoning. The basic idea that our human brains were never designed to handle philosophical and/or scientific thinking is best understood by those who've read a science book such as The Magic of Reality by Richard Dawkins. If you've never encountered such a book, then my elementary reasoning might fail to convince you.
It is pointless to think of a deceased individual as “damaged”. He/she has simply disappeared. My use of the word “damages” refers to those who are left behind: relatives and friends of the deceased. Often they will have called upon subterfuges to soften the blow of the death of their loved one. But this “solution” might not work successfully in the immediate future, if ever. In the past, religions provided the best subterfuges. But, with the disappearance of profound religiosity in society, this kind of subterfuge is losing its force, if not totally disappearing.

To bear the unbearable, I know of only one powerful subterfuge, which has been dominating my personal existence for several years. I adopted it when I became totally atheistic. That was after my encountering, above all, the writings of Richard Dawkins. My subterfuge is quite simple. It consists of admitting that we humans are indeed terribly weak creatures. Our brains were created long ago, at a time when the only ambitions of primitive Homo sapiens were to survive and procreate. This involved tasks such as hunting for food, combating many enemies (including other humans), and recovering from sickness. But the cerebral mechanisms of that archaic creature were hardly designed to grasp challenges that would finally culminate in logic, reason, philosophy and science. The highest level we’ve ever attained consists of realizing in a fuzzy fashion that we’ll never move close to anything like a greater understanding of our existence. So, the best conclusion is to give up searching. Our quest is doomed, and all attempts to pursue this quest will inevitably hurt us. We must simply learn to abandon all such desires.

For centuries, readers of The Divine Comedy of Dante Alighieri have recoiled from the terrible inscription at the entrance to Hell:

Abandon all hope, ye who enter here.


The Barque of Dante by Eugène Delacroix

My personal reaction is totally the opposite. We must indeed abandon all hope for, in doing so, we free ourselves of the pain of trying to understand things that we were simply never built to understand! Consequently, instead of descending into sadness, we can spend the rest of our existence doing only the things we were designed to do, and thinking things that we are capable of thinking.

There is a corollary to my formula for happiness. The consequences of following the river Styx to Hell are not only abominable; they’re also clearly absurd, and therefore impossible. I don’t know where the Homo sapiens invention is located in the panoply of possible creations, but I have the impression that it’s not too far up the ladder. Today, we’ve almost attained a point of implosion… which makes me feel that the end is near. Up until now, the animal world seemed to have advanced in several spurts, none of which ever got anywhere near lasting for a lengthy period. Dinosaurs were probably the greatest happening on Earth… but they were wiped out long before they might have started (?) to build science laboratories and write books. And it’s most likely that Homo sapiens will do little better than the poor old dinosaurs. So, I can’t possibly imagine how or why the processes of Nature might get involved in building creatures that end up constructing real-life creations of the kind of medieval rubbish described by Dante. If they had the skills to tackle creations of that kind, they would surely be far more interested in building spaceships…

There is another corollary to my formula for happiness. I might describe it as “mind-boggling”… but that would be wrong, because this corollary is so simple and obvious that it doesn’t boggle my little mind in any sense whatsoever. Here’s my second corollary: Everything that makes up the universe as we imagine it (fuzzily) today has been here forever, and will continue to exist forever. Not only is it difficult to imagine that what we call “time” (an invention of Homo sapiens) might have a beginning and an end; it’s totally absurd. So, we should abandon such silly ideas, in the same way that we abandon Dante’s “hope”. That leaves us with the bare necessities of a Good Life freed from archaic rubbish of the kind that fascinated earlier specimens of our race… and faced solely with the pursuit of human happiness and goodness.

If you wanted a model for our existence, and you were prepared to accept a fictional one, I would highly recommend the Sermon on the Mount, which is surely some of the finest literature ever written.

Saturday, October 22, 2016

Random morning message

There’s a nice French expression to designate a sudden urge: une envie de pisser [wish to pee]. That’s what happened to me a moment ago, leading up to the present message. It’s a philosophical viewpoint that has been pursuing me ceaselessly for a long time. So, here it is.

Our outlook on existence is totally biased by the particular dimensions of our observations, which define a mere window. We remain incapable of adopting windows that might be more macroscopic or microscopic.

• The first weakness means that, in spite of our gigantic windows out into space-time, we remain like ants who imagine their anthill as the entire universe.

• The second weakness means that, in spite of our fondness for elementary particles and string theory, we humans are not very good at dealing with things that are far smaller than what we see through our eye-glasses.

Besides, it’s funny that we introduce a direction into these two scale differences. What right do we have to say that, in the macroscopic case, existence appears to get bigger and bigger, whereas it’s smaller and smaller in the microscopic case? Maybe we should simply say that the differences are no more than changes in our two kinds of viewpoints, without claiming that one change is “bigger” and the other “smaller”.

For the moment, it’s primarily the second weakness that has inspired my matinal philosophy message… but nothing really changes when we move to the first weakness. All our human conclusions about what is good or bad, and what is right or wrong, have been concocted from within our familiar everyday window, at the level of human organs and our devices such as eye-glasses. For example, people use their normal viewpoint to encounter all kinds of happenings, from peace and love up to war and terror. This suggests that our above-mentioned human conclusions would no longer have the same sense if we were to modify our viewpoint, by moving in an up/down direction. In other words, morality is not a universal phenomenon. It’s rather a purely relative viewpoint-based affair.

Personally, I am both awed and frightened by this conclusion. For the moment, therefore, I avoid the temptation of accepting it completely.


Ah, if only our existence were to be nothing more than watching a rugby match! Sadly, at no instant has my life ever moved into such a nirvana. That has always been my major problem...

Friday, January 2, 2015

Rosalie’s duck

Jesus said, "I praise you, Father, Lord of heaven and earth, because you have hidden these things from wise and intelligent people and have revealed them to children." — Matthew 11:25
I’m convinced that, if ever the individual referred to as Jesus had existed, he might indeed have said something like that. That's to say, Jesus—himself a bright fellow—surely understood that there was great clear-sightedness, discernment and rationality in the gaze of a child.

Back in 1977, when I was driving around Scotland with my children, visiting places that I planned to mention in my forthcoming tourist guide to Great Britain, my 8-year-old son François provided us with a wonderful example of childhood wisdom. We were sitting on the shores of Loch Ness, and talking inevitably about the legendary monster.


François: “If ever the monster existed, down at the bottom of Loch Ness, it wouldn’t waste its time wondering whether or not we humans exist. So, why should we spend our time wondering whether or not the monster exists?” That was symmetrical reasoning of a high order.

A few years later on, at the Ruflet estate in Brittany, Christine was talking with the children about a serious family problem that had arisen. I don't recall the details, but it was quite complicated. No matter what solution was imagined, there was always a good reason why it wouldn’t work. So, everybody was moving around in circles, looking for some way of solving the problem. After a long pause in the discussion, young François voiced an unexpected opinion: “It’s like Rosalie’s duck.” 

Now, to understand that remark, you need to know that Rosalie was a rural lady (maybe a widow by that time) who had spent her life in charge of the main farm at the Ruflet domain. For us, she was renowned for the excellent poultry she raised, which was constantly present on festive tables in Christine’s family context. And we must imagine that, in the midst of Rosalie’s chickens (with thighs like champion Breton cyclists), there was a duck.


Manya was rather angry to hear her brother’s remark. “François, here we are, talking about a serious family problem, which nobody seems to be able to solve. As soon as we think there’s an answer, it turns out to be wrong. Then we have to start looking for another possible answer. And stupidly, in the middle of our discussion, you start talking about Rosalie’s duck… which has nothing whatsoever to do with what we’re talking about.”

The reaction of François was simple but brilliant: “Manya, you’ve obviously never tried to catch Rosalie’s duck.” He went on to explain that he himself had often tried to catch Rosalie's duck. But, whenever he made an attempt to jump upon the bird, it vanished instantly to another spot. It was impossible to pin it down. And François had realized that this was the essence of the family problem that was being discussed.

In fact, Rosalie's duck was behaving like a run-of-the-mill quantum event. The animal was acting with the elusiveness of an electron. These days, I’ve got around to thinking that, in my forthcoming philosophical autobiography to be entitled We are Such Stuff, I may well use the expression Rosalie’s duck as the title of my chapter on the greatest metaphysical question ever asked (dixit Heidegger):

Why is there something rather than nothingness?

Monday, October 27, 2014

Shortest distance between two points

Don’t ask me why I adore this image:


My joy is strictly Platonic. The old Greek fuddy-duddy tried to persuade us that Heaven is full of ideal forms: of dogs, cats, good guys, bad women, mathematical triangles, tables, chairs, whatever… And the vulgar objects that we encounter here on Earth are pale copies of these ethereal forms. Well, the above image indicates that something went wrong on Plato’s way to the theatre. His ideal straight line got screwed up. It ran into a bug. And that bug happened to be a fragment of green DNA-based life. Piss off, Plato…

Saturday, June 15, 2013

Breakfast outside

A simple joy at Gamone, now that the weather has warmed up, is to breakfast outside, and to spend a moment reading in the sunshine.


Daniel Dennett's Intuition Pumps is a most refreshing approach to down-to-earth "thinking tools" of the kind that are (or should be) used by philosophically-minded scientists and scientifically-minded philosophers.

Friday, July 29, 2011

Relativity

I've always been intrigued by manifestations of an everyday concept that can only be called relativity… although it has nothing to do with Einstein. I'm talking of the fact that an individual X might consider such-and-such a thing as important, whereas an individual Y might consider the same thing as trivial. That's to say, the thing is, or is not, important/trivial depending on the identity of its respective viewers. And that's why I suggest (rightly or wrongly, at a language level) that it's a case of relativity.

Ever since the inception of my Antipodes blog in December 2006 [display], its spirit has revolved constantly around the concept of an upside-down world in which certain folk seem to be walking on their heads… when viewed, that is, by folk on the other side of our conceptual planet.

I'm amazed whenever the ordinary universe reveals itself (above all, in the domain of quantum physics and cosmology) as extraordinary. Inversely, I'm amused when I see that dull phenomena (such as tourism in my native land) can be interpreted by their beholders as objects of planetary contemplation. I ask myself constantly: Why can't we all agree about what's important (and what's trivial), what's amazing (and what's run-of-the-mill), what's beautiful (and what's dull), what's precious (and what's cheap), etc.

Today, I'm convinced that this theme of everyday relativity is all-important, because it determines whether or not we're talking on the same wavelength, or even talking about the same issues. I shall never forget the experience, back in 2006 in Sydney, of describing with enthusiasm, to my uncle Peter and his wife Nancy Walker, the reasons why it was so fundamentally important for me to make this pilgrimage from France, back to Australia, to visit our ancestral Braidwood. After listening to my profound explanations, Peter said to me: "William, you must realize that nobody gives a screw about all that you've just been saying." I remember, above all, the term "screw", a euphemism for "fuck" (since Peter never used bad language). He was right, in his tiny narrow-minded way. But, in most ways, Peter was utterly wrong, for he had sadly misjudged (underestimated) what makes the world go round. In a nutshell: Our constant challenge of evaluating what went wrong in the past, and trying to improve things for the future. That, my dear ignorant uncle Peter, is what people have been giving countless literal fucks about for the last few billion years.

Sadly, I never saw Braidwood, because there was simply nobody to take me there. For me, this was a gigantic disappointment... which accounts for much of the distaste I now express for that silly sunburnt country and its people that I used to love.

This relativity theme is so huge that I've lost steam (in criticizing my uncle) before I even got started. I'll get back to it in later blog posts...

I've been talking on about anything and everything for years, in this Antipodes blog, designed to evoke interesting responses from those around me, particularly my genetic relatives. Well, in all that time, I continue to find it utterly amazing that this blog has never recorded a single instance of a significant reaction from any individuals in that "genetic relatives" category. It's as if they all signed off as soon as they saw the first words of Antipodes. In fact, I don't give a screw.

Monday, June 20, 2011

Are certain babies born to be criminals?

I was interested to come across an article in the New York Times [display] entitled Genetic Basis for Crime: A New Look, related to a US conference in Arlington that is opening this morning with a forum on "creating databases for information about DNA and new genetic markers that forensic scientists are discovering".

For many decades, we've all known that, when talking about the fuzzy but touchy subject of crime, it has been politically incorrect to evoke an alleged role of genes. Besides, we're more convinced than ever that, today, the only admissible direct answer to the question in my title—Are certain babies born to be criminals?—is no. But much has been evolving recently concerning our appreciation and evaluation of the undeniable influences of people's genes concerning their future behavior in society. And my question needs to be answered in a far more subtle manner than by a simple yes or no. As soon as I started to read the NYTimes article, I said to myself that the journalist surely couldn't carry on discussing a "new look" at arguments about a genetic basis for crime without mentioning the work of Steven Pinker, as evoked in my blog post of 24 February 2011 entitled I think, therefore I am… misguided [display].

Not surprisingly, the journalist soon got around to quoting Pinker, and even mentioned his latest book—The Better Angels of Our Nature, Why Violence has Declined—which is already announced by Amazon and receiving advance comments… even though it won't be published until next October!

In The Blank Slate, Pinker started out by evoking the most outspoken observer of Man's propensity to violence: the English philosopher Thomas Hobbes.

Here is the text of Hobbes's "life of man" statement, which so alarmed his fellow citizens that the great thinker's brutal analysis was basically ignored for some three centuries… up until recent times.

Hereby it is manifest that, during the time men live without a common power to keep them all in awe, they are in that condition which is called war; and such a war as is of every man against every man. In such condition there is no place for industry, because the fruit thereof is uncertain: and consequently no culture of the earth; no navigation, nor use of the commodities that may be imported by sea; no commodious building; no instruments of moving and removing such things as require much force; no knowledge of the face of the earth; no account of time; no arts; no letters; no society; and which is worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short.

To handle the state of Wild West lawlessness that he depicted as the normal destiny of humans in their natural state, Hobbes suggested that society might install a monstrous all-powerful sheriff, the Leviathan, inspired by ancient Judaic mythology.

But this "solution" was both unpleasant and unconvincing. So, ever since that catastrophic vision of humanity penned by Hobbes, countless critics have been trying to prove, simply, that he was totally wrong.

A few decades ago, many Americans tried to believe that even the worst criminals could be coaxed back into the folds of society by a process designated as rehabilitation. In 1970, a young Texan law professor and former US attorney-general, Ramsey Clark, with an unbounded belief in peace, wrote a book entitled Crime in America in which he sought to promote this wishful thinking.

Here are excerpts from Ramsey Clark on the rehabilitation theme:
Rehabilitation must be the goal of modern corrections. Every other consideration should be subordinate to it. To rehabilitate is to give health, freedom from drugs and alcohol, to provide education, vocational training, understanding and the ability to contribute to society. […]

Rehabilitated, an individual will not have the capacity—cannot bring himself—to injure another or take or destroy property. […]

The end sought by rehabilitation is a stable individual returned to community life, capable of constructive participation and incapable of crime. From the very beginning, the direction of the correctional process must be back toward the community. It is in the community that crime will be committed or a useful life lived.
Today, individuals such as Pinker, convinced that genes influence greatly an individual's propensity to commit crimes, are starting to debunk the "moralistic fallacy" (as he puts it) of rehabilitation. The challenge they face consists of explaining to concerned citizens that emphasizing the primeval causal role of genes in criminality is not at all equivalent to imagining the existence of a single binary-valued "crime gene", which is either turned on in the case of wrongdoers, or off in the case of decent citizens. That is not at all what is meant by a genetic dimension to criminality. It's far more subtle and complex than that. Besides, individuals are not generally condemned to a life of crime by the mere presence of "risky genes". Such a presence would simply indicate that there is probably an increased risk that such individuals would fall into crime more readily than those who have no such genes. Genes, even when present, can be flipped on and off by environmental factors, and that is what gives us hope as far as combating violence and crime is concerned.

Today, as the article on the Arlington conference points out, the tide is turning in the sense that biology and genetics are no longer dirty words in the arena of research on violence and crime. But it would be naive to imagine that the ideas of an evolutionary psychologist such as Pinker are about to be welcomed wholeheartedly by the entire criminological establishment. In any case, the Hobbesian vision of humanity was surely closer to reality than the "blank slate" thinking of idealists such as Jean-Jacques Rousseau and Ramsey Clark.

Saturday, May 28, 2011

Voices from Vienna

When I was a student in Sydney, already fascinated by symbolic logic (as I still am), two of my intellectual heroes were the eccentric British lord Bertrand Russell and the equally exotic Viennese philosopher Ludwig Wittgenstein.

An English translation of Wittgenstein's Tractatus Logico-Philosophicus can be downloaded today from the Gutenberg website [access]. The philosopher's father, in the industrial context of the Austro-Hungarian empire, was a wealthy iron-and-steel baron—of the Krupp or Rothschild kind. When the dreamy melancholic 24-year-old Ludwig inherited this fortune, he gave some of it away, anonymously, to struggling compatriots such as the poet Rainer Maria Rilke, often mentioned in this blog [display].

At a philosophical level, Russell and Wittgenstein represented the great British tradition of empiricism, based upon the common-sense notion that we learn the truth about the world by looking at events that happen and employing the time-honored technique of inductive reasoning. Now, another Viennese philosopher would soon throw a spanner into the works by demonstrating convincingly that scientific knowledge is certainly not acquired by such an illusory empiricist approach.

Karl Popper proposed that an exceptional scientist succeeds in explaining the universe, not by studying data of a laboratory kind, but through his/her intellect and imagination, maybe while seated alone at a desk in the middle of the night. Subsequently, experimental observations enable the inventor of a scientific theory to determine whether the latter might have flaws in it, in which case the theory would need to be corrected, improved or maybe abandoned, to be replaced one day by a better theory.

Today, there is surely not a single serious scientist in the world who wouldn't agree entirely with Popper, who is now considered by many intellectuals as the greatest philosopher of the 20th century. Popper is lauded particularly by the Oxford quantum physicist David Deutsch, author of The Fabric of Reality, mentioned in my blog post of July 2007 entitled Brilliant book [display].

A third Viennese intellectual who would achieve fame in the English-speaking world was Ernst Gombrich, regarded by many as the greatest art historian of the 20th century. Settled in London from 1936, he went on to become a distinguished member of the art establishment. His opus The Story of Art (1950) was the first of a rich series of publications that won him acclaim in academic circles, and led to his being knighted. Gombrich had always been a close friend of his compatriot Popper, and actually played a major role in drawing the attention of the English-speaking world to the Viennese philosopher and helping him to publish The Open Society and Its Enemies (1945).

Back in 1976, I wrote to Ernst Gombrich asking for his advice concerning a writing project on which I was working. In a nutshell, I was wondering whether I might be able to put together a history of the use of the arrow symbol in both science and society. Here are the two pages [click to enlarge] of his friendly reply, in which he alludes to his compatriot Popper:



Let me conclude with a couple of trivial anecdotes concerning Ludwig Wittgenstein.

Some people (but not me) imagine that the education meted out by a fine old school can be a guarantee that students will evolve naturally into fine citizens with noble characters. That's what is meant by nurture. Well, around 1903, 14-year-old Wittgenstein went to a reputed establishment in Linz known as the Realschule. And we have no reasons to deny that the spirit of this school played a part in transforming young Ludwig into the outstanding philosopher that he was to become. But there's a hitch in this thinking. At the same school, Ludwig had a mate, just six days older than himself, named Adolf Hitler.

An Australian author, Kimberley Cornish, has even suggested that the future Fuhrer hated the Jewish boy to such an extent that Wittgenstein symbolized the entire race that would soon enrage the mad dictator, as expressed in his Mein Kampf. Cornish's book is nevertheless controversial, in that there is no firm proof that Wittgenstein and Hitler were aware of one another's identity in that high school of 300 students. There is a school photo in which Hitler certainly appears:


But it has never been confirmed—except, curiously, by the photographic services of the Victorian police in Australia—that the boy whom Cornish has labeled as Wittgenstein is correctly identified. And some critics point out that Wittgenstein and Hitler, although they attended the Realschule at the same time, were never in the same class.

My final anecdote, of a personal nature, was related already in my blog post of July 2008 entitled Danger scale [display], in which I announced with excitement my discovery of the writings of Steven Pinker. Since the Harvard psychologist's book deals with children's acquisition of language, I mentioned a story that had amazed me when I heard it, from an English lady named Elizabeth Anscombe, who happened to be a Catholic friend of my wife's parents in Brittany. Well, I learned later on from my mother-in-law (after the lady's departure, much to my regret) that Elizabeth Anscombe, a professor of philosophy at Cambridge, was in fact one of the world's leading authorities on Wittgenstein. I was terribly frustrated to realize that I had missed out on an opportunity of chatting with Elizabeth Anscombe about Ludwig Wittgenstein (whom she had encountered personally)… but Christine's mother could never have suspected that her Australian son-in-law might be interested in an obscure Viennese philosopher.

Thursday, February 24, 2011

I think, therefore I am… misguided

I've just started to reread a book by the Harvard psychologist Steven Pinker, published in 2002.

It's an exceptional book, like all of Pinker's published works, dealing with the time-honored debate between nature and nurture. That's to say: Is the character of a human being determined by his inherited genes, or are we forged essentially by our childhood environment and upbringing? The title of Pinker's book is The Blank Slate. Maybe I should explain to young readers of this blog that, once upon a time, when the Earth was young (before the invention of the iPad), and writing paper was still a relatively expensive commodity, school children used to carry out their class exercises in subjects such as arithmetic and spelling by means of a reusable writing tablet composed of a flat and thin rectangular slab of dark stone, known as a slate.

As Pinker points out, the metaphor of a blank slate is usually attributed to the British philosopher John Locke. The expression itself seems to be a variation on the medieval Latin tabula rasa [scraped tablet]. These days, in everyday French, people use the expression "table rase" to designate the notion of starting from scratch. In monasteries, the tabula was a notice board on which daily chores were associated explicitly with the various monks. So, a tabula rasa was a notice board that was momentarily empty.

In view of its title, Pinker's book on the nature/nurture debate seems to be begging the question, since the blank slate metaphor represents the nurture viewpoint that a baby's brain is relatively empty before being metamorphosed by the acquisition of experiences from the surrounding environment, including above all the people around him, which form his personality, character, intelligence and all the rest. On the contrary, Pinker's book defends the nature viewpoint, in the sense that he considers that human babies are born with a lot of their future intellectual resources already "wired in". In other words, he argues against the theme expressed by his title. At first sight, this is a little confusing for the reader. It's as if the author of a book in support of atheism (such as The God Delusion by Richard Dawkins, for example) were to have chosen whimsically a religiously-oriented metaphorical title such as, say, The Great Ship of God, before going on to liken it to the Titanic.

Pinker introduces several other celebrated metaphors, always in a negative sense. That's to say, he carefully explains these metaphors, and then spends the rest of his book demolishing them.

One of Pinker's earliest targets is dear old René Descartes. Throughout the western world, everybody loves him, because he told us that a human being is a very special hunk of meat equipped with a mysterious thing called a mind. That's to say, a human being is a tandem affair. On the one hand, each individual has a body. And at the same time, he has a mind, which might be thought of as "driving" the body, in much the same way that a person drives an automobile. Now, that's an idea that cannot fail to flatter us, because it's a bit of a bore being stuck with the vulgar hardware, the meat, devoid of an explicit and essentially autonomous self, a mind, indeed a soul. Descartes describes differences between the two collaborators, which might be likened respectively to a nice fresh apple and a hard billiard ball.

I'm not suggesting that Descartes liked apples or played billiards. These objects are merely proposed as a convenient way of looking at Cartesian dualism. Descartes distinguished the two entities by means of a criterion of partial destruction. An individual can lose a part of his body—an arm or a leg, say—just as easily as you take a slice out of an apple. On the other hand, it's practically impossible to cut a billiard ball into slices. Today, nobody likens the human mind to an indestructible billiard ball, because we've heard so much about schizophrenia and so-called split personalities. There are even spectacular cases of patients whose cerebral hemispheres function separately.

A new metaphor for the Cartesian mind was invented by the British philosopher Gilbert Ryle, who referred to it as the ghost in the machine, which seems to be a modernized variation on the ancient "deus ex machina" theme.

During the 18th century, the philosopher Jean-Jacques Rousseau invoked the theme of a wise and pristine creature, untouched by the vices of civilization, whom he designated as the noble savage, whose slates had seen no evil, heard no evil, and spoken no evil. And explorers in the Pacific often imagined that they had come upon such human tribes. But they were inevitably disillusioned before long.

Against the backdrop that I've just sketched, Pinker's task in The Blank Slate is frankly revolutionary, for he attempts to present conclusions—those of the computational theory of the mind—that are often totally astounding. Here's a typically-succinct paragraph in which Pinker employs a wonderful but little-used concept, consilience, which my Macintosh dictionary defines as "an agreement between the approaches to a topic of different academic subjects, especially science and the humanities".

History and culture, then, can be grounded in psychology, which can be grounded in computation, neuroscience, genetics and evolution. But this kind of talk sets off alarms in the minds of many nonscientists. They fear that consilience is a smokescreen for a hostile takeover of the humanities, arts and social sciences by philistines in white coats. The richness of their subject matter would be dumbed down into a generic palaver about neurons, genes and evolutionary urges. This scenario is often called reductionism, and I will conclude the chapter by showing why consilience does not call for it.

I agree entirely with Pinker's precise summary of the situation. On countless occasions, when I've attempted to defend a purely scientific worldview in various human domains, I've encountered immediate accusations of reductionism… often disguised in polite phrases such as "Science can't explain everything" or "You're forgetting the spiritual dimension of our existence". And when I try to affirm that science should be able to explain everything, or that an expression such as "the spiritual dimension of our existence" is fuzzy to the point of being meaningless, many people assume immediately that I'm a dull-witted crackpot with no sensitivity for humanism and culture. Worse, they see me as a misanthrope who has deliberately turned himself away from the real pulsating world of people.

Pinker was courageous to tackle the gigantic challenge of demolishing beliefs in a blank slate, which he designates as "the modern denial of human nature". For me, it's a joy and an easy task to follow Pinker's presentation of the current situation… but that's merely because I agreed entirely with his outlook even before my first encounter with his book. On the other hand, I don't know to what extent a firm believer in the blank-slate concept would be swayed by such a densely-written book. And I fear that even my humble blog post is likely to appear to certain readers as confused mumbo-jumbo.

Tuesday, November 30, 2010

People and places named Berkeley

When I visited London for the first time, in 1962, I had an account with an Australian bank whose offices were located on Berkeley Square, an elegant tree-shaded corner of Westminster.

At that time, I had no reason to be interested in the fact—if I had known it—that this square used to be the London address of an ancient family named Berkeley whose castle was located over in Gloucestershire, to the north of Bristol.

This was not the first time I had encountered the name Berkeley. As a philosophy student in Australia, I had been greatly intrigued by the weirdly imaginative ideas of the Anglo-Irish bishop George Berkeley [1685-1753].

He suggested that material objects might not really exist such as we commonly envisage them. When we perceive the presence of such an object, our perceptions of it are indeed quite real, but they don't necessarily prove that there exists, behind these perceptions, a material object that is constantly present, even when it's not being perceived. This way of looking at things raises a problem. If an object only exists when it is being perceived, then what becomes of it as soon as it is no longer being perceived? Imagine a tree in the forest. Does it cease to exist when it's no longer perceived, and then come back into existence as soon as there's somebody to perceive it once again? That doesn't sound like a very reassuring explanation of existence, to say the least. Berkeley appealed to magic to extricate himself from this puzzling situation. He suggested that the tree never really ceases to exist at any instant, no matter whether or not a human viewer is looking at it, since God is on hand permanently to perceive it. Funnily enough, in spite of the weird nature of Berkeley's theory, it receives an echo in modern physics, where commonsense notions of matter have been replaced by abstract constructs. As Bertrand Russell once said about matter: "I should define it as what satisfies the equations of physics."

George Berkeley (who wasn't yet a bishop) spent a few years in America, and he happens to be the author of a celebrated line of poetry: Westward the course of empire takes its way. These words inspired the famous mural painting by Emanuel Leutze representing the arrival of European Americans on the shores of the Pacific.

These words were also the reason why the name of the poet George Berkeley was given to the future university city in California.

It is said that George Berkeley was in fact a descendant of the above-mentioned ancient family from Gloucestershire. This idea amuses me greatly, for I too am a descendant of those folk. The patriarch of that family, Maurice Berkeley [1218-1281], married Isabel de Douvres, daughter of the Fitzroy chap—designated in the following chart as Richard Chilham, a bastard son of King John—after whom I have named my young Border Collie dog.

My findings in this ancient family-history domain are relatively recent (dating from the second half of 2009), and there are still many loose ends that I haven't got around to exploring. Among these loose ends, there have been these two men named Berkeley. I now realize that I shall only have to "plug myself into" the rich and well-documented history of the Berkeley family, and I shall surely be able to enhance rapidly and considerably my existing research results.

Thursday, October 14, 2010

Paradoxes

Have you ever wanted to know what a web page is? Well, here to satisfy your curiosity is a screen shot of part of a typical web page.


In fact, it's a picture of a Wikipedia web page that explains what a web page is. If you want to access the web page in question, just click the above picture. Now, the Wikipedia article also contains a picture of a typical web page, and you can click that picture to see the web page in question. Once again, that will provide you with a clickable picture of a typical web page. But the embedding seems to stop at that point… instead of going on to infinity (as I had hoped).

When I was a child, I was fascinated by the packet of breakfast cereals that displayed, on the front side of the packet, an image of itself. For years, that picture created a tempest in my mind, and screwed up the calm breakfast atmosphere at South Grafton.

In my previous article, I evoked modern logic. After the cereal packet featuring a picture of a cereal packet (which in turn featured a picture of a cereal packet, and so on), my next biggest mental shock (several years later) was the paradox of Bertrand Russell about sets that are not members of themselves. Consider the set of all possible ideas. Obviously, that set is itself an idea. So, the set of all possible ideas is a member of itself. On the other hand, it's clear that the set of all pipes is not a pipe. So, the set of all pipes is not a member of itself. Consider all possible sets of the latter kind: that's to say, the set of all sets that are not members of themselves. Is that gigantic set a member of itself? Good question. To be a member of itself, that set would have to be a set that is not a member of itself. And if it is not a member of itself, then it satisfies the defining condition, so it would have to be a member of itself after all. Whichever answer you choose, you contradict yourself.

That sounds like a lot of mere words. No, Russell's paradox was much more than mere words. Curiously, nobody ever bothered to inform members of the philosophy department at Sydney University that Russell had evoked this enigmatic set… and, in so doing, destroyed forever all the formulations of logic that had existed up until then.

Wednesday, October 13, 2010

Man gave names to all the animals

For a year at the University of Sydney, I attended the classes of John Anderson [1893-1962] in Greek philosophy. It wasn't very exciting stuff—a little like entering a fine-looking restaurant in Paris and being served a ham sandwich—for the obvious reason that philosophical thinking, like everything else, has evolved considerably during the two millennia since the ancient Greeks. Listening to the Scottish gentleman rambling on about Plato and Aristotle was equivalent to sitting in on mathematics lectures presenting the elements of Euclidean geometry, or attending a year-long course on the astronomy of Isaac Newton. I've already said that it was grotesque to be teaching a university course in Aristotelian logic at a time when this domain had been totally dominated for decades by so-called symbolic (mathematical) logic. As for delving into the complicated reasons why Socrates was made to drink hemlock for allegedly corrupting the youth of Athens, that was a pure waste of time for students in the middle of the 20th century.

On the other hand, in the midst of all this antiquated mumbo-jumbo, I did appreciate one small but non-trivial item of philosophical culture: Plato's theory that things in the real world are mere imperfect instances of so-called universals, which are ideal models of a purely abstract nature, existing only in the mind of God. Funnily enough, my familiarity with Plato's so-called theory of ideas made it easy for me, many years later, to grasp the avant-garde approach to computer programming known as object-oriented programming. Here you start by defining an abstract class, which is then used to create concrete instances of that class, referred to as objects.

For Plato, the countless dogs that we meet up with in the everyday world are merely instances of the divine concept of dog-ness, while cats are instances of cat-ness. And Bob Dylan seemed to be perspicacious when he pointed out that Man, in the beginning, had been obliged to give names to all the animals. This is exactly what a computer programmer does when he starts to invent the classes for an object-oriented project.
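To make the parallel concrete, here is a tiny illustrative sketch in Python (my own toy example, of course, not anything written by Plato or sung by Dylan). The class plays the role of the universal, and each named object is a humble earthly instance of it:

# The class is the programmer's "universal": an abstract description of dog-ness.
class Dog:
    def __init__(self, name):
        self.name = name        # each earthly instance gets its own particular name

    def speak(self):
        return self.name + " says woof"

# The objects are individual instances of that ideal form.
rex = Dog("Rex")
fido = Dog("Fido")

print(rex.speak())   # Rex says woof
print(fido.speak())  # Fido says woof

Giving names to the instances, one by one, is the programmer's modest version of the task that Dylan's song hands to Man.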

The only annoying aspect of Plato's theory is that, while it may be helpful for somebody who needs to master object-oriented computer programming, it is totally and unequivocally wrong as a philosophical explanation of our real world.

Richard Dawkins explains Plato's error brilliantly in the opening pages of his latest masterpiece, The Greatest Show on Earth, which I mentioned briefly a year ago [display]. Truly, if you plan to buy and read only one book in the immediate future, make sure it's this one, since this book proposes knowledge that is an absolute must for all informed and cultivated citizens of our day and age. The author asks a simple rhetorical question: Why has it taken so long for humanity to grasp Darwin's "luminously simple idea"? Dawkins replies that the fault lies with Plato. To understand evolution, you have to abandon your naive Platonic trust in concepts such as dog-ness, cat-ness or anything-else-ness. We exist in a perpetually evolving universe in which a single creature could well combine simultaneously a bit of dog-ness and a bit of cat-ness. Or maybe this creature seems to exhibit a lot of dog-ness today, whereas his remote ancestors were better described as apparent instances of wolf-ness. In any case, there's an amazing aspect of Darwinian evolution that demolishes Plato's universals, not only in theory, but at a real-life practical level. This is the fact that the planet Earth has actually witnessed—at one moment or another, and for a lapse of time that allowed for procreation—a living specimen of every imaginable creature on the scale that separates pure dog-ness from pure cat-ness. To see why this apparently exotic claim can be made, you only have to envisage (if you have sufficient imagination) the last common ancestor of dogs and cats, which may or may not have looked physically like something in between a typical dog and a typical cat. (The chances are that it looked like neither.) Between that strange creature and a dog, evolution gave rise to a big series of intermediate animals that ended up looking more and more like dogs. The same can be said for the path from that archaic creature to a cat. So, we only need to imagine these two series of animals laid end-to-end (with their common ancestor in the middle), and we have obtained the real-life metamorphosis of a dog into a cat, or vice-versa. But, if Man had to find names for every member of this gigantic set of specimens, Dylan would be singing for centuries.

Long ago, when I first heard Professor Anderson describing Plato's theory of ideas, I was truly charmed by the image of our watching shadows cast by a camp-fire on the wall at the far end of a cave. It was a romantic Boy Scout metaphor, and I'm sad today, in a way, to realize that Plato's fire has gone out forever. Happily, though, Darwin has led us out of the obscure cave and into the light and warmth of the Sun.

Sunday, May 9, 2010

Morals

I know this is going to sound silly, but I'll say it all the same. Ever since my youth, I've been intrigued by the philosophy of morals. A child's first introduction to the notions of right and wrong is based largely upon punishment. It's wrong to poke your tongue out at an old man, even though he looks like a scarecrow. So, if the child does so, it's normal that he's likely to be spanked by his mother or father. It's also wrong to play with safety matches, but the punishment is of a different kind. Instead of a spanking, your fingers get burned. Although both actions—making fun of old folk, and playing with dangerous devices—are things that a child "should not do", the child soon starts to feel that there's a difference between these two categories of bad deeds. In the first case, the wrongness consists, as it were, of doing unto another something that you maybe wouldn't wish to be done unto yourself. In the second case, it's simply a matter of not accepting sound advice from experienced oldies who've already made those same mistakes and paid the price in pain.

Within the territory of right and wrong, good and bad, there are a striking number of loopholes, or rather patches of no-man's-land, particularly when other partners step into the picture: social customs, the influence of peer groups, the law of the land and, above all, religions. The territory is transformed into a vast muddy field, where youthful adventurers soon get bogged down… particularly when sex raises its naughty head. For example, young people generally feel that fornicating is good stuff, even when they haven't yet reached the so-called "legal age" for acts that were referred to, in Australian law, by a delightfully exotic and erotic expression capable of giving a young man an erection: carnal knowledge. Screwing was not explicitly forbidden in the Ten Commandments (except in the form of adultery). Admittedly, if the partners in such a timid crime happened to forget about contraception (often because they didn't know what it was all about), then it could resemble the case of innocent children playing with safety matches.

For all these reasons (and many more), I signed up for a course in moral philosophy at the University of Sydney, in the context of my science studies. There, the naive 16-year-old country boy from Grafton was confronted immediately and inspired immensely by a wise old man from the past named Socrates.

After asserting that "the unexamined life is not worth living", he was put to death for allegedly corrupting the youth of Athens. Clearly, there were diabolical dimensions in the quest for the truth about morals… if such a truth existed. This became more and more obvious to me when I finally had a chance of looking at what had happened in Auschwitz… which had never been a noteworthy event, curiously, back in my hometown circles.

The classes of an obscure professor of moral philosophy named Alan Ker Stout [1900-1983] were an intellectual catastrophe, because he didn't have much to say, and his way of saying it was sadly comical. Funnily, though, I've retained, not only most of the little he told us, but also three of the books upon which his teachings were based.

They still carry my antiseptic ex libris, which looks as if it were written by a lad fresh out of Sunday school… which was in fact the case.

The philosophy of so-called utilitarianism is even dumber than the term used to designate it. Apparently, we should strive to maximize the greatest good for the greatest number of people. (I'm simplifying.) What does that have to do with utility? Today, only somebody with a mind like that of George W Bush, say, would find this idea "philosophical". To be truthful, I don't know whether or not Bush ever studied John Stuart Mill [1806-1873].

G E Moore [1873-1958] was a brighter analyst… whom I respect for his associations with my two greatest philosophical heroes: Bertrand Russell [1872-1970] and Ludwig Wittgenstein [1889-1951]. But he, too, seems to end up admitting that he doesn't really have anything more profound than common sense to offer us about right and wrong, good and evil, and that stuff.

The well-written little book by Patrick Nowell-Smith [1914-2006] has been a primer for countless readers (Penguin sold more than 100,000 copies) who were intrigued by the idea of a logic-based approach to the philosophy of morals. (It would be more correct to speak of logical positivism rather than logic in a broad traditional sense.) But the interest of this book, today, is mainly historical.

So, what are we left with? Well, unfortunately, we're left with a widespread opinion that, somehow, you need to believe in religion before you even have the right to talk about morals. The antiquated enemies of secular thinking attempt to spread the notion that society would disintegrate into a vast anarchic cesspool of savage depravity if ever the little gods of Judaism, Christianity and Islam were to be removed from the current scene. It goes without saying that these rumormongers are stupid liars, who seek to bully innocent folk into accepting religion in order to save society from barbarian turmoil. But it's the evil liars who are the New Barbarians.

Basically, thinkers such as Richard Dawkins remind us constantly that human nature is what it is, for the better and for the worse, and that the alleged existence of a deity is a totally irrelevant speculation. It's not because the god Jupiter went out of fashion that assassination attempts upon mothers-in-law, say, suddenly spiked. On the contrary, people are starting to believe, these days, that if the gods were finally stacked away in wardrobes with all the other skeletons of human history, there would surely be a drop in crime statistics ranging from raped schoolkids up to kamikaze operations.

In this general context, the brilliant US atheist Sam Harris has succeeded in surprising many of his friends by suggesting that there might indeed be objective links between science and morals. In February 2010, he spoke on this subject at the prestigious conference known as TED [Technology, Entertainment, Design].



In the face of many reactions, Sam Harris has just clarified his thinking in an article entitled Toward a Science of Morality [display]. I like to think that Harris might be onto a good goal: the idea that, somewhere deep down inside our inherited structure of thought, there are inbuilt neuronal circuits (or something like that) that work nonstop at promoting the principle that Auschwitz and countless other barbarian acts were wrong, and that helping little old ladies to cross the busy street in bad weather is a morally good act, for which you deserve to win brownie points. [Remind me to tell you the joke about a pub artist who plays a miniature piano.] For the moment, I would conclude that Harris is not necessarily wrong, but that he nevertheless doesn't need to be right in order for societies to evolve morally in a "well-behaved" fashion.

Tuesday, December 29, 2009

Bertrand Russell on God

Throughout my younger years, the books of the English philosopher and mathematician Bertrand Russell [1872-1970] were no doubt my main non-fictional reading. Even today, my copy of Russell's big History of Western Philosophy (which I bought in Paris in 1962) is located permanently on a bookshelf just alongside my bed.

Whenever I stroll through London's Trafalgar Square, I recall this photo of the 87-year-old white-maned philosopher standing among the lions at the foot of Nelson's Column at a 1959 rally of the Campaign for Nuclear Disarmament.

This evening, it was a pleasure for me to discover this interview on the Dawkins website:



Naturally, I always imagined Russell first and foremost as a philosopher and a mathematician (whom I approached initially through his work in the domain of symbolic logic), and only then as an outspoken freethinker and a nuclear-disarmament campaigner. He impressed me greatly, of course, by describing himself explicitly as an atheist... at a time when this term was hardly fashionable. I tended to interpret this, however, as Russell's way of telling us that he simply didn't have the time or the inclination to be concerned about questions of divinity. That's to say, I imagined him rather as an agnostic, since I never really felt that Russell had provided us with convincing proofs that God did not exist... if indeed such proofs were thinkable.

Today, looking back upon my admiration of Russell, I see him retrospectively as a precursor of Richard Dawkins. Or, rather, I imagine Dawkins as an intellectual descendant of Russell. There is something similar in their elegant style, their power of inquiry and expression, and their profound humanism.

Thursday, May 21, 2009

Dog logic

My first contact with the intellectual discipline known as logic was in 1957 at the University of Sydney, where I attended the classes of John Anderson, whose overall style and behavior might be described as Victorian. That was probably one of the last occasions in academia for an alleged philosopher to ramble on for an entire year about logic without ever going an inch beyond Aristotle [384-322 BCE].

Retrospectively, I find it preposterous that such a course could have still existed in the second half of the 20th century, and been taken seriously, in a philosophical world that was already impregnated by mathematical logic of the subtle kind invented by thinkers such as Bertrand Russell [1872-1970], Alonzo Church [1903-1995] and the genius Kurt Gödel [1906-1978].

Concerning the latter man, I had the privilege of talking to him on the phone for about ten minutes, when I was visiting the USA in the early '70s, and attempting vainly to persuade him to be interviewed for French TV. Gödel insisted stubbornly that his contribution to mathematics was minimal, and that no TV viewer in his right mind would be interested in watching him. Maybe he was right on the second point, because the celebrated incompleteness theorem is not necessarily ideal stuff for what used to be called (unjustly, to my mind) the idiot box.

Talking about the teaching of philosophy in Australia, I often had the impression that it could be weirdly sex-oriented at times, as if philosophy—in the minds of many observers—were a synonym for sin. While I was at university in Sydney (for two short years), a terrible scandal of a typically wowserish Aussie kind (you might need to look up that adjective in a Down Under dictionary) erupted in Tasmania because the professor of philosophy Sydney Sparkes Orr [1914-1966] had seduced a female student. As for John Anderson himself, biographers inevitably draw attention to trivial anecdotes about his advocacy of so-called "free love" (casual adultery)... which sounds very much like what countless inhabitants of the planet Earth are practicing regularly these days, without even bothering to give it a pompous name.

Concerning the substance of Anderson's courses in philosophy, which I would generally describe as lightweight, I did however appreciate his drawn-out analysis of the trial and execution of Socrates for allegedly corrupting the youth of Athens.

I often thought that the mumbling old Scotsman, attired in a black academic gown, liked to imagine himself as some kind of latter-day Socrates, persecuted by the straight-thinking citizens of the Antipodes. To me, that sounds like a nice summary of the situation... except that nobody at the old Royal George pub in Sydney's Sussex Street, hangout of a mindless sect known as The Push, ever got around to offering the professor a middy of hemlock.

As far as Aristotelian logic is concerned, I'm convinced today that it's so trivial that my dog Sophia masters it perfectly... in spite of the fact that she never had an opportunity of studying under Professor Anderson. [She did get involved in free love, long ago... which resulted in the birth of Christine's dear dog named Gamone.]

I'm often impressed by demonstrations of what's going on in Sophia's head. She understands perfectly the logical concept of negation... which was a Big Thing for Aristotle. When Sophia sees me getting dressed and closing doors as if I'm about to go out in the car, she analyzes the situation patiently. If she hears the ritual command "Guard the house" (in French), Sophia realizes instantly that there's no way in the world that she's going to accompany me in the automobile. But, if a certain time has elapsed without this formula being pronounced, Sophia suddenly deduces that the absence of the "Guard the house" command means that I'm indeed inviting my dog to accompany me in the car... and she's already jumping excitedly alongside the door of my archaic Citroën. To my mind, Sophia's capacity for interpreting a non-existent prohibition as a positive invitation is truly remarkable. Once Sophia's brain has calculated the reasonable lapse of time during which the absence of the "Guard the house" command can be interpreted as an invitation to a car excursion, it would of course be unthinkable for me, out of forgetfulness, to attempt stupidly to change her mind. I don't want to have a schizophrenic dog. In this way, my smart Sophia earned herself car trips when she wasn't supposed to accompany me.
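For readers who are more at home with programming than with syllogisms, here is a minimal sketch of the rule that seems to govern Sophia's reasoning. Everything in it is invented for illustration (the command string, the waiting period, the very idea that the decision could be reduced to a dozen lines of Python), but it captures the essential point: a long enough silence is read as an invitation.

```python
# A purely hypothetical reconstruction of Sophia's decision rule.
# The command string and the waiting period are invented for illustration.

GUARD_COMMAND = "Guard the house"
REASONABLE_LAPSE_SECONDS = 60  # only Sophia knows the real figure

def sophia_decides(commands_heard, seconds_elapsed):
    """Return Sophia's conclusion about the imminent car excursion."""
    if GUARD_COMMAND in commands_heard:
        # An explicit prohibition settles the matter.
        return "stay and guard the house"
    if seconds_elapsed >= REASONABLE_LAPSE_SECONDS:
        # No prohibition was pronounced, so its absence becomes an invitation.
        return "jump excitedly alongside the Citroën"
    return "keep analyzing the situation patiently"

print(sophia_decides([], 75))               # jump excitedly alongside the Citroën
print(sophia_decides([GUARD_COMMAND], 75))  # stay and guard the house
```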

Recently, I've been amused by a new manifestation of Sophia's reasoning power: the ability to count from one to two. I don't know whether I've already mentioned in this blog that I'm a huge consumer of French cheese. It's convenient that I reside alongside both the Saint-Marcellin and Vercors cheese-production zones. Often, I buy a big chunk of hard creamy cheese manufactured in the neighboring Haute-Savoie region. Each time I cut away a slice for a toasted sandwich, there's a small segment of dry crust at each extremity of the slice. Sophia, of course, loves this stuff. Well, I was intrigued recently by the fact that, when I was preparing a sandwich in front of my toaster and cutting off the crust of a piece of cheese, Sophia did not pounce immediately upon the first bit of cheese crust that fell to the floor. Instead, she hesitated unexpectedly, leaving the piece of cheese crust untouched on the floor, just in front of her snout. She knew that there would be a crust fragment at each extremity of the master's morsel, and she waited for the second bit to fall to the floor before gathering up both of them in one fell swoop.

This reaction links up with Sophia's delightfully confused behavior whenever I confront her with a pile of several pieces of meat. Her general plan is always to get that food out onto the lawn as rapidly as possible, where she can consume it in a leisurely manner while lying on the grass. But a terrible existential dilemma arises in Sophia's mind: a chunk of meat might disappear mysteriously, either on the kitchen floor or out on the lawn. I can see her performing some kind of canine calculation, trying to decide which piece she should carry out, and which pieces must be left for later. Indeed, the seriousness of a dog's calculations concerning food takes us back magically in time to the early eras of Creation, when Sophia's ancestors (and mine, too) had to get their act right about such matters. Otherwise, they starved, and neither Sophia nor I would be here today to tell such archaic ancestral tales.

The most amazing instrument in Sophia's anatomy is, of course, her snout: a precision molecule detector of a kind that modern science and engineering would have trouble duplicating. Like any dog, Sophia uses this high-tech tool as a shovel, to bury bones. In this domain, Sophia's conclusions and mine often differ.

First, although we humans realize that a dog's snout is precious and fragile, and we do our utmost to optimize the working environment of this fabulous device (I love to wash mud off Sophia's fine face), it's a canine mistake to imagine that Homo sapiens tills the soil in gardens, for example, in order to facilitate the burying of bones.

Second, although we humans—particularly atheists like me—consider that there's nothing "sacred" in the bodily remains of a dead creature, and that anything of a meaty nature deserves to be eaten, I disagree with my dog when she believes that the bacterial action of the soil in my future medieval garden is likely to magically transform a horn from our dear departed billy-goat Gavroche into something akin to a juicy steak. Apparently Sophia still has a Christian streak in her genetic upbringing, which makes her believe in miracles, whereas I have lost all such superstitions. I try to convince her that she errs, but it's not easy to discuss metaphysics with a dog. In spite of that slight discord, Sophia and I—not to mention the distinguished professor John Anderson—would appear to agree basically on the primitive intellectual processes of Aristotelian logic.

In a forthcoming chapter of this philosophical essay, I shall demonstrate that Sophia is in fact Cartesian. Clearly, she thinks, therefore she is...