Sunday, January 23, 2011

Doubting at the door: when it's right to be wrong. (For the lesswrong crowd.)

I've often asked myself: why do evangelical Christians generally support Israel and the oil industry and disbelieve in anthropogenic climate change? One could make a case that these seemingly unrelated values and beliefs are a logical consequence of the axioms of evangelical Christianity, and would thus make infallible sense upon a deep inspection of the belief system of this religious branch. But having undertaken such an inspection in earnest, I have come to another conclusion: evangelical Christians, by and large, in aligning themselves with the totem of the modern Republican party—a totem which has given them a public dignity and appreciation which they had hitherto been denied—have succumbed to the temptation of believing their friends and doubting their enemies, to the degree that poorly understood matters of fiscal policy are now taken as something akin to an article of faith.

It makes me laugh. Yes, let's all laugh. As a rationalist and a singularitarian, and as a less-wronger, I find it easy to guffaw about the illogic and totemism of the American political right. That is, until you start to ask yourself: why am I a rationalist AND a singularitarian? And a less-wronger. And a militant atheist. One could make a case that these seemingly unrelated values are a logical consequence of the axioms of rationality, and would thus make infallible sense upon a deep inspection of the belief system of this intellectual system. But one could make other cases too. Bear with me. Let's play devil's advocate. For rationality's sake. I think these excursions are healthy, even if we ultimately reject the conclusions.

Buddhist statuary, in the wake of the Macedonian invasion of India, began to adorn the Enlightened One with the hairstyle of Greek soldiers. Early American anthropologists all wore Alfred Kroeber's beard and mustache. As enlightened as we may be, enlightened people have succumbed before to arbitrary beliefs and practices, and it's probably a good idea to “hang a question mark on the things we've long taken for granted” from time to time.

We could frame this exercise in a number of different ways, but since I'm writing this essay, we will frame it thus:

Why are singularitarians rationalists?

There are plenty of good reasons not to be a rationalist. In fact, valuing the truth as a sort of Platonic ideal to which everything else must be subservient is an arbitrary moral choice in a universe where moral facts are either non-existent or inaccessible.

Nietzsche says everything better than I do:
"The Will to Truth, which is to tempt us to many a hazardous enterprise, the famous Truthfulness of which all philosophers have hitherto spoken with respect, what questions has this Will to Truth not laid before us! What strange, perplexing, questionable questions! It is already a long story; yet it seems as if it were hardly commenced. Is it any wonder if we at last grow distrustful, lose patience, and turn impatiently away? That this Sphinx teaches us at last to ask questions ourselves? WHO is it really that puts questions to us here? WHAT really is this "Will to Truth" in us? In fact we made a long halt at the question as to the origin of this Will—until at last we came to an absolute standstill before a yet more fundamental question. We inquired about the VALUE of this Will. Granted that we want the truth: WHY NOT RATHER untruth? And uncertainty? Even ignorance? The problem of the value of truth presented itself before us—or was it we who presented ourselves before the problem? Which of us is the Oedipus here? Which the Sphinx? It would seem to be a rendezvous of questions and notes of interrogation. And could it be believed that it at last seems to us as if the problem had never been propounded before, as if we were the first to discern it, get a sight of it, and RISK RAISING it? For there is risk in raising it, perhaps there is no greater risk."

Alexis de Tocqueville thought that America's religiosity was the secret to its material success. Will Durant, even as an atheist, could not help but attribute the progress and happiness of better times to the religious impulses that bound men together, comforted them in their losses, and made them terrified of wronging each other. Modern scholarship, too, has suggested that the religious are happier. In light of these things, a rationalist may find herself feeling a bit like Dillard's proverbial Christian missionary to the Inuit:

Eskimo: "If I did not know about God and sin, would I go to hell?"
Priest: "No, not if you did not know."
Eskimo: "Then why did you tell me?"

We could recast this parable:

Typical modern man: “If I were not rational, would I live a better life?”
Rationalist: “No. The evidence suggests that your life would be better as a Mormon.”
Typical modern man: “Then why the f*** should I be rational?”

Why? Perhaps because the fate of humanity, and by extension, all prospects for continuing human happiness, hang in the balance. Perhaps because in a world where democratic societies are the vanguard of AI research, it is crucial to convince the masses in these societies to be rational enough to do the right thing in this most crucial of junctures. Or perhaps, because Eliezer Yudkowsky believes in the singularity, and Eliezer Yudkowsky believes in improving human rationality.

At the moment, I reject that last conclusion. I do believe that rationality—particularly the kind of rationality that is unafraid to state its conclusions in clown-suit solitude—does tend to lead those who follow it into a logical-but-unorthodox appreciation of the singularity and existential risk. But I do occasionally have my doubts. I doubt, particularly, when I hear a singularitarian reject the AI-specific arguments of an esteemed AI researcher based on his unrelated beliefs about metaphysics (I actually hear this all the time from the SI crowd). When maintaining the whole basket of rationalist beliefs is a requirement before a person's empirically demonstrable theories about artificial intelligence can be given consideration, the lesswrong/Friendly AI movement begins to seem a little more like a personality cult.