1.1 At the Starting Gate
Figuring out how people believe things that aren’t true.
Oh dear—another work on creationism! Hasn’t enough been done on that already? Haven’t top guns from Richard Dawkins and Jerry Coyne to Ken Miller, Eugenie Scott, Niles Eldredge, and Stephen Jay Gould, not to mention dozens of other ventures in sundry books and articles, slain the antievolutionary dragon so completely that we can just cite them and move on? Well, the topic doesn’t seem to be going away. The many hydra-like efforts of the retooled Intelligent Design (ID) movement to “teach the controversy” or show the “strengths and weaknesses” of evolutionary theory demonstrate just how robust (and troublesome) the creationist subculture remains, so maybe it is time to rethink both the issue and what might be done about it.
This work endeavors to take a fresh look at the creation/evolution controversy, from top to bottom. My proposition is that the roots of the debate lie not (amazingly enough) with the “usual suspects” of religion and politics. Although those factors obviously play a tremendous role in the superficial textures of the landscape, the root problem lies much deeper than popular apologetics. Rather, the roots stem from truly fundamental cognitive processes, ones we fail to appreciate and deal with at our cultural peril.
At heart are basic questions that any serious philosophy must recognize and form a workable position on.
Starting with: How do people believe things that aren’t true? We can’t dodge that question, unless you’re claiming that all beliefs are in fact true, and that one just won’t fly. To take just one obvious example: the earth cannot simultaneously be the center of the solar system and be revolving around the sun. Or, if you’d like a nonscientific issue: the historical figure known as Homer either did or did not exist as a real human being. While the Homer case is for all practical purposes unsolvable, heliocentrism is a different matter. In case any readers are of the opinion that this at least is one of those fully settled issues in science, a truly dead horse, we’ll be seeing the unsettling reality that certain biblical creationist geocentrists have had (and continue to have) a surprising influence on the contemporary antievolution scene. Not all dead horses are in fact completely dead.
Which brings us to a second great question of thought embedded in the first: How do you figure out that something is true? Are there really standards for such things? If so (and I definitely contend there are), are these truly universal rules that all self-aware beings must adhere to independently in order to qualify for the “clear thinking” label? Right up front I will declare my conviction that there is only one method for rigorous thought, not a plethora of context-sensitive methodologies open to the squishy interpretations of time and circumstance. Sound thought is the same for us today as for an inhabitant of a Pleistocene savanna—or for any hypothetical beings that might populate alien realms.
One standard, all the time, for all things. No exceptions.
That should be simple enough. But there’s more.
I further contend there is also only one way for people who manage to believe things that aren’t true to pull that trick off. And that flawed methodology turns out to lie very deep in cognitive processes by no means restricted to faulty thinking; indeed, those processes are probably ultimately inherent to our success as a species. Dumb ideas in general, and the creationism that is the specific focus of this work, have not come about because their believers have devised some extraordinarily novel way of thinking badly. Nor is their bad thinking somehow utterly unconnected from what they are doing when they are not thinking badly.
No, my argument is that creationism is symptomatic of what happens when entirely natural and normal information processing in the brain gets applied to areas to which it was not originally adapted—namely, the more recent human constructs of history and science. Scientific and historical reasoning, where the goal is to understand what actually is or has happened, regardless of how much you may wish it to have been otherwise, turns out not to be a skill we humans fall into easily. It takes conscious vigilance to keep the process on track, and some people are naturally poorer at that calibration than others.
Because there isn’t actually a term to describe what we can see is going on here, I have had to opt for a neologism: the tortucan mind. I define that term fully in Downard (2010), and offer some proposals about how the dynamics of that process may be identified (or confuted) by future scientific investigation, but for now all you need to know is that I’ve plucked the term from the Latin for turtle, and that the concept will get a lot of use in the pages to come.
Another key concept underlying my analysis of how people should go about thinking through things, to figure out which are the true and false bits, concerns application. You can make any pronouncement you like, and define a topic in any manner you please, clever or stupid. But whatever meaning those propositions may actually have is only discovered by how they are applied. You figure out what something means by doing it, embodying it in specific examples. This runs from material things like what a chair or an elephant is, to immaterial (but nonetheless very important) notions like beauty or goodness. Thus the utilitarian test of a proposition comes first in any sense of meaning: what good is any idea if you can’t apply it anywhere—or if, when you do try to use it, you keep stumbling on absurd contradictions or have to sweep too much inconvenient information under the rug to make the idea seem to hold up?
With quite amazing consistency, seen throughout Downard (2003b; 2004), this application issue lies at the heart of how the tortucans of the world (and there are lots of them) strut onto the scientific or philosophical stage and make such a recurring nuisance of themselves.