The Evangelical Universalist Forum

JRP's Bite-Sized Metaphysics (Series 107)

[The previous series, 106, can be found here. An index with links to all parts of the work as they are posted can be found here. This series, 107, picks up with the topic arrived at by the end of the previous series. The current overall topic of series 106 through 109 is the relation of reasoning to belief.]

[Entry 1 for “A Question of External Validation of Reasoning”]

Most people in most circumstances (to reiterate a paragraph at the end of the previous series) accept and understand that a non-rationally produced belief cannot be trusted very far to deliver an answer worth listening to, in and of itself. It may exhibit many other qualities; but a non-rationally produced belief cannot be trusted with respect to what it ‘claims’ to be–even if the belief happens to be accurate with respect to facts, or even beneficial.

If my brother, Spencer, thinks he has good grounds for believing that my belief of a snake in the hole has been fostered purely from a cocaine-fit (see previous series for the introduction of this analogy, inspired by an old Robin Williams comedy routine), then he would not (or at least should not) be embarrassed to discover there was, after all, a snake in the hole. He had no good reason to believe the snake was there.

Furthermore, my argument that he (and I) should stay away from the hole was ultimately untrustworthy. The form of the argument that we should stay away from the hole was not itself invalid; but without the anchor of rationality at the beginning, there was no good reason to pay attention either to my initial belief (“a snake is in the hole”) or to my consequent inferred belief (“we should stay away from the hole”)–despite the fact that my second belief was, as far as it went, rational!
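The distinction in play here is the classical one between the validity of an argument's form and the soundness of the whole argument. One way to sketch it (using standard propositional notation, which is my gloss, not the original text's):

```latex
% P: "a snake is in the hole"    Q: "we should stay away from the hole"
% The inference itself is modus ponens, a valid form:
\[
\frac{P \rightarrow Q \qquad P}{\therefore\; Q}
\]
% Validity guarantees only that IF the premises are trustworthy, the
% conclusion is too. A non-rationally produced P leaves the valid form
% with nothing to carry: the argument is valid but not sound.
```

In other words, the form transmits trustworthiness from premise to conclusion, but it cannot manufacture trustworthiness that the premise never had.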

In other words, there would be no good reason for Spencer to pay attention to my idea with respect to what it claimed to be–or more precisely, what I claimed for it. There would be no good reason for Spencer to pay attention to me, as a person making a personal claim of truth.

[Entry 2]

Spencer certainly could pay attention to, and draw useful inferences from, the real character of my belief, insofar as he perceived it. For instance, he might conclude: “I’d better not let Jason drive the golf-cart! He’s whacked out of his gourd!”

But this is a refusal to take my belief seriously. The form of my subsidiary belief (“We shouldn’t go near the hole!”) would admittedly ‘hold water’; but there would be no ‘water’ to hold, because the original cornerstone belief was not rationally produced. The framework or structure would stand, but it would have nothing to properly ‘stand’ on.

(Footnote: My sceptical reader should be able to see an application of this principle in his favor, here. If the sceptic believes that belief-in-God is always non-rationally produced, then I think he would agree with me that he should not put weight on such a belief: he should not accept it for himself. Few if any people accept the contention ‘The God Module of my brain produced my belief in God’ as proper grounds for accepting that God exists, for instance. (Certainly no sceptic does…!)

Furthermore, if God happened to exist after all, I do not see why such a sceptic should be held liable for disbelief: if the sceptic was only given irrational grounds for the proposition that God exists.

I encourage the sceptic to keep this principle in mind, and even to accept and defend it as vigorously as possible. For, I will be returning to it later…)

[Entry 3]

What sort of ‘water’ would be needed for my inference (“We shouldn’t go near the hole!”) to be even potentially trustworthy? (It might still be mistaken, of course.) What kind of foundation would give the valid framework something to ‘stand’ on?

The answer can be found with only a little introspection on how we ourselves evaluate such claims every day: the foundational belief must itself be rational. It must be initiated by the thinker–or, alternatively, it must be judged by another initiator to be worthwhile despite its non-rational causation.

[Footnote: In this case, the properly foundational belief would belong to the external judger who is rationally validating the impression produced non-rationally in the subject. This would not be the externalistic fallacy, unless the rational judger went on to claim that [u]therefore[/u] the entity he was judging was rational. A valid inference from entity A about an entity B is not the same as the rational capability of entity B.]

[Entry 4, from yesterday]

If Spencer asks me why I think there is a snake in the hole, and I tell him I witnessed a group of old ladies in front of us run screaming “Snake! Snake!” off the green after one of them tried to retrieve a ball from the hole, and that as far as I could tell they didn’t know we were there (and so probably weren’t trying to play a trick on us); then not only would I have a rational belief (even if mistaken), but Spencer (as an initiator himself) can judge my ‘reasons’ and make his own decisions as to their potential trustworthiness. Now a subsidiary or consequent belief–that we should not get near the hole–may potentially be worth accepting. (Notice that although such judgments may happen so quickly that the ‘form’ of the judgment is not perceptible to the thinker, in principle they are not automatic despite their speed–they still involve an action by the judger.)

On the other hand, let us say Spencer finds me lying on the green near the hole. I am all swollen up, shaking and sweating. I am muttering “Snake… in hole…”

My claim that a snake is in the hole might be produced entirely by the interaction of a fever or other delirium-inducing physical effect with my brain, combined with neurophysical associations brought about by ‘golf course’ sensory input. (Once, while in a flu-fever, the sound of a woodpecker outside my window mis-associated itself, with the result that I saw a rattlesnake jump at me from the ceiling-fan over my bed! I probably said something loudly, too…)

However, Spencer could still put this bit of data together with other bits of data (perhaps including a rattling sound in the hole) to conclude that there is a snake in the hole, it bit me, and that has caused my delirium.

In this case, my foundational ‘belief’ (if it can be properly called ‘a belief’ in the end!–I’ll be addressing this soon in another series of entries) was, per this example, a non-rationally produced effect and thus an irrational belief. But my brother, being a rational agent, found it to have an accuracy that happened (due to the characteristics of the situation) to correspond with my claim–despite the nominally irrational quality of my belief. My belief was irrational; Spencer’s was not. But the rationality of his belief depended on his ability to act in judgment of the data, not merely to react and counterreact automatically to stimulus. (And notice that we could both still be incorrect.)

[Entry 5]

Yet there is at least one more variation for this situation.

I have been building on the cocaine-induced delusion as my example, and contrasting it with some other options, because it was a relatively easy and colorfully humorous way to illustrate certain principles. However, let us now suppose that my first belief (‘a snake is in the hole’) was produced in me through the following process.

As I walk over to the hole on the golf course and bend down to look in, photons ricocheting back from something within the hole careen through my eyes, strike my optic nerves, and send impulses back into my brain. These impulses react and counterreact with other electrochemical potentialities in my brain, which happen (however they got there) to be linked associatively with certain external facts of reality: the existence of golf courses, and of entities often found on golf courses. The result of this set of electrochemical reactions is the establishment of a new psychophysical state within my brain: a state that corresponds (in whatever fashion) to the belief ‘a snake is in the hole’.

So: is this belief of mine rational, or irrational?

[Entry 6 (for Saturday)]

Now I have reached a crucial distinction between philosophies, in relation to human mental behavior. I could, here, skip on to the beginning of Series 200, where I will discuss issues of this sort with an eye toward deductive conclusions (if any). My goal for this series of entries (and for this Series 100) is considerably less extensive, however; and so I will content myself, for now, with the following observations.

So long as we are merely discussing my own behavior as an individual entity, I think this example falls clearly enough into the same category as the cocaine-induced delusion.

The chief distinction between that prior example and this new situation is that the environmental linkages in that prior example were secondary causes of the belief (‘a snake is in the hole’) rather than primary causes as in this new example. Yet the prior example of a belief did specifically depend, for its shape, on those secondary causes–the cocaine would not have produced that particular paranoia in me without relevant sensory data for the chemicals to ‘work’ with.

What I am effectively proposing, in this new example, is the cocaine-induced delusion–except without the cocaine. The sensory impressions themselves are proposed to be the primary cause of my belief.

And I think we should be very cautious about considering such a subsequent belief in me, caused in this fashion, to be ‘rational’. These sensory impressions are as non-rational in causation as the cocaine reactions. That they happen to correspond accurately to an external fact (barring, for this example, the possibility of an illusion or other mistake), is no proper ground for calling the subsequent belief ‘rational’–any more than it was a proper ground when the cocaine-induced belief happened to correspond to the existence of an actual snake in the hole.

[Entry 7, for Sunday… I’m doing these early so I can concentrate directly on editing this weekend. :mrgreen: ]

If we say that such a correspondence was accidental, but that this new correspondence is true to the fact from which it directly results; then I reply that when I was rolling on the ground in a delirium thanks to having been snakebit, my delirium was proposed (at the time) to have been a pure reaction to environmental stimulus, not a rational judgment on my part–and yet in that case, the environmental stimulus to which I was reacting was also entirely “true” in relation to its mental result. I was on a golf course; and there was a snake in the hole; and those facts caused, in one fashion, my reactive state of ‘belief’. Now in my new example, the environmental stimulus once again has caused my reactive ‘belief’, and once again the correspondence is proposed to be entirely true. Yet this type of situation had resulted in an irrational belief on my part before. What is the qualitative difference in this new case?

I think it is obvious that there is no qualitative difference; which has implications about the ‘rationality’ of my belief.

[Entry 8; next to last for this series of entries]

It might be very tempting for you, my reader, to claim ‘rationality’ of my belief despite the fully non-rational causation of my new proposed example. It would be easy, for instance, to slide from a rational judgment on your part, into ascribing the quality of ‘rationality’ to my belief. But this would be the externalistic fallacy. Spencer, in my previous example, might be able to verify the accuracy of my belief for me; but his rational verification is not my rational belief.

Consequently, even in the case of this new descriptive explanation for the existence of a ‘belief’ in my mind, I do not think it would be proper to claim this belief to be ‘rational’.

But of course, this type of descriptive explanation for the existence of a belief in my mind is not restricted merely to my own individual behaviors as an entity. Rather, this type of process–non-rational in characteristic (even if more complex in actuality)–is often proposed and defended as being the basic process explanation of all human reasoning (yours and mine included); and the explanation is proposed in direct relation to characteristic properties of fundamental reality.

However, I am not interested (yet) in discussing this far-reaching proposition, or any alternatives. My goal for this and other series of entries is much simpler; and I think I have demonstrated it sufficiently for my current purposes.

[Entry 9; finale for this series of entries]

What I have demonstrated, is that a belief, far from being necessarily mutually exclusive to reason, can depend upon reasoning–the action (or at least the event) of drawing inferences.

This already directly parries the contention that faith and reason must always, by some type of psychological or philosophical necessity, be mutually exclusive operations (even if not directly opposed in result). On the contrary, faith is always a type of belief (the two terms are sometimes completely equivalent), and a belief can be the result of reasoning. Unless the sceptic (or the religious believer, either way) wishes to merely flatly assert that religious beliefs must be mutually exclusive to reason (whereupon I have no reason to believe him, and thus no reason not to continue), then for all we know a particular person’s religious faith might be based upon (and not be mutually exclusive to) reasoning.

The faith may not be based on very accurate reasoning; I might still be mistaken either in the facts or the principles I think I know, and/or in the methods by which I attempt to reach my conclusion. That doesn’t stop it from being a faith (and thus also a belief) based on reasoning.

Thus, the question of whether my reasoning actually is worthwhile should be deferred until I actually explain my reasoning about the topic; yet this conclusion does clear the way for me to continue without being excluded from contention before-the-fact merely because the belief may be positively in favor of the existence of some kind of God.

But, I can go even further with this!–although now I enter a more speculative vein.

[Next series: so, a belief (including a religious faith) can be a result of reasoning; but can I have a belief (including a religious faith) without reasoning?]