I was in a waiting room recently overhearing a conversation I found quite disturbing. The participants all conveyed utter confidence in their various proclamations. But everything they declared was nonsense.
Well, to be more precise: Their declarations flew in the face of very well-established science. Their perspectives contradicted the results of extensive scientific testing and the best-supported scientific theories. They contradicted the overwhelming opinion of the scientific community, on matters which that community had explicitly and extensively tested.
One gentleman declared, for example, that physicians are to be avoided; treating oneself with herbs was the way to go.
I suspect that these interlocutors knew that they were contradicting what scientists would say on the subject. But they obviously had more confidence in their own assessment of their own anecdotal evidence than in the scientific community. Their readiness to set aside the results of good science, however, made me think that they clearly did not understand—at all—how science works. The assertions they were making were plainly empirically testable; indeed, they had been empirically tested. And their perspectives had not passed these tests.
I’ve noticed this general incomprehension of how science works for quite a long time now. It becomes especially apparent to me through my own teaching of Philosophy of Science. This incomprehension is a condemnation of our educational systems, actually. Children are taught “science” from K through 12. But they seem to acquire very little comprehension of how science actually functions—how hypotheses are tested, and so on. They learn the “results” of science, but rarely really understand how these conclusions are established by the evidence.
Lest you think I’m spouting a “positivist” perspective here, I’m not. Positivism in its various forms is clearly mistaken by my lights. I believe we have a great deal of true knowledge, for example, that is not “scientific.” We have a great deal of knowledge of what is good and what is bad, and what is right and what is wrong, for example. Science doesn’t tell us these things. It says literally nothing about the values of things. But we know perfectly well that some things are good and others are bad.
All scientific knowledge is itself based, moreover, on sensory and conceptual intuitions which would not normally be considered “scientific.” Even further, the history of science clearly exemplifies the fallibility of science. Science frequently gets things wrong. Theories which have had enormous evidence in their favour, and are extremely successful and “well-confirmed,” have often turned out to be wrong. Consider the case of Newtonian physics before Einstein did it in.
But “the scientific method” is by far the best way we have of knowing the truth about the empirical world. When the evidence strongly favours a theory—when that theory is strongly confirmed—it is very probably true in light of that evidence. It is possible that one personally has strong evidence contradicting it. But that is unlikely.
New evidence may always come to light which refutes a well-established theory. So we also err—as many do—if we have excessive confidence in “the results of science.” But unless and until such contrary evidence comes to light, we do not wisely disregard the best way we have of determining the truth about how this physical world works.