Abduction, Deduction, and Induction: Their Implications for Quantitative Methods
Chong Ho Yu, Ph.D.
Abstract
While quantitative methods have been widely applied by social scientists such as sociologists,
psychologists, and economists, their philosophical premises and assumptions are rarely examined.
The philosophical ideas introduced by Charles Sanders Peirce (1839-1914) are helpful for
researchers in understanding the application of quantitative methods specific to the foundational
concepts of deduction, abduction and induction. In the Peircean logical system the nature of
knowledge and reality relate to each of these concepts: the logic of abduction and deduction
contribute to our conceptual understanding of a phenomenon, while the logic of induction adds
quantitative details to our conceptual knowledge. At the stage of abduction, the goal is to explore
data, find a pattern, and suggest a plausible hypothesis; deduction is to refine the hypothesis based upon other plausible premises; and induction is empirical substantiation. This article seeks to investigate the premises, limitations and applications of deduction, abduction and induction within quantitative methodology.
Fisher (1935, 1955) considered significance testing as “inductive inference” and argued
that this approach is the source of all knowledge. On the other hand, Neyman (1928, 1933a,
1933b) maintained that only deductive inference was appropriate in statistics, as reflected in his school's hypothesis-testing tradition. However, both deductive and inductive methods have been
criticized for various limitations such as their tendency to explain away details that should be
better understood and their incapability of generating new knowledge (Hempel, 1965; Josephson
& Josephson, 1994; Thagard & Shelley, 1997). In the view of the Peircean logical system, one may
say the logic of abduction and deduction contribute to our conceptual understanding of a
phenomenon (Hausman, 1993), while the logic of induction provides empirical support to
conceptual knowledge. In other words, abduction, deduction, and induction work together to
explore, refine and substantiate research questions.
Although abduction is central in the Peircean logical system, Peirce by no means
downplayed the role of deduction and induction in inquiry. Peirce had studied the history of
philosophy thoroughly and was influenced by a multitude of schools of logic (Hoffmann, 1997).
Peirce explained these three logical processes (1934/1960) as, “Deduction proves something must
be. Induction shows that something actually is operative; abduction merely suggests that
something may be” (Vol. 5, p.171). Put another way: Abduction plays the role of generating new
ideas or hypotheses; deduction serves to evaluate the hypotheses; and induction serves to justify the hypotheses with empirical data (Staat, 1993).
This article attempts to apply abduction, which was introduced by Peirce a century ago, to
offer a more comprehensive logical system of research methodology. Therefore, we will evaluate
the strengths and weaknesses of the preceding three logical processes under Peircean direction,
and point to implications for the use of exploratory data analysis (EDA) and quantitative research
within this philosophical paradigm.
It is important to note that the focus of this article is to extend and apply Peircean ideas to research methodologies in an epistemological fashion, not to analyze the original meanings of
Peircean ideas in the manner of historical study. Almeder (1980) contended that Peirce wrote in a
style that could lead to confusion. Not surprisingly, many scholars could not agree on whether
Peircean philosophy is a coherent system or a collection of disconnected thoughts (Anderson,
1987). In response to Weiss (1940), who charged some philosophers with distorting and
dismembering the Peircean philosophy, Buchler (1940) contended that cumulative growth of
philosophy results from the partial or limited acceptance of a given philosopher’s work through
discriminating selection. One obvious example of extending the Peircean school is the “inference
to the best explanation” (IBE) proposed by Harman (1965) based upon the Peircean idea of
abduction. While the “classical” abduction is considered a logic of discovery, IBE is viewed as a
logic of justification (Lipton, 1991). But in the context of debating realism and anti-realism, de
Regt (1994) argued that Peircean philosophy had been misused to the extent that the “inference to
the best explanation” had become the “inference to the only explanation.” This article is
concerned with neither history of philosophy nor discernment of various interpretations of the
Peircean system; rather, I adopt the position suggested by Buchler, and thus Peircean ideas on
abduction, deduction, and induction are discussed through discriminating selection.
Abduction
Premises of abduction
Before discussing the logic of abduction and its application, it is important to point out its
premises. In the first half of the 20th century, verificationism derived from positivism dominated
the scientific community. For positivists unverifiable beliefs should be rejected. However,
according to Peirce, researchers must start from somewhere, even though the starting point is an
unproven or unverifiable assumption. This starting point of scientific consciousness may be a private fancy, a flash of thought, or a wild hypothesis, but it is the seed of creativity (Wright, 1999). This
approach is very different from positivism and opens more opportunities for inquirers (Callaway,
1999). In the essay “The Fixation of Belief” (1877), Peirce said that we are satisfied with stable
beliefs rather than doubts. Although knowledge is fallible in nature, and in our limited lifetime we
cannot discover the ultimate truth, we will still fix our beliefs at certain points. At the same time,
Peirce did not encourage us to relax our mind and not pursue further inquiry. Instead, he saw
seeking knowledge as an interplay between doubts and beliefs, though he did not explicitly use the
Hegelian term "dialectic."
The logic of abduction
Grounded in the fixation of beliefs, the function of abduction is to look for a pattern in a
surprising phenomenon and to suggest a plausible hypothesis. The following example illustrates
the function of abduction:
The surprising phenomenon, B, is observed.
But if A were true, B would be a matter of course.
Hence there is a reason to suspect that A might be true.
By the standard of deductive logic, the preceding reasoning is clearly unacceptable, for it contradicts a basic rule of inference in deduction, namely, Modus Ponens. Following this
rule, the legitimate form of reasoning takes the route as follows:
A is observed.
If A, then B.
Hence, B is accepted.
Modus Ponens is commonly applied in the context of conducting a series of deductions for complicated scientific problems. For example: A; (A → B); B; (B → C); C; (C → D); D; and so on.
However, Peirce started from the other end:
B is observed.
If A, then B.
Hence, A can be accepted.
Logicians following deductive reasoning call this the fallacy of affirming the consequent.
Consider this example. It is logical to assert that “It rains; if it rains, the floor is wet; hence, the
floor is wet.” But any reasonable person can see the problem in making statements like: “The floor
is wet; if it rains, the floor is wet; hence, it rains.” Nevertheless, in Peirce’s logical framework this
abductive form of argument is entirely valid, especially when the research goal is to discover
plausible explanations for further inquiry (de Regt, 1994). In order to make inferences to the best
explanation, the researcher needs a set of plausible explanations, and thus abduction is
usually formulated in the following mode:
The surprising phenomenon, X, is observed.
Among hypotheses A, B, and C, A is capable of explaining X.
Hence, there is a reason to pursue A.
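To make this schema concrete, the short sketch below treats abduction as ranking candidate explanations for the wet-floor example discussed above by how much of a “matter of course” each would make the observation. It is purely illustrative: the hypotheses, likelihoods, and priors are invented, and the top-ranked hypothesis is only the one worth pursuing first, not the one to be accepted. The next paragraph cautions against reducing abduction to this kind of selection among ready-made hypotheses.

```python
# A purely illustrative sketch of abduction as ranking candidate explanations.
# The hypotheses, likelihoods, and priors below are invented for the example.
surprising_fact = "the floor is wet"

# P(fact | hypothesis): how much of a "matter of course" the fact would be
# if the hypothesis were true, together with a rough prior plausibility.
candidates = {
    "it rained":      {"likelihood": 0.90, "prior": 0.30},
    "a pipe burst":   {"likelihood": 0.95, "prior": 0.05},
    "someone mopped": {"likelihood": 0.80, "prior": 0.20},
}

def explanatory_score(name):
    """Crude score = likelihood * prior; the winner is pursued, not accepted."""
    h = candidates[name]
    return h["likelihood"] * h["prior"]

best = max(candidates, key=explanatory_score)
print(f"Surprising fact: {surprising_fact}")
print(f"Hypothesis worth pursuing first: {best} (score = {explanatory_score(best):.3f})")
```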
At first glance, abduction may appear to be merely an educated guess among existing hypotheses. Thagard and Shelley (1999) clarified this misconception. They explained that unifying conceptions were an
important part of abduction, and it would be unfortunate if our understanding of abduction were
limited to more mundane cases where hypotheses are simply assembled. Abduction does not occur
in the context of a fixed language, since the formation of new hypotheses often goes hand in hand
with the development of new theoretical terms such as “quark,” and “gene.” Indeed, Peirce
(1934/1960) emphasized that abduction is the only logical operation that introduces new ideas.
Some philosophers of science such as Popper (1968) and Hempel (1966) suggested that
there is no logic of discovery because discovery relies on creative imagination. Hempel used
Kekule’s discovery of the hexagonal ring as an example. The chemist Kekule failed to devise a
structural formula for the benzene molecule in spite of many trials. One evening he found the
solution to the problem while watching the dance of fire in his fireplace. Gazing into the flames, he
seemed to see atoms dancing in snakelike arrays and suddenly related this to the molecular
structure of benzene. This is how the hexagonal ring was discovered. However, it is doubtful
whether this story supports the notion that there is no logic of discovery. Why didn’t other people
make a scientific breakthrough by observing the fireplace? Does the background knowledge that
had been accumulated by Kekule throughout his professional career play a more important role to
the discovery of the hexagonal ring than a brief moment in front of a fireplace? The dance of fire
may serve as an analogy to the molecular structure that Kekule had contemplated. Without the
deep knowledge of chemistry, it is unlikely that anyone could draw inspiration from the dance of fire.
For Peirce, progress in science depends on the observation of the right facts by minds
furnished with appropriate ideas (Tursman, 1987). Certainly, the intuitive judgment made by an intellectual differs from that made by a high school student. Peirce cited several examples of remarkably correct guesses. Such success is not simply luck; rather, the opportunity was seized by people who were prepared:
a). Bacon's guess that heat was a mode of motion;
b). Young's guess that the primary colors were violet, green and red;
c). Dalton's guess that there were chemical atoms before the invention of the microscope (cited in Tursman, 1987).
By the same token, to continue the last example, the cosmological view that the "atom" is the
fundamental element of the universe, introduced by ancient philosophers Leucippus and
Democritus, revived by Epicurus, and confirmed by modern physicists, did not result from a lucky
guess. Besides the atomist theory, there were numerous other cosmological views such as the
Milesian school, which proposed that the basic elements were water, air, fire, and earth. The atomists were familiar with these views and provided answers to existing questions based on the existing
framework (Trundle, 1994).
Peirce stated that classification plays a major role in making a hypothesis; that is, the characters of a phenomenon are placed into certain categories (Peirce, 1878b). Although Peirce is
not a Kantian (Feibleman 1945), Peirce endorsed Kant's categories in Critique of Pure Reason
(Kant, 1781/1969) to help us to make judgments of the phenomenal world. According to Kant,
human thought and enlightenment are dependent on a limited number of a priori perceptual forms
and ideational categories, such as causality, quality, time and space. Also, Peirce agreed with Kant
that things have an internal structure of meaning. Abductive activities are not empirical hypotheses
based on our sensory experience, but rather the very structure of the meanings themselves
(Rosenthal, 1993). Based on the Kantian framework, Peirce (1867/1960) later developed his "New
list of categories." For Peirce all cognition, ranging from perception to logical reasoning, is
mediated by “elements of generality” (Peirce, 1934/1960). Based upon the notion of categorizing general elements, Hoffmann (1997) viewed abduction as a search for a mode of perception while
facing surprising facts.
Applications of abduction
Abduction can be well applied to quantitative research, especially Exploratory Data
Analysis (EDA) and Exploratory statistics (ES), such as factor rotation in Exploratory Factor
Analysis and path searching in Structural Equation Modeling (Glymour, Scheines, Spirtes, &
Kelly, 1987; Glymour & Cooper, 1999). Josephson and Josephson (1994) argued that the whole
notion of a controlled experiment is covertly based on the logic of abduction. In a controlled
experiment, the researchers control alternate explanations and test the condition generated from
the most plausible hypothesis. However, abduction shares more common ground with EDA than
with controlled experiments. In EDA, after observing some surprising facts, we exploit them and
check the predicted values against the observed values and residuals (Behrens, 1997). Although
there may be more than one convincing pattern, we "abduct" only those that are more plausible for
subsequent controlled experimentation. Since experimentation is hypothesis-driven and EDA is
data-driven, the logic behind them is quite different. The abductive reasoning of EDA goes from
data to hypotheses while inductive reasoning of experimentation goes from hypothesis to expected
data. By the same token, in Exploratory Factor Analysis and Structural Equation Modeling, there
might be more than one possible way to achieve a fit between the data and the model; again, the
researcher must “abduct” a plausible set of variables and paths for model building.
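As a rough illustration of this data-to-hypothesis direction, the sketch below (simulated data, not a real study) fits a tentative straight line and then inspects the residuals; the systematic pattern left behind is the "surprising fact" from which a quadratic hypothesis could be abduced for later confirmatory work.

```python
# A minimal EDA-style sketch on hypothetical data: fit a tentative model, then
# look for a surprising pattern in the residuals that suggests a better hypothesis.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 300)
y = 2.0 * x + 0.5 * x**2 + rng.normal(0, 2, size=x.size)  # true relation is curved

slope, intercept = np.polyfit(x, y, deg=1)          # tentative linear description
residuals = y - (slope * x + intercept)

# Surprising fact: residuals are positive at the ends and negative in the
# middle -- the systematic U-shape a straight line leaves behind on curved data.
low, mid, high = np.array_split(residuals, 3)
print(f"mean residual (low x):  {low.mean(): .2f}")
print(f"mean residual (mid x):  {mid.mean(): .2f}")
print(f"mean residual (high x): {high.mean(): .2f}")
# Abductive step: "if the relation were quadratic, this residual pattern would
# be a matter of course" -- a hypothesis worth pursuing, not yet accepted.
```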
Shank (1991), Josephson and Josephson (1994), and Ottens and Shank (1995) related
abductive reasoning to detective work. Detectives collect related “facts” about people and
circumstances. These facts are actually shrewd guesses or hypotheses based on their keen powers
of observation. In this vein, the logic of abduction is in line with EDA. In fact, Tukey (1977, 1980)
often related EDA to detective work. In EDA, the role of the researcher is to explore the data in as
many ways as possible until a plausible "story" of the data emerges. EDA is not "fishing" for significant results from all possible angles during research; it is not trying out everything.
Rescher (1978) interpreted abduction as standing in opposition to Popper's (1963) falsificationism. There are millions of possible explanations for a phenomenon. Due to the economy of research, we
cannot afford to falsify every possibility. As mentioned before, we don't have to know everything
to know something. By the same token, we don't have to screen every false thing to dig out the
authentic one. During the process of abduction, the researcher should be guided by the elements of
generality to extract a proper mode of perception.
Summary
In short, abduction can be interpreted as conjecturing the world with appropriate
categories, which arise from the internal structure of meanings. The implication of abduction for researchers, as practiced in EDA and ES, is that using EDA and ES means neither exhausting all possibilities nor making hasty decisions. Researchers must be well equipped with proper
categories in order to sort out the invariant features and patterns of phenomena. Quantitative
research, in this sense, is not number crunching, but a thoughtful way of dissecting data.
Deduction
Premise of deduction
Aristotle is credited as the inventor of deduction (Trundle, 1994). Deduction presupposes
the existence of truth and falsity. Quine (1982) stated that the mission of logic is the pursuit of
truth, which is the endeavor to sort out the true statements from the false statements. Hoffmann
(1997) further elaborated this point by saying that the task of deductive logic is to define the
validity of one truth as it leads to another truth. It is important to note that the meaning of truth in
this context does not refer to the ontological, ultimate reality. Peirce made a distinction between
truth and reality: Truth is the understanding of reality through a self-corrective inquiry process by
the whole intellectual community across time. On the other hand, the existence of reality is
independent of human inquiry (Wiener, 1969). In terms of ontology, there is one reality. In regard
to methodology and epistemology, there is more than one approach and one source of knowledge.
Reality is "what is" while truth is "what would be." Deduction is possible because even without
relating to reality, propositions can be judged as true or false within a logical and conceptual
system.
Logic of deduction
Deduction involves drawing logical consequences from premises. An inference is endorsed as deductively valid when the truth of all premises guarantees the truth of the conclusion.
For instance,
First premise: All the beans from the bag are white (True).
Second premise: These beans are from this bag (True).
Conclusion: Therefore, these beans are white (True). (Peirce, 1986).
According to Peirce (1986), deduction is a form of analytic inference, and all mathematical demonstrations are of this sort.
Limitations of deduction
There are several limitations of deductive logic. First, deductive logic confines the
conclusion to a dichotomous answer (True/False). A typical example is the rejection or failure of
rejection of the null hypothesis. This narrowness of thinking is not endorsed by the Peircean
philosophical system, which emphasizes the search for a deeper insight of a surprising fact.
Second, this kind of reasoning cannot lead to the discovery of knowledge that is not already
embedded in the premises (Peirce, 1934/1960). In some cases the premise may even be tautological, that is, true by definition. Brown (1963) illustrated this weakness by using an example in
economics: An entrepreneur seeks maximization of profits. The maximum profits will be gained
when marginal revenue equals marginal cost. An entrepreneur will operate his business at the
equilibrium between marginal cost and marginal revenue.
The above deduction simply tells you that a rational man would like to make more money.
There is a similar example in cognitive psychology:
Human behaviors are rational.
One of several options is more efficient in achieving the goal.
A rational human will take the option that directs him to achieve his goal (Anderson,
1990).
The above two deductive inferences simply show that a rational man will do
rational things. The specific rational behaviors have been included in the bigger set of generic
rational behaviors. Since deduction facilitates analysis based upon existing knowledge rather than
generating new knowledge, Josephson and Josephson (1994) viewed deduction as truth preserving
and abduction as truth producing.
Third, deduction is incomplete as we cannot logically prove all the premises are true.
Russell and Whitehead (1910) attempted to develop a self-sufficient logical-mathematical system.
In their view, not only can mathematics be reduced to logic, but also logic is the foundation of
mathematics. However, Gödel (1947/1986) showed that we cannot even establish all mathematics
by deductive proof. To be specific, it is impossible to have a self-sufficient system as Russell and
Whitehead postulated. Any lower order theorem or premise needs a higher order theorem or
premise for substantiation; and no system can be complete and consistent at the same time.
Deduction alone is clearly incapable of establishing the empirical knowledge we seek.
Peirce reviewed Russell's book "Principles of Mathematics" in 1903, but he only wrote a
short paragraph with vague comments. Nonetheless, based on Peirce's other writings on logic and
mathematics, Haack (1993) concluded that Peirce would be opposed to Russell and Whitehead's
notion that the epistemological foundations of mathematics lie in logic. It is questionable whether
the logic or the mathematics can fully justify deductive knowledge. No matter how logical a
hypothesis is, it is only sufficient within the system; it is still tentative and requires further
investigation with external proof.
This line of thought posed a serious challenge to researchers who are confident in the
logical structure of statistics. Mathematical logic relies on many unproven premises and
assumptions. Statistical conclusions are considered true only given that all premises and
assumptions that are applied are true. In recent years many Monte Carlo simulations have been
conducted to determine how robust certain tests are, and which statistics should be favored. The
reference and criteria of all these studies are within logical-mathematical systems without any
worldly concerns. For instance, the Fisher protected t-test is considered inferior to the Ryan test
and the Tukey test because it cannot control the inflated Type I error very well (Toothaker, 1993),
not because any psychologists or educators made a terribly wrong decision based upon the Fisher
protected t-test. The Pillai-Bartlett statistic is considered superior to Wilks' Lambda and the Hotelling-Lawley Trace because of its much greater robustness against unequal covariance matrices (Olson, 1976), not because any significant scientific breakthroughs have been made with the use of the Pillai-Bartlett statistic. For Peirce this kind of self-referential deduction cannot lead to progress in
knowledge. Knowing is an activity which is by definition involvement with the real world
(Burrell, 1968).
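The following sketch illustrates the kind of Monte Carlo study described above; the group sizes, variances, and number of replications are chosen only for illustration and are not those of the studies cited. It estimates the empirical Type I error rate of the pooled two-sample t-test when its equal-variance assumption is violated, with Welch's correction shown for comparison.

```python
# A minimal Monte Carlo sketch: empirical Type I error of the pooled t-test
# under unequal variances and unequal group sizes (H0 is true throughout).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, alpha = 5000, 0.05
n1, n2 = 10, 40          # unequal group sizes
sd1, sd2 = 4.0, 1.0      # unequal variances; both true means are 0

rejections_pooled, rejections_welch = 0, 0
for _ in range(n_sims):
    g1 = rng.normal(0, sd1, n1)
    g2 = rng.normal(0, sd2, n2)
    if stats.ttest_ind(g1, g2, equal_var=True).pvalue < alpha:
        rejections_pooled += 1
    if stats.ttest_ind(g1, g2, equal_var=False).pvalue < alpha:  # Welch correction
        rejections_welch += 1

print(f"pooled t Type I error ≈ {rejections_pooled / n_sims:.3f}")  # inflated above .05
print(f"Welch t  Type I error ≈ {rejections_welch / n_sims:.3f}")   # closer to .05
```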
As a matter of fact, the inventor of deductive syllogisms, Aristotle, did not isolate formal
logic from external reality and he repeatedly admitted the importance of induction. It is not merely
that the conclusion is deduced correctly according to the formal laws of logic; Aristotle assumed
that the conclusion is verified in reality. Also, he devoted attention to the question: How do we
know the first premises from which deduction must start? (Copleston, 1946/85; Russell, 1945/72)
Moreover, the development of quantitative research methodology has not been driven by logic alone. Indeed, statistics is by no means pure mathematics without interaction with the real world.
Gauss discovered the Gaussian distribution through astronomical observations. Fisher built his
theories from applications of biometrics and agriculture. Survival analysis or the hazard model is
the fruit of medical and sociological research. Differential item functioning (DIF) was developed
to address the issue of reducing test bias.
Last but not least, for several decades philosophers of science have been debating the issue of under-determination, a problematic situation in which several rival theories are
empirically equivalent but logically incompatible (de Regt, 1994; Psillos, 1999).
Under-determination is no stranger to quantitative researchers, who constantly face model
equivalency in factor analysis and structural equation modeling. Under-determination, according
to Leplin (1997), is a problem rooted in the limitations of the hypothetico-deductive methodology,
which is disconfirmatory in nature. For instance, widely adopted hypothesis testing is based on the logic of computing the probability of obtaining the observed data (D) given that the theory or the hypothesis (H) is true, that is, P(D|H). At most this mode of inquiry can inform us when to reject a
theory, but not when to accept one. Thus, quantitative researchers usually draw a conclusion using
the language in this fashion: “Reject the hypothesis” or “fail to reject the hypothesis,” but not
“accept the hypothesis” or “fail to accept the hypothesis.” Passing a test is not confirmatory if the
test is one that even a false theory would be expected to pass. At first glance it may be strange to
say that a false theory could lead to passing of a test, but that is how under-determination occurs.
Whenever a theory is proposed for predicting or explaining a phenomenon, it has a deductive
structure. What is deduced may be an empirical regularity that holds only statistically, and thus the deduced consequence can hold as well for a false theory as for the true one.
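A minimal sketch of this asymmetry follows, using simulated data; the p-value approximates the probability of data at least this extreme given the null hypothesis, so the resulting conclusion is phrased as "reject" or "fail to reject," never "accept."

```python
# A minimal sketch of P(D | H0) and the reject / fail-to-reject asymmetry.
# The sample is simulated and purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.2, scale=1.0, size=30)   # hypothetical sample

res = stats.ttest_1samp(sample, popmean=0.0)       # H0: population mean = 0
print(f"t = {res.statistic:.2f}, p ≈ P(D at least this extreme | H0) = {res.pvalue:.3f}")

if res.pvalue < 0.05:
    print("Conclusion: reject H0")          # not "accept H1"
else:
    print("Conclusion: fail to reject H0")  # not "accept H0"
```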
Summary
For Peirce, deduction alone is a necessary condition, but not a sufficient condition of
knowledge. Peirce (1934/1960) warned that deduction is applicable only to the ideal state of
things. In other words, deduction alone can be applied to a well-defined problem, but not an
ill-defined problem, which is more likely to be encountered by researchers. Nevertheless,
deduction performs the function of clarifying the relation of logical implications. When
well-defined categories result from abduction, premises can be generated for deductive reasoning.
Induction
Premise of induction
For Peirce, induction is founded on the premise that inquiry is a self-corrective process carried out by the whole intellectual community across time. Peirce stressed the collective nature of
inquiry by saying “No mind can take one step without the aid of other minds” (1934/1960, p.398).
Unlike Kuhn's (1962) emphasis on paradigm shift and incommensurability between different
paradigms, Peirce stressed the continuity of knowledge. First, knowledge does not emerge out of
pure logic. Instead, it is a historical and social product. Second, Peirce rejected the Cartesian skepticism of doubting everything (Descartes, 1641/1964). To some extent we have to fix our
beliefs on those positions that are widely accepted by the intellectual community (Peirce, 1877).
Kuhn proposed that the advancement of human knowledge is a revolutionary process in
which new frameworks overthrow outdated frameworks. Peirce, in contrast, considered
knowledge to be continuous and cumulative. Rescher (1978) used the geographical-exploration
model as a metaphor to illustrate Peirce's idea: The replacement of a flat-world view with a
global-world view is a change in conceptual understanding, or a paradigm shift. After we have
discovered all the continents and oceans, measuring the height of Mount Everest and the depth of
the Nile River is adding details to our conceptual knowledge. Although Kuhn's theory looks
glamorous, as a matter of fact, paradigm shifts might occur only once in several centuries. The
majority of scholars are just adding details to existing frameworks. Knowledge is self-corrective
insofar as we inherit the findings from previous scholars and refine them.
Logic of induction
Induction, as introduced by Francis Bacon, was a direct revolt against deduction. Bacon
(1620/1960) found that people who use deductive reasoning rely on the authority of antiquity
(premises made by masters), and the tendency of the mind to construct knowledge within the mind
itself. Bacon criticized deductive users as spiders for they make a web of knowledge out of their
own substance. Although the meaning of deductive knowledge is entirely self-referent, deductive
users tend to take those propositions as assertions. Propositions and assertions are not the same
level of knowledge. For Peirce, abduction and deduction give only propositions; self-correcting induction, however, provides empirical support for assertions.
Inductive logic is often based upon the notion that probability is the relative frequency in the long run and that a general law can be concluded from numerous cases. For example,
A1, A2, A3 ... A100 are B.
A1, A2, A3 ... A100 are C.
Therefore, B is C.
Or
A1, A2, A3, … A100 are B.
Hence, all A are B.
Nonetheless, the above is by no means the only way of understanding induction. Induction
could also take the form of prediction:
A1, A2, A3 ... A100 are B.
Thus, A101 will be B.
Limitations of induction
Hume (1777/1912) argued that things are inconclusive by induction because in infinity
there are always new cases and new evidence. Induction can be justified if, and only if, instances of
which we have no experience resemble those of which we have experience. Thus, the problem of
induction is also known as “the skeptical problem about the future” (Hacking, 1975). Take the
previous argument as an example. If A101 is not B, the statement "B is C" will be refuted.
We never know when a regression line will turn flat, go down, or go up. Even inductive
reasoning using numerous accurate data and high power computing can go wrong, because
predictions are made only under certain specified conditions (Samuelson, 1967). For instance,
based on case studies in the 19th century, the sociologist Max Weber (1904/1976) argued that capitalism could develop in Europe because of the Protestant work ethic, whereas other cultures, such as Chinese Confucianism, were in essence incompatible with capitalism. However, after World
War Two, the emergence of Asian economic powers such as Taiwan, South Korea, Hong Kong
and Singapore disconfirmed the Weberian hypothesis.
Take the modern economy as another example. Due to American economic problems in
the early 1980s, quite a few reputable economists made gloomy predictions about the U.S. economy, such as Japan's takeover of the American economic and technological throne. By the end of the
decade, Roberts (1989) concluded that those economists were wrong; contrary to those forecasts,
in the 1980s the U.S. enjoyed the longest economic expansion in its history. In the 1990s, the
economic positions of the two nations changed: Japan experienced recession while America
experienced expansion.
“The skeptical problem about the future” is also known as “the old riddle of induction.” In
a similar vein to the old riddle, Goodman (1954/1983) introduced the “new riddle of induction,” in
which conceptualization of kinds plays an important role. Goodman demonstrated that whenever
we reach a conclusion based upon inductive reasoning, we could use the same rules of inference,
but different criteria of classification, to draw an opposite conclusion. Goodman’s example is: We
could conclude that all emeralds are green given that 1000 observed emeralds are green. But what
would happen if we re-classify “green” objects as “blue” and “blue” as “green” in the year 2020?
We can say that something is “grue” if it was considered “green” before 2020 and it would be
treated as “blue” after 2020. We can also say that something is “bleen” if it was counted as a “blue”
object before 2020 and it would be regarded as “green” after 2020. Thus, the new riddle is also
known as “the grue problem.”
In addition, Hacking (1999) cited the example of “child abuse,” a construct that has been
taken for granted by many Americans, to demonstrate the new riddle. Hacking pointed out that
actually the concept of “child abuse” in the current form did not exist in other cultures. Cruelty to
children just emerged as a social issue during the Victorian period, but “child abuse” as a social
science concept was formulated in America around 1960. To this extent, Victorians viewed cruelty
to children as a matter of poor people harming their children, but to Americans child abuse was a
classless phenomenon. When the construct “child abuse” became more and more popular, many
American adults recollected childhood trauma during psychotherapy sessions, but the authenticity of these child abuse cases was highly questionable. Hacking proposed that “child abuse” is a typical
example of how re-conceptualization in the future alters our evaluations of the past.
Another main theme of the new riddle focuses on the problem of projectibility. Whether an
“observed pattern” is projectible depends on how we conceptualize the pattern. Skyrms (1975)
used a mathematical example to illustrate this problem: If this series of digits (1, 2, 3, 4, 5) is
shown, what is the next projected number? Without any doubt, for most people the intuitive
answer is simply “6.” Skyrms argued that this seemingly straightforward numeric sequence could be generated by this function: (A-1)(A-2)(A-3)(A-4)(A-5)+A. Let’s step through this
example using an Excel spreadsheet. In Cell A1 to A10 of the Excel spreadsheet, enter 1-10,
respectively. Next, in Cell B1 enter the function “=(A1-1)*(A1-2)*(A1-3)*(A1-4)*(A1-5)+A1”
and this function will yield “1.” Afterwards, select Cell B1 and “drag” the cursor downwards to
Cell B10; it will copy the same function to B2, B3, B4…B10. As a result, Cells B1 to B5 will match Cells A1 to A5, which are (1, 2, 3, 4, 5). However, the sixth number in Column B,
which is 126, substantively deviates from the intuitive projection. All numbers in the cells below
B6 are also surprising. Skyrms pointed out that whatever number we want to predict for the sixth
number of the series, there is always a generating function that can fit the given members of the
sequence and that will yield the projection we want. This indeterminacy of projection is a
mathematical fact.
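The Excel walk-through above can be condensed into a few lines of code; the sketch below simply evaluates Skyrms' generating function and shows where it departs from the intuitive continuation.

```python
# Skyrms' generating function: agrees with the intuitive series for A = 1..5,
# then departs from it (f(6) = 126, f(7) = 727, ...).
def f(a):
    return (a - 1) * (a - 2) * (a - 3) * (a - 4) * (a - 5) + a

for a in range(1, 11):
    print(a, f(a))
```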
Furthermore, the new riddle, which is considered an instantiation of the general problem of
under-determination in epistemology, is germane to quantitative researchers in the context of
“model equivalency” and “factor indeterminacy” (DeVito, 1997; Forster, 1999; Forster & Sober, 1994; Kieseppa, 2001; Mulaik, 1996; Turney, 1999). Specifically, the new riddle and other
philosophical notions of under-determination illustrate that all scientific theories are
under-determined by the limited evidence in the sense that the same phenomenon can be equally
well-explained by rival models that are logically incompatible. In factor analysis, for example,
whether adopting a one-factor or a two-factor model may have tremendous impact on subsequent
inferences. In the curve-fitting problem, whether one uses the Akaike Information Criterion or the Bayesian Information Criterion is crucial in the sense that the two criteria could lead to different
conclusions. Hence, the preceding problem of model selection criteria in quantitative-based
research is analogous to the problem of re-conceptualization of “child abuse” and the problem of
projectibility based upon generating functions. At the present time, there are no commonly agreed
solutions to either the new riddle or the model selection criteria.
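To make the model-selection point concrete, the sketch below (simulated data and illustrative settings only) fits polynomials of several degrees to the same data and computes both criteria; because AIC penalizes each parameter by 2 while BIC penalizes it by ln(n), the two criteria need not favor the same degree.

```python
# A minimal sketch of the AIC/BIC selection problem on simulated data.
import numpy as np

rng = np.random.default_rng(7)
n = 50
x = np.linspace(-2, 2, n)
y = 1.0 + 0.8 * x + 0.25 * x**2 + rng.normal(0, 1.0, n)   # weak quadratic effect

def ic_scores(degree):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 1                       # number of fitted coefficients
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

degrees = [1, 2, 3]
aics, bics = zip(*[ic_scores(d) for d in degrees])
print("degree preferred by AIC:", degrees[int(np.argmin(aics))])
print("degree preferred by BIC:", degrees[int(np.argmin(bics))])
# With a weak true effect and moderate n, the two criteria need not agree;
# the chosen model then shapes all subsequent inferences.
```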
Second, induction suggests the possible outcome in relation to events in the long run. This is not definable for an individual event. To make a judgment about a single event based on a probability, such as "your chance to survive this surgery is 75 percent," is nonsense. In actuality, the patient will either live or die. For a single event, not only is the probability undefinable, but also the explanatory
power is absent. Induction yields a general statement that explains the event of observing, but not
the facts observed. Josephson and Josephson (1994) gave this example: “Suppose I choose a ball at
random (arbitrarily) from a large hat containing colored balls. The ball I choose is red. Does the
fact that all of the balls in the hat are red explain why this particular ball is red? No…’All A’s are
B’s’ cannot explain why ‘this A is a B’ because it does not say anything about how its being an A
is connected with its being a B.” (p.20)
As mentioned before, induction also suggests probability as a relative frequency. In the
discussion of the probability of induction, Peirce (1986) expressed his skepticism about this idea: “The relative probability…is something which we should have a right to talk about if universes were as plenty as blackberries, if we could put a quantity of them in a bag, shake them well up, draw out a sample, and examine them to see what proportion of them had one arrangement and what proportion
another.” (pp. 300-301). Peirce is not alone in this matter. To many quantitative researchers, other
types of interpretations of probability, such as the subjective interpretation and the propensity
interpretation, should be considered.
Third, Carnap, as an inductive logician, knew the limitations of induction. Carnap (1952)
argued that induction might lead to the generalization of empirical laws but not theoretical laws.
For instance, even if we observe thousands of stones, trees and flowers, we never reach a point at
which we observe a molecule. After we heat many iron bars, we can conclude the empirical fact
that metals will bend when they are heated. But we will never discover the physics of expansion
coefficients in this way.
Indeed, superficial, empirically based induction can lead to wrong conclusions. For example, from repeated observations it seems that heavy bodies (e.g., metal, stone) fall faster than
lighter bodies (paper, feather). This Aristotelian belief had misled European scientists for over a
thousand years. Galileo argued that indeed both heavy and light objects fall at the same speed.
There is a popular myth that Galileo conducted an experiment in the Tower of Pisa to prove his
point. Probably he never performed this experiment. Actually this experiment was performed by
one of Galileo's critics and the result supported Aristotle's notion. Galileo did not get the law from
observation, but by a chain of logical arguments (Kuhn, 1985). Again, superficial induction runs the risk of yielding superficial and incorrect conclusions.
Quantitative researchers have been warned that high correlations among variables may not
be meaningful. For example, if one plots GNP, educational level, or anything against time, one
may see some significant but meaningless correlation (Yule, 1926). As Peirce (1934/1960) pointed
out, induction cannot furnish us with new ideas because observations or sensory data only lead us
to superficial conclusions but not the "bottom of things" (p.878).
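The warning can be illustrated with a small simulation in the spirit of Yule (1926); the two series below are invented, not real GNP or schooling figures, and merely share an upward time trend.

```python
# Two series that only share a time trend can show a large, meaningless correlation.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1950, 2000)
gnp_like = 100 + 2.0 * (years - 1950) + rng.normal(0, 5, years.size)          # trends up
schooling_like = 8 + 0.1 * (years - 1950) + rng.normal(0, 0.5, years.size)    # also trends up

r_raw = np.corrcoef(gnp_like, schooling_like)[0, 1]
# De-trend both series (year-to-year changes); the "relationship" largely disappears.
r_detrended = np.corrcoef(np.diff(gnp_like), np.diff(schooling_like))[0, 1]
print(f"correlation of raw series:           {r_raw:.2f}")        # high
print(f"correlation of year-to-year changes: {r_detrended:.2f}")  # near zero
```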
Last but not least, induction as the sole source of reliable knowledge was never inductively
concluded. The eighteenth-century British moral philosopher Thomas Reid embraced the conviction that the Baconian philosophy, or the inductive method, could be extended from the realm
of natural science to mind, society, and morality. He firmly believed that through an inductive
analysis of the faculties and powers by which the mind knows, feels, and wills, moral philosophers
could eventually establish the scientific foundations for morality. However, some form of
circularity was inevitable in his argument when induction was validated by induction. Reid and his associates countered this challenge by arguing that the human mental structure was
designed explicitly and solely for an inductive means of inquiry (cited in Bozeman, 1977).
However, today the issue of inductive circularity remains unsettled because psychologists still have not reached a consensus on the human reasoning process. While some psychologists
found that the frequency approach appears to be more natural to learners in the context of
quantitative reasoning (Gigerenzer, 2003; Hoffrage, Gigerenzer, & Martignon, 2002), some other
psychologists have found that humans conduct inquiry in the form of Bayesian networks by the age of five (Gopnik & Schulz, 2004). Proclaiming a particular reasoning mode to be the structure of the human mind in a hegemonic tone, needless to say, would invite immediate protest.
Summary
For Peirce induction still has validity. Contrary to Hume's notion that our perception of
events is devoid of generality, Peirce argued that the existence we perceive must share generality
with other things in existence. Peirce's metaphysical system resolves the problem of induction by
asserting that the data from our perception are not reducible to discrete, logically and ontologically
independent events (Sullivan, 1991). In addition, for Peirce all empirical reasoning is essentially
making inferences from a sample to a population; the conclusion is merely approximately true
(O'Neill, 1993). Forster (1993) justified this view with the Law of Large Numbers. We do not know the real probability, owing to our finite existence; however, given a large number of cases, we can approximate the actual probability. We don't have to know everything to know
something. Also, we don't have to know every case to get an approximation. This approximation is
sufficient to fix our beliefs and lead us to further inquiry.
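A brief simulation illustrates this appeal to the Law of Large Numbers: the "true" probability below is of course stipulated for the example, but the observed relative frequency approaches it as the number of cases grows, which is all the approximation needed to fix belief provisionally.

```python
# Relative frequency converging toward a stipulated "true" probability.
import numpy as np

rng = np.random.default_rng(11)
true_p = 0.3                      # unknown to the inquirer in practice
for n in (10, 100, 10_000, 1_000_000):
    estimate = rng.binomial(1, true_p, n).mean()
    print(f"n = {n:>9,d}   observed relative frequency = {estimate:.4f}")
# The estimates converge toward 0.3 as n grows (Law of Large Numbers).
```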
Conclusion
In summary, abduction, deduction and induction have different merits and shortcomings.
Yet the combination of all three reasoning approaches provides researchers with a powerful tool of inquiry. For Peirce a reasoner should apply abduction, deduction and induction together in order
to achieve a comprehensive inquiry. Abduction and deduction are the conceptual understanding of
phenomena, and induction is the quantitative verification. At the stage of abduction, the goal is to
explore the data, find a pattern, and suggest a plausible hypothesis with the use of proper
categories; deduction is to build a logical and testable hypothesis based upon other plausible
premises; and induction is the approximation towards the truth in order to fix our beliefs for further
inquiry. In short, abduction creates, deduction explicates, and induction verifies.
A good example of their application can be found in the use of the Bayesian Inference
Network (BIN) in psychometrics (Mislevy, 1994). According to Mislevy, the BIN builds around
deductive reasoning to support subsequent inductive reasoning from realized data to probabilities of states. Yet abductive reasoning is vital to the process in two respects. First, abductive reasoning suggests the framework for inductive reasoning. Second, while the BIN is a tool for reasoning deductively and inductively within the posited structure, abduction is required to reason about the structure. Another example can be found in the mixed methodology developed by Johnson and Onwuegbuzie (2004). Research employing mixed methods (quantitative and qualitative methods) makes use of all three modes of reasoning. To be specific, its logic of inquiry includes the use of induction in pattern recognition, which is commonly used in thematic analysis in qualitative methods; the use of deduction, which is concerned with quantitative testing of theories and hypotheses; and abduction, which is about inference to the best explanation based on a set of available alternate explanations. It is important to note that researchers do not have to follow a specific order in using abduction, deduction, and induction. In Johnson and Onwuegbuzie's framework, abduction is a tool for justifying the results at the end rather than generating a hypothesis at the beginning of a study.
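As a toy sketch of the Bayesian Inference Network reasoning described above (not Mislevy's actual model), the following code posits a simple structure and conditional probabilities for a single proficiency state and then updates the probability of that state from hypothetical item responses; all numbers are invented for illustration.

```python
# The network structure and conditional probabilities are posited
# (abduction/deduction); observed item responses then update the probability
# of the latent proficiency state (induction).
prior_master = 0.5                                 # assumed P(student is a "master")
p_correct = {"master": 0.85, "nonmaster": 0.30}    # assumed conditional probabilities

def update(prior, observed_correct):
    """One Bayes-rule step: revise P(master) after one item response."""
    like_m = p_correct["master"] if observed_correct else 1 - p_correct["master"]
    like_n = p_correct["nonmaster"] if observed_correct else 1 - p_correct["nonmaster"]
    numer = like_m * prior
    return numer / (numer + like_n * (1 - prior))

belief = prior_master
for response in [True, True, False, True]:         # hypothetical item responses
    belief = update(belief, response)
    print(f"after response {response}: P(master) = {belief:.3f}")
```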
One of the goals of this article is to illustrate a tight integration among different modes of inquiry and its implications for exploratory and confirmatory analyses. Consider this
counter-example. Glymour (2001) viewed widespread applications of factor analysis as a sign of
system-wide failure in social sciences in terms of causal interpretations. As a strong advocate of
structural equation modeling, which is an extension of confirmatory factor analysis, Glymour is
very critical of this exploratory factor modeling approach. By reviewing the history of
psychometrics, Glymour stated that reliability (a stable factor structure) was never a goal of early
psychometricians. Thurstone faced the problem that there were many competing factor models
that were statistically equivalent. In order to “save the phenomena” (to uniquely determine the factor loadings), he developed the criterion of simple structure, which has no special
measure-theoretic virtue or special stability properties. In addition, on finite samples, factor
analysis may fail to recover the true causal structure because of statistical or algorithmic artifacts.
By contrast, Glymour (2005) developed path-searching algorithms for model building, in which huge data sets are collected, automated methods are employed to search for regularities in the data, and hypotheses are generated and tested along the way. The last point is especially
important because for Glymour there is no sharp distinction between the exploratory and
confirmatory steps.
However, even if path-searching algorithms are capable of carrying out hypothesis generation and testing together, it is doubtful whether the process is entirely confirmatory and not at all exploratory. In a strict sense, even confirmatory factor analysis (CFA) is a mixture of exploratory and confirmatory
techniques, in which the end product is derived in part from theory and in part from a
re-specification based on the analysis of the fit indices. The same argument applies equally to
path-searching. According to Peirce, in the long run scientific inquiry is a self-correcting process;
earlier theories will inevitably be revised or rejected by later theories. In this sense, all causal
conclusions, no matter how confirmatory they are, must be exploratory in nature because these
confirmed conclusions are subject to further investigations. In short, it is the author’s belief that
integration among abduction, induction, and deduction, as well as between exploratory and
confirmatory analyses, could enable researchers to conduct a thorough investigation.
Premises of abduction
Before discussing the logic of abduction and its application, it is important to point out its
premises. In the first half of the 20th century, verificationism derived from positivism dominated
the scientific community. For positivists unverifiable beliefs should be rejected. However,
according to Peirce, researchers must start from somewhere, even though the starting point is an
unproven or unverifiable assumption. This starting point of scientific consciousness is private
fancy a flash of thought, or a wild hypothesis. But it is the seed of creativity (Wright, 1999). This
approach is very different from positivism and opens more opportunities for inquirers (Callaway,
1999). In the essay The Fixation of Belief, (1877) Peirce said that we are satisfied with stable
beliefs rather than doubts. Although knowledge is fallible in nature, and in our limited lifetime we
cannot discover the ultimate truth, we will still fix our beliefs at certain points. At the same time,
Peirce did not encourage us to relax our mind and not pursue further inquiry. Instead, he saw
seeking knowledge as interplay between doubts and beliefs, though he did not explicitly use the
Hegelian term "dialectic."
The logic of abduction
Grounded in the fixation of beliefs, the function of abduction is to look for a pattern in a
surprising phenomenon and to suggest a plausible hypothesis. The following example illustrates
the function of abduction:
The surprising phenomenon, B, is observed.
But if A were true, B would be a matter of course.
Hence there is a reason to suspect that A might be true.
By the standard of deductive logic, the preceding reasoning is clearly unacceptable for it is
contradicted with a basic rule of inference in deduction, namely, Modus Poenes. Following this
rule, the legitimate form of reasoning takes the route as follows:
A is observed.
If A, then B.
Hence, B is accepted.
Modus Ponens is commonly applied in the context of conducting a series of deduction for
complicated scientific problems. For example, A; (A B); B; (B C); C; (C D); D…etc.
However, Peirce started from the other end:
B is observed.
If A, then B.
Hence, A can be accepted.
Logicians following deductive reasoning call this the fallacy of affirming the consequent.
Consider this example. It is logical to assert that “It rains; if it rains, the floor is wet; hence, the
floor is wet.” But any reasonable person can see the problem in making statements like: “The floor
is wet; if it rains, the floor is wet; hence, it rains.” Nevertheless, in Peirce’s logical framework this
abductive form of argument is entirely valid, especially when the research goal is to discover
plausible explanations for further inquiry (de Regt, 1994). In order to make inferences to the best
explanation, the researcher must need a set of plausible explanations, and thus, abduction is
usually formulated in the following mode:
The surprising phenomenon, X, is observed.
Among hypotheses A, B, and C, A is capable of explaining X.
Hence, there is a reason to pursue A.
At first glance, abduction is an educated guess among existing hypotheses. Thagard and
Shelley (1999) clarified this misconception. They explained that unifying conceptions were an
important part of abduction, and it would be unfortunate if our understanding of abduction were
limited to more mundane cases where hypotheses are simply assembled. Abduction does not occur
in the context of a fixed language, since the formation of new hypotheses often goes hand in hand
with the development of new theoretical terms such as “quark,” and “gene.” Indeed, Peirce
(1934/1960) emphasized that abduction is the only logical operation that introduces new ideas.
Some philosophers of science such as Popper (1968) and Hempel (1966) suggested that
there is no logic of discovery because discovery relies on creative imagination. Hempel used
Kekule’s discovery of the hexagonal ring as an example. The chemist Kekule failed to devise a
structural formula for the benzene molecule in spite of many trials. One evening he found the
solution to the problem while watching the dance of fire in his fireplace. Gazing into the flames, he
seemed to see atoms dancing in snakelike arrays and suddenly related this to the molecular
structure of benzene. This is how the hexagonal ring was discovered. However, it is doubtful
whether this story supports the notion that there is no logic of discovery. Why didn’t other people
make a scientific breakthrough by observing the fireplace? Does the background knowledge that
had been accumulated by Kekule throughout his professional career play a more important role to
the discovery of the hexagonal ring than a brief moment in front of a fireplace? The dance of fire
may serve as an analogy to the molecular structure that Kekule had contemplated. Without the
deep knowledge of chemistry, it is unlikely that anyone could draw inspiration by the dance of fire.
For Peirce, progress in science depends on the observation of the right facts by minds
furnished with appropriate ideas (Tursman, 1987). Definitely, the intuitive judgment made by an
intellectual is different from that made by a high school student. Peirce cited several examples of
remarkable correct guesses. All success is not simply luck. Instead, the opportunity was taken by
the people who were prepared:
a). Bacon's guess that heat was a mode of motion;
b). Young's guess that the primary colors were violet, green and red;
c). Dalton's guess that there were chemical atoms before the invention of microscope (cited
in Tursman, 1987).
By the same token to continue the last example, the cosmological view that "atom" is the
fundamental element of the universe, introduced by ancient philosophers Leucippus and
Democritus, revived by Epicurus, and confirmed by modern physicists, did not result from a lucky
guess. Besides the atomist theory, there were numerous other cosmological views such as the
Milesian school, which proposed that the basic elements were water, air, fire, earth … etc.
Atomists were familiar with them and provided answers to existing questions based on the existing
framework (Trundle, 1994).
Peirce stated that classification plays a major role in making a hypothesis; that is, the characters of a phenomenon are placed into certain categories (Peirce, 1878b). Although Peirce was not a Kantian (Feibleman, 1945), he endorsed Kant's categories in the Critique of Pure Reason (Kant, 1781/1969) as aids in making judgments of the phenomenal world. According to Kant, human thought and enlightenment depend on a limited number of a priori perceptual forms and ideational categories, such as causality, quality, time and space. Peirce also agreed with Kant that things have an internal structure of meaning. Abductive activities are concerned not with empirical hypotheses based on our sensory experience, but rather with the very structure of the meanings themselves (Rosenthal, 1993). Building on the Kantian framework, Peirce (1867/1960) later developed his "New list of categories." For Peirce, all cognition, ranging from perception to logical reasoning, is mediated by "elements of generality" (Peirce, 1934/1960). Based upon this notion of categorizing general elements, Hoffmann (1997) viewed abduction as a search for a mode of perception in the face of surprising facts.
Applications of abduction
Abduction can be readily applied to quantitative research, especially Exploratory Data Analysis (EDA) and exploratory statistics (ES), such as factor rotation in Exploratory Factor Analysis and path searching in Structural Equation Modeling (Glymour, Scheines, Spirtes, & Kelly, 1987; Glymour & Cooper, 1999). Josephson and Josephson (1994) argued that the whole notion of a controlled experiment is covertly based on the logic of abduction. In a controlled experiment, researchers control for alternative explanations and test the condition generated from the most plausible hypothesis. However, abduction shares more common ground with EDA than with controlled experiments. In EDA, after observing some surprising facts, we exploit them by checking the predicted values against the observed values and examining the residuals (Behrens, 1997). Although there may be more than one convincing pattern, we "abduct" only those that are more plausible for subsequent controlled experimentation. Since experimentation is hypothesis-driven and EDA is data-driven, the logic behind them is quite different. The abductive reasoning of EDA goes from data to hypotheses, while the inductive reasoning of experimentation goes from hypothesis to expected data. By the same token, in Exploratory Factor Analysis and Structural Equation Modeling there may be more than one possible way to achieve a fit between the data and the model; again, the researcher must "abduct" a plausible set of variables and paths for model building.
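To make the abductive move in EDA concrete, the following sketch (not from the original article; the data and the tentative linear model are simulated assumptions) fits a model, compares predicted with observed values, and lets surprising residuals suggest a new hypothesis:

    # A minimal sketch of the abductive move in EDA: fit a tentative model,
    # compare predicted with observed values, and let surprising residuals
    # suggest a new hypothesis. The data are simulated for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + rng.normal(0, 1, size=x.size)
    y[-5:] += 8.0  # a "surprising fact" hidden in the data

    # Tentative (linear) model
    slope, intercept = np.polyfit(x, y, 1)
    predicted = slope * x + intercept
    residuals = y - predicted

    # Abduction starts where the residuals are surprising
    surprising = np.abs(residuals) > 2 * residuals.std()
    print("Indices with surprising residuals:", np.where(surprising)[0])
    # Such a pattern might suggest a plausible hypothesis (e.g., a change point)
    # to be refined deductively and tested inductively in a later study.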
Shank (1991), Josephson and Josephson (1994), and Ottens and Shank (1995) related
abductive reasoning to detective work. Detectives collect related “facts” about people and
circumstances. These facts are actually shrewd guesses or hypotheses based on their keen powers
of observation. In this vein, the logic of abduction is in line with EDA. In fact, Tukey (1977, 1980)
often related EDA to detective work. In EDA, the role of the researcher is to explore the data in as
many ways as possible until a plausible "story" of the data emerges. EDA is not "fishing" for significant results from all possible angles during research; it is not a matter of trying out everything.
Rescher (1978) interpreted abduction as standing in opposition to Popper's (1963) falsificationism. There are millions of possible explanations for a phenomenon. Because of the economy of research, we cannot afford to falsify every possibility. As mentioned before, we don't have to know everything to know something. By the same token, we don't have to screen out every falsehood to dig out the authentic one. During the process of abduction, the researcher should be guided by the elements of generality to extract a proper mode of perception.
Summary
In short, abduction can be interpreted as conjecturing about the world with appropriate categories, which arise from the internal structure of meanings. The implication of abduction for researchers, as practiced in EDA and ES, is that the use of EDA and ES neither exhausts all possibilities nor makes hasty decisions. Researchers must be well equipped with proper categories in order to sort out the invariant features and patterns of phenomena. Quantitative research, in this sense, is not number crunching, but a thoughtful way of dissecting data.
Deduction
Premise of deduction
Aristotle is credited as the inventor of deduction (Trundle, 1994). Deduction presupposes
the existence of truth and falsity. Quine (1982) stated that the mission of logic is the pursuit of
truth, which is the endeavor to sort out the true statements from the false statements. Hoffmann
(1997) further elaborated this point by saying that the task of deductive logic is to define the
validity of one truth as it leads to another truth. It is important to note that the meaning of truth in
this context does not refer to the ontological, ultimate reality. Peirce made a distinction between
truth and reality: Truth is the understanding of reality through a self-corrective inquiry process by
the whole intellectual community across time. On the other hand, the existence of reality is
independent of human inquiry (Wiener, 1969). In terms of ontology, there is one reality. In regard to methodology and epistemology, there is more than one approach and more than one source of knowledge.
Reality is "what is" while truth is "what would be." Deduction is possible because even without
relating to reality, propositions can be judged as true or false within a logical and conceptual
system.
Logic of deduction
Deduction involves drawing logical consequences from premises. An inference is
endorsed as deductively valid when the truth of all premises guarantees the truth of the conclusion.
For instance,
First premise: All the beans from the bag are white (True).
Second premise: These beans are from this bag (True).
Conclusion: Therefore, these beans are white (True). (Peirce, 1986).
According to Peirce, deduction is a form of analytic inference, and of this sort are all mathematical demonstrations (Peirce, 1986).
Limitations of deduction
There are several limitations of deductive logic. First, deductive logic confines the
conclusion to a dichotomous answer (True/False). A typical example is the rejection or failure to reject the null hypothesis. This narrowness of thinking is not endorsed by the Peircean
philosophical system, which emphasizes the search for a deeper insight of a surprising fact.
Second, this kind of reasoning cannot lead to the discovery of knowledge that is not already
embedded in the premise (Peirce, 1934/1960). In some cases the premise may even be
tautological--true by definition. Brown (1963) illustrated this weakness by using an example in
economics: An entrepreneur seeks maximization of profits. The maximum profits will be gained
when marginal revenue equals marginal cost. An entrepreneur will operate his business at the
equilibrium between marginal cost and marginal revenue.
The above deduction simply tells you that a rational man would like to make more money.
There is a similar example in cognitive psychology:
Human behaviors are rational.
One of several options is more efficient in achieving the goal.
A rational human will take the option that directs him to achieve his goal (Anderson,
1990).
The above two deductive inferences simply show that a rational man will do rational things. The specific rational behaviors are already included in the larger set of generic rational behaviors. Since deduction facilitates analysis based upon existing knowledge rather than
generating new knowledge, Josephson and Josephson (1994) viewed deduction as truth preserving
and abduction as truth producing.
Third, deduction is incomplete as we cannot logically prove all the premises are true.
Russell and Whitehead (1910) attempted to develop a self-sufficient logical-mathematical system.
In their view, not only can mathematics be reduced to logic, but also logic is the foundation of
mathematics. However, Gödel (1947/1986) showed that we cannot even establish all mathematics
by deductive proof. To be specific, it is impossible to have a self-sufficient system as Russell and
Whitehead postulated. Any lower order theorem or premise needs a higher order theorem or
premise for substantiation; and no system can be complete and consistent at the same time.
Deduction alone is clearly incapable of establishing the empirical knowledge we seek.
Peirce reviewed Russell's book "Principles of Mathematics" in 1903, but he only wrote a
short paragraph with vague comments. Nonetheless, based on Peirce's other writings on logic and
mathematics, Haack (1993) concluded that Peirce would be opposed to Russell and Whitehead's
notion that the epistemological foundations of mathematics lie in logic. It is questionable whether logic or mathematics alone can fully justify deductive knowledge. No matter how logical a hypothesis is, it is sufficient only within the system; it remains tentative and requires further investigation with external evidence.
This line of thought posed a serious challenge to researchers who are confident in the
logical structure of statistics. Mathematical logic relies on many unproven premises and
assumptions. Statistical conclusions are considered true only given that all of the premises and assumptions applied are true. In recent years many Monte Carlo simulations have been conducted to determine how robust certain tests are and which statistics should be favored. The frames of reference and criteria of all these studies lie within logical-mathematical systems, without any worldly concerns. For instance, the Fisher protected t-test is considered inferior to the Ryan test and the Tukey test because it cannot control the inflated Type I error very well (Toothaker, 1993), not because any psychologists or educators made a terribly wrong decision based upon the Fisher protected t-test. The Pillai-Bartlett statistic is considered superior to Wilks's Lambda and the Hotelling-Lawley Trace because of its much greater robustness against unequal covariance matrices (Olson, 1976), not because any significant scientific breakthroughs have been made with the use of the Pillai-Bartlett statistic. For Peirce this kind of self-referential deduction cannot lead to progress in
knowledge. Knowing is an activity which is by definition involvement with the real world
(Burrell, 1968).
As a matter of fact, the inventor of deductive syllogisms, Aristotle, did not isolate formal logic from external reality, and he repeatedly acknowledged the importance of induction. For Aristotle, it is not enough that the conclusion be deduced correctly according to the formal laws of logic; the conclusion must also be verified in reality. He also devoted attention to the question of how we know the first premises from which deduction must start (Copleston, 1946/85; Russell, 1945/72).
Indeed, much of the development of quantitative research methodology has not been restricted to logic. Statistics is by no means pure mathematics without interaction with the real world. Gauss discovered the Gaussian distribution through astronomical observations. Fisher built his theories from applications in biometrics and agriculture. Survival analysis, or the hazard model, is the fruit of medical and sociological research. Differential item functioning (DIF) was developed to address the issue of reducing test bias.
Last but not least, for several decades philosophers of science have been debating the
issue of under-determination, a problematic situation in which several rival theories are
empirically equivalent but logically incompatible (de Regt, 1994; Psillos, 1999).
Under-determination is no stranger to quantitative researchers, who constantly face model
equivalency in factor analysis and structural equation modeling. Under-determination, according
to Leplin (1997), is a problem rooted in the limitations of the hypothetico-deductive methodology,
which is disconfirmatory in nature. For instance, widely adopted hypothesis testing is based on the logic of computing the probability of obtaining the observed data (D) given that the theory or the hypothesis (H) is true, P(D|H). At most this mode of inquiry can inform us when to reject a
theory, but not when to accept one. Thus, quantitative researchers usually draw a conclusion using
the language in this fashion: “Reject the hypothesis” or “fail to reject the hypothesis,” but not
“accept the hypothesis” or “fail to accept the hypothesis.” Passing a test is not confirmatory if the
test is one that even a false theory would be expected to pass. At first glance it may seem strange to say that a false theory could pass a test, but that is how under-determination occurs. Whenever a theory is proposed for predicting or explaining a phenomenon, it has a deductive structure. What is deduced may be an empirical regularity that holds only statistically, and thus the deduction works as well for false theories as for the true one.
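As a rough illustration of this P(D|H) logic, the following sketch (the data are simulated, and the one-sample t-test is only a stand-in for any hypothesis test) computes the probability of data at least as extreme as those observed, under the assumption that the hypothesis is true:

    # A minimal sketch of the P(D|H) logic described above: the p-value is
    # computed under the assumption that the hypothesis H (here, a population
    # mean of 0) is true, so at most it tells us when to reject H, not when to
    # accept it. The data are simulated for illustration only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sample = rng.normal(loc=0.3, scale=1.0, size=30)   # observed data D

    t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
    print(f"t = {t_stat:.2f}, P(D at least this extreme | H true) = {p_value:.3f}")

    # Conventional language: "reject H" if p_value < .05, otherwise "fail to
    # reject H"; never "accept H", because many rival hypotheses could have
    # produced data that pass the same test (under-determination).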
Summary
For Peirce, deduction alone is a necessary condition, but not a sufficient condition of
knowledge. Peirce (1934/1960) warned that deduction is applicable only to the ideal state of
things. In other words, deduction alone can be applied to a well-defined problem, but not an
ill-defined problem, which is more likely to be encountered by researchers. Nevertheless,
deduction performs the function of clarifying the relation of logical implications. When
well-defined categories result from abduction, premises can be generated for deductive reasoning.
Induction
Premise of induction
For Peirce, induction is founded on the premise that inquiry is a self-corrective process carried out by the whole intellectual community across time. Peirce stressed the collective nature of inquiry by saying, "No mind can take one step without the aid of other minds" (1934/1960, p. 398). Unlike Kuhn's (1962) emphasis on paradigm shifts and incommensurability between different paradigms, Peirce stressed the continuity of knowledge. First, knowledge does not emerge out of pure logic; rather, it is a historical and social product. Second, Peirce rejected the Cartesian skepticism of doubting everything (Descartes, 1641/1964). To some extent we have to fix our beliefs on those positions that are widely accepted by the intellectual community (Peirce, 1877).
Kuhn proposed that the advancement of human knowledge is a revolutionary process in
which new frameworks overthrow outdated frameworks. Peirce, in contrast, considered
knowledge to be continuous and cumulative. Rescher (1978) used the geographical-exploration
model as a metaphor to illustrate Peirce's idea: The replacement of a flat-world view with a
global-world view is a change in conceptual understanding, or a paradigm shift. After we have
discovered all the continents and oceans, measuring the height of Mount Everest and the depth of
the Nile River is adding details to our conceptual knowledge. Although Kuhn's theory looks glamorous, paradigm shifts may in fact occur only once in several centuries. The majority of scholars are simply adding details to existing frameworks. Knowledge is self-corrective
insofar as we inherit the findings from previous scholars and refine them.
Logic of induction
Induction, as introduced by Francis Bacon, was a direct revolt against deduction. Bacon (1620/1960) found that people who use deductive reasoning rely on the authority of antiquity (premises made by masters) and on the tendency of the mind to construct knowledge within the mind itself. Bacon criticized deductive reasoners as spiders, for they spin a web of knowledge out of their own substance. Although the meaning of deductive knowledge is entirely self-referential, deductive reasoners tend to take those propositions as assertions. Propositions and assertions are not the same level of knowledge. For Peirce, abduction and deduction give only propositions; self-correcting induction, however, provides empirical support for assertions.
Inductive logic is often based upon the notion that probability is the relative frequency in the long run and that a general law can be concluded from numerous cases. For example,
A1, A2, A3 ... A100 are B.
A1, A2, A3 ... A100 are C.
Therefore, B is C.
Or
A1, A2, A3, … A100 are B.
Hence, all A are B.
Nonetheless, the above is by no means the only way of understanding induction. Induction could also take the form of prediction:
A1, A2, A3 … A100 are B.
Thus, A101 will be B.
Limitations of induction
Hume (1777/1912) argued that conclusions reached by induction are always inconclusive, because in infinity there are always new cases and new evidence. Induction can be justified if, and only if, instances of which we have no experience resemble those of which we have experience. Thus, the problem of induction is also known as "the skeptical problem about the future" (Hacking, 1975). Take the previous argument as an example: if A101 is not B, the statement "all A are B" will be refuted.
We never know when a regression line will turn flat, go down, or go up. Even inductive reasoning using numerous accurate data and high-powered computing can go wrong, because predictions are made only under certain specified conditions (Samuelson, 1967). For instance, based on case studies from the 19th century, the sociologist Max Weber (1904/1976) argued that capitalism could develop in Europe because of the Protestant work ethic, while other cultures, such as Chinese Confucianism, were in essence incompatible with capitalism. However, after World War Two, the emergence of Asian economic powers such as Taiwan, South Korea, Hong Kong and Singapore disconfirmed the Weberian hypothesis.
Take the modern economy as another example. Due to American economic problems in the early 1980s, quite a few reputable economists made gloomy predictions about the U.S. economy, such as Japan's takeover of America's economic and technological throne. By the end of the decade, Roberts (1989) concluded that those economists were wrong; contrary to those forecasts, in the 1980s the U.S. enjoyed the longest economic expansion in its history. In the 1990s, the economic positions of the two nations reversed: Japan experienced recession while America experienced expansion.
“The skeptical problem about the future” is also known as “the old riddle of induction.” In
a similar vein to the old riddle, Goodman (1954/1983) introduced the “new riddle of induction,” in
which conceptualization of kinds plays an important role. Goodman demonstrated that whenever
we reach a conclusion based upon inductive reasoning, we could use the same rules of inference,
but different criteria of classification, to draw an opposite conclusion. Goodman’s example is: We
could conclude that all emeralds are green given that 1000 observed emeralds are green. But what
would happen if we re-classify “green” objects as “blue” and “blue” as “green” in the year 2020?
We can say that something is “grue” if it was considered “green” before 2020 and it would be
treated as “blue” after 2020. We can also say that something is “bleen” if it was counted as a “blue”
object before 2020 and it would be regarded as “green” after 2020. Thus, the new riddle is also
known as “the grue problem.”
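A toy illustration of Goodman's re-classification (the predicates and the year 2020 cut-off follow the example above; the observation list is hypothetical) shows how the same observations can support two predicates that project differently:

    # A toy illustration of the new riddle: the same 1000 observations support
    # "all emeralds are green" under the ordinary predicate, yet equally support
    # "all emeralds are grue" under the re-classified predicate, and the two
    # projections diverge after 2020. All data are hypothetical.
    def is_green(observed_color):
        return observed_color == "green"

    def is_grue(observed_color, year_observed):
        # grue: seen as green before 2020, or seen as blue in 2020 and afterwards
        return (observed_color == "green" and year_observed < 2020) or \
               (observed_color == "blue" and year_observed >= 2020)

    observations = [("green", 2005)] * 1000   # 1000 emeralds observed before 2020
    print(all(is_green(c) for c, _ in observations))    # True: supports "green"
    print(all(is_grue(c, y) for c, y in observations))  # True: equally supports "grue"
    # Yet "green" projects green after 2020, while "grue" projects blue after 2020.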
In addition, Hacking (1999) cited the example of “child abuse,” a construct that has been
taken for granted by many Americans, to demonstrate the new riddle. Hacking pointed out that
actually the concept of “child abuse” in the current form did not exist in other cultures. Cruelty to
children just emerged as a social issue during the Victorian period, but “child abuse” as a social
science concept was formulated in America around 1960. To this extent, Victorians viewed cruelty
to children as a matter of poor people harming their children, but to Americans child abuse was a
classless phenomenon. When the construct "child abuse" became more and more popular, many American adults recollected childhood trauma during psychotherapy sessions, but the authenticity of these child abuse cases was highly questionable. Hacking proposed that "child abuse" is a typical
example of how re-conceptualization in the future alters our evaluations of the past.
Another main theme of the new riddle focuses on the problem of projectibility. Whether an
“observed pattern” is projectible depends on how we conceptualize the pattern. Skyrms (1975)
used a mathematical example to illustrate this problem: If this series of digits (1, 2, 3, 4, 5) is
shown, what is the next projected number? Without any doubt, for most people the intuitive
answer is simply "6." Skyrms argued that this seemingly straightforward numeric sequence could have been generated by the function (A-1)(A-2)(A-3)(A-4)(A-5)+A. Let's step through this example using an Excel spreadsheet. In Cells A1 to A10, enter 1 through 10, respectively. Next, in Cell B1 enter the formula "=(A1-1)*(A1-2)*(A1-3)*(A1-4)*(A1-5)+A1", which will yield "1." Afterwards, select Cell B1 and drag the cursor down to Cell B10 to copy the same formula into B2, B3, B4…B10. As a result, B1 to B5 will correspond to A1 to A5, which are (1, 2, 3, 4, 5). However, the sixth number in Column B, which is 126, substantively deviates from the intuitive projection. All numbers in the cells below
B6 are also surprising. Skyrms pointed out that whatever number we want to predict for the sixth
number of the series, there is always a generating function that can fit the given members of the
sequence and that will yield the projection we want. This indeterminacy of projection is a
mathematical fact.
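The same walk-through can be expressed in a few lines of Python; the generating function is the one Skyrms gives above, and the output reproduces 1 through 5 before departing sharply from the intuitive projection:

    # Skyrms's generating-function example, mirroring the Excel walk-through:
    # f(a) = (a-1)(a-2)(a-3)(a-4)(a-5) + a reproduces 1, 2, 3, 4, 5 exactly
    # and then departs from the "intuitive" projection of 6.
    def f(a):
        return (a - 1) * (a - 2) * (a - 3) * (a - 4) * (a - 5) + a

    print([f(a) for a in range(1, 11)])
    # [1, 2, 3, 4, 5, 126, 727, 2528, 6729, 15130]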
Furthermore, the new riddle, which is considered an instantiation of the general problem of under-determination in epistemology, is germane to quantitative researchers in the context of "model equivalency" and "factor indeterminacy" (DeVito, 1997; Forster, 1999; Forster & Sober, 1994; Kieseppa, 2001; Mulaik, 1996; Turney, 1999). Specifically, the new riddle and other philosophical notions of under-determination illustrate that all scientific theories are under-determined by the limited evidence, in the sense that the same phenomenon can be equally well explained by rival models that are logically incompatible. In factor analysis, for example, whether one adopts a one-factor or a two-factor model may have a tremendous impact on subsequent inferences. In curve-fitting problems, whether one uses Akaike's Information Criterion (AIC) or the Bayesian Information Criterion (BIC) is crucial, in the sense that the two criteria can lead to different conclusions. Hence, the problem of model selection criteria in quantitative research is analogous to the problem of the re-conceptualization of "child abuse" and the problem of projectibility based upon generating functions. At the present time, there are no commonly agreed-upon solutions to either the new riddle or the problem of model selection criteria.
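As a minimal sketch of why the choice of criterion matters, the following code (simulated data and two arbitrary rival polynomial models; the standard formulas AIC = 2k - 2 ln L and BIC = k ln(n) - 2 ln L are assumed) scores the same fits under both criteria:

    # A minimal sketch of AIC versus BIC on two rival curve-fitting models.
    # AIC = 2k - 2 ln L and BIC = k ln(n) - 2 ln L, so BIC penalizes extra
    # parameters more heavily once n is moderately large. Data are simulated.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 40
    x = np.linspace(0, 1, n)
    y = 1.0 + 2.0 * x + rng.normal(0, 0.3, size=n)

    def gaussian_loglik(y, y_hat):
        resid = y - y_hat
        sigma2 = resid.var()                      # ML estimate of error variance
        return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

    for degree in (1, 4):                         # two rival models
        coefs = np.polyfit(x, y, degree)
        loglik = gaussian_loglik(y, np.polyval(coefs, x))
        k = degree + 2                            # polynomial terms + error variance
        aic = 2 * k - 2 * loglik
        bic = k * np.log(n) - 2 * loglik
        print(f"degree {degree}: AIC = {aic:.1f}, BIC = {bic:.1f}")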
Second, induction suggests possible outcomes in relation to events in the long run. This is not definable for an individual event. To make a judgment about a single event based on a probability, such as "your chance of surviving this surgery is 75 percent," is nonsense. In actuality, the patient will either live or die. For a single event, not only is the probability undefinable, but the explanatory power is also absent. Induction yields a general statement that explains the event of observing, but not
the facts observed. Josephson and Josephson (1994) gave this example: “Suppose I choose a ball at
random (arbitrarily) from a large hat containing colored balls. The ball I choose is red. Does the
fact that all of the balls in the hat are red explain why this particular ball is red? No…’All A’s are
B’s’ cannot explain why ‘this A is a B’ because it does not say anything about how its being an A
is connected with its being a B.” (p.20)
As mentioned before, induction also treats probability as a relative frequency. In discussing the probability of induction, Peirce (1986) raised his skepticism toward this idea: "The relative probability…is something which we should have a right to talk about if universes were as plenty as blackberries, if we could put a quantity of them in a bag, shake them well up, draw out a sample, and examine them to see what proportion of them had one arrangement and what proportion
another.” (pp. 300-301). Peirce is not alone in this matter. To many quantitative researchers, other
types of interpretations of probability, such as the subjective interpretation and the propensity
interpretation, should be considered.
Third, Carnap, himself an inductive logician, knew the limitations of induction. Carnap (1952) argued that induction might lead to the generalization of empirical laws but not theoretical laws. For instance, even if we observe thousands of stones, trees and flowers, we never reach a point at which we observe a molecule. After we heat many iron bars, we can conclude the empirical fact that metals expand when they are heated, but we will never discover the physics of expansion coefficients in this way.
Indeed, superficial, empirically based induction can lead to wrong conclusions. For example, from repeated observations it seems that heavy bodies (e.g., metal, stone) fall faster than lighter bodies (e.g., paper, feathers). This Aristotelian belief misled European scientists for over a thousand years. Galileo argued that heavy and light objects in fact fall at the same speed. There is a popular myth that Galileo conducted an experiment at the Tower of Pisa to prove his point; he probably never performed this experiment. Actually, such an experiment was performed by one of Galileo's critics, and the result supported Aristotle's notion. Galileo did not get the law from observation, but by a chain of logical arguments (Kuhn, 1985). Again, superficial induction runs the risk of yielding superficial and incorrect conclusions.
Quantitative researchers have been warned that high correlations among variables may not
be meaningful. For example, if one plots GNP, educational level, or anything against time, one
may see some significant but meaningless correlation (Yule, 1926). As Peirce (1934/1960) pointed
out, induction cannot furnish us with new ideas because observations or sensory data only lead us
to superficial conclusions but not the "bottom of things" (p.878).
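A brief simulation of this warning (the two trending series are artificial, not real GNP or education figures) shows how variables that merely drift with time can correlate strongly without any meaningful connection:

    # A minimal simulation of a Yule-style "nonsense correlation": two series
    # that are causally unrelated but both drift with time can show a very high
    # correlation. The data are simulated for illustration only.
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(100)
    series_a = 0.5 * t + rng.normal(0, 5, size=t.size)   # one trending indicator
    series_b = 0.8 * t + rng.normal(0, 5, size=t.size)   # an unrelated trending series

    r = np.corrcoef(series_a, series_b)[0, 1]
    print(f"correlation = {r:.2f}")   # typically above .9, yet causally meaningless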
Last but not least, the claim that induction is the sole source of reliable knowledge was itself never established inductively. The eighteenth-century British moral philosopher Thomas Reid embraced the conviction that the Baconian philosophy, or the inductive method, could be extended from the realm of natural science to mind, society, and morality. He firmly believed that through an inductive analysis of the faculties and powers by which the mind knows, feels, and wills, moral philosophers could eventually establish the scientific foundations of morality. However, some form of circularity was inevitable in his argument, since induction was being validated by induction. Reid and his associates countered this challenge by arguing that the human mental structure was designed explicitly and solely for an inductive means of inquiry (cited in Bozeman, 1977). Today the issue of inductive circularity remains unsettled because psychologists still cannot reach a consensus about the human reasoning process. While some psychologists have found that the frequency approach appears more natural to learners in the context of quantitative reasoning (Gigerenzer, 2003; Hoffrage, Gigerenzer, & Martignon, 2002), others have found that humans conduct inquiry in the form of Bayesian networks by the age of five (Gopnik & Schulz, 2004). Proclaiming a particular reasoning mode as the structure of the human mind in a hegemonic tone would, needless to say, lead to immediate protest.
Summary
For Peirce induction still has validity. Contrary to Hume's notion that our perception of
events is devoid of generality, Peirce argued that the existence we perceive must share generality
with other things in existence. Peirce's metaphysical system resolves the problem of induction by
asserting that the data from our perception are not reducible to discrete, logically and ontologically
independent events (Sullivan, 1991). In addition, for Peirce all empirical reasoning is essentially
making inferences from a sample to a population; the conclusion is merely approximately true
(O'Neill, 1993). Forster (1993) justified this view with the Law of Large Numbers. On the one hand, we don't know the real probability, due to our finite existence; on the other hand, given a large number of cases, we can approximate the actual probability. We don't have to know everything to know
something. Also, we don't have to know every case to get an approximation. This approximation is
sufficient to fix our beliefs and lead us to further inquiry.
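A minimal simulation of this sample-to-population approximation (the "true" probability of 0.37 is purely hypothetical) shows the sample proportion settling toward the unknown value as the number of cases grows:

    # A minimal sketch of the Law of Large Numbers point above: we never observe
    # the "real" probability directly, but the sample proportion approximates it
    # as the number of cases grows. The true probability 0.37 is hypothetical.
    import numpy as np

    rng = np.random.default_rng(4)
    true_p = 0.37
    for n in (10, 100, 10_000, 1_000_000):
        sample = rng.random(n) < true_p
        print(f"n = {n:>9,}: sample proportion = {sample.mean():.4f}")
    # The approximation is never exact, but it is sufficient to "fix our beliefs"
    # and guide further inquiry.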
Conclusion
In summary, abduction, deduction and induction have different merits and shortcomings. Yet the combination of all three reasoning approaches provides researchers with a powerful tool of inquiry. For Peirce, a reasoner should apply abduction, deduction and induction together in order to achieve a comprehensive inquiry. Abduction and deduction contribute to the conceptual understanding of phenomena, and induction provides quantitative verification. At the stage of abduction, the goal is to explore the data, find a pattern, and suggest a plausible hypothesis with the use of proper categories; deduction builds a logical and testable hypothesis upon other plausible premises; and induction is the approximation toward the truth that fixes our beliefs for further inquiry. In short, abduction creates, deduction explicates, and induction verifies.
A good example of their application can be found in the use of the Bayesian Inference
Network (BIN) in psychometrics (Mislevy, 1994). According to Mislevy, the BIN builds around
deductive reasoning to support subsequent inductive reasoning from realized data to probabilities of states. Yet abductive reasoning is vital to the process in two aspects. First, abductive reasoning suggests the framework for inductive reasoning. Second, while the BIN is a tool for reasoning deductively and inductively within the posited structure, abduction is required to reason about the structure. Another example can be found in the mixed methodology developed by Johnson and Onwuegbuzie (2004). Research employing mixed methods (quantitative and qualitative methods) makes use of all three modes of reasoning. To be specific, its logic of inquiry includes the use of induction in pattern recognition, which is commonly used in thematic analysis in qualitative methods, the use of deduction, which is concerned with quantitative testing of theories and hypotheses, and abduction, which is about inferences to the best explanation based on a set of available alternate explanations. It is important to note that researchers do not have to follow a specific order in using abduction, deduction, and induction. In Johnson and Onwuegbuzie’s framework, abduction is a tool of justifying the results at the end rather than generating a hypothesis at the beginning of a study.
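A toy sketch, not Mislevy's actual network, may help to show the division of labor: abduction supplies the posited states, deduction supplies the conditional probabilities of observations given each state, and induction updates the probabilities of states from realized data via Bayes' theorem. All numbers below are hypothetical:

    # A toy illustration (not Mislevy's actual BIN) of the division of labor:
    # abduction supplies the candidate states, deduction supplies
    # P(observation | state), and induction updates the probabilities of states
    # from realized data via Bayes' theorem. All numbers are hypothetical.
    import numpy as np

    states = ["mastery", "non-mastery"]           # abduction: the posited structure
    prior = np.array([0.5, 0.5])
    p_correct_given_state = np.array([0.9, 0.3])  # deduction: item response model

    posterior = prior.copy()
    for response in [1, 1, 0, 1]:                 # induction: realized item responses
        likelihood = np.where(response == 1,
                              p_correct_given_state,
                              1 - p_correct_given_state)
        posterior = posterior * likelihood
        posterior = posterior / posterior.sum()

    print(dict(zip(states, posterior.round(3))))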
One of the goals of this article is to illustrate a tight integration among different modes of inquiry and its implications for exploratory and confirmatory analyses. Consider this counter-example. Glymour (2001) viewed the widespread application of factor analysis as a sign of system-wide failure in the social sciences with respect to causal interpretation. As a strong advocate of structural equation modeling, which is an extension of confirmatory factor analysis, Glymour is very critical of the exploratory factor modeling approach. Reviewing the history of psychometrics, Glymour stated that reliability (a stable factor structure) was never a goal of early psychometricians. Thurstone faced the problem that there were many competing factor models that were statistically equivalent. In order to "save the phenomena" (to uniquely determine the factor loadings), he developed the criterion of simple structure, which has no special measure-theoretic virtue or special stability properties. In addition, with finite samples, factor analysis may fail to recover the true causal structure because of statistical or algorithmic artifacts. In contrast, Glymour (2005) developed path-searching algorithms for model building, in which huge data sets are collected, automated methods are employed to search for regularities in the data, and hypotheses are generated and tested along the way. The last point is especially important because for Glymour there is no sharp distinction between the exploratory and confirmatory steps.
However, even if path-searching algorithms are capable of conducting hypothesis generation and testing together, it is doubtful whether the process is entirely confirmatory with nothing exploratory. In a strict sense, even CFA is a mixture of exploratory and confirmatory techniques, in which the end product is derived in part from theory and in part from re-specification based on the analysis of fit indices. The same argument applies equally to path searching. According to Peirce, in the long run scientific inquiry is a self-correcting process; earlier theories will inevitably be revised or rejected by later theories. In this sense, all causal conclusions, no matter how confirmatory they appear, must be exploratory in nature because these confirmed conclusions are subject to further investigation. In short, it is the author's belief that integration among abduction, induction, and deduction, as well as between exploratory and confirmatory analyses, can enable researchers to conduct a thorough investigation.