Logic History Overview...

Quantification Logic...

Friday, July 22, 2011

Our Hypothetical Thinking Methods__Abduction_Our 1st Reality Processor...

Hi Tim, your post has much of what I intended by way of my investigation and the links I meant to bring forward. The main point is the boundary or inter-boundary inter-actions of mental self to physical world realities. What I found was that Konrad Zuse actually best developed the language needed to address the hypothetical, or as we've named it__the absolute virtual observer position__by way of his use of automatons__same difference... The trick is always to have the ability to speak from outside the system_virtually_while still addressing all the facts as though inside the system...

Here are a few of Zuse's ideas I most picked up on, though there are many more in the full article:

Calculating Space…(Excellent explication of real intuition mechanics...)
Conservation of information and conservation of configuration are therefore contradictory to a certain extent… Zuse (One explication of thought's central problem...)
State Changes__Shape-Shifting of Particle-Waves…(Simple explication of supervenience, or super-positionings__A solution to the central explication problem of complex thought__where thought divides into more than one direction, at once, as you've also mentioned__Shape-Shifting Thoughts...)
The Principle of Conservation of Energy Can Be Interpreted As The Conservation of Events…(Very Important...)
The Shape-Shifter Mechanics of Events…(A Best-Fit Explication...)
If the effective quantum is assigned the dimension ‘shifting process’, we obtain the dimension ‘shifting process per unit of time’ for energy…(First excellent definition of energy, I've ever seen...)
The Hydrodynamic Atomic Clock…(Actual explication of physical time mechanics of such motions...)


Until we correctly structure the system as 'an entire paradigm', it's hard to identify where the many aspects of our world lie, such as the geo aspects of structured and unstructured states of substance and all of the mechanics involved along with the bio aspects of emergence, evolution, thought, logic, etc.

Oh, fully agreed here Tim. This is one of the points I tried to get through to David, back in '06 and '07__but, he thought it could best be explained with simple wave mechanics. He wasn't willing to address the greater physical-mental divide of the larger global community of human processors of these ideas__yet my position was always about the need for a new language to address such interface problems of science and normative ideas__because, I'd already run into these same great roadblocks in my economics ideas, conversations, web inter-actions and writings. This is further why I've always stated it must be an 'Eclectic Model' or paradigm explanation idea we create, to address fluently all those who do not easily grasp these many complexities, with enough generalities, to make them easily understandable to a general audience. Like you imply, we haven't even invented the name of such a model__yet__except to call it a ToE or some other sort of Unification Model. But__Are we trying to unify physics or thought__And, is there truly, any difference...? Just for a moment, think how eclectic 'Methodology' is, and how useful 'A Methodology Model of Eclectic Thought' may be for explaining, simply, how we actually do think about the world, when we daily have to address these many varying personalities... Just maybe, if we all realized, our 'pre-suppositional methodologies of thought'__almost fully formed/informed by our stations in life__from 'the have-nots', to 'the haves'__actually create most of our basic ideas of the world__then maybe we'd have a foundation to start from, i.e., people are fundamentally different, as to thoughts and life-view choices, and what positions they will defend from, thus creating all the eclectic positions any new ToE or Unification Model must address...

What paradigm must we use to integrate so many seemingly unrelated concepts into one network of understanding whereby we might see the various processes and functions of so many aspects of our universe as intrinsic consequences of a unified natural process?

Well, I know it must be eclectic and general, Tim__as that's not only what Peirce recommended, but also all the great eclectic philosophers of history, especially the more modern eclectics like Bolzano and Cousins__And this would mean 'A General Model of Eclectic Methodology of Thinking' or as already mentioned__'Abduction' as the same, which is our general and natural hypothetical thinking processes, about all the world's eclectic collection of ideas...

As to logic itself, and just out of curiosity, which of the processes of inference is being used to establish such governing processes as the mechanics which oversee the acquisition of knowledge within the mind?

Hands down, no doubt about it__'Abduction' as it's our first 'private hypothetical thinking process' about all our fundamental ideas about self and the world. Just as Peirce stated 100 years ago__'Pragmatism is about Abduction', or 'Abduction is Pragmatism'__Our fundamental thinking about the real physical world, brain and how it actually produces bio-chemical-intelligence, and stores all the knowledge and facts in memory spaces, collected of this real world...

Is inference mechanics subject to the same aspects it attempts to define as it attempts to bridge a gap in our knowledge of reasoning the same as its concepts further attempt to explain how we bridge gaps in other various areas of nature in our process of reasoning and knowledge acquisition?

It absolutely is Tim, and this is why Peirce wrote this paragraph, to explicate this exact process:

“Two things here are all-important to assure oneself of and to remember. The first is that a person is not absolutely an individual. His thoughts are what he is "saying to himself," that is, is saying to that other self that is just coming into life in the flow of time. When one reasons, it is that critical self that one is trying to persuade; and all thought whatsoever is a sign, and is mostly of the nature of language. The second thing to remember is that the man's circle of society (however widely or narrowly this phrase may be understood), is a sort of loosely compacted person, in some respects of higher rank than the person of an individual organism.” C.S. Peirce

To me, more is explained about QM right here than in all the books written, and Peirce was the first to suggest putting all our measurement standards on the speed of light, as expressed by the frequencies of a sodium atom, long before Stoney or Planck...

Would we consider the concept of inference mechanics as arising from induction, deduction or abduction within what we know about the process of thought and reasoning itself?

Inference mechanics imo Tim, arises from our biology, and is induction, deduction and abduction__It's just the problem with history not realizing we've always used all three, as our natural thinking processes__mainly due to dummy Aristotle not realizing even natural logics require 1st, 2nd and 3rd order logics, or Modal's three first state orders__possibilities, impossibilities and necessities__Aristotle's 3rd state deductive-inductive logics, and what Aristotle left out, our 2nd order logic of 'Abduction...' Or you can re-arrange these orders as you please, as 'Abduction', 'Modal' or 'Induction-Deduction' may just be the first logics we awake to each morning, according to how the night's dreams and thoughts went, if ya know what I mean... Anyway, imo, our hypothetical-theoretical abductive thinking is the most important, yet the least discussed and least understood aspect of our triadic thinking processes. I don't know about you, but I'd guess it to be true, that 'Thinking about Thinking' is very important to you also, as it is to me__and that's our 'Natural Hypothetical Abductive Thinking about Thought...' It's just our natural self-thinking, imo, Tim...

Is this even a legitimate question? Lmao…..the geo universe is easy to understand compared to the interface where it meets the bio interactions it spawns……lol. We ultimately become our own roadblock to absolute understanding and truth.

Yeah, self-processing inanity is one hell of a demon, to fight off... Lmfao...

This is all I attempt, Tim...

The teacher who is indeed wise does not bid you to enter the house of his wisdom but rather leads you to the threshold of your mind. - Kahlil Gibran


"Life's journey is simply learning to see ourselves__completely..."

A few other ideas I'm looking at, Tim:

The New Extended Computational Logic…
The Computational Logic of Proof Logic__A Combination Proof Method…
The Logical Computational Proof of All Factually Necessary Model & Category Systems…
Logical Computational Proofs of Truth Systems’ Facts…
Modal Logic, Abduction, Deduction & Induction Logics Are The Key Test Proofs Method/s… The First Order 1st, 2nd, & 3rd Logics, of The Second Order 1st, 2nd, & 3rd Logics, Plus The Third Order Logics of Identity, Non-Contradiction, Excluded Middle & Sufficient Ground…(Aristotle & Leibniz couldn’t even have discovered the 3rd order logics, without the other two orders’ uses, first…)
The Innate Possibilities, Impossibilities and Necessities = Intelligence…
The Hypothetical Necessities of Ground Thoughts…
The Innate Infinite Regress Impossibility…
The ‘Innateness Math’ Fallacy…
Innate Intelligence_Yes…! Innate Knowledge_No…!
Intelligence = Innate Potential Function…!!!
Those in denial of their own insanity…???
The Descriptive-Analytic Distinctions…
The Descriptive-Prescriptive Distinctions…
The Modal Computational Possibilities, Impossibilities & Necessities…
1st Modal Law__“The highest probability, of the highest possibility, is the only possibility, or the absolute necessity…”
Peirce’s Unification of The Three Logic Systems, 1st, 2nd, & 3rd…
Innateness Alone, Is Impossible, As A Knowledge System…
The Innateness Impossibility__The ‘Tabula Rasa’ Regress Necessity…
A New Universal Computational Logic…
Computational Decision Theory & Facts…
Computational Intelligence…
The Puzzle of Continuous Information Across Particle Space…???
Hydro-Mechanics…
The Conservation of Torque...(Angular momentum and the Universe's many toroids...)
The Conservation of Pulse…???(Wave-front charge...)
At The Core__State Change Is Dimensional Change of Physical Event Mechanics__Wave Amplitude, Length & Frequency Mechanics’ Change…


Triadic Imagination___"To process infinite sense to finite logic, isomorphically transduce to the universal law of pure liberty."

Thursday, July 21, 2011

Establishing Scientific Aspects as Consequences of a Natural Process

Hi Lloyd,

I think I posted a couple of these links back at the ole TQ thread a while back when I first started speaking in terms of digital processing. As is often the case with me, I found that others had previously caught on to many of the concepts which were new to my mind. The main thing to keep in mind about my ideas vs. those which you may come across with these types of digital mechanics is to always cross-reference what you're reading about them with a governing materialist viewpoint, which I have. I’ve read at some sites where they try to take such mechanics to exotic extremes as they now do with QM, as with multiple realities and such, and I don’t promote such. The bits they are speaking of must also remain as the motion of the FS in a materialistic fashion, as otherwise we would be at a strictly informational paradigm which I don’t support. I mainly support the integer concepts spoken of in the articles, thus also the lack of the infinities brought about by continuity, with a possible algebraic representation of the underlying dynamics of the universe. Though infinities may lie beyond the universal system as with an eternal and infinite field extension, I see none within.

The main difference that you perhaps won’t find between the concepts that I toy with and those within such articles is my inference of the lack of continuity in the existence and propagation of all structured matter/systems. If we imagine a structured system such as an electron moving throughout its orbital region, it is easy to get lost in the apparent continuity of this process, thereby losing the mechanics by which electromagnetic forces might find their true representation. The same can be said for the protons within the nucleus as they move about a different axis of motion with greater mass, thus proportionately less linear velocity than that of the electron, per the c conservation mechanics of linear velocity to mass I’ve discussed before. The Standard Model would have us believe that photons are transmitting the EM force between the two structured systems as we imagine them both continuously moving along bouncing photons back and forth like a ping-pong ball or something similar.

My approach is to replace the continuity of structure with a propagation mechanics of constant restructuring by way of the more fundamental resolution of the FS at the PSF level. If the preservation of structure isn’t continuous in nature and is found to be a discrete process, then such a process interacting through time would display a centralized region of structure with a surrounding region of FS flowing into structuralization along with a region flowing back out of such. These unstructured regions must be complementary to the structuring of the system they are seemingly continuously maintaining. Thus, for two systems to remain within certain distances, they must have characteristic motions of spin and such which complement each other, i.e., attraction, due to their unstructured field representations having current flows of FS which would otherwise eject them from being within such distances, i.e., repulsion. When two or more systems are acting within a proper distance to be viewed as being under the influence of such a force, the combination of their motions within the unstructured region of FS would thus affect such current flow, whereby allowing for composite states as bosons or fermions. I take such mechanics to the PSF resolution within my thoughts, but if we have trouble relating in this area, you can get a similar mechanics by merely imagining a continuous FS resolution as being the fundamental frame of reference, but rather than letting it flow along, whereby allowing for structural continuity, take the absolute observer position whereby it is a more motionless frame of reference, thus accomplishing propagating structured systems by a continuous flow of substance within and without, whereby regions of the FS surrounding the structure will soon flow into the structured system while other regions are flowing out from such. It’s a similar visualization, but I prefer to take it to the discrete representation as with a hypothetical PSF, per many of the concepts and mechanics you might find discussed within such digital processing articles.

We speak of decay mechanics, initial nuclear synthesis, and such, and search for concepts which encompass them whereby we might become familiar with such natural processes. However, if I were to be correct, we must then realize that all structured systems are not in continuous form, but are within continuous decay and structuralization simultaneously, as the preservation of structure is a dynamic synthesis which always maintains the presence of both dynamics rather than such dynamics only being present at a distant time in the past and a later one in the future. I feel that there is the possibility that the study of such structured systems along with the field states of energy they are associated with is a big step in understanding nuclear synthesis and radiation aspects.

This was my point about the importance of thought structuring. When considering such dynamics, I could have stopped my investigations at the decay resolution and tried to construct my thoughts concerning such mechanics. However, being as I followed my logic all the way to the initial aspect of discrete vs. continuous at the most fundamental resolution, the mechanics and nature of such dynamics associated with these two opposing processes set up a governing logic within my thought structuring, whereby once I found myself having withdrawn back up to the resolution which contained decay and synthesis mechanics, I had a full working logical process to explore with, rather than trying to gather information with no pre-established governing mechanical aspect. Once one passes the current position at which the Standard Model is stalled and includes a materialist perspective of a fundamental substance, then they can back up and make the connections allowed due to such because of the associated dynamics brought about by the concept of a continuous FS. This resolves many of the problems with the Standard Model, but still stalls at a much deeper resolution, IMHO. Yesterday, I saw a science program which vibrated droplets of silicone in a liquid solution, whereby creating a wave pattern within the solution which resembled a pilot wave guiding the droplet, and then they performed the double slit experiment and marveled at just how well it described Quantum Mechanical effects by merely accounting for the continuous wave and particle state of matter, which was very similar to the animation I made a while back. It’s evident when you see such examples just how far a governing dynamic injects the proper mechanical aspects associated with it to allow for the connections whereby the unfamiliar becomes just a little more familiar, as the entire discussion before their little experiment was on how QM viewed nature at such scales as having no resolved existence, as if it were just a substructure waiting to produce an outcome dependent upon our decision to observe or not to observe, at which point they realized that it merely takes a new paradigm to encompass Quantum Mechanical aspects while also preserving those of Relative Mechanics along with Classical Mechanics. Some processes in nature have intrinsic dynamics associated with their very existence which, when applied to physical aspects of our universe, have the potential to bring about an entire paradigm shift, to the point of explaining many previously unexplained phenomena, along with allowing a more familiar and intimate understanding of nature itself at those resolutions which require much effort to interact with and are mostly alien to us.

I may not be able to explain exactly what a photon is, but by allowing the governing parameters discussed before to guide my investigations and thought process, I can mentally visualize how a structured atomic system and its unstructured field state are interacting by way of the discontinuous mechanics discussed earlier, whereby having characteristic aspects of frequency and such within the interplay that allows for the apparent continuity of structure, whereby a passing external frequency field state which encounters such an atomic system would disrupt the actual dynamics of such a relationship to the point that the electron might find itself reproduced within a differing energy level orbital which was now complementary to its very existence, thus furthering structural continuity. This would imply that such dynamics as radioactive decay due to the instability of the governing relationship of the atomic structure simply discontinued the preservation of a previously maintained structural or unstructured element (dependent upon the type of radiation, e.g., alpha, beta, gamma, etc.) within the overall structured system, whereby ejecting either more constituent structural aspects from within the composite system or causing high energy frequency aspects to radiate outward within the unstructured medium itself, which had the ability to ionize other surrounding atomic structured systems by disrupting the resonant frequency and balance within, thus affecting the continuity of their internal structuring. I don’t see it all clearly, but I’ve yet to find such concepts anywhere in my external studies which explain it in this exact manner. I just feel that there is perhaps the chance that radiation decay and nuclear synthesis are actually aspects of structural continuity being ever present in our world everywhere we find structured systems, rather than merely happening at certain times, thus allowing the further study of the possible past and future state of a system by merely studying its present mechanical method of continuity. When these processes are a controlled dynamic through time, as with a balance of incoming to outgoing substance, we find structured systems propagating within their appropriate unstructured field states as with the wave/particle nature, which goes far to bring QM out of the mythical state which many current interpretations have led it to. However, when such a balanced synthesis becomes disrupted whereby the process becomes uncontrolled, the system seeks balance, which can be seen at the atomic scale with such things as a change in the relationship of the electron to the nucleus or some form of external radiation or decay to a more stable state. I just feel that such concepts are a plausible approach to explain the Standard Model in terms of the unification of its many varying concepts of particle physics and the interactions thereof as being capable of being found within an all-encompassing universal mechanics of the process of structuralization to destructuralization and vice versa, or rather merely a synthesis aspect of how material information is processed within the universe. The very concept that structured systems have to be constantly refreshed to be maintained, whereby they are not found to be a continuous aspect of nature, means that a supporting mechanics must be established to do so, which sets up a logical division of thought, or departure from the before supported notion that structural continuity is a given with all of the particle and energy concepts of the Standard Model.
It places the mind into a state of logic whereby, while establishing the needed mechanics to describe such a dynamic, many of the Standard Model aspects seemingly fall into place as being a consequence of such a fluid process of nature, rather than merely a bunch of entities whose relationship with each other and with nature one must otherwise struggle to find.

I’m not as much trying to sell these concepts to anyone with this discussion as I am trying to maintain the theme of how certain concepts in nature and the mechanical substructure or processes we associate with them have the power to influence how we approach solving problems and making associations in our search for absolute truth within nature itself. Modern science is to the point of investigation that most of the needed concepts and information are there, merely needing connection to each other. Thus, it may not be that we lack the imagination or intelligence to solve such universal mysteries, but rather that we have yet to encounter the proper conceptual aspects which would integrate all of the seemingly disconnected concepts of nature that we’ve found into one unifying universal process of existence. When you bring your mind to being poised for such realizations with all of the information needed having been gathered and lying in wait for understanding, then you may be only one thought or concept away from all such concepts firing within the brain as one network or synthesis which brings about the understanding we seek. Intelligence is the connections we’ve made within our own brains which allow groups of neurons and such which represent ideas and concepts to fire in a network or pattern which we interpret as understanding. Many of these connected networks or areas which pertain to the aspects of the physics of our world are isolated from each other, whereby functioning as competing aspects of our intelligence having their own characteristics of truth but otherwise lacking the unity which would allow them to fire as a single unified network. I see this as aspects of the abduction pool whereby various networks compete for relevance in any further hypothesizing. Therein lies the potential for such a mental network which represents a system or function which we’ve learned, e.g., digital signal processing vs. analog, etc., to find precedence over various other networks, whereby the application of such as a governing dynamic might find the integration of all of the various other networks which harbor aspects of science to find themselves within a single network or continuous sequence of firing neurons whereby all subsystems are supportive of all others within the network rather than competing for relevance. True understanding in such a scenario is the supporting role of all of the various aspects of knowledge acting as a single self-supportive system of thought rather than various areas competing for relevance whereby contradictions might be found. There is the possibility that true absolute understanding would bring about an internal physical state of the mind which was analogous to the functioning of the external world, i.e., a type of unified network of thought which was a mental analog to the external universal processes we seek to understand. This would potentially allow for huge advances in technology due to no area of the network being isolated, whereby the mind had the ability to explore within many or all aspects of the mechanics of our universe, thus allowing for the assembly of external structured systems which exploited such dynamics to achieve those things which we currently cannot technologically achieve.

Until we correctly structure the system as an entire paradigm, it's hard to identify where the many aspects of our world lie, such as the geo aspects of structured and unstructured states of substance and all of the mechanics involved along with the bio aspects of emergence, evolution, thought, logic, etc. What paradigm must we use to integrate so many seemingly unrelated concepts into one network of understanding whereby we might see the various processes and functions of so many aspects of our universe as intrinsic consequences of a unified natural process? As to logic itself, and just out of curiosity, which of the processes of inference is being used to establish such governing processes as the mechanics which oversee the acquisition of knowledge within the mind? Is inference mechanics subject to the same aspects it attempts to define as it attempts to bridge a gap in our knowledge of reasoning the same as its concepts further attempt to explain how we bridge gaps in other various areas of nature in our process of reasoning and knowledge acquisition? Would we consider the concept of inference mechanics as arising from induction, deduction or abduction within what we know about the process of thought and reasoning itself? Is this even a legitimate question? Lmao…..the geo universe is easy to understand compared to the interface where it meets the bio interactions it spawns……lol. We ultimately become our own roadblock to absolute understanding and truth.

I'll check out the other links you posted further on in a bit.....

A Digital Philosophy Question About Computational Intelligence...

Tim, is this somewhat representative of your digital ideas' logic...?

http://www.digitalphilosophy.org/
http://en.wikipedia.org/wiki/Digital_philosophy

Digital Philosophy (DP) is a new way of thinking about the fundamental workings of processes in nature. DP is an atomic theory carried to a logical extreme where all quantities in nature are finite and discrete. This means that, theoretically, any quantity can be represented exactly by an integer. Further, DP implies that nature harbors no infinities, infinitesimals, continuities, or locally determined random variables. This paper explores Digital Philosophy by examining the consequences of these premises.

At the most fundamental levels of physics, DP implies a totally discrete process called Digital Mechanics. Digital Mechanics[1] (DM) must be a substrate for Quantum Mechanics. Digital Philosophy makes sense with regard to any system if the following assumptions are true:

All the fundamental quantities that represent the state information of the system are ultimately discrete. In principle, an integer can always be an exact representation of every such quantity. For example, there is always an integral number of neutrons in a particular atom. Therefore, configurations of bits, like the binary digits in a computer, can correspond exactly to the most microscopic representation of that kind of state information.

In principle, the temporal evolution of the state information (numbers and kinds of particles) of such a system can be exactly modeled by a digital informational process similar to what goes on in a computer. Such models are straightforward in the case where we are keeping track only of the numbers and kinds of particles. For example, if an oracle announces that a neutron decayed into a proton, an electron, and a neutrino, it’s easy to see how a computer could exactly keep track of the changes to the numbers and kinds of particles in the system. Subtract 1 from the number of neutrons, and add 1 to each of the numbers of protons, electrons, and neutrinos.

The possibility that DP may apply to various fields of science motivates this study.
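The neutron-decay bookkeeping quoted above is easy to make concrete. Here's a minimal sketch in Python (my own illustration, not from the paper), where the system state is nothing but integer counts and the decay event is an exact integer update:

# Hypothetical sketch: particle counts kept as exact integers,
# per the Digital Philosophy premise that state information is discrete.
state = {"neutrons": 10, "protons": 5, "electrons": 5, "neutrinos": 0}

def beta_decay(s):
    # A neutron decays into a proton, an electron, and a neutrino:
    # subtract 1 from neutrons, add 1 to each of the products.
    s["neutrons"] -= 1
    s["protons"] += 1
    s["electrons"] += 1
    s["neutrinos"] += 1

beta_decay(state)
print(state)  # {'neutrons': 9, 'protons': 6, 'electrons': 6, 'neutrinos': 1}

No quantity is ever approximated at any step, which is exactly the point of the integer premise.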

Tim, here are the original writings of Konrad Zuse, the inventor of the first true digital computers, and his ideas on digital information processing. I think you should read it, if you haven't already:

http://cag.dat.demokritos.gr/Backpages/zuserechnenderraum.pdf

This stuff is extremely interesting Tim, as it includes a triadic information system's diagram that's very applicable to my work, as per Peirce's diagrammatic logics... I don't know that I agree with it all yet, but it certainly contains some very important foundational information...

Tim, just finished reading Zuse__This paper is the most amazing read I've come across in a long time. If you want to know where Feynman got his arrow ideas__Look no further... You wanna' read about your grid mechanics__Look no further... You wanna' understand one, two and three space, plus CM, QM and RM__Look no further... This paper is that good at expanding new ideas' thinking. Don't worry about the math and symbols you don't understand, just read the paper for what you can easily understand__I'll guarantee it's worth your while... His conceptualism is matchless...

http://en.wikipedia.org/wiki/Calculating_Space

Tuesday, July 19, 2011

Abduction__Unification of Concepts’ Mechanics…

Abduction, Deduction, and Induction: Their implications to quantitative methods…

Chong Ho Yu, Ph.D.

Abstract
While quantitative methods have been widely applied by social scientists such as sociologists,
psychologists, and economists, their philosophical premises and assumptions are rarely examined.

The philosophical ideas introduced by Charles Sanders Peirce (1839-1914) are helpful for
researchers in understanding the application of quantitative methods specific to the foundational
concepts of deduction, abduction and induction. In the Peircean logical system the nature of
knowledge and reality relate to each of these concepts: the logic of abduction and deduction
contribute to our conceptual understanding of a phenomenon, while the logic of induction adds
quantitative details to our conceptual knowledge. At the stage of abduction, the goal is to explore
data, find a pattern, and suggest a plausible hypothesis; deduction is to refine the hypothesis based upon other plausible premises; and induction is empirical substantiation. This article seeks to investigate the premises, limitations and applications of deduction, abduction and induction within quantitative methodology.

Fisher (1935, 1955) considered significance testing as “inductive inference” and argued
that this approach is the source of all knowledge. On the other hand, Neyman (1928, 1933a,
1933b) maintained that only deductive inference was appropriate in statistics as shown in his
school of hypothesis testing tradition. However, both deductive and inductive methods have been
criticized for various limitations such as their tendency to explain away details that should be
better understood and their incapability of generating new knowledge (Hempel, 1965; Josephson
& Josephson, 1994; Thagard & Shelley, 1997). In the view of the Peircean logical system, one may
say the logic of abduction and deduction contribute to our conceptual understanding of a
phenomenon (Hausman, 1993), while the logic of induction provides empirical support to
conceptual knowledge. In other words, abduction, deduction, and induction work together to
explore, refine and substantiate research questions.

Although abduction is central in the Peircean logical system, Peirce by no means
downplayed the role of deduction and induction in inquiry. Peirce had studied the history of
philosophy thoroughly and was influenced by a multitude of schools of logic (Hoffmann, 1997).
Peirce explained these three logical processes (1934/1960) as, “Deduction proves something must
be. Induction shows that something actually is operative; abduction merely suggests that
something may be” (Vol. 5, p.171). Put another way: Abduction plays the role of generating new
ideas or hypotheses; deduction functions as evaluating the hypotheses; and induction is justifying
the hypothesis with empirical data (Staat, 1993).

This article attempts to apply abduction, which was introduced by Peirce a century ago, to
offer a more comprehensive logical system of research methodology. Therefore, we will evaluate
the strengths and weaknesses of the preceding three logical processes under Peircean direction,
and point to implications for the use of exploratory data analysis (EDA) and quantitative research
within this philosophical paradigm.

It is important to note that the focus of this article is to extend and apply Peircean ideas into
research methodologies in an epistemological fashion, not to analyze the original meanings of
Peircean ideas in the manner of historical study. Almeder (1980) contended that Peirce wrote in a
style that could lead to confusion. Not surprisingly, many scholars could not agree on whether
Peircean philosophy is a coherent system or a collection of disconnected thoughts (Anderson,
1987). In response to Weiss (1940) who charged some philosophers with distorting and
dismembering the Peircean philosophy, Buchler (1940) contended that cumulative growth of
philosophy results from the partial or limited acceptance of a given philosopher’s work through
discriminating selection. One obvious example of extending the Peircean school is the “inference
to the best explanation” (IBE) proposed by Harman (1965) based upon the Peircean idea of
abduction. While the “classical” abduction is considered a logic of discovery, IBE is viewed as a
logic of justification (Lipton, 1991). But in the context of debating realism and anti-realism, de
Regt (1994) criticized that Peircean philosophy was misused to the extent that the “inference to
the best explanation” had become the “inference to the only explanation.” This article is
concerned with neither history of philosophy nor discernment of various interpretations of the
Peircean system; rather I adopted the position suggested by Buchler, and thus Peircean ideas on
abduction, deduction, and induction are discussed through discriminating selection.

Abduction
Premises of abduction
Before discussing the logic of abduction and its application, it is important to point out its
premises. In the first half of the 20th century, verificationism derived from positivism dominated
the scientific community. For positivists, unverifiable beliefs should be rejected. However,
according to Peirce, researchers must start from somewhere, even though the starting point is an
unproven or unverifiable assumption. This starting point of scientific consciousness is private
fancy, a flash of thought, or a wild hypothesis. But it is the seed of creativity (Wright, 1999). This
approach is very different from positivism and opens more opportunities for inquirers (Callaway,
1999). In the essay “The Fixation of Belief” (1877), Peirce said that we are satisfied with stable
beliefs rather than doubts. Although knowledge is fallible in nature, and in our limited lifetime we
cannot discover the ultimate truth, we will still fix our beliefs at certain points. At the same time,
Peirce did not encourage us to relax our mind and not pursue further inquiry. Instead, he saw
seeking knowledge as interplay between doubts and beliefs, though he did not explicitly use the
Hegelian term "dialectic."

The logic of abduction
Grounded in the fixation of beliefs, the function of abduction is to look for a pattern in a
surprising phenomenon and to suggest a plausible hypothesis. The following example illustrates
the function of abduction:
The surprising phenomenon, B, is observed.
But if A were true, B would be a matter of course.
Hence there is a reason to suspect that A might be true.

By the standard of deductive logic, the preceding reasoning is clearly unacceptable, for it contradicts a basic rule of inference in deduction, namely, Modus Ponens. Following this rule, the legitimate form of reasoning takes the route as follows:
A is observed.
If A, then B.
Hence, B is accepted.

Modus Ponens is commonly applied in the context of conducting a series of deductions for complicated scientific problems. For example: A; (A → B); B; (B → C); C; (C → D); D… etc.
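A quick sketch in Python (an illustration of my own, not from the article) of what that deductive chain looks like mechanically: given the observed fact A and the rules, modus ponens grinds forward until nothing new follows.

# Sketch of forward chaining by modus ponens: A; (A -> B); B; (B -> C); C; ...
facts = {"A"}
rules = [("A", "B"), ("B", "C"), ("C", "D")]  # (antecedent, consequent) pairs

changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)  # modus ponens: from X and X -> Y, accept Y
            changed = True

print(sorted(facts))  # ['A', 'B', 'C', 'D']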
However, Peirce started from the other end:
B is observed.
If A, then B.
Hence, A can be accepted.

Logicians following deductive reasoning call this the fallacy of affirming the consequent.
Consider this example. It is logical to assert that “It rains; if it rains, the floor is wet; hence, the
floor is wet.” But any reasonable person can see the problem in making statements like: “The floor
is wet; if it rains, the floor is wet; hence, it rains.” Nevertheless, in Peirce’s logical framework this
abductive form of argument is entirely valid, especially when the research goal is to discover
plausible explanations for further inquiry (de Regt, 1994). In order to make inferences to the best
explanation, the researcher needs a set of plausible explanations, and thus, abduction is
usually formulated in the following mode:

The surprising phenomenon, X, is observed.
Among hypotheses A, B, and C, A is capable of explaining X.
Hence, there is a reason to pursue A.
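To make the contrast with deduction concrete, here is a small Python sketch of my own (the hypotheses and the facts they would explain are invented, reusing the wet-floor example above): abduction proves nothing; it merely filters the candidates down to those under which the surprising observation would be a matter of course.

# Sketch of the abductive schema: X is observed; among candidate
# hypotheses, keep those under which X would be a matter of course.
observation = "the floor is wet"

# Each hypothesis maps to the observations it would render unsurprising.
hypotheses = {
    "it rained": {"the floor is wet", "the street is wet"},
    "a pipe burst": {"the floor is wet", "the water bill is high"},
    "the sun was shining": {"the grass is dry"},
}

# Abduction selects which hypotheses are worth pursuing, nothing more.
plausible = [h for h, explains in hypotheses.items() if observation in explains]
print(plausible)  # ['it rained', 'a pipe burst']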

At first glance, abduction is an educated guess among existing hypotheses. Thagard and
Shelley (1999) clarified this misconception. They explained that unifying conceptions were an
important part of abduction, and it would be unfortunate if our understanding of abduction were
limited to more mundane cases where hypotheses are simply assembled. Abduction does not occur
in the context of a fixed language, since the formation of new hypotheses often goes hand in hand
with the development of new theoretical terms such as “quark,” and “gene.” Indeed, Peirce
(1934/1960) emphasized that abduction is the only logical operation that introduces new ideas.
Some philosophers of science such as Popper (1968) and Hempel (1966) suggested that
there is no logic of discovery because discovery relies on creative imagination. Hempel used
Kekule’s discovery of the hexagonal ring as an example. The chemist Kekule failed to devise a
structural formula for the benzene molecule in spite of many trials. One evening he found the
solution to the problem while watching the dance of fire in his fireplace. Gazing into the flames, he
seemed to see atoms dancing in snakelike arrays and suddenly related this to the molecular
structure of benzene. This is how the hexagonal ring was discovered. However, it is doubtful
whether this story supports the notion that there is no logic of discovery. Why didn’t other people
make a scientific breakthrough by observing the fireplace? Does the background knowledge that
had been accumulated by Kekule throughout his professional career play a more important role to
the discovery of the hexagonal ring than a brief moment in front of a fireplace? The dance of fire
may serve as an analogy to the molecular structure that Kekule had contemplated. Without the
deep knowledge of chemistry, it is unlikely that anyone could draw inspiration by the dance of fire.

For Peirce, progress in science depends on the observation of the right facts by minds
furnished with appropriate ideas (Tursman, 1987). Definitely, the intuitive judgment made by an
intellectual is different from that made by a high school student. Peirce cited several examples of
remarkable correct guesses. All success is not simply luck. Instead, the opportunity was taken by
the people who were prepared:

a). Bacon's guess that heat was a mode of motion;
b). Young's guess that the primary colors were violet, green and red;
c). Dalton's guess that there were chemical atoms before the invention of the microscope (cited
in Tursman, 1987).

By the same token, to continue the last example, the cosmological view that "atom" is the
fundamental element of the universe, introduced by ancient philosophers Leucippus and
Democritus, revived by Epicurus, and confirmed by modern physicists, did not result from a lucky
guess. Besides the atomist theory, there were numerous other cosmological views such as the
Milesian school, which proposed that the basic elements were water, air, fire, earth … etc.
Atomists were familiar with them and provided answers to existing questions based on the existing
framework (Trundle, 1994).

Peirce stated that classification plays a major role in making a hypothesis; that is, the characters of a phenomenon are placed into certain categories (Peirce, 1878b). Although Peirce is
not a Kantian (Feibleman 1945), Peirce endorsed Kant's categories in Critique of Pure Reason
(Kant, 1781/1969) to help us to make judgments of the phenomenal world. According to Kant,
human thought and enlightenment are dependent on a limited number of a priori perceptual forms
and ideational categories, such as causality, quality, time and space. Also, Peirce agreed with Kant
that things have internal structure of meaning. Abductive activities are not empirical hypotheses
based on our sensory experience, but rather the very structure of the meanings themselves
(Rosenthal, 1993). Based on the Kantian framework, Peirce (1867/1960) later developed his "New
list of categories." For Peirce all cognition, ranging from perception to logical reasoning, is
mediated by “elements of generality” (Peirce, 1934/1960). Based upon the notion of categorizing
general elements, Hoffmann (1997) viewed abduction as a search for a mode of perception while
facing surprising facts.

Applications of abduction
Abduction can be well applied to quantitative research, especially Exploratory Data
Analysis (EDA) and Exploratory statistics (ES), such as factor rotation in Exploratory Factor
Analysis and path searching in Structural Equation Modeling (Glymour, Scheines, Spirtes, &
Kelly, 1987; Glymour & Cooper, 1999). Josephson and Josephson (1994) argued that the whole
notion of a controlled experiment is covertly based on the logic of abduction. In a controlled
experiment, the researchers control alternate explanations and test the condition generated from
the most plausible hypothesis. However, abduction shares more common ground with EDA than
with controlled experiments. In EDA, after observing some surprising facts, we exploit them and
check the predicted values against the observed values and residuals (Behrens, 1997). Although
there may be more than one convincing pattern, we "abduct" only those that are more plausible for
subsequent controlled experimentation. Since experimentation is hypothesis-driven and EDA is
data-driven, the logic behind them is quite different. The abductive reasoning of EDA goes from
data to hypotheses while inductive reasoning of experimentation goes from hypothesis to expected
data. By the same token, in Exploratory Factor Analysis and Structural Equation Modeling, there
might be more than one possible way to achieve a fit between the data and the model; again, the
researcher must “abduct” a plausible set of variables and paths for model building.
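As a toy illustration of that abductive loop in EDA (a sketch of my own in Python; the data and the tentative model are invented), fit a provisional model, then inspect the residuals for surprising structure that suggests the next hypothesis:

# Toy sketch of EDA's abductive loop: fit, inspect residuals, revise.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 0.5 * x**2 + rng.normal(0, 1, x.size)  # truth has curvature

# Tentative hypothesis: a straight line.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Surprise check: residuals of an adequate model should be patternless.
curvature = np.corrcoef((x - x.mean()) ** 2, residuals)[0, 1]
print(f"curvature left in residuals: {curvature:.2f}")
# A large value is the "surprising fact" from which the researcher
# abducts a new hypothesis: add a quadratic term and refit.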
Shank (1991), Josephson and Josephson (1994), and Ottens and Shank (1995) related
abductive reasoning to detective work. Detectives collect related “facts” about people and
circumstances. These facts are actually shrewd guesses or hypotheses based on their keen powers
of observation. In this vein, the logic of abduction is in line with EDA. In fact, Tukey (1977, 1980)
often related EDA to detective work. In EDA, the role of the researcher is to explore the data in as
many ways as possible until a plausible "story" of the data emerges. EDA is not “fishing”
significant results from all possible angles during research: it is not trying out everything.
Rescher (1978) interpreted abduction as an opposition to Popper's falsification (1963).
There are millions of possible explanations to a phenomenon. Due to the economy of research, we
cannot afford to falsify every possibility. As mentioned before, we don't have to know everything
to know something. By the same token, we don't have to screen every false thing to dig out the
authentic one. During the process of abduction, the researcher should be guided by the elements of
generality to extract a proper mode of perception.

Summary
In short, abduction can be interpreted as conjecturing the world with appropriate
categories, which arise from the internal structure of meanings. The implication of abduction for researchers, as practiced in EDA and ES, is that the use of EDA and ES is neither exhausting all
possibilities nor making hasty decisions. Researchers must be well equipped with proper
categories in order to sort out the invariant features and patterns of phenomena. Quantitative
research, in this sense, is not number crunching, but a thoughtful way of dissecting data.

Deduction
Premise of deduction
Aristotle is credited as the inventor of deduction (Trundle, 1994). Deduction presupposes
the existence of truth and falsity. Quine (1982) stated that the mission of logic is the pursuit of
truth, which is the endeavor to sort out the true statements from the false statements. Hoffmann
(1997) further elaborated this point by saying that the task of deductive logic is to define the
validity of one truth as it leads to another truth. It is important to note that the meaning of truth in
this context does not refer to the ontological, ultimate reality. Peirce made a distinction between
truth and reality: Truth is the understanding of reality through a self-corrective inquiry process by
the whole intellectual community across time. On the other hand, the existence of reality is
independent of human inquiry (Wiener, 1969). In terms of ontology, there is one reality. In regard
to methodology and epistemology, there is more than one approach and one source of knowledge.
Reality is "what is" while truth is "what would be." Deduction is possible because even without
relating to reality, propositions can be judged as true or false within a logical and conceptual
system.

Logic of deduction
Deduction involves drawing logical consequences from premises. An inference is
endorsed as deductively valid when the truth of all premises guarantees the truth of the conclusion.

For instance,
First premise: All the beans from the bag are white (True).
Second premise: These beans are from this bag (True).
Conclusion: Therefore, these beans are white (True). (Peirce, 1986).
According to Peirce, deduction is a form of analytic inference and of this sort are all
mathematical demonstrations (1986).

Limitations of deduction
There are several limitations of deductive logic. First, deductive logic confines the
conclusion to a dichotomous answer (True/False). A typical example is the rejection or failure of
rejection of the null hypothesis. This narrowness of thinking is not endorsed by the Peircean
philosophical system, which emphasizes the search for a deeper insight of a surprising fact.
Second, this kind of reasoning cannot lead to the discovery of knowledge that is not already
embedded in the premise (Peirce, 1934/1960). In some cases the premise may even be
tautological--true by definition. Brown (1963) illustrated this weakness by using an example in
economics: An entrepreneur seeks maximization of profits. The maximum profits will be gained
when marginal revenue equals marginal cost. An entrepreneur will operate his business at the
equilibrium between marginal cost and marginal revenue.

The above deduction simply tells you that a rational man would like to make more money.

There is a similar example in cognitive psychology:
Human behaviors are rational.
One of several options is more efficient in achieving the goal.
A rational human will take the option that directs him to achieve his goal (Anderson,
1990).

The above two deductive inferences simply provide examples that a rational man will do
rational things. The specific rational behaviors have been included in the bigger set of generic
rational behaviors. Since deduction facilitates analysis based upon existing knowledge rather than
generating new knowledge, Josephson and Josephson (1994) viewed deduction as truth preserving
and abduction as truth producing.

Third, deduction is incomplete as we cannot logically prove all the premises are true.
Russell and Whitehead (1910) attempted to develop a self-sufficient logical-mathematical system.
In their view, not only can mathematics be reduced to logic, but also logic is the foundation of
mathematics. However, Gödel (1947/1986) showed that we cannot even establish all mathematics
by deductive proof. To be specific, it is impossible to have a self-sufficient system as Russell and
Whitehead postulated. Any lower order theorem or premise needs a higher order theorem or
premise for substantiation; and no system can be complete and consistent at the same time.
Deduction alone is clearly incapable of establishing the empirical knowledge we seek.
Peirce reviewed Russell's book "Principles of Mathematics" in 1903, but he only wrote a
short paragraph with vague comments. Nonetheless, based on Peirce's other writings on logic and
mathematics, Haack (1993) concluded that Peirce would be opposed to Russell and Whitehead's
notion that the epistemological foundations of mathematics lie in logic. It is questionable whether
the logic or the mathematics can fully justify deductive knowledge. No matter how logical a
hypothesis is, it is only sufficient within the system; it is still tentative and requires further
investigation with external proof.

This line of thought posed a serious challenge to researchers who are confident in the
logical structure of statistics. Mathematical logic relies on many unproven premises and
assumptions. Statistical conclusions are considered true only given that all premises and
assumptions that are applied are true. In recent years many Monte Carlo simulations have been
conducted to determine how robust certain tests are, and which statistics should be favored. The
reference and criteria of all these studies are within logical-mathematical systems without any
worldly concerns. For instance, the Fisher protected t-test is considered inferior to the Ryan test
and the Tukey test because it cannot control the inflated Type I error very well (Toothaker, 1993),
not because any psychologists or educators made a terribly wrong decision based upon the Fisher
protected t-test. The Pillai-Bartlett statistic is considered superior to Wilks's Lambda and the Hotelling-Lawley Trace because of much greater robustness against unequal covariance matrices
(Olson, 1976), not because any significant scientific breakthroughs are made with the use of
Pillai-Bartlett statistic. For Peirce this kind of self-referential deduction cannot lead to progress in
knowledge. Knowing is an activity which is by definition involvement with the real world
(Burrell, 1968).
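The flavor of the Monte Carlo studies mentioned above is easy to sketch (an illustration of my own in Python; all numbers are invented): estimate the empirical Type I error of the pooled t-test when the null hypothesis is true but the equal-variance assumption is violated.

# Sketch of a Monte Carlo robustness study: how often does a pooled
# t-test reject at alpha = .05 when H0 is true but variances are unequal?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, trials, rejections = 0.05, 5000, 0

for _ in range(trials):
    a = rng.normal(0, 4, 10)  # small sample with the larger variance
    b = rng.normal(0, 1, 40)  # large sample with the smaller variance
    _, p = stats.ttest_ind(a, b, equal_var=True)  # pooled (equal-variance) test
    rejections += p < alpha

print(f"empirical Type I error: {rejections / trials:.3f}")  # drifts away from .05

Note that the criterion here is entirely internal: the test is judged against the nominal alpha inside the mathematical system, not against any worldly consequence, which is exactly the self-referential character of deduction being criticized.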

As a matter of fact, the inventor of deductive syllogisms, Aristotle, did not isolate formal
logic from external reality and he repeatedly admitted the importance of induction. It is not merely
that the conclusion is deduced correctly according to the formal laws of logic. Aristotle assumes
that the conclusion is verified in reality. Also, he devoted attention to the question: How do we
know the first premises from which deduction must start? (Copleston, 1946/85; Russell, 1945/72)
Certain development of quantitative research methodology is not restricted by logic.
Actually, statistics is by no means pure mathematics without interactions with the real world.
Gauss discovered the Gaussian distribution through astronomical observations. Fisher built his
theories from applications of biometrics and agriculture. Survival analysis or the hazard model is
the fruit of medical and sociological research. Differential item functioning (DIF) was developed
to address the issue of reducing test bias.

Last but not least, for several decades philosophers of science have been debating about the
issue of under-determination, a problematic situation in which several rival theories are
empirically equivalent but logically incompatible (de Regt, 1994; Psillos, 1999).

Under-determination is no stranger to quantitative researchers, who constantly face model
equivalency in factor analysis and structural equation modeling. Under-determination, according
to Leplin (1997), is a problem rooted in the limitations of the hypothetico-deductive methodology,
which is disconfirmatory in nature. For instance, the widely adopted hypothesis testing is based on
the logic of computing the probability of obtaining the observed data (D) given that the theory or
the hypothesis (H) is true (P(D|H)). At most this mode of inquiry can inform us when to reject a
theory, but not when to accept one. Thus, quantitative researchers usually draw a conclusion using
the language in this fashion: “Reject the hypothesis” or “fail to reject the hypothesis,” but not
“accept the hypothesis” or “fail to accept the hypothesis.” Passing a test is not confirmatory if the
test is one that even a false theory would be expected to pass. At first glance it may be strange to
say that a false theory could lead to passing of a test, but that is how under-determination occurs.
Whenever a theory is proposed for predicting or explaining a phenomenon, it has a deductive
structure. What is deduced may be an empirical regularity that holds only statistically, and thus,
the answer by deduction works as well for the true theory as for the false ones.
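In code, the asymmetry is plain (a minimal sketch of my own in Python, with invented data): the tail probability is computed assuming H is true, so the verdict can only be phrased as rejection or failure to reject.

# Sketch of the disconfirmatory logic: the p-value is a P(D | H0)-style
# quantity, so the verdict language is 'reject' or 'fail to reject'.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(0.1, 1.0, 30)  # data; H0 says the true mean is 0

t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

if p_value < 0.05:
    print(f"p = {p_value:.3f}: reject the hypothesis")
else:
    # Failing to reject is not accepting H0: a false theory whose
    # deduced regularity also fits could pass the very same test,
    # which is how under-determination occurs.
    print(f"p = {p_value:.3f}: fail to reject the hypothesis")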

Summary
For Peirce, deduction alone is a necessary condition, but not a sufficient condition of
knowledge. Peirce (1934/1960) warned that deduction is applicable only to the ideal state of
things. In other words, deduction alone can be applied to a well-defined problem, but not an
ill-defined problem, which is more likely to be encountered by researchers. Nevertheless,
deduction performs the function of clarifying the relation of logical implications. When
well-defined categories result from abduction, premises can be generated for deductive reasoning.

Induction
Premise of induction
For Peirce, induction is founded on the premise that inquiry is a self-corrective inquiry
process by the whole intellectual community across time. Peirce stressed the collective nature of
inquiry by saying “No mind can take one step without the aid of other minds” (1934/1960, p.398).
Unlike Kuhn's (1962) emphasis on paradigm shift and incommensurability between different
paradigms, Peirce stressed the continuity of knowledge. First, knowledge does not emerge out of
pure logic. Instead, it is a historical and social product. Second, Peirce disregarded the Cartesian
skepticism of doubting everything (Descartes, 1641/1964). To some extent we have to fix our
beliefs on those positions that are widely accepted by the intellectual community (Peirce, 1877).
Kuhn proposed that the advancement of human knowledge is a revolutionary process in
which new frameworks overthrow outdated frameworks. Peirce, in contrast, considered
knowledge to be continuous and cumulative. Rescher (1978) used the geographical-exploration
model as a metaphor to illustrate Peirce's idea: The replacement of a flat-world view with a
global-world view is a change in conceptual understanding, or a paradigm shift. After we have
discovered all the continents and oceans, measuring the height of Mount Everest and the depth of
the Nile River is adding details to our conceptual knowledge. Although Kuhn's theory looks
glamorous, paradigm shifts might in fact occur only once in several centuries. The
majority of scholars are just adding details to existing frameworks. Knowledge is self-corrective
insofar as we inherit the findings from previous scholars and refine them.

Logic of induction
Induction, as introduced by Francis Bacon, was a direct revolt against deduction. Bacon
(1620/1960) found that people who use deductive reasoning rely on the authority of antiquity
(premises handed down by masters) and on the tendency of the mind to construct knowledge within
the mind itself. Bacon likened deductive reasoners to spiders, for they make a web of knowledge out
of their own substance. Although the meaning of deductive knowledge is entirely self-referent,
deductive reasoners tend to take those propositions as assertions. Propositions and assertions are
not the same level of knowledge. For Peirce, abduction and deduction give only propositions;
self-correcting induction, by contrast, provides empirical support for assertions.

Inductive logic is often based upon the notion that probability is the relative frequency in
the long run and that a general law can be concluded from numerous cases. For example:
A1, A2, A3 ... A100 are B.
A1, A2, A3 ... A100 are C.
Therefore, B is C.
Or
A1, A2, A3, … A100 are B.
Hence, all A are B.
Nonetheless, the above is by no means the only way of understanding induction. Induction
could also take the form of prediction:
A1,A2,A3…A100 are B.
Thus, A101 will be B.

Limitations of induction
Hume (1777/1912) argued that conclusions reached by induction are always inconclusive,
because in infinity there are always new cases and new evidence. Induction can be justified if, and
only if, instances of which we have no experience resemble those of which we have experience.
Thus, the problem of induction is also known as "the skeptical problem about the future" (Hacking,
1975). Take the previous argument as an example: if A101 is not B, the generalization "all A are B"
will be refuted.
We never know when a regression line will turn flat, go down, or go up. Even inductive
reasoning using numerous accurate data points and high-powered computing can go wrong, because
predictions are made only under certain specified conditions (Samuelson, 1967). For instance,
based on case studies in the 19th century, sociologist Max Weber (1904/1976) argued that
capitalism could develop in Europe because of the Protestant work ethic, while other cultures,
such as Chinese Confucianism, were in essence incompatible with capitalism. However, after World
War Two, the emergence of Asian economic powers such as Taiwan, South Korea, Hong Kong
and Singapore disconfirmed the Weberian hypothesis.

Take the modern economy as another example. Owing to American economic problems in
the early '80s, quite a few reputable economists made gloomy predictions about the U.S. economy,
such as the takeover of the American economic and technological throne by Japan. By the end of the
decade, Roberts (1989) concluded that those economists were wrong; contrary to those forecasts,
in the '80s the U.S. enjoyed the longest economic expansion in its history. In the 1990s, the
economic positions of the two nations reversed: Japan experienced recession while America
experienced expansion.

“The skeptical problem about the future” is also known as “the old riddle of induction.” In
a similar vein to the old riddle, Goodman (1954/1983) introduced the “new riddle of induction,” in
which conceptualization of kinds plays an important role. Goodman demonstrated that whenever
we reach a conclusion based upon inductive reasoning, we could use the same rules of inference,
but different criteria of classification, to draw an opposite conclusion. Goodman’s example is: We
could conclude that all emeralds are green given that 1000 observed emeralds are green. But what
would happen if we re-classify “green” objects as “blue” and “blue” as “green” in the year 2020?
We can say that something is “grue” if it was considered “green” before 2020 and it would be
treated as “blue” after 2020. We can also say that something is “bleen” if it was counted as a “blue”
object before 2020 and it would be regarded as “green” after 2020. Thus, the new riddle is also
known as “the grue problem.”
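The grue construction is easy to mechanize. The toy Python sketch below is hypothetical code, with only the 2020 threshold taken from the passage above; it shows that 1000 green emeralds examined before 2020 satisfy the predicate "green" and the predicate "grue" equally well, even though the two predicates project opposite colors afterwards.

    # Toy encoding of Goodman's grue problem (illustrative only).
    THRESHOLD_YEAR = 2020  # the re-classification year used in the text

    def is_green(color):
        return color == "green"

    def is_grue(color, year_examined):
        # "Grue": counts as green if examined before the threshold, blue after.
        if year_examined < THRESHOLD_YEAR:
            return color == "green"
        return color == "blue"

    # 1000 emeralds, all examined in 2011, all green:
    observations = [("green", 2011)] * 1000

    print(all(is_green(c) for c, _ in observations))    # True
    print(all(is_grue(c, y) for c, y in observations))  # True
    # The same evidence supports "all emeralds are green" and "all emeralds
    # are grue" equally -- yet the two rules disagree about the future.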

In addition, Hacking (1999) cited the example of "child abuse," a construct that has been
taken for granted by many Americans, to demonstrate the new riddle. Hacking pointed out that
the concept of "child abuse" in its current form did not actually exist in other cultures. Cruelty to
children emerged as a social issue only during the Victorian period, and "child abuse" as a social
science concept was formulated in America around 1960. To this extent, Victorians viewed cruelty
to children as a matter of poor people harming their children, whereas to Americans child abuse
was a classless phenomenon. As the construct "child abuse" became more and more popular, many
American adults recollected childhood trauma during psychotherapy sessions, but the authenticity
of these child abuse cases was highly questionable. Hacking proposed that "child abuse" is a typical
example of how re-conceptualization in the future alters our evaluations of the past.

Another main theme of the new riddle focuses on the problem of projectibility. Whether an
“observed pattern” is projectible depends on how we conceptualize the pattern. Skyrms (1975)
used a mathematical example to illustrate this problem: If this series of digits (1, 2, 3, 4, 5) is
shown, what is the next projected number? Without any doubt, for most people the intuitive
answer is simply "6." Skyrms argued that this seemingly straightforward numeric sequence could
be generated by this function: (A-1)(A-2)(A-3)(A-4)(A-5)+A. Let's step through this
example using an Excel spreadsheet. In Cells A1 to A10 of the spreadsheet, enter 1-10,
respectively. Next, in Cell B1 enter the formula "=(A1-1)*(A1-2)*(A1-3)*(A1-4)*(A1-5)+A1",
which yields "1." Afterwards, select Cell B1 and drag the cursor downwards to
Cell B10; this copies the same formula into B2, B3, B4…B10. As a result, Cells B1 to B5 will
match Cells A1 to A5, namely (1, 2, 3, 4, 5). However, the sixth number in Column B,
which is 126, substantively deviates from the intuitive projection. All numbers in the cells below
B6 are also surprising. Skyrms pointed out that whatever number we want to predict for the sixth
number of the series, there is always a generating function that can fit the given members of the
sequence and that will yield the projection we want. This indeterminacy of projection is a
mathematical fact.
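Skyrms' point can be checked without a spreadsheet; a few lines of Python reproduce the Excel exercise above:

    # Skyrms' generating function: it fits (1, 2, 3, 4, 5) exactly,
    # then departs wildly from the "intuitive" continuation.
    def f(a):
        return (a - 1) * (a - 2) * (a - 3) * (a - 4) * (a - 5) + a

    print([f(a) for a in range(1, 11)])
    # [1, 2, 3, 4, 5, 126, 727, 2528, 6729, 15130]

Multiplying the product term by any constant c leaves f(1) through f(5) untouched, since the product vanishes there, while projecting 120c + 6 for the sixth member; choosing c appropriately therefore arranges any sixth value we like, which is exactly the indeterminacy Skyrms describes.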

Furthermore, the new riddle, which is considered an instantiation of the general problem of
under-determination in epistemology, is germane to quantitative researchers in the context of
"model equivalency" and "factor indeterminacy" (DeVito, 1997; Forster, 1999; Forster & Sober,
1994; Kieseppa, 2001; Mulaik, 1996; Turney, 1999). Specifically, the new riddle and other
philosophical notions of under-determination illustrate that all scientific theories are
under-determined by the limited evidence, in the sense that the same phenomenon can be equally
well explained by rival models that are logically incompatible. In factor analysis, for example,
whether one adopts a one-factor or a two-factor model may have a tremendous impact on subsequent
inferences. In the curve-fitting problem, whether one uses the Akaike Information Criterion or the
Bayesian Information Criterion is crucial, in the sense that the two criteria can lead to different
conclusions. Hence, the preceding problem of model selection criteria in quantitative research
is analogous to the problem of re-conceptualizing "child abuse" and the problem of
projectibility based upon generating functions. At the present time, there are no commonly agreed
solutions to either the new riddle or the model selection problem.
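A small numpy sketch shows how the two criteria can diverge. The data and the "true" curve below are invented, and the Gaussian least-squares forms of the criteria, AIC = n·ln(RSS/n) + 2k and BIC = n·ln(RSS/n) + k·ln(n), are assumed.

    # AIC vs. BIC on the same candidate polynomial models (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 30
    x = np.linspace(0.0, 1.0, n)
    y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0.0, 0.05, n)  # assumed truth + noise

    for degree in (1, 2, 3, 4):
        k = degree + 1                                  # fitted coefficients
        residuals = y - np.polyval(np.polyfit(x, y, degree), x)
        rss = float(np.sum(residuals**2))
        aic = n * np.log(rss / n) + 2 * k
        bic = n * np.log(rss / n) + k * np.log(n)
        print(f"degree {degree}: AIC = {aic:7.1f}   BIC = {bic:7.1f}")

    # BIC penalizes each extra parameter by ln(30), about 3.4, versus AIC's 2,
    # so the two criteria need not rank the candidate models identically.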

Second, induction suggests the possible outcome in relation to events in the long run. This is
not definable for an individual event. To make a judgment about a single event based on probability,
such as "your chance of surviving this surgery is 75 percent," is nonsense. In actuality, the patient
will either live or die. In a single event, not only is the probability undefinable, but the explanatory
power is also absent. Induction yields a general statement that explains the event of observing, but not
the facts observed. Josephson and Josephson (1994) gave this example: "Suppose I choose a ball at
random (arbitrarily) from a large hat containing colored balls. The ball I choose is red. Does the
fact that all of the balls in the hat are red explain why this particular ball is red? No…'All A's are
B's' cannot explain why 'this A is a B' because it does not say anything about how its being an A
is connected with its being a B." (p. 20)

As mentioned before, induction also treats probability as a relative frequency. In the
discussion of the probability of induction, Peirce (1986) raised his skepticism toward this idea: "The
relative probability…is something which we should have a right to talk about if universes were as
plenty as blackberries, if we could put a quantity of them in a bag, shake them well up, draw out a
sample, and examine them to see what proportion of them had one arrangement and what proportion
another." (pp. 300-301). Peirce is not alone in this matter. To many quantitative researchers, other
types of interpretations of probability, such as the subjective interpretation and the propensity
interpretation, should also be considered.

Third, Carnap, himself an inductive logician, knew the limitations of induction. Carnap (1952)
argued that induction might lead to the generalization of empirical laws but not of theoretical laws.
For instance, even if we observe thousands of stones, trees and flowers, we never reach a point at
which we observe a molecule. After heating many iron bars, we can conclude the empirical fact
that metals expand when they are heated, but we will never discover the physics of expansion
coefficients in this way.

Indeed, superficial empirically based induction can lead to wrong conclusions. For
example, from repeated observations it seems that heavy bodies (e.g. metal, stone) fall faster than
lighter bodies (e.g. paper, feathers). This Aristotelian belief misled European scientists for over a
thousand years. Galileo argued that heavy and light objects in fact fall at the same speed.
There is a popular myth that Galileo conducted an experiment at the Tower of Pisa to prove his
point; he probably never performed this experiment. That experiment was actually performed by
one of Galileo's critics, and the result supported Aristotle's notion. Galileo did not get the law from
observation, but from a chain of logical arguments (Kuhn, 1985). Again, superficial induction runs
the risk of reaching superficial and incorrect conclusions.

Quantitative researchers have long been warned that high correlations among variables may not
be meaningful. For example, if one plots GNP, educational level, or almost anything else against
time, one may see a significant but meaningless correlation (Yule, 1926). As Peirce (1934/1960)
pointed out, induction cannot furnish us with new ideas, because observations or sensory data lead
us only to superficial conclusions, not to the "bottom of things" (p. 878).
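Yule's "nonsense correlations" are easy to reproduce. In the sketch below, with simulated data standing in for the plot-against-time example, two series generated independently of each other still tend to show a sizable correlation, simply because each drifts over time.

    # Spurious correlation between two independent random walks (Yule, 1926).
    import numpy as np

    rng = np.random.default_rng(42)
    series_a = np.cumsum(rng.normal(size=200))  # e.g., a stand-in for "GNP"
    series_b = np.cumsum(rng.normal(size=200))  # e.g., "educational level"

    r = np.corrcoef(series_a, series_b)[0, 1]
    print(f"correlation between two causally unrelated series: r = {r:.2f}")
    # Rerun with other seeds: |r| is frequently large, and its sign and size
    # are unstable -- significant-looking, yet meaningless.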

Last but not least, induction as the sole source of reliable knowledge was never itself
inductively concluded. The eighteenth-century British moral philosopher Thomas Reid embraced the
conviction that the Baconian philosophy, or the inductive method, could be extended from the realm
of natural science to mind, society, and morality. He firmly believed that through an inductive
analysis of the faculties and powers by which the mind knows, feels, and wills, moral philosophers
could eventually establish the scientific foundations of morality. However, some form of
circularity was inevitable in his argument, since induction was being validated by induction. Reid
and his associates countered this challenge by arguing that the human mental structure was
designed explicitly and solely for an inductive means of inquiry (cited in Bozeman, 1977).
Today, however, the issue of inductive circularity remains unsettled, because psychologists still
cannot reach a consensus on the human reasoning process. While some psychologists have
found that the frequency approach appears more natural to learners in the context of
quantitative reasoning (Gigerenzer, 2003; Hoffrage, Gigerenzer, & Martignon, 2002), others have
shown that humans conduct inquiry in the form of Bayesian networks by
the age of five (Gopnik & Schulz, 2004). Proclaiming, in a hegemonic tone, that a particular
reasoning mode is the structure of the human mind would, needless to say, provoke immediate protest.

Summary
For Peirce, induction still has validity. Contrary to Hume's notion that our perception of
events is devoid of generality, Peirce argued that the existence we perceive must share generality
with other things in existence. Peirce's metaphysical system resolves the problem of induction by
asserting that the data from our perception are not reducible to discrete, logically and ontologically
independent events (Sullivan, 1991). In addition, for Peirce all empirical reasoning is essentially
making inferences from a sample to a population; the conclusion is merely approximately true
(O'Neill, 1993). Forster (1993) justified this view with the Law of Large Numbers. On one hand,
we don't know the real probability owing to our finite existence. On the other hand, given a large
number of cases, we can approximate the actual probability. We don't have to know everything to
know something, and we don't have to know every case to get an approximation. This approximation
is sufficient to fix our beliefs and lead us to further inquiry.
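Forster's appeal to the Law of Large Numbers can be pictured with a short simulation. The "real" probability below is of course invented, and is precisely what an actual inquirer would not know.

    # Law of Large Numbers: the sample proportion approximates the hidden
    # true probability as cases accumulate. Illustrative only.
    import numpy as np

    true_p = 0.75                      # hidden from the inquirer in real life
    rng = np.random.default_rng(1)
    draws = rng.random(100_000) < true_p

    for n in (10, 100, 1_000, 100_000):
        print(f"n = {n:>7}: sample proportion = {draws[:n].mean():.3f}")
    # We never observe true_p itself, but the approximation soon becomes
    # good enough to fix our beliefs and direct further inquiry.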

Conclusion
In summary, abduction, deduction and induction have different merits and shortcomings.
Yet the combination of all three reasoning approaches provides researchers with a powerful tool of
inquiry. For Peirce, a reasoner should apply abduction, deduction and induction together in order
to achieve a comprehensive inquiry. Abduction and deduction are the conceptual understanding of
phenomena, and induction is the quantitative verification. At the stage of abduction, the goal is to
explore the data, find out a pattern, and suggest a plausible hypothesis with the use of proper
categories; deduction is to build a logical and testable hypothesis based upon other plausible
premises; and induction is the approximation towards the truth in order to fix our beliefs for further
inquiry. In short, abduction creates, deduction explicates, and induction verifies.

A good example of their application can be found in the use of the Bayesian Inference
Network (BIN) in psychometrics (Mislevy, 1994). According to Mislevy, the BIN builds around
deductive reasoning to support subsequent inductive reasoning from realized data to probabilities of states. Yet abductive reasoning is vital to the process in two respects. First, abductive reasoning suggests the framework for inductive reasoning. Second, while the BIN is a tool for reasoning deductively and inductively within the posited structure, abduction is required to reason about the structure.

Another example can be found in the mixed methodology developed by Johnson and Onwuegbuzie (2004). Research employing mixed methods (quantitative and qualitative methods) makes use of all three modes of reasoning. To be specific, its logic of inquiry includes the use of induction in pattern recognition, which is commonly used in thematic analysis in qualitative methods; the use of deduction, which is concerned with quantitative testing of theories and hypotheses; and abduction, which is about inference to the best explanation from a set of available alternative explanations. It is important to note that researchers do not have to follow a specific order in using abduction, deduction, and induction. In Johnson and Onwuegbuzie's framework, abduction is a tool for justifying the results at the end rather than generating a hypothesis at the beginning of a study.
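Returning to Mislevy's BIN example: his networks are far richer than this, but the deduction-then-induction core can be caricatured in a few lines (all numbers are invented). The posited structure deductively fixes P(response | mastery state), and Bayes' rule then runs inductively from realized responses back to P(state | responses).

    # A two-state caricature of Bayesian-network reasoning (numbers invented).
    p_master = 0.5                                # prior P(skill mastered)
    p_correct = {"master": 0.9, "novice": 0.3}    # assumed likelihoods, deduced
                                                  # from the posited structure
    responses = [True, True, False, True]         # hypothetical item responses

    for correct in responses:
        like_m = p_correct["master"] if correct else 1.0 - p_correct["master"]
        like_n = p_correct["novice"] if correct else 1.0 - p_correct["novice"]
        p_master = like_m * p_master / (like_m * p_master + like_n * (1.0 - p_master))
        print(f"observed {correct}: P(mastery | data so far) = {p_master:.3f}")

    # Abduction sits outside the loop: someone had to posit the two states
    # and the likelihoods before any deduction or induction could begin.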

One of the goals of this chapter is to illustrate a tight integration among different modes of
inquiry, and its implications for exploratory and confirmatory analyses. Consider this
counter-example. Glymour (2001) viewed the widespread application of factor analysis as a sign of
system-wide failure in the social sciences in terms of causal interpretation. As a strong advocate of
structural equation modeling, which is an extension of confirmatory factor analysis, Glymour is
very critical of the exploratory factor modeling approach. Reviewing the history of
psychometrics, Glymour stated that reliability (a stable factor structure) was never a goal of early
psychometricians. Thurstone faced the problem that there were many competing factor models
that were statistically equivalent. In order to "save the phenomena" (i.e., to uniquely determine the
factor loadings), he developed the criterion of simple structure, which has no special
measure-theoretic virtue or special stability properties. In addition, on finite samples, factor
analysis may fail to recover the true causal structure because of statistical or algorithmic artifacts.
By contrast, Glymour (2005) developed path-searching algorithms for model building, in
which huge data sets are collected, automated methods are employed to search for regularities in
the data, and hypotheses are generated and tested as they go along. The last point is especially
important because for Glymour there is no sharp distinction between the exploratory and
confirmatory steps.
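Thurstone's equivalence problem has a compact algebraic core. In the common factor model the implied covariance is Lambda·Lambda' + Psi, and any orthogonal rotation Q satisfies (Lambda·Q)(Lambda·Q)' = Lambda·Lambda', so infinitely many loading matrices "save the phenomena" equally well. A small numpy sketch with made-up loadings:

    # Rotation indeterminacy in factor analysis (made-up numbers).
    import numpy as np

    lam = np.array([[0.8, 0.1],
                    [0.7, 0.2],
                    [0.1, 0.9],
                    [0.2, 0.8]])          # hypothetical two-factor loadings
    theta = np.pi / 6
    q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # an orthogonal rotation

    # Rotated loadings imply exactly the same covariances as the originals:
    print(np.allclose(lam @ lam.T, (lam @ q) @ (lam @ q).T))  # True

The data alone cannot distinguish lam from lam @ q; a criterion like simple structure must be imposed on top of the evidence, which is precisely Glymour's complaint.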

However, even if path-searching algorithms are capable of conducting hypothesis
generation and testing together, it is doubtful whether the process is entirely confirmatory and
in no way exploratory. In a strict sense, even CFA is a mixture of exploratory and confirmatory
techniques, in which the end product is derived in part from theory and in part from
re-specification based on the analysis of fit indices. The same argument applies equally to
path-searching. According to Peirce, in the long run scientific inquiry is a self-correcting process;
earlier theories will inevitably be revised or rejected by later theories. In this sense, all causal
conclusions, no matter how confirmatory they appear, must be exploratory in nature, because these
confirmed conclusions are subject to further investigation. In short, it is the author's belief that
integration among abduction, induction, and deduction, as well as between exploratory and
confirmatory analyses, enables researchers to conduct a thorough investigation.

The Importance of Mental Development/Thought Structuring in Scientific Theorizing

Excellent post Tim__I'll get back to you a bit later, as I've a few other things to attend to right now, but I had to respond to how much I liked your assessment and direction. I visited my physicist/biologist/chemist friend yesterday. We had about a 5 hour talk on these same subject areas__and ended in looking at wisdom and creativity in thought, and abstract thought constructs__which much pertains to what you've written about, in general anyway. It's just it's sometimes very hard to see where each is coming from by this crude method of written texts/posts__yet it looks like we're still very much in tune with our thinkings, and I've got two other advanced PhD thinkers on my end, both interested in the exact same directions we are traveling in__This should start to get real interesting...

I'll expand on all this later, but it's to do with computational inference mechanics, by way of computational inductions and abduction mechanics. This is all to do with the fields of information sciences, since they are more recently covering all this information in more general senses, which is needed to gather the information we require in one single abduction pool, 1st...

Later,

P.s.
Here's a few of my e-mails to associates:

Hi ?__here's a few links to Johnny Logic's (John L. Taylor's) ideas. At the first link, just scroll down to his posts, then you can follow them through over the next 5 or 6 pages. His and Onealej's posts to each other are an interesting representation of his mind's workings. The second link is to his blog. Many links to his work are listed on the right... Lastly, a quote of wisdom I liked, by Gibran...

Enjoy,

http://forums.philosophyforums.com/threads/inductive-logic-in-science-42244.html

http://www.johnnylogic.org/page_id=1015?

The teacher who is indeed wise does not bid you to enter the house of his wisdom but rather leads you to the threshold of your mind. - Kahlil Gibran

P.s.

You may be interested in this also: http://www.uniurb.it/Filosofia/isonomia/alai.htm Logic of Discovery and Scientific Realism...(just a general update)

And:

Hi ?. I chased John's ideas back to some of their sources. He's well grounded in many sensible authors, especially this group:

file:///C:/Users/User/Documents/Induction%20processes%20of%20inference%20___%20-%20Google%20Books.htm#v=onepage&q&f=false

They wrote this computational induction book back in the `80's. It's very sound, imo...

Here's a link to the main author Paul Thagard's newest book on Wisdom__just the 1st chapter, all that's offered free on the web. Quite interesting though, as it addresses the wisdom and creativity quests quite directly, as per how we were speaking yesterday...

http://press.princeton.edu/chapters/s9152.pdf

And a short review:

http://press.princeton.edu/titles/9152.html

And also, this I just found:

Cognitively Plausible Heuristics to Tackle the Computational Complexity of Abductive Reasoning http://www.aistudy.co.kr/paper/aaai_journal/AIMag13-02-007.pdf

Enjoy,

The question mark is where his name belongs, because I didn't ask permission to use his name, just so's you know, Tim...

Monday, July 18, 2011

The Importance of Mental Development/Thought Structuring in Scientific Theorizing

As you know Lloyd, I’ve long seen the limits of science along with the philosophical nature of our theorizing. That’s why I named my old thread “Philosophysics of a Fundamental Substance”. I see no way to measure at such limits as direct measurement isn’t the key to verification at the absolute scales. Given your high level of education in the areas of logic, science and most anything else we’ve come across, I would assume that you would see the importance of such theorizing concerning this quote you made: “Socrates, Plato and Aristotle got one thing right__’All is first represented to the mind by intellectual thought__All…!!!”

If we were in the field of particle physics or some other experimental science, I would have to agree with setting the limits you discuss, as I very well see the distinctions you are making. However, our discussions and ideas are theoretical physics, whereby as per the quote above, our instrument of exploration is the mind/brain itself. The same reasons you state for discontinuing the discrete/continuous exploration, I’ve often stated in reference to an external void or eternal and infinite field extension as I see no way of ever proving such and feel that the information we seek can be extrapolated without such further theorizing. From my views however, the same cannot be said for the discrete/continuous theorizing as we are more so shaping our instrument of exploration (i.e., our mind) with these concepts than we are declaring any truths of nature. The brain is a dynamic system and such exercises into the limits of science, logic, philosophy, etc, are the methods by which such a dynamic system of processing is shaped and molded in real time. It’s no different than the many man-hours spent assembling the Large Hadron Collider in hopes that the many parts, bolts, transducers, magnets, coils, etc are all in such an order that mankind might reach ever further into the complexities of nature to perhaps bring about a little more familiarity with that which we would otherwise remain unfamiliar with. Our minds are our instruments and our thoughts are the medium by which we probe such complexities hoping to arrive at theories which are an analog to the functions of nature itself.

There is a distinction between discrete processing and continuous processing as with digital and analog signal processing, and such differences could possibly determine how the hypothetical mental transducers, magnets, coils, etc are aligned within our thoughts whereby we might probe past the limits of such solid state machines to the point where observation of a small area at the shortest of distances and times or interaction thereof isn’t the important aspect and yields no importance to possibly even verify or deny such dynamics, but rather we perhaps might get a glimpse of the larger picture as a whole of just how information itself is processed in the form of motion dynamics or currents which result in the very flow of energy, mass, momentum, etc, at the ever larger scales and further distances outwardly. It has been stated that Einstein himself has been credited with changing the game in many ways (though I’m sure we could find other older instances) as his famous equation of E=mc² was derived through his theorizing and later directly experimentally verified rather than the more common method of the time whereby experimentation and observation led to such an equation. I’m of the opinion that there are connections within nature to such a degree that if aligned correctly, the mind and its theorizing can take precedence over the instruments of science, yet the yin to such yang is that we do need such instruments and experiments to verify the correctness of such exploration. As with all truth, there’s only one path which leads to absolute truth, while there are numerous others which lead to the nowhere that is the fallacies of imagination.

So, as you can see, the balance here to me is to purposely bypass all physical scientific limits into the realm of philosophy (as I’ve always stated I’m more of a philosopher than theoretical physicist/scientist by far as I would never consider myself such) to the point that my actual thought processes perhaps align with some exploitable aspect of how physical information is processed as we both have discussed the universe in terms of information theory. As suggested before, I’m uncertain that an instrument or direct measure is even important at such resolutions as it is the whole which is important here rather than any instrument which might transduce a modulated signal which we might someday ascribe to such short distances to suggest what is going on near the Planck limits. There are so many signal processing aspects involved with the detection, processing and output of such instruments that interpretation of events is highly debatable even within the instruments of science as seen with the particle colliders and the many levels of technology needed to transduce and modulate a signal from such short distances back up to the more familiar output methods which scientist use to suggest what they are witnessing.

I feel that per the inference mechanics of abduction, induction and deduction, it is important to sometimes go to such mental extremes whereby first establishing the parameters of thought itself, at which point the focus is brought back up to the verifiable and provable scales, which might allow for the verification of truth itself concerning the nature of our universe. If such mental exploration were to yield a base level train of thought which built back up to an insight which were found to be provable, and thus experimentally accomplished, then we would have found the unification of science, philosophy and logic as all being aspects of traveling the path to absolute universal truth. My point here is that we can discuss such things as decay mechanics, which I see as the real point of interest here by way of its experimental value concerning state transitions from structured to unstructured and vice versa, but the insight into such complex mechanics full of possible wrong turns perhaps must come by way of having previously traveled further along the road of truth and navigated back from such depths to now have a working logic and thought process which might then be used to exploit the aspects we are searching for that may someday lead to verification by way of the scientific methodology. I realize how far in the woods I’ve gone, but I assure you that it is not without purpose and caution as I well know just how easy it is to get lost here. I just need you to realize that such explorations are just as much an exercise of the mind and processes of thought as they are or ever will be those of physical experimental exploration. I’m merely trying to exploit a system of thought or find direction within my logical processes by probing such depths whereby hopefully allowing for the mental connections of information needed to perhaps someday allow for a better understanding of those resolutions at which the universe better presents itself for direct observation rather than basing my sanity or the lack thereof upon those resolutions and scales at which I feel it never will, being as I see the tangible aspects of nature needed for such observations, measures and experiments as arising from the absolute realm which is more so only the diluted state of substance whereby being better suited for mental and perhaps mathematical exploration. Such theories are similar to String Theory, where most would point out its unobservable nature which has become a complex mathematical construct which they feel is analogous to nature itself. To date it has failed to produce any experimental evidence to my knowledge and many assume it never will. I use it as a prime example because it is a representation of where such theories can lead by simply following the math and such. However, if we were wrong and it began producing insights into the universe at some point in the future which were verified by experiment, then by our very scientific methodology, we would have to begin to acknowledge its possible importance and truth concerning such scales which can’t be directly observed. The same can be said if we were to establish some form of logic or reasoning by such exploration which ultimately established the path by which we found physical confirmation. 
At the level we’ve brought the mechanical aspects to concerning decay mechanics and state transitions, I can’t speak for you, but I don’t know that I will progress there without having taken my thoughts to the furthest depths first and arrived back to apply what I’ve learned there about how the universe processes information itself as such interactions are ultimately operating within such absolute parameters. For better or worse, this is where my thoughts currently stand.

The Beyond Science Measurement Problems...

I guess my point is Tim, that when the reality of the dialogue reaches the limits of science’s ability to distinguish between mind/ego and reality, we are at the limits of what science, math, rationality, measurement and logic are capable of, and in philosophy, we call this epistemology__in science, it would be Planck time and measurement limits__which can be further broken down into supervenience and meta-semiotics__or simply put, thoughts about all these thoughts at science’s maximum limits of interpretation and meaning capabilities. Tim, long ago I came to realize mathematics was capable of wildly exceeding science’s real ability of measuring such mathematics, and more recently realized that all of science’s best efforts, of even using the greater accuracies of the entire Universe as measuring instrument, was only capable of reaching half way to even the Planck scale of measurement lengths, i.e., about 10^-18cm, where Planck scale goes down to 10^-33cm, using this particular aspect of the Planck scale, and even theory only expects scientific proof to exist at 10^-31cm, yet doesn’t expect to be able to reach such levels of measurement accuracy, even in this entire century. Then when it comes to Planck time and volumes, the figures are still further out of reach__so this is why I’ve more or less been discouraging your direction of discrete measurement attempts__as they are far beyond our abilities of achieving, in any time soon. Yes, theoretically we can speak of such ideas__but without scientific proof abilities, there’s no way to test our egos against the real scientific World and Universe__so imo, such attempts are futile__Like the ‘Borg’ said; “Resistance is futile”.

Tim, it’s just if we attempt measurement discreteness theorizings in such areas, beyond our science capabilities, we are committing the same crimes against science and logic the Church committed for centuries__therefore, I suggest we stay within what science is now capable of__“We don’t know.” “We can’t know.” “We won’t know, for a long time to come, as such is pure projection and speculation”__at measurement limits. This is the point where you realize that philosophy is needed to express what science is incapable of expressing, and this is not to say that philosophy can express more thoroughly what science can not__But, simply to show what both or neither science or/and philosophy ‘can not’ understand. Tim, we learn as much about truth systems by knowing what ‘we do not know’ and ‘what we can not know’, as we do by what ‘we do and can know…’ This is why I’ve always brought up the ‘One/Many’ problem, and further related it to ‘incompletenesses’, and such other similar expressed ideas as ‘supervenience’, ‘super-positionings’ and ‘superfluids’, etc. Tim, though the mind and ego knows it can mathematically and rationally measure beyond the 10^-18cm level__What good is it to measure such ‘Ghost Ideas…???’ It’s like the ‘Ghost of subjective psychology’, when all thought is truly objectively intellectual, even all thought about our deepest positive and negative feelings, which are just as easily represented by the differences between absolute mathematical fairness, and unfairness__It’s in the end all mathematical and logical objective representation, within our objective ‘intellects’ of thought__The only place ‘thought’ exists, since no feeling can be experienced except as an intellectual thought__1st… The old eclectics, Socrates, Plato and Aristotle got one thing right__’All is first represented to the mind by intellectual thought__All…!!!’

So Tim, since the measurement of our intellectual physical world of CM, QM and RM is beyond our intellectual scientific measurement abilities__Our only choice of such measurement is philosophy, when it comes to deciding between discrete and non-discrete… Science can not go where philosophy can go, as relates to this, but that’s hard for most scientists and logicians to recognize__and why can philosophy go beyond science, at this level of measurement impossibility...? Because; ‘Only philosophy has developed such languages to handle such problems’__where science ends and philosophy begins… This doesn’t mean, we leave the world of science__it simply means we fully recognize the many short-comings of science and technology, and express these incompleteness problems honestly, and admit that just maybe philosophy is necessary__after all__to express the yet uncompleted Universe of Total Knowledge… As you know, I’ve often stated; “It’s more important to know what you do not know, and can not know, than to know what you do know…” This is still true…

Tim, we simply can not measure, where you wish discreteness to go__There exists no evidence, for such ideas… There’s only the comparison between discrete and non-discrete theories, lacking all possibility of physical evidence__Though your point of realizing further into these dynamics is well taken…

The Inherent Complexities of the Universe and Mind

Hi Lloyd,

To respond to just a few of your points before I explain my reasoning further I’ll just add that I see the issues with communication once we’ve reached this conceptual level also and am open to any ideas you have of symbolic representation and such, but you’ll have to take the lead on that and I’ll just follow. Reverse engineering a universe with all its complexity down to the fundamental resolution and interactions ultimately requires a reduction of seemingly infinite amounts of information down to the concepts required to encompass such resolutions and the mechanics thereof most effectively and efficiently. As to the further differences between static computers vs. the much more complex dynamic brains, it brings to mind the distinction between the study of static systems as with algebra and the limits thereof and that of the much more complex dynamic systems by way of calculus with its infinities and such.

Now, as to the superfluid state of matter, Bose-Einstein condensates, etc, as you know and from my interpretations, Dave always saw such as having been closer to the fundamental state of matter or FS. I’ve always interpreted his concepts as a physical continuous substance which encompassed all states of matter from the vacuum aethereal state of unstructured field all the way to the many structured systems within the universe. He would sometimes mention an exterior void which contained the universal volume of substance but was totally isolated from the universal system whereby allowing it to be a completely closed system. My point being that if we see such a continuous state as being the fundamental resolution then all further states built upon such e.g. solid, liquid, gas, plasma, vacuum, etc, are further expressions of the same entity as we know. However, my concepts are merely suggesting that beneath/within all states of FS (even the seemingly continuous before-mentioned fundamental resolution) is the actual fundamental absolute resolution which supplies the absolute reference frame upon which such superfluid states are structured. From my current perspective, just as the page is visibly unfamiliar to a blind person, whereby they interpret the written information thereon by way of the Braille system, so too is the most absolute fundamental resolution seemingly undetectable from our senses unless we learn to translate the meaning within the language by which it is written. The first major step in unification is to understand the continuity amongst all forms of the matter and vacuum space of modern physics by way of a fundamental substance in motion whereby allowing matter, energy, momentum, etc, to be conserved throughout the entire system by way of structured to unstructured conversions and vice versa. The next step is perhaps a realization that all states of such a FS are merely how we are interpreting a much more deeply fundamental resolution: just as a blind person reads a page written in Braille, so too are we experiencing the most fundamental interactions by way of the structured and unstructured systems within the universe.

This doesn’t disallow superfluidic states, super conductors, condensates, etc. It merely explains them as not being the fundamental reference frame, but rather a consequence of some pattern of uniform motion within a larger motion continuum, whereby such things as superfluidic states can be observed to act in motion unison to such a degree that even thermal energy is dispersed throughout at such a high rate that the entire volume is always found to be at exactly the same temperature. I know you may not like my interpretation of such things, but ultimately many of the aspects we discuss along with those of the more accepted Standard Model of modern physics suggests this to my way of mental processing. Dave had stated that the universe was a terrible mathematician or something to that effect. However, I always saw it differently. The exacting conservation of such things as momentum and kinetic energy within two interacting systems is to such a high degree that we see such interactions and the accuracies thereof as laws of physics, not to mention mass to energy conversions, angular deflection, discrete quantum interactions, etc, etc. I’m not sure that the conservation of the many dimensions by which we acknowledge the physical entity of the FS can be explained to such high degrees as they are if the most fundamental resolution is a continuous superfluid state. My ego is more inclined to consider that though a FS helps to unify the seemingly independent physical entity aspects of the universe, the dimensional aspect within our universe by which we know such an entity is best explained by way of discreteness throughout, even if such discrete interactions are so far removed from our more familiar resolutions that they even appear as continuous while interacting to allow for some states.

Many of the conservation aspects of the standard model are stated as laws and such, but have no underlying explanation of the mechanics to how they actually function to such high degrees of accuracy whereby we might consider them a law at all. Newton’s gravity went far to explain planetary orbits, falling bodies and such, but never had any underlying mechanical explanation itself until Einstein came along with his theory of Relativity and tried to explain the function of gravity itself. I guess I see other areas of physics as still supplying concepts to explain one resolution of problems, while still not going to the deeper more fundamental resolution whereby the concepts which just became the band-aid to fix the previous problem is mechanically explained concerning its functions. Two structured systems collide (e.g. a scattering interaction of subatomic particles) within a near perfect elastic collision whereby both momentum and kinetic energy are conserved and traceable throughout the interaction, thus we witness the conservation laws at work, but how do such interactions take place at the most fundamental resolution whereby the actual mechanism for such laws functions? How would such processes arise from a continuous field as we could even further consider an inelastic collision whereby a portion of the conserved kinetic energy was converted to some other form of energy such as heat? Admittedly, such a scenario would present problems with tracking all forms of energy conversions, but the laws imply such values to be accurate to a high degree.

I definitely see the part of our egos within our concepts at this level also, as I’ve stated before that our projection of such fundamental mechanics is perhaps more representative of how our individual brains are wired and internally function as though we are reaching deeper within our own individual thought processes rather than actually arriving at a more fundamental resolution of universal mechanics and the interactions thereof. Perhaps, my brain just better understands continuity being a product of discreteness as I have trouble with trying to explain to myself a high degree of discreteness from a continuous field. Even if the discrete nature of an otherwise continuous photon is established by the discreteness of the structured system which absorbs or emits it, as you’ve pointed out, the formation of the discrete structured system from the initial continuous field (which now houses photons) at some point in the past has to be fully explained as with the analog to digital analogies we’ve been passing back and forth. It just seems more intuitive to me that rather than trying to explain continuous to discrete conversions it is easier to explain discrete mechanics throughout whereby the complexity brought about by resolution separation allows for the illusion of continuity as we blindly read what the fundamental resolution is telling us. Even if we were to go the continuous route and shut the universal reductionism down at Dave’s FS resolution then discreteness should seemingly become an effect thereof whereby no true discreteness is found and any such observation thereof has errors involved whereby the infinite nature of continuity is simply reduced to an acceptable degree to suggest discreteness. I am working on the implications of a discrete field per the frequency mechanics discussed before in terms of a reduction of all interactions to mere energy transferences amongst absolute regions of the fundamental field. Frequencies and wavelengths are reducible to quantitative aspects of energy as with E=hv. I’m simply suggesting here that the further reduction of frequency and wavelength interactions which oversee nature at the FS resolution may find themselves further reduced to how PSF domains interact and impose motion transferences upon each other.

At this level of conversation, I can’t separate between whether such mental influences that suggest one such system over the other is an external aspect of how the universe operates, or an internal aspect of how I perhaps project it to operate, or perhaps even how my own thoughts operate. As we reduce the complexity of the universe down to its fundamental constituent concepts, so too do we perhaps inevitably reduce and or align our thought process down to its underlying constituent aspects whereby in the absence of experiment, measure or observation (which is frequently lacking at this level of discussion) we can’t distinguish between physical reality and mere mental projection due to both the universe and our thoughts operating within similar dynamic aspects and mechanics. But….isn’t this the very beauty of the abduction process as I now have the projection from the various scientific aspects which I’ve encountered having previously formed the pool of information within my mind and must now search for external physical confirmation or denial thereof by way of further use of the scientific methodology or further studies thereof, whereby I might distinguish the functions of my own mind from those of the universe at such fundamental levels?

Many of my pauses in conversation come from struggling to further my own thoughts. Subjects such as decay mechanics are the direction we must head; I’m not intentionally excluding them, but rather avoiding them due to not having a well-established direction of conversation or mechanics thereof. The flow of ideas also bottlenecks as we further refine and reduce our concepts to evermore fundamental aspects and interactions. I’m hoping that my before-mentioned frequency reduction aspects stir further thoughts in this area as well.