Logic History Overview...


Monday, July 4, 2011

Radical Emergence & Supervenience In Relation To Total Theories…

(Tim, just a short summary of what I’ve been saying… By William Seager)
http://blog.stephenwolfram.com/2011/05/talking-about-computing-and-philosophy/

There is an interesting and perhaps rather attractive symmetry of causation in radical emergence that is lacking in doctrines that espouse totality, such as benign emergence. The failure of totality under radical emergence is explicable in terms of a very strong form of ‘top-down’ causation. Totality will fail when complex systems fail to act in the ways they would act if their behavior were entirely generated by the interactions of their constituents according to the fundamental laws governing those constituents and their interactions. Our label for such failure is divergence, so we can say, in short, that divergence is explicable by top-down causation. Now, as noted above, the divergence of complex systems as described by high-level theories is commonplace, and such divergence is explicable by bottom-up causation; we expect that high-level generalizations will fail because of intrusions ‘from below’ of effects stemming from lower-level processes or structures.

Radical emergence accepts that there will be exactly analogous intrusions ‘from above’ as well, and that these are genuine, as opposed to the merely apparent (or at least entirely explicable in low-level terms) top-down causation found in total theories. Complex systems, described in terms of low-level theory, will thus suffer effects stemming from higher-level processes or structures, effects which are not predictable solely from the low-level state of the systems.

Another interesting feature of radical emergence is that it conspires to give an illusion of totality. That is, radical emergence of U from T entails weak T-temporal supervenience (up to intrinsic randomness of T). Thus, within a world, T-complexes that are indiscernible all act exactly the same (or, at least, generate the same behavioural statistics). Such a world could ‘look’ as if it were T-total and encourage the search for a total T-theory. A rather bizarre further ‘metaphysical’ possibility is that such a ‘total’ theory could perhaps, given sufficient ingenuity, be found despite the existence of genuine radical emergence. The theory would be false, but not testable. A warning sign of such a situation might be the multiplication beyond plausibility of potential energy fields (of the sort discussed earlier in the paper) required to handle multi-component interaction. More likely, the very complexity of those T-systems in which radical emergence might be found would rule out any test of emergence. The systems of interest would simply be too far removed from the T-constituents for any calculation, based solely upon fundamental T-laws, of how they should behave to be feasible. That is, of course, the situation we are in and shall remain in.
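
For readers who want the supervenience claim pinned down, here is a rough formalization of weak T-temporal supervenience as glossed above (my notation, not the paper’s; T_t(x) is the complete T-state of system x at time t, and B_{t'}(x) its behaviour at a later time t'):

\forall x\,\forall y\ \big[\, T_t(x) = T_t(y) \;\Rightarrow\; B_{t'}(x) \approx B_{t'}(y) \ \text{for all } t' \ge t \,\big]

where, within a single world, ‘\approx’ is exact sameness of behaviour or, under intrinsic randomness of T, sameness of the generated behavioural statistics.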

The issue of testability could become more contentious if it should turn out that the mathematical details of our best fundamental theory rule out not only analytic solutions of critical equations (a situation we are already in) but also simulatability [19]. It is worth remembering that the totality of physics is not practically testable, for the simple reason that the instruments used in physical experimentation are themselves highly complex physical entities for which the hypothesis of radical emergence would have to be ruled out. The properties of the most basic physical entities are the very ones whose discovery and verification require the most complex instruments, such as particle accelerator complexes, as well as extremely long historical chains of experimental inference which necessarily involve myriads of highly complex instruments. If it should turn out that certain complex, actual physical systems required for the testing of basic theory are in principle unpredictable because of such mathematical limitations, then the totality of physics may simply not be a testable hypothesis at all.

The contrast between benign and radical emergence can be expressed in a theological metaphor. Imagine God creating a world. He decides it shall be made of, say, quarks and leptons that, in themselves, obey certain laws. But He has a choice about whether His new world shall be total (relative to these elementary constituents) or not. That is, He must decide whether or not to impose serious laws of emergence ‘on top of’ the properties of the basic entities. Either way, a world appears, but the worlds are different. Which world are we in? It is impossible to tell by casual inspection and perhaps impossible to tell by any experiment, no matter how idealized. Thus it may be that radical emergentism cannot be ruled out by any empirical test whatsoever, and thus it may be that we live in a world of radical emergence.

William Seager

19. Simulatability is the feature of a theory that it is possible to calculate, in principle, the state transitions of any system in terms of the fundamental description of an initial state. Simulatability does not require that this calculation be mathematically exact; approximations are allowable so long as we can mathematically guarantee that the error of the approximation can be made as small as we like. For example, while an isolated pendulum can be simulated by a mathematically exact representation of the system, even a three-body gravitationally bound system admits no general analytic solution. But the many-body problem can be approximated to whatever degree of accuracy we like (given arbitrarily large computing resources). There may, however, be systems which cannot even be approximated in this sense.
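
To make the footnote’s sense of ‘approximable’ concrete, here is a minimal numerical sketch (my illustration, not from the paper): a fourth-order Runge-Kutta integration of a three-body system, started from the well-known figure-eight initial conditions of Chenciner and Montgomery (G = 1, all masses 1, arbitrary units). Although no exact analytic solution exists, halving the step size drives successive numerical answers together, i.e. the error can be made as small as we like given enough computing.

import numpy as np

G = 1.0  # gravitational constant in arbitrary units

def accelerations(pos, masses):
    # Pairwise Newtonian gravitational accelerations for n bodies.
    n = len(masses)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def rk4_step(pos, vel, masses, dt):
    # One classical fourth-order Runge-Kutta step for x' = v, v' = a(x).
    k1x = vel
    k1v = accelerations(pos, masses)
    k2x = vel + 0.5 * dt * k1v
    k2v = accelerations(pos + 0.5 * dt * k1x, masses)
    k3x = vel + 0.5 * dt * k2v
    k3v = accelerations(pos + 0.5 * dt * k2x, masses)
    k4x = vel + dt * k3v
    k4v = accelerations(pos + dt * k3x, masses)
    new_pos = pos + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
    new_vel = vel + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return new_pos, new_vel

def simulate(dt, t_end=6.3):
    masses = np.array([1.0, 1.0, 1.0])
    # Figure-eight choreography initial conditions (Chenciner & Montgomery).
    pos = np.array([[ 0.97000436, -0.24308753],
                    [-0.97000436,  0.24308753],
                    [ 0.0,         0.0       ]])
    vel = np.array([[ 0.46620369,  0.43236573],
                    [ 0.46620369,  0.43236573],
                    [-0.93240737, -0.86473146]])
    for _ in range(int(round(t_end / dt))):
        pos, vel = rk4_step(pos, vel, masses, dt)
    return pos

# Halving the step size should shrink the disagreement between runs:
# the system is approximable even though it is not analytically solvable.
coarse = simulate(dt=1e-2)
fine = simulate(dt=5e-3)
print("max position difference between step sizes:", np.abs(coarse - fine).max())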

(full article/paper link:)
http://www.utsc.utoronto.ca/~seager/emsup.pdf

P.s.
(some other points from the paper:)
Top-down discipline can exist from a supervening domain, U, to its supervenience base domain, T, even if T lacks T-temporal supervenience and U enjoys U-temporal supervenience. In such a case, we could say that there is ‘de-randomization’ of T (see figure 4 of the paper). It is possible that the apparent deterministic character of classical (or macroscopic) mechanics is the result of this sort of de-randomization, as the quantum states that realize the macro-states provide top-down discipline for the macro-domain. That is, while there may be intrinsic randomness at the micro-level, it somehow cancels out in the myriad interactions involved in the realization of any macro-state.
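
A toy numerical sketch of that cancellation (my illustration, with arbitrary numbers, not Seager’s): average enough independent random micro-contributions and the macro-level quantity becomes effectively deterministic, its run-to-run spread shrinking roughly as 1/sqrt(n).

import random

def macro_state(n_micro):
    # Average of n_micro independent, intrinsically random micro-contributions.
    return sum(random.gauss(0.0, 1.0) for _ in range(n_micro)) / n_micro

# The more micro-constituents, the narrower the spread of the macro value:
# micro-level randomness "cancels out" at the macro level.
for n in (10, 1_000, 100_000):
    samples = [macro_state(n) for _ in range(5)]
    print(f"n_micro={n:>6}: spread = {max(samples) - min(samples):.5f}")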

A possible ground for de-randomization in the micro to macro relationship is given by Ehrenfest’s equations, which assert that the expectation value of an observable such as position or momentum will evolve in accordance with classical laws of mechanics. In a macroscopic system made of huge numbers of microsystems we might expect that such statistical features will exhibit a stability sufficient to allow us to identify the expectation value with the values obtained by particular observations, thus resulting in de-randomization and providing a reason to expect top-down discipline.
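
For reference, Ehrenfest’s theorem (a standard quantum-mechanical result, not specific to Seager’s paper) reads, in LaTeX notation:

\frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m},
\qquad
\frac{d}{dt}\langle \hat{p} \rangle = -\left\langle \frac{\partial V}{\partial \hat{x}} \right\rangle

For a narrowly peaked wavepacket, \langle \partial V / \partial \hat{x} \rangle \approx V'(\langle \hat{x} \rangle), so the expectation values approximately trace classical trajectories; that approximation, stabilized across the huge number of microsystems in a macroscopic body, is the proposed ground for identifying expectation values with observed values.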

Supervenience and Micro-Reduction vs. Macro-Reduction In QM & RM…

To reduce a mental property to physical properties, we must choose physical properties which have a determinateness commensurate with the properties to be reduced. Macroreduction is not a reduction to components, but a reduction in general types of things. With microreduction we eliminate some particular things from our ontology by showing that they are just combinations of things that we are already committed to. Macroreduction doesn't reduce the number of particular things; it reduces the number of types of things. A successful macroreduction shows that two types of things which we took to be different types of things are really the same type of thing. (Microreduction has this reduction of types as a side effect.)

Since explanation requires commensurability, which in turn requires some sort of reduction, and microreduction seems to be unsupportable in at least some interesting cases in biology, the need to legitimize successful reductive explanations suggests that we should look for macroreductions. We need, then, to characterize cases in which 1) microreduction is impossible or at least unhelpful, 2) micro-processes are explanatorily relevant, and 3) macroreduction is possible. Part of the problem is knowing where to look to satisfy these desiderata.

http://en.wikipedia.org/wiki/Supervenience

Also:
Quantum Mechanics As A Statistical Theory...(read pages 10 to 16)
http://www.lia.ufc.br/~carlos/artigos/popper.pdf
