Tuesday, November 20, 2007

Zachriel's EA and Dembski's quote

Continuation from here

Hello again Zachriel,

Drs. Dembski and Marks:

“Such assumptions, however, are useless when searching to find a sequence of say, 7 letters from a 26-letter alphabet to form a word that will pass successfully through a spell checker, or choosing a sequence of commands from 26 available commands in an Avida type program to generate a logic operation such as XNOR [16]. With no metric to determine nearness, the search landscape for such searches is binary -- either success or failure. There are no sloped hills to climb. We note that such search problems on a binary landscape are in a different class from those useful for, say, adaptive filters. But, in the context of the NFLT, knowledge of the class in which an optimization problem lies provides active information about the problem.”

Zachriel:

“Dembski has defined the same problem I did above. He claims that without some information built into the algorithm about how words are distributed through sequence space, evolutionary algorithms will be no better than random search. Is this what Dembski is saying? And is he right?”

Dembski is saying precisely what he is stating: “Such assumptions are useless when searching to find a sequence of say, 7 letters from a 26-letter alphabet to form a word that will pass successfully through a spell checker ... with no metric to determine nearness, the search landscape for such searches is binary – either success or failure. There are no sloped hills to climb.”

First, what assumptions is he talking about? In the previous paragraph, he refers to the assumptions:

“Within Häggström's familiarity zone, search structures have “links” in the optimization space and smoothness constraints allowing for use of “hill-climbing” optimization. Making such assumptions about underlying search structures is not only common but also vital to the success of optimizing searchers (e.g., adaptive filters and the training of layered perceptron neural networks [22]).”

So, basically, Dembski and Marks refer to assumptions of links and hill-climbing structures in the search space. The point is that merely assuming that these link structures exist within a non-uniform search space, without actual knowledge of those structures incorporated into the search algorithm, does nothing to improve the probability of success of a *random* search. What is needed is an algorithm matched to, and programmed to take advantage of, a *known* (so that it can provide accurate guidance) non-uniform search structure which actually contains hill-climbing optimizations, etc. If the proper active information, in the form of the proper type of adaptive search, is not applied, there are no sloped hills for a *random* search to climb, no matter the underlying search structure.

We know that Dembski and Marks are referring to *random* search because, in the first quoted paragraph above, they state that “We note that such search problems on a binary landscape are in a different class from those useful for, say, adaptive filters.”

Your algorithm does make use of at least one adaptive filter, such as a “stepping stone search,” which is defined as “building on more probable search results to achieve a more difficult search.” One instance of a stepping stone search is discussed in “Conservation of Information in Search: Measuring the Cost of Success,” and a formula for measuring the active information of that stepping stone search is provided there.

Your algorithm is far from the random search for a 7-letter word that Dembski refers to. First, it builds upon the success of previous results (a stepping stone search), sorts fit from unfit (also discussed, with a measurement of active information, in the aforementioned paper) before less probable words (any 7-letter words) are reached, and may actually include other instances of active information. Thus, since it builds off of previous filtered successes, it fits into the category of adaptive searches, which Dembski and Marks note are in a separate class from “success or failure” random searches.

Thus, your EA provides an example of Dembski's point ... that *adaptive filters* (which are in a different category from purely random search) must be *properly* applied to take advantage of *known* (as opposed to merely assumed) search space structure -- the non-random dictionary -- in order to provide better than chance performance and arrive at any targeted 7-letter word that will pass the spell checker test. Try the EA on a dictionary that contains only the 7-letter words.
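To make the contrast concrete, here is a minimal toy sketch in Python. It is not Zachriel's actual program, and the tiny dictionary, the starting word, and the insertion-mutation rule are my own illustrative assumptions. The only point is that the blind search faces a flat success-or-failure landscape, while the “stepping stone” style search succeeds because knowledge of the dictionary's structure is built into it.

import random
import string

# Hypothetical mini-dictionary; a real spell checker would have tens of thousands of words.
DICTIONARY = {"pan", "pane", "plan", "plane", "planes", "planets"}

def blind_search(tries):
    # Each guess is an independent draw from the 26**7 possible 7-letter strings:
    # a binary success/failure landscape with no hills to climb.
    for _ in range(tries):
        guess = "".join(random.choice(string.ascii_lowercase) for _ in range(7))
        if guess in DICTIONARY:
            return guess
    return None

def stepping_stone_search(tries):
    # Insert one random letter anywhere in the current accepted word and keep the
    # result only if the "spell checker" (dictionary membership) accepts it.
    # The dictionary's structure is the built-in, problem-specific information.
    current = "pan"
    for _ in range(tries):
        pos = random.randrange(len(current) + 1)
        candidate = current[:pos] + random.choice(string.ascii_lowercase) + current[pos:]
        if candidate in DICTIONARY:
            current = candidate
        if len(current) == 7:
            return current
    return None

random.seed(0)
print("blind search:         ", blind_search(100000))           # almost surely None
print("stepping stone search:", stepping_stone_search(100000))  # reaches "planets"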

Dembski also states: “Following Häggström [18], let the search space Ω be partitioned into a set of acceptable solutions T and a set of unacceptable solutions T̄. The search problem is to find a point in T as expediently as possible. With no active information, the distribution of the space is assumed uniform. This assumption dates to the 18th century and Bernoulli's principle of insufficient reason [2], which states ‘in the absence of any prior knowledge, we must assume that the events [in Ω] ... have equal probability.’ This is equivalent to an assumption of maximum (information theoretic) entropy on the search space [21]. Many criticisms of the NFLT are aimed at the uniformity assumption [28]. Knowledge of a deviation from uniformity, however, is a type of active information that is consistent with the premise of the NFLT.”

So, you have separated acceptable solutions from unacceptable solutions by creating a dictionary, but as already mentioned, your search algorithm is designed to take advantage of the non-random aspects of your dictionary. IOW, the algorithm is matched to the solutions. Knowledge of the solutions, and of the type of algorithm which will work, is used to create the correct algorithm, which builds upon previous results and also sorts those results in order to achieve less probable results in the future at better than chance performance. All of this complementary knowledge is extraneous to the evolutionary search itself, is essential to the success of the search, and is thus active, problem-specific, guiding information.

You do realize that the most your EA shows is that the targets for life, and the search algorithm needed to travel those pathways and reach the targets, are set before the search ever begins. Is that the hallmark of a system built upon a random set of variables?

Now, let’s discuss NFL Theorem a bit:

The NFL Theorem states that no method of search (algorithm) will, on average, perform better than any other algorithm [that is, produce consistently better than chance results] without problem-specific information guiding it. The example of finding the Ace of Spades in "Active Information in Evolutionary Search" excellently illustrates the point of the NFL Theorem.

IOW, from what I understand, if you have no knowledge of the search space structure, and you separate acceptable from unacceptable targets, any method (search algorithm) of finding an acceptable target will perform as well as any other method and will on average discover the acceptable target at no better than chance probability. That is, unless active information regarding the search space structure and/or information helping to determine the location of any of the acceptable targets is programmed into the search algorithm in the form of "warmer/colder hints," “stepping stone searches,” “partitioned searches,” “fitness selectors” etc. (Many types of adaptive algorithms are discussed in "Conservation of Information in Search: Measuring the Cost of Success.")

So, what separates an EA from random search? According to the NFL Theorem, if the EA is to find a target better on average than random search, there must be some problem-specific information (knowledge about the targeted problem) guiding it. “Concerning the NFLT, Ho and Pepyne write ‘unless you can make prior assumptions about the ... [problems] you are working on, then no search strategy, no matter how sophisticated, can be expected to perform better than any other’ [15]. According to Wolpert and Macready, search can be improved only by ‘incorporating problem-specific knowledge into the behavior of the [optimization or search] algorithm’ [30].

Anticipating the NFLT, Schaffer [24] notes ‘a learner [without active information] ... that achieves at least mildly better-than-chance performance ... is like a perpetual motion machine.’ The ‘prior assumptions’ and ‘problem specific knowledge’ required for ‘better-than-chance performance’ in evolutionary search derives from active information that, when properly fitted to the search algorithm, favorably guides the search.” Have you discovered, within your program, the information theoretic equivalent to perpetual motion machines? (Of course, understanding this point, you will see why I am so sceptical re: processes allegedly creating information at averages exceeding probability without guiding, problem specific, active information.)

Now, back to the “knowledge of a deviation from uniformity of the search space” that was briefly mentioned earlier.

The conclusion of “Active Information in Evolutionary Search:”

“Active information, when properly prescribed, successfully guides an evolutionary search to a solution by incorporating knowledge about the target and the underlying structure of the search space. That structure determines how the search deviates from the uniformity assumption of the NFLT. Häggström's ‘geographical structure[s],’ ‘link structure[s],’ search space ‘clustering,’ and smooth surfaces conducive to ‘hill climbing’ are examples that reinforce rather than refute the conclusion that intelligent design proponents draw from the NFLT, namely, that the success of evolutionary search depends on the front-loading of active information. Thus, in biology as in computing, there is no free lunch [8].”

Since the NFL Theorem assumes a uniform search space (maximum information entropy), we can conclude that a random search space acted upon by any chosen method of search (search algorithm) will not, on average, arrive at targets at better than random chance, nor will any random pairing of search space and search algorithm consistently perform better than chance over time. This is consistent with what I understand to be part of the basis for Conservation of Information:

A "learner... that achieves at least mildly than better-than-chance performance, on average, ... is like a perpetual motion machine - conservation of generalization performance precludes it.”
--Cullen Schaffer on the Law of Conservation of Generalization Performance. Cullen Schaffer, "A conservation law for generalization performance," in Proc. Eleventh International Conference on Machine Learning, H. Willian and W. Cohen. San Francisco: Morgan Kaufmann, 1994, pp.295-265.

That can be tested by generating a random set of variables (laws), which causes a random set of functional targets within a search space, and applying any method of search to discover those targets.

According to my understanding of the NFL Theorem and Conservation of Generalization Performance, there are two types of results that will not occur:

1. The search algorithm performing consistently better than chance over a lengthy run time.

2. After many shorter runs, of different random searches on random targets, the averages of finding the targets being better than chance.

“Although commonly used evolutionary algorithms such as particle swarm optimization [9] and genetic algorithms [11] perform well on a wide spectrum of problems, there is no discrepancy between the successful experience of practitioners with such versatile algorithms and the NFLT imposed inability of the search algorithms themselves to create information [4, 10]. The additional information often lies in the experience of the programmer who prescribes how the external information is to be folded into the search algorithm. The NFLT takes issue with claims that one search procedure invariably performs better than another or that remarkable results are due to the search procedure alone [1, 3, 13, 14, 17, 19, 23, 25, 26].”

"The NFLT puts to rest the inflated claims for the information-generating power of evolutionary simulations such as Avida [16] and ev [25]. The NFLT gauges the amount of active information that needs to be introduced to render an evolutionary search successful [18]. Like an athlete on steroids, many such programs are doctored, intentionally or not, to succeed [17]."

"Christensen and Oppacher note the NFLT is \very useful, especially in light of some of the sometimes outrageous claims that had been made of specific optimization algorithms" [4].

"Search algorithms do not contribute information to the search, and the NFLT exposes the inappropriateness of such claims."

"The NFLT shows that claims about one algorithm outperforming another can only be made in regard to benchmarks set by particular targets and particular search structures. Performance attributes and empirical performance comparisons cannot be extrapolated beyond such particulars. There is no all-purpose \magic bullet" search algorithm for all problems [5, 27]."

IOW, from what I understand, a non-random, non-uniform search space structure must be coupled to the correct search algorithm, utilizing the proper filter in accordance with prior knowledge of the target, in order to produce better than chance performance when searching for a target.

Now, let’s apply this to real life. Within life, a target is that informational sequence which functions within the living organism. Similarly, if you were to put the dictionary of targets within your EA together using random variables, then in order to produce anything worth comparing to the evolution of life, the targets would need to conform to a system of rules and interact with each other to create some type of function. In other words, the targets would need to be algorithmically complex and specified, yet generated by a random set of laws and attained through a random method of search at better than chance. That is what you’d have to demonstrate to even begin to show that life has evolved from non-planned, non-intelligently designed laws and information.

As Dembski and Marks write in “Active Information in Evolutionary Search”: “In the field of evolutionary computing, to which the weasel example belongs, targets are given extrinsically by programmers who attempt to solve problems of their choice and preference. But in biology, not only has life come about without our choice or preference, but there are only so many ways that matter can be configured to be alive and, once alive, only so many ways it can be configured to serve biologically significant functions. Most of the ways open to biological evolution, however, are dead ends. It follows that survival and reproduction sets the intrinsic targets for biological evolution.

Evolutionary convergence, the independent evolution of similar features (such as the camera eye of humans and squids), provides a striking example of intrinsic biological targets. Cambridge paleobiologist Simon Conway Morris [20] finds that evolution converges to only a few endpoints. He therefore theorizes that if the evolutionary process were restarted from the beginning, the life forms of today, including humans, would re-evolve. From the perspective of the NFLT, these limited number of endpoints on which evolution converges constitute intrinsic targets, crafted in part by the environment and by laws of physics and chemistry.”

This provides evidence that our universe will necessarily arrive at pre-set living targets, guided by problem-specific, active information at the foundation of its natural laws. This is consistent with any natural teleological hypothesis referencing the origin of our universe and the subsequent evolution of life. Evolutionary algorithms, the NFL Theorem, and the Law of Conservation of Generalization Performance also provide evidence against any assertion that just any random set of information and laws will cause evolution (at its most basic: the better than chance generation, on average, of algorithmically complex and specified information) to occur.

So, your example operates off of some active information, and it seems to show the limited capability of evolutionary algorithms. If your EA were to even simplistically model the evolution of life, you would need to show that the pathway from one usable protein target to the next, less probable usable proteins has a high enough probability to be traversed in the amount of time available, and this would have to be consistent with known mutation rates (‘x’ mutations per generation, ‘x’ generations per year, 4 x 10^9 years available, and largest protein = 26,926 amino acids). Of course, the probabilities between targets would also have to apply to the generation of instructions for assembling machines and systems from those proteins.

Furthermore, your program would also need to model the further evolution of information processing systems, the evolution of the different functions of RNA, the evolution of logic gates, the evolution of complex machinery, the evolution of IC systems (attained through indirect pathways), redundant systems, repair systems, and convergent evolution (the independent evolution of similar features), to name a few systems and results which have evolved within life, not to mention intelligent beings and conscious systems. I wonder how much, and what type(s) of, fine-tuned, problem-specific, active information would be necessary in order to evolve all of those results. And “answers” about how our EAs won’t create such systems just because there isn’t enough time don’t impress me. I’m perfectly fed up with anti-scientific, progress-crushing, “chance of the gaps” non-explanations. I want knowledge of how something is accomplished. I don’t want “pat” answers about how things “just happen” to self-assemble given enough random variation and time. I want to know *how* in a manner consistent with our knowledge of the flow of information. This truly sounds like some excellent ID research.

The conclusion: evolution does not create any new information; it only converts it from one program to another -- from the problem-specific, active informational structure of our natural laws at the foundation of our universe to the information processing system of life. Enter the Law of Conservation of Information. As well, since evolution generates information at better than random chance, it must make use of that problem-specific information to find the informational targets. Furthermore, evolution by natural selection provides evidence of teleological foresight and of programming of the active, problem-specific information necessary for the consistently better than chance performance of evolution.

"Search algorithms, including evolutionary searches, do not generate free information. Instead, they consume information, incurring it as a cost. Over 50 years ago, Leon Brillouin, a pioneer in information theory, made this very point: .The [computing] machine does not create any new information, but it performs a very valuable transformation of known information. [3] When Brillouin’s insight is applied to search algorithms that do not employ specific information about the problem being addressed, one finds that no search performs consistently better than any other. Accordingly, there is no magic-bullet search algorithm that successfully resolves all problems [7], [32]."

Next question: How does one information processing system create another information processing system within it (ie: universe creates life)? I predict: not by unguided, random accident. Sounds like some more ID research.

Monday, September 24, 2007

My understanding of the Universal Probability Bound

RE: the Universal Probability Bound:

-given the sequence of prime numbers: “12357 ..."

According to probability theory, a random digit has a one in ten chance of matching the first digit of the sequence of prime numbers; matching the first two digits is a one in 100 chance, the first three a one in 1000 chance, and so on. So, how far up the pattern of prime numbers will chance take us before making a mistake? The further you go, the more likely chance processes are to deviate from the specified pattern. It’s bound to happen eventually, as the odds increase dramatically and quickly. But how do we know where the cut-off is?

Dembski has introduced a very “giving the benefit of the doubt to chance” type of calculation based on the age of the known universe and other known factors, and actually borrowing from Seth Lloyd’s calculations. Now, it must be noted that as long as the universe is understood to be finite (having a beginning) then there will be a probability bound. This number may increase or decrease based on future knowledge of the age of the universe. However, a UPB will exist and a scientific understanding can only be based on present knowledge.

As far as I understand, this number, when calculated, actually allows chance to produce up to 500 bits of specified information before cutting chance off: everything that is specified, algorithmically complex, and above that bound of 500 bits is most reasonably beyond the scope of chance operating anywhere within the universe for the duration of the universe, and is thus complex specified information and the result of intelligence.

Now, let’s take a closer look at the Universal Probability Bound of 500 bits. What would it take for pure random chance to cover all possible combinations of a 500 bit sequence? Well, any given 500 bit sequence is 1 in 2^500 possible combinations; that is 1 out of more than 3.27 x 10^150 possible sequences. Now let’s look at the age of the universe. It is 15.7 billion years old; that is approx. 4.95 x 10^17 seconds old. After a few simple calculations it is easy to see that the whole universe would have to be flipping 6.61 x 10^132 sets of 500 coins every second for 15.7 billion years in order to generate 3.27 x 10^150 sequences of 500 bits.
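For anyone who wants to check the arithmetic above, here is a small Python sketch using the post's own figures (500 bits, 15.7 billion years); the seconds-per-year constant is an approximation.

# Rough check of the numbers quoted above.
SECONDS_PER_YEAR = 3.156e7
age_seconds = 15.7e9 * SECONDS_PER_YEAR      # ~4.95 x 10^17 seconds
sequences = 2 ** 500                          # ~3.27 x 10^150 possible 500-bit strings
sets_per_second = sequences / age_seconds     # ~6.6 x 10^132 sets of 500 flips per second

print(f"age of universe:  {age_seconds:.2e} s")
print(f"2^500:            {float(sequences):.2e}")
print(f"flips per second: {sets_per_second:.2e}")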

But even after all is said and done, all possible combinations will not have been generated, because there is no way to guarantee that no pattern will appear twice. Since probabilities deal with averages, it is only after many sets of 15.7 billion years that we would expect, on average, to see all of the possible combinations appear. But of course, this assumes that there are indeed that many “sets of coins” being flipped at the above rate in the first place.

And still, there is no guarantee, even with that many random “flips of a coin,” that a pattern such as “10” repeated 250 times will ever be generated. In fact, it is not in the nature of pure random processes to match patterns which can be described and formulated by a system of rules. Furthermore, science always looks for the best explanation, and law and intelligence (teleological processes) are already available as better explanations than chance for the creation of specified patterns – patterns which can be described and formulated by a system of rules. The limit of 500 bits only provides a very generous Universal Probability Bound, based on known measurements of the universe, that places a restriction on the invocation of “chance of the gaps” when better and more reasonable explanations, based on observation, are available.

In fact, here is a little test. Take a 100-bit pattern (including spaces and ending punctuation) such as “aaaaaaaaaaaaaaaaaaaa” and randomly “spin” the letters to your heart’s content for as long as you like, and see if you ever get a specified pattern.

Again, as I’ve stated before, ID Theory provides a best explanation hypothesis about the nature of the cause of the ‘Big Bang’ model based upon observation and elimination of other alternatives that posit unreasonable gaps based on chance, not based on observation, which are postulated to circumvent observed cause and effect relations.

Where is the CSI necessary for evolution to occur?

First, read through this extremely informative article and the abstracts to these three articles:

here

here

here

then continue ...

According to Dr. Marks’s work with evolutionary algorithms and computing and intelligent systems, evolving functional information is always guided by previous functional information toward a solution within a search space to solve a previously known problem. [endogenous information] = [active information] + [exogenous information], or as “j” at Uncommon Descent explained, “[the information content of the entire search space] equals [the specific information about target location and search-space structure incorporated into a search algorithm that guides a search to a solution] plus [the information content of the remaining space that must be searched].”
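As a rough numerical illustration of that decomposition (not a measurement of any real program), here is a minimal Python sketch. The blind-search probability p uses the 7-letter-word example from earlier in this blog; the assisted-search success probability q is a made-up assumption.

from math import log2

p = 1 / 26**7   # blind search: one query hitting one target 7-letter word
q = 1 / 1000    # hypothetical success probability of an assisted (EA-style) search

I_endogenous = -log2(p)                 # difficulty of the unassisted problem (~32.9 bits)
I_exogenous = -log2(q)                  # difficulty remaining for the assisted search (~10.0 bits)
I_active = I_endogenous - I_exogenous   # information the assistance must supply

print(f"endogenous: {I_endogenous:.1f} bits")
print(f"exogenous:  {I_exogenous:.1f} bits")
print(f"active:     {I_active:.1f} bits")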

Intelligence is capable of the necessary guiding foresight, and a sufficient level of intelligence possesses the ability to set up this type of system. This is based on observational evidence. So, if functional information only comes from intelligence and previous information, then where does the information necessary for abiogenesis (the original production of replicating information processing systems) and evolutionary change come from?

Evolution seems to be, for the most part, guided by the laws which allow biochemistry and natural selection, both of which result from the laws of physics. The laws of physics are at the foundation of our universe, which is now seen to be an information processing system. If the universe processes information, and if biochemistry and natural selection are a result of the laws of physics, then the information for evolution by natural selection [and other necessary mechanisms] is at the foundation of our universe (as an information processing system) and is represented in the finely tuned relationship between the laws of physics and life’s existence and subsequent evolution. IOW, the universe is fine tuned, that is, intelligently programmed, for life and evolution.

My point is that abiogenesis and evolution are not accidental; they are necessarily programmed into our universe, arising from the fine tuned information at the foundation of our universe, yet they do not arrive strictly from laws and chance (stochastic processes) alone, since information processors and functional information are not definable in terms of theoretical law. This is similar to the ending pattern of a shot in a pool game. The pattern of balls is created by the laws of physics once the first pool ball is set in motion; however, the ending pattern itself is not describable by natural law. It is a random pattern/event. But a “trick shooter” can fine tune both the initial set-up and the starting shot in order to create a desired pattern in the form of a trick shot.

Just like the ending pattern of the shot in the pool game, information and information processing systems are not describable by law. Again, the ending pattern is a random pattern/event. However, information processing systems, which are necessary for evolutionary algorithms to search a given space, and the functional information to search that space, and the information to guide the search do not arise by random accident within any random set of laws. Along with the argument from CSI and my other argument (scroll down to first comment), the best explanation is that these factors of the appearance of life (an information processing system) and evolution (the production of CSI) are programmed into our universe by intelligence.

That can be falsified by creating a program which generates random laws. If these random laws will cause any information processing system which generates functional information to randomly self-organize, just as any pattern of pool balls will self organize randomly after any and every shot, then the above hypothesis is falsified. Dr. Robert Marks is already examining these types of claims, and along with Dr. Dembski is refuting the claims that evolution creates CSI for free by critically examining the evolutionary algorithms which purportedly show how to get functional information for free. Dr. Marks is using the concept of CSI and Conservation of Information and experimenting in his field of expertise – “computational intelligence” and “evolutionary computing” – to discover how previously existing information guides the evolution of further information.

Complex Specified Information ... Simplified with filter included

The 6 levels for determining Complex Specified Information.

Before I begin, the reason why CSI points to previous intelligence is that intelligence has been observed creating this type of information, and, based on its pseudo-random properties, CSI cannot be described by law and is not reasonably attributed to, nor has it been observed to arise from, random chance. CSI is primarily based on specificity. A specified pattern is described, independent of the event in question, by the rules of a system. As such, explanations other than chance are to be posited for informational patterns that are described by the rules of a system. Dr. Dembski describes specified patterns as those patterns which can be described and formulated independent of the event (pattern) in question. The rest of CSI is briefly and simply explained within the following filter.

Clarification: I have no problem with an evolutionary process creating CSI, the question is “how?” First, evolution takes advantage of an information processing system and this is a very important observation -- read “Science of ID” and the first comment. Second, it is obvious that an evolutionary process must freeze each step leading to CSI through natural selection and other mechanisms in order to generate CSI. It is thus obvious that the laws of physics contain the fine tuned CSI necessary to operate upon the information processing system of life and cause it to generate further CSI. This can be falsified by showing that any random set of laws acting on any random information processing system will cause it to evolve CSI. For more and to comment on this idea refer to “Where is the CSI necessary for evolution to occur.”

Now for the steps for determining CSI:

1. Is it Shannon information? (Is it a sequence of discrete units chosen from a finite set in which the probability of each unit occurring in the sequence can be measured? Note: Shannon information is a measurement of decrease of uncertainty.)

Answer:

No – it’s not even measurable information, much less complex specified information. Stop here.

...or...

Yes – it is at least representable and measurable as communicated data. Move to the next level.


2. Is it specified? (Can the given event (pattern) be described independent of itself by being formulated according to a system of rules? Note: this concept can include, but is not restricted to, function and meaning.)

(Ie:
- “Event (pattern) in question” – independent description [formulated according to rules of a system]
- “12357111317" – sequence of whole numbers divisible only by themselves and one [formulated according to mathematical rules]
- “101010101010" – print ‘10' X 6 [formulated according to algorithmic information theory and rules of an information processor]
-“can you understand this” – meaningful question in which each word can be defined [formulated according to the rules of a linguistic system (English)]
- “‘y(x)’ functional protein system” –‘x’ nucleotide sequence [formulated according to the rules of the information processing system in life]
- “14h7d9fhehfnad89wwww” – (not specified as far as I can tell)

Answer:

No – it is most likely the result of chance. Stop here.

...or...

Yes – it may not be the result of chance; we should look for a better explanation. Move to the next level.

3. Is it specified because of algorithmic compressibility? (Is it a repetitious/regular pattern?)

Answer:

Yes – it is most likely the result of law, such as the repetitious patterns which define snowflakes and crystals. The way to attribute an event to law (natural law) as opposed to random chance is to discover regularities which can be defined by equation/algorithm. Stop here.

...or...

No – the sequence is not describable as a regular pattern, thus tentatively ruling out theoretical natural laws -- natural laws being fundamentally bound to laws of attraction (ie: voltaic, magnetic, gravitational, etc.) and thus producing regularities. Law can only be invoked to describe regularities. The sequence is algorithmically complex, may be pseudo-random, and we may have a winner, but let’s be sure. If the pattern is short, then it may still be the result of chance occurrences and may be truly random. Our universe is huge beyond comprehension, after all. It may be bound to happen somewhere, sometime. Move to the next level.

4. Is it a specification? (Is its complex specificity beyond the Universal Probability Bound (UPB) – in the case of information, does it contain more than 500 bits of information?)

Answer:

No – it may be the result of intelligence, but we cannot be quite sure, as random occurrences (possibly “stretching it”) might still be able to produce this sequence somewhere, sometime. We’ll defer to chance on this one. Stop here.

...or...

Yes – it is pseudo-random and is complex specified information and thus the best (most reasonable) explanation is that of previous intelligent cause. If you would still like to grasp at straws and arbitrarily posit chance as a viable explanation, then please move to the next level.

5. Congratulations, you have just resorted to a “chance of the gaps” argument. You have one last chance to return to the previous level. If not, move on to the last level.

6. You seem to be quite anti-science as you are proposing a non-falsifiable model and this quote from Professor Hasofer is for you:

“"The problem [of falsifiability of a probabilistic statement] has been dealt with in a recent book by G. Matheron, entitled Estimating and Choosing: An Essay on Probability in Practice (Springer-Verlag, 1989). He proposes that a probabilistic model be considered falsifiable if some of its consequences have zero (or in practice very low) probability. If one of these consequences is observed, the model is then rejected.

‘The fatal weakness of the monkey argument, which calculates probabilities of events “somewhere, sometime”, is that all events, no matter how unlikely they are, have probability one as long as they are logically possible, so that the suggested model can never be falsified. Accepting the validity of Huxley’s reasoning puts the whole probability theory outside the realm of verifiable science. In particular, it vitiates the whole of quantum theory and statistical mechanics, including thermodynamics, and therefore destroys the foundations of all modern science. For example, as Bertrand Russell once pointed out, if we put a kettle on a fire and the water in the kettle froze, we should argue, following Huxley, that a very unlikely event of statistical mechanics occurred, as it should “somewhere, sometime”, rather than trying to find out what went wrong with the experiment!’”

Therefore, ID Theory provides a best explanation hypothesis about the nature of the cause of the ‘Big Bang’ model based upon observation and elimination of other alternatives that posit unreasonable gaps based on chance, not based on observation, which are postulated to circumvent observed cause and effect relations.

Saturday, September 15, 2007

Concept of CSI (part 2)

Continuation from here.

Zachriel:
“You keep talking about CSI and complexity, but the only issue at this point is the definition of “specificity”. Your meandering answer is evidence of this extreme overloading of even basic terminology.”

My example of defining “complexity” was to show that even in information theory, some concepts can and must be defined and quantified in different ways.

... and I have given definitions of specificity in different words (pertaining to the definition which aids in ruling out chance occurrences), hoping that you would understand them. However, you continually ignore them. Or did you just miss these?

Meandering, nope. Trying to explain it in terms you will understand (also borrowing from Dembski’s terminology), yep.

Zachriel:
“This is Dembski’s definition of specificity:

Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom ϕS measures specificational resources, the specificity σ is given as follows:

σ = –log2[ϕS(T)·P(T|H)].”

First, you do realize that in order to measure something’s specificity, the event must first qualify as specific, just as, in order to measure an event using Shannon information (using the equation which defines and quantifies Shannon information), the event must first meet certain qualifiers. I’ve already discussed this.

Now, yes, you are correct. However, you stopped half way through the article and you seemed to have arbitrarily just pulled out one of the equations. Do you even understand what Dembski is saying here? You do also realize that specificity and a specification are different, correct?

Dembski was almost done building his equation, but not quite. You obviously haven’t read through the whole paper. Read through it, then get back to me. You will notice that Dembski later states, in regard to your referenced equation:

“Is this [equation] enough to show that E did not happen by chance? No.” (Italics added)

Why not? Because he is not done building the equation yet. He hasn’t factored in the probabilistic resources. I’ll get back to this right away, but first ...

The other thing that you must have missed, regarding the symbols used in the equation, directly follows your quote of Dembski’s equation. Here it is:

“Note that T in ϕ S(T) is treated as a pattern and that T in P(T|H) is treated as an event (i.e., the event identified by the pattern).”

It seems that the above-referenced equation compares the event in question (the event identified by the pattern) with its independently given pattern, against its chance hypothesis, thus actually showing that we were both wrong in thinking that just any pattern (event) could be shoved into the above equation. The equation itself works only on those events which have an independently given pattern (thus already qualifying as specific), and it gives a measurement of specificity, but not a specification. You will notice, if you continue to read the paper, that a complex specificity greater than 1 equals a specification and thus CSI.

Dembski does point out that in the completed equation, when a complex specificity produces a greater than 1 result, you have CSI. As far as I understand, this is a result of inputting all available probabilistic resources, which is something normal probability theory does not take into consideration. Normally, probability calculations give you a number between 0 and 1, showing a probability, but they do so without consideration of probabilistic resources and the qualifier of the event conforming to an independently given pattern. Once this is all calculated and its measurement is greater than one (greater than the UPB), then you have CSI.
Moreover, the specification is a measurement in bits of information and as such can not be less than 1 anyway, since 1 bit is the smallest amount of measurable information (this has to do with the fact that measurable information must have at least two states -- thus the base unit of the binary digit (bit), which is one of those two states).

You must have seriously missed where Dembski, referencing pure probabilistic methods in “teasing” out non-chance explanations, said (and I already stated a part of this earlier):

“In eliminating H, what is so special about basing the extremal sets Tγ and Tδ on the probability density function f associated with the chance hypothesis H (that is, H induces the probability measure P(.|H) that can be represented as f·dU)? Answer: THERE IS NOTHING SPECIAL ABOUT f BEING THE PROBABILITY DENSITY FUNCTION ASSOCIATED WITH H; INSTEAD, WHAT IS IMPORTANT IS THAT f BE CAPABLE OF BEING DEFINED INDEPENDENTLY OF E, THE EVENT OR SAMPLE THAT IS OBSERVED. And indeed, Fisher’s approach to eliminating chance hypotheses has already been extended in this way, though the extension, thus far, has mainly been tacit rather than explicit.” [caps lock added]

Furthermore ...

Dr. Dembski: “Note that putting the logarithm to the base 2 in front of the product ϕ S(T)P(T|H) has the effect of changing scale and directionality, turning probabilities into number of bits and thereby making the specificity a measure of information. This logarithmic transformation therefore ensures that the simpler the patterns and the smaller the probability of the targets they constrain, the larger specificity.”

Thus, the full equation that you haven’t even referenced yet gives us a measurement (quantity) in bits, of the specified information as a result of ‘log base 2'.

In fact, here is the full equation and Dembski’s note:

“The fundamental claim of this paper is that for a chance hypothesis H, if the specified complexity χ = –log2[10^120 · ϕS(T)·P(T|H)] is greater than 1, then T is a specification and the semiotic agent S is entitled to eliminate H as the explanation for the occurrence of any event E that conforms to the pattern T”

In order to understand why this factor of 10^120 (the probabilistic resources) is needed, let’s look at a sequence of prime numbers:

“12357" – this sequence is algorithmically complex and yet specific (as per the qualitative definition) to the independently given pattern of prime numbers (stated in the language of mathematics as the sequence of whole numbers divisible only by itself and one), however there is not enough specified complexity to cause this pattern to be a specification greater than 1. It does conform to an independently given pattern, however, it is relatively small and could actually be produced randomly. So, we need to calculate probabilistic resources and this is where the probability bound and the above equation comes into play.

According to probability theory, a random digit has a one in 10 chance of matching the first digit of the sequence of prime numbers (or a pre-specification); matching the first two digits is a one in 100 chance, the first three a one in 1000 chance, and so on. So, how far up the pattern of prime numbers will chance take us before making a mistake? The further you go, the more likely chance processes are to deviate from the specific (or pre-specified) pattern. It’s bound to happen eventually, as the odds increase dramatically and quickly. But, how do we know where the cut-off is?

Dembski has introduced a very “giving the benefit of the doubt to chance” type of calculation based on the age of the known universe and other known factors, actually borrowing from Seth Lloyd’s calculations. Now, it must be noted that as long as the universe is understood to be finite (having a beginning), there will be a probability bound. This number may increase or decrease based on future knowledge of the age of the universe. However, a UPB will exist, and a scientific understanding can only be based on present knowledge.

This number, as far as I understand, actually allows chance to produce up to 500 bits of specified information before cutting chance off: everything else that is already specified and above that bound of 500 bits is definitely beyond the scope of chance operating anywhere within the universe and is thus complex specified information and the result of intelligence. I dare anyone to produce even 100 bits of specified information completely randomly, much less anything on the order of complex specified information.
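Here is a minimal Python sketch of the completed formula quoted above, with purely illustrative inputs for ϕS(T) and P(T|H); it is only meant to show how the 10^120 factor of probabilistic resources and the “greater than 1” threshold interact, not to evaluate any real pattern.

from math import log2

def specified_complexity(phi_s, p_t_given_h, replicational_resources=10**120):
    # chi = -log2( 10^120 * phi_S(T) * P(T|H) ); chi > 1 marks a specification.
    return -log2(replicational_resources * phi_s * p_t_given_h)

# Illustrative assumptions only: a pattern with a modest descriptive rank phi_S(T)
# but an astronomically small chance probability, versus one chance could plausibly hit.
print(specified_complexity(phi_s=10**5, p_t_given_h=2.0**-500))  # ~85, well above 1
print(specified_complexity(phi_s=10**5, p_t_given_h=10.0**-5))   # large negative; chance not ruled out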

Clarification: I have no problem with an evolutionary process creating CSI as it makes use of a replicating information processing system. As I have said earlier, it is the “how” that is the real question and the present problem. IMO, the present observations actually seem to support a mechanism which produces CSI in sudden leaps rather than gradually. In fact, Dr. Robert Marks is presently working on evolutionary algorithms to test their abilities and discover experimentally what is necessary to create CSI and how CSI guides evolutionary algorithms towards a goal.

So, do you understand, yet, how to separate the concept of specificity (of which I have provided ample definitions and examples previously) from the measurement of complex specified information as a specification greater than 1 after factoring in the UPB in the completed equation?

Zachriel:
“Dembski’s definition has a multitude of problems in application,”

So, you are now appealing to “fallacy by assertion?” (yes, I think I just made that up)

fallacy by assertion: “the fallacy which is embodied within the idea that simply asserting something as true will thus make it to be true.”

Come on, Zachriel, this is a debate/discussion; not an assertion marathon.

I have already shown you how to apply it earlier, and you just conveniently chose not to respond. Remember matching the three different patterns with the three different causal choices? If there are a multitude of problems in the definition of specificity, please do bring them forward. You’ve already brought some up, but after I answered those objections, you haven’t referred to them again.

Actually, to be honest with you, since this is quite a young and recently developed concept, there may be a few problems with the concept of specificity and I welcome the chance to hear from another viewpoint and discuss these problems and see if they are indeed intractable.

Zachriel:

“but grappling with those problems isn’t necessary to show that it is inconsistent with other uses of the word within his argument. This equivocation is at the heart of Dembski’s fallacy.”

Have you ever “dumbed something down” for someone into wording and examples that they would understand in order to explain the concept to them, because they couldn’t comprehend the full detailed explanation? Furthermore, have you ever approached a concept from more than one angle, in order to help someone fully comprehend it? This is indeed a cornerstone principle in teaching. I have employed this countless times, as I’ve worked with kids for ten years.

You have yet to show me where any of Dembski’s definitions of specificity are equivocations rather than saying the same thing in different wording to different audiences of differing aptitudes or rewording as a method of clarification.

Zachriel:
“Dembski has provided a specific equation. This definition should be consistent with other definitions of specificity, as in “This is how specification is used in ID theory..."
Do you accept this definition or not?”

I agree with the concept of specificity and its qualitative definition. As for the equation, I do not understand all of the math involved with the equation, but from what I do understand it does seem to make sense. You do understand the difference between an equation as a definition of something such as *force* in f=ma, as opposed to a qualitative definition of what is “force?”

But, then again, I’ve already been over this with you in discussing shannon information and you chose to completely ignore me. Why should it be any different now?

This definitional equation which provides a quantity of complex specified information with all available probabilistic resources factored in is consistent with all other qualifying definitions of specificity as it contains them within its equation.

You have yet to show anything to the contrary.

As far as application goes, I do think that the equation may be somewhat ambiguous to use on an event which is not based on measurable information. But, then again, I’d have to completely understand the math involved in order to pass my full judgement on the equation.

Furthermore, do you understand the difference between a pre-specification, a specification, specified information, specified complexity, and complex specified information? I ask, because you don’t seem to understand these concepts. If information is specified/specific (which I’ve already explained) and complex (which I’ve already explained), then you can measure for specificity (which is the equation that you have referenced). However, this doesn’t give us a specification, since the probabilistic resources (UPB) are not yet factored in. Once the UPB is factored in, then you can measure the specified complexity for a specification. If the specified complexity is greater than one, then you have a measure of specification and you are dealing with complex specified information.

It is a little confusing, and it has taken me a while to process it all, but how can YOU honestly go around with obfuscating arguments and false accusations (which you haven’t even backed up yet) of equivocations when it is obvious that you don’t even understand the concepts?

Do you do that with articles re: quantum mechanics just because you can’t understand the probabilities and math involved or the concept of wave-particle duality or some other esoteric concept?

I will soon be posting another blog post re: CSI (simplified) and the easy to use filter for determining if something is CSI. Here it is.

P.S. If you want to discuss the theory of ID and my hypothesis go to “Science of Intelligent Design” ...

Wednesday, August 29, 2007

Concept of Complex Specified Information

Hello Zachriel,

welcome back!

Unfortunately, I do have much to say and am unversed in brevity. Moreover, because I do believe that there are some important issues which you brought up as a response to my last blog post, I have decided to create my response as another blog post -- this one.

First, I will kick this off by letting you know that, while I do not understand all of the math that is involved with Dembski’s Complex Specified Information, I do believe that Dembski has explained the basic concepts in a manner so that one does not need to understand all of the math involved to understand the basic concepts upon which the math is established.

Zachriel's statements will be bolded and my response will follow in plain type.

Zachriel:
“Many of the terms used in Intelligent Design arguments are equivocations. Even if unintentional, these equivocations lead people to invalid conclusions, then to hold these conclusions against all argument.”


I disagree. You will have to show me which terms are equivocations. Just remember that there is a difference between equivocation and two different ways of saying one thing. Of course, it must be shown that the “two different ways” are indeed “the same thing.” As well, nowhere have I equivocated by defining a word one way and then using it in a context where that definition does not apply, at least as far as I can tell. Furthermore, some words can be used in more than one way without equivocation, as long as you are up front about how you are defining and using that word.

Zachriel:
“A case in point is "specificity". You suggest a dictionary definition for "specification: a detailed precise presentation of something or of a plan or proposal for something" adding "in order for anything to be specified, it must be converted by an information processor into its specified object or idea". But these are not the only definitions of "specific: sharing or being those properties of something that allow it to be referred to a particular category", and this can easily lead to confusion or conflation. We have to proceed carefully.”


Sure, which is why I provided the definition that is applicable to the topic at hand and have not engaged in any equivocation.

CJYman: "This is how specification is used in ID theory..."

Zachriel:
“But no. This is not how Dembski uses it in the context of Complex Specified Information. His definition of specificity is quantitative and based on the simplest (meaning shortest) possible description of a pattern by a semiotic agent.”


You are partially correct, but you’re missing something. You are referring to merely compressible specification, not complex specification. I will address this further when I respond to your next statement.

First, let’s look at “an independently given pattern:”

Now, here is the main idea behind specificity, as described by Dr. Dembski:

Dr. Dembski, from here:
“There now exists a rigorous criterion -- complexity-specification -- for distinguishing intelligently caused objects from unintelligently caused ones. Many special sciences already use this criterion, though in a pre-theoretic form (e.g., forensic science, artificial intelligence, cryptography, archeology, and the Search for Extra-Terrestrial Intelligence) ... The contingency must conform to an independently given pattern, and we must be able independently to formulate that pattern. A random ink blot is unspecifiable; a message written with ink on paper is specifiable.”

The pattern is independently given if it can be converted into function/meaning according to the rules of an information processing system/semiotic agent not contained within the pattern. In the above example, the ink markings specify function according to the rules (language and lexicon) of a human information processor/semiotic agent. Thus, the pattern to match is one that has meaning/function. That was also the point of Dr. Dembski’s example (in “Specification: the Pattern that Signifies Intelligence,” pp. 14-15) of the difference between a prespecification of random coin tosses and a specification of a coin toss that could be converted into the first 100 bits of the Champernowne sequence. The sequence specifies a function of a combination of mathematical and binary rules, as opposed to just matching a previous random toss. The Champernowne sequence exemplifies the same idea behind receiving a signal of prime numbers from ET.
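For reference, here is a short Python sketch that generates the opening bits of the binary Champernowne sequence (the concatenated base-2 representations of 1, 2, 3, ...), the kind of independently formulable pattern the coin-toss example appeals to.

def champernowne_bits(n_bits):
    # Concatenate the binary representations of 1, 2, 3, ... and truncate.
    bits, k = "", 1
    while len(bits) < n_bits:
        bits += format(k, "b")
        k += 1
    return bits[:n_bits]

print(champernowne_bits(100))
# A coin-toss record matching this string conforms to a pattern that can be
# formulated by a simple rule given independently of the tosses themselves.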

And, yes, measurability (quantity) is important in scientifically distinguishing specificity, which is why the units must be measurable in amount of information (shannon information theory) and randomness (algorithmic information theory) and conform to other criteria in order to be complex specified information.

It is true that, in “Specification: the Pattern that Signifies Intelligence,” Dembski does explain the mathematics behind complexity and specification using both shannon and algorithmic information theory and probability theory, however the purpose of my last blog post was to merely flesh out the non-mathematical description of the concept of specification, since many people do not understand what it entails.

Zachriel:

“Leaving aside the voluminous problems with his definition, this is quite a bit different than yours. All patterns can be specified by a semiotic agent, the question is the compactness of that description.

σ = –log2[ϕS(T)·P(T|H)].”

Incorrect.

First, from what I understand, a semiotic agent is an information processor since they both are systems which can interpret signs/signals and the basic definition of an information processor is that which converts signs/signals into function/meaning. But that is only another way of saying what Dr. Dembski states on pg. 16 (“Specification: the Pattern which Signifies Intelligence”): “To formulate such a description, S employs a communication system, that is, a system of signs. S is therefore not merely an agent but a semiotic agent.”

And actually, Dembski’s definition and mine are saying the same thing, just with different wording.
Here is the definition I used: specificity = “to include within its detailed plan a separate item.” Dembski’s statement, “Contingency conforming to an independently given pattern,” is the same concept in different words. First, a plan in the form of coded information is a measurable contingency. Second, an independently given pattern, as formulated by an information processing/semiotic system, has meaning/function. Now, look at Dr. Dembski’s above examples. The sequence of units in the message is the contingency, and the meaningful/functional sequence of letters or numbers which conforms to the rules of an information processing system is the separate item. According to myself, the sequence of units in the message is the detailed plan, and the meaningful/functional sequence of letters or numbers which conforms to the rules of an information processing system is the separate item. A sequence of letters is the contingency/plan which is converted into a separate item/independently given pattern (language) of meaning/function.

An information processor is necessary to convert the contingency/plan into the separate item/independently given pattern which is meaningful/functional. I was merely discussing the same concept with different wording and then focussing on my own argument from information processing, which actually becomes a slightly different aspect of ID Theory since I deal with the cause of information processors/semiotic agents (that which causes specificity).

Re: “compactness” and algorithmic information theory:

Algorithmic compressibility (“compactness of the description”) is one way to rule out chance; however, algorithmic compressibility does not rule out natural law, since high algorithmic compressibility expresses regular repetition, which is indicative of a causal law. So, it is true that natural law can create specifications in the form of repetitive patterns. Repetitive patterns are specified because they represent a simplified/compressed description caused by a specific algorithm being processed by an information processing system/semiotic agent. In being repetitive and caused by the laws of the algorithm, they are ruled out as having been caused by chance. Compressible specifications can be represented by an algorithm as a string shorter than the original pattern.

First example:

-the pattern:“1010101010101010101010101010101010101010"
-the algorithm: print ‘10' twenty times

The algorithm is the independently given pattern (an independent simplified/compressed description) processed by the rules of the language of its program (information processor/semiotic agent), and thus the overall pattern, following the laws of the algorithm, has low randomness. In this case chance is ruled out, and the pattern is specific.

Second example:

-pattern:“473826180405263487661320436416"
-algorithm: there is no simplified/compressed independently given description of the pattern.

According to algorithmic information theory, the second example is more random than the first -- all units are random with respect to each other within the overall pattern. Because it cannot be compressed -- it follows no rule/law -- it is random and thus, as I understand Dembski’s argument, complex.

The first example cannot reasonably have arisen by chance acting independent of law, yet there is no reason why the second couldn’t have. Note that I said, “acting independent of law.” In nature, there are many sequences which display regularities and are thus caused by natural law, e.g. snowflakes and the molecular structure of diamonds. In these cases the pattern is caused by laws of physical attraction between their units acting in accord with other chance factors; however, it is not merely attributable to chance in the way a random arrangement of rocks is caused by chance, without a law to describe the exact arrangement.
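Just to make the compressibility contrast concrete, here is a minimal Python sketch. It uses zlib compression as a rough, practical stand-in for algorithmic compressibility, which cannot be computed exactly; the zlib choice and the exact numbers it prints are only illustrative, not part of any formal definition.

import zlib

def compressed_size(s):
    # Size in bytes of the zlib-compressed string: a crude, computable
    # proxy for algorithmic (Kolmogorov) compressibility.
    return len(zlib.compress(s.encode("ascii")))

regular = "10" * 20                            # first example: "1010...10"
irregular = "473826180405263487661320436416"  # second example

for label, s in [("regular", regular), ("irregular", irregular)]:
    # zlib adds a few bytes of fixed overhead, so only a clear reduction
    # below the original length indicates real compressibility.
    print(label, "-", len(s), "chars ->", compressed_size(s), "bytes compressed")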

So, compressible specifications rule out chance, but not natural law. How is natural law ruled out?

Well, what if there was no compressibility in a given bit string, ruling out law, but there still was an independently given pattern, which rules out chance? IOW, what if we have both complexity and specification -- complex specificity?

Let’s look at an example:

“Canyouunderstandthis” has no algorithmic compressibility (no regularities), and its sequence is not caused by laws of physical attraction between its units; natural laws are therefore ruled out, and it is random, thus complex. However, it still possesses specificity, since it can be processed by an information processor/semiotic agent as having meaning/function, thus ruling out chance. It is an example of specified complexity, or complex specified information.

Now, I have no problem with evolution generating complex specified information -- the “how” is a different question. But the main point of my original argument is that evolution requires a semiotic system to process measurable information and cause specificity -- the converting of measurable and complex information (DNA) into functional integrated molecules (regulatory proteins, molecular machines, etc.).

My question is, “what causes semiotic systems?” This ties into my hypothesis re: the cause of information processing systems, which I will again reiterate near the end of this post.

Now, I’ll address your statement that “all patterns can be specified by a semiotic agent.”

If according to yourself, “all patterns can be specified by a semiotic agent,” then please describe the independently given pattern within the following random 500 bit “pattern”:

“abfbdhdiellscnieorehfgjkabglskjfvbzmcnvbslgfjhwapohgkjsfvbizpalsienfnjgutnbbxvzgaqtwoeitbbns
pldxjcba”

You will notice that there is no compressibility, thus algorithmic law is ruled out. Also, the sequence is not caused by physical laws of attraction between its units, so natural law is ruled out. As per the EF and the argument from CSI, it is now up to you to describe the independently given pattern in order to rule out chance. IOW, you as a semiotic system must create a pattern (ie: a language) which is independent of the random 500 bit “pattern” and in which the 500 bit “pattern” can be processed into meaning/function.
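For reference, the Shannon information content of a letter string like this can be computed directly: n letters drawn from a 26-letter alphabet carry n × log2(26) bits, assuming independent, equiprobable letters. A minimal Python sketch, using the string above with its line break removed:

import math

# The challenge string quoted above, with the line break removed.
s = ("abfbdhdiellscnieorehfgjkabglskjfvbzmcnvbslgfjhwapohgkjsfvbizpalsienfnjgutnbbxvzgaqtwoeitbbns"
     "pldxjcba")

# Shannon information content, assuming each letter is drawn
# independently and uniformly from a 26-letter alphabet.
bits = len(s) * math.log2(26)
print(len(s), "letters ~", round(bits), "bits")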

Furthermore, the cell, as a semiotic agent/information processor, does not and cannot process just any DNA signal, much less just any signal which is not even in the proper biochemical format. So, if correct, even just this one example refutes your statement.

Angels and Crystal Spheres


Zachriel:
“Consider the historical example of Angels. We evoke a time when humans could observe and record the intricate movements of planets (planets meaning the classical planets; Sun, Moon, Venus, Jupiter, Saturn, Mars, Mercury) and plot them against events on Earth, but who lacked a unifying explanation such as gravity. Will the "Explanatory Filter" render false positives in such cases?”

COME ON ZACHRIEL, WE’VE ALREADY BEEN THROUGH THIS

First, did they have any scientific understanding of angels and the phenomenon they were purporting to explain?

Second, did they have any observation of “inter-relatedness” between angels and planetary orbits?

We have both, when it comes to intelligence and complex specified information.

Third, they did not even follow the explanatory filter at all. If they did, and if they had an understanding of natural laws, they first would have looked for regularities, since natural laws are based on regularities, which is why they can be summed up in mathematical equations and algorithms. If they had looked for these regularities, they would have noticed cycles, and thus proposed, at the least, that some unknown law governed the motion of the planets. Moreover, in order to actually move beyond the first stage of the explanatory filter, they would have needed to positively rule out natural law, as has been done with coded information. Now, I do realize that our ancestors did not know about gravity and its effects; however, did they positively rule out laws as a cause of planetary motion, as has been done with the DNA sequence (it is aperiodic/irregular and attached to the backbone, thus not exerting attractive influence on its sequence)? Life is controlled not only by physics and chemistry, but also by coded information, which itself is a “non-physical-chemical principle,” as stated by Michael Polanyi in “Life Transcending Physics and Chemistry,” Chemical & Engineering News (21 August 1967).

Also, can you measure the Shannon information content of the sequence of planetary orbit positions? If not, then we can’t measure the informational complexity of the system. And if we can’t measure the informational complexity of the system, then we are dealing with neither coded information, nor an information processing system, nor scientifically measurable complex specificity, and thus your obfuscation does not apply.

Zachriel:
“The most complex devices of the day were astrolabes. Take a look at one. Intricate, complex. Certainly designed. Yet, it is only a simulacrum of the planetary orbits. The very process by which you would deduce that the astrolabe is designed, leads to the claim that the movements of planets are designed. And this is exactly the conclusion our ancient semiotes reached. Terrestrial astrolabes were made of brass—the celestial of quintessence.”

These intelligent representations of nature are not coded information -- potentially functional, yet still not scientifically measurable as information. I have already dealt with representations which are not coded information (in my last blog post), and with why they are not scientifically measurable as complex and specified: the informational content (both Shannon information and algorithmic information) cannot be measured, and there is the potential for false positives in the form of animals in the clouds, faces in inkblots, and animals naturally sculpted in sandstone. By your logic, which is not derived from anything I have explained, I could arrange paint into a very complex pattern of a waterfall on a canvas (obviously designed) and arrive at the conclusion that the waterfall itself is intelligently designed.

Furthermore, it seems that these astrolabes are based on measurements of regularities, and as such show that whatever they are representing is caused by law, thus failing the first stage of the Explanatory Filter. Humans can design many things, but only complex specified information can be scientifically verified as being designed -- through the use of the EF, through the filter for determining complex specified information, and through the observation that all complex specified information for which we do know the cause has been intelligently designed.

Planetary orbits are governed by a law because they follow a regularity and are thus ruled out at the first phase of the EF. Furthermore, they contain no measurable complex information which is processed into function/meaning by a semiotic system, so we can’t measure any specification. Simple as that. Planetary orbits strike out twice. And in this game, after one strike you’re out.
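For clarity, the decision flow of the Explanatory Filter, as I have been using it in this discussion, can be sketched in a few lines of Python. This is only a sketch of the ordering -- law first, then chance, then design -- with the actual empirical tests represented as inputs rather than implemented.

def explanatory_filter(shows_regularity, is_complex, is_specified):
    # First stage: regularity points to natural law.
    if shows_regularity:
        return "law"
    # Second stage: without complex specification, chance suffices.
    if not (is_complex and is_specified):
        return "chance"
    # Final stage: complex specified information points to design.
    return "design"

# Planetary orbits follow a regularity, so they stop at the first stage.
print(explanatory_filter(shows_regularity=True, is_complex=False, is_specified=False))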

Hypothesis

Zachriel:
“You seem to be confused as to the nature of a hypothesis, conflating it with your conclusion.”

What do you mean by conclusion? Do you mean as per the Merriam-Webster dictionary:

-1 a : a reasoned judgment : INFERENCE

If so, then you are correct that I am “conflating” an hypothesis with a conclusion. However, you are incorrect that I am confused. You obviously don’t realize that an hypothesis is indeed a proposed, testable, and potentially falsifiable reasoned judgement or inference (conclusion).

According to wikipedia:
“A hypothesis consists either of a suggested explanation for a phenomenon or of a reasoned proposal suggesting a possible correlation between multiple phenomena.”

Or maybe you are just stating that my conclusion that complex, specified information is an indication of intelligence is separate from my hypothesis that a program will only produce an information processing system if programmed to necessarily do so by an intelligence.

If that is the case, then let me put it to you in a way that you may find digestible:

1. Hypothesis: “Functional DNA is complex specified information and as such can not be created by natural law.”
-falsify this by showing one example in which functional DNA is caused by any type of natural law. Remember that laws are descriptions of regularities and as such can be formulated into mathematical equations. Furthermore, regularities in nature are caused by physical laws of attraction (gravitational, magnetic, voltaic, etc.). So, find a physical law of attraction, and its representative mathematical equation or algorithm, which causes the functional sequences within DNA, and the above hypothesis will be falsified.

2. Hypothesis: “Information processing/semiotic systems do not generate themselves randomly within a program which creates complex patterns and is founded on a random set of laws.”
-falsify this by creating a computer simulation which develops programs based on random sets of laws and seeing whether information processing systems randomly self-organize (a bare skeleton of such a simulation is sketched just after this list). Of course, if that is possible, then the original information processing system which the simulation is modelling can also be seen as a random production itself, thus eliminating intelligence as a necessary cause of information processing systems.

3. Hypothesis: “Complex specified information cannot be generated independently of its compatible information processing system.” Complex specified information is defined as such by being converted by its compatible processor, and an information processor is defined as a system which converts information into function.
-falsify this by assembling two teams of engineers. Without any contact with the other team, one team must create a new language of complex specified information and a message written in that language, and the second team must build a new information processing system and program. Then, attempt to run the message through the program. If the program outputs a meaningful message, then this hypothesis is falsified.

I have written more that is relevant to the above hypothesis here starting in para 5 beginning with: “Basically, if we look at an information processing system ...” through the next three paras.

4. Data (observation): 100% of the information processing systems for which we know the cause originate in an intelligence. Intelligence can and has produced information processing systems.

5. Conclusion: “Since life contains an information processing system acting on complex specified information, life is the result of intelligence.”
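Here is the bare skeleton, in Python, of the kind of simulation that Hypothesis 2 calls for: a one-dimensional cellular automaton whose update rule is drawn at random stands in for “a program founded on a random set of laws.” The hard part -- an operational test for whether an information processing system has self-organized -- is exactly what would have to be specified before the experiment could be run, so it appears here only as an unimplemented placeholder, and the cellular-automaton framing is simply my own illustrative choice.

import random

def random_rule():
    # A random binary update rule over 3-cell neighborhoods:
    # a stand-in for "a random set of laws" governing the program.
    neighborhoods = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    return {nbhd: random.randint(0, 1) for nbhd in neighborhoods}

def step(cells, rule):
    # Apply the random rule to every cell (wrapping at the edges).
    n = len(cells)
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])] for i in range(n)]

def contains_information_processor(history):
    # Placeholder: an operational test for a self-organized information
    # processing system is the open question, so it is left unimplemented.
    raise NotImplementedError

rule = random_rule()
cells = [random.randint(0, 1) for _ in range(200)]
history = [cells]
for _ in range(1000):
    cells = step(cells, rule)
    history.append(cells)
# contains_information_processor(history)  # the test the hypothesis requires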

Please read through “The Science of Intelligent Design.” Then, if you would like to respond to the argument of ID as Science, I would appreciate it if you could do so in that thread just so I can try and keep on topic. Thanks.

Zachriel:
“You haven't provided any method of testing your idea.”

You didn’t see the suggested computer program experiment? If information processors were the result of random laws, then a program which created complex patterns and was founded upon a random set of laws would cause an information processing system to randomly self-organize. The above hypothesis and thus ID Theory would then be falsified.

Zachriel:
“Consider that if it was a strong scientific theory (as opposed to a vague speculation), it would immediately lead to very specific and distinguishing empirical predictions.”

Vague speculation? Nope, there is no vague speculation about the inter-relatedness between coded information and intelligence. 100% of all information processing systems for which we know the cause are caused by intelligence. That is the available data (observation).

Second, there is the Explanatory Filter, with three very non-vague stages, which hasn’t turned up a false positive yet as far as I am aware.

Third, there is the non-speculative argument for complex specification, which must be scientifically measurable and complex (random) information that can be converted into function/meaning. This concept can even be used in a SETI research program to discover ET without ever meeting him, without knowledge of how the signal was created, and without knowledge of the form of ET’s intelligence. All that is known is that the signal comes from an intelligence (at least as intelligent as humans) which does not reside on earth.

You bet it will lead to a very specific and distinguishing empirical prediction, as in the computer simulation. At least by intelligently programming a computer program to generate an information processor, there will be proof of concept for ID Theory. I say prediction (singular) because this hypothesis is only one aspect of ID Theory. Then there is front loading, a law of conservation of information, programming for evolution ... but I am only focussing on one for now.

Thursday, August 23, 2007

ID THEORY vs. PLANETARY ORBITS, CRYSTAL SPHERES, ANGELS, DEMONS, AND IGNORANCE?

This is the beginning and continuation of a discussion I was having with Zachriel at Telic Thoughts in this huge thread on July 7, 2007 at 11:21 am.

Hello Zachriel (if you do indeed decide to visit me here), welcome to my humble blog.

My apologies that this is so long; however, I felt that I needed considerable room to describe coded information and what is meant by specificity, as well as to provide one testable and potentially falsifiable ID hypothesis.

My comments from the blog at Telic Thoughts will be in green and Zachriel, your comments will appear in blue and will be
centred.

My continuing response that did not originally appear at Telic Thoughts will follow in plain type.

With that, Zachriel sets the stage:

Zachriel: Well, we already know that people have repeatedly made erroneous conclusions about
design by filling Gaps in human knowledge with some sort of designer. Angels pushing planets on
crystal spheres. Demons causing disease. Fairy Rings. An angry Sky God hurling lightning bolts.

CJYman: These are erroneous conclusions because they are not created through any scientific inference or experiment. Tell me, what experience do we have with angels, crystal spheres, demons, or angry sky gods, so that we can use them as causal explanations of phenomena?

Zachriel: Precisely! They are not valid scientific inferences. It isn't enough to point at
impressive, intricate, specific and detailed planetary movements and say that they are due to agency. If a claim is to have scientific validity, it has to lead to specific and distinguishing empirical consequences.


CJYman: Exactly, and this is where the point re: codes and intelligence as a valid scientific
inference which I usually bring to the table and which stunney continues to bring forward is
extremely relevant.

Zachriel: Codes? Do you mean cryptanalysis? If so, then cryptanalysis is deep within the orthodox paradigm. We know that people make codes to safeguard and communicate secrets. We know that people try to break codes to steal secrets. As such, cryptography can be analyzed as a branch of game theory. "The enemy knows the system." Codes are usually understood by making and testing various assumptions about the encoder, the probable messages, and the mechanism of encoding.

“Codes” – I mean coded information, which cryptanalysis deals with in part. What system creates coded information and the information processor to convert present codes into future integrated systems? Here’s a hint – you possess it.

In fact, cryptanalysis deals with discovering a code's cypher, or specificity. If there is a cypher, causing the code to be specified, then it is possible to process (convert) the code with some type of information processor (ie: "enigma machine"). This is important and I will be discussing specificity further on in this post, and how specificity is one of the main defining factors of coded information.

CJYman: Furthermore, appealing to angels and demons is major orders of magnitude off of
appealing to intelligence, since science is beginning to understand intelligence, can model
intelligence, understands that intelligence and information are intricately related — in fact necessary for each other as far as we can scientifically determine.

Zachriel: Again, precisely the point. We refer to a library of knowledge to help us form tentative
assumptions, which we then use to devise specific and distinguishing empirical tests.

Hold on – that certainly did not seem to be “precisely the point” when you seemed to be saying that ID theory is no more scientific than “angels, demons, and crystal spheres theory.” I have just shown that line of thinking to be COMPLETELY incorrect and a horrible obfuscation, and you seem to agree with me now.

So why again, does it seem that you were trying to conflate ID theory with a belief in angels, demons and crystal spheres? What was your point in bringing up angels, demons, and crystal spheres?

It is obviously perfectly scientific to create a tentative assumption that intelligence is necessary to program the appearance of information processing systems. That is one aspect of ID theory.


CJYman: Conversely, there is no scientific understanding of angels and demons, no scientific
models of 'artificial' angels/angelicness or demons/demonicness, and so far no scientific
'inter-relatedness' between angels, crystal spheres, and planetary motion.

Because of that, I am extremely fed up with your irrelevant and nonsensical smoke and mirror obfuscations of ID with angels and crystal spheres.

Zachriel: Again, precisely the point. They were invoked to explain Gaps in human understanding
of these phenomena. Whatever poetic value they have had, invoking agency to explain planetary
motions is scientifically vacuous.

Again, what does this have to do with ID theory? You say “precisely the point,” but I am missing your original point and why you brought up angels and crystal spheres in the first place.

CJYman: Specificity, not merely complexity, is a sign of intelligence.

Zachriel: Planetary motions are not only complex, but specified. They certainly don't move about willy-nilly. For instance, they are confined to a narrow region of the sky called the Ecliptic. Until modern times, this was unexplained.

It is obvious we are not on the same playing field when discussing specificity. You are discussing complex natural laws of attraction (voltaic, magnetic, and gravitational) that cause something to move about in a specific orbit, rather than willy-nilly. However, there is no specification or representation/conversion as the term is used within ID theory.

Let’s look to the dictionary for a start:

-specified: to include as an item in a specification.

-specification: a detailed precise presentation of something or of a plan or proposal for something.

As you can see, when something is specified, it includes, within its detailed plan, a separate item.

– ie: SPECIFIC markings on Mount Rushmore SPECIFY four presidents’ faces; the SPECIFIC arrangement of the English letters “c” - “a” - “t” SPECIFIES the idea of a four legged mammal which carries her cubs/kittens around by the scruff of the neck and meows/roars; the SPECIFIC arrangement of nucleotides within genes can SPECIFY a molecular machine.

Another small detail is that in order for anything to be specified, it must be converted by an information processor into its specified object or idea.

This is how specification is used in ID theory and in coded information. In fact, this is how one determines if something indeed IS coded information – is there any specificity (representation/conversion)? Do the units in question specify a separate function or meaning when processed? Or, as the dictionary puts it, is a separate item included within the plan – a plan being a specific arrangement (sequence)?

When dealing with something that is specified, it causes a SPECIFIC item as based on the SPECIFIC sequence/organization of its plan.

I will admit that there is a slight problem with scientifically deciding whether an object is indeed specified whenever intelligence is the system converting (processing) the object in question into its specification. This is because art (intelligent representation) is subjective and intelligence seems to be able to make subjective interpretations and decide that a pattern merely LOOKS LIKE something else. There is actually no scientific way to decide if a shape which is not composed of discrete units and not chosen from a finite “alphabet” actually represents the object in question.

However, that is not how coded informational systems work, so there needs to be a distinguishing line drawn here. Coded information is measurable because it consists of discrete units chosen from a finite alphabet, and it is objectively processed by rules set by its processing system. By objectively, I mean that a specific input always equals one specific output: there is only one way for it to be processed by its compatible information processor, and it will objectively output the function, according to the rules of that processor, which corresponds to the specific input, with no subjective interpretation except for any “deeper meaning,” nuances, implications, etc. Any of those “deeper meanings” would be the artistic layers of some intelligently created codes (such as language), and as art those layers cannot be scientifically determined to exist as an intelligent construct within the code, even though the code itself can be scientifically verified to exist, because it can be processed objectively, according to the rules of its governing system, as having meaning/function of some type.

Furthermore, there is no ambiguity within a non-conscious information processing system as it will be completely objective as based on its programming. Ask our genetic information processing system what “ACTG ...” is and it will pump out the correct protein or regulation or system, etc. which that specific sequence specifies/represents. But if intelligent systems such as you and I look at the same ink blot or cloud or eroded sand sculpture, we will both be able to decide on separate “meanings” for the systems in question, including no meaning at all.

However, when we are dealing with coded information created by intelligence, instead of merely having an infinite number of possible abstract artistic shapes, we also have an agreed upon language with defined units, rules, and words which, although created subjectively, once created and implemented provide an objective set of rules for deciding specificity. Furthermore, unlike possible artistic shapes that might look like something an intelligence could have made, coded information can also be measured according to its number of units measured against its available “alphabet.”

Here are two 100 bit strings of Shannon information; however, only one of them is coded information – or, as Dr. Dembski puts it, “specified information:”

Canyouunderstandthis – specifies a question in the English language
oeskysqpqdykvuuzhfeh – specifies nothing

How can you know that one of them is coded information, and thus the result of intelligence, rather than merely the result of a random string generator? You can only do so by discovering specificity, since specificity is only possible in light of an information processing system (in the above case: an intelligent English speaking human), and intelligence creates information processors while, according to ID theory, random laws do not. IF THE STRING IS NOT DEFINED BY NATURAL LAWS OF ATTRACTION AND IF IT IS PROCESSED INTO A SEPARATE FUNCTION, IT IS SPECIFIED AND THUS THE RESULT OF INTELLIGENCE, SINCE INFORMATION/PROCESSING SYSTEMS ARE THE RESULT OF INTELLIGENCE.
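To make the comparison concrete: each string is 20 letters drawn from a 26-letter alphabet, so each carries about 20 × log2(26) ≈ 94 bits of Shannon information (assuming equiprobable letters); the difference lies entirely in whether a processor can convert one of them into meaning. Below is a minimal Python sketch; the tiny word list and the greedy segmentation are only my own illustrative stand-in for an English-speaking information processor, not part of any formal definition.

import math

def shannon_bits(s, alphabet_size=26):
    # Shannon information, assuming independent, equiprobable letters.
    return len(s) * math.log2(alphabet_size)

# A tiny, illustrative lexicon; a real "processor" would be an
# English-speaking human or a complete dictionary.
LEXICON = {"can", "you", "understand", "this"}

def processes_into_meaning(s, lexicon):
    # Greedy longest-match segmentation: a crude stand-in for an
    # information processor converting the string into meaning.
    s = s.lower()
    i = 0
    while i < len(s):
        for j in range(len(s), i, -1):
            if s[i:j] in lexicon:
                i = j
                break
        else:
            return False
    return True

for s in ["Canyouunderstandthis", "oeskysqpqdykvuuzhfeh"]:
    print(s, "-", round(shannon_bits(s)), "bits, specifies meaning:",
          processes_into_meaning(s, LEXICON))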

Of course, Dr. Dembski goes even further for discovering necessarily programmed (intelligently designed) results within an evolving system, and explains that specified information over “x” number of bits does not have the time within our universe to have been discovered through random search (random variation) no matter how many times intermediate lengths of that string were frozen by natural selection. (At least that is how I understand Dr. Dembski’s argument) But that is not what I am discussing right now. I am merely discussing the existence of coded information/processing systems.

So, codes and information processing systems are defined not by how the systems look but by how they are specified according to the rules of its compatible processor which converts the specific arrangement of units into specific functional structure, along with the fact that coded information is not defined by physical laws of attraction between its units. Again, as per the given dictionary definition of “specified,” is a separate item included within the coded plan; and do they specify (represent) a separate function when processed -- are they processed and converted into further functional systems? Within the genome, proteins are one of the separate items which are included in the plan (blueprint) of the genome.

DEFINITION: coded information = an aperiodic sequence of units, in which the arrangement is not defined by physical laws of attraction, chosen from a finite “alphabet,” which is processed (converted) into a separate system (of ideas/meaning or objects/functions). Furthermore, because it contains discrete units chosen from a finite “alphabet,” coded information can be measured in binary digits (bits) as per Shannon information theory. Basically, discover a cypher (specificity) which is based on a finite alphabet, and you have discovered coded information. As far as I understand, this is the main idea behind code cracking -- cryptanalysis.

Now, let’s apply this to planetary orbits. Sure they move in a definite orbit, but do these orbits or the arrangement (sequence) of the planets themselves contain within them a separate function when processed? Well, I am not aware of any system which processes the sequence of the planets and their orbits to produce a separate item according to the specificity of the planetary orbits. The planetary orbits do not intrinsically (as the nature of the system) specify or represent anything.

Sure, we humans can extrinsically impose information on the regularities of the system (as we can with any system which acts regularly: ie atomic vibrations) and use it as a time keeping device, but that is the result of subjective intelligence and is external to the system in question and is merely a MEASUREMENT OF REGULARITY WHICH IS DEFINED BY PHYSICAL LAWS OF NATURE/ATTRACTION. The planetary system does not contain any coded (sequenced) information within it. If it did, then the planets themselves or their positions relative to each other would contain further function or meaning. That is what is known as astrology. But, I’m quite confident that you don’t see astrology as scientific.

Now, let’s look at life. How do we know we aren’t imposing information upon the system, and as you state a bit later “Is it really a "code" or are we confusing the analogy with the thing itself?”
It is actually quite obvious that life is an actual coded information processing system because it does not need any imposition externally in order to be specified. It processes its own information completely objectively -- specific input = specific output with no multiple subjective interpretations.

First, DNA isn’t a molecule that organizes itself according to physical laws of attraction between its units. Second, DNA specifies amino acids, proteins, and other systems when processed by its information processor regardless of whether we humans measure it or not. And life itself is not just a measurement of regularities – it is an actual processing (converting) of an irregular sequence of units, not defined by physical laws of attraction, into further function.

In fact, as I imply [elsewhere], if DNA existed on its own without any system to process it, it would not be coded information (or at the very least we would not be able to scientifically determine if it was coded information) -- it would merely be a random string of chemicals, probably forged accidentally and randomly, since the organization of its units (nucleotides) does not follow any natural laws of attraction between the units. Likewise, if the English language did not exist within this universe and somehow a few lines of random markings organized themselves into the pattern “In the beginning God created the heavens and the earth...” on the side of a white birch tree, there would be no way to scientifically know if the random patterns were indeed coded information -- in fact these random markings would have no meaning, since they would not specify anything without the existence of human intelligence and the English language.

Conversely, with SETI, the reason why a sequence of 2, 3, 5, 7, 11, 13 ... would be seen as coded information resulting from intelligence is because it is not MERELY a random string of units. It is an aperiodic sequence (not merely a measurement of regularities caused by physical laws of attraction) processed by intelligence as “the mathematical idea of prime numbers” AND, since its arrangement (sequence) is indeed not defined by physical laws of attraction, unlike planetary orbits, we can be confident in saying we have scientifically verified that “we are not alone.” The pattern 2, 3, 5, 7, 11, 13 ... specifies or represents “a section of prime numbers” and is not a pattern that is created by natural laws of physical attraction, as planetary orbits are.

Reminder: the subjective interpretation by intelligence of a periodic system -- one which is defined by physical laws of attraction and which does not rely on a processing system -- as being informative based on its regularities (using planetary orbits as a timekeeper) must not be confused with the objective processing of an aperiodic system of units not defined by physical laws of attraction which has specificity/representation/meaning (life’s processing of DNA), in which processing of the system causes separate items which function together. The former is not coded information; the latter is.

The main point as I see it is that when dealing with specificity we are also dealing with an information processor of some type, since specificity is only recognizable in light of some type of information processor. There must be something which converts and causes specification. So, since intelligence and information are intricately related – both necessary for each other as far as we can scientifically determine – then, until a random set of laws causes an information processing system to randomly generate itself, ID is the best scientific explanation for specificity (coded information).


Zachriel: Previously, the planets were thought to be designed to control the destiny of humankind, and their movements gave clues to that destiny. But as you point out, "so far no scientific 'inter-relatedness' between angels, crystal spheres, and planetary motion."

Exactly, and there IS scientific ‘inter-relatedness’ between coded information and intelligence.


CJYman: Every example of a code (specificity) that we know the origins has its origination in an
intelligence …… and according to yourself: ...

Zachriel: In real-life science, we always compare purported artifacts to known examples, then
attempt to identify characteristics of the artisan or art. That's how it's done.

Yes, and that’s why we compare the information processing system of life with all known examples of coded information which has been produced by intelligence.

Zachriel: Look at your statement carefully. The strength of your argument depends on the extent of human ignorance. Nevertheless, it still may be suitable for generating a hypothesis.

Ignorance of what? How information processing systems randomly and accidentally generate themselves without any underlying intelligence/plan/information? That’s not even a scientific hypothesis because when dealing with accidental randomness there is no underlying NATURAL LAW and as such there is no way to falsify an hypothesis based on random accidents.

I refer you to a quote from a Professor Hasofer:

"The problem [of falsifiability of a probabilistic statement] has been dealt with in a recent book by G. Matheron, entitled Estimating and Choosing: An Essay on Probability in Practice (Springer-Verlag, 1989). He proposes that a probabilistic model be considered falsifiable if some of its consequences have zero (or in practice very low) probability. If one of these consequences is observed, the model is then rejected.
‘The fatal weakness of the monkey argument, which calculates probabilities of events “somewhere, sometime”, is that all events, no matter how unlikely they are, have probability one as long as they are logically possible, so that the suggested model can never be falsified. Accepting the validity of Huxley’s reasoning puts the whole probability theory outside the realm of verifiable science. In particular, it vitiates the whole of quantum theory and statistical mechanics, including thermodynamics, and therefore destroys the foundations of all modern science. For example, as Bertrand Russell once pointed out, if we put a kettle on a fire and the water in the kettle froze, we should argue, following Huxley, that a very unlikely event of statistical mechanics occurred, as it should “somewhere, sometime”, rather than trying to find out what went wrong with the experiment!’”

So, what really went “wrong” with our universe for it to have created a system of replicating information processing systems which continually (at least for about 3.5 billion years) generate novel information? The reason I ask what went “wrong” is because the very foundation of life is based on a system (coded information) which is not defined by physical laws of attraction between its units. So, if we can not call upon natural laws of physical attraction, then is random accidental chance to blame or, as per the above quote, should we be looking elsewhere in order to generate an actual scientific hypothesis?

Here’s a hint: the very first information processing system in the universe (presumably life) was either created randomly and accidentally or was influenced or programmed to exist by intelligence, since the informational units which are the foundation of life are not defined by physical laws of attraction. Since there is no room for physical laws of attraction, it is either “sheer dumb luck” or it is somehow caused by agency, and the only type of agency that we are scientifically aware of is necessarily associated with intelligence. Or maybe there is a second scientific option ... any ideas?

Furthermore, how does the scientific knowledge of the ‘inter-relatedness’ of information processing systems and intelligence depend on human ignorance? This ‘inter-relatedness’ depends ENTIRELY on what we DO EXPERIMENTALLY KNOW: intelligence requires information processing and information processing requires previous intelligence. Do you have any scientific data that even remotely suggests otherwise?

IMHO, it is modern evolutionary theory, or at least how it is sold, which depends on the extent of human ignorance. How much do we really know about life, how it operates, and how evolution and abiogenesis occur, as opposed to what we are sold ... er ... told by the scientific priests among us? I think that “The Edge of Evolution” really served to expose the fact that we are still in the dark ages in actual observation and experimental understanding of evolution.
For more, I refer you to Dr. Shapiro.

Zachriel: The question you want to ponder is whether or not the genetic code is intelligently
designed. So, state it as a hypothesis, and then form specific and distinguishing predictions. If it is designed, then what is the causal link to the designer? How did the designer manufacture the coding device? Who is this designer? Is there more than one designer? Are they aliens or gods? What are they like? Is it really a "code" or are we confusing the analogy with the thing itself? What observations do we make?

These are all excellent questions; they are perfectly compatible with and would only be explored within an ID paradigm. A few more good questions would be:

-“How do you design a program to necessarily produce an information processor within it and how does this relate to our universe and the first information processor within it?” and,

-“How do you design an information processing system to evolve toward intelligence and consciousness as it interacts with its environment and how would this relate to life as we know it?” and,

-“Is there a law of conservation of information, where the generation of a certain amount of new information must be dependent on a specific amount of previous information, so that there is never a true informational free lunch?” and,

-“Is it possible to front load a small amount of information to contain and necessarily produce a larger amount of information upon interaction with its environment: in effect, a type of technologically advanced information compression strategy?”

Of course all of these questions can only be answered as the necessary evidence and data are discovered. Some may be scientifically answerable and others may not. For more discussion of the science of IDT refer to “The Science of Intelligent Design.” (from this blog)

Zachriel: Each answer will raise more questions. The scientific method is not an end-point, but a
process of investigation.

Completely agreed!


Now, to wrap this all up, let me try to put it a slightly different way and present my scientific question and hypothesis.

Over the course of time there have been at least four different information processing “events” including that of the “appearance” of the universe (when time/matter began). When I say “events”, I am referring to the appearance of new information and information processing systems.

Here are four of them:

1. The information processor which causes the program we know as our universe (matter and energy). This is a quantum information processor. For more info read the book “Programming the Universe” by Seth Lloyd.

2. Matter and energy in the form of atoms creating the information processing system of the living cell -- abiogenesis. This is a biochemical information processor.

3. Living cells in the form of neurons creating the information processing system of the brain. I would call this a cellular (neurons) information processor.

4. A collection of interacting brains creating systems of language (including further information processing systems – computers and artificial intelligence). These are electronic information processors.

Now, in reference to these information processing systems, I would like to ask a scientific question – one that has the potential of being testable and falsified.

That question is: “will any random set of laws governing the program of an information processing system spontaneously generate even one, much less three information processing systems within the initial program, layering them all on top of each other, or is preceding intelligence (programming) necessary as a cause for this phenomenon?” IOW, what causes information processors? Are they caused by physical laws of attraction, random chance, or intelligence, or ...?

Now, here are some definitions in my own words and brief explanations. BTW: these are only IMO and do not necessarily reflect any ID scientist’s understanding of the matter; however, I do see them as defensible, informative, expandable, and perfectly consistent with ID theory.

Conclusion:

- Re: coded information -- if you can create a cypher based on a finite alphabet for a group of units and de-code those units into their specific functional integrated items, then you are dealing with coded information.

- Natural laws of attraction do not define coded information, since coded information is not caused by physical laws of attraction between its units. Natural laws, which depend on forces of attraction (such as voltaic, magnetic, and gravitational), are not on their own a cause of coded information.

- Random causes and “anything is probable after enough time” are not scientific hypotheses, since they are not falsifiable (refer to the above quote by Prof. Hasofer). Science wants hard data, patterns, and repeatable laws, not “pat” answers.

Approaching my above scientific question with the understanding that science is beginning to understand intelligence, can model intelligence, and understands that intelligence and information are intricately related -- in fact necessary for each other as far as we can scientifically determine -- one can create the scientific hypothesis that intelligence is a necessary cause of information processing systems and that a program producing a random set of laws will not cause an information processing system to randomly self-organize. Thus, we can scientifically infer that information and intelligence were both present at the initial singularity of our universe.

BTW: this is not an argument against abiogenesis. This is a positive argument for intelligent programming for information/processing systems to exist within overarching programs. In order to falsify this, merely produce a program (which can be, for the sake of argument, taken as a given) which causes a random set of laws, and see if information processing systems randomly self-organize. In fact, it is my personal opinion that if ID were the governing paradigm, we would probably be closer to understanding abiogenesis and evolution from an information processing point of view, rather than sticking our collective heads in the sand and only considering those “hypotheses” which allow us to get something (information processing) from nothing. IMO, the search for RANDOM abiogenesis is the search for the ultimate perpetual motion machine (free energy from nothing).

For more, refer to my thread “The Science of Intelligent Design.”