Wednesday, August 29, 2007

Concept of Complex Specified Information

Hello Zachriel,

welcome back!

Unfortunately, I do have much to say and am unversed in brevity. Moreover, because I do believe that there are some important issues which you brought up as a response to my last blog post, I have decided to create my response as another blog post -- this one.

First, I will kick this off by letting you know that, while I do not understand all of the math that is involved with Dembski’s Complex Specified Information, I do believe that Dembski has explained the basic concepts in a manner so that one does not need to understand all of the math involved to understand the basic concepts upon which the math is established.

Zachriel's statements will be bolded and my response will follow in plain type.

Zachriel:
“Many of the terms used in Intelligent Design arguments are equivocations. Even if unintentional, these equivocations lead people to invalid conclusions, then to hold these conclusions against all argument.”


I disagree. You will have to show me which terms are equivocations. Just remember that there is a difference between equivocation and two different ways of saying the same thing; of course, it must be shown that the “two different ways” are indeed “the same thing.” As well, nowhere have I equivocated by defining a word one way and then using it in a context where that definition does not apply, at least as far as I can tell. Furthermore, some words can be used in more than one way without equivocation, as long as you are up front about how you are defining and using them.

Zachriel:
“A case in point is "specificity". You suggest a dictionary definition for "specification: a detailed precise presentation of something or of a plan or proposal for something" adding "in order for anything to be specified, it must be converted by an information processor into its specified object or idea". But these are not the only definitions of "specific: sharing or being those properties of something that allow it to be referred to a particular category", and this can easily lead to confusion or conflation. We have to proceed carefully.”


Sure, which is why I provided the definition that is applicable to the topic at hand and have not engaged in any equivocation.

CJYman: "This is how specification is used in ID theory..."

Zachriel:
“But no. This is not how Dembski uses it in the context of Complex Specified Information. His definition of specificity is quantitative and based on the simplest (meaning shortest) possible description of a pattern by a semiotic agent.”


You are partially correct, but you’re missing something. You are referring to merely compressible specification, not complex specification. I will address this further when I respond to your next statement.

First, let’s look at “an independently given pattern:”

Now, here is the main idea behind specificity, as described by Dr. Dembski:

Dr. Dembski, from here:
“There now exists a rigorous criterion-complexity-specification-for distinguishing intelligently caused objects from unintelligently caused ones. Many special sciences already use this criterion, though in a pre-theoretic form (e.g., forensic science, artificial intelligence, cryptography, archeology, and the Search for Extra-Terrestrial Intelligence) ... The contingency must conform to an independently given pattern, and we must be able independently to formulate that pattern. A random ink blot is unspecifiable; a message written with ink on paper is specifiable.”

The pattern is independently given if it can be converted into function/meaning according to the rules of an information processing system/semiotic agent not contained within the pattern. In the above example, the ink markings specify function according to the rules (language and lexicon) of a human information processor/semiotic agent. Thus, the pattern to match is one that has meaning/function. That was also the point in Dr. Dembski’s example (in “Specification: the Pattern that Signifies Intelligence,” pg. 14-15) of the difference between a prespecification of random coin tosses and a specification of a coin toss that could be converted into the first 100 bits of the Champernowne sequence. The sequence conforms to a combination of mathematical and binary rules, as opposed to merely matching a previous random toss. The Champernowne sequence exemplifies the same idea behind receiving a signal of prime numbers from ET.

And, yes, measurability (quantity) is important in scientifically distinguishing specificity, which is why the units must be measurable in amount of information (Shannon information theory) and randomness (algorithmic information theory) and conform to other criteria in order to be complex specified information.

It is true that, in “Specification: the Pattern that Signifies Intelligence,” Dembski does explain the mathematics behind complexity and specification using Shannon information theory, algorithmic information theory, and probability theory; however, the purpose of my last blog post was merely to flesh out a non-mathematical description of the concept of specification, since many people do not understand what it entails.

Zachriel:

“Leaving aside the voluminous problems with his definition, this is quite a bit different than yours. All patterns can be specified by a semiotic agent, the question is the compactness of that description.

? = –log2[ ?S(T)P(T|H)].”

Incorrect.

First, from what I understand, a semiotic agent is an information processor, since both are systems which can interpret signs/signals, and the basic definition of an information processor is a system which converts signs/signals into function/meaning. But that is only another way of saying what Dr. Dembski states on pg. 16 (“Specification: the Pattern that Signifies Intelligence”): “To formulate such a description, S employs a communication system, that is, a system of signs. S is therefore not merely an agent but a semiotic agent.”

Actually, Dembski’s definition and mine say the same thing, just in different wording.
Here is the definition I used: specificity = “to include within its detailed plan a separate item.” Dembski’s statement, “contingency conforming to an independently given pattern,” is the same concept in different words. First, a plan in the form of coded information is a measurable contingency. Second, an independently given pattern, as formulated by an information processing/semiotic system, has meaning/function. Now, look at Dr. Dembski’s above examples. In his terms, the sequence of units in the message is the contingency, and the meaningful/functional sequence of letters or numbers which conforms to the rules of an information processing system is the independently given pattern. In my terms, the sequence of units in the message is the detailed plan, and that same meaningful/functional sequence is the separate item. A sequence of letters is the contingency/plan which is converted into a separate item/independently given pattern (language) of meaning/function.

An information processor is necessary to convert the contingency/plan into the separate item/independently given pattern which is meaningful/functional. I was merely discussing the same concept with different wording and then focussing on my own argument from information processing, which actually becomes a slightly different aspect of ID Theory since I deal with the cause of information processors/semiotic agents (that which causes specificity).

Re: “compactness” and algorithmic information theory:

Algorithmic compressibility (“compactness of the description”) is one way to rule out chance; however, it does not rule out natural law, since high algorithmic compressibility expresses regular repetition, which is indicative of a causal law. So it is true that natural law can create specifications in the form of repetitive patterns. Repetitive patterns are specified because they represent a simplified/compressed description caused by a specific algorithm being processed by an information processing system/semiotic agent. In being repetitive and caused by the laws of the algorithm, they are ruled out as having been caused by chance. Compressible specifications can be represented by an algorithm as a string shorter than the original pattern.

First example:

-the pattern:“1010101010101010101010101010101010101010"
-the algorithm: print ‘10' twenty times

The algorithm is the independently given pattern (an independent simplified/compressed description) processed by the rules of the language of its program (information processor/semiotic agent) and thus the overall pattern, following the laws of the algorithm, has a low randomness. In this case chance is ruled out, and the pattern is specific.

Second example:

-pattern:“473826180405263487661320436416"
-algorithm: there is no simplified/compressed independently given description of the pattern.

According to algorithmic information theory, the second example is more random than the first example -- all units are random with respect to each other within the overall pattern. Because it cannot be compressed -- it doesn’t follow a rule/law -- and is thus random, it is, from my understanding of Dembski’s argument, complex.

The first example could not reasonably have arisen by chance acting independently of a law, yet there is no reason why the second couldn’t have. Note that I said “acting independent of law.” In nature, there are many sequences which display regularities and are thus caused by natural law -- e.g., snowflakes and the molecular structure of diamonds. In these cases the pattern is caused by laws of physical attraction between their units acting in accord with other chance factors; it is not merely attributable to chance in the way a random arrangement of rocks is caused by chance, without a law to describe the exact arrangement.
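The contrast between the two examples above can be made concrete with a general-purpose compressor as a stand-in for algorithmic compressibility. This is only a rough proxy -- zlib is not Kolmogorov complexity, which is uncomputable, and it adds fixed overhead on short strings -- but the direction of the comparison holds:

```python
import zlib

def compressed_size(s: str) -> int:
    # Bytes after DEFLATE compression -- a crude, computable proxy for
    # algorithmic (Kolmogorov) complexity, which is itself uncomputable.
    return len(zlib.compress(s.encode(), level=9))

repetitive = "10" * 20                         # first example: a short rule exists
irregular = "473826180405263487661320436416"   # second example: no obvious rule

# The rule-governed string shrinks well below its 40-character length;
# the irregular one does not (on strings this short, compressor
# overhead can even make the output longer than the input).
print(len(repetitive), compressed_size(repetitive))
print(len(irregular), compressed_size(irregular))
```

In effect, “print ‘10’ twenty times” is the short independent description of the first string; no comparably short description exists for the second.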

So, compressible specifications rule out chance, but not natural law. How is natural law ruled out?

Well, what if there was no compressibility in a given bit string, ruling out law, but there still was an independently given pattern which rules out chance? IOW, what if we have both complexity and specification -- complex specificity.

Let’s look at an example:

“Canyouunderstandthis” has no algorithmic compressibility (no regularities) and its sequence is not caused by laws of physical attraction between its units, thus natural laws are ruled out and it is random, thus complex. However, it still possesses specificity since it can be processed by an information processor/semiotic agent as having meaning/function, thus ruling out chance. It is an example of specified complexity, or complex specified information.

Now, I have no problem with evolution generating complex specified information -- the “how” is a different question. But the main point of my original argument is that evolution requires a semiotic system to process measurable information and cause specificity -- the converting of measurable and complex information (DNA) into functional integrated molecules (regulatory proteins, molecular machines, etc.).

My question is, “what causes semiotic systems?” This ties into my hypothesis re: the cause of information processing systems, which I will again reiterate near the end of this post.

Now, I’ll address your statement that “all patterns can be specified by a semiotic agent.”

If according to yourself, “all patterns can be specified by a semiotic agent,” then please describe the independently given pattern within the following random 500 bit “pattern”:

“abfbdhdiellscnieorehfgjkabglskjfvbzmcnvbslgfjhwapohgkjsfvbizpalsienfnjgutnbbxvzgaqtwoeitbbnspldxjcba”

You will notice that there is no compressibility, thus algorithmic law is ruled out. Also, the sequence is not caused by the physical laws of attraction between its units, so natural law is ruled out. As per the EF and the argument from CSI, it is now up to you to describe the independently given pattern in order to rule out chance. IOW, you as a semiotic system must create a pattern (e.g., a language) which is independent of the random 500 bit “pattern” and in which the 500 bit “pattern” can be processed into meaning/function.

Furthermore, the cell, as a semiotic agent/information processor, does not and cannot process just any DNA signal, much less any signal which is not even in the proper biochemical format. So, if correct, even just this one example refutes your statement.

Angels and Crystal Spheres


Zachriel:
“Consider the historical example of Angels. We evoke a time when humans could observe and record the intricate movements of planets (planets meaning the classical planets: Sun, Moon, Venus, Jupiter, Saturn, Mars, Mercury) and plot them against events on Earth, but who lacked a unifying explanation such as gravity. Will the "Explanatory Filter" render false positives in such cases?”

COME ON ZACHRIEL, WE’VE ALREADY BEEN THROUGH THIS

First, did they have any scientific understanding of angels and the phenomenon they were purporting to explain?

Second, did they have any observation of “inter-relatedness” between angels and planetary orbits?

We have both, when it comes to intelligence and complex specified information.

Third, they did not even follow the explanatory filter at all. If they had, and if they had an understanding of natural laws, they first would have looked for regularities, since natural laws are based on regularities, which is why they can be summed up in mathematical equations and algorithms. If they had looked for these regularities, they would have noticed cycles, and thus proposed, at the least, that some unknown law governed the motion of the planets. Moreover, in order to actually move beyond the first stage of the explanatory filter, they would have needed to positively rule out natural law, as has been done with coded information. Now, I do realize that our ancestors did not know about gravity and its effects; however, did they positively rule out laws as a cause of planetary motion, as has been done with the DNA sequence (it is aperiodic/irregular, and the bases are attached to the backbone, thus not exerting attractive influence on their sequence)? Life is controlled not only by physics and chemistry but also by coded information, which itself is a “non-physical-chemical principle,” as stated by Michael Polanyi in “Life Transcending Physics and Chemistry,” Chemical & Engineering News (21 August 1967).

Also, can you measure the Shannon information content of the sequence of planetary orbit positions? If not, then we can’t measure the informational complexity of the system. If you can’t measure the informational complexity of the system, then we are dealing with neither coded information, nor an information processing system, nor scientifically measurable complex specificity, and thus your obfuscation does not apply.

Zachriel:
“The most complex devices of the day were astrolabes. Take a look at one. Intricate, complex. Certainly designed. Yet, it is only a simulacrum of the planetary orbits. The very process by which you would deduce that the astrolabe is designed leads to the claim that the movements of planets are designed. And this is exactly the conclusion our ancient semiotes reached. Terrestrial astrolabes were made of brass -- the celestial, of quintessence.”

These intelligent representations of nature are not coded information -- potentially functional, yet still not scientifically measurable as information. I have already dealt (in my last blog post) with representations which are not coded information, and with why they are not scientifically measurable as complex and specified: the informational content (both Shannon information and algorithmic information) cannot be measured, and there is the potential of false positives in the form of animals in the clouds, faces in inkblots, and animals naturally sculpted in sandstone. By your logic, which is not derived from anything I have explained, I could arrange paint into a very complex pattern of a waterfall on a canvas (obviously designed) and arrive at the conclusion that the waterfall itself is intelligently designed.

Furthermore, it seems that these astrolabes are based on measurements of regularities, and as such they show that whatever they are representing is caused by law, thus failing the first stage of the Explanatory Filter. Humans can design many things, but only complex specified information can be scientifically verified as designed -- through the use of the EF, through the filter for determining complex specified information, and through the observation that all complex specified information whose cause we do know has been intelligently designed.

Planetary orbits are governed by a law because they follow a regularity and are thus ruled out by the first phase of the EF. Furthermore, they contain no measurable complex information which is processed into its function/meaning by a semiotic system, so we can’t measure the specification. Simple as that. Planetary orbits strike out twice. And in this game, after one strike you’re out.

Hypothesis

Zachriel:
“You seem to be confused as to the nature of a hypothesis, conflating it with your conclusion.”

What do you mean by conclusion? Do you mean, as per the Merriam-Webster dictionary:

-1 a : a reasoned judgment : INFERENCE

If so, then you are correct that I am “conflating” a hypothesis with a conclusion. However, you are incorrect that I am confused. You apparently don’t realize that a hypothesis is indeed a proposed, testable, and potentially falsifiable reasoned judgment or inference (conclusion).

According to Wikipedia:
“A hypothesis consists either of a suggested explanation for a phenomenon or of a reasoned proposal suggesting a possible correlation between multiple phenomena.”

Or maybe you are just stating that my conclusion that complex, specified information is an indication of intelligence is separate from my hypothesis that a program will only produce an information processing system if programmed to necessarily do so by an intelligence.

If that is the case, then let me put it to you in a way that you may find digestible:

1. Hypothesis: “Functional DNA is complex specified information and as such can not be created by natural law.”
-falsify this by showing one example in which functional DNA is caused by any type of natural law. Remember that laws are descriptions of regularities and as such can be formulated into mathematical equations. Furthermore, regularities in nature are caused by physical laws of attraction (gravitational, magnetic, voltaic, etc.). So, find a physical law of attraction and its representative mathematical equation or algorithm which causes the functional sequences within DNA, and the above hypothesis will be falsified.

2. Hypothesis: “Information processing/semiotic systems do not generate themselves randomly within a program which creates complex patterns and is founded on a random set of laws.”
-falsify this by creating a computer simulation which develops programs based on random sets of laws and see if information processing systems randomly self-organize. Of course, if that is possible, then the original information processing system which the simulation is modelling can also be seen as a random production itself, thus eliminating intelligence as a necessary cause of information processing systems.

3. Hypothesis: “Complex specified information can not be generated independently of its compatible information processing system.” Complex Specified Information is defined as such by being converted by its compatible processor and the definition of an information processor is a system which converts information into function.
-falsify this by creating two teams of engineers. Without any contact with the other team, one team must create a new language of complex specified information and a message written in the language and the second team must build a new information processing system and program. Then, attempt to run the message through the program. If the program outputs a meaningful message then this hypothesis is falsified.

I have written more that is relevant to the above hypothesis here starting in para 5 beginning with: “Basically, if we look at an information processing system ...” through the next three paras.

4. Data (observation): 100% of the information processing systems whose cause we know originate in an intelligence. Intelligence can and has produced information processing systems.

5. Conclusion: “Since life contains an information processing system acting on complex specified information, life is the result of intelligence.”

Please read through “The Science of Intelligent Design.” Then, if you would like to respond to the argument of ID as Science, I would appreciate it if you could do so in that thread just so I can try and keep on topic. Thanks.

Zachriel:
“You haven't provided any method of testing your idea.”

You didn’t see the suggested computer program experiment? If information processors were the result of random laws, then a program which created complex patterns and was founded upon a random set of laws would cause an information processing system to randomly self-organize. The above hypothesis and thus ID Theory would then be falsified.

Zachriel:
“Consider that if it was a strong scientific theory (as opposed to a vague speculation), it would immediately lead to very specific and distinguishing empirical predictions.”

Vague speculation? Nope, there is no vague speculation about the inter-relatedness between coded information and intelligence. 100% of all information processing systems whose cause we know are caused by intelligence. That is the available data (observation).

Second, there is the Explanatory Filter, with three very non-vague stages, which hasn’t turned up a false positive yet as far as I am aware.

Third, there is the non-speculative argument for complex specification, which must be scientifically measurable and complex (random) information that can be converted into function/meaning. This concept can even be used in the SETI research program to discover ET without ever meeting him, without knowledge of how the signal was created, and without knowledge of the form of ET intelligence. All that is known is that the signal comes from an intelligence (at least as intelligent as humans) which does not reside on earth.

You bet it will lead to a very specific and distinguishing empirical prediction, as in the computer simulation. At least by intelligently programming a computer program to generate an information processor, there will be proof of concept for ID Theory. I say prediction (singular) because this hypothesis is only one aspect of ID Theory. Then there is front loading, a law of conservation of information, programming for evolution ... but I am only focussing on one for now.

Thursday, August 23, 2007

ID THEORY vs. PLANETARY ORBITS, CRYSTAL SPHERES, ANGELS, DEMONS, AND IGNORANCE?

This is the beginning and continuation of a discussion I was having with Zachriel at Telic Thoughts in this huge thread on July 7, 2007 at 11:21 am.

Hello Zachriel (if you do indeed decide to visit me here), welcome to my humble blog.

My apologies that this is so long, however, I felt that I needed considerable room to describe coded information and what is meant by specificity, as well as providing one testable and potentially falsifiable ID hypothesis.

My comments from the blog at Telic Thoughts will be in green, and Zachriel, your comments will appear in blue and will be centred.

My continuing response that did not originally appear at Telic Thoughts will follow in plain type.

With that, Zachriel sets the stage:

Zachriel: Well, we already know that people have repeatedly made erroneous conclusions about design by filling Gaps in human knowledge with some sort of designer. Angels pushing planets on crystal spheres. Demons causing disease. Fairy Rings. An angry Sky God hurling lightning bolts.

CJYman: These are erroneous conclusions because they are not created through any scientific inference or experiment. Tell me, what experience do we have with angels, crystal spheres, demons, or angry sky gods, so that we can use them as causal explanations of phenomena?

Zachriel: Precisely! They are not valid scientific inferences. It isn't enough to point at impressive, intricate, specific and detailed planetary movements and say that they are due to agency. If a claim is to have scientific validity, it has to lead to specific and distinguishing empirical consequences.


CJYman: Exactly, and this is where the point re: codes and intelligence as a valid scientific inference, which I usually bring to the table and which stunney continues to bring forward, is extremely relevant.

Zachriel: Codes? Do you mean cryptanalysis? If so, then cryptanalysis is deep within the orthodox paradigm. We know that people make codes to safeguard and communicate secrets. We know that people try to break codes to steal secrets. As such, cryptography can be analyzed as a branch of game theory. "The enemy knows the system." Codes are usually understood by making and testing various assumptions about the encoder, the probable messages, and the mechanism of encoding.

“Codes” – I mean coded information, which cryptanalysis deals with in part. What system creates coded information and the information processor to convert present codes into future integrated systems? Here’s a hint – you possess it.

In fact, cryptanalysis deals with discovering a code's cipher, or specificity. If there is a cipher, causing the code to be specified, then it is possible to process (convert) the code with some type of information processor (e.g., an "Enigma machine"). This is important; I will discuss further on in this post how specificity is one of the main defining factors of coded information.

CJYman: Furthermore, appealing to angels and demons is major orders of magnitude off of appealing to intelligence, since science is beginning to understand intelligence, can model intelligence, and understands that intelligence and information are intricately related -- in fact necessary for each other, as far as we can scientifically determine.

Zachriel: Again, precisely the point. We refer to a library of knowledge to help us form tentative assumptions, which we then use to devise specific and distinguishing empirical tests.

Hold on – that certainly did not seem to be “precisely the point” when you seemed to be saying that ID theory is no more scientific than “angels, demons, and crystal spheres theory.” I have just shown that line of thinking to be COMPLETELY incorrect and a horrible obfuscation, and you seem to agree with me now.

So why again, does it seem that you were trying to conflate ID theory with a belief in angels, demons and crystal spheres? What was your point in bringing up angels, demons, and crystal spheres?

It is obviously perfectly scientific to create a tentative assumption that intelligence is necessary to program the appearance of information processing systems. That is one aspect of ID theory.


CJYman: Conversely, there is no scientific understanding of angels and demons, no scientific models of 'artificial' angels/angelicness or demons/demonicness, and so far no scientific 'inter-relatedness' between angels, crystal spheres, and planetary motion.

Because of that, I am extremely fed up with your irrelevant and nonsensical smoke and mirror obfuscations of ID with angels and crystal spheres.

Zachriel: Again, precisely the point. They were invoked to explain Gaps in human understanding of these phenomena. Whatever poetic value they have had, invoking agency to explain planetary motions is scientifically vacuous.

Again, what does this have to do with ID theory? You say “precisely the point,” but I am missing your original point and why you brought up angels and crystal spheres in the first place.

CJYman: Specificity, not merely complexity, is a sign of intelligence.

Zachriel: Planetary motions are not only complex, but specified. They certainly don't move about willy-nilly. For instance, they are confined to a narrow region of the sky called the Ecliptic. Until modern times, this was unexplained.

It is obvious we are not on the same playing field when discussing specificity. You are discussing complex natural laws of attraction (voltaic, magnetic, and gravitational) that cause something to move about in a specific orbit, rather than willy-nilly. However, there is no specification or representation/conversion as the term is used within ID theory.

Let’s look to the dictionary for a start:

-specified: to include as an item in a specification.

-specification: a detailed precise presentation of something or of a plan or proposal for something.

As you can see, when something is specified, it includes, within its detailed plan, a separate item.

– e.g.: SPECIFIC markings on Mount Rushmore SPECIFY four presidents’ faces; the SPECIFIC arrangement of the English letters “c” - “a” - “t” SPECIFIES the idea of a four-legged mammal which carries her cubs/kittens around by the scruff of the neck and meows/roars; the SPECIFIC arrangement of nucleotides within genes can SPECIFY a molecular machine.

Another small detail is that in order for anything to be specified, it must be converted by an information processor into its specified object or idea.

This is how specification is used in ID theory and in coded information. In fact, this is how one determines whether something indeed IS coded information -- is there any specificity (representation/conversion)? Do the units in question specify a separate function or meaning when processed? Or, as the dictionary puts it, is a separate item included within the plan -- a plan being a specific arrangement (sequence)?

When dealing with something that is specified, it causes a SPECIFIC item as based on the SPECIFIC sequence/organization of its plan.

I will admit that there is a slight problem with scientifically deciding whether an object is indeed specified whenever intelligence is the system converting (processing) the object in question into its specification. This is because art (intelligent representation) is subjective and intelligence seems to be able to make subjective interpretations and decide that a pattern merely LOOKS LIKE something else. There is actually no scientific way to decide if a shape which is not composed of discrete units and not chosen from a finite “alphabet” actually represents the object in question.

However, that is not how coded informational systems work, so a distinguishing line needs to be drawn here. Coded information is measurable because it consists of discrete units chosen from a finite alphabet, and it is objectively processed by rules set by its processing system. By objectively, I mean that a specific input always yields one specific output: there is only one way for it to be processed by its compatible information processor, and it will objectively output the function, according to the rules of its compatible processor, which corresponds to the specific input -- with no subjective interpretation except for any “deeper meaning,” nuances, implications, etc. Any of those “deeper meanings” would be the artistic layers of some intelligently created codes (such as language), and as art those layers cannot be scientifically determined to exist as an intelligent construct within the code, even though the code itself can be scientifically verified to exist, because it can be processed objectively according to the rules of its governing system as having meaning/function of some type.

Furthermore, there is no ambiguity within a non-conscious information processing system, as it will be completely objective, based on its programming. Ask our genetic information processing system what “ACTG ...” is, and it will pump out the correct protein or regulation or system, etc., which that specific sequence specifies/represents. But if intelligent systems such as you and I look at the same ink blot or cloud or eroded sand sculpture, we will each be able to decide on separate “meanings” for the systems in question, including no meaning at all.
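That determinism can be sketched with a toy fragment of the genetic code. This is only an illustration: the real table has 64 codons (and translation actually reads mRNA, with U in place of T), but the point is that the mapping is a fixed lookup, the same input always yielding the same output:

```python
# Toy fragment of the standard genetic code (DNA coding-strand codons).
# NOTE: only a handful of the 64 real codons are included, for illustration.
CODON_TABLE = {
    "ATG": "Met",  # methionine; also the start codon
    "TGG": "Trp",  # tryptophan
    "GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",  # alanine
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",  # stop codons
}

def translate(dna: str) -> list[str]:
    # Read non-overlapping triplets and look each one up in the table;
    # unknown codons are marked "?".
    return [CODON_TABLE.get(dna[i:i + 3], "?")
            for i in range(0, len(dna) - 2, 3)]

print(translate("ATGGCTTGGTAA"))  # ['Met', 'Ala', 'Trp', 'STOP']
```

Run it twice, or a thousand times, and the output never varies -- unlike two people reading an ink blot.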

However, when we are dealing with coded information created by intelligence, instead of merely having an infinite number of possible abstract artistic shapes, we also have an agreed-upon language with defined units, rules, and words which, although created subjectively, provide an objective set of rules for deciding specificity once they are created and implemented. Furthermore, unlike possible artistic shapes that might look like something an intelligence could have made, coded information can also be measured by its number of units against its available “alphabet.”

Here are two 100-bit strings of Shannon information; however, only one of them is coded information -- or, as Dr. Dembski puts it, “specified information”:

Canyouunderstandthis – specifies a question in the English language
oeskysqpqdykvuuzhfeh – specifies nothing
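For what it’s worth, the “100 bit” figure can be made concrete. Assuming (my assumption, for round numbers) a 32-symbol alphabet at 5 bits per symbol, a 20-character string carries 20 × 5 = 100 bits of Shannon information regardless of whether it means anything -- which is exactly why the Shannon measure alone cannot tell the two strings apart:

```python
import math

def shannon_bits(message: str, alphabet_size: int) -> float:
    """Bits needed to specify a message drawn uniformly from a finite alphabet."""
    return len(message) * math.log2(alphabet_size)

# Both strings measure identically, meaningful or not:
print(shannon_bits("canyouunderstandthis", 32))  # 100.0
print(shannon_bits("oeskysqpqdykvuuzhfeh", 32))  # 100.0
```

(With a 26-letter alphabet the figure would be about 94 bits; either way, distinguishing the two strings requires something beyond the bit count -- specification.)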

How can you know that one of them is coded information, and thus the result of intelligence, rather than merely the output of a random string generator? You can only do so by discovering specificity, since specificity is only possible in light of an information processing system (in the above case, an intelligent English-speaking human), and intelligence creates information processors while, according to ID theory, random laws do not. IF THE STRING IS NOT DEFINED BY NATURAL LAWS OF ATTRACTION AND IF IT IS PROCESSED INTO A SEPARATE FUNCTION, IT IS SPECIFIED AND THUS THE RESULT OF INTELLIGENCE, SINCE INFORMATION/PROCESSING SYSTEMS ARE THE RESULT OF INTELLIGENCE.

Of course, Dr. Dembski goes even further for discovering necessarily programmed (intelligently designed) results within an evolving system, and explains that specified information over “x” number of bits does not have the time within our universe to have been discovered through random search (random variation) no matter how many times intermediate lengths of that string were frozen by natural selection. (At least that is how I understand Dr. Dembski’s argument) But that is not what I am discussing right now. I am merely discussing the existence of coded information/processing systems.

So, codes and information processing systems are defined not by how the systems look but by how they are specified according to the rules of a compatible processor which converts the specific arrangement of units into a specific functional structure, along with the fact that coded information is not defined by physical laws of attraction between its units. Again, as per the given dictionary definition of “specified”: is a separate item included within the coded plan, and does it specify (represent) a separate function when processed -- is it processed and converted into further functional systems? Within the genome, proteins are one of the separate items included in the plan (blueprint) of the genome.

DEFINITION: coded information = an aperiodic sequence of units, in which the arrangement is not defined by physical laws of attraction, chosen from a finite “alphabet,” which is processed (converted) into a separate system (of ideas/meaning or objects/functions). Furthermore, because it contains discrete units chosen from a finite “alphabet,” coded information can be measured in binary digits (bits) as per Shannon information theory. Basically, discover a cipher (specificity) based on a finite alphabet, and you have discovered coded information. As far as I understand, this is the main idea behind code cracking -- cryptanalysis.

Now, let’s apply this to planetary orbits. Sure they move in a definite orbit, but do these orbits or the arrangement (sequence) of the planets themselves contain within them a separate function when processed? Well, I am not aware of any system which processes the sequence of the planets and their orbits to produce a separate item according to the specificity of the planetary orbits. The planetary orbits do not intrinsically (as the nature of the system) specify or represent anything.

Sure, we humans can extrinsically impose information on the regularities of the system (as we can with any system which acts regularly, e.g. atomic vibrations) and use it as a timekeeping device, but that is the result of subjective intelligence, is external to the system in question, and is merely a MEASUREMENT OF REGULARITY WHICH IS DEFINED BY PHYSICAL LAWS OF NATURE/ATTRACTION. The planetary system does not contain any coded (sequenced) information within it. If it did, then the planets themselves or their positions relative to each other would contain further function or meaning. That is what is known as astrology, but I’m quite confident that you don’t see astrology as scientific.

Now, let’s look at life. How do we know we aren’t imposing information upon the system, and as you state a bit later “Is it really a "code" or are we confusing the analogy with the thing itself?”
It is actually quite obvious that life is an actual coded information processing system because it does not need any external imposition in order to be specified. It processes its own information completely objectively -- a specific input yields a specific output, with no multiple subjective interpretations.

First, DNA isn’t a molecule that organizes itself according to physical laws of attraction between its units. Second, DNA specifies amino acids, proteins, and other systems when processed by its information processor regardless of whether we humans measure it or not. And life itself is not just a measurement of regularities – it is an actual processing (converting) of an irregular sequence of units, not defined by physical laws of attraction, into further function.

In fact, as I imply [elsewhere], if DNA existed on its own without any system to process it, it would not be coded information (or at the very least we would not be able to scientifically determine whether it was coded information) -- it would merely be a random string of chemicals, probably forged accidentally and randomly, since the organization of its units (nucleotides) does not follow any natural laws of attraction between the units. Likewise, if the English language did not exist within this universe and somehow a few lines of random markings organized themselves into the pattern “In the beginning God created the heavens and the earth...” on the side of a white birch tree, there would be no way to scientifically know whether the random patterns were indeed coded information -- in fact these random markings would have no meaning, since they would not specify anything without the existence of human intelligence and the English language.

Conversely, with SETI, the reason why a sequence of 2, 3, 5, 7, 11, 13 ... would be seen as coded information resulting from intelligence is that it is not MERELY a random string of units. It is an aperiodic sequence (not merely a measurement of regularities caused by physical laws of attraction) processed by intelligence as “the mathematical idea of prime numbers,” AND since its arrangement (sequence) is indeed not defined by physical laws of attraction, unlike planetary orbits, we could be confident in saying we have scientifically verified that “we are not alone.” The pattern 2, 3, 5, 7, 11, 13 ... specifies or represents “a section of prime numbers” and is not a pattern created by natural laws of physical attraction, as planetary orbits are.
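The prime-number criterion in the SETI example is itself completely objective and mechanical, which is exactly the point: a short check (my own sketch) can verify the pattern with no subjective interpretation at all.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# The hypothetical received sequence:
signal = [2, 3, 5, 7, 11, 13]
print(all(is_prime(n) for n in signal))  # True
```

Any observer (or machine) applying the same rule reaches the same verdict, unlike the ink-blot case discussed above.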

Reminder: the subjective interpretation by intelligence of a periodic system -- one defined by physical laws of attraction, not reliant on a processing system, and treated as informative only because of its regularities (using planetary orbits as a timekeeper) -- must not be confused with the objective processing of an aperiodic system of units not defined by physical laws of attraction which has specificity/representation/meaning (life’s processing of DNA), in which processing of the system produces separate items that function together. The former is not coded information; the latter is.

The main point as I see it is that when dealing with specificity we are also dealing with an information processor of some type, since specificity is only recognizable in light of some type of information processor. There must be something which converts and causes specification. So, since intelligence and information are intricately related – both necessary for each other as far as we can scientifically determine – then, until a random set of laws causes an information processing system to randomly generate itself, ID is the best scientific explanation for specificity (coded information).


Zachriel: Previously, the planets were thought to be designed to control the destiny of humankind, and their movements gave clues to that destiny. But as you point out, "so far no scientific 'inter-relatedness' between angels, crystal spheres, and planetary motion."

Exactly, and there IS scientific ‘inter-relatedness’ between coded information and intelligence.


CJYman: Every example of a code (specificity) whose origins we know has its origination in an intelligence ... and according to yourself: ...

Zachriel: In real-life science, we always compare purported artifacts to known examples, then
attempt to identify characteristics of the artisan or art. That's how it's done.

Yes, and that’s why we compare the information processing system of life with all known examples of coded information which has been produced by intelligence.

Zachriel: Look at your statement carefully. The strength of your argument depends on the extent of human ignorance. Nevertheless, it still may be suitable for generating a hypothesis.

Ignorance of what? How information processing systems randomly and accidentally generate themselves without any underlying intelligence/plan/information? That’s not even a scientific hypothesis, because when dealing with accidental randomness there is no underlying NATURAL LAW, and as such there is no way to falsify a hypothesis based on random accidents.

I refer you to a quote from a Professor Hasofer:

"The problem [of falsifiability of a probabilistic statement] has been dealt with in a recent book by G. Matheron, entitled Estimating and Choosing: An Essay on Probability in Practice (Springer-Verlag, 1989). He proposes that a probabilistic model be considered falsifiable if some of its consequences have zero (or in practice very low) probability. If one of these consequences is observed, the model is then rejected.
‘The fatal weakness of the monkey argument, which calculates probabilities of events “somewhere, sometime”, is that all events, no matter how unlikely they are, have probability one as long as they are logically possible, so that the suggested model can never be falsified. Accepting the validity of Huxley’s reasoning puts the whole probability theory outside the realm of verifiable science. In particular, it vitiates the whole of quantum theory and statistical mechanics, including thermodynamics, and therefore destroys the foundations of all modern science. For example, as Bertrand Russell once pointed out, if we put a kettle on a fire and the water in the kettle froze, we should argue, following Huxley, that a very unlikely event of statistical mechanics occurred, as it should “somewhere, sometime”, rather than trying to find out what went wrong with the experiment!’”

So, what really went “wrong” with our universe for it to have created a system of replicating information processing systems which continually (at least for about 3.5 billion years) generate novel information? The reason I ask what went “wrong” is because the very foundation of life is based on a system (coded information) which is not defined by physical laws of attraction between its units. So, if we can not call upon natural laws of physical attraction, then is random accidental chance to blame or, as per the above quote, should we be looking elsewhere in order to generate an actual scientific hypothesis?

Here’s a hint: the very first information processing system in the universe (presumably life) was either created randomly and accidentally, or was influenced or programmed to exist by intelligence, since the informational units which are the foundation of life are not defined by physical laws of attraction. Since there is no room for physical laws of attraction, it is either “sheer dumb luck” or it is somehow caused by agency, and the only type of agency that we are scientifically aware of is necessarily associated with intelligence. Or maybe there is a second scientific option ... any ideas?

Furthermore, how does the scientific knowledge of the ‘inter-relatedness’ of information processing systems and intelligence depend on human ignorance? This ‘inter-relatedness’ depends ENTIRELY on what we DO EXPERIMENTALLY KNOW: intelligence requires information processing and information processing requires previous intelligence. Do you have any scientific data that even remotely suggests otherwise?

IMHO, it is modern evolutionary theory, or at least how it is sold, which depends on the extent of human ignorance. How much do we really know about life, how it operates, and how evolution and abiogenesis occur, as opposed to what we are sold ... er ... told by the scientific priests among us? I think that “The Edge of Evolution” really served to expose the fact that we are still in the dark ages of actual observation and experimental understanding of evolution.
For more, I refer you to Dr. Shapiro.

Zachriel: The question you want to ponder is whether or not the genetic code is intelligently
designed. So, state it as a hypothesis, and then form specific and distinguishing predictions. If it is designed, then what is the causal link to the designer? How did the designer manufacture the coding device? Who is this designer? Is there more than one designer? Are they aliens or gods? What are they like? Is it really a "code" or are we confusing the analogy with the thing itself? What observations do we make?

These are all excellent questions; they are perfectly compatible with, and would only be explored within, an ID paradigm. A few more good questions would be:

-“How do you design a program to necessarily produce an information processor within it and how does this relate to our universe and the first information processor within it?” and,

-“How do you design an information processing system to evolve toward intelligence and consciousness as it interacts with its environment and how would this relate to life as we know it?” and,

-“Is there a law of conservation of information, where the generation of a certain amount of new information must be dependent on a specific amount of previous information, so that there is never a true informational free lunch?” and,

-“Is it possible to front load a small amount of information to contain and necessarily produce a larger amount of information upon interaction with its environment: in effect, a type of technologically advanced information compression strategy?”
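On that last question, the flavor of “front loading” can at least be illustrated with a toy example (my own, and only an analogy): a Lindenmayer system, in which a one-character seed plus two rewrite rules unfolds deterministically into an exponentially longer structured string.

```python
def expand(axiom: str, rules: dict, steps: int) -> str:
    """Unfold a short seed into a longer structured string (an L-system)."""
    s = axiom
    for _ in range(steps):
        s = ''.join(rules.get(c, c) for c in s)
    return s

# One character of seed plus two rules "front-load" a string whose
# length grows along the Fibonacci sequence with each step.
result = expand("A", {"A": "AB", "B": "A"}, 10)
print(len(result))  # 144
```

Whether anything analogous operates in biology is, of course, exactly what the question asks; the sketch only shows that a small specification deterministically producing a large structured output is not incoherent.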

Of course all of these questions can only be answered as the necessary evidence and data are discovered. Some may be scientifically answerable and others may not. For more discussion of the science of IDT refer to “The Science of Intelligent Design.” (from this blog)

Zachriel: Each answer will raise more questions. The scientific method is not an end-point, but a
process of investigation.

Completely agreed!


Now, to wrap this all up, let me try to put it a slightly different way and present my scientific question and hypothesis.

Over the course of time there have been at least four different information processing “events” including that of the “appearance” of the universe (when time/matter began). When I say “events”, I am referring to the appearance of new information and information processing systems.

Here are four of them:

1. The information processor which causes the program we know as our universe (matter and energy). This is a quantum information processor. For more info read the book “Programming the Universe” by Seth Lloyd.

2. Matter and energy in the form of atoms creating the information processing system of the living cell -- abiogenesis. This is a biochemical information processor.

3. Living cells in the form of neurons creating the information processing system of the brain. I would call this a cellular (neurons) information processor.

4. A collection of interacting brains creating systems of language (including further information processing systems – computers and artificial intelligence). These are electronic information processors.

Now, in reference to these information processing systems, I would like to ask a scientific question -- one that has the potential of being tested and falsified.

That question is: “will any random set of laws governing the program of an information processing system spontaneously generate even one, much less three information processing systems within the initial program, layering them all on top of each other, or is preceding intelligence (programming) necessary as a cause for this phenomenon?” IOW, what causes information processors? Are they caused by physical laws of attraction, random chance, or intelligence, or ...?

Now, here are some definitions in my own words, with brief explanations. BTW: these are only IMO and do not necessarily reflect any ID scientist’s understanding of the matter; however, I do see them as defensible, informative, expandable, and perfectly consistent with ID theory.

Conclusion:

- Re: coded information -- if you can create a cipher based on a finite alphabet for a group of units and decode those units into their specific functional integrated items, then you are dealing with coded information.

- Natural laws of attraction do not define coded information, since coded information is not caused by physical laws of attraction between its units. Natural laws, which depend on forces of attraction (such as voltaic, magnetic, and gravitational) on their own are not a cause of coded information.

- Random causes and “anything is probable given enough time” are not scientific hypotheses, since they are not falsifiable (refer to the above quote by Prof. Hasofer). Science wants hard data, patterns, and repeatable laws, not “pat” answers.

Approaching my above scientific question with the understanding that science is beginning to understand intelligence, can model intelligence, and recognizes that intelligence and information are intricately related -- in fact, necessary for each other as far as we can scientifically determine -- one can form the scientific hypothesis that intelligence is a necessary cause of information processing systems and that a program producing a random set of laws will not cause an information processing system to randomly self-organize. Thus, we can scientifically infer that information and intelligence were both present at the initial singularity of our universe.

BTW: this is not an argument against abiogenesis. This is a positive argument for intelligent programming for information/processing systems to exist within overarching programs. In order to falsify this, merely produce a program (which can be, for the sake of argument, taken as a given) which causes a random set of laws, and see whether information processing systems randomly self-organize. In fact, it is my personal opinion that if ID were the governing paradigm we would probably be closer to understanding abiogenesis and evolution from an information processing point of view, rather than sticking our collective heads in the sand and only considering those “hypotheses” which allow us to get something (information processing) from nothing. IMO, the search for RANDOM abiogenesis is the search for the ultimate perpetual motion machine (free energy from nothing).

For more, refer to my thread “The Science of Intelligent Design.”