And its Boundaries
D. Pitman M.D.
Despite the apparent confusion it causes for many people, the theory of evolution is based on a very simple idea: that all living things have a common ancestor. The theory is often summed up as "common descent with modification." This modification is the result of random mutation combined with natural selection. Natural selection is able to select between different functions, and it is generally thought of as a mindless process. This mindless process is thought to have given rise to life and to all the various life forms that we see around us today.3,4,5,6
If the amazing diversity of life forms on this planet arose from the evolutionary potential of a common ancestor life form, the assumption can be made that all or nearly all life forms living today have the same potential for future diversity. If true, this mindless force is a very creative force. But, how does this mindless force work?
Most will agree that if living things change over time, they change because their DNA (deoxyribonucleic acid) changed. The information contained in the DNA is called the "genotype." The expression of this information in the physical form of the creature is called the "phenotype."1,2 DNA is very much like the paper on which the blueprint for a house is written. This blueprint is equivalent to the genotype. The actual house, once it is built, is equivalent to the phenotype. The phenotype changes only if the blueprint changes.
The theory of evolution proposes that all the various blueprints of living things are descendants of a single common ancestor blueprint. The diversity of blueprints that exist today is simply the result of variations on a single theme. If true, the power of the mindless evolutionary force of nature is truly astounding. The sheer creativity and magnificent diversity of nature is enough to make even the most cynical stand in silent awe. If humankind could harness this power and speed it up with the aid of our intelligent minds, the implications for advancement seem unlimited.
How then does this mindless evolution work? How does the equivalent of a blueprint for a house change over time to code for phenotypic structures as diverse as an automobile, a battleship, or a skyscraper? Obviously this does happen in the natural world to produce creatures as different as bacteria, oak trees, and elephants. In considering this question perhaps we should begin with Darwin and what he saw.
Charles Darwin (1809-1882) came up with his famous version of the theory of evolution after observing some very interesting differences, such as the variation in the size and shape of finch beaks on the Galapagos Islands. Many other similar changes have also been observed and carefully documented. Certainly these are “changes” and as changes many would call them evolutionary changes. If the theory of evolution is defined as any and every phenotypic change that occurs from parents to offspring, then it might be perfectly fine to say that finches are demonstrating evolution in real time. But, are they demonstrating genotypic evolution? Has the blueprint changed in an informationally unique way? Is it possible to have change in phenotype without a change in genotype? In other words, do the phenotypic changes in the finches that Darwin observed require new information that the ancestor finches never had?
Darwin was unable to answer this question, although the answer was in fact available in his own day. Gregor Mendel (1822-1884), the father of genetics, came up with the idea of "alleles," or unchanging "traits," after studying pea plants.3,7 What he discovered is that certain phenotypic traits, such as pea color, texture, shape, and a number of others, are passed on by unchanging genotypic alleles. Different combinations of these alleles result in different allelic expressions in the phenotype.
For example, let's say that a house needs colored carpet. Colored carpet is a "trait" or characteristic of the house listed in the blueprint. The blueprint of this particular house is interesting, however, in that it is a double blueprint. There are two pieces of paper that code for every aspect of the house. The two blueprints are identical as far as the traits they code for (i.e., colored carpet), but they differ in the trait variations (i.e., red, yellow, green, blue, or white carpet). If one blueprint coded for green carpet and the other coded for white carpet, what color would the carpet be? It would be the "dominant" color.
Mendel found that allelic traits could be either dominant or recessive (we now know that they can also be co-dominant, incompletely dominant, additive, multiplicative, etc.). This is made possible by the fact that many traits have at least two alleles, or two separate codes on two blueprint copies, that code for the same trait. If both alleles are the same, then the expressed phenotype will match both alleles. If the two alleles are different, then the phenotypic expression will match the dominant allele. Each allele is inherited unchanged, one from each parent.

During the process of sexual reproduction, the alleles coding for the same trait trade places with each other randomly (from one blueprint copy to the same place on the other blueprint copy) so that the next generation will be uniquely different from the current generation in its phenotypic expression of the same alleles. This is why siblings from the same parents never look exactly alike unless they are twins arising from the same fertilized egg. Siblings can in fact look very different from each other and even from their parents. One may have a big nose and the other a small nose. Similarly, a finch may have a bigger or smaller beak than its siblings. Such phenotypic changes do indeed occur, but they need not be based on any change in the common "gene pool" of options.2
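Mendel's dominance rules are easy to sketch in code. The snippet below is a toy model; the allele names ("G", "w") and the carpet-color trait come from the blueprint analogy above, not from any real genome. It reproduces the classic 3:1 ratio of a cross between two hybrid parents:

```python
# Toy model of Mendelian dominance using the carpet-color analogy
# (allele names "G"/"w" are invented for illustration).
DOMINANCE = {"G": "green", "w": "white"}  # uppercase = dominant allele

def phenotype(pair):
    """The dominant (uppercase) allele shows; otherwise the recessive one."""
    dominant = [a for a in pair if a.isupper()]
    return DOMINANCE[dominant[0]] if dominant else DOMINANCE[pair[0]]

def punnett(parent1, parent2):
    """All equally likely offspring genotypes (a Punnett square)."""
    return [(a, b) for a in parent1 for b in parent2]

# Cross two heterozygous parents, each carrying one allele of each kind:
counts = {}
for pair in punnett(("G", "w"), ("G", "w")):
    counts[phenotype(pair)] = counts.get(phenotype(pair), 0) + 1
print(counts)  # {'green': 3, 'white': 1} -- the classic 3:1 ratio
```

Note that the alleles themselves never change here; only their combinations do, which is exactly the essay's point about variation without new information.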
Breeding via Human Selection
The variation in allelic expression can be quite dramatic. It is responsible for the majority of changes seen in animal breeding. For example, the differences between a Chihuahua and a Great Dane are primarily the result of trait selection, where desired traits contained in a common gene pool were gathered together over a few generations into one animal. Most of the traits themselves already existed, fully formed, in the common ancestral gene pool of dogs. Thus, neither of these breeds has "evolved" much of anything that their common ancestors did not already have in their common gene pool of options.
The ability for great phenotypic variation is obvious, but there are clear limits to this variation. Using genetic recombination alone, a dog cannot be anything but a dog. A dog can never be changed, via genetic recombination alone, into a cat or a chicken or anything else. Why? Because cats, dogs, and chickens are made from different blueprints that code for different trait types and trait options that are not contained by the gene pools of the others. Also, traits that may be similar may not necessarily be located at the same relative positions on their respective genetic blueprints.
Again, using the house blueprint analogy as an example, one house might have a blueprint that codes for carpet at the bottom right-hand corner of the page. Another house might have a blueprint that codes for an electric garage door opener at this location and has no code for carpet at all. Also, the first house might not have a code for a garage much less an electric garage door opener. Neither one of the blueprints for one house will match with the options or order of options for the other house. What does this mean? It means that the blueprints for the different houses in this case cannot talk to each other, mix and match, or "recombine" their collective information in any functional way. They cannot "interbreed" so to speak to produce viable "offspring". Only blueprints that have the same setup and "trait" types can exchange equivalent information with each other. It is impossible then for blueprints that do not have a position for a garage code to trade equivalent information that results in the formation of a garage. The same is true for different animals such as dogs, chickens and cats. They cannot breed with each other, nor can they be bred to look like each other, using genetic recombination alone.
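The point that blueprints can only "recombine" when they share the same trait positions can be sketched as a toy model (all locus names and trait values below are invented for illustration):

```python
import random

# Recombination as swapping values at matching "positions" on two
# blueprints -- only possible when both genomes define the same loci.
def recombine(genome_a, genome_b, rng=random.Random(0)):
    """For each shared locus, pick the allele from one parent at random."""
    if set(genome_a) != set(genome_b):
        raise ValueError("blueprints don't share the same trait positions")
    return {locus: rng.choice([genome_a[locus], genome_b[locus]])
            for locus in genome_a}

dog_a = {"coat": "short", "ears": "floppy",  "size": "large"}
dog_b = {"coat": "long",  "ears": "pointed", "size": "small"}
cat   = {"coat": "long",  "whiskers": "long", "tail": "ringed"}

print(recombine(dog_a, dog_b))  # works: same loci, shuffled options
try:
    recombine(dog_a, cat)       # fails: the loci don't line up
except ValueError as err:
    print(err)
```

However varied the dog offspring are, every value comes from one of the two parent blueprints; nothing new is created, which mirrors the gene-pool limit described above.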
Genetic recombination can and does result in some very dramatic phenotypic changes for these creatures. However, this process is limited by the edges of a large but finite pool of options. Such a gene pool remains fixed while various creatures within the gene pool give phenotypic expression to various aspects of this genotypic pool of options. The pool provides the means for huge phenotypic variation or “change” but the genotypic pool itself does not change. 8 The changing phenotypic creature is nothing more than a partial reflection of a non-changing or "static" genotypic pool. Thus, it is the genotypic pool and not the phenotypic creature that evolution must act on. But, if Darwin’s finch beaks are not examples of this type of evolution is there anything that is? Is there any creature that has unique traits that its ancestors did not have in their gene pool?
This is where mutations come into play. Genetic mutations are relatively rare random changes that occur in a creature's genotypic blueprint and that were not in the blueprints of that creature's parents. There are many different types of mutations. There are point mutations, where just one letter is changed in the wording of the genetic blueprint. There are translocation mutations, where a section of the blueprint is cut out and pasted in another place on the blueprint. There are inversion mutations, where a section of the blueprint is cut out, turned upside down, and pasted back in the same place. There are duplication mutations, where a section of the blueprint is copied and then pasted in another place. The list goes on and on, but basically the mutated blueprint has genes/alleles or genetic sequences that neither one of its parents had.1,2,4
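The four mutation types just described can be modeled as simple string edits. This is a toy sketch with letters standing in for DNA bases, and inversion shown as a plain reversal rather than a true reverse-complement:

```python
# Toy "blueprint" mutations as string operations -- illustrative only.
def point(seq, i, new):
    """Point mutation: one letter at position i is changed."""
    return seq[:i] + new + seq[i + 1:]

def translocation(seq, i, j, k):
    """Translocation: seq[i:j] is cut out and pasted at position k."""
    piece, rest = seq[i:j], seq[:i] + seq[j:]
    return rest[:k] + piece + rest[k:]

def inversion(seq, i, j):
    """Inversion: seq[i:j] is flipped in place (simple reversal here)."""
    return seq[:i] + seq[i:j][::-1] + seq[j:]

def duplication(seq, i, j, k):
    """Duplication: seq[i:j] is copied and the copy pasted at position k."""
    return seq[:k] + seq[i:j] + seq[k:]

blueprint = "WOODSHINGLES"
print(point(blueprint, 0, "H"))          # HOODSHINGLES
print(inversion(blueprint, 0, 4))        # DOOWSHINGLES
print(duplication(blueprint, 4, 8, 12))  # WOODSHINGLESSHIN
```

In every case the child string contains a sequence that the parent string did not have, which is what distinguishes mutation from the allele-shuffling of recombination.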
Functional vs. Neutral Mutations
As would seem intuitive, most functional mutations are harmful and may even be lethal. Fortunately though, most mutations are not functional. Most mutations are silent or "neutral" and result in no detectable phenotypic change. Very rarely, some mutations are “beneficial.” The ratio of beneficial vs. detrimental mutations is on the order of 1 in 1,000. Common examples of beneficial mutations are those that give bacteria antibiotic resistance or those that cause sickle cell anemia in people who live where malaria is prevalent. But how, exactly, do beneficial mutations achieve their benefits?
In the case of bacterial antibiotic resistance, certain mutations do evolve new functions that the bacterial ancestors never had before (or at least the immediate ancestors did not have in their genetic codes). But, before discussing this in some limited detail, it should be noted that bacteria are not like dogs, cats, and chickens, or anything else that uses sex or genetic recombination as a means of reproduction. Bacteria reproduce themselves by a relatively simple method of cell division. In other words, all of the offspring of a given bacterium will be identical with itself as well as with each other. They are basically clones of each other. Because of this, there are no trait options to choose from. There is only one copy of the blueprint instead of the two copies used in sexual recombination. So, there is only one option for each bacterial "trait." There is no gene "pool" of options so to speak. Certain bacteria can in fact exchange genetic information via plasmids and the like, but generally speaking, there is no such thing as bacterial genetic recombination. So, all bacteria within a given group are identical except if a mutational change occurs. Such mutations, when they do occur, are passed on to all subsequent offspring.
Occasionally, certain mutations may be not only functional but beneficial. For example, penicillin resistance is not always gained by the production of the famous beta-lactamase enzyme "penicillinase." There are several other ways that bacteria become resistant to penicillin. A notable example occurs in Streptococcus pneumoniae. Beta-lactamases have never been identified in S. pneumoniae, and yet these bacteria are capable of penicillin resistance due to modification of their penicillin-binding proteins (PBPs). Since PBPs are the natural target of penicillin, many different point mutations within this target are capable of interfering with the target-antibiotic interaction. It is this interference that results in penicillin resistance.
Other antibiotics require a specialized transport protein to bring them into the bacterium. Again, many different point mutations can interfere with the ability of the transport protein to interact properly with the antibiotic. This interference results in resistance to this particular antibiotic.
Other bacteria already make more complex antibiotic-degrading enzymes, such as the penicillinase enzyme. Such enzymes do not evolve de novo in previously susceptible bacterial populations. The coded sequence or "gene" needed to produce the penicillinase enzyme was already there, or it was gained via lateral transfer from other bacteria that already had this code (often via plasmids). The problem is that this coded sequence is usually regulated so that the penicillinase enzyme is not produced in sufficient quantities to protect the bacterium from high levels of the penicillin antibiotic. Several different mutations are capable of releasing this suppression of penicillinase production so that much greater quantities can be made, which results in enhanced penicillin resistance.
Similarly, Mycobacterium tuberculosis, the cause of tuberculosis, produces an enzyme that (among its other useful functions) changes the non-harmful antibiotic isoniazid into its active and lethal form. The now-active isoniazid proceeds to kill the Mycobacterium. Several different mutations are capable of interfering with the enzyme's interaction with isoniazid. And again, this interference results in Mycobacterial resistance to isoniazid.
To give another example, the 4-quinolone antibiotics attack the enzyme “DNA gyrase” inside various bacteria. Again, several different point mutations are capable of interfering with the gyrase-antibiotic interaction.9
In yet another, non-antibiotic example, the point mutation of the hemoglobin molecule in sickle cell anemia decreases its effective oxygen-carrying capacity. It still carries oxygen, just not as well. The malarial parasite needs a high oxygen concentration to survive, and so it cannot survive in blood with the sickle cell mutation.10
What is interesting about these beneficial mutations is that they all achieve their benefits with just one or, rarely, two point mutations. Also, it is hard to miss the fact that all of these functions were the result of a mutation that interfered with a previously established function or specific interaction. As we all know, it is far easier to break than to create. There are many different ways to break something, but only relatively few ways to make something work. Remember, it was much easier to break Humpty Dumpty than it was to put him back together again.
But how did such apparently complex enzymes as penicillinase evolve? A bacterium is not going to evolve the enzymatic penicillinase function with just one or two point mutations to some target sequence. There are many theories as to how the penicillinase enzyme must have evolved, but when it comes right down to it, no one has ever demonstrated the evolution of the penicillinase enzyme in the lab. As previously noted, bacteria that produce the penicillinase enzyme were always capable of producing this enzyme, or they obtained the code for this enzyme via plasmid from another bacterium that already had it.11 Sometimes a point mutation is required to deregulate the production of penicillinase so that much greater quantities are produced, rendering the bacterium (and its subsequent offspring) instantly resistant to high levels of penicillin. But this change really has nothing to do with explaining how the rather complex penicillinase function evolved.9 So, are there any documented reports of the evolution of a complex enzymatic function comparable to that of the penicillinase function?
Michael Behe, a professor of biochemistry at Lehigh University, says that, “Molecular evolution is not based on scientific authority. There is no publication in the scientific literature in prestigious journals, specialty journals, or books that describe how molecular evolution of any real, complex, biochemical system either did occur or even might have occurred. There are assertions that such evolution occurred, but absolutely none are supported by pertinent experiments or calculations.” 5
Others, such as the well-known evolutionary biologist Kenneth Miller, disagree. In his 1999 book, Finding Darwin's God, one of Miller's challenges to Behe's position cites a 1982 research study by Barry Hall, an evolutionary biologist at the University of Rochester. In this study, Hall deleted a gene (the lacZ gene) in E. coli bacteria that makes the enzyme beta-galactosidase. This enzyme converts the sugar lactose into the sugars glucose and galactose, which the E. coli then use for energy.
Without this lactase enzyme, one would think that these bacteria and their offspring would not be able to utilize lactose. However, what Hall found is that after a short time (just one generation) of exposure to a lactose environment, these bacteria modified a different gene with just one point mutation so that it evolved the ability to produce a completely new lactase enzyme.12 Since the original enzyme was a fairly large tetramer (~1,000 amino acids for each subunit), it seemed as though the evolution of the lactase function might require a fair amount of enzymatic complexity. In other words, it might be rather difficult to come across very many enzymes with lactase ability. So, the demonstration of such rapid evolution of a completely different hexameric lactase enzyme was quite a stunning success for Hall. How did these amazing bacteria evolve a brand new enzyme to do such an apparently complex task?
As it turns out, these E. coli bacteria had something of a spare-tire gene, which Hall called the "evolved beta-galactosidase gene" (ebgA). Just one point mutation was all it took to give this spare-tire gene the ability to produce a protein with the beneficial lactase activity. Hall was of course disappointed to find that only one point mutation was enough to "evolve" this beneficial lactase activity. So, he did a very interesting thing. He deleted both the original lacZ gene and the evolved ebgA gene in some colonies of E. coli. Interestingly enough, these doubly mutated bacteria and their offspring never evolved any other gene or DNA sequence into a functional lactase enzyme despite observation for tens of thousands of generations. Hall was mystified. He described these bacteria as having "limited evolutionary potential."12 The interesting thing is that these same bacteria that were limited in their ability to evolve the lactase function would easily have evolved resistance to an antibiotic in just a few generations. Compared to antibiotic resistance, the evolution of even a relatively simple enzyme is quite a different matter. We are starting to climb the ladder of increasing complexity.
Limited Evolutionary Potential
Now I ask, what exactly was limiting the evolutionary potential of these bacteria? Does the theory of evolution explain such limits? If so, how are they explained? The theory of evolution claims the power to create incredible diversity via mindless processes if given enough time. Well, how much time, on average, would it take for E. coli, without lacZ and ebgA, to evolve the lactase function? Can this time be estimated, even roughly? If so, upon what basis can it be estimated?
According to Hall's own calculations, a function that required just two independent (neutral) mutations would take around "100,000 years" to achieve in E. coli.12 It seems as though Hall did not understand the statistics of random walk very well, or he would not have been surprised when he did in fact isolate such a double mutant in just a few days. An assumed requirement for stepwise fixation is what led Hall to estimate 100,000 years for the crossing of a gap of just 2 neutral mutations. What Hall did not realize is that stepwise fixation of each mutation (its spread to all members of a population) is not required for such a gap to be crossed. With populations the size of Hall's E. coli colonies, such a double mutation would be realized in just one or two generations by random walk alone.
Growing Neutral Gaps
So, is the problem solved? Hardly. With each doubling of the neutral gap between the current genetic real estate and a new potentially beneficial function, the number of options the random walk must search is squared. For example, a neutral gap of 2 amino acids has only 400 different options to fill (20 potentially different amino acids in each position). A population of one billion bacteria would quickly distribute itself among all these 400 options in very short order. However, doubling the gap to 4 neutral mutations would increase the number of options to 160,000. Doubling the neutral gap again to 8 mutations would increase the number of options to 25.6 billion. A gap of 16 would yield 655 million trillion (about 6.5e20). In such a case, each bacterium in the population of one billion would be surrounded by a neutral sea of 655 billion options. The time required to traverse this average neutral gap, even for a population of one billion bacteria, would run into the trillions of years. This is because natural selection cannot select between functionally identical sequences. The only thing that can sort out such functionally neutral sequences is random mutation (also known as "random walk"). Such a random walk takes a whole lot more time than a non-random direct walk would take. This simple little problem is what messes things all up for evolution.
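The figures in the paragraph above (400; 160,000; 25.6 billion; roughly 6.5e20) are straightforward combinatorics, 20 possible amino acids per position raised to the width of the gap, and can be checked directly:

```python
# Check the neutral-gap arithmetic: a gap of n amino-acid positions,
# with 20 possible amino acids per position, gives 20**n options.
for gap in (2, 4, 8, 16):
    print(f"gap of {gap:2d} -> {20 ** gap:,} options")

# For the 16-position gap, the "neutral sea" surrounding each member
# of a population of one billion bacteria:
print(f"{20 ** 16 // 10 ** 9:,} options per bacterium")
```

Note how squaring, not doubling, governs the growth: a gap of 4 gives 400 squared options, a gap of 8 gives 160,000 squared, and so on.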
For instance, consider that there are many bacterial functions that are far more complex than the relatively simple single-protein enzymatic functions of lactase or nylonase. Many single-protein enzymes are actually fairly complex, don't get me wrong. They certainly are far more complex than the function of antibiotic resistance that arises via an interfering mutation. Enzymes are unique sequences that act alone to do a specific function. This makes them quite specialized and relatively rare. However, their functions are still relatively simple when compared to other cellular functions of higher complexity. For example, there are 10^1300 potential protein sequences 1,000 amino acids in length (the length of a lactase enzyme subunit). Of these 10^1300 potential proteins, how many would have the lactase function? Different scientists that I have posed this question to have estimated anywhere between 10^60 and 10^90 different potential lactase proteins in sequence space. To understand how big these numbers are, consider that the total number of atoms in the visible universe is only around 10^80. So, one can see that 10^90 different lactase sequences is an absolutely huge number - around 10 billion lactases for each atom in the universe, in fact. Still, this pile of 10^90 lactase enzymes is absolutely minuscule when compared to 10^1300, the total number of different potential protein sequences 1,000aa in length. For every one lactase enzyme there would be 10^1210 non-lactase amino acid sequences in sequence space. In fact, it seems rather providential that Hall's experiments with E. coli were successful in evolving even one new lactase enzyme - almost as if this spare-tire gene was put there deliberately?
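The exponents quoted above follow from a single logarithm, using the essay's own assumptions (e.g., the generous figure of 10^90 lactase-capable sequences):

```python
import math

# There are 20**1000 possible proteins of 1,000 amino acids.
# Its base-10 logarithm gives the exponent the text rounds to 10^1300.
subunit_length = 1000
total_exponent = subunit_length * math.log10(20)
print(round(total_exponent))  # 1301, i.e. roughly 10^1300 sequences

# If, generously, 10^90 of these have lactase function, the ratio of
# non-lactase to lactase sequences is about 10^(1301 - 90) to one:
print(round(total_exponent - 90))  # 1211, roughly 10^1210 to one
```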
And yet, this ratio gets exponentially worse as the complexity of function increases. For example, the function of bacterial motility involves the interactions of many different proteins all working together at the same time. Some suggest that the simplest bacterial motility system would require 20 or 30 different proteins working together in a specified arrangement. Since an average protein is around 200 to 300 amino acids in length, the total length of all the proteins in such a motility system would be around 4,000 to 9,000 amino acids (comparable to the complexity of a short essay written in English). The question is, how many different arrangements of these amino acids would produce a motility system (or how many arrangements of 4,000 to 9,000 letters would produce a meaningful, much less beneficial, essay in English)? Let's be extraordinarily generous and say that 10^2000 different motility systems could be made with such a stretch of amino acids. Despite this apparently astronomical number of different motility systems, 10^2000 is still a tiny fraction of 10^5200 - the minimum potential protein sequence space at this level of complexity (the 4,000aa level). Each sequence with a motility function would be surrounded by at least 10^3200 sequences without the motility function. In fact, the beneficial sequence density at this level of complexity seems to be so minuscule that evolution is powerless to evolve any function at such a level of complexity. There simply are no examples of any such multi-protein function evolving in real time - and I am betting that there never will be. The problem is that there is always more junk than non-junk at any given level of complexity. The real problem, though, is that this junk pile grows exponentially with each step up the ladder of complexity.
The Junk Pile and The Ladder of Complexity
The ladder of complexity limits the ability of evolution to evolve beyond its lowest rungs, where the simplest functions, such as antibiotic resistance, can be found. The target mutations required to achieve antibiotic resistance are extremely simple to get "right" because there are so many "right" options. However, the evolution of a specific enzymatic function, like the lactase function, is a lot harder to get "right" because far fewer options are "right." Then again, even these functions are relatively easy to get "right" compared to more complex multipart functions, such as bacterial motility, where all the protein parts work together at the same time in a specific 3-D orientation with each other. So, as one moves up the ladder of functional complexity, finding any sequence that does anything beneficial at a given level of complexity becomes exponentially harder and harder until not even trillions upon trillions of years are enough.
A Construction Foreman Who Cannot Read
It is all very much like a construction foreman who never learned how to read a blueprint, but who intuitively knows what works when he sees it in action. His workers are the ones who know how to read blueprints and follow directions exactly. The workers also copy parent blueprints to use as templates for each new house that is to be built. However, although they are extremely careful copyists, the workers make little mistakes every now and then. These little mistakes may not result in any phenotypic change whatsoever, but sometimes they translate into slight or even major variations among the actual houses (the phenotype). The foreman then comes to inspect the completed houses and picks the one that is the "best" given the particular needs of that house for that housing market. The choice of the foreman is based only on current function.5,6 He knows only what works right now. He has no imagination, memory, or vision for the future. If there is a part of a house that he does not recognize as having current function, he will not select to keep that part. The foreman goes around saying, "Keeping doodads around that don't work is expensive!" He will not maintain what he does not recognize in hopes that sometime in the future, with some potential change in the housing market/environment, it may develop into something beneficial. Once the selection is made by the foreman, the workers find the blueprint for this house and use it as a template for the next building project.
But what happens if the housing market changes the next year? What if certain changes would benefit the house in this new environment? For example, what if there were a prolonged drought and wildfires became a threat, making houses with tile shingles more resistant to fire than houses with wooden or asphalt shingles? Would the foreman be able to make these beneficial changes?
Consider the thought that languages, and thus blueprints, are arbitrary in that they use arbitrary symbols to represent ideas. A change or evolution of a symbol does not necessarily correlate to an equivalent change in the attached idea. If a symbol changes, even a little bit, the attached idea may simply disappear, leaving the "new" symbol without any recognized function. The symbol is now meaningless. For example, what if the blueprint for our house in question called for "wooden shingles"? Each of the words "wooden" and "shingles" is an arbitrary group of symbols that represents an idea to an English-speaking person. If the blueprint could be changed to read "tile shingles," the understood change in meaning and the resultant change in house building would be a clear advantage in our fire-hazard environment. If no pre-established alleles for "tile shingles" are available in the blueprint pool of trait options, is there any way to create the "tile" allele from scratch?
The problem for gradual change is that each letter change must make sense. If “wood” is changed to “hood”, the actual word “hood” has meaning. However, is the meaning for “hood” any closer to the meaning for “tile”? There is another problem as well. What does “hood shingles” mean? So, not only does each word of the blueprint have to make sense to the workers, but the combination or location of the words on the blueprint has to make sense as well or else the workers cannot make anything, much less something beneficial. Order is important at all levels of complexity. Amino acid order is important for the proper function of a single protein. Also, the order of multiple proteins is important for the formation of a multi-protein system. Again, if the workers build something that the foreman cannot recognize as beneficial right now, it will be rejected. It is as simple as that.
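The letter-change game can be made concrete with a short breadth-first search. The word list here is a tiny, hand-picked toy (it includes the obscure word "tole" just so that a path exists at all); whether a real dictionary connects any two given words this way is exactly the question at issue:

```python
from collections import deque

# A toy word list standing in for "words the workers understand."
WORDS = {"wood", "good", "gold", "told", "tole", "tile", "hood"}

def neighbors(word):
    """All one-letter substitutions of `word` that land on a valid word."""
    out = []
    for i in range(len(word)):
        for c in "abcdefghijklmnopqrstuvwxyz":
            cand = word[:i] + c + word[i + 1:]
            if cand != word and cand in WORDS:
                out.append(cand)
    return out

def ladder(start, goal):
    """Breadth-first search: shortest chain of valid words, if any exists."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the target "island" is unreachable

print(ladder("wood", "tile"))
# ['wood', 'good', 'gold', 'told', 'tole', 'tile'] with this toy list
```

Remove "tole" from the list and the search returns None: every intermediate step must itself be a valid word, or the path collapses.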
In order to better visualize the problem, put yourself in the place of the foreman. You can only select based on functions that work "right now." With this in mind, consider the phrase, "Methinks it is like a weasel."6 Now, add, subtract, or change one letter at a time from the phrase, in any position or order that you want. There are just two more little rules to this game: each change that you make must make sense in English, and each change must be beneficial in a particular situation/environment. See how far you can go and how much change in meaning you can get. Changing the meaning very far is a lot more difficult than one might think, even if the beneficial nature of the change is not a concern.
Nature runs into this same little problem. Changing genetic sequences too much destroys all phenotypic function before any new function can be reached. Maintaining a functional phenotypic trait along the path toward any uniquely functional trait requires multiple changes (maybe even hundreds or thousands) that neither change the original function nor achieve the new function until all the changes are in place.5,12 This is because many functional genetic elements are isolated from each other like islands on a large ocean of neutral function or even non-function. Consider the fact that most letter sequences of a given length have no meaning to an English-speaking person. The same is true for sequences of DNA. The vast majority of potential genes of a given length mean nothing to a given cell. Any gene that happens to get mutated into one of these unrecognized or neutral genes becomes suddenly lost in an ocean of neutrality where no guidance is available. Without guidance, evolution drowns in this ocean.
Adding a Stereo System to a Car
Some say that evolution need not work like this but that the mindless processes of nature evolve new traits and gene pools by simply adding on previously defined genetic elements to an established system of function... like the addition of a stereo system to a car. The addition of these units enhances the function of the established system, even to the point of giving it new functions that it never had before. In this way, a simple system of function can be enhanced in complexity step by tiny step.
Let's try a thought experiment to illustrate this point. Consider a sentence in a large book of sentences that reads, "I am." Now add any other word onto this sentence. The only rule is that whatever word you choose must make sense in the context of the other sentences and the book as a whole. For example, I could add the pre-formed word "pleased" and make the new sentence read, "I am pleased." This makes sense in English and it adds meaning to the sentence (whether or not it makes sense in the rest of the paragraph is another story). The new word "pleased" could have been the result of a duplication mutation of a gene somewhere else that just happened to get inserted into our sentence. But if the mutation had read, "I am very," this would not make sense in English. The addition of the word "very" destroyed the previous function of our sentence without creating a new function. However, if the phrase "I am pleased" were first to evolve, the phrase "I am very pleased" could evolve next and make sense. With these rules in mind, try to keep adding (or subtracting) words and see how far you can get. Maybe the next mutation could read, "I am very pleased Tom." Then, "I am pleased Tom." Then, "I am Tom." We could also go another route and say, "I am very pleased in Tom." Then, "I am very pleased in seeing Tom." Then, "I am very pleased in seeing Tom run." Then, "I am very pleased in seeing Tom run fast." Then, "I am very pleased in seeing Tom run real fast." And so on. We can evolve quite a few different phrases with quite a few unique meanings through the simple addition or subtraction of previously defined words. Could genes in DNA do the same thing? Technically yes, but there are a few problems to consider.
Remember that not every defined word in English will make sense when added to the phrase, "I am." Granted, the odds that one will make sense seem fairly good. However, the longer our sentence gets, the fewer words there are that make sense when added to it. Consider also that the placement of the words within our sentence is vital to its functionality. I might say, "I am green." This phrase makes sense in English. But what if the word "green" had been inserted into the wrong place? The sentence could just as easily have "evolved" to read, "I agreenm." This makes no sense in English and destroys the function of a previously functional sentence. Consider also that the duplication mutation could have occurred or been inserted into an area of the book of sentences where it was not needed. What are the odds that it would land in exactly the right "evolving" sentence, in exactly the right position within that sentence, when there are potentially millions of other locations it could have landed? Then consider that the sentence itself, even if it makes sense by itself, must make sense in relation to the other sentences around it and the rest of the book. If any of these problems arise, that sentence is lost in the ocean of non-function.
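A few lines of code make the placement problem concrete. This is a toy model: the three-word dictionary and the crude every-token-must-be-a-word check are assumptions standing in for "must make sense in English." It inserts the duplicated fragment at every possible character position of "I am":

```python
TINY_DICTIONARY = {"i", "am", "green"}

def makes_sense(sentence):
    """Crude 'foreman': accept only if every token is a known word."""
    tokens = sentence.split()
    return bool(tokens) and all(t.lower() in TINY_DICTIONARY for t in tokens)

SENTENCE = "I am"
FRAGMENT = " green"  # the duplicated word, carried with its spacing

# A blind insertion can land at any character position, the way an
# inserted stretch of DNA can land anywhere in a genome.
for i in range(len(SENTENCE) + 1):
    candidate = SENTENCE[:i] + FRAGMENT + SENTENCE[i:]
    print(repr(candidate), makes_sense(candidate))
```

Only one of the five placements, "I am green", is actually good English. Notice that the crude token check even lets "I green am" slip through; real word-order constraints are stricter than this toy foreman, which only strengthens the point.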
The Genetic Degeneration of Humans
On top of all these problems, mutations themselves are relatively rare events. For mammals, they occur on the order of one mutation per gene every 100,000 generations.13,14 And when they do occur, they are at least 1,000 times more likely to be detrimental than beneficial. Large population sizes and short generation times can help overcome this problem to a certain degree, but what about humans? We have had relatively small population sizes and very long generation times.
The genetic degeneration of humans is a well-recognized problem that is difficult to explain. Nachman and Crowell detail this perplexing situation in the following conclusion from their fairly recent paper on human mutation rates:
The high deleterious mutation rate in humans presents a paradox. If mutations interact multiplicatively, the genetic load associated with such a high U [detrimental mutation rate] would be intolerable in species with a low rate of reproduction [like humans and apes etc.] . . . The reduction in fitness (i.e., the genetic load) due to deleterious mutations with multiplicative effects is given by 1 - e^(-U) (Kimura and Maruyama 1966). For U = 3, the average fitness is reduced to 0.05, or put differently, each female would need to produce 40 offspring for 2 to survive and maintain the population at constant size. This assumes that all mortality is due to selection and so the actual number of offspring required to maintain a constant population size is probably higher.
The problem can be mitigated somewhat by soft selection or by selection early in development (e.g., in utero). However, many mutations are unconditionally deleterious and it is improbable that the reproductive potential on average for human females can approach 40 zygotes. This problem can be overcome if most deleterious mutations exhibit synergistic epistasis; that is, if each additional mutation leads to a larger decrease in relative fitness. In the extreme, this gives rise to truncation selection in which all individuals carrying more than a threshold number of mutations are eliminated from the population. While extreme truncation selection seems unrealistic [the death of all those with a detrimental mutational balance], the results presented here indicate that some form of positive epistasis among deleterious mutations is likely.15
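Nachman and Crowell's arithmetic can be checked directly. Under the multiplicative model they cite, mean fitness is e^(-U), so the genetic load is 1 - e^(-U):

```python
import math

# Mean fitness under multiplicatively interacting deleterious mutations,
# following the Kimura and Maruyama (1966) result quoted above.
U = 3.0                                  # deleterious mutations per genome per generation
mean_fitness = math.exp(-U)              # fraction of offspring escaping selection
offspring_per_female = 2 / mean_fitness  # needed for 2 survivors (constant population)

print(f"mean fitness     : {mean_fitness:.3f}")        # ≈ 0.050
print(f"offspring needed : {offspring_per_female:.1f}")  # ≈ 40.2
```

This reproduces the paper's figures: average fitness of about 0.05, hence roughly 40 offspring per female just to keep the population constant.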
Nachman and Crowell find the situation a very puzzling one. How does one get rid of all the bad mutations faster than they are produced? Does their hypothesis of “positive epistasis” adequately explain how detrimental mutations can be cleared faster than they are added to a population? If the functional effects of mutations were compounded in a multiplicative instead of an additive fashion, would fewer individuals die than before?
Even if every detrimental mutation caused the death of its owner, the reproductive burden of the survivors would not diminish, but would remain the same. For example, let's say that all those with at least three detrimental mutations die before reproducing. The population average would soon hover just above three deleterious mutations. Over 95% of each subsequent generation would have 3 or more deleterious mutations as compared with the original "neutral" population. The death rate would increase dramatically. In order to keep up, the reproductive rates of the surviving individuals would have to increase in proportion to the increased death rate (around 40 offspring per female just to stay even). The same thing would eventually happen if the death line were drawn at 100, 500, 1,000, 10,000 or more deleterious mutations. The only difference would be the length of time it would take a given population to build up a lethal number of deleterious mutations from a relatively "neutral" starting point. The population might survive fairly well for many generations without having to resort to huge increases in the reproductive rate. However, without getting rid of the accumulating deleterious mutations, the population would eventually experience an exponential rise in its death rate as its population average crossed the line of lethal mutations.
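The scenario just described can be sketched as a small simulation. This is an illustrative toy model with assumed parameters (U = 3 new deleterious mutations per offspring, a death threshold of 100 mutations, a fixed population of 1,000), not population-genetics software:

```python
import random

def poisson(lam, rng):
    """Poisson sample via Knuth's method (fine for small lam)."""
    limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def simulate(U=3.0, threshold=100, pop_size=1000, generations=300, seed=1):
    """Each offspring inherits its parent's deleterious-mutation count plus
    a Poisson(U) batch of new ones; anyone at or over the threshold dies
    before reproducing.  Reproduction is assumed unlimited, so survivors
    simply refill the population each generation."""
    rng = random.Random(seed)
    counts = [0] * pop_size          # start from a "neutral" population
    death_rate = 0.0
    for gen in range(generations):
        survivors = [c for c in counts if c < threshold]
        if not survivors:
            return gen, 1.0          # extinction
        death_rate = 1 - len(survivors) / len(counts)
        counts = [rng.choice(survivors) + poisson(U, rng)
                  for _ in range(pop_size)]
    return generations, death_rate

gens, rate = simulate()
print(f"after {gens} generations the per-generation death rate is {rate:.2f}")
```

Because counts in this model can only rise, every lineage eventually sits just under the threshold, and from then on roughly 1 - e^(-3), about 95%, of each generation dies per generation, which is the 40-offspring burden discussed above, merely postponed.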
Since the theory of positive epistasis does not seem to help the situation much, some other process must be found to explain how a population could preferentially rid itself of detrimental mutations. Consider an excerpt from a fairly recent Scientific American article entitled "The Degeneration of Man":
According to standard population genetics theory, the figure of three harmful mutations per person per generation implies that three people would have to die prematurely in each generation (or fail to reproduce) for each person who reproduced in order to eliminate the now absent deleterious mutations [75% death rate]. Humans do not reproduce fast enough to support such a huge death toll. As James F. Crow of the University of Wisconsin asked rhetorically, in a commentary in Nature on Eyre-Walker and Keightley's analysis: “Why aren't we extinct?”

The answer is that sex, which shuffles genes around, allows detrimental mutations to be eliminated in bunches. The new findings thus support the idea that sex evolved because individuals who (thanks to sex) inherited several bad mutations rid the gene pool of all of them at once, by failing to survive or reproduce.

Yet natural selection has weakened in human populations with the advent of modern medicine, Crow notes. So he theorizes that harmful mutations may now be starting to accumulate at an even higher rate, with possibly worrisome consequences for health. Keightley is skeptical: he thinks that many mildly deleterious mutations have already become widespread in human populations through random events in evolution and that various adaptations, notably intelligence, have more than compensated. “I doubt that we'll have to pay a penalty as Crow seems to think,” he remarks. “We've managed perfectly well up until now.”16
Even though I do not agree with much of Crow's thinking, I do agree with him when he says that harmful mutations are accumulating in the human gene pool far faster than they are leaving it. However, on what basis does he suggest that harmful mutations spontaneously cluster themselves into “bunches” for batch elimination from the gene pool of a given population? Crow does not suggest a mechanism, nor does it seem remotely intuitive how this might occur via mindless naturalistic means.
Keightley, in my view, is far too optimistic. He assumes that because evolution has happened in the past, it will somehow solve the problem. He fails even to consider the notion that perhaps humans, apes, and other species with comparably long generation times have been gradually degenerating all along. Perhaps evolution only proceeds downhill (devolution)? Perhaps mutations do not improve functions over time so much as they remove functions over time in species with slow generation times?
Then again, even if the degenerative effects of mutations were somehow solved in populations with slow reproduction times, the addition of new information to the gene pool often involves the crossing of huge oceans of neutral/nonfunctional sequence gaps that would take, for all practical purposes, forever to cross. It seems as though mutations, averaged over an extended period of time, tend toward loss and extinction rather than toward any sort of improvement or gain.
Stacking the Deck
If the theory of evolution is dependent upon such a stacking of the deck as described above, and if, even when the deck is stacked in favor of evolution, evolution runs into impossible statistical problems, then how is it that the theory of evolution can be so earnestly presented as the only "rational" answer to the question of the origins of living things?
References

1. Lewin, Benjamin. Genes V. Oxford University Press, 1994.
2. Thomas D. et al. Principles of Medical Genetics, 1998.
3. L. A., Gregor Mendel: An Opponent of Descent with Modification. History of Science 26: 41-75, 1988.
4. 131: 245-253, 1992.
5. Behe, Michael J. Darwin's Black Box. The Free Press, 1996.
6. Dawkins, Richard. The Blind Watchmaker.
7. Mendel, Gregor. Experiments in Plant Hybridization. 1865.
8. Veith, Walter J. The Genesis Conflict. The Amazing Discoveries Foundation, 1997, p.
9. Wieland, Antibiotic Resistance in Bacteria. Creation Ex Nihilo Technical Journal, 8:1 (1994), p. 5.
10. Konotey-Ahulu, The Sickle Cell Disease Patient. New York: Macmillan, 1991.
11. McQuire, Eerie: Human Arctic Fossils Yield Resistant Bacteria. Medical Tribune, 12/29/1988, pp. 1, 23.
12. Hall, Evolution on a Petri Dish: The Evolved B-Galactosidase System as a Model for Studying Acquisitive Evolution in the Laboratory. Evolutionary Biology, 15 (1982): 85-150.
13. Achillies. Lecture Notes, Biochemistry 110-A, University of California, Riverside, Fall 1999.
14. Ayala, Francisco J. Teleological Explanations in Evolutionary Biology. Philosophy of Science, March 1970, p. 3.
15. Nachman, Michael W., Crowell, Susan L. Estimate of the Mutation Rate per Nucleotide in Humans. Genetics, September 2000, 156: 297-304 (http://www.genetics.org/cgi/content/full/156/1/297?)
16. Tim, The Degeneration of Man. Scientific American, April 1999, p. 32.