evol-mut-circle is an international group of scientists, philosophers, and historians interested in research on the role of mutation in evolution, particularly as a dispositional factor. The group meets monthly via videoconference for seminars and discussion. Coordination of activities occurs via an email list. To join, contact Arlin.
Upcoming sessions
Virtual meetings take place at 15:00 UTC on the 3rd Tuesday of every month. Upcoming events may include regular seminars, special topic discussions, AMAs, and journal clubs.
21 January (2025), regular seminar
James Horton, GnT motifs can increase specific mutation rates >1000-fold in bacteria
4 February (2025),
18 February (2025), regular seminar
Bryan Gitschlag, Graduated effects of mutation bias in adaptation
4 March (2025),
18 March (2025), regular seminar
Kelley Harris, TBA
1 April (2025)
15 April (2025), regular seminar
20 May (2025), regular seminar
June (2025), regular seminar
July (2025), regular seminar
August (2025), regular seminar
September (2025), regular seminar
Iñigo Martincorena, TBD
Patrick Dolan, TBD
Past (recorded) sessions
Recordings include the presentation and discussion, with an auto-generated transcript.
Most recent update: 3 November 2023 (initial version). See the change log at the bottom for details.
This is an ongoing list of updates and corrections to Mutation, Randomness and Evolution, including typographic errors, as well as substantive errors and updates to knowledge.
Typos and glitches
The following mistakes (given in order, by section number) might be confusing or misleading:
section 1.1 refers 3 times to a “50-fold range” of mutation rates in MacLean, et al (2010), but the actual range is 30-fold.
section 2.3. See the note about 1.1 (30-fold, not 50-fold)
in section 4.3, replace “development, growth, and hereditary” with “development, growth, and heredity”
section 4.5 describes a hypothetical experiment examining 20 mutations each for 5 species, then refers to “our small set of 20 X 10” mutations instead of “our small set of 20 X 5” mutations
section 5.7, the reference to the “ongoing commitment of evolutionary biologists to neo-Darwinism” is actually referring to the second aspect of neo-Darwinism, i.e., not adaptationism but the dichotomy of roles in which variation is subservient to selection
Fig 9.7 right panel title refers to “Frequency rate vs. fitness” instead of “Mutation rate vs. fitness”
section 9.3. See the note about 1.1 (30-fold, not 50-fold)
section A.3, the equation is mis-formatted. The left-hand side should be x_{i+1}, not x_i + 1
More recent work on topics covered in MRE
MRE was mostly completed in 2019 and only has a few citations to work published in 2020. For more up-to-date perspectives, see the following.
Ch. 8 covers the theory of arrival bias, and Ch. 9 covers evidence. Both chapters suggest generalizations that are subject to further evaluation. Most of the updates are going to involve these two chapters.
Prediction regarding self-organization (MRE 8.11)
For a long time, I’ve been arguing that one sense of “self-organization” in the work of Kauffman (1993) and others is an effect of findability that is related to the explanation for King’s codon argument, arising from biases in the introduction process (Stoltzfus, 2006, 2012). MRE 8.11 calls this “the obvious explanation for the apparent magic of Kauffman’s ‘self-organization’”, and suggests how to demonstrate this directly by implementing an artificial mutation operator that samples equally by phenotype.
This demonstration has been done— independently of my suggestions— by Dingle, et al. (2022), Phenotype Bias Determines How Natural RNA Structures Occupy the Morphospace of All Possible Shapes. The findability of intrinsically likely forms has been explored in an important series of studies from Ard Louis’s group. The earliest one, Schaper and Louis (2014), actually appeared before MRE was finished (I saw it but did not grasp the importance). More recent papers such as Dingle, et al. (2022) have made it clear that the “arrival of the frequent” or “arrival bias” in this work is a reference to biases in the introduction process that favor forms (phenotypes, folds) that are over-represented (i.e., frequent) in genotypic state-space.
Berkson’s paradox refers to associations induced by conditioning, often illustrated by an example in which a negative correlation is induced in a selected sub-population, e.g., the Wikipedia page explains how a negative correlation between looks and talent could arise among celebrities if achieving celebrity status is based on a threshold of looks + talent. MRE 8.13 suggests that something like this will happen in nature, because the changes that come to our attention in spite of the disadvantage of a lower mutation rate will tend to have a larger fitness advantage, and vice versa.
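As a minimal illustration of the celebrity example (my own sketch, with hypothetical traits and an arbitrary threshold, not data from any study), a few lines of Python show how conditioning on a threshold induces a negative correlation between two traits that are independent in the full population:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
looks = rng.normal(size=n)   # independent standard-normal traits
talent = rng.normal(size=n)

# Full population: correlation is ~0 by construction
print(round(np.corrcoef(looks, talent)[0, 1], 3))

# "Celebrities": those above a threshold of looks + talent
celeb = (looks + talent) > 2.0
print(round(np.corrcoef(looks[celeb], talent[celeb])[0, 1], 3))  # clearly negative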
Figure: Data on clonal hematopoiesis lines from Watson and Blundell (2022), showing a negative correlation between growth advantage (left) and inferred mutation rate (right).
There is now a theory for this, and suggestive evidence (e.g., figure above). In “Mutation and selection induce correlations between selection coefficients and mutation rates,” Gitschlag, et al (2023) address the transformation of a joint distribution of mutation rates and selection coefficients from (1) a nominal distribution of starting possibilities, to (2) a de novo distribution of mutations (the nominal sampled by mutation rate), to (3) a fixed distribution (the de novo sampled by fitness benefit). The dual effect of mutation and selection can induce correlations, but they are not necessarily negative: they can assume any combination of signs. Yet, Gitschlag, et al (2023) argue that natural distributions will tend to have the kinds of shapes that induce negative correlations in the fixed distribution. They use simulations to illustrate these points with realistic data sets. They also show a relatively clear example in which, for the fixed distribution, selection coefficients (estimated from deep mutational scanning) are amplified for a rare mutational type, namely double-nucleotide mutations among TP53 cancer drivers. That is, the drivers that rise to clinical attention in spite of having much lower mutation rates, have greater fitness benefits that (post hoc, via conditioning) offset these lower rates.
MRE 8.13 frames this as an issue of conditioning, but that is only if one is looking backwards, making inferences from the fixed distribution. The forward problem of going from the nominal to the de novo to the fixed can be treated as an issue of what is called “size-biasing” in statistics.
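To make the forward problem concrete, here is a minimal Python sketch of size-biasing (my own illustration: the distributions and numbers are arbitrary, not taken from Gitschlag, et al, and the fixation probability is approximated as 2s). Each stage resamples the previous one in proportion to a weight, shifting the distribution toward larger values of the weighting variable:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Nominal distribution of possibilities: (mu, s) pairs, drawn
# independently here purely for illustration. Real nominal
# distributions need not factorize, and their shape determines
# what correlations emerge downstream.
mu = rng.lognormal(mean=-8.0, sigma=1.5, size=n)  # mutation rates
s = rng.exponential(scale=0.02, size=n)           # selection coefficients

def size_biased(weights):
    """Resample indices 0..n-1 with probability proportional to weights."""
    return rng.choice(n, size=n, replace=True, p=weights / weights.sum())

# de novo = nominal, size-biased by mutation rate
de_novo = size_biased(mu)
# fixed = de novo, size-biased by fixation probability (~2s for small s)
fixed = de_novo[size_biased(2.0 * s[de_novo])]

for label, idx in [("nominal", np.arange(n)), ("de novo", de_novo), ("fixed", fixed)]:
    print(f"{label:8s} mean log10(mu) = {np.log10(mu[idx]).mean():6.2f}  "
          f"mean s = {s[idx].mean():.4f}")
```

Running this shows the de novo stage shifted toward higher mutation rates and the fixed stage shifted toward higher selection coefficients, which is the size-biasing machinery at work.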
Apropos of this, I realized too late that the problem of conditioning undermines an argument from Stoltzfus and Norris (2015) that is repeated in the book (Box 9.1 or section 9.8.1). When investigating the conservative transitions hypothesis, Stoltzfus and Norris (2015) found that transitions and transversions in mutation-scanning experiments have roughly the same DFE. They also considered the DBFE (distribution of beneficial fitness effects) from laboratory adaptation experiments, which showed that beneficial transversions are slightly (not significantly) better than beneficial transitions.
At the time, this was humorously ironic: not only did we fail to find support for 50 years of lore, the data on adaptive changes actually gave the advantage to transversions.
However, we were attempting to make an inference about the nominal distribution from the fixed distribution, and therefore our inference was subject to conditioning in a way that made it unsafe: transversions that appear in the fixed distribution, in spite of their lower mutation rates, might have greater fitness benefits that (via conditioning) offset these lower rates. Thus, the pattern of more strongly beneficial transversions in the fixed distribution suggests (weakly, not significantly) a Berkson-like effect, but it does not speak against the hypothesis that the nominal DBFE is enriched for transitions (a hypothesis that, to be clear, has no direct empirical support).
Prediction about graduated effects (MRE 9.8.2)
As of 2020, all of the statistical evidence for mutation-biased adaptation in nature was based on testing for a simple excess of a mutationally favored type of change, relative to a null expectation of no bias. As MRE 9.8.2 explains, this is perfectly good evidence for mutation-biased adaptation, but not very specific as evidence for the theory of arrival biases. The theory predicts graduated effects, such that (other things being equal) a greater bias has a greater effect. In the weak-mutation regime, the effects are not just graduated, but proportional.
Evidence for this kind of graduated effect is now available in “Mutation bias shapes the spectrum of adaptive substitutions” by Cano, et al. (2022). The authors show a clear proportionality between the frequencies of various missense changes among adaptive substitutions, and the underlying nucleotide mutation spectrum (measured independently). They also developed a method to titrate the effect of mutation bias via a single coefficient β, defined as a coefficient of binomial regression for log(counts) vs. log(expected). Thus, one expects β to range from 0 (no effect) to 1 (proportional effect). Cano, et al. (2022) found that β is close to 1 (and significantly greater than 0) in three large data sets of adaptive changes from E. coli, yeast, and M. tuberculosis. They also split the mutation spectrum into transition bias and other effects, and found that β ~ 1 for both parts.
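A minimal sketch of the logic of β (my own illustration, using a made-up spectrum, and a simple least-squares slope on the log scale as a stand-in for the binomial regression of Cano, et al):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mutation spectrum: expected fractions of several
# mutational classes among adaptive changes if mutation bias acts
# proportionally (beta = 1)
expected = np.array([0.30, 0.22, 0.15, 0.12, 0.09, 0.07, 0.05])

def simulate_counts(beta, total=2000):
    """Counts of adaptive changes when the spectrum acts with strength beta."""
    p = expected**beta
    return rng.multinomial(total, p / p.sum())

for beta_true in (0.0, 0.5, 1.0):
    counts = simulate_counts(beta_true)
    # Slope of log(counts) vs log(expected) recovers beta
    slope = np.polyfit(np.log(expected), np.log(counts + 0.5), 1)[0]
    print(f"true beta = {beta_true:.1f}  estimated beta ~ {slope:.2f}")
```

With β = 0 the counts are uniform regardless of the spectrum; with β = 1 the counts mirror the spectrum proportionally, which is what Cano, et al. report for their empirical data sets.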
What this suggests generally is that each species will exhibit a spectrum of adaptive changes that reflects its distinctive mutation spectrum in a detailed quantitative way. This is precisely what the theory of arrival bias predicts, in contrast to Modern Synthesis claims (about the irrelevance of mutation rates) documented in MRE 6.4.3.
Note that the theory of arrival bias predicts graduated effects under a broad range of conditions, but only predicts β ~ 1 when the mutation supply μN is sufficiently small. Cano, et al. (2022) present simulation results showing how, as μN increases, the expected value of β drops from 1 to 0. This result applies to finite landscapes: for infinite landscapes, the effect of mutation bias does not disappear at high mutation supply (see Gomez, et al 2020).
Misleading claim: “this is expected…” (MRE 8.13)
The section on conditioning and Berkson’s paradox (see above) has the following interpretation of a result from Stoltzfus and McCandlish (2017):
When we restrict our attention to events with greater numbers of occurrences, we are biasing the sample toward higher values of μs. Thus, we expect higher values of μ, higher values of s, and a stronger negative correlation between the two. In fact, Table 9.4 shows that the transition bias tends to increase as the minimum number of occurrences is increased. This is expected, but it does not mean that the fitness effects are any less: again, we expect both higher μ and higher s, as the number of recurrences increases.
The dubious part is “This is expected.” There may be a reason to expect this (I’m not entirely sure), but upon reflection, it does not relate to the paradox of conditioning that is the topic of this section; therefore, the statement is misleading in context. The part that says “Thus, we expect” follows from conditioning. But the next “This is expected…” claim, if it is indeed correct, would relate to the compounding of trials. For parallelism, i.e., paths with 2 events, the effect of a bias on paths is linear and the effect of a bias on events is squared (see MRE 8.12). If we are considering only paths with 3 events or more, then we can expect an even stronger effect of mutation bias on the bias in events, because counting outcomes by events (rather than paths) is like raising the effect-size of the bias to a higher power. That is, conditioning on 3, 4 or more events per path will enrich for mutations with higher rates, whether they are transitions or transversions, but (so far as I understand) will not enrich for transition bias in the underlying paths.
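One way to formalize this (my own gloss, assuming that events accrue independently on each path in proportion to its mutation rate):

$$\Pr(k \text{ events on path } i) \propto \mu_i^{k}$$

so that, for two alternative paths with rates $b\mu$ and $\mu$, the odds favoring the higher-rate path are $b$ when counting single events, $b^2$ for parallel pairs, and $b^k$ for paths with $k$ events.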
Poorly phrased: “the question apparently was not asked, much less answered” (MRE 8.14)
This statement— in regard to whether 20th-century population genetics addressed the impact of a bias in introduction— sounds broader than it really is. Clearly Haldane and Fisher asked, and answered, a question about whether biases in variation could influence the course of evolution. The problem is that they didn’t ask the right question, which is about introduction biases. I’m not aware of any 20th-century work of population genetics that asks the right question. The closest is Mani and Clarke (1990), which treats the order of introductions as a stochastic variable that reduces predictability and increases variance (whereas if they had treated a bias they would have discovered that it increases predictability).
So, the claim is correct, but it is less meaningful than it sounds. Clearly the pioneers of evo-devo raised the issue of a causal link between developmental tendencies of variation and tendencies of evolution. In response, Maynard Smith, et al (1985) clearly and explicitly raised the question of how developmental biases might “cause” evolutionary trends or patterns. As recounted in MRE 8.8 and 10.2, they did not have a good answer. In general, historical evolutionary discourse includes both pre-Synthesis thinking (orthogenesis; mutational parallelisms per Vavilov or Morgan) and post-Synthesis thinking (evo-devo; molecular evolution) in which tendencies of variation are assumed or alleged to be influential, but the problem of developing a population-genetic theory for this effect apparently was not solved in the 20th century (a substantial failure of population genetics to serve the needs of evolutionary theorizing).
General issues needing clarification
Stuff that isn’t quite right, but which does not have an atomic fix.
Causal independence and statistical non-correlation
In the treatment of randomness in MRE, causal independence and statistical non-correlation are often treated as if they are the same thing. I confess that sorting this out and keeping it straight, without unduly burdening the reader, was beyond my capabilities.
The phrase “arrival bias”
The phrase “arrival of the fittest” or “arrival of the fitter” is used only twice in MRE, to refer to the thinking of others. I missed an opportunity to capitalize on “arrival bias”, a very useful and intuitive way to refer to biases in the introduction process, e.g., as in Dingle, et al (2022). Referring to the “arrival of the fittest” sounds very clever, but it combines effects of introduction and fitness in a way that is unwelcome for my purposes. Strictly speaking, arrival bias in the sense of introduction bias is an effect of the arrival of the likelier (i.e., mutationally likelier), not arrival of the fitter. One version is the “arrival of the frequent” concept of Schaper and Louis (2014), meaning a tendency for mutation to stumble upon the alternative forms that are widely distributed in genotype space.
Note that, by contrast, when Wagner (2015) refers to “the arrival of the fittest”, this is not an error of confounding mutation and fitness, but a deliberate attempt to tackle the problem of understanding how adaptive forms originate.
Quantitative evolutionary genetics
In the past, I mostly ignored QEG as irrelevant to my interests in the discrete world of molecular evolution. But in preparing to write MRE, I invested serious effort in reading the QEG literature and integrating it into my thinking about variation and causation. The biggest gap is the lack of an explanation of how and why the dispositional role of variation differs so radically in the QEG framework as compared to the kinds of models we use to illustrate arrival bias. This gap exists because the problem is unsolved.
Another issue that does not come out clearly is the precise position of skepticism in Houle, et al. (2017), and more generally, the nature and extent of the neo-Darwinian refugium (or perhaps, redoubt) in the field of quantitative genetics. I incorrectly stated in MRE 5.7 that Houle, et al (2017) favor a correlational-selection-shapes-M theory, whereas their explicit position is that no known model fits their data (this position is better reflected in MRE 9.7). I am struck by the fact that the data on M:R correlation from quantitative genetics are far more rigorous and convincing than various indirect arguments of the same general form in the evo-devo literature. Yet, while the importance of “developmental bias” is often depicted as an established result in the literature of evo-devo (and EES advocacy), quantitative geneticists are clearly hesitant to conclude that the M:R correlation reflects M –> R causation, e.g., see the reference to “controversial” in the first sentence of the abstract of Houle, et al., or in Rohner and Berger (2023).
This is related to the first problem above. Variational asymmetries do not have a lot of power in the standard QEG framework: they are easily overwhelmed by selection. The quantitative geneticists understand this (and the evo-devoists perhaps do not). However, available QEG theory on the effects of directional (as opposed to dimensional) bias is limited to showing how a bias causes a slight deflection from the population optimum on a 1-peak landscape (Waxman and Peck, 2003; Zhang and Hill, 2008; Charlesworth, 2013), and lacks the kinds of multi-peak or latent-trait models that IMHO are going to show stronger effects (Xue, et al. 2015). It will be interesting to see how this plays out.
Change log
3 November 2023. Initial version with typos, updates (with a couple of figures) and Table of Contents.
The term “mutationism” appeared in the early 20th century in regard to the views of early geneticists such as de Vries, Bateson, Punnett, and Morgan (e.g., Poulton, 1909 or McCabe 1912). These leading thinkers did not use “mutationism” to describe their own diverse views.[1] Perhaps they thought of themselves as free thinkers, not tied to any ideology or “-ism”.
In the contemporary literature, “mutationism” is most often a strawman in which evolution takes place by dramatic mutations alone, without selection (see the conceptual immune system of neo-Darwinism). This pejorative use of “mutationism” continues today in the writings of Synthesis gatekeepers such as Futuyma (2023) or Svensson (2023).
Yet the 2013 book “Mutation-driven Evolution” by Masatoshi Nei— a pioneer of molecular evolutionary genetics who passed away in early 2023— brought renewed attention to the idea of a broad alternative to traditional thinking, one focused on mutation rather than selection. Among published reviews of the book, only Wright rejects Nei’s thinking as mistaken, referring to it as “Mutationism 2.0.” Five other reviews try to explain Nei’s position sympathetically, without necessarily endorsing it. Three reviews do not mention “mutationism” (Brookfield, Galtier, Weiss). The review by Gunter Wagner entitled “The changing face of evolutionary biology”, like my review for Evo & Devo, attempts to identify a sympathetic meaning of “mutationism” appropriate for Nei’s distinctive project, focusing on the importance of mutations in evolution.
One might be tempted to avoid the term “mutationism” (along with “saltationism” and “orthogenesis”) on the grounds that it is toxic. To use this term is to risk ridicule and invite misunderstanding. Why do that, when one’s goal is to communicate with readers? I avoided these terms myself for many years, on precisely these grounds. However, eventually I decided not to acquiesce to rhetorical tactics designed to browbeat dissenters using strawman arguments. As we say here in the US, that would be letting the terrorists win. Promoting good intellectual hygiene in our field means calling out fallacies, and addressing alternative views fairly and rigorously, without rhetorical trickery. [2]
If there are distinctive features of the views of Nei and the early geneticists, nothing prevents us from using “mutationism” to denote those features. If “selectionism” is allowable to designate a focus on selection, without denying a role for mutations in evolution, then “mutationism” is allowable to designate a focus on mutation that does not deny selection. In my own thinking, I tend to associate “mutationism” with a non-exclusive explanatory position, with the lucky-mutant conception of evolutionary dynamics (see the shift to mutationism is documented in our language), or with a school of thought.
TLDR
| Possible meaning of mutationism | Type of meaning |
| --- | --- |
| evolution happens by dramatic mutations alone, without selection | Strawman from Synthesis tribal mythology, employed by gatekeepers to police orthodoxy |
| identifying distinctive mutational-developmental changes is a uniquely powerful way to explain the evolution of form | Explanatory position on what kinds of causal attributions are meaningful, key to evo-devo |
| reconstructing mutational changes provides uniquely reliable knowledge of past evolution | Methodological position on which causes are most accessible to scientific methods, also key in evo-devo |
| the timing and character of events of mutation determine the timing and character of evolutionary change | Empirical position on evolutionary dynamics, e.g., in applications of origin-fixation models |
| diverse evolutionary phenomena arise from combining mutation and genetics | Loosely defined school of thought associated with Bateson, Punnett and Morgan |
| a preliminary and imperfect expression of (for instance) a future paradigm of dual causation | Transition state mainly of historical interest |
The Mutationism Story in Synthesis tribal mythology
In the mainstream literature of evolutionary biology, history is told in a way that makes things turn out right for the Modern Synthesis, e.g., there is literally an “eclipse of Darwinism”— a period of darkness and strife— that ends when the Modern Synthesis solves the problem of evolution. This self-serving view of history is called “Synthesis Historiography” or SH by professional historians (Amundson, 2005). In SH, critics of neo-Darwinism behave irrationally and hold views with obvious flaws, while Darwin’s followers use reason and evidence to establish important truths.
The stories in SH function as a tribal mythology, i.e., scientists who identify culturally with the “Synthesis” tell these stories to each other to affirm their identity, which is based on a shared belief in their fundamental rightness about evolution, and the wrongness of historic opponents. For instance, in the Mutationism Story, the early geneticists are too stupid to understand populations, gradual change, or selection, which they reject, believing instead that evolution happens by dramatic mutations alone, without selection. The problem is solved when Fisher sees what the mutationists are too foolish to see: there is no conflict between gradualism, selection, and genetics. Versions of this fable are given in this blog (e.g., Dawkins 1987, p. 305 of The Blind Watchmaker; Cronin 1991, p. 47 of The Ant and the Peacock; Ayala and Fitch 1997; Futuyma, 2017; Segerstråle 2002, Oxford Encyclopedia of Evolution 2, pp. 807 to 810; Charlesworth and Charlesworth 2009). Here is Dawkins’s version:
“It is hard for us to comprehend but, in the early years of this century when the phenomenon of mutation was first named, it was regarded not as a necessary part of Darwinian theory but as an alternative theory of evolution! There was a school of geneticists called the mutationists, which included such famous names as Hugo de Vries and William Bateson among the early rediscoverers of Mendel’s principles of heredity, Wilhelm Johannsen the inventor of the word gene, and Thomas Hunt Morgan the father of the chromosome theory of heredity. . . Mendelian genetics was thought of, not as the central plank of Darwinism that it is today, but as antithetical to Darwinism. . . It is extremely hard for the modern mind to respond to this idea with anything but mirth”
Dawkins, 1987, p. 305
Actual history contradicts the Mutationism Story. In reality, immediately upon the discovery of genetics in 1900, early geneticists began to assemble the pieces of a Mendelian view of evolution by mutation, inheritance, and differential survival (Stoltzfus and Cable 2014). The multiple-factor theory was immediately suggested by Bateson and others. Here Bateson and Saunders (1902) give a precise verbal rendition of the Hardy-Weinberg paradigm, the first rigorous paradigm of population thinking:
“It will be of great interest to study the statistics of such a population in nature. If the degree of dominance can be experimentally determined, or the heterozygote recognised, and we can suppose that all forms mate together with equal freedom and fertility, and that there is no natural selection in respect of the allelomorphs, it should be possible to predict the proportions of the several components of the population with some accuracy. Conversely, departures from the calculated result would then throw no little light on the influence of disturbing factors, selection, and the like.”
Bateson and Saunders, 1902, p. 130
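In modern notation (my gloss, not text from the source), the prediction that Bateson and Saunders describe verbally is the familiar Hardy-Weinberg expectation: for allele frequencies $p$ and $q = 1 - p$, random mating with equal fertility and no selection gives genotype proportions

$$p^2 : 2pq : q^2$$

and departures from these proportions implicate the “disturbing factors, selection, and the like.”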
Thomas Hunt Morgan won a Nobel Prize for his work in genetics. His tendency to refer to “survival” of “definite variations” and to avoid “natural selection” reflects, not a rejection of what we call “selection” today, nor some kind of mental block, but a belief that shifting the goal-posts to avoid accountability is bad for science. For Morgan, the term “natural selection” had to be reserved for Darwin’s non-Mendelian theory based on the blending of environmentally stimulated fluctuations (“indefinite variability”), a theory correctly rejected by the scientific community when it was experimentally refuted by Johannsen. Morgan called out the problem of goal-post-shifting when he wrote that “Modern zoologists who claim that the Darwinian theory is sufficiently broad to include the idea of the survival of definite variations seem inclined to forget that Darwin examined this possibility and rejected it.” (Morgan, 1904).
The early geneticists did not reject what we would call “selection” today, e.g., Morgan (1916), in his closing summary, writes that “Evolution has taken place by the incorporation into the race of those mutations that are beneficial to the life and reproduction of the organism” (p. 194). Bateson, Punnett, de Vries and Johannsen were the other early geneticists best known for their views on evolution. Johannsen and de Vries both carried out successful selection experiments. de Vries begins his major 1905 English treatise by writing:
“Darwin discovered the great principle which rules the evolution of organisms. It is the principle of natural selection. It is the sifting out of all organisms of minor worth through the struggle for life. It is only a sieve, and not a force of nature” (p. 6)
In Materials for the Study of Variation, Bateson (1894) refers to natural selection as “obviously” a “true cause” (p. 5). Punnett (1905) explains that mutations are heritable while environmental fluctuations are not, concluding that “Evolution takes place through the action of selection on these mutations” (p. 53).
The views of these influential scientists, and their contributions to evolutionary thinking, were not secrets: they were published, cited and discussed. Bateson, Punnett, Morgan and de Vries all were awarded the Royal Society Darwin medal in the period from 1900 to 1930. That is, the Mutationism Story is not just a wildly distorted version of history: it is a wildly distorted version of history contradicted by sources that are readily accessible to any serious scholar. The ongoing success of this kind of mythology is a testament to the power of propaganda and to the insularity of the Synthesis tribal culture (again, see the conceptual immune system).
Explanatory or methodological mutationism
Explanatory and methodological versions of mutationism are useful to contemplate, by comparison to the flavors of adaptationism identified by Godfrey-Smith (2001):
Empirical adaptationism is ontological, based on a belief about how the world actually is: living things are pervasively adapted, down to the finest detail, and therefore, we will see adaptation everywhere we look because adaptation is in fact everywhere we look, and the explanation for traits will inevitably be functional because traits are in fact inevitably functional.
Methodological adaptationism holds that, even though adaptation might not be pervasive, it is the thing we are uniquely equipped to study using the methods of science. This view tends to travel together with the ideology that evolution is a combination of selection and “chance”, with the latter being hard to study systematically.
Explanatory adaptationism is the view that, although selection might not be everything, and although we might be able to study other kinds of causes in evolution, a focus on selection and adaptation is justified because adaptation is the distinctive problem in evolution, and selection is the necessary principle behind adaptation.
Analogously, we can imagine empirical, explanatory, and methodological versions of mutationism. The lucky mutant view mentioned below is one possible ontological or empirical flavor of mutationism. In methodological mutationism, which is clearly a research program in evo-devo, we focus on identifying the mutational-developmental changes involved in evolution on the grounds that this is a distinctively reliable and productive way to study evolution. In explanatory mutationism, our focus is on identifying the detailed mutational-developmental changes underlying changes in form, because explaining changes in form over time is the distinctive challenge of evolutionary biology.
Bateson’s early work exemplifies methodological mutationism: he believed that, in order to understand how evolution happens, the first step was to study variations. Accordingly, his Materials for the Study of Variation is a catalog of 886 numbered cases of discontinuous variations. Bateson planned a second volume on continuous variation but subsequent work on quantitative trait distributions made this unnecessary.
Bateson’s approach was observational, but today we see various experimentally-oriented mutationist projects in evolutionary biology:
attempts to reconstruct mutational changes involved in key changes in development, in the context of evo-devo
systematic measurements of M in quantitative genetics, e.g., Houle, et al (2017)
reconstructing ancestral protein molecules and their mutants in order to reconstruct the path of history and test hypotheses (from biochemically-oriented molecular evolutionists, e.g., Dean, Weinreich, Thornton, et al)
the recent focus on using deep sequencing methods to characterize the mutation spectrum in quantitative detail in a variety of organisms, and in the context of cancer biogenesis
Note that these projects are generally situated in paradigms that are not focused solely on mutation, but also reflect functionalist concerns. This is clearly true of evo-devo, for instance, as the analysis of Novick (2023) makes clear. The evo-devoists are not merely concerned with understanding why certain types of transformations are mutationally and developmentally likely, they are also concerned with selection and function. The same is obviously true of the line of work from Thornton and colleagues, which combines the reconstruction of mutants with functional assays and even selection experiments. In the study of cancer drivers and clonal haematopoiesis mutants, contemporary research on mutation spectra, mutation rates, and repair mutants is premised on the understanding that clinical prevalence reflects both the rate of mutational origin and the selection intensity (figure; see Cannataro, et al. 2019; Watson and Blundell, 2022).
If we look at methodological mutationism as an extreme or exclusive position, it is difficult to separate from an extreme form of skepticism about selection that seems unwarranted today, when we can test hypotheses of selection in rigorous ways and assign some non-negligible proportion of the variance in outcomes to selection. Apropos, Nei (2013) does not reject selection as a causal principle in evolution, yet in practice, he seems to reject every attempt to attribute something concrete to positive selection. His approach recalls the attitude of Bateson, who (a century earlier) disparaged adaptationist story-telling by appealing to Voltaire’s Dr. Pangloss, a trope made famous in the “Panglossian paradigm” of Gould and Lewontin (1979). A scientist in Bateson’s time might find it easy to dismiss the vast majority of claims about selection as armchair speculation, not science. Punnett was so deeply skeptical of adaptive explanations that he rejected adaptive mimicry as an explanation for apparently mimetic morphs in butterflies!
Likewise, it’s hard to think of explanatory mutationism as an exclusive position. Clearly we can study the evolution of form from a structuralist viewpoint as a series of transformations based on genetic encodings and the intrinsic self-organizing properties of material systems, but we also can study the evolution of form from an adaptationist perspective.
So, rather than supposing that mutationism is uniquely explanatory for evolution in general, perhaps we can suppose instead that it is distinctively explanatory in some limited but important context. What is the limited but important context in which selective explanations are the least informative or trustworthy, and in which mutational explanations have more power to explain what we wish to understand? I think the best answer here is that there are some aspects of deep divergence, such as the formation of new body plans, major taxa, or key innovations, in which the power of selective explanations is at its lowest—because there are too many degrees of freedom— and the power of mutational explanations is at its highest, e.g., when key innovations can be associated with specific changes in developmental genetics, against a background of conserved features that do not change.
Mendelo-mutationism as a school of thought
The “school of thought” version of Mendelian mutationism is not a unified theory, but a loose collection of beliefs and ideas, overlapping substantially with how the “Modern Synthesis” is construed mistakenly today as a loose collection of beliefs consistent with genetics and selection (see this blog or Stoltzfus and Cable, 2014 for a review).
The early geneticists were the first scientists to accept particulate inheritance and mutation as the foundation of their understanding of evolution, in the sense that they viewed with suspicion any idea that could not be reconciled with particulate inheritance and mutation. Adopting genetics as the foundation for evolutionary reasoning sounds very familiar today, but in 1909 this was a disruptive view that seems to have pissed off evolutionists who were not geneticists, i.e., most of them. Imagine these upstarts telling leading evolutionary thinkers— paleontologists, systematists, embryologists— that the foundation for all thinking in evolution must be particulate inheritance and mutation, new discoveries only understood by a small group of scientists!
As noted above, Bateson and Saunders (1902) clearly articulated the research program of looking for deviations from Hardy-Weinberg expectations as a way of detecting causes other than inheritance.
In the same 1902 paper, they explain what became known as “the multiple factor theory” in which a smooth distribution of trait-values reflects, not blending inheritance and fluctuation, but the joint effect of Mendelian variation at many loci, combined with environmental noise.
But of course they also considered non-gradual changes via distinctive mutations, i.e., saltations. To the extent that non-gradual changes reflecting distinctive mutations are important in evolution, understanding evolution requires knowing how and when these distinctive mutations arise, based on relevant theories and systematic data. This is why Bateson (1894) catalogued distinctive variations as a way of understanding evolution. Morgan later made a systematic search for mutations in fruit-flies. It was Morgan who first clearly depicted evolution as a series of mutations that are accepted by virtue of being beneficial to the survival of the species. He articulated the concept of a probability of fixation in 1916, distinguishing the case of beneficial, neutral and deleterious mutations (the mathematical problem was later solved partially by Haldane, 1927 and more thoroughly by Kimura, 1962).
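For reference, here in modern notation are the classic results on the probability of fixation $\pi$ of a new beneficial mutation (my summary of the textbook forms, not text from the sources):

$$\pi \approx 2s \qquad \text{(Haldane, 1927; small } s \text{, large population)}$$

$$\pi = \frac{1 - e^{-4N_e s p}}{1 - e^{-4N_e s}} \qquad \text{(Kimura, 1962; initial frequency } p = 1/2N \text{)}$$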
Interestingly, it was also Morgan (1909) who first suggested the randomness of mutation as a kind of metaphysical gambit, a working assumption that, so long as the origins of mutations remain a mystery, we will treat them as random and not entertain any ideas in which they have special properties.
“Whether definite variations are by chance useful, or whether they are purposeful are the contrasting views of modern speculation. The philosophic zoologist of to-day has made his choice. He has chosen undirected variations as furnishing the materials for natural selection. It gives him a working hypothesis that calls in no unknown agencies; it accords with what he observes in nature; it promises the largest rewards. He does not deny, if he is cautious, the possibility that there may be a purposefulness in the sense that organisms may respond adaptively at times to external conditions; for the very basis of his theory rests on the assumption that such variations do occur. But he is inclined to question the assumption that adaptive variations arise because of their adaptiveness. In his experience he finds little evidence for this belief, and he finds much that is opposed to it. He can foresee that to admit it for that all important group of facts, where adjustments arise through the adaptation of individuals to each other—of host to parasite, of hunter to hunted—will land him in a mire of unverifiable speculation.”
Morgan, 1909
Note again the stark contrast between the facts of history and the stories used in Synthesis gatekeeping, in which an association of “mutationism” with directed mutation has been fabricated repeatedly in the attempt to discredit both (Gardner, 2013; Svensson, 2023).
However, Morgan frequently noted that mutations happen at different rates. He and Punnett both believed that this was important for evolution, and might play a role in parallel evolution, citing cases like albino or melanic forms. Under a neo-Darwinian view, melanic forms are expected to emerge gradually, like the all-black rats in Castle’s experiments, from the gradual accumulation of many small differences; and the repeated appearance of melanism in different taxa would indicate that it is some kind of adaptive optimum. For the mutationists, the repeated occurrence of melanic forms suggested that such forms were readily mutationally accessible.
Vavilov (1922) took this idea of parallel evolution by parallel variations to extreme lengths. From his extensive observations of plants, especially crop species, he developed a theory that each major group of organisms has a set of characteristic variants that eventually manifest as distinct species, e.g., if family F has a tendency to produce long-eared forms, this tendency would manifest in genera G1, G2, … each having both long-eared and short-eared species within the genus. He also proposed a kind of mimicry— now called Vavilovian mimicry— that turns out to be quite important among domesticated crop species. In Vavilovian mimicry, the model is a cultivated species actively harvested and propagated by humans, and the mimic starts out as a weed that is eventually propagated by humans by virtue of mimicking the model in terms of the time of maturation, and similar responses to threshing and winnowing techniques. For instance, rye and oats are believed to be Vavilovian mimics that emerged in the context of wheat cultivation (see the Wikipedia article on Vavilovian mimicry).
With regard to species and speciation, the early geneticists tended to believe that reproductive incompatibilities were “the true criterion of what constitutes a species” (Punnett, 1911, p. 151)— a “biological species concept” that became the prevailing view with the Modern Synthesis (Mallet 2013). They allowed for different kinds of speciation, including speciation by non-Mendelian mutations like de Vriesian macromutations, but also by the accumulation of what we now call “Bateson-Dobzhansky-Muller” incompatibilities.
To summarize, the early geneticists opened up and explored a new field, considering a wide range of possibilities (excluding only Lamarckism) and contributing a number of key concepts to evolutionary genetics. Few people know of their accomplishments today because, in Synthesis Historiography, scientific progress only comes from people with the right Darwinian lineage, and not from critics of neo-Darwinism, who are treated as aliens or un-persons. For instance, in Synthesis Historiography, the credit for rejecting 19th-century views of heredity and introducing modern notions of hard inheritance is awarded, not to the geneticists responsible for this innovation, but to 19th-century physiologist and infamous mouse-torturer August Weismann. The Oxford Encyclopedia of Evolution does not have biographic entries for Bateson, de Vries, Punnett, or other early geneticists except for the entry on Morgan, which says nothing of his views of evolution, although he wrote 4 books on the topic. For a graphical example of how the early geneticists are treated as un-persons in Synthesis Historiography, read this.
Lucky mutant (sushi conveyor) dynamics
The lucky mutant version of mutationism is a focus on the regime of population genetics in which origination events are important, so that the timing and character of evolutionary change depend on the timing and character of events of mutation that introduce new alleles (or phenotypes). This is sometimes called “mutation-driven” or “mutation-limited” evolution. For me, “mutation-driven” evokes evolution by mutation pressure, so I don’t like the term, but I feel obliged to use it occasionally because this is what some readers recognize. The problem with “mutation-limited” is that, for the vast majority of readers, it suggests some kind of limit to the outcomes that selection can access, whereas for theoreticians this is a statement about dynamics.[3]
As a technical description of dynamics, “mutation-limited” behavior could mean either (1) behavior responsive to changes in u, or (2) the limiting behavior as u approaches 0, which is origin-fixation dynamics. When people like Dawkins (2007) invoke the idea that “evolution is not mutation-limited” as a way of discounting a focus on mutation, this only makes sense if it means that evolutionary behavior is not responsive to changes in u. That is what Dobzhansky and others stated explicitly: they said that changing the rate of mutation would not change the rate of evolution, due to the buffering capacity of the gene-pool.
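To make the contrast concrete (using the standard origin-fixation formula as a gloss, not a quotation from any of these authors): in the limiting regime, the rate of evolutionary change $K$ is the rate of introduction of new alleles multiplied by their probability of fixation,

$$K = 2N\mu \, \pi(s)$$

which is directly proportional to the mutation rate $\mu$. The gene-pool buffering view, by contrast, amounts to the claim that the rate of evolution is insensitive to changes in $\mu$.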
In other words, the most direct label for mutation-responsive dynamics would be “mutation-responsive dynamics” rather than “mutation-limited” or “mutation-driven” dynamics. I have also referred to the “sushi-conveyor” regime of population genetics, as distinct from the “buffet” regime.
Defining “mutationism” as a position on population genetics is not the most historically justifiable way to interpret the views of the early geneticists, because they were not very explicit about population genetics. However, it is how we might choose to see mutationism in retrospective contrast to the neo-Darwinian view of the Modern Synthesis. Darwin’s followers, in their dialectical encounter with the early geneticists, were most concerned to defend the power and creativity of selection, to defend gradualism, and to reject a lucky mutant view of dynamics. They did this by invoking the “buffet” regime of population genetics, in which evolution takes place by shifting the frequencies of alleles present in an abundant gene pool.
Consider again the example of melanic or albino morphs. The repeated occurrence of melanic morphs in related species might suggest to us the possibility of a common mutation to blackness that has occurred repeatedly. Under neo-Darwinism, by contrast, this would only happen by the accumulation of many small effects, i.e., in the same way that all-black rats emerged in the Castle experiment from the accumulation of many small variations. Note that the historic reception of Castle’s experiment illustrated the breadth of mutationist thinking: in a famous dispute with Castle and colleagues, members of Morgan’s group insisted that the gradual emergence of all black and all white rats was entirely consistent with incremental frequency shifts of small-effect alleles under the Mendelian multiple-factor theory, and did not require blending or transformation of hereditary factors, as Castle (under the influence of Darwin’s thinking) had argued.
If “mutationism” means the lucky-mutant view of sushi-conveyor dynamics, then we have seen a broad resurgence of mutationism in evolutionary biology, starting with the molecular evolutionists in the 1960s. See The shift to mutationism is documented in our language.
A transition to …
Finally we can think of mutationism not as a resting point or destination, but as an unstable transition-state on the path to something else. The most productive line of thought, perhaps, is that it points the way toward a paradigm of dual causation that combines functionalism and structuralism, with a major goal of partitioning variance in outcomes to variational and selective causes. A clear and direct recognition of dual causation is evident in statements of Vavilov (1922), e.g.,
“the role of natural selection in this case is quite clear. Man unconsciously, year after year, by his sorting machines, separated varieties of vetches similar to lentils in size and form of seeds, and ripening simultaneously with lentils. The same varieties certainly existed long before selection itself, and the appearance of their series [i.e., combinations], irrespective of any selection, was in accordance with the laws of variation.” (p. 85)
Here Vavilov combines two different kinds of dispositions in one theory, such that each disposition reflects a set of distinct causal processes that are invoked directly in historical explanations. Darwin’s followers would look at the same case and say that variation merely supplies raw material that selection shapes into adaptations, invoking two kinds of causal processes, only one of which is dispositional.
One sees a notion of dual causation expressed very abstractly by Vrba and Eldredge (1984), in their enhanced description of evo-devo thinking:
“Developmental biologists variously stress: (1) how indirect any genetic control is during certain stages of epigenesis; (2) that the system determines by downward causation which genomic constituents are stored in unexpressed form versus those which are expressed in the phenotype; (3) that bias in the introduction of phenotypic variation may be more important to directional phenotypic evolution than sorting by selection. This is in contrast to the synthesis, which stresses more or less direct upward causation from random mutations to phenotypic variants, with selection among the latter as the prime determinant of directional evolution.”
Instead of casting evolution as shifting gene frequencies, we can depict it more broadly as a process of the introduction and reproductive sorting of variation in a hierarchy of reproducing entities.[4] To the extent that evolution has any predictable tendencies, they reflect biases in introduction and biases in sorting. This is not simply a re-statement of the position of Vavilov or of Vrba and Eldredge, which is not based on any technical understanding of bias in the introduction of variation as a population-genetic mechanism.
The classical functionalist position of neo-Darwinism and the Modern Synthesis focuses on biases in reproductive sorting (i.e., selection) as the cause of everything interesting. The success of this research program is proof that effects of biases in reproductive sorting are profoundly important in evolution. However, the reason for the resurgence of interest in quasi-mutationist thinking— as an attempt to get beyond neo-Darwinism— is that selection does not actually govern evolution in the way that neo-Darwinism supposes. Selection is a directional factor, but not the directional factor. We can also pursue a research program based on the role of generative biases in evolution and, even more broadly, a research program that focuses on both biases in the introduction of variation and biases in the reproduction of variation, with the goal of quantifying their relative influence on the predictability of evolution.
References
Bateson W. 1894. Materials for the Study of Variation, Treated with Especial Regard to Discontinuity in the Origin of Species. London: Macmillan.
Bateson W, Saunders ER. 1902. Experimental Studies in the Physiology of Heredity. Reports to the Evolution Committee. London: Royal Society.
Davenport CB. 1909. Mutation. In: Fifty Years of Darwinism: Modern Aspects of Evolution. New York: Henry Holt and Company. p. 160-181.
Dawkins R. 2007. Review: The Edge of Evolution. International Herald Tribune. Paris. p. 2.
de Vries H. 1905. Species and Varieties: Their Origin by Mutation. Chicago: The Open Court Publishing Company.
Futuyma DJ. 2017. Evolutionary biology today and the call for an extended synthesis. Interface Focus 7:20160145.
Godfrey-Smith P. 2001. Three Kinds of Adaptationism. In: Orzack SH, Sober E, editors. Adaptationism and Optimality. Cambridge: Cambridge University Press. p. 335-357.
Gould SJ, Lewontin RC. 1979. The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist program. Proc. Royal Soc. London B 205:581-598.
McCabe J. 1912. The Story of Evolution.
Morgan TH. 1904. The Origin of Species through Selection Contrasted with their Origin through the Appearance of Definite Variations. Popular Science Monthly:54-65.
Morgan TH. 1909. For Darwin. Popular Science Monthly 74:367-380.
Morgan TH. 1916. A Critique of the Theory of Evolution. Princeton, NJ: Princeton University Press.
Nei M. 2013. Mutation-Driven Evolution. Oxford: Oxford University Press.
Novick R. 2023. Structure and Function. Cambridge: Cambridge University Press.
Poulton EB. 1909. Fifty Years of Darwinism. In: Fifty Years of Darwinism: Modern Aspects of Evolution. New York: Henry Holt and Company. p. 8-56.
Punnett RC. 1905. Mendelism. London: MacMillan and Bowes.
Punnett RC. 1911. Mendelism. MacMillan.
Segerstråle U. 2002. Neo-Darwinism. In: Pagel M, editor. Encyclopedia of Evolution. New York: Oxford University Press. p. 807-810.
Stamhuis IH. 2015. Why the Rediscoverer Ended up on the Sidelines: Hugo De Vries’s Theory of Inheritance and the Mendelian Laws. Science & Education 24:29-49.
Stoltzfus A, Cable K. 2014. Mendelian-Mutationism: The Forgotten Evolutionary Synthesis. J Hist Biol 47:501-546.
Svensson EI. 2023. The structure of evolutionary theory: beyond Neo-Darwinism, Neo-Lamarckism and biased historical narratives about the Modern Synthesis. In: Dickins TE, Dickins JA, editors. Evolutionary biology: contemporary and historical reflections upon core theory. Cham, Switzerland: Springer Nature.
Vavilov NI. 1922. The Law of Homologous Series in Variation. J. Heredity 12:47-89.
Vrba ES, Eldredge N. 1984. Individuals, hierarchies and processes: towards a more complete evolutionary theory. Paleobiology 10:146-171.
Notes
[1] The term “early geneticist” typically means scientists working on mutation and Mendelian inheritance in the first decade of the 20th century (thus Goldschmidt is not considered an early geneticist). The most influential ones were clearly Johannsen, de Vries, Bateson, Punnett, and Morgan. My claim that leading early geneticists did not use the term “mutationism” for their own views is based on published works of Bateson, Punnett, Morgan and de Vries. I’m not going to say they never used it, but I haven’t found a case. I found one instance where Davenport (1909) refers to the view of “the mutationist”. De Vries literally proposed a Mutationstheorie, so it is natural to call him a mutationist. But de Vries’s thinking was extremely complex and mainly non-Mendelian, and the other early geneticists developed their own views, not relying on de Vries’s thinking (for explanation, see Stoltzfus and Cable, 2014).
[2] I’m saying this as an established researcher who is not trying to get a job or tenure, or to curry favor with decision-makers. If you are a junior person, calling out the strawman arguments and shoddy historical scholarship used by influential gatekeepers poses risks to your career, and you should weigh those risks carefully. It’s perfectly all right to leave this fight to others who are not as vulnerable. We all have to pick our battles, and mine are not the same as yours.
[3] For instance, consider evolution under mutation bias on a smooth landscape with one peak. Ultimately the system goes to the peak: mutation places no limits in this sense. However, the rate and trajectory of the approach to the peak will reflect the rate and bias of mutations. So, the dynamics are mutation-responsive but the ultimate outcome and the ultimate level of fitness or adaptation is not limited by mutation. If multiple peaks or destinations are possible, then biases in introduction may be influential. Calling this mutation-limited evolution would just confuse people; saying that it isn’t mutation-limited also would give the wrong impression.
[4] Technically the list should be something more like “introduction, hereditary transmission and reproductive sorting” with biases possible in each process. Biased gene conversion is a transmission bias. So is meiotic drive. Effects of mutational hazard in the thinking of Lynch can be understood as biases in transmission, i.e., longer sequences have lower transmission due to mutational damage (mutational hazard is not an effect of introduction; although it is possible to cast it as a form of selection, that is weird IMHO).
Mutation bias: a systematic difference in rates of occurrence for different types of mutations, e.g., transition-transversion bias, insertion-deletion bias
Brandolini’s law: it takes 10 times the effort to debunk bullshit as to generate it
If I were to misdefine “negative selection” or “G matrix”, evolutionary biologists would go nuts because theories and results that are familiar would be messed up by a wrong definition. Likewise, a wrong definition of mutation bias is obvious to those of us who are actual experts, because it induces contradictions and errors in things we know and care about.
The actual usage of “mutation bias” by scientists is broadly consistent with a systematic difference in rates of occurrence for different types of mutations and is not consistent with a forward-reverse bias or with heterogeneity in rates of mutation for different loci or sites. To demonstrate this, here is a simple table showing which meanings fit with actual scientific usage, starting with the 3 types of mutation bias invoked most commonly in PubMed (based on my own informal analysis), and continuing with some other examples. The last two refer to the literature of quantitative genetics, which occasionally makes reference to bias in mutational effects on quantitative traits (either on total variability, or on the direction of effects).
| Effect called a “mutation bias” in the literature | Heterogeneity per locus (or site) | Forward-reverse asymmetry | Systematic diff in rates for diff types |
| --- | --- | --- | --- |
| Transition bias | No | No | Yes |
| GC/AT bias | No* | Yes | Yes |
| Male mutation bias | No | No | Yes |
| pattern in Monroe, et al (2022) | Yes* | No | Yes |
| Insertion or deletion bias | No | Yes | Yes |
| CpG bias | No | Possibly | Yes |
| Diffs in mutational variability of traits | Possibly | No | Yes |
| Asymmetric effect on trait value | No | Possibly | Yes |
In the first column are kinds of effects that scientists denote with the literal term “mutation bias” or variants thereof (mutational bias, bias in mutation). The remaining columns indicate whether the noted effect is covered by a definition of mutation bias that also appears in the literature. “Possibly” means that some models of the bias would fit the definition and others would not. CpG bias can’t be modeled correctly as a site-wise bias because it influences transitions and transversions quite differently. The “No” with asterisk means that you could try to model GC/AT bias as a site-wise bias, but this approach will soon break down as sequences change, because mutability is not actually an intrinsic property of a position, but of the sequence context at a position. Likewise, the “Yes” with asterisk means that, whereas Monroe, et al. are usually putting the focus on regional differences in mutation rate, the detailed pattern is not merely a difference in rates per site, because the underlying model of contextual effects involves things like transition bias and GC/AT bias.
How does one concept of “mutation bias” cover such heterogeneity? Every mutation has a “from” and a “to” state, i.e., a source and a destination. A variety of different genetic and phenotypic descriptors can be applied to these “from” and “to” states, which means that we can define many different categories or types of mutations. Different applications of the concept of mutation bias always refer to types whose rates differ predictably, but there are many different ways of defining types, so there are many different possible mutation biases.
Let’s consider transition-transversion bias, GC vs. AT bias, and male mutation bias. The first is defined relative to the chemical categories of purine (A or G) and pyrimidine (C or T): we apply these categories to the source and destination states, and if they are in the same category, that is a transition, otherwise it is a transversion. The second example, GC/AT bias, is based on whether the shift from the “from” to the “to” increases or decreases GC content. This can be defined either as a forward-reverse asymmetry, or as a difference in mutability of the “from” state, e.g., if A and T are simply more mutable than G and C, the result is a net bias toward GC. In the case of male mutation bias, the categories of mutation are defined by whether the “from” context is male or female.
Note that transition-transversion bias is not a site-wise bias: every nucleotide site is the same in the sense of having 1 transition and 2 transversions (one blue arrow and 2 red arrows in the figure above). Also, transition bias is not a forward-reverse bias, but a difference between two types of fully reversible rates, e.g., under transition bias, the transitions A —> G and G —> A both have a higher rate than the transversions A —> T and T —> A. An insertion-deletion bias is a forward-reverse bias, but it is not a site-wise bias, in the sense that every site has the same set of possible insertions and deletions.
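To illustrate this “from/to” logic in code, here is a minimal sketch (a hypothetical helper of my own, not from any cited source) that applies two category schemes to the same point mutation: the purine/pyrimidine scheme that defines transitions vs. transversions, and the GC-content scheme that defines GC/AT direction.

```python
PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def classify(src, dst):
    """Apply two category schemes to the (from, to) states of a point mutation."""
    same_class = ({src, dst} <= PURINES) or ({src, dst} <= PYRIMIDINES)
    ts_tv = "transition" if same_class else "transversion"
    gc_shift = (dst in "GC") - (src in "GC")  # +1 toward GC, -1 toward AT, 0 neither
    return ts_tv, gc_shift

# From any starting base there is 1 transition and 2 transversions,
# which is why transition bias is not a site-wise bias:
for dst in "CGT":
    print("A ->", dst, classify("A", dst))
# A -> C ('transversion', 1); A -> G ('transition', 1); A -> T ('transversion', 0)
```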
Thus, defining mutation bias as “differences between loci in mutation rates” (Svensson, 2022) is inconsistent with transition bias, GC/AT bias, and male mutation bias, the 3 most familiar and commonly invoked types of mutation bias in the scientific literature. The magnitude of this error is roughly the same as that of defining “genome” as the RNA molecules that store hereditary information. Some genomes are indeed made of RNA. We can imagine a novice RNA virus researcher, e.g., a summer student, who hears everyone in the lab talking about the “genome”, which is RNA, and who assumes on this basis that all genomes are RNA. But no experienced scientist who has worked with a variety of organisms, read widely, or attempted to teach students would make this kind of error of defining something in a way that excludes the most familiar cases.
Why is this called a “bias”? “Mutation bias” (“mutational bias”, “bias in mutation”) has been a term of art in molecular evolution for over half a century, since Cox and Yanofsky (1967). The term is perfectly apt and useful. A bias is a systematic or predictable asymmetry, and the term is most congenial when this asymmetry applies to categories with some structural symmetry, e.g., insertions vs. deletions. The term is used this way in various areas of science and engineering, e.g., a biased estimator in statistics is one that yields a systematically low or high estimate.
Nonetheless, some evolutionary biologists don’t want you to have this useful term in your vocabulary. Some will object that “bias” should be avoided because it implies an effect on fitness, but that is just because some people think everything is about fitness and want to restrict your language to force you into their belief system. Salazar-Ciudad rejects the use of “bias” on the grounds that it implies an error or distortion. Yes, in statistics the term is used to indicate sources of distortion or error relative to a true value, but this is a narrow 20th-century technical meaning, whereas the usage of “bias” in the English language, in the broader sense of a systematic asymmetry or tendency, is much older than this.
We also expect that traditionalists will dilute the concept of mutation bias as part of a cultural appropriation strategy (based on what we have seen here, here and in a recent anonymous review). That is, traditionalists will undermine the distinctive concept of mutation bias by blurring it together with chance effects, contingency, or heterogeneity, because this makes it easier for them to broaden the scientific issue and then claim that nothing is new using “we have long known” arguments, e.g., statements like “we have long known that mutation rates are not all the same” will be used to dilute the key concept, followed by “this just sounds like new words for old concepts” to undermine a claim of novelty.
The problem with this line of argument, as a critique of work highlighting the role of arrival biases, is that systematic and patterned differences in properties between classes of things are not the same thing as idiosyncratic or unpatterned heterogeneity among a set of items. More importantly, what is novel is not the claim that mutation biases exist, but the linking of them to biases in evolutionary outcomes, both theoretically (via a pop-gen mechanism of arrival biases) and empirically (via results showing effects of mutation bias on adaptive changes). However, the traditionalists have a lot of power, which means that they can set the terms of debate and reframe things using straw-man arguments and excluded-middle arguments. For instance, “we see nothing revolutionary with X” is utterly devoid of merit, but it has been an effective go-to argument for traditionalists in online discussions or when talking to reporters. It is a very easy argument to make and can be applied to the novelty of arrival biases or any other idea. As a rhetorical device, it can be coupled very effectively with a misrepresentation of X that broadens it into something trivial, e.g., rather than saying
“we see nothing revolutionary with how this formal body of theory on arrival biases creates a structural equivalence between mutational and developmental biases that was not known to exist previously”
instead say
“we see nothing revolutionary with a theory that applies both to molecules and morphologies— we have long used such models”.
The defense of tradition often relies on fatuous arguments that broaden and trivialize new findings. Exploring them is a useful exercise to build awareness. I wish that reporters knew how to recognize this pattern of minimization.
By the way, Wikipedia gets the definition of mutation bias right. But many other sources get this wrong and say wrong things, e.g.,
“Mutation bias. A pattern of mutation in DNA that is disproportional between the four bases, such that there is a tendency for certain bases to accumulate.” (Encyclopedia.com)
“Mutation bias. Bias in the mutation frequencies of different codons, affecting the synonymous to nonsynonymous rate ratio. Mutation bias results in an accelerated rate of amino acid replacement in functionally less constrained regions.” [that statement is not true] (Oxford Reference)
And many sources simply do not define the term because it is not on the radar for most evolutionary biologists.
References
E. Cox and C. Yanofsky. Altered base ratios in the DNA of an Escherichia coli mutator strain. Proc. Natl. Acad. Sci. USA, 58:1895–1902, 1967.
The dismissal-resistance-appropriation pattern called the “stages of truth” has been noted for well over a century (Shallit, 2005). Zihlman’s reference (above) to “the same critics” suggests that the attempt to appropriate a new idea for tradition overlaps with the stage of active resistance. This blog about appropriation uses recent examples to illustrate how pundits normalize or domesticate new results, claiming them for tradition. These examples indicate that the gatekeepers who challenge the validity and importance of a new idea are the same people who attack its originality, e.g., by quote-mining older sources and mashing up new ideas with old ones, to make the new ideas seem more incremental.
This presents a dual challenge to scientists advancing the case for new theories: (1) the challenge of communicating the new idea and making a case for its importance and plausibility, and (2) the struggle to defend against the efforts of gatekeepers to undermine the idea and muddy the waters by blending it with old ideas.
Dismissal, resistance, and appropriation are all evident in reactions to the theory of arrival biases proposed by Yampolsky and Stoltzfus (2001). Lynch (2007) writes that “The notion that mutation pressure can be a driving force in evolution is not new” (explained here), citing Charles Darwin, Yampolsky and Stoltzfus, and about a half-dozen others, none of whom proposed a recognizable theory of evolution by mutation pressure. The first major commentary on mutation-biased adaptation came out in TREE in 2019: it was a hit-piece that misrepresented the theory and the evidence, and offered multiple attempts at appropriation— the theory is merely part of the neutral theory, it comes from Haldane and Fisher, it is nothing more than contingency, it is the traditional view, etc.[1]
Recently Cano, et al. (2022) showed an effect of mutation-biased adaptation predicted from theory, and one press release framed this effort literally as “helping to return Darwin’s second scenario to its rightful place in evolutionary theory” as if the main idea came from Darwin.
My focus here is not on this process of appropriation or theft that re-assigns credit for our work to others who are more famous, but on the issue of novelty, and how to evaluate it. The kind of novelty that matters in science is untapped potential; assigning credit is a separate issue (think of Mendelian genetics: in 1900, this was a novel idea with huge untapped potential, though an unknown monk had published the idea decades earlier). The theory of arrival biases— proposing a specific population-genetic linkage between tendencies of variation and predictable tendencies of evolution— appeared only in 2001 and is still unknown to most evolutionary biologists. It is not found in textbooks, or in the Oxford Encyclopedia of Evolution, or in the canonical texts of the Modern Synthesis, or in the archives of classical population genetics.
Something is profoundly wrong with suggesting that this theory is not new. Clearly we need a better way to evaluate the novelty of scientific claims than asking whether the claims have thematic content that can be linked back to dead authorities via vague statements.
Here I’m going to use the patent process as a point of reference for evaluating novelty. Patenting an invention and proposing a theory are two different things, but I think the comparison is useful. The law is often where philosophy meets practice: where abstract principles become the basis for adjudicating concrete issues between disputing parties— with life, liberty and treasure at stake. Note that patent law combines credit and novelty: it is about who gets to claim the untapped potential of an invention.
The theory of novelty underlying patent law hinges on non-obviousness. Under US patent law, a successful patent application shows that an invention meets the 4 criteria of eligibility (patentability), newness, usefulness, and non-obviousness. Eligibility is mostly about whether the proposed invention is a manufacturing process, rather than something non-patentable. I will set aside that criterion as irrelevant for our purposes. An invention must have the potential to do something useful.
An invention is new if it is not found in prior art. In patent law, prior art is defined in a very permissive way, to include any prior representation, whether or not the invention was ever manufactured or made public (whereas in science, we might want to restrict the scope of prior art to published knowledge available, for instance, via libraries).
Thus, newness is easy to understand: make one small improvement on prior art, and that is considered new under patent law. But improvement or newness is not enough. In order to be patentable, a new and useful invention must be a non-obvious improvement— non-obvious to a practitioner or knowledgeable expert. In the patent law for some other countries (e.g., Netherlands), the latter two criteria are sometimes combined by saying that the invention must be inventive, meaning both new and non-obvious. An example of an obvious improvement would be to take a welding process widely used with airframes and apply it to bicycle frames.
The most distinctive (i.e., potentially non-obvious) aspects of the theory from Yampolsky and Stoltzfus (2001) are that (1) it focuses on the introduction process (the transient of an allele frequency as it departs from zero) as a causal process, distinguishing dual causation by origin and fixation (in contrast to the singular conception of population-genetic causes as mass-action forces that shift allele frequencies); (2) it links tendencies of evolution to tendencies of variation without requiring neutrality or high mutation rates, in contrast to the mutation-pressure theory of Haldane and Fisher; and (3) it purports to unite disparate phenomena including effects of mutation bias, developmental bias, and the findability of intrinsically likely forms.
The non-obviousness of a pop-gen theory of arrival biases
As outlined above, we may consider the theory of arrival biases as a useful and possibly non-obvious improvement on prior art. Prior to 2001, what was the state of the art (in evolutionary thinking) in regard to the potential for internal variational biases to induce directional trends or tendencies in evolution? This general idea was believed to be incompatible with population genetics, based on the opposing pressures argument of Haldane and Fisher, which says that, because mutation rates are small, mutation is a weak pressure, easily overcome by the opposing force of selection.
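The weakness of mutation as a pressure on frequencies is easy to show numerically. Below is a rough sketch using standard deterministic approximations (the parameter values are mine, chosen only for illustration): per generation, mutation shifts the frequency of a variant by roughly u(1 − q), while even modest selection shifts it by roughly sq(1 − q), so selection dominates whenever s is much larger than u.

```python
# Opposing pressures, in round numbers (illustrative values; haploid,
# deterministic per-generation approximations).
u = 1e-6   # mutation rate toward the variant
s = 0.01   # selective disadvantage of the variant
q = 0.5    # current frequency of the variant

delta_mutation = u * (1 - q)         # pressure raising q: ~5e-7
delta_selection = -s * q * (1 - q)   # pressure lowering q: ~-2.5e-3
print(delta_mutation, delta_selection)
# selection is ~5000-fold stronger here; the balance point is only q ~ u/s = 1e-4
```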
More broadly, these are the most relevant pieces of prior art from the corpus of population genetics and evolutionary theorizing (to my knowledge, based on years of searching):
the verbal theory of Vrba and Eldredge (1984). See note 2.
Mani and Clarke (1990), a theory paper, which shows that mutational order is influential, but treats it as a purely stochastic variable, rather than proposing a theory of biases and generalizing on that
None of these sources actually proposes a population-genetic theory of arrival biases and distinguishes it from evolution by mutation pressure. This means that the theory is new, but it does not, by itself, mean that the theory is non-obvious. An idea could be obvious but no one writes it down because no one cares enough to do so. Indeed, historically, the people with the population-genetics expertise are not the same people as the structuralists searching for internal causes. Population geneticists are obsessed with selection, Fisher, selection, standing variation and selection. We could read stacks of population genetics papers without finding out what anyone thinks about generative biases, which might as well be a reference to voodoo for most population geneticists.
We could solve this problem with a time machine: go back into the 20th century, and get some theoreticians together with internalist-structuralist thinkers to see if they could combine the classic idea of internal variational biases with population genetics. For instance, we could go back to the 1980s and get together some population geneticists with those early evo-devo pioneers (e.g., Pere Alberch) who were literally calling for attention to developmental biases in the introduction of variation. Maybe we could add some philosophers of science.
In fact, we do not need a time machine to understand how population geneticists would address structuralist thinking about the role of variation, because this meeting of the minds actually happened, with results recorded in the scientific literature. In the late 1970s and early 1980s, Gould, Alberch and others began to suggest some kind of important evolutionary role for developmental “constraints” that was not included in traditional thinking, as described on the wikipedia page for arrival bias:
Similar thinking [about generative biases acting prior to selection] featured in the emergence of evo-devo, e.g., Alberch (1980) suggests that “in evolution, selection may decide the winner of a given game but development non-randomly defines the players” (p. 665)[23] (see also [24]). Thomson (1985), [25] reviewing multiple volumes addressing the new developmentalist thinking— a book by Raff and Kaufman (1983) [26] and conference volumes edited by Bonner (1982) [27] and Goodwin, et al (1983) [28] — wrote that “The whole thrust of the developmentalist approach to evolution is to explore the possibility that asymmetries in the introduction of variation at the focal level of individual phenotypes, arising from the inherent properties of developing systems, constitutes a powerful source of causation in evolutionary change” (p. 222). Likewise, the paleontologists Elisabeth Vrba and Niles Eldredge summarized this new developmentalist thinking by saying that “bias in the introduction of phenotypic variation may be more important to directional phenotypic evolution than sorting by selection.” [29]
They are literally talking about biases in the introduction of variation. In 1984, a group of scientists and philosophers, all highly regarded, convened at the Mountain Lake biological station to consider how development might shape evolution. In 1985, these 9 eminent scientists and philosophers collaborated to publish “Developmental constraints and evolution”, now considered a landmark paper cited ~1800 times:
John Maynard Smith, population geneticist trained with Haldane
Richard Burian, philosopher of science
Stuart Kauffman, later wrote the Origins of Order
Pere Alberch, developmental biologist and evo-devo pioneer
John H Campbell, evolutionary theorist and philosopher of science
Brian Goodwin, developmental biologist, author of How the Leopard got its Spots
Russ Lande, developer of the multivariate generalization of QG (trained with Lewontin)
David Raup, paleontologist
Lewis Wolpert, developmental biologist
The authors raised the question of what might give developmental biases on the production of variation a legitimate causal status, i.e., the ability to “cause evolutionary trends or patterns.” The only accepted theory of causation was that evolution is caused by the forces of population genetics, i.e., mass-action pressures acting on allele frequencies. Maynard Smith et al. confronted the issue by invoking the prior art of Haldane and Fisher, i.e., the opposing-pressures argument.
Although they clearly call for a “reexamination”, they did not provide one, other than a vague suggestion of neutral evolution, which is unsatisfactory because a proposal based on neutrality, though consistent with Haldane-Fisher reasoning, is not a satisfactory foundation for the claims of evo-devo.
Another way of articulating the prior art would be to point to the verbal theories noted above, e.g., the highly developed example of Vrba and Eldredge (1984).[2] Is the population-genetic theory of Yampolsky and Stoltzfus (2001) an obvious improvement on this verbal theory? Does it merely supply the math that would be obvious from reading Vrba and Eldredge? Again, we can answer this question by reading Maynard Smith, et al (1985), because they are clearly representing the verbal theory from evo-devo that Pere Alberch (one of the authors) promoted. So, if a population-genetic theory of arrival biases was an obvious clarification of this verbal theory, then Pere Alberch could have asked John Maynard Smith and Russ Lande to translate his verbal theory so as to yield a proper population-genetic grounding for his claims. Clearly that did not happen.
I could cite some other examples, but the case could be made entirely on Maynard Smith, et al. (1985). Why does a single paper make such a strong case? First, the author list includes a set of people who are clearly experts on the relevant topics. Second, the authors were clearly focused on the right issue, and were clearly motivated to find a theory to account for the efficacy of biases in variation to influence the course of evolution, arguably the central claim of the paper. Third, this was a seminal paper that got lots of attention and became highly cited (today: 1800 citations). This means that hundreds of other experts must have read and discussed the paper: if this particular set of 9 authors had missed something, others would have pointed it out in response. Instead, years later, critics such as Reeve and Sherman (1993) complained that Maynard Smith, et al. simply re-state the idea of developmental biases in variation, without an evolutionary mechanism linking these biased inputs to biased outputs.
Thus, experts confronted the issue of how biases in variation could act as a population-genetic cause, citing the prior art of Haldane and Fisher. They repeated some of the verbal claims of the developmentalists, but they did not find a causal grounding for these claims in a population-genetic theory of arrival biases.
QED, a population-genetic theory of arrival biases was non-obvious in 1985.
More non-obviousness with Maynard Smith and Kauffman
To further explore this issue of non-obviousness, let us consider John Maynard Smith’s knowledge of King’s (1971) codon argument, which can be read as an early intuitive appeal to an implication of arrival biases. King argued that the amino acids with more codons would be more common in proteins (as indeed they are) because they offer more options, explaining this with an example implying an effect of variation and not selection. Originally, King and Jukes (1969) proposed this as an implication of the neutral theory, but King (1971) quickly realized that this did not depend on neutrality, but would happen even if all changes were adaptive [3].
The way we would explain this today is that the genetic code is a genotype-phenotype map that assigns more codon genotypes to certain amino-acid phenotypes. Because these amino acids occupy a greater volume of the sequence-space of genotypic possibilities, they have more mutational arrows pointed at them from other parts of sequence space: this makes them more findable by an evolutionary process that explores sequence space (or genotype space) via mutations. Because of this phenomenon of differential findability reflecting what Ard Louis calls “the arrival of the frequent”, the amino acids with the most codons will tend to be the most common in proteins. This argument does not require neutrality, but merely a process subject to the kinetics of introduction, i.e., it is an argument about kinetic bias. The form of the argument maps to a more general argument about the findability of intrinsically likely phenotypes, which is one of the meanings of Kauffman’s (1993) concept of self-organization.
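For readers who want to see the raw numbers behind the codon argument, here is a minimal sketch that tallies codons per amino acid from the standard genetic code and prints the frequencies expected if amino acids were found in proportion to their share of codon space (uniform codon usage is assumed purely for illustration).

```python
from collections import Counter
from itertools import product

# Standard genetic code as a 64-character string, codons ordered by T, C, A, G
# at each position ("*" marks stop codons).
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
table = dict(zip(("".join(c) for c in product(BASES, repeat=3)), AMINO))

counts = Counter(aa for aa in table.values() if aa != "*")
total = sum(counts.values())  # 61 sense codons
for aa, n in counts.most_common():
    print(f"{aa}: {n} codons, expected frequency {n / total:.3f}")
# Leu, Ser and Arg (6 codons each) are the most findable targets;
# Met and Trp (1 codon each) are the least.
```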
Maynard Smith literally invented the concept of sequence space (Maynard Smith, 1970). He also knew about King’s codon argument, which he quoted in his 1975 book, in a passage contrasting the neutralist and selectionist views:
“Hence the correlation does not enable us to decide between the two. However, it is worth remembering that if we accept the selectionist view that most substitutions are selective, we cannot at the same time assume that there is a unique deterministic course for evolution. Instead, we must assume that there are alternative ways in which a protein can evolve, the actual path taken depending on chance events. This seems to be the minimum concession the selectionists will have to make to the neutralists; they may have to concede much more.”
p. 106 of Maynard Smith J. 1975 (same text in 1993 version). The Theory of Evolution. Cambridge: Cambridge University Press.
This just drives home the point about non-obviousness even further. To someone who knows the theory already, King’s argument might look familiar, but Maynard Smith does not recognize a theory connecting generative biases with evolutionary biases that would be useful for understanding evo-devo and solving the challenge of “constraints”. Instead, he refers only to a theory that allows “chance events” to affect the outcome of evolution.[4]
Years later, Kauffman (1993) published The Origins of Order, offering findability arguments under the heading of “self-organization.” In Kauffman’s reasoning, selection and “self-organization” work together to produce order. But Kauffman never offered a causal theory explaining self-organization in terms of population-genetic processes, so his claims were a bit of a mystery to population geneticists (though his results were never doubted). His simulations typically did not include population genetics in the usual sense, but depicted evolutionary change as a series of steps taken by a discrete particle (representing the population) in a discrete space. Thus, the models do not treat introduction and fixation as separate processes.
But there is no longer any mystery: the findability effect emerges from the way that differential representation of phenotypes in genotype-space induces biases in the introduction process, which lead evolution toward the most highly represented forms. This “arrival of the frequent” effect has been demonstrated clearly (in regard to the findability of RNA folds) in work from Ard Louis’s group (see Dingle, et al., 2022). So, we can count Kauffman (1993) as another famous example illustrating non-obviousness, because clearly the theory was non-obvious to Kauffman, and his book was widely read and discussed by evolutionary biologists. If Kauffman had missed a population-genetic basis for self-organization that was obvious to other thinkers, then those other thinkers could have supplied this missing argument. They didn’t, apparently.
This is an important lesson for traditionalists who seem to assume mistakenly that theories are timeless universals (per Platonic realism) and that they are all obvious from assembling the parts. John Maynard Smith had the parts list, in a sense, and he had the chance to glimpse the issue from multiple angles. He got useful clues from the thinking of Jack King. In the circumstances leading up to the famous 1985 paper, Maynard Smith and another brilliant population geneticist (Russ Lande) were placed (figuratively) on an island with developmentalist-structuralist thinkers and they were tasked with making sense of the causal role of developmental biases. In the end, they did not articulate a theory of arrival biases as a potential solution to this problem. Maynard Smith and Lande remained active as theoreticians after 1985 and did not discover the theory.
This should not be taken as a criticism of the abilities of any of these scientists. This is just how reality works, contrary to the assumptions of traditionalist pundits. Maynard Smith himself understood that theories can be elusive. I had the opportunity to meet him several times in the 1990s when he was an external advisor to the Canadian Institute for Advanced Research program in evolutionary biology. He was a great one for sharing stories and talking science with students and post-docs over a beer. He once told a series of humorous anecdotes about scientific theories he almost discovered. In one case, he had run some numbers on the logistic growth equation and found some odd behavior, and then set the problem aside— only to realize much later that he had stumbled on deterministic chaos. The neutral theory was another theory that he almost-but-not-quite discovered. I don’t recall the other examples.
The point here is that scientists, even brilliant ones, can fail to see possibilities that seem obvious in retrospect. When Huxley first read the Origin and learned of Darwin’s theory, he said to himself “How extremely stupid not to have thought of that!”
Confronting attempts at appropriation and minimization
Now, with this background in mind, we can reconsider attempts to appropriate the theory of arrival biases or undermine its novelty. Lynch (2007) and Svensson and Berger (2019) appear to be suggesting that the theory is not new, but is merely part of a tradition going back a century or more.
In science, the way to establish X in prior art is to find a published source and then cite it. Let us consider what this might look like:
“Stoltzfus and Yampolsky (2001) were not the first to propose and demonstrate a population-genetic theory for the efficacy of mutational and developmental biases in the introduction of variation: such a theory was already proposed and demonstrated by Classic Source and subsequently was cited in reviews such as Well Known Review and textbooks such as Popular Textbook”.
But of course, Lynch, Svensson and Berger say nothing like this, because they can’t: no such sources exist. In appropriation arguments, the gap between intention and reality is filled with misleading citations and hand-waving. Returning to the analogy with patents: if this were a dispute over the novelty of a patent claim, every one of the sources cited by Lynch, and every one of the arguments of Svensson and Berger (2019), would be dismissed as irrelevant, because they simply do not establish the existence, in prior art, of a recognizable theory of biases in the introduction of variation as a cause of orientation or direction in evolution.
For instance, Svensson and Berger say that the theory is part of the neutral theory, citing no source for this claim. Kimura wrote an entire book about the neutral theory, along with hundreds of papers. Surely, if Svensson and Berger were serious, they could cite a publication from Kimura that expresses the theory. But they have not done this. None of the sources that they cite articulates a theory of arrival biases. Their insinuation that our work-product is not original relative to work cited from Kimura, Haldane, Fisher, or Dobzhansky must be rejected as not merely false, but as a frivolous and damaging misrepresentation that deprives scientists of proper credit for their work (e.g., note how, in their Box 1, Haldane, Fisher and Kimura are named, but Yampolsky and Stoltzfus are not). Lynch’s (2007) bad take (analyzed here) is equally frivolous, but it is merely a fleeting expression of ignorance rather than a concerted attempt to undermine unorthodox claims.
Now, having dismissed attacks on the newness of the theory, let us consider the critique of Svensson and Berger (2019) as an attack on non-obviousness, i.e., they may be insinuating that the theory, though it improves on prior art, fails to satisfy the criterion of being non-obvious, and therefore is not novel, not a genuine invention worthy of recognition. This is one way to read their argument in Box 1, in which they build on work from Haldane, Fisher and Kimura, going from an origin-fixation formalism to a key equation from Yampolsky and Stoltzfus (2001) that expresses the bias on evolution as a ratio of probabilities Pi / Pj = (ui si) / (uj sj) = (ui / uj)(si / sj), reflecting both biases in introduction and biases in establishment.
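For concreteness, here is that equation as a one-line calculation (the notation follows the text; the parameter values are mine, and the 2s approximation for the probability of fixation is the usual weak-selection shortcut). It shows how a bias in introduction can outweigh an opposing bias in establishment.

```python
def outcome_bias(u_i, u_j, s_i, s_j):
    # P_i / P_j = (u_i * pi(s_i)) / (u_j * pi(s_j)), with pi(s) ~ 2s; this
    # factors into a mutational term (u_i/u_j) and a selective term (s_i/s_j).
    return (u_i * 2 * s_i) / (u_j * 2 * s_j)

# Illustrative values: a 10-fold mutation bias toward option i, against a
# 2-fold selective advantage of option j, still favors i by 5-fold overall.
print(outcome_bias(u_i=1e-8, u_j=1e-9, s_i=0.01, s_j=0.02))  # 5.0
```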
The implication is that the theory is obvious because one can put it together from readily available bits. But this just begs the question: does the derivation in Box 1 show that the theory is obvious? If anyone can put the theory together from readily available bits, why didn’t they? Why didn’t Fisher and Haldane do this in the 1930s? Why didn’t Kimura, Lewontin or King in the 1970s? Why didn’t Maynard Smith and Lande in the 1980s?
The attitude of Svensson and Berger (2019) seems to be a case of inception: we planted this theory in their heads, and now they see it everywhere. Yet before we proposed this theory, vastly greater minds than Svensson and Berger had access to the same Modern Synthesis canon and failed to see the theory.
Clearly mathematical cleverness is not the right standard for judging the non-obviousness of scientific theories. An equation is, at best, a model of a theory, useful if you already understand what the theory says. Writing down an equation with the form of a ratio of origin-fixation rates (as in Box 1 of Svensson and Berger) does not magically cause a theory to form in your head. For instance, Lewontin has a structurally similar equation on p. 223 of his 1974 book: it is a ratio of origin-fixation rates for beneficial vs. neutral changes, reducing to 4Ns ub / un. If this equation caused the theory of arrival biases to form in Lewontin’s head, then surely he would have included the theory in the 1979 Spandrels paper, and this would have provided a much more solid grounding for the claims of Gould and Lewontin about the role of non-selective factors in evolution. That didn’t happen (note that, at this time, Lewontin was also familiar with King’s codon argument [5]).
Why was a population-genetic theory of arrival biases so non-obvious? My sense is that the late discovery of this theory reflects a blind spot in evolutionary thinking, the combined effect of habitual approaches to theoretical modeling (e.g., solving for equilibrium behavior), specific notions of causation (e.g., favoring mass action and determinism), the overwhelming influence of neo-Darwinism (selection governs evolution and variation is just a source of random raw materials), and particularly, the threat of ridicule in a neo-Darwinian culture that habitually relies on strawman arguments to link internalist thinking with mysticism and vitalism.
However, understanding why the theory was non-obvious is a separate issue. Whether or not we understand why the theory eluded generations of scientists, it clearly did. The theory may seem obvious today, but it clearly was not obvious in the past, and the empirical proof of this non-obviousness is that Maynard Smith, Lande, Kauffman, Lewontin and many other well qualified evolutionary thinkers had the motive and the opportunity to propose this theory and they did not.
References
King JL, Jukes TH. 1969. Non-Darwinian Evolution. Science 164:788-797.
King JL. 1972. The Role of Mutation in Evolution. In: Sixth Berkeley Symposium on Mathematical Statistics and Probability. Berkeley, California.
King JL. 1971. The Influence of the Genetic Code on Protein Evolution. In: Schoffeniels E, editor. Biochemical Evolution and the Origin of Life. Viers: North-Holland Publishing Company. p. 3-13.
Mani GS, Clarke BC. 1990. Mutational order: a major stochastic process in evolution. Proc R Soc Lond B Biol Sci 240:29-37.
Maynard Smith J. 1970. Natural selection and the concept of a protein space. Nature 225:563-564.
Maynard Smith J. 1975. The Theory of Evolution. Cambridge: Cambridge University Press.
Maynard Smith J, Burian R, Kauffman S, Alberch P, Campbell J, Goodwin B, Lande R, Raup D, Wolpert L. 1985. Developmental Constraints and Evolution. Quart. Rev. Biol. 60:265-287.
Oster G, Alberch P. 1982. Evolution and bifurcation of developmental programs. Evolution 36:444-459.
Reeve HK, Sherman PW. 1993. Adaptation and the Goals of Evolutionary Research. Quarterly Review of Biology 68:1-32.
Shallit J. 2005. Science, Pseudoscience, and The Three Stages of Truth. PDF
Stebbins GL, Lewontin RC. 1971. Comparative evolution at the levels of molecules, organisms and populations. In: Sixth Berkeley Symposium on Mathematical Statistics and Probability. Berkeley, California.
Vrba ES, Eldredge N. 1984. Individuals, hierarchies and processes: towards a more complete evolutionary theory. Paleobiology 10:146-171.
Notes
[1] When I say “hit piece”, I mean that TREE solicited, reviewed and published a highly negative piece without getting feedback from the people whose work was targeted. When we objected to the garbage they published, we were not given space for a rebuttal. This is what institutionalized gatekeeping looks like. We were not given a seat at the table.
[2] The piece by Vrba and Eldredge, 1984 is part of the paleo debate of the 1970s and 1980s. They use very general abstract language, following on earlier authors such as Oster and Alberch. They refer to biases in the introduction (or production) of variation and to sorting (reproductive sorting, to include selection or drift), making a neat dichotomy that they apply at each level of a hierarchy. So, clearly, they see this as a fundamental kind of causation that can be extrapolated to a hierarchy. One strange thing about the argument is the implicit assumption that biases in the introduction of variation are a kind of causation already recognized at the population level, which is not correct. So, the argument is not situated properly. And, because it includes no demonstration, one could not be sure (in 1984) that the theory would actually work. Also, because it has no demonstration, and because of the way it refers to prior art, we cannot be sure that references to “introduction” or “production” are references to the population-genetic operator that is coincidentally called “introduction” in the theory of arrival biases. In fact, it seems quite certain that Vrba and Eldredge could not have had a clear conception of the introduction process, because even the population geneticists of the time did not have a clear conception.
[3] King 1971 presents a verbal argument with a concrete example that is the closest thing I have seen to an earlier statement of the theory of Yampolsky and Stoltzfus (2001). First, he clearly means for this idea to be general. That is the significance of the diagram with the arrows coming out from a point in a blank space, with some pointing up (beneficial), some laterally (neutral), and many down (deleterious). I have used diagrams like that myself. But he has almost nothing to say on where the biases come from. His motivating example is the genetic code, but he does not cite any other example. He does not reference the prior art of Haldane and Fisher and does not explain how his theory is different. And he does not offer proof of principle other than the verbal model.
[4] I have seen this kind of reaction many times. Saying that chance affects evolution is a familiar thing. Evolutionary biologists are accustomed to a dichotomy of selection (or necessity) and chance, and it is familiar to invoke “chance” as if it were a cause. But it is not familiar in evolutionary biology to refer to generative processes as evolutionary causes that impose biases on the course of evolution. As of today, there is no language for this that is acceptable to traditionalists. So, when traditional thinkers are confronted with the theory of Yampolsky and Stoltzfus (2001), they often translate this into a familiar selection-vs-chance dichotomy and say that the theory of arrival biases is a theory about how “chance” affects evolution, or they link this to “contingency.” But the theory of arrival biases is not a theory about how chance and contingency affect evolution. It is a theory about how arrival biases affect evolution.
[5] Stebbins and Lewontin (1971) address King’s codon argument in their general rebuttal of neutralist arguments. This paper appears in a symposium volume together with one of King’s papers. So, perhaps all three scientists were present together at this symposium in Berkeley. Stebbins and Lewontin dismiss King’s argument with a reference to “the law of large numbers”.
Whereas Kimura (1968) proposed his version of the Neutral Theory of Molecular Evolution as the answer to an esoteric problem of population genetics theory, King and Jukes (1969) proposed a theory driven by the results of macromolecular sequence comparisons. Molecular evolution, in their view, demanded “new rules.” As evidence for neutrality, they pointed to a general correspondence between the frequencies of amino acids in protein sequences, and the frequencies expected from translating randomly generated sequences with the genetic code.
In the plot of these frequencies, Met and Trp, with 1 codon each, are at the bottom left; then, proceeding with some variation up and to the right, we have the 2-codon blocks (Cys, His, Tyr, Phe, Gln, Asn, Asp, Lys), Ile with 3 codons, then the 4-codon blocks (Pro, Thr, Val, Ala, Gly), then the 6-codon blocks (Leu, Ser, Arg), with Arg being a somewhat extreme outlier due to the CpG effect.
However, King rather quickly recanted this argument. It is rare for a scientist to do that, so pay attention. In the proceedings of a 1971 conference that came out 2 years later, King (1973) said that this was not evidence of neutrality, but rather evidence for some kind of indeterministic process dependent on mutation. He explains:
“If a gene is in the process of progressive, adaptive evolution, there might very likely be more than one among the thousand or so possible single-step changes that would be evolutionarily advantageous. Then the first of these to occur by mutation would have the first chance to take over. The conditions of selection would then be changed, and it would be too late for the other previously potential candidates. Thus the probability of fixation [probability of origin-and-fixation] of an amino acid is a function of its frequency of arising by mutation, and this will happen more often to amino acids with more codons. The eventual distribution of amino acid frequencies will reflect, more or less passively, the peculiarities of an arbitrary genetic code, even if most evolutionary changes are due exclusively to Darwinian adaptive evolution.” (p. 7)
King JL. 1973. The Influence of the Genetic Code on Protein Evolution. In: Schoffeniels E, editor. Biochemical Evolution and the Origin of Life. Viers: North-Holland Publishing Company. p. 3-13
King did not generalize further on this argument. However, we can see this as an instance of a general form of argument in which phenotypes that are over-represented in genotype-space are more findable due to mutation. Amino acids with larger numbers of codons (in the genetic code) occupy a greater volume of sequence space, analogous to phenotypes with large numbers of genotypes in genotype-space, and this makes them more findable by an evolutionary process that explores sequence space (or genotype space) via mutations. So the amino acids with the most codons will tend to be the most common in proteins, and this argument does not require neutrality, but merely a process subject to biases in introduction of amino acid phenotypes.
The form of this argument is analogous to that made by Ard Louis and colleagues in regard to RNA folds that are common in sequence space (Dingle, et al. 2022).
King’s two conference papers in this period reveal important thinking about evolution in discrete spaces. King (1973) gives this image combining states, paths, upwardness, and a fitness landscape. Note that this is not merely a passive depiction of “Maynard Smith’s” concept, but represents creative and synthetic thinking.
What happened to King’s argument? It could have, but apparently did not, stimulate a model like that of Yampolsky and Stoltzfus (2001). However, it was not completely lost to history. Leigh Van Valen repeated the argument in a 1974 paper, attributing it ambiguously either to an “oral” clarification of Lewontin or to Stebbins and Lewontin (1973):
“2. Lewontin also made more plausible another rebuttal that Stebbins and Lewontin (1973) made to a neutralist argument. The latter argument is the general similarity of the proportions of amino acids in proteins to the proportions of their respective codons among all codons, given the observed proportions of the four nucleotides. Now consider a protein sitting in the protein space. There may be several sequences (local adaptive peaks) better than the one it now has. However, these will be unequally available. The most available will be those for which the needed mutations are for amino acids with the most codons, assuming that many of the possible steps to the peak increase fitness. Once the protein has chosen its peak, the final sequence is determined only by selection. Therefore selection can give a correspondence of proportions as easily as drift can.”
The article from Stebbins and Lewontin appears in the same symposium volume as King’s paper. So, all 3 of them probably went to the same 1971 meeting and were exposed to this idea (the papers were not published until 1973). However, the version of the argument in Stebbins and Lewontin is inadequate and cannot have been the source of Van Valen’s version, which is as clear as that of King. So, either Van Valen got it from King but misattributed it to Lewontin, or Lewontin’s “oral” clarification included a greatly improved argument.
Maynard Smith (1975) also repeats King’s argument in his comments on the neutralist-selectionist controversy, concluding that:
“Hence the correlation does not enable us to decide between the two. However, it is worth remembering that if we accept the selectionist view that most substitutions are selective, we cannot at the same time assume that there is a unique deterministic course for evolution. Instead, we must assume that there are alternative ways in which a protein can evolve, the actual path taken depending on chance events. This seems to be the minimum concession the selectionists will have to make to the neutralists; they may have to concede much more.”
I love this quotation because it reveals a world that most people do not know existed in 1975, a world in which selectionists were not yet entertaining the idea of evolution as a Markov chain of origin-fixation events, but were still using the shifting-gene-frequencies view defended by Stebbins and Lewontin (1973).
References
King JL. 1973. The Role of Mutation in Evolution. In: Sixth Berkeley Symposium on Mathematical Statistics and Probability (eds Le Cam, Neyman, and Scott) Berkeley, California.
King JL. 1971. The Influence of the Genetic Code on Protein Evolution. In: Schoffeniels E, editor. Biochemical Evolution and the Origin of Life. Viers: North-Holland Publishing Company. p. 3-13.
Maynard Smith J. 1975. The Theory of Evolution. Cambridge: Cambridge University Press.
Stebbins GL, Lewontin RC. 1973. Comparative evolution at the levels of molecules, organisms and populations. In: Sixth Berkeley Symposium on Mathematical Statistics and Probability (eds Le Cam, Neyman, and Scott). Berkeley, California.
Van Valen L. 1974. Molecular evolution as predicted by natural selection. Journal of Molecular Evolution 3:89-101.
This is a long but organized dump of some thoughts on a particular type of distortion that arises from an attitude of conservatism or traditionalism. It is part of a longer-term attempt to understand controversy and novelty, with the practical goal of helping scientists themselves to cut through bullshit and learn how to interpret new findings and reactions to new findings. The topics here in rough order are:
anachronism and the Sally-Ann test
the stages of truth, ending with normalization and appropriation
back-projection as a means of appropriation
ret-conning as a means of appropriation
some examples
the missing pieces theory in historiography
sub- and neo-functionalization in the gene duplication literature
ret-conning the Synthesis and the new-mutations view
“we have long known” claims
Platonic realism in relation to back-projection
a gravity model for misattribution
pushing back against appropriation
Sally-Ann and the stages of truth
When a new finding appears, this changes the world of knowledge and our perspective on the state of knowledge. After a new result has appeared, reconstructing what the world looked like before the new result can be difficult.
Indeed, for young children, understanding what the world looks like from the perspective of someone with incomplete knowledge is literally impossible. This issue is probed by the “Sally-Ann” developmental test, in which a child is told a story with some pictures, and then is asked a question. The story is that Sally puts a toy in her basket, then leaves, and before she returns, Ann moves the toy to her box. The question is where will Sally look for the toy, in the basket or in the box? A developmentally typical adult will say that Sally will look in the basket where she put it. Sally’s state of knowledge is different from ours: she doesn’t know that her toy was moved to the box.
But children under 4 typically say that Sally will look in the box. They have not acquired the mental skill of modeling the perspective of another person with different information. In their only model of the world (the world as seen from their own perspective), the toy is in the box, so they assume that this is also Sally’s perspective, and on that basis, they predict that Sally will look in the box.
The most egregious kinds of scientific anachronism have the same flavor as this childish error, e.g., describing Darwin’s theory as one of random mutation and selection. It is notoriously difficult for us to forget genetics and comprehend pre-Mendelian thinking on heredity and evolution. For this reason, one often hears the notion that Mendelism supplies the “missing pieces” of Darwin’s theory of evolution, as if Darwin articulated a theory with a missing component in the precise shape of Mendelian genetics, yet did not foresee Mendelian genetics.
Historian Peter Bowler loves to mock the missing-pieces story. Darwin did not, in fact, propose a theory with a hole in it for Mendelism: he proposed a non-Mendelian theory based on the blending of environmental fluctuations under the struggle for life, which Johannsen then refuted experimentally. Historian Jean Gayon wrote an entire book about the “crisis” precipitated by Darwin’s errant views of heredity. Decades passed before Darwin’s followers threw their support behind a superficially similar theory combining a neo-Darwinian front end with a Mendelian back end. Then they shut their eyes tightly, made a wish, and the original fluctuation-struggle-blending theory mysteriously vanished from the pages of the Origin of Species. They can’t see it. They can’t see anything non-Mendelian even if you hold the OOS right up in front of their faces and point to the very first thing Darwin says in Ch. 1. All they see is a missing piece. This act of mass self-hypnosis has endured for a century.
Normalization: the stages-of-truth meme
Anachronistic attempts to make sense of the past fit a pattern of normalization suggested by the classic “stages of truth” meme (see the QuoteInvestigator piece), in which a bold new idea is first dismissed as absurd, then challenged as unsupported, then normalized. Depictions of normalization emphasize either that (1) the new truth is declared trivial or self-evident (e.g., Schopenhauer’s trivial or selbstverständlich), or (2) its origin is pushed backwards in time and credited to predecessors, e.g., Agassiz says the final stage is “everybody knew it before” and Sims says
For it is ever so with any great truth. It must first be opposed, then ridiculed, after a while accepted, and then comes the time to prove that it is not new, and that the credit of it belongs to some one else.
This phenomenon is something that deserves a name and some careful research (such research may exist already, but I have not found it yet in the scholarly literature). The general pattern could be called normalization (making something normal, a norm) or appropriation (declaring ownership of new results on behalf of tradition). Normalization or appropriation is a general pattern or end-point for which there are multiple rhetorical strategies. I use the term “back-projection” when contemporary ideas are projected naively onto progenitors, and I sometimes use “ret-conning” when there is a more elaborate kind of story-telling that anchors new findings in tradition and links them to illustrious ancestors. Recognizing these tactics (and the overall pattern) can help us to cut through the bullshit and assess more objectively the relationship of new findings or current thinking to the past.
Back-projection examples (DDC model and Monroe, et al 2022)
The contemporary literature on gene duplication features a common 3-part formula with consistent language for what might happen when a genome contains 2 copies of a gene: neo-functionalization, sub-functionalization or loss (or pseudogenization).
This 3-part formula began to appear after the sub-functionalization model was articulated in independent papers by Force, et al. (1999) and Stoltzfus (1999). Each paper presented a theory of duplicate gene establishment via subfunctionalization, and then used a population-genetic model to demonstrate the soundness of the theory. In this model, each copy of a gene loses a sub-function, such as expression in a particular tissue, but the loss is genetically complemented by the other copy, so that the two genes together are sufficient to do what one gene did previously. Force, et al. called their model the duplication-degeneration-complementation (DDC) model; the model of Stoltzfus (1999) was presented as a case of constructive neutral evolution.
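For intuition, here is a deliberately minimal caricature of the complementation logic (my own toy, not the published DDC model, which tracks explicit population genetics): degenerative knockouts accumulate at random, selection vetoes any knockout that would leave a subfunction with no working copy, and the walk ends either in complementary specialization or in loss of one copy.

```python
import random

def ddc_outcome(n_subfunctions=2):
    # Both duplicate copies start with the full set of subfunctions.
    copies = [set(range(n_subfunctions)), set(range(n_subfunctions))]
    while True:
        # Permitted knockouts: the other copy still covers the subfunction.
        allowed = [(i, f) for i in (0, 1) for f in copies[i] if f in copies[1 - i]]
        if not allowed:
            break  # every subfunction is now covered by exactly one copy
        i, f = random.choice(allowed)
        copies[i].remove(f)
    return "subfunctionalization" if copies[0] and copies[1] else "loss"

trials = [ddc_outcome(2) for _ in range(10000)]
print(trials.count("subfunctionalization") / len(trials))  # ~0.5 in this caricature
```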
The appearance of this new and somewhat subversive theory— calling on neutral evolution to account for a pattern of apparent functional specialization— sparked a renewed interest in duplicate gene evolution that has been surprisingly durable, continuing to the present day. The article by Force, et al has been cited over 2000 times. That is a huge impact!
As noted, the emergence of this theory induced the use of a now-familiar 3-part formula. Along with this came a shift in how existing concepts were described, using the neat binary contrast of sub versus neo, i.e., “neo-functionalization” refers to the classic idea that a duplicate gene gains a new function, yet the term itself is not traditional, but spread beginning with its use by Force, et al (1999), as shown in this figure.
Then the back-projection began. Even though this 3-part formula emerged in 1999, references in the literature (e.g., here) began to attribute it to an earlier piece by Austin Hughes that does not propose a model for the preservation or establishment of duplicate copies by subfunctionalization. Instead, Hughes (1994) argued that new functions often emerge within one gene (“gene sharing”) before gene duplication proceeds, i.e., Hughes proposed dual-functionality as an intermediate stage in the process of neo-functionalization (see the discussion on Sandwalk):
A model for the evolution of new proteins is proposed under which a period of gene sharing ordinarily precedes the evolution of functionally distinct proteins. Gene duplication then allows each daughter gene to specialize for one of the functions of the ancestral gene.
Hughes (1994)
Over time, the back-projection became even more extreme: some sources began to attribute aspects of this scheme to Ohno (1970), e.g., here, or when Hahn (2009) writes:
In his highly prescient book, Susumu Ohno recognized that duplicate genes are fixed and maintained within a population with 3 distinct outcomes: neofunctionalization, subfunctionalization, and conservation of function.
What, precisely, is Hahn saying here? He does not directly attribute the DDC model to Ohno. He seems to refer primarily to outcomes rather than to processes, leaving room for interpretation. Perhaps there is some subtle way in which it is legitimate to apply the word “subfunctionalization” anachronistically, but it is not clear what exactly Ohno said that justifies this statement. Of course, Ohno did not use the term “neo-functionalization” either, but there is no anachronism in applying it, because the term was invented specifically as the label for the old and familiar idea of gaining a new function. Again, Hahn does not say explicitly and clearly that the subfunctionalization model comes from Ohno, but this is what the reader will assume.
And this is where the ingenuity of back-projection goes wrong: the more clever you are in weaving a thread backwards from the present into the past, spinning a story that connects current thinking to older sources that actually used different language and explored different ideas, the more likely you are simply to mislead people.
Obviously any new theory or finding will have some aspects that are not new. A common strategy of appropriation is to point to familiar parts of a new finding, and present those as the basis to claim that the finding is not new. One version of this tactic is to focus on a phenomenon or process that features either as a cause or an effect in a new theory, and then claim that, because this part was recognized earlier, the theory is not new. For instance, Niche Construction Theory (NCT) is about the reciprocal ways in which organisms both adapt to, and modify, their environment. However, naturalists have recognized for centuries that organisms modify their environment, e.g., beavers build dams and earthworms aerate and condition the soil. Therefore, strategies of appropriation by traditionalists (e.g., Wray, et al; see Stoltzfus, 2017) focus on the way that authors such as Darwin noted how earthworms modify their environment, claiming that this undermines the novelty of NCT.
If this kind of argument were valid, it would mean that we have no need for genuine causal theories in science, e.g., theories that induce sophisticated mathematical relations between measurable quantities, because it equates the recognition of an effect with a theory for that effect. In the Origin of Species, Darwin explicitly and repeatedly invoked 3 main causes of evolutionary modification: natural selection, use and disuse, and direct effects of environment. He did not list niche construction. Saying that niche construction theory is not novel on the grounds that the phenomenology it was designed to explain was noticed earlier is like saying that Newton’s theory of gravity was not novel because humans, going back to ancient times, already knew that heavy things fall [7].
A variety of anachronisms, misapprehensions, and other pathologies of normalization were evident in responses to the recent report by Monroe, et al. (2022) of a genome-wide pattern in Arabidopsis of an anti-correlation between mutation rate and functional density [3]. One commentary was entitled “Who ever thought genetic mutations were random?“, which is outright scientific gaslighting. Another commentary stated that “Scientists have been demonstrating that mutations don’t occur randomly for nearly a century” citing a 1935 paper from Haldane that does not explicitly invoke either random or non-random mutation, and does not report any systematic asymmetry or bias in mutation rates.
I was so mystified by this citation that I read Haldane’s paper line by line about 4 times, and finding nothing, used an online service (scite) to examine the context for about 70 citations to Haldane’s 1935 paper. I found that the paper was mainly cited for the reasons one would expect (sex-linked diseases, mutation-selection balance) until about 2 decades ago, when male mutation bias became a hot topic, and then scientists began to cite Haldane’s paper as though it were a source of the idea (the idea actually appears in Haldane, 1947: for details, see note 8). In fact, Haldane (1935) does not propose male mutation bias. The closest that he gets to this possibility is to present a mutation-selection balance model for sex-linked diseases with separate parameters for male and female mutation rates, though ultimately his actual estimate is a single rate inferred from the frequency of haemophilic males (“x” in his notation). That is, male mutation bias was back-projected to Haldane in expert sources (an example is shown in the figure below), then this pattern was twisted into an even more bizarre claim in the newsy piece about Monroe, et al. (and whereas the authors who originated and propagated this myth probably never stopped to ponder what they were doing, I spent multiple hours checking my own work, illustrating Brandolini’s law: debunking bullshit takes 10 times the effort of producing it).
An important lesson to draw from such examples is that when new results are injected into evolutionary discourse, this provokes new statements, even if the form of those new statements is a novel defense of orthodoxy, e.g., outrageous takes like “Who ever thought genetic mutations were random?“ or “Scientists have been demonstrating that mutations don’t occur randomly for nearly a century.” That is, the publication of Monroe, et al. caused these novel sentences to come into existence, as a form of normalization.
More generally, new work may induce increased attention to older work, rightly or wrongly. The extreme case would be a jump in attention to Mendel’s work, not when it was published in 1865, but when it was “rediscovered” in 1900 [6]. The appearance of Monroe, et al. (2022) stimulated a jump in attention to earlier work from Chuang and Li (2004) and Martincorena, et al (2012). Renewed attention to relevant prior work is salutary in the sense that (1) the later work increases the posterior probability of the earlier claims, and (2) this re-evaluation rightfully draws our attention. However, it is not salutary if (1) the earlier studies failed to have an impact for any reason, particularly because they were not as convincing, and (2) their existence is now being used retroactively to make a case against the novelty of subsequent work. The work of Martincorena, et al. stimulated a backlash at the time; Martincorena wrote a rebuttal but never published it (it’s still on bioRxiv), and then got out of evolutionary biology, escaping our toxic world for the kinder, gentler field of cancer research. But now his work (and the work of Chuang and Li) is put forth as the basis of “we have long known” claims attempting to undermine the novelty of Monroe, et al. (e.g., this and other rhetorical strategies are used to undermine the novelty of Monroe, et al. in this video from a YouTube science explainer).
“Fisher’s” geometric model
As a more extended example of back-projection, consider the case of “Fisher’s geometric model.”
Given a range of effect-sizes of heritable differences from the smallest to the largest, i.e., effects that might be incorporated in evolutionary adaptation, which size is most likely to be beneficial? Fisher (1930) answered this question with his famous geometric model. The chance of a beneficial effect is a monotonically decreasing function of effect-size, so that the smallest possible effects have the greatest chance of being beneficial. Fisher concluded from this that the smallest changes are the most likely in evolution, i.e., adaptation will occur gradually, by infinitesimals. To put this in more formal terms, for any size of change d, Fisher’s model allows us to compute a chance of being beneficial b = Pr(s > 0), and he showed that b approaches a maximum, b → 0.5, as d → 0.
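In symbols, the standard rendering of Fisher’s result is as follows (a textbook form, not Fisher’s own notation: z is the distance to the optimum, n the number of phenotypic dimensions, and Φ the standard normal CDF, none of which are named in the text above):

```latex
% Chance that a random change of magnitude d is beneficial, for an organism
% at distance z from the optimum in an n-dimensional phenotype space.
b(d) = \Pr(s > 0) \approx 1 - \Phi\!\left(\frac{d\sqrt{n}}{2z}\right),
\qquad \lim_{d \to 0} b(d) = 1 - \Phi(0) = \tfrac{1}{2}
```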
Kimura (1983) revisited this argument 50 years later, but from the neo-mutationist perspective that emerged among molecular evolutionists in the 1960s, and which gave rise to the origin-fixation formalism (McCandlish and Stoltzfus, 2014). That is, Kimura treated the inputs as new mutations subject to fixation, rather than as a shift defined phenotypically, or defined by the expected effect of allelic substitution from standing variation. Each new mutation has a probability of fixation p that depends, not merely on whether the effect is beneficial, but how strongly beneficial it is. Mutations with bigger effects are less likely to be beneficial, but among the beneficial mutations, the ones with bigger effects have higher selection coefficients, and thus are more likely to reach fixation. Meanwhile, as d → 0, the chance of fixation simply approaches the neutral limit, i.e., the mutations with the tiniest effects behave as neutral alleles whether they are beneficial or not.
So, instead of Fisher’s argument with one monotonic relationship dictating that the chances of evolution depend on b, which decreases with size, we now have a second monotonic relationship in which the chances of evolution depend on p, which (conditional on being beneficial) increases with size. The combination of the two opposing effects results in an intermediate optimum.
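A minimal numerical sketch of this combination (my own illustration, not Kimura’s calculation: the dimensionality n, the distance z, and the crude proxy p ∝ d for the fixation probability of beneficial changes are all assumptions):

```python
# Sketch: combine Fisher's chance of being beneficial, b(d), with a
# fixation probability that increases with effect size among beneficial
# changes. The product peaks at an intermediate d, not at d -> 0.
import numpy as np
from scipy.stats import norm

n, z = 50, 1.0                                # assumed dimensions, distance to optimum
d = np.linspace(0.001, 1.0, 1000)             # range of effect sizes

b = 1 - norm.cdf(d * np.sqrt(n) / (2 * z))    # chance of being beneficial (decreases with d)
p = d                                         # crude proxy: p_fix ~ 2s, with s increasing in d

rate = b * p                                  # relative chance of contributing to adaptation
print("optimum effect size:", d[np.argmax(rate)])  # an interior maximum
```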
Thus Kimura transformed and recontextualized Fisher’s geometric argument in a way that changes the conclusion and undermines Fisher’s original intent, which was to support infinitesimalism. This is because Kimura’s conception of evolutionary genetics was different from Fisher’s.
The radical nature of Kimura’s move is not apparent in the literature of theoretical evolutionary genetics, where “Fisher’s model” often refers to Kimura’s model (e.g., Orr 2005a, Matuszewski, et al 2014, Blanquart, et al. 2014). Some authors have been explicit in back-projecting Kimura’s mutationist thinking onto Fisher, e.g., to explain why Fisher came to a different conclusion, Orr (2005a) suggests that Fisher made a mistake in forgetting to include the probability of fixation:
“Fisher erred here and his conclusion (although not his calculation) was flawed. Unfortunately, his error was only detected half a century later, by Motoo Kimura”
Orr (2005b) states that “an adaptive substitution in Fisher’s model (as in reality) involves a 2-step process.”
But Fisher himself did not specify a 2-step process as the context for his geometric argument: he did not provide an explicit population-genetic context at all. Yet we have no reason to imagine that Fisher was secretly a mutationist. His view of evolution as a deterministic process of selection on available variation is well known, i.e., the missing pop-gen context for Fisher’s argument would look something like this: Evolution is the process by which selection leverages available variation to respond to a change in conditions. At the start of an episode of evolution, the frequencies of alleles in the gene pool reflect historical selection under the previously prevailing environment. When the environment changes, selection starts to shift the frequencies to a new multi-locus optimum: most allele frequencies will simply shift up or down partially; any unconditionally deleterious alleles will fall to their deterministic mutation-selection balance frequencies; any unconditionally beneficial ones will go to fixation deterministically. The smallest allelic effects are the most likely to be beneficial, thus they are the most likely to contribute to adaptation.
The fixation of new mutations is not part of this process, and that, surely, is why the probability of fixation plays no part in Fisher’s original calculation. Instead, all one needs to know is the chance of being beneficial as a function of effect-size. Fisher’s argument is complete and free of errors, given the supposition that evolution can be adequately understood as a deterministic process of shifting frequencies of available variation in the gene pool.
I recently noticed that Matt Rockman’s (2012) seminal reflection on the limits of the QTN program presents a nearly identical argument in his supplementary notes (i.e., 5 years before the longer version I put in the supplement to Stoltzfus 2017):
3. Note that while Fisher was concerned with the size distribution of changes that improve the conformity of organism and environment (i.e., adaptation), Kimura (1983, section 7.1) was discussing the effect size distribution of adaptive substitutions, i.e., his is a theory of molecular evolution. Though many now describe Kimura’s work as correcting Fisher’s mistake, it is not clear that there is a mistake: Fisher was concerned not with fixation but with adaptation. Kimura for one seems not to have thought that he was correcting an error made by Fisher (Kimura 1983, p. 150-151). Though the distributions derived by Fisher and Kimura are both relevant to adaptation, Fisher’s model is compatible with adaptation via allele frequency shifts in standing variation. In Fisher’s words, “without the occurrence of further mutations all ordinary species must already possess within themselves the potentialities of the most varied evolutionary modifications. It has often been remarked, and truly, that without mutation evolutionary progress, whatever direction it may take, will ultimately come to a standstill for lack of further possible improvements. It has not so often been realized how very far most existing species must be from such a state of stagnation” (Fisher 1930, p. 96).
Relative to the case above regarding gene duplications, this case of back-projecting Kimura’s view onto Fisher results in a more pernicious mangling of history: it attributes to Fisher a model based on a mutationist mode of evolution that was not formalized until 1969, after Fisher was dead, and that contradicts Fisher’s most basic beliefs about how evolution works (along with the clear intent of the Modern Synthesis architects to exclude mutationist thinking as an apostasy).
Synthesis apologetics
But these examples are mild compared to the ret-conning that has emerged in debates over the “Modern Synthesis.” In serial fiction, ret-conning means re-telling an old story to ensure retroactive continuity with new developments that the writers added to the storyline in a subsequent episode, e.g., when a character that died previously is brought back to life. The difference between a retcon and simple back-projection is perhaps a matter of degree. The retcon is a much more conscious effort to re-tell the past in order to make sense of the present. The “Synthesis” story is very deliberately ret-conned to appropriate contemporary results. In a different world, the defenders of tradition might have declared that the definitive statement is in that 1970s textbook by Dobzhansky, et al.; they might have stopped writing defenses and just posted a sign saying “See Dobzhansky, et al. for how the Synthesis answers evolutionary questions.” But instead, defenders keep writing new pieces that expand and reinterpret the Synthesis story to maintain an illusion of constancy.
Futuyma is the master of the Synthesis retcon. He has a craftsman’s respect for the older storylines, because he helped write them, so his retcons are subtle and sometimes even artful. We can appreciate the artistry with which he has subtly pulled back from the shifting gene frequencies theory and the grand claims he made originally in 1988 on behalf of the Synthesis. In the original Synthesis storyline, the MS restored neo-Darwinism, crushed all rivals (mutationism, saltationism, orthogenesis, etc), and provided a common basis for anyone in the life sciences or paleontology to think about evolution.
By contrast, the retcons from the newer traditionalists are full of bold anachronisms. Svensson (2018) mangles the Synthesis timeline by calling on Lewontin’s (1985) advocacy of reciprocal causation to appropriate niche construction, as if the year 1985 were not several decades after the architects of the Modern Synthesis declared victory at the 1959 Darwin centennial. Lewontin (1985) himself says that reciprocal causation was not part of the Darwinian received view in 1985. Welch (2017), dismayed by incessant calls for reform in evolutionary biology, suggests that this reflects intrinsic features of the problem-space: naive complaints are inevitable, he argues, because no single theory can cover such a large field with diverse phenomenology subject to contingency. That is, Welch is boldly erasing the original Synthesis story in which Mayr, et al. explicitly claimed to have unified all of biology (not just evolution, but biology!) with a single theory. Whereas the “contingency” theme emerged after the great Synthesis unified biology, Welch treats this as a timeless intrinsic feature that makes any simple unification impossible (see Platonic realism, below).
Svensson (e.g., here or here) has repeatedly turned history on its head by suggesting that evolution by new mutations is the classical view of the Modern Synthesis and that the perspective of evolutionary quantitative genetics (EQG), i.e., adaptation by polygenic shifting of many small-effect alleles, has been marginalized until recently. To anchor this bold anachronism in published sources, he calls on a marginalization trope from the recent literature on selective sweeps, in which some practitioners sympathetic to EQG looked back— on the scale of 10 or 20 years— to complain that the EQG view was neglected and that the view of hard sweeps from a rare distinctive mutation was the classic view (some of these quotations appear in The shift to mutationism is documented in our language).
Indeed, hard sweeps are easier to model and that is presumably why they came first, going way back to Maynard Smith and Haigh (1974). And the mini-renaissance of work on the genetics of adaptation from Orr and others beginning in the latter 1990s— work that received a lot of attention— was based on new mutations. But that’s a very shallow way of defining what is “classic” or “traditional.” The mini-renaissance happened (after decades of inactivity) precisely because theoreticians were suddenly exploring the lucky mutant view they had been ignoring or rejecting (Orr says explicitly that the received Fisherian view stifled research by making it seem like the problem of adaptation was solved). The origin-fixation formalism only emerged in 1969, and for decades, this mutation-limited view was associated with neutrality and molecular evolution (see the figure below from McCandlish and Stoltzfus, 2014).
Rockman (2012) again gets this right, depicting the traditional view (from Fisher to Lewontin and onward) as a change in allele frequencies featuring polygenic traits with infinitesimal effects from standing variation:
“Despite the centrality of standing variation to the evolutionary synthesis and the widely recognized ubiquity of heritable variation for most traits in most populations, recent models of the genetics of adaptive evolution have tended to focus on new-mutation models, which treat evolution as a series of sequential selective sweeps dependent on the appearance of new beneficial mutations. Only in the past few years have phenotypic and molecular population genetic models begun to treat adaptation from standing variation seriously (Orr and Betancourt 2001; Innan and Kim 2004; Hermisson and Pennings 2005; Przeworski et al. 2005; Barrett and Schluter 2008; Chevin and Hospital 2008).”
To summarize, Svensson’s EQG-marginalization narrative turns history upside down in order to retcon contemporary thinking, i.e., he creates a false view of tradition in order to claim that new work with a mutationist flavor is traditional.
This anachronistic approach makes Svensson and Welch more effective in some ways than Futuyma, because they are really just focused on telling a good story, without being constrained by historical facts. But sometimes fan-service means sticking more closely to tradition. Even die-hard Synthesis fans are going to be complaining about Svensson’s fabrications, because they go beyond ret-conning into the realm of gas-lighting, undermining our shared understanding of the field, e.g., population geneticists (and most everyone else, too?) understand the “Fisherian view” of adaptation to be precisely the view that, according to Svensson, was marginalized in the Modern Synthesis. Clearly if anyone is going to take over the mantle from Futuyma and write the kind of fiction needed to keep the Synthesis brand fresh, the franchise needs a better crop of writers, or else needs to develop a fan-base that doesn’t care about consistency.
How to understand this phenomenon
Presumably if a grad student were to ask a genuine expert on gene duplication for the source of the sub-functionalization model, so as to study its assumptions and implications, they would be instructed to read the papers from 1999 or subsequent ones, and not Hughes (1994) or Ohno (1970), because the model is simply not present in these pre-1999 papers. Likewise, “Fisher’s geometric model” in Kimura’s sense is not in Fisher (1930). The theory of biases in the introduction process (or any model of this kind of effect) is absent from Kimura’s book and other works (e.g., from Dobzhansky, Haldane and Fisher) suggested as sources by Svensson and Berger (2019).
In this sense, back-projection is a mode of generating errors in the form of false attributions.
Why does this happen?
A contributing sociological factor is that, in academia, linking new ideas to prior literature, and especially to famous dead people, is a performative act that brings rewards to the speaker. Referencing older literature makes you look smart and well read, and also displays a respectful attitude toward tradition that is prized in some disciplines. And then those patterns of attribution get copied. When some authors started citing Haldane (1935) for male mutation bias, others simply copied this pattern (and resisting the pattern presumably would entail a social cost).
The extreme form of this performative act, a favorite gambit of theoreticians, is to dismiss new theoretical findings by saying “this is merely an implication of…”, citing some ancient work. Indeed, many “we have long known” arguments defend the fullness and authority of tradition, in the face of some new discovery X, by saying “we have long known A and B”, where A and B can be construed to imply or allow X. Why don’t the critics undermine the novelty of X by saying “we have long known X”? Because they can’t. If X were truly old knowledge, the critics would just cite prior statements of X, following standard scientific practice. But when X is genuinely new, the defense of the status quo resorts to the implicit assumption that a result isn’t new and significant if someone could have reasoned it out from prior results, even if they did not actually do so. This implies the outrageously wrong notion that science is a perfect prediction machine, i.e., feed it A and B, and it auto-generates all the implications that will become important in the future. Clearly this is not how reality works (but see Platonic realism below).
Professional jealousy is a contributing factor when scientists offer their opinion on new findings, especially when those new findings are generating attention. I’m not going to dwell on this but it’s obviously a real thing.
Likewise, politics come into play when pundits and opinion leaders are called on to comment on new work. In an ideal world, when a new result X appears, we would just call on the people genuinely interested in X, the ones best positioned to comment on it, and they would accept the challenge only if they had digested the new result [5]. But if X is new, how do we know who is best qualified? If X crosses boundaries or raises new questions, how do we know who has thought deeply about it? Often reporters will rely on the same tired old commentators to explain Why Orthodoxy Is True. The ones who step willingly into this role are often the ones most deeply invested in maintaining the authority of the status quo, the brand-value of mainstream acceptable views of evolution. Genuinely new findings undermine their brand. It is a dangerous situation today when so many evolutionists have publicly signaled a commitment to the belief that a 60-year-old conception of evolution is correct and sufficient, that this theory cannot be incorrect, only incomplete (p. 25 of Buss, 1987); indeed, some even go so far as to insist that nothing fundamentally new remains to be discovered (Charlesworth, 1996). Given this commitment to tradition, how could they possibly respond to a genuinely new idea except by (1) rejecting it or (2) shifting the goal-posts to claim it for tradition? Either way, this attitude degrades scientific discourse.
A completely different way to think about back-projection and ret-conning — independent of motivations and power struggles — is that they reflect a mistaken conception of scientific theories. Philosophers and historians typically suppose that scientific theories are constructed by humans, in a specific historic context, out of things such as words, equations, and analogies. The theory does not exist until the point in time when it is constructed, or perhaps, the point when it appears in scientific discourse. Under this kind of view, the DDC subfunctionalization model did not exist until 1999, Kimura’s mutationist revision of Fisher’s argument did not exist until 1983, and the theory of biases in the introduction process did not exist until 2001.
However, scientists themselves often speak as if theories exist independently of humans, as universals, and are merely recognized or uncovered at various points in time, e.g., note Hahn’s use of “recognized” above. In philosophy, this is called “Platonic realism.” The theory is a real thing that exists, independent of time or place. It’s hard to resist this. I do it myself instinctively. When I look back at King (1971, 1972), it feels to me like he is trying (without complete success) to state the theory we stated in 2001.
This has an important implication for understanding how scientists construct historical narratives, and how they interpret the historical canon. In the Platonic view, there is a set of universal time-invariant theories T1, T2, T3 and so on, and anything written by anyone at any period in time can refer to these theories. In particular, anyone can see a theory like the DDC theory partly or incompletely, without clearly stating the theory or listing its implications. It’s like the parable of the blind men and the elephant, where each person senses and interprets a part, without construing the whole as an elephant.
By contrast, in the constructed view, if no one construes an elephant, describing the parts and how they fit together, there is no elephant, there is just a snake and a fan and a tree trunk and so on.
If we adopt the Platonic view, we will naturally tend to suppose that terms and ideas from the past may be mapped to each other and to the present, because they are all references to universal theories that have always existed. Clearly the Platonic view underlies the missing-pieces theory. Likewise, if one holds this view, one may imagine that Hughes or Ohno glimpsed the sub-functionalization theory, without fully explaining the theory or its implications. They sensed a part of the elephant. Likewise, the Platonic view is at work in Orr’s framing, which suggests that Fisher entertained a mutationist conception of evolution as a 2-step origin-fixation process, according to Kimura’s theory, but perhaps saw it imperfectly, resulting in a mistake in his calculation. Svensson and Berger (2019) likewise suggest that Dobzhansky, Fisher and Haldane understood implications of a theory of biases in the introduction process (first published in 2001), even though those authors never explicitly state the theory or its implications.
By contrast, a historian or philosopher considering the concepts implied by a historical source does not insist on mapping them to the present on an assumption of Platonic realism or continuity with current thinking. In fact, just as the practitioners of any scholarly discipline make distinctions that are invisible to novices, it is part of the craft of history to notice and articulate how extinct authors thought differently. For instance, the careful reader of Ohno (1970) surely will notice that his usage of the term “redundancy” often implies a unary concept rather than a relation. That is, Ohno often specifies that gene duplication creates a redundant copy, i.e., a copy with the property of being redundant, which makes it free to accumulate forbidden mutations, as if the original (“golden”) copy has been supplemented with a photocopy or facsimile that is a subtly different class of entity. By contrast, the logic of the DDC model is based on treating the two gene copies as equivalent. We think of redundancy today as a multidimensional genomic property that is distributed quantifiably across genes.
This is how Platonic realism encourages and facilitates back-projection, especially when combined with confirmation bias and the kind of ancestor-worship common in evolutionary biology. If theories are universal and have always existed, then it must have been the case that any theory available today also was accessible to illustrious ancestors like Darwin and Fisher. They may have recognized or seen the theory in some way, perhaps only dimly or partly; their statements and their terminology can be mapped onto the theory. So, the reader who assumes Platonic realism and is motivated by the religion of ancestor-worship can explore the works of predecessors, quote-mining them for indications that they understood the Neutral Theory, mutationism, the DDC model, and so on.
Again, a distinctive feature of the Platonic view is that it provides a much broader justification for back-projection, because it allows for a theory to be sensed without fully grasping it or getting the implications right, like sensing only part of the elephant. So the test case distinguishing theories about theories is this: if a historic source has a collection of statements S that refers to parts of theory T without fully specifying the theory, and perhaps also features inconsistent language or statements that contradict implications of T, we would say under the constructed view that S lacks T, but under the Platonic view, we might conclude that S refers to T but does so in an incomplete or inconsistent way.
A model for misattribution
When we are back-projecting contemporary ideas, drawing on Platonic realism while virtue-signaling our dedication to tradition and traditional authorities, what sources will we use? Of course, we will use the ones we have read, the ones on our bookshelf, the ones by famous authors, the ones that everyone else has on their bookshelves. We will simply draw on what is familiar and close at hand.
Thus, the practice of back-projection will make links from contemporary ideas to past ideas, and it will tend to make those links by something like what is called a “gravity model” in network modeling, where the activity or importance or capacity associated with a node is equated with mass and used to model links to other nodes. The force of back-projection from A to B, e.g., the chance that the neutral theory will be linked to Darwin, will depend on the importance of A and B, and how close they are in a space of ideas.
In more precise terms, the force of gravitational attraction between objects 1 and 2 is proportional to m_1 m_2 / d^2, i.e., the product of the masses divided by their distance squared. In a network model of epidemiology, we might treat persons and places of employment as the objects subject to attraction. Each person has a location and 1 unit of mass, and each workplace has a location and n units of mass, where n is the number of employees. For a given person i, we assign a workplace j by sampling from available workplaces with a chance proportional to m_j / d_ij^2, the mass (number of employees) of the workplace divided by the squared distance from the person. Smaller workplaces tend to get only local workers, while larger ones draw from a larger area.
Imagine that concepts or theories or conjectures can be mapped to a conceptual hyperspace. This space could be defined in many ways, e.g., it could be the result of applying some machine-learning algorithm. We can take every idea from canonical sources, each one an i_c, and map it in this space, along with every new idea, each one an i_n. Any two ideas are some distance d from each other. Any new idea has some set of neighboring ideas, including other new ideas and old ideas from canonical sources. Among the ideas from canonical sources, there is some set of nearest neighbors, and each one has some distance d_nc from the target idea to be appropriated.
To complete the gravity model for appropriation, we need to assign different masses to the neighbors of a target idea, and perhaps also assign a mass to the target idea as well. Given the nature of appropriation, a suitable metric for the mass m_c of an idea from the canon would be the reputation or popularity of the source, e.g., an idea from Darwin would have a greater mass than one from Ford, which would have a greater mass than an idea from someone you have never heard of. If so, the force of back-projection linking a new idea to something from the canon would be proportional to m_c / d_nc^2, assuming that back-projection acts equally on all non-canonical ideas, i.e., they all have the same mass. If more important ideas stimulate a stronger force of back-projection (because traditionalists are more desperate to appropriate new ideas if they are important), then we could also assign an importance m_n to the new idea, and then the force of appropriation would be m_n m_c / d_nc^2.
Thus, the more important a new theory, the greater the pressure to back-project it onto traditional sources. The more popular a historic source, the more likely scientists will attribute a new theory to it. If two different historic sources suggest ideas that are equally close to a new theory (i.e., the same d_nc), the one with the higher mass m_c (e.g., popularity) is more likely to be chosen as the target of back-projection.
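Here is a toy implementation of this sampling scheme (a sketch only: the names, masses, and distances are invented for illustration, not drawn from any analysis):

```python
# Toy gravity model for misattribution: sample a canonical target for a new
# idea with chance proportional to m_c / d_nc^2.
import random

canon = [("Darwin", 100.0), ("Fisher", 50.0), ("Ford", 5.0)]   # (source, mass = reputation)
dist = {"Darwin": 4.0, "Fisher": 2.0, "Ford": 1.5}             # assumed distances d_nc

def backproject(canon, dist):
    """Choose a canonical source with probability proportional to m_c / d_nc^2."""
    weights = [m / dist[name] ** 2 for name, m in canon]
    return random.choices([name for name, _ in canon], weights=weights)[0]

# tally where credit lands over many back-projection events
tally = {name: 0 for name, _ in canon}
for _ in range(10000):
    tally[backproject(canon, dist)] += 1
print(tally)  # heavy (famous) and nearby sources soak up most of the credit
```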
If the chances of back-projection follow this kind of gravity model, then clearly back-projection is an effective system for diverting credit from contemporary scientists and lesser-known scientists of the past to precisely those dead authorities who already receive undue attention. Under a gravity model for misattribution, Darwin is going to get credited with all sorts of ideas, because he is already famous and he wrote several sprawling books that sit on the bookshelves of scientists, even scientists who own no other books by 19th-century authors who might have said it better than Darwin. If one has very low standards for what constitutes an earlier expression of a theory, then it is easy to find precursors.
Resisting back-projection
The two most obvious negative consequences of back-projection and ret-conning are that they (1) encourage inaccurate views of history and (2) promote unfair attribution of credit.
However, those are just the obvious and immediate consequences.
Back-projection and anachronism have contributed to a rather massive misapprehension of the population-genetic position underlying the Modern Synthesis, directly related to the “Fisher’s geometric model” story above. Beatty (2022) has written about this recently. I’ve been writing about it for years. Today, many of us think of evolution as a Markov chain of mutation-fixation events, a 2-step process in which “mutation proposes, selection disposes” (decides). If you asked us what is the unit step in evolution, we would say it is the fixation of a mutation, or at least, that fixations are the countable end-points. The latter perspective is arguably evident in some classic work, e.g., the gist of Haldane’s approach to the substitution load is to count how many allele replacements a population can bear. But more typically, this kind of thinking is not classical, but reflects the molecular view that began to emerge among biochemists in the 1960s.
The origin-fixation view of evolution from new mutations, where each atomic change is causally distinct, is certainly not what the architects of the Modern Synthesis had in mind. The MS was very much a reaction against the non-Darwinian “lucky mutant” view, in which the timing and character of episodes of evolutionary change depend on the timing and character of events of mutation. Instead, in the MS view, change happens when selection brings together masses of infinitesimal effects from many loci simultaneously. In both Darwin’s theory and the Modern Synthesis, adaptation is a multi-threaded tapestry: it is woven by selection bringing together many small fibers simultaneously [4]. This is essential to the creativity claim of neo-Darwinism. If we take it away, then selection is just a filter acting on each thread separately, a theory that Darwin and the architects of the Modern Synthesis disavowed. Again, historical neo-Darwinism, in its dialectical encounter with the mutationist view, insists that adaptation is multi-threaded.
As you might guess at this point, I would argue that back-projection does not merely distort history and divert credit in an unfair way, it is part of a pernicious system of status quo propaganda that perpetually shifts the goal-posts on tradition in a way that undermines the value of truth itself, and suppresses healthy processes in which new findings are recognized and considered for their implications.
A scientific discipline is in a pathological state if a large fraction of its leaders are unable or unwilling to recognize new findings or award credit to the scientists who discover them, and instead confuse the issues and redirect credit and attention to the status quo and to some extremely dead white men like Darwin and Fisher. The up-and-coming scientists in such a discipline will learn that making claims of novelty is either bad science or bad practice, and they will respond by limiting themselves to incremental science, or if they actually have a new theory or a new finding, they will seek to package it as an old idea from some dead authority. A system that ties success to mis-representing what a person cares about the most is a corrosive system.
So, we have good reasons to resist back-projection.
But how does one do it, in practice? I’m not sure, but I’m going to offer some suggestions based on my own experience with a lifetime of trying not to be full of shit. I think this is mainly a matter of being aware of our own bullshitting and demanding greater rigor from ourselves and others, and this benefits from understanding how bullshit works (e.g., the we-have-long-known fallacy, back-projection, etc) and what are the costs to the health and integrity of the scientific process.
Everyone bullshits, in the weak sense of filling in gaps in a story we are telling so that the story works out right, even if we aren’t really sure that we are filling the gaps in an accurate or grounded way. When I described the Sally-Ann test, part was factual (the test, the age-dependent result), but the part about “the mental skill of modeling the perspective of another person” is just an improvised explanation, i.e., I invented it, guided partly by my vague recollection of how the test is interpreted by the experts, but also guided by my intuition and my desire to weave this into a story about perspective and sophistication that works for my narrative purposes, sending a not-so-subtle message to the reader that scientific anachronisms are childish and naïve. To succeed at this kind of improvisation means telling a good story that covers the facts without introducing misrepresentations that are significantly misleading, given the context.
When I wrote (above) about the self-hypnosis of Darwin’s followers, this was an invention, a fiction, but not quite the same, given that it is obviously fictional to a sophisticated reader. This is important. A mature reader encountering that passage will understand immediately that I am poking fun at Darwin’s followers while at the same time suggesting something reasonable and testable: scientists who align culturally with Darwin tend to be blind to the flaws in Darwin’s position. Again, any reasonably sophisticated reader will understand that. However, it would take a high level of sophistication for a reader to surmise that I was improvising in regard to “the mental skill of modeling the perspective of another person.” Part of being a sophisticated and sympathetic reader is knowing how to divide up what the author is saying into the parts that the author wants us to take seriously, and the parts that are just there to make the story flow. Part of being a sophisticated writer is knowing how to write good stories without letting the narrative distort the topic.
So, first, always remember the difference between facts and stories, and use your reason to figure out which is which. The story of Fisher making a mistake is a constructed story, not a fact. The authors of this story are not citing a journal entry from Fisher’s diaries in which he writes “I made a mistake.” Someone looked at the facts and then added the notion of “mistake” to make retrospective sense of those facts. In the sub-functionalization story, the link to Ohno and Hughes is speculative. If ideas had DNA in them, then maybe we could extract the DNA from ideas and trace their ancestry, though I doubt it would be that simple. In any case, there is no DNA trace showing that the DDC model is somehow derived from the published works of Hughes or Ohno. It’s a fact that Ohno talked about redundancy and dosage compensation and so on, and it’s a fact that Hughes proposed his gene-sharing model, but it is not a fact that these earlier ideas led to the DDC model. Someone constructed that story.
By the way, it bears emphasis that the inability to trace ideas definitively is pretty much universal even if one source cites another. How many times have you had an idea and then found a prior source for it? You thought of it, but you are going to cite the prior source. So when one paper cites another for an idea, that doesn’t mean the idea literally came from the previous paper in a causal sense. We often use this “comes from” or “derives from” or “source of” language, but it is an inference. We only know that the prior paper is a source for the idea, not the source of the idea.
Second, gauging the novelty of an idea can be genuinely hard, especially in a field like evolution that is full of half-baked ideas. If you find that people are normalizing a new result by making statements that were never said before (like the “for a century” claim above), that is a sign that the result is really new. And likewise, if people are reacting to a new result with the same old arguments but the old arguments don’t actually fit the new result, that indicates that new arguments are needed. In general, the strongest sign of novelty (in my opinion) is misapprehension. When a result or idea is genuinely new, you don’t have a pre-existing mental slot for it. It’s like a book in a completely new genre, so you don’t know where it goes on your bookshelf. The people who are most confident about their mastery are the most likely to shove that unprecedented book into the wrong place and confidently explain why it belongs there using weak arguments. So, when you find that experts are reacting to a new finding by saying really dubious things, this is a strong indicator of novelty.
Third, I recommend developing a more nuanced or graduated sense of novelty, by distinguishing different types of achievements. Scientific papers are full of ideas, but precisely stated models with verifiable behavior are much rarer. Certain key types of innovations are superior and difficult, and they stand above more mundane scientific accomplishments. One of them is proposing a new method or theory and showing that it works using a model system. Another is establishing a proposition by a combination of facts and logic. Another is synthesizing the information on a topic for the first time, i.e., defining a topic.
It should be obvious that stating a possibility X is a different thing from asserting that X is true, which is different again from demonstrating that X is true. Many many authors have suggested that internal factors might shape evolution by influencing tendencies of variation, and many have insisted that this is true, but very few if any have been able to provide convincing evidence (e.g., see Houle, et al 2017 for an attitude of skepticism). Obviously, we may feel some obligation to quote prior sources that merely suggest X as a possibility, but if a contemporary source demonstrates X conclusively, this is something to be applauded and not to be treated as a derivative result. If demonstrating the truth of ideas that already exist is treated as mundane and derivative, our field will become even more littered with half-baked ideas.
If ideas were everything, we would not need scientific methods to turn ideas into theories, develop models, derive implications, evaluate predictions and so on, we would just need people spouting ideas. In particular, authors who propose a theory and make a model [2] that illustrates the theory have done important scientific work and deserve credit for that. If an author presents us with a theory-like idea but there is no model, we often don’t know what to make of the idea.
It seems to me that, in evolutionary biology, we have a surfeit of poorly specified theories. That is, the ostensible theory, the theory-like thing, consists of some set of statements S. We may be able to read all the statements in S but still not understand what it means (e.g., Interaction-Based Evolution), and even if we have a clear sense of what it might mean, we may not be certain that this meaning actually follows from the stated theory. An example of the latter case would be directed mutation. Cairns, et al proposed some ways that this could happen, e.g., via reverse-transcription of a beneficial transcription error that helps a starving cell to survive. If this idea is viable in principle, we should be able to engineer a biological system that does it, or construct a computer model with this behavior. But no one has ever done that, to my knowledge.
This is a huge problem in evolutionary biology. Much of the ancient literature on evolution lacks models, and because of this, lacks the kind of theory that we can really sink our teeth into. Part of this is intentional, in the sense that thinkers like Darwin deliberately avoided the kind of speculative formal theorizing that, today, we consider essential to science. The literature is chock full of unfinished ideas about a broad array of topics, and every time one of those topics comes up, we have to go over all the unfinished ideas again. Poring over that literature is historically interesting, but scientifically it is IMHO a huge waste of time because, again, ideas are a dime a dozen. If we just threw all those old books into the sea, we would lose some facts and some great hand-drawn figures, but the ideas would all be replenished within a few months, because there is a far larger and more diverse group of people at work in science today, and they are better trained.
Finally, think of appropriation as a sociopolitical act, as an exercise of power. As explained, even when an idea “comes from” some source, it often doesn’t come from that source in a causal sense. That’s a story we construct. Each time a normalization-appropriation story is constructed to put the focus on tradition and illustrious ancestors, it re-directs the credit for scientific discoveries to a lineage grounded in tradition that gravitates toward the most important authorities. Telling this kind of story is a performative act and a socio-political act, and it is inherently patristic: it is about legitimizing or normalizing ideas by linking them to ancestors with identifiable reputations, as if we have to establish the pedigree of an idea, and it has to be a good pedigree, one that traces back to the right people. Think about that. Think about all those fairy-tales from the classics to Disney to Star Wars in which the worthy young hero or heroine is, secretly, the offspring of royalty, as if an ordinary person could not be worthy, as if young Alan Force and his colleagues could not have been the intellectual sources of a bold new model of duplicate gene retention, but had to inherit it from Ohno.
Part of our job as scientists operating in a community of practice is to recognize new discoveries, articulate their novelty, and defend them against the damaging effects of minimization and appropriation. The scientists who are marginalized for political or cultural reasons are the least likely to be given credit, and the same scientists may hesitate to promote the novelty of their own work due to the fear of being accused of self-promotion. In this context, it’s up to the rest of us to push back against misappropriation and back-projection and make sure that novelty is recognized appropriately, and that credit is assigned appropriately, bearing in mind that these outcomes make science healthier.
References
Blanquart F, Achaz G, Bataillon T, Tenaillon O. 2014. Properties of selected mutations and genotypic landscapes under Fisher’s geometric model. Evolution 68:3537-3554. doi:10.1111/evo.12545
Buss LW. 1987. The Evolution of Individuality. Princeton: Princeton Univ. Press.
Charlesworth B. 1996. The good fairy godmother of evolutionary genetics. Curr Biol 6:220.
Force A, Lynch M, Pickett FB, Amores A, Yan YL, Postlethwait J. 1999. Preservation of duplicate genes by complementary, degenerative mutations. Genetics 151:1531-1545.
Gayon J. 1998. Darwinism’s Struggle for Survival: Heredity and the Hypothesis of Natural Selection. Cambridge, UK: Cambridge University Press.
Haldane JBS. 1935. The rate of spontaneous mutation of a human gene. J Genet 31:317-326.
Haldane JBS. 1947. The mutation rate of the gene for haemophilia, and its segregation ratios in males and females. Ann Eugen 13:262-271.
Haldane JBS. 1948. The formal genetics of man. Proc R Soc Lond B 135:147-170.
Hughes AL. 1994. The evolution of functionally novel proteins after gene duplication. Proc R Soc Lond B 256:119-124.
Kimura M. 1983. The Neutral Theory of Molecular Evolution. Cambridge: Cambridge University Press.
Li WH, Yi S, Makova K. 2002. Male-driven evolution. Curr Opin Genet Dev 12:650-656.
Matuszewski S, Hermisson J, Kopp M. 2014. Fisher’s geometric model with a moving optimum. Evolution 68:2571-2588. doi:10.1111/evo.12465
Ohno S. 1970. Evolution by Gene Duplication. New York: Springer-Verlag.
Orr HA. 2005a. The genetic theory of adaptation: a brief history. Nat Rev Genet 6:119-127.
Orr HA. 2005b. Theories of adaptation: what they do and don’t say. Genetica 123:3-13.
Rockman MV. 2012. The QTN program and the alleles that matter for evolution: all that’s gold does not glitter. Evolution 66:1-17.
Notes
1. Actually, in the evolutionary literature, traditionalists do not assume that all past thinkers saw the same theories we use today, but only the past thinkers considered to be righteous by Synthesis standards, i.e., the ones from the neo-Darwinian tradition. Scientists from outside the tradition are understood to have been bonkers.
2. In the philosophy of science, a model is a thing that makes all of the statements in a theory true. So, to borrow an example from Elisabeth Lloyd, suppose a theory says that A is touching B, and B is touching C, but A and C are not touching. We could make a model of this using 3 balls labeled A, B and C. Or we could point to 3 adjacent books on a shelf, label them A, B and C, and call that a model of the theory.
3. I’m going to write something separate about Monroe, et al. (2022) after about 6 months have passed.
4. In Darwin’s original theory, the fibers in that tapestry blend together, acting like fluids. The resulting tapestry cannot be resolved into threads anymore, because they lose their individuality under blending. A trait cannot be dissected in terms of particulate contributions, but may be explained only by a mass flow guided by the hand of selection. This is why Darwin says that, on his theory, natura non facit saltum must be strictly true.
5. In some instances of reactions to Monroe, et al., 2022, this actually worked well, in the sense that people like Jianzhi Zhang and Laurence Hurst (qualified experts who were sympathetic, genuinely interested, and critical) were called on to comment.
6. Google ngrams data shows an acceleration of references to “Mendel” after 1900; there are earlier references, but upon examination (based on looking at about 10 of them), these are references to legal suits and other mundane circumstances involving persons with the given name or surname Mendel. Scholars of de Vries such as Stamhuis believe that de Vries understood Mendelian ratios well before 1900, and before he discovered Mendel’s paper.
7. Note that one often sees a complementary fallacy in which reform of evolutionary theory is demanded on the basis of the discovery and scientific recognition of some phenomenon X, usually from genetics or molecular biology. That is, the reformist fallacy is “X is a non-traditional finding in genetics therefore we have to modify evolutionary theories,” whereas the traditionalist fallacy is “we have long known about X therefore we do not have to change any evolutionary theories.” Both versions of the argument are made in regard to epigenetics. Shapiro’s (2011) entire book rests on the reformist fallacy where X = HGT, transposition, etc, and Dean’s (2012) critical review of Shapiro (2011) relies on the traditionalist fallacy of asserting that if “Darwinians” study X, it automagically becomes part of a “Darwinian” theory: “Horizontal gene transfer, symbiotic genome fusions, massive genome restructuring…, and dramatic phenotypic changes based on only a few amino acid replacements are just some of the supposedly non-Darwinian phenomena routinely studied by Darwinists”. What Dean is describing here are saltations, which are not compatible with a gradualist theory, i.e., a theory that takes natura non facit saltum as a doctrine. Whatever part of evolution is based on these saltations, that is the part that requires some evolutionary theory for where jumps come from, i.e., what is the character and frequency of variational jumps, and how are they incorporated in evolution.
8. I later discovered sources for the Haldane 1935 fallacy. As noted, Haldane (1935) merely designates abstract variables mu and nu for the female and male mutation rates, without claiming that there is any difference. By contrast, Haldane (1947) clearly uses available data to argue that the male mutation rate is higher, and he offers some possible biological reasons:
If the difference between the sexes is due to mutation rather than crossing over, many explanations could be suggested. The primordial oocytes are mostly if not all formed at birth, whereas spermatogonia go on dividing throughout the sexual life of a male. So if mutation is due to faulty copying of genes at a nuclear division, we might expect it to be commoner in males than females. Again the chromosomes in human oocytes appear to pass most of their time in the pachytene stage. If this is relatively invulnerable to radiation and other influences, the difference is explicable. On either of these hypotheses we should expect higher mutability in the male to be a general property of human and perhaps other vertebrate genes. It is difficult to see how this could be proved or disproved for many years to come
As early as 2000, authors began to cite the 1935 paper together with the 1947 and 1948 papers as the source of male mutation bias (e.g., Huttley, et al. 2000). The first paper I have seen that clearly makes the mistaken attribution is Li, et al. 2002:
Almost 70 years ago, Haldane [1] proposed that the male mutation rate in humans is much higher than the female mutation rate because the male germline goes through many more rounds of cell divisions (DNA replications) per generation than does the female germline. Under this hypothesis, mutations arise mainly in males, so that evolution is ‘male-driven’ [2]
The first citation is to Haldane (1935), and the second is to Miyata, et al 1987. Note that Li, et al. not only erroneously attribute male mutation bias to Haldane (1935), they also seem to equate this with an evolutionary hypothesis of male-driven evolution from Miyata, et al.
The Haldane-Fisher “opposing pressures” argument
The Haldane-Fisher “opposing pressures” argument is an argument from population genetics that played an important role in establishing the Modern Synthesis orthodoxy, and which continued to guide thinking about causation throughout the 20th century. The flaw in the argument was pointed out by Yampolsky and Stoltzfus (2001), who showed that a workable theory of variation-biased evolution emerges, not from mutation driving alleles to fixation against the opposing pressure of selection, but from biases in the introduction process. The purpose of this blog is simply to document this influential fallacy.
In his magnum opus, Gould (2002) writes as follows, citing an argument from Fisher (1930):
“Since orthogenesis can only operate when mutation pressure becomes high enough to act as an agent of evolutionary change, empirical data on low mutation rates sound the death-knell of internalism” (p. 510).
The conclusion of this argument is that internalist theories — the kind of theories that attempt to explain evolutionary tendencies by referring to internal variational tendencies — are incompatible with population genetics because mutation rates are too small for the pressure of mutation to be an important causal force.
Note the form of the argument: the theoretical principle in the first clause, combined with an empirical observation (low mutation rates), yields a broad conclusion. The theoretical argument assumes that the way to understand the role of mutation in evolution is to think of it as a force or pressure on allele frequencies. That is, in Modern Synthesis reasoning, evolution is reduced to shifting gene frequencies, and the causes of evolution are declared to be the forces that shift frequencies. One then inquires into the magnitude of the forces, because obviously the stronger forces are more important; strong forces deserve our attention and must be treated fully; very weak forces may be ignored. As indicated by Provine (1978), this kind of argument about the sizes of forces was a key contribution of theoretical population genetics to the Modern Synthesis, anchoring its claim to have undermined all the alternative non-Darwinian theories.
Below I will present the argument in more depth, illustrate how it is invoked in evolutionary writing, and explain why it is important today.
The argument
The mutation pressure theory, explained in much more detail in Bad Takes #2, appears most prominently as a strawman rejected by Haldane (1927, 1932, 1933) and Fisher (1930). That is, Haldane and Fisher did not advocate for the importance of evolution by mutation pressure, but presented an unworkable theory as a way to reject the idea that evolutionary tendencies may reflect internal variational tendencies, an idea that conflicts with the neo-Darwinian view that selection is the potter and variation is the clay.
Haldane and Fisher concluded that evolution by mutation pressure would be unlikely on the grounds that, because mutation rates are small, mutation is a weak pressure on allele frequencies, easily overcome by opposing selection. Haldane (1927) concluded specifically that this pressure would not be important except in the case of neutral characters or abnormally high mutation rates.
The argument is hard to comprehend today because most of us think like mutationists and no longer accept the shifting-gene-frequencies theory central to classical thinking.
The way to understand the argument more sympathetically is to consider how, in the neo-Darwinian tradition, the focus on natural selection shapes conceptions of evolutionary causation: selection is taken as the paradigm of a cause, so that other evolutionary factors are treated as causal only to the extent that they look (in some way) like selection. For instance, drift and selection can both cause fixations, and so (in the context of population-genetics discussions) they are often contrasted as the two alternative causes of evolutionary change.
More generally, classical population genetics tends to treat causes of evolution as mass-action pressures that shift frequencies. The mutation-pressure argument treats mutation as a pressure that might drive alleles to prominence, i.e., to high frequencies.
That is, the way to understand Haldane’s treatment is that, if mutation-biased evolution is happening, this is because mutation is driving alleles to prominence against the opposing pressure of selection, so that either the mutation rate has to be very high, or selection has to be practically absent (i.e., neutrality). Fisher’s (1930) reasoning on the issue was similar to Haldane’s. From the observed smallness of mutation rates, he drew a sweeping conclusion to the effect that internalist theories are incompatible with population genetics.
Examples
Haldane’s 1927 conclusion is given above. In 1933, he wrote as follows, again treating the role of mutation in the “trend” of evolution as a matter of mutation pressure (where Haldane uses k and p, we would today use something like s for the selection coefficient and u for the mutation rate).
p. 6. “In general, mutation is a necessary but not sufficient cause of evolution. Without mutation there would be no gene differences for natural selection to act upon. But the actual evolutionary trend would seem usually to be determined by selection, for the following reason.
A simple calculation shows that recurrent mutation (except of a gene so unstable as to be classifiable as multi-mutating) can not overcome selection of quite moderate intensity. Consider two phenotypes whose relative fitnesses are in the ratios 1 and 1-k, that is to say, that on the average one leaves (1-k) times as many progeny as the other. Then, if p is the probability that a gene mutates to a less fit allelomorph in the course of a life cycle, it has been shown (Haldane, 1932) that when k is small, the mutant gene will only spread through a small fraction of the population unless p is about as large as k or larger. This is true whether the gene is dominant or recessive.”
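In modern notation, Haldane’s “simple calculation” is the familiar mutation-selection balance. Here is a minimal sketch of the simplest haploid deterministic version, writing u for Haldane’s p and s for his k (a reconstruction of the logic, not Haldane’s original 1932 treatment):

```latex
\Delta q \;\approx\; \underbrace{u(1-q)}_{\text{gain by mutation}} \;-\; \underbrace{s\,q(1-q)}_{\text{loss by selection}}
\qquad\Longrightarrow\qquad
\hat{q} \;=\; \frac{u}{s} \quad (\Delta q = 0)
```

With illustrative values such as u = 10^-6 and s = 10^-2, the equilibrium frequency is only 10^-4: unless u approaches s, the mutant allele remains rare, exactly as Haldane says.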
Fisher used much more dramatic language.
“For mutations to dominate the trend of evolution it is thus necessary to postulate mutation rates immensely greater than those which are known to occur.”
“The whole group of theories which ascribe to hypothetical physiological mechanisms, controlling the occurrence of mutations, a power of directing the course of evolution, must be set aside, once the blending theory of inheritance is abandoned. The sole surviving theory is that of Natural Selection, and it would appear impossible to avoid the conclusion that if any evolutionary phenomenon appears to be inexplicable on this theory, it must be accepted at present merely as one of the facts which in the present state of knowledge seems inexplicable. The investigator who faces this fact, as an unavoidable inference from what is now known of the nature of inheritance, will direct his inquiries confidently towards a study of the selective agencies at work throughout the life history of the group in their native habitats, rather than to speculations on the possible causes which influence their mutations.”
Fisher (1930) The Genetical Theory of Natural Selection
Fisher’s unqualified rejection of internalist theories seems to have been more influential, which is not surprising given that it comes down like a hammer whereas Haldane’s conclusion is subtle by comparison.
“For no rate of hereditary change hitherto observed in nature would have any evolutionary effect in the teeth of even the slightest degree of adverse selection. Either mutation-rates many times higher than any as yet detected must be sometimes operative, or else the observed results [apparent evolutionary trends] can be far better accounted for by selection.” p. 56
“Of course, if mutation-rate were high enough to overbalance counter-selection, it would provide an orthogenetic mechanism of a kind. However, as Fisher and others have shown, mutation rates of this intensity do not exist, or at least must be very rare.” p. 509
Huxley (1942), Evolution: the Modern Synthesis
“if ever it could have been thought that mutation is important in the control of evolution, it is impossible to think so now; for not only do we observe it to be so rare that it cannot compete with the forces of selection but we know this must inevitably be so.” p. 391
Ford (1971), Ecological Genetics
Provine (1978) begins by stating the issue very modestly, but then concludes that the argument “discredited” alternative theories. However, note that the pressure theory was invented by Haldane and Fisher: the position of the mutationists was not a monistic theory of mutation pressure, but a dualistic theory of “mutation proposes, selection disposes (decides).”
“the mathematical evolutionists demonstrated that some paths taken by evolutionary biologists were unlikely to be fruitful. Many of the followers of Hugo de Vries, including some Mendelians like Raymond Pearl, believed that mutation pressure was the most important factor in evolutionary change. The mathematical models clearly delineated the relationships between mutation rates, selection pressure, and changes of gene frequencies in Mendelian populations. Most evolutionists believed that selection coefficients in nature were several orders of magnitude larger than mutation rates; upon this assumption, the mathematical models indicated that under most conditions likely to be found in natural populations, selection was a vastly more powerful agent of evolutionary change than mutation … These mathematical considerations … discredited macromutational theories of evolution and theories emphasizing mutation pressure as the major factor in evolution.”
Provine (1978) The role of mathematical population geneticists in the evolutionary synthesis of the 1930s and 1940s.
In the seminal paper on developmental constraints, Maynard Smith, et al. (1985) identify the Haldane-Fisher argument as an impediment to recognizing developmental biases as genuinely causal:
“Two separate issues are raised by these examples. The first is whether biases on the production of variant phenotypes (i.e., developmental constraints) such as those just illustrated cause evolutionary trends or patterns. Since the classic work of Fisher (1930) and Haldane (1932) established the weakness of directional mutation as compared to selection, it has been generally held that directional bias in variation will not produce evolutionary change in the face of opposing selection. This position deserves reexamination. For one thing, our examples (like many discussed during the last twenty years – e.g., White, 1965; Cox and Yanofsky, 1967) concern biased variation in the genetic mechanism itself. If such directed variation accumulates– as the results regarding DNA quantity and chromosome numbers suggest– one obtains a very effective evolutionary ratchet. For another, such directional biases may not stand in contradiction to the Fisher-Haldane point of view: within reasonable limits, neither the increase in cellular DNA content nor that in chromosome number is known to have deleterious effects at the organismic level.” (p. 282)
Maynard Smith, et al. (1985) Developmental Constraints
Below is one of several contemporary statements that seem to gesture toward the Haldane-Fisher argument, without betraying any clear link. It’s a general application of the forces theory, based on the idea that some forces are strong and others are weak, and the strong forces dominate.
“For instance, it is possible to say confidently that natural selection exerts so much stronger a force than mutation on many phenotypic characters that the direction and rate of evolution is ordinarily driven by selection even though mutation is ultimately necessary for any evolution to occur.”
Futuyma and others, 2001, in a white paper written by representatives of various professional societies
Gould was obviously sympathetic to internalist thinking but he got his ideas on this issue straight from Fisher (1930). Note that Gould is writing 75 years after Haldane.
“Since orthogenesis can only operate when mutation pressure becomes high enough to act as an agent of evolutionary change, empirical data on low mutation rates sound the death-knell of internalism.” (p. 510)
Gould (2002) The Structure of Evolutionary Theory
Contemporary relevance
Subsequent work has partially undermined the narrow implications of the Haldane-Fisher argument, and completely undermined its broader application as a cudgel against internalism. Mutation pressure is rarely a plausible cause of population transformation, because it would act so slowly that other factors such as drift would intervene, as argued by Kimura (1980).
That is, whereas Haldane’s conclusion suggests that important effects of mutation in evolution result from one of two special conditions— high rates of mutation or neutrality—, this is not a safe inference because it ignores the role of biases in origination, whose efficacy does not require high rates of mutation or neutrality.
Although the mutation pressure theory is most relevant today as a historically important fallacy, it is not entirely irrelevant to evolution in nature. Consider the loss of a complex character encoded by many genes: the total mutational target may be so large that a population reaches a substantial frequency of loss of the character due to the mass effect of many mutational losses. Masel and Maughan (2007) studied exactly this kind of case, in which evolution by mutation pressure is reasonable. In particular, the authors estimate an aggregate mutation rate of 0.003 for loss of a trait (sporulation) dependent on many loci, concluding that complex traits can be lost in a reasonable period of time due primarily to mutational degradation.
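To see the timescale concretely, here is a minimal sketch assuming pure recurrent loss mutation, with no selection and no drift (the 0.003 aggregate rate is from Masel and Maughan; the 10^-6 comparison rate is merely illustrative):

```python
import math

def generations_to_frequency(u, target=0.5):
    """Generations until the frequency of the intact trait, (1 - u)^t,
    falls to `target`, given recurrent loss mutation at aggregate rate u
    (deterministic, ignoring selection and drift)."""
    return math.log(target) / math.log(1.0 - u)

# Aggregate loss rate for sporulation, per Masel and Maughan (2007)
print(generations_to_frequency(0.003))  # ~231 generations: loss is feasible
# An illustrative single-locus rate, for comparison
print(generations_to_frequency(1e-6))   # ~693,000 generations: so slow that
                                        # drift would intervene (cf. Kimura 1980)
```

The contrast shows why mutation pressure is a reasonable explanation only when the aggregate mutational target is unusually large.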
To reiterate, the main relevance of this argument today is historical and meta-scientific. First, it represents a historically influential fallacy. Recognizing that the argument cited by Gould above is a fallacy might cause us to pause and reflect on how conventional wisdom from famous thinkers citing other famous thinkers might have an improper grounding. Second, this is not just an arbitrary technical error, but reflects a substantive flaw in the Modern Synthesis view of causation and of evolutionary genetics, exposing the extent to which classic arguments about causation that established the Modern Synthesis do not follow from universal principles, but are grounded in a parochial view designed to support neo-Darwinism.
Objections to declaring the argument a fallacy
When I present this argument, I sometimes hear objections. One is that it is unfair to criticize Fisher and Haldane for not understanding transition bias, because they did not know about it. But we are not trying to be fair to persons: we are trying to be rigorous about theories and arguments. Theories and arguments are supposed to be right. If the opposing-pressures argument is a good pop-gen argument, then it will work in a world with transition bias or GC bias and so on.
For instance, the mutation-selection balance— in the simplest case, f = u / s — is a theory from Haldane and Fisher, and the theory can be right when applied to kinds of mutations that were not known in 1930. In fact, no molecular mechanisms of mutation were known in 1930: this was before the structure of DNA was known, and even before it was known that DNA is the genetic material. Haldane and Fisher knew that not all mutation rates are the same, so when they devised theories, they invoked a mutation rate as a case-specific variable. They derived a mutation-selection balance equation with a form that allows the rate to take on different values, so we are on solid ground in applying it to any deleterious mutation that can be assigned a rate, e.g., a transposon insertion.
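For instance, here is a trivial sketch with purely illustrative rates (not measured values), showing how the same equation accommodates different kinds of mutations:

```python
def mutation_selection_balance(u, s):
    """Equilibrium frequency f = u / s of a deleterious variant maintained
    by recurrent mutation (simplest case, as in Haldane and Fisher)."""
    return u / s

# Illustrative rates only: a point mutation vs. a transposon insertion
# disrupting the same gene, both with a 1% selective disadvantage
print(mutation_selection_balance(u=1e-8, s=0.01))  # 1e-06
print(mutation_selection_balance(u=1e-5, s=0.01))  # 1e-03: same theory, different rate
```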
Another objection is that the opposing-pressures argument is really just an argument against evolution by mutation pressure— which we still reject generally, for reasons expressed by Haldane and Kimura— and it doesn’t rule out other forms of variation-biased evolution. The problem is that this is not how the argument was understood by generations of evolutionary biologists from Fisher and Huxley to Gould and Maynard Smith. Instead, it was understood to be a very general claim.
Think about it this way. Theoretical arguments like this often have a 3-part structure:
the set-up: a problem-statement or question that frames the issue and establishes context, possibly with some problematic assumptions
the analysis: an analytical core with some modeling or equations
the take-away: a conclusion that maps the analysis to the problem, answering the framing question
The analytical core is rarely the problem. If you go back over the examples above and ask how the issue is framed, it is often framed in terms of a very general question like what determines the direction or general trend of evolution (Haldane), or what is the status of internalism (Gould), or could a trend be caused by mutation instead of selection (Huxley), or what is the potential for developmental effects on the production of variation to influence evolution (Maynard Smith, et al). Fisher’s argument quoted above is explicitly general, referring to any theory that attempts to explain evolutionary tendencies by reference to “physiological mechanisms controlling the occurrence of mutations.” He is not just rejecting evolution by mutation pressure or a specific theory labeled “mutationism” or “orthogenesis.” Fisher says that researchers who understand how population genetics works will stay focused on selection and not on how the mutations happen, because that is irrelevant.
For instance, a discussion of how oxidative deamination and repair contribute to CpG bias is clearly a discussion of physiological mechanisms controlling the occurrence of mutations, and therefore is irrelevant to evolution according to Fisher’s argument. To cite a concrete example, the study by Storz, et al. (2019) of the role of CpG bias in altitude adaptation by changes in hemoglobin genes violates Fisher’s guidance, because the authors directed their evolutionary inquiry toward the possible causes which influence mutations. Fisher’s argument is explicitly a general argument that applies to any consideration of what determines the occurrence of mutations; that is precisely how generations of evolutionary thinkers understood it, and that is precisely the basis for concluding firmly that it is mistaken.
Synopsis
The opposing pressures argument says that, because mutation rates are small, mutation is a weak pressure, and this rules out a possible role for mutational and developmental effects in determining evolutionary tendencies or directions. The argument first appeared in writings of Haldane and Fisher, and was repeated by leading thinkers throughout the 20th century, e.g., emerging in the evo-devo dispute of the 1980s.
The analytical core of the opposing pressures argument is not the problem. The analytical core says that evolution by mutation pressure would require high mutation rates unopposed by selection. The fallacy is to use this analytical core as the basis for a general conclusion about the status of internalism, the sources of direction in evolution, or the potential for variational biases to impose dispositions.
Why would generations of evolutionary thinkers assume that an argument about mutation pressure is an adequate basis for making such broad conclusions, ignoring the introduction process? That’s a story for another day, but the short answer is that, for the people thinking analytically about causation, the introduction process did not exist. For them, the Modern Synthesis had reduced evolution to quasi-deterministic shifts in frequencies of genes in the gene pool. New mutations aren’t involved. The population is a point being pushed around by forces in a space of non-zero allele frequencies. Mass-action pressures are the only effective sources of direction in this kind of system.
This is a series of mostly short pieces focusing on bad takes on the topic of biases in the introduction of variation, covering both the theory and the evidence.
We have long known (Bad Takes #1). A reviewer responds to new results on the role of mutation bias with: “We have long known that mutation is important in evolution.” Bonus: Primo Carnera and Max Baer.
Mutation pressure (Bad Takes #2). An author says that “The notion that mutation pressure can be a driving force in evolution is not new,” citing Yampolsky and Stoltzfus along with a range of other sources from Darwin to Morgan to Nei. We consider a more coherent conception of evolution by mutation pressure per Haldane and Fisher.
Independent cause of adaptation (Bad Takes #3). A pair of pundits mischaracterize the theory of biases in the introduction process as a theory of mutation bias as an independent cause of adaptation.
Mutation-driven (Bad Takes #4). A perfect illustration of the concept of a bad-faith argument: “Selection ultimately drove these adaptive allele frequency changes, rather than evolution being ‘mutation-driven’ as some might claim.”
Contingency (Bad Takes #5). The recently observed effect of mutation bias on adaptation is nothing new, because it is just the same thing as contingency.
Unfamiliar ideas are often mis-identified and mis-characterized. It takes time for a new idea to be sufficiently familiar that it can be debated meaningfully. We look forward to those more meaningful debates. Until then, fending off bad takes is the order of the day! See the Bad Takes Index.
Svensson (here or here) has repeatedly asserted that the effect of biases in the introduction process requires reciprocal sign epistasis, with the implication that this makes the effect unlikely in nature.
Epistasis is inescapably relevant when considering extended adaptive walks on realistic fitness landscapes, but sign epistasis is certainly not a requirement for effects of biases in the introduction process. This is an invention of Svensson, not listed as a requirement in any of the published works of scientists developing theory on this topic. For instance, Rokyta, et al. (2005) have no epistasis in their model of 1-step adaptation. Gomez, et al. (2020) present a staircase model of fitness without sign epistasis.
In some cases, the fitness landscape is specified by an empirical model, e.g., the “arrival of the frequent” model of Schaper and Louis (2014) simply uses the genotype-phenotype map for RNA folds that emerges from RNA folding algorithms. Cano and Payne (2020) use empirical fitness landscapes of transcription-factor binding sites, which typically have a small number of peaks. Whatever the degree of epistasis found on these landscapes, it is the naturally occurring degree for this kind of landscape.
The superficial plausibility of Svensson’s fabrication arises from the fact that reciprocal sign epistasis is indeed a feature of the original computer simulations of Yampolsky and Stoltzfus (2001). Why is this feature present?
In the original model, the initial ab population can evolve either to Ab or to aB, but further evolution to AB does not occur because AB is less fit (i.e., in the original paper’s figure, t is less than s1 or s2). This means that the change from a to A is beneficial in the b background, but deleterious in the B background. When one allelic substitution reverses the effect of another, this is called sign epistasis, and when the effect goes both ways, that is reciprocal sign epistasis.
Remember, the Yampolsky-Stoltzfus model was designed to be the simplest model to prove a point, so it is a model of one-step adaptation with two options: up and to the left, or up and to the right. Thus, we could have avoided sign epistasis by stipulating that the left and right options are mutually exclusive, e.g., we could have stipulated that the initial population has T at a specific nucleotide site, and that the left and right options are C (transition) and A (transversion). The behavior of such a model would be almost identical to the original. Or we could have stipulated (1) an infinite landscape where each derived genotype has a similar left-right choice with no epistasis, but (2) we are only going to look at the first step.
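As a sketch of how such an epistasis-free variant behaves (this is not the original Yampolsky-Stoltzfus simulation, but a minimal origin-fixation caricature with illustrative rates, using Haldane’s classic ~2s approximation for the fixation probability of a new beneficial mutation):

```python
import random

def p_left_first(u1, s1, u2, s2):
    """Probability that the 'left' option is the first to originate and fix,
    treating each option as a Poisson process whose rate is proportional to
    u * 2s (mutation rate times fixation probability)."""
    r1, r2 = u1 * 2 * s1, u2 * 2 * s2
    return r1 / (r1 + r2)  # first event in a race between two exponentials

# Left: 10-fold mutational advantage; right: 2-fold selective advantage
print(p_left_first(u1=1e-5, s1=0.01, u2=1e-6, s2=0.02))  # ~0.83

# Monte Carlo confirmation via exponential waiting times
rng = random.Random(1)
n = 100_000
wins = sum(rng.expovariate(1e-5 * 2 * 0.01) < rng.expovariate(1e-6 * 2 * 0.02)
           for _ in range(n))
print(wins / n)  # ~0.83: the mutationally favored option usually wins,
                 # with no sign epistasis anywhere in the model
```

The behavior is essentially that of the original two-option model: a bias in introduction shifts the outcome even though all fitness effects are additive.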
More generally, the way to understand this issue more deeply is to contrast two kinds of scenarios: (1) the idealistic scenario in which the evolving system proceeds to the fitness optimum at equilibrium, in infinite time, so that biases in introduction have no final effect, and (2) everything else, i.e., non-idealistic scenarios in which multiple outcomes are possible, and the choice might reflect biases in the introduction of variation.
Many conditions lead to models of the second type, i.e., realistic models (for a larger discussion, see here). For instance, ending evolution with the first beneficial change is the actual model used recently by Cano, et al. (2021), and it corresponds to the natural scenario of antibiotic resistance evolution explored empirically by Payne, et al. (2019). Resistant M. tuberculosis isolates emerge, and they are isolated and analyzed, without waiting for some long-term process of adaptive optimization to take place. They are isolated and analyzed by virtue of having evolved resistance, not by virtue of having reached a global fitness optimum for resistance.
More generally, we could stipulate that the space is large compared to the amount of time to explore it, so that kinetic biases influencing the early steps are important. In the antibiotic resistance scenario, one is literally looking at the first step in evolution, because that is the step that counts. For instance, one could simply posit an infinite space and compare the rates of two origin-fixation processes that differ due to a mutation bias, e.g., GC-increasing or GC-decreasing changes in an infinitely long genome. No sign epistasis is required, and the effect would apply even given completely additive effects.
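As a sketch of that comparison in standard origin-fixation notation, where the rate of evolutionary change is K = 2Nμπ(s) and π(s) ≈ 2s for a beneficial mutation in a large population:

```latex
\frac{K_{+}}{K_{-}} \;=\; \frac{2N\mu_{+}\,\pi(s_{+})}{2N\mu_{-}\,\pi(s_{-})}
\;\approx\; \frac{\mu_{+}\,s_{+}}{\mu_{-}\,s_{-}}
\;=\; \frac{\mu_{+}}{\mu_{-}} \quad \text{when } s_{+} = s_{-}
```

Here + and − denote GC-increasing and GC-decreasing changes: with equal additive fitness effects, the ratio of origin-fixation rates reduces to the mutation bias itself.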
In a more finite model in which the system has time to explore the landscape of possibilities, sign epistasis, diminishing-returns epistasis, and other kinds of frustration can have the effect of locking in consequences of initial steps that are subject to kinetic biases. Such effects are common for protein-coding regions because, from a given starting codon for 1 type of amino acid, only 4 to 7 other amino acids — not all 19 alternatives — are accessible by a single-nucleotide mutation. Thus, even when the effects of amino acid changes are all additive, the landscape is rough for a protein-coding gene evolving by single-nucleotide mutations, so that biases in the introduction process can influence the final state of the system. This effect is evident in the theoretical study by Stoltzfus (2006).
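The 4-to-7 claim is easy to verify directly from the standard genetic code. Here is a minimal sketch that enumerates the single-nucleotide mutational neighborhood of every codon:

```python
# Standard genetic code, built from the canonical TCAG codon ordering
bases = "TCAG"
aa_string = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codons = [a + b + c for a in bases for b in bases for c in bases]
codon_table = dict(zip(codons, aa_string))

def accessible(codon):
    """Amino acids reachable from `codon` by one nucleotide change,
    excluding stop codons and the codon's own amino acid."""
    own = codon_table[codon]
    neighbors = {codon[:i] + b + codon[i + 1:]
                 for i in range(3) for b in bases if b != codon[i]}
    return {codon_table[n] for n in neighbors} - {"*", own}

counts = [len(accessible(c)) for c in codons if codon_table[c] != "*"]
print(min(counts), max(counts))  # -> 4 7: the range cited above
```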
In summary, the true relevance of sign epistasis to understanding the efficacy of biases in the introduction process is roughly as follows. When you haven’t got far from where you started, your path depends a lot on your first steps. So, kinetic biases reflecting what is mutationally likely are going to be relevant to understanding the path of an evolving system when the amount of change is small relative to the size of the space to be explored. When evolution has plenty of time to explore a small finite landscape, as in the Yampolsky-Stoltzfus model, we can still see consequences of kinetic bias when epistasis has the effect of locking in those consequences.