Reductionism is not effective
On causal emergence and 'top-down' intervention
ANTEATER: Now the state can be described on a low level or on a high level. A low-level description of the state of an ant colony would involve painfully specifying the location of each ant, its age and caste, and other similar items. A very detailed description, yielding practically no global insight as to why it is in that state. On the other hand, a description on a high level would involve specifying which symbols could be triggered by which combinations of other symbols, under what conditions, and so forth.
— Ant Fugue | Gödel, Escher, Bach
If you want to influence the actions of an individual, you don’t do so by altering their genes. Instead, you introduce new information to a more conserved level of spatial organization—you may think of this as the brain, consciousness, or something in between.
So why, when we want to bring about other whole-organism outcomes, do we seek intervention strategies at the lowest possible levels of function?
The answer is that we shouldn’t.
My friend Vance talks about how no one likes to put themselves in a box; we all want to present ourselves as special in some way. His advice: to put forward a unique hook in an introduction while remaining grounded enough to be understood, place yourself at the center of a Venn diagram.
For me that Venn diagram would comprise molecular biology and software engineering. More specifically, I use computational tools to come to novel insights about how biological systems function so that I can determine opportunities for intervention that others haven't tried before.
The intervention strategy that I have been homing in on for the last year as I do this work is what I would call a 'top-down' intervention strategy. In my world, this would be considered somewhat heretical. Most of science is centered on reductionism, the idea that a system can be better understood and controlled by coming to a more complete understanding of how its constituent parts work. This is what brought us from Newton's laws of motion all the way down to the world of quarks and strings in physics. We assumed that as we continued to probe deeper, we would come to better insights about how these systems function. However, we haven't. The opposite has happened. We've only become more confused and uncertain.
So, how does this apply to the work I am doing in complex biological systems? I am looking for ways to intervene and bring about specific outcomes that don't follow this reductionist dogma. I am doing this by exploring causal emergence across different scales of information processing. The two specific scales I work on are gene regulatory networks and biochemical signaling. In its most basic form, each of these levels of organization can be represented as a network.
In a gene regulatory network, the sections of DNA that code for specific proteins can be represented as nodes, and the proteins they code for can be represented as the edges in that network. In a biochemical network, the ion channels and gap junctions between cells can be represented as the nodes, and the small molecules that pass through these channels can be represented as the information-carrying edges.
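To make the shared structure explicit, here is a minimal sketch in Python. The gene, protein, and molecule names are hypothetical, chosen purely for illustration; the point is that both levels reduce to the same data structure, a set of labeled directed edges:

```python
# Gene regulatory network: genes as nodes, the proteins they encode as
# directed, labeled edges to the genes they regulate.
# (geneA, geneB, etc. are made-up placeholder names.)
grn = {
    ("geneA", "geneB"): "proteinA",   # geneA's protein regulates geneB
    ("geneB", "geneC"): "proteinB",
}

# Biochemical signaling network: channels/junctions as nodes, small
# molecules passed through them as the information-carrying edges.
signaling = {
    ("gap_junction_1", "ion_channel_2"): "calcium",
}

# Either network walks the same way: source --[carrier]--> destination.
for (src, dst), carrier in grn.items():
    print(f"{src} --[{carrier}]--> {dst}")
```

Once both scales are in this common form, the same information-theoretic machinery can be applied to either one.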
In both of these types of networks, we can assign classifications to components using partial information decomposition, a tool from information theory. A component's information can fall into one of three classes: it can be redundant, unique, or, at a higher scale, synergistic.
Let’s say Bob and Alice both have information that would affect how Jack acts in the future. If Bob and Alice each have unique information for Jack, he will take one course of action based on Bob’s information and another based on Alice’s. If the information is redundant, then whether Jack heard Bob's version or Alice’s, his subsequent action would be the same.
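This can be made concrete with plain mutual information (a toy sketch, not the full partial information decomposition machinery). In the redundant case, Bob alone already tells you Jack's action; in the synergistic case (Jack acts on the XOR of the two messages), neither source alone tells you anything, but the two together tell you everything:

```python
import math
from itertools import product

def mutual_info(pairs):
    """I(X; Y) in bits, from a list of equally likely (x, y) outcomes."""
    n = len(pairs)
    def dist(items):
        counts = {}
        for it in items:
            counts[it] = counts.get(it, 0) + 1
        return {k: v / n for k, v in counts.items()}
    p_xy = dist(pairs)
    p_x = dist([x for x, _ in pairs])
    p_y = dist([y for _, y in pairs])
    return sum(p * math.log2(p / (p_x[x] * p_y[y]))
               for (x, y), p in p_xy.items())

# All four (bob, alice) bit pairs, equally likely.
bits = list(product([0, 1], repeat=2))

# Redundant: Alice just repeats Bob, and Jack copies whichever he hears.
red = [((b, b), b) for b in [0, 1]]
# Synergistic: Jack acts on the XOR -- neither source alone predicts him.
syn = [((b, a), b ^ a) for b, a in bits]

print(mutual_info([(b, j) for (b, _), j in red]))  # Bob alone: 1.0 bit
print(mutual_info([(b, j) for (b, _), j in syn]))  # Bob alone: 0.0 bits
print(mutual_info(syn))                            # Bob & Alice jointly: 1.0 bit
```

The synergistic case is the interesting one: the predictive information lives only at the level of the Bob-Alice pair, not in either individual edge.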
What causal emergence suggests is that we can take an example like the latter and change how we look at Bob and Alice’s information edges to Jack to increase the causal relevance of the model. We would do this by combining Bob and Alice into a new node feeding information to Jack called Bob-Alice. This new Bob-Alice node increases our ability to understand why Jack does what he will do next, because we are not left wondering whether he received his information from Bob or Alice alone.
Beyond the toy example outlined above, this combination of redundant information atoms into synergistic information atoms becomes very useful because it decreases our uncertainty in tracking causality across a complex network of interactions.
This is where causal emergence comes from, and these are the parts of both networks that I am looking for, and whose changes over time I am working to observe. I am looking for these synergistic information atoms across two different scales of spatial organization because I believe that synergistic information at the lower of the two scales hints at parts of the network that can be changed with greater certainty by intervening at the higher scale.
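Varley and Hoel's framework (referenced at the end of this post) quantifies this with effective information (EI): the mutual information between a maximum-entropy intervention on a system's states and the resulting next states. Here is a minimal sketch with a made-up four-state micro system, showing EI rise when three redundant micro states are grouped into one macro state:

```python
import math

def effective_information(tpm):
    """EI = I(X; Y) under a uniform (max-entropy) intervention on state X."""
    n = len(tpm)
    # Marginal distribution of the next state Y when X is uniform.
    p_y = [sum(row[j] for row in tpm) / n for j in range(n)]
    h_y = -sum(p * math.log2(p) for p in p_y if p > 0)
    # Average conditional entropy H(Y | X = x) over the interventions.
    h_y_given_x = sum(
        -sum(p * math.log2(p) for p in row if p > 0) for row in tpm
    ) / n
    return h_y - h_y_given_x

# Micro: states 0-2 hop randomly among themselves; state 3 is fixed.
micro = [
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [0,   0,   0,   1],
]
# Macro: group {0, 1, 2} into A and {3} into B -> deterministic 2-state map.
macro = [
    [1, 0],
    [0, 1],
]

print(round(effective_information(micro), 3))  # noisy micro map: 0.811 bits
print(round(effective_information(macro), 3))  # deterministic macro map: 1.0 bit
```

On this toy system, the coarse-grained macro description carries more effective information than the noisy micro description it summarizes; that gain is exactly what the causal emergence framework formalizes.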
ACHILLES: You're right-I bypassed the lower levels, and saw only the top. It's too bad that the top level doesn't contain all the information about the bottom level, so that by reading the top, one also learns what the bottom level says. But I guess it would be naive to hope that the top level encodes anything from the bottom level-it probably doesn't percolate up.
— Ant Fugue | Gödel, Escher, Bach
We mapped the whole human genome back in 2003, and we've had CRISPR to precisely edit that genome for over ten years now, so why can't we yet change the parts of the genome responsible for certain functions and reliably achieve the outcomes we want? Because, as I hinted at in the beginning, this increase in information about the scale we want to change hasn't given us any new insight into how this scale of organization functions. All of the GRNs I've observed are less than 60% effective, meaning the reliability of achieving outcomes from given inputs is far from a guarantee. Most research being done today is not actually about finding out how to achieve the outcome we want; it's about finding out how to make it happen more reliably.
Take cellular reprogramming as an example: a process by which experimentalists administer some combination of master transcription factors in order to regress a cell from a differentiated state back to a pluripotent, or stem cell, state. When Yamanaka first achieved this, it was revolutionary. We thought we'd unlocked the key to rolling cells back to younger states, and that we'd no longer need to harvest stem cells from embryos for research. The only problem is that the process is less than 1% effective, so achieving this outcome with any utility in vitro or in an organism has proven virtually impossible, at least by intervening at this scale.
ACHILLES: Gee! That is an amazing wraparound. They were completely unconscious of what they were participating in. Their acts could be seen as part of a pattern on a higher level, but of course they were completely unaware of that. Ah, what a pity-a supreme irony, in fact -that they missed it.
— Ant Fugue | Gödel, Escher, Bach
After we had the scientist Michael Levin on the podcast, I came across a paper showing that his lab had achieved something called deacylation through an intervention at the biochemical level I've referred to a few times. The effectiveness of this intervention was much higher than that of any intervention at the GRN level aiming at the same outcome. This is what completely changed the way I was thinking about solving this problem. Deacylation is one of the crucial actions during cellular reprogramming, in which DNA damage and the chromatin/structural modifications that accumulate over time are undone, reintroducing redundancy into gene expression pathways.
This is what got me started looking at this higher level of interaction for places to intervene to bring about desired outcomes with greater precision and certainty. It is also what made me start to consider the ways in which information is conserved not only across different spatial scales, but also across temporal scales, i.e., time.
There are two ideas I want to highlight here. The first is Landauer's principle, which suggests a unity between energy and information; the second is the free energy principle, which says that systems evolving through time adapt and change in ways that minimize the amount of energy required to do the same things over and over again. Energy minimization is synonymous with reductions in the uncertainty of outcomes when tracking the dynamics of a complex system, so when I take these two ideas in concert, it reads to me that the same information-theoretic ideas I mentioned above can also be applied across temporal scales in addition to spatial ones.
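For reference, Landauer's principle puts a hard physical number on that unity: erasing one bit of information must dissipate at least

E ≥ k_B · T · ln 2

of heat, where k_B is Boltzmann's constant and T is the absolute temperature. At body temperature (roughly 310 K), that floor works out to about 3 × 10⁻²¹ joules per bit, so a system that compresses or prunes its own informational states is saving real, physical energy.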
Intuitively, this makes sense. Time and space are one and the same after all. So what makes this useful? Back to the context of biological systems, I see changes taking place that the broader scientific community categorizes as damage accumulation, but that I see to be adaptations to conserve energy and information.
A domain that is ripe for this kind of perspective change is the study of epigenetic changes and modifications to DNA structure. As these accumulate over time, the breadth of gene expression in an organism decreases. In other words, the options our body has available for achieving a healthy mode of function are reduced. Think about the energy you expend when you have something you want to achieve but multiple ways of arriving at that outcome. If this is an outcome you work toward repeatedly, over time you will reinforce certain paths of action and prune others, saving you the energy required to deliberate over that same breadth of options in the future.
This is ideal for conserving energy; however, this energy-minimizing mode of function comes at the cost of resilience to stressors and other dynamic variables. That trade-off is why most people looking into these mechanics view the process as simply ‘scratches on a CD’, losses of information via damage, rather than the downstream effect of a conservation process.
ANTEATER: In an ant colony all ants work for the common good, even to their own individual detriment at times. Most human beings are not aware of anything about their neurons; in fact they probably are quite content not to know anything about their brains, being somewhat squeamish creatures.
— Ant Fugue | Gödel, Escher, Bach
At every scale of spatial order, the parts at a lower scale are organized to be sacrificed for the sake of the whole at a higher scale. Cells used to be the only life on this planet, and before that, there were even more basic forms of self-replicating systems. Over time, through emergence, more complex structures evolved, and the incentives changed for what used to be independent, goal-oriented systems. In the terms used above, redundant information atoms combined to conserve energy and information and to create a higher-scale structure.
And so I believe that to achieve more temporally certain outcomes in working toward the objectives mentioned here, and others like them in complex systems, we need to plot interventions at a higher scale of spatial organization rather than at the lowest.
If we think about the ways we want to intervene not as a means of undoing damage but rather as a means of reintroducing redundancy, or uncertainty, at a lower scale of spatial organization, this completely reframes how we aim to address the problem. We can take what we know from these theories to bring about reliable, holistic outcomes.
ACHILLES: It is interesting to me to compare the merits of the descriptions at various levels. The highest-level description seems to carry the most explanatory power, in that it gives you the most intuitive picture of the ant colony, although strangely enough, it leaves out seemingly the most important feature-the ants.
ANTEATER: But you see, despite appearances, the ants are not the most important feature. Admittedly, were it not for them, the colony wouldn't exist; but something equivalent-a brain-can exist, ant-free. So, at least from a high-level point of view, the ants are dispensable.
ACHILLES: I'm sure no ant would embrace your theory with eagerness.
ANTEATER: Well, I never met an ant with a high-level point of view.
CRAB: What a counterintuitive picture you paint, Dr. Anteater. It seems that, if what you say is true, in order to grasp the whole structure, you have to describe it omitting any mention of its fundamental building blocks.
— Ant Fugue | Gödel, Escher, Bach
I am orienting toward investing more of my time in pursuing these ideas. I wrote this piece to practice communicating them with less domain-specific jargon. I am very open to feedback and counter-perspectives. To follow what I’m paying attention to in more detail, you can view my research here.
When they go low, go high.
Send me signal on Urbit: ~padlyn-sogrum
Varley, Thomas F. and Hoel, Erik (2022). Emergence as the conversion of information: a unifying theory. Phil. Trans. R. Soc. A 380: 20210150. http://doi.org/10.1098/rsta.2021.0150
Thank you Erik Hoel for the clarification on this section after reviewing a draft. Go check out his Substack over at The Intrinsic Perspective.
Anderson, Benjamin, “Causal Emergence in Biological Networks”, TheBenjam.in (2022-07-23), available at https://www.thebenjam.in/research/.
Takahashi K, Yamanaka S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell. 2006 Aug 25;126(4):663-76. doi: 10.1016/j.cell.2006.07.024. Epub 2006 Aug 10. PMID: 16904174.
Takahashi K. Cellular reprogramming. Cold Spring Harb Perspect Biol. 2014 Feb 1;6(2):a018606. doi: 10.1101/cshperspect.a018606. PMID: 24492711; PMCID: PMC3941237.