Publication Descriptions
2025
“Biology is low dimensional” is a popular claim, but it can mean many things—some useful, some vague. We review one simple idea with real predictive power: “soft modes”. Soft modes are the easy, preferred ways biological systems respond to change—whether from mutations, environmental shifts, or evolution. They’re nature’s paths of least resistance. The mere existence of a soft mode predicts that mutations and environmental changes often cause similar effects, and that stress responses to the environment can also buffer mutations. Perhaps most surprisingly, soft modes predict that epistasis (how the effect of one gene or mutation depends on others) often follows simple, predictable patterns, whether across residues in a protein, genes in a genome, or species in an ecosystem.
Neural networks running on GPUs are reshaping our world, but GPUs are energy-intensive and costly. Contrastive learning, a simpler alternative to standard training algorithms, can save resources by just comparing initial guesses to correct solutions. We discovered that physical systems can naturally implement contrastive learning if they use integral feedback—a common mechanism as simple as a thermostat controlling room temperature. These systems effortlessly compare past and present states, learning patterns over time. This insight points toward new efficient hardware for artificial neural networks and suggests how even 'brainless' systems, like single cells or mechanical materials, could physically learn from their environment. Lesson: Effective learning doesn't always need sophisticated memory storage; sometimes, all it takes is the right rhythm and a touch of natural forgetfulness.
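For the curious, here is a minimal numpy cartoon of that temporal trick - a toy, not the physical systems of the paper, with every number and update rule below invented purely for illustration. A single linear unit alternates between a 'free' phase (its own guess) and a 'clamped' phase (the correct answer); a leaky integrator plays the role of integral feedback by remembering the recent Hebbian signal, so that updating against this memory amounts to a contrastive comparison of past and present.

```python
# Toy sketch (not the paper's model): temporal contrastive learning where a
# leaky integrator stands in for integral feedback. The unit never stores the
# free and clamped phases side by side; it only compares "now" against a
# slowly forgetting memory of the recent past.
import numpy as np

rng = np.random.default_rng(0)
n = 5
w_true = rng.normal(size=n)     # teacher weights we hope to recover
w = np.zeros(n)                 # learner weights
m = np.zeros(n)                 # leaky integral of the Hebbian signal x*y
tau, lr = 20.0, 0.05            # memory timescale and learning rate

for step in range(2000):
    x = rng.normal(size=n)
    target = w_true @ x

    # Free phase: the unit outputs its own guess while the slow memory m
    # integrates the Hebbian signal until it has mostly forgotten older inputs.
    y_free = w @ x
    for _ in range(100):
        m += (x * y_free - m) / tau

    # Clamped phase: the output is nudged to the correct answer; the update
    # contrasts the present Hebbian signal with the remembered (free) one.
    w += lr * (x * target - m)

print("weight error:", np.linalg.norm(w - w_true))   # should be tiny
```

Because the memory has largely relaxed onto the free-phase signal by the time the clamped phase arrives, the update above reduces to the familiar delta rule without any explicit storage of the two phases.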
2024
Life spends much effort trying to fix errors introduced by the forces of disorder; e.g., across the Central Dogma, molecular machines like the ribosome and polymerases have baroque error-correcting schemes called proofreading that double-check their work at the cost of time and energy. But imagine if trying to speed up a process alone could actually force it to correct errors. It sounds counterintuitive, but we found exactly this: molecular processes pressured to go fast can evolve sophisticated error-correcting mechanisms like proofreading, even without any pressure to avoid mistakes. It's as if typing faster on a messy keyboard spontaneously taught your fingers to avoid typos, purely because typos slow you down so much. Our findings hint that life's earliest molecular machines could have stumbled onto error correction and thus order purely by racing against the clock. Lesson: Sometimes, speeding things up can tidy things up.
Can a soup of molecules act like a neural network? Here, we experimentally demonstrate a molecular system that acts like an associative neural network - similar architecture, pattern-recognition behavior, the ability to expand in a Hebbian manner, phase diagrams, etc. But we didn't design these molecules to act like `neurons' in any sense - they are just molecules following the inevitable physics of nucleation, self-assembly and depletion. You can easily imagine such hidden computational power in molecular collectives being exploited by evolution. Lesson: Neural computation doesn't need networks of linear threshold devices (`neurons'); it can arise through the collective dynamics of other many-body systems.
2023
We review the incipient field of `physical learning' - when can a physical system physically learn desired behaviors by experiencing examples of such behavior? We categorize examples in the molecular and mechanical literature as unsupervised or supervised training. We highlight the main intellectual challenges - e.g., how can learning work despite learning rules having to be local in space and time? What does all of this have to do with adjacent mature fields like biological learning and neuromorphic computing?
Bifurcations are special states in mechanical systems where any linear approximation fails to capture the behavior, even for small deformations. How do we design the behavior near these highly non-linear points? We demonstrate experimentally that mechanical systems can `physically learn' desired behaviors at these points. We fill a creased sheet with soft epoxy and physically subject it to the desired behavior as the epoxy is setting; we find that the epoxy re-distributes itself in just the right way so as to learn the desired behavior at the bifurcation. No computers or design algorithms involved!
2022
Sam Schaffter and others in Schulman's lab created an amazing scalable platform for robust synthetic molecular circuits based on `genelets' (little modules of DNA + RNA + an RNA polymerase). Today, these elements can already be linked together to create multi-stable systems, generate pulses and the like. Tomorrow, maybe these chemical circuits can be combined with mechanical systems in Schulman's lab to reveal a whole new class of materials that can learn a la neural networks.
Multitasking is distracting and often makes you terrible at each of your tasks. But sometimes, if you are stuck on a given task, switching to a different task for a while can help you get unstuck. But how do you know when this will work and what kind of task should you switch to? If the `task' is finding the ground state of a Hamiltonian, we have one simple (analytic) trick to get unstuck - we found an analytically-specified simple alternative Hamiltonian that you should switch to every so often during minimization. Guaranteed* to help you get unstuck (*conditions apply).
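To make the scheduling idea concrete, here is a toy sketch. The paper specifies its alternative Hamiltonian analytically; the rugged energy and the smooth surrogate below are simply made up for illustration, as are all the numbers.

```python
# Toy illustration of the switching schedule only. Plain gradient descent on
# the rugged energy E parks in the nearest local minimum; briefly descending
# an (invented) smooth surrogate E_alt every so often lets the state ratchet
# its way down to a much lower minimum.
import numpy as np

def E(x):           # rugged "Hamiltonian": a quadratic well plus strong ripples
    return 0.5 * x**2 + 4.0 * np.cos(3.0 * x)

def dE(x):
    return x - 12.0 * np.sin(3.0 * x)

def dE_alt(x):      # invented smooth surrogate: just the quadratic envelope
    return x

def descend(x, switch, steps=3000, lr=0.01, period=200, burst=40):
    for t in range(steps):
        if switch and (t % period) < burst:
            x -= lr * dE_alt(x)     # every so often, minimize the surrogate
        else:
            x -= lr * dE(x)         # otherwise, minimize the real energy
    return x

x0 = 7.0
print("stuck      :", E(descend(x0, switch=False)))   # trapped near the start
print("with switch:", E(descend(x0, switch=True)))    # ends up much lower
```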
2021
Criss-crossed phone lines sound like a bad idea if you want to communicate from one specific place to another (at least back when phones had lines). Can it ever make sense to connect phone lines from different houses into one messy ball of wires and then fan them out to different destinations? It turns out that molecules can handle such a messy ball just fine - the collective dynamics of these molecules will eventually sort out messages to the right destinations. In fact, there are advantages to being this messy.
Active matter is seen as a metaphor for biology because active matter is out of equilibrium. But lots of physical phenomena are out of equilibrium and most are boring. What makes biological phenomena uniquely interesting is the ability to regulate the flow of energy in functional ways. Here, we ask if regulating activity in a space-time dependent way can create interesting organization in a minimal active matter system.
A review. Everyone talks about biology in time-varying environments, but only in some cases do time-varying environments do something truly novel - there is no way to understand the resulting behavior in terms of any averaged or effective static environment. This review, by leaders in many distinct parts of biology, explores such conceptual questions in different areas and surveys the promising theoretical and experimental approaches.
2020
We introduced the idea of “physical learning” in mechanical systems, i.e., training of physical systems in the physical world but in ways analogous to the way neural networks are trained on a computer. In this framework, a complex mechanical system is physically subject to examples of a desired behavior, which leads to changes in the mechanical material (e.g., stiffening or softening in different places); these changes result in a `trained' material with the desired behavior. After learning through these physics-driven local rules, the trained material responds correctly even to patterns it has never encountered before, all without any digital memory or external guidance. This physical learning works because folding naturally reshapes the material's internal landscape, creating distinct responses to different patterns. Since this paper, a large body of work has explored such learning in mechanical systems. See also our work on learning in the molecular realm and our 2023 review on the field as a whole.
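A cartoon of that logic in code - not the actual crease mechanics of the paper, with the local rule and every number below invented for illustration: each crease softens in proportion to how much it folds while the training example is imposed, and the trained sheet then reproduces that pattern in response to a featureless poke.

```python
# Cartoon only: a purely local "soften where you fold" rule reshapes the
# sheet's internal landscape so that its linear response picks out the
# trained fold pattern, with no digital memory anywhere.
import numpy as np

rng = np.random.default_rng(1)
M = 50
k = np.ones(M)                        # crease stiffnesses, all equal at first
pattern = rng.choice([0.0, 1.0], M)   # desired fold pattern (the training example)

# Training: repeatedly impose the desired folds; each crease softens locally,
# in proportion to how strongly it is folded during the example.
for _ in range(100):
    k -= 0.008 * pattern * k

# Test: a uniform, featureless poke now folds the softened creases the most,
# so the sheet reproduces the trained pattern without being told what it was.
response = np.ones(M) / k             # linear response: fold ~ 1/stiffness
recovered = (response > 1.5).astype(float)
print("trained pattern reproduced:", np.array_equal(recovered, pattern))
```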
2017
Can a physical system act like a neural network through its own natural physical dynamics? Surprisingly, yes! We showed that molecular systems, through the simple physics of self-assembly, can recognize and categorize complex patterns in the concentrations of molecules, much like neural networks. These molecules naturally assemble together into different structures based on subtle differences in their environment, effectively 'remembering' and reconstructing patterns—even noisy or incomplete ones. This process doesn't need sophisticated machinery, just basic interactions that follow from a local Hebbian-inspired learning rule. Lesson: Even molecules without brains can exhibit memory and pattern recognition—just add the right chemistry!
2014
Inspired by associative memory in Hopfield's neural networks, we generalized the self-assembly framework to a soup of particles (proteins/DNA tiles) that can simultaneously 'store' the ability to assemble multiple different structures. Such a soup of particles can then assemble ('retrieve') any one of the stored structures ('memories') when presented with a signal vaguely reminiscent of one of those memories ('association'). However, store one too many memories and promiscuous interactions between particles prevent faithful retrieval of any memory. Secretly, such self-assembly is conceptually similar to hippocampal place-cell networks and equivalent spin-glass models.
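Stripped of the molecular details, the store/retrieve/overload logic here is the classic Hopfield construction. Below is a minimal sketch, with particles abstracted into ±1 spins and all parameters chosen only for illustration.

```python
# Minimal Hopfield-style sketch of store / retrieve / overload. Patterns are
# stored in a Hebbian coupling matrix; a corrupted cue is cleaned up when few
# memories are stored, while 'promiscuous' cross-talk between too many stored
# memories ruins retrieval.
import numpy as np

rng = np.random.default_rng(0)
N = 200                                            # number of spins ("particles")

def retrieval_quality(P, flip=0.2, sweeps=20):
    patterns = rng.choice([-1, 1], size=(P, N))    # the stored structures
    J = (patterns.T @ patterns) / N                # Hebbian couplings
    np.fill_diagonal(J, 0.0)
    s = patterns[0].copy()                         # start from memory 0...
    s[rng.random(N) < flip] *= -1                  # ...with a fraction flipped
    for _ in range(sweeps):                        # relax toward a fixed point
        s = np.sign(J @ s + 1e-9)
    return np.mean(s == patterns[0])               # fraction matching memory 0

print("5 memories  :", retrieval_quality(P=5))     # ~1.0: the cue is repaired
print("100 memories:", retrieval_quality(P=100))   # well short of 1: cross-talk
```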
Usually, non-equilibrium error correction is understood to increase the occupancy of the ground state and reduce the occupancy of all higher energy states. However, we found that a proofreading mechanism can act differently in different energy bands, reducing occupancy of unstable states in a given energy band while increasing the occupancy of less stable higher energy states ('anti-proofreading'). And you can switch between different designer occupancies of states by simply changing the external driving forces.
2012
A new twist on the classic model of kinetic proofreading. Proofreading uses 'catastrophes' to slow down biochemical reactions while improving their fidelity. We introduced `rescues' that mitigate catastrophes and speed up reactions at the cost of increased errors. Surprisingly, we found a non-equilibrium phase transition as you tune the rescue rate. At this transition, you achieve, loosely speaking, 80% of the max possible error-correction at only 20% of the max speed cost. Why would you go any further (as the traditional limit does) unless you really care about errors and really don't care about speed at all? We took the terms catastrophes and rescues from non-equilibrium microtubule growth. The connection to 'dynamic instability' of microtubules suggests a broader context for proofreading as a stochastic search strategy, balancing exploration and exploitation.
2008
We showed that entanglement entropy, a concept traditionally studied in condensed matter physics, provided a sharp new theoretical tool for long-standing questions in particle physics on the confinement of quarks within protons/neutrons. By studying a geometric problem that was a holographic `dual' of the real problem, we saw a surprising transition: when you try to entangle two regions of space, their entanglement structure suddenly changes at a critical size, a signature of how quarks get confined at a specific length scale. In the years since, entanglement entropy has proven useful in a variety of other contexts in string theory and high energy physics.
2006
Many observations about the surprising uniformity of the large-scale structure of the universe are explained by cosmic inflation. In this model, a "ball" (the inflaton) gently rolls down a smooth hill, causing the universe to expand rapidly. But what is this "ball" and what is the hill it is rolling down? We built a mechanistic model of inflation based on string theory, with different membrane-like objects (D3 and D7 branes) and their interactions playing the role of the ball and the hill. We computed the shape of the hill predicted by this mechanistic theory of cosmic inflation, which has consequences for experimental signatures of inflation (e.g., in the cosmic microwave background).