Descriptions of Select Publications
2025
How can a glob of molecules make smart decisions in a cell? One option is to engineer them like parts in a circuit. But we’re more interested in embodied computation, where molecules do useful things just by bouncing around and interacting in messy, natural ways. Based on past work by us and others, we suggest that phase boundaries, the same physics behind ice melting, can be seen as decision boundaries, like those in neural networks. Cross the line and proteins condense; stay on the other side and they don’t. That makes for sharp, physical if/then switches.
We show how to measure the computational power inherent in this physics by asking how “wiggly” the phase boundaries are, much as machine learning models are judged by how intricately they carve up input space. By tweaking the number of species, their valency, or the environment, condensates become more or less powerful collective decision-makers.
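A toy of the idea (the condensation criterion below is invented for illustration, not the paper's model): label points in a two-component concentration space as condensed or dilute, then ask how well a straight line can stand in for the phase boundary. The gap between the best line and the truth is the "wiggliness", i.e., the computational expressivity.

```python
# A cartoon of "phase boundaries as decision boundaries": label points in a
# toy 2-component concentration space as condensed/dilute, then ask how well
# a straight line can mimic the boundary. A boundary no line can match is
# "wiggly", i.e., computationally expressive.
import numpy as np

rng = np.random.default_rng(0)
c = rng.uniform(0, 1, size=(4000, 2))          # (c1, c2) concentrations

# Hypothetical condensation criterion with a curved boundary: cooperative
# attraction (c1*c2 term) competing with a composition-imbalance penalty.
def condenses(c1, c2):
    return 4.0 * c1 * c2 - 1.5 * (c1 - c2) ** 2 > 0.6

y = condenses(c[:, 0], c[:, 1]).astype(int)

# Best linear "decision boundary" via logistic regression in plain numpy.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(c @ w + b)))
    g = p - y
    w -= 0.1 * (c.T @ g) / len(y)
    b -= 0.1 * g.mean()
lin_acc = (((c @ w + b) > 0).astype(int) == y).mean()
print(f"linear stand-in accuracy: {lin_acc:.2f}")  # < 1.0 => wiggly boundary
```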
We usually think of genomes as long molecules bundled into chromosomes, a structure so familiar we rarely question it. Here, we explore a very different architecture: information spread across many short molecules that cooperate and compete. No single strand holds the full message; meaning emerges from their interactions. This virtual circular genome has a surprising benefit: it naturally suppresses errors. Replication depends on help from molecular neighbors—like an ecosystem where mutants don’t easily fit. So bad copies are filtered out by the architecture itself. Biology already flirts with wild genome setups, from scrambled DNA that reassembles itself to cells with multiple nuclei playing different roles. These architectures may be especially useful for early life or minimal synthetic cells that can’t afford complex error correction.
Lesson: who needs fancy error correction when your genome architecture just refuses to work with weirdos?
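A cartoon of the filtering mechanism (the sequence, fragment length, and overlap rule below are all invented for illustration): the genome lives only as overlapping short fragments, and a fragment gets copied only if a pool neighbor matches it exactly in the overlap.

```python
# Toy "virtual circular genome": a fragment can be extended (replicated)
# only when a pool neighbor overlaps it exactly. Mutations in the overlap
# find no exact-matching neighbor, so the architecture filters them out.
import random
random.seed(1)

GENOME = "ACGTTGCAACGGATCCTAGGCTAA"   # hypothetical circular genome
L, K, OV = len(GENOME), 8, 4          # genome size, fragment length, overlap

def frag(start, seq=GENOME):
    return "".join(seq[(start + j) % L] for j in range(K))

pool = [(s, frag(s)) for s in range(L)]   # wild-type fragments tiling the circle

def extension_ok(start, seq):
    """A fragment grows only if the pool neighbor starting OV bases before
    its 3' end matches its last OV bases exactly."""
    s2 = (start + K - OV) % L
    neighbor = dict(pool)[s2]
    return seq[-OV:] == neighbor[:OV]

def mutate(seq):
    i = random.randrange(K)
    return seq[:i] + random.choice("ACGT".replace(seq[i], "")) + seq[i + 1:]

wt_ok  = sum(extension_ok(s, f) for s, f in pool)
mut_ok = sum(extension_ok(s, mutate(f)) for s, f in pool)
print(f"wild-type fragments extendable: {wt_ok}/{L}")
print(f"mutant fragments extendable:    {mut_ok}/{L}")
```

In this toy only one overlap is checked, so mutations outside it slip through; a real fragment pool checks overlaps on both ends and at every replication step, tightening the filter further.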
We usually think of circuits as rigid and designed. But while studying microbial metabolism, we realized something surprising: microbial ecosystems can act like living circuits, where individual connections, or “edges”, dynamically adjust based on how much energy flows through them. It's like power lines that not only carry electricity but thicken or thin themselves depending on usage. In this case, those “wires” represent different redox metabolic pathways, the diverse chemical routes microbes use to extract energy. But there’s no master controller, just local feedback. Weak links strengthen, neglected paths light up, and new circuits emerge, all tuned to the flow of metabolism.
In physics, "non-equilibrium" often means that a fixed energy drive creates a fixed amount of structure. In these living systems, the loop is closed. Energy builds structure, and structure makes it easier to grab more energy. The result is a self-bootstrapping, self-wiring circuit that climbs further from equilibrium, one rewired connection at a time.
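A sketch in the spirit of adaptive transport networks (the graph, rates, and exponent below are invented; in the paper the "edges" are redox metabolic pathways, not pipes): edge conductances grow with the flux they carry and decay otherwise, with no master controller.

```python
# Self-reinforcing "wiring" from local feedback: conductances grow with the
# flux they carry ("use it or lose it"), and flow concentrates onto one route.
import numpy as np

# Small graph: node 0 = source, node 3 = sink, two competing routes.
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (1, 2)]
D = np.full(len(edges), 1.0)                    # adaptive conductances
length = np.array([1.0, 1.0, 1.5, 1.5, 1.0])

def solve_flows(D):
    """Kirchhoff's laws: unit current injected at node 0, removed at node 3."""
    n = 4
    Lap = np.zeros((n, n))
    for (i, j), d, l in zip(edges, D, length):
        g = d / l
        Lap[i, i] += g; Lap[j, j] += g
        Lap[i, j] -= g; Lap[j, i] -= g
    rhs = np.array([1.0, 0, 0, -1.0])
    p = np.zeros(n)
    p[:-1] = np.linalg.solve(Lap[:-1, :-1], rhs[:-1])   # ground node 3
    return np.array([D[k] / length[k] * (p[i] - p[j])
                     for k, (i, j) in enumerate(edges)])

for t in range(200):                   # local feedback, no global planner
    Q = solve_flows(D)
    D += 0.1 * (np.abs(Q) ** 1.5 - D)  # growth with flux, decay without

print(np.round(D, 3))   # conductance concentrates on the favored route
```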
“Biology is low dimensional” is a popular claim, but it can mean many things, some useful, some vague. We review one simple idea with real predictive power: “soft modes”. Soft modes are the easy, preferred ways biological systems respond to change, whether from mutations, environmental shifts, or evolution. They're nature's paths of least resistance. The mere existence of a soft mode predicts that mutations and environmental changes often cause similar effects, and that stress responses to the environment can also buffer mutations. Perhaps most surprisingly, soft modes predict that epistasis (how the effect of one gene or mutation depends on others) often follows simple, predictable patterns, whether across residues in a protein, genes in a genome, or species in an ecosystem.
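A minimal numerical illustration of the key prediction, using a toy Hessian rather than any particular biological system: when one mode is much softer than the rest, responses to very different perturbations all look alike.

```python
# One soft mode makes perturbations look alike: linear responses to random
# "mutations" and "environmental shifts" all align with the soft direction.
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Stiff random landscape plus one direction with tiny curvature.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
evals = np.concatenate(([0.01], rng.uniform(5, 10, n - 1)))  # one soft mode
H = Q @ np.diag(evals) @ Q.T            # Hessian at the optimum
soft = Q[:, 0]

# Linear response to random perturbing forces f: dx = H^{-1} f.
f1, f2 = rng.normal(size=n), rng.normal(size=n)  # "mutation", "environment"
dx1, dx2 = np.linalg.solve(H, f1), np.linalg.solve(H, f2)

cos = lambda a, b: abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"alignment of the two responses: {cos(dx1, dx2):.2f}")  # near 1
print(f"alignment with the soft mode:   {cos(dx1, soft):.2f}") # near 1
```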
Neural networks running on GPUs are reshaping our world, but GPUs are energy-intensive and costly. Contrastive learning, a simpler alternative, can save resources by just comparing initial guesses to correct solutions. We discovered that physical systems can naturally implement contrastive learning if they use integral feedback—a common mechanism as simple as a thermostat controlling room temperature. These systems effortlessly compare past and present states, learning patterns over time. This insight points toward new efficient hardware for artificial neural networks and suggests how even 'brainless' systems, like single cells or mechanical materials, could physically learn from their environment.
Lesson: Effective learning doesn't always need sophisticated memory storage; sometimes, all it takes is the right rhythm and a touch of natural forgetfulness.
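Our reading of the mechanism in a minimal linear toy (the variable names and rates below are ours, not the paper's): a slow leaky integrator remembers the Hebbian signal from the free phase, so clamping the output and comparing against that memory yields a contrastive update with no stored digital copy of the free state.

```python
# Contrastive learning via integral feedback, linear toy: the integrator m
# low-pass filters the free-phase Hebbian signal; the clamped-phase signal
# minus m is the contrastive (here, delta-rule) update.
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=3)
X = rng.normal(size=(200, 3))
Y = X @ w_true                         # targets the system is shown

w = np.zeros(3)
m = np.zeros(3)                        # integral-feedback memory trace
tau, eta = 5.0, 0.05
for epoch in range(40):
    for x, y_clamped in zip(X, Y):
        # free phase: output relaxes to y_free = w @ x while the slow
        # integrator tracks the Hebbian signal y_free * x
        for _ in range(25):
            m += (np.dot(w, x) * x - m) / tau
        # clamped phase: output nudged to the target; instantaneous Hebbian
        # signal minus the integrated past gives the contrastive update
        w += eta * (y_clamped * x - m)

print(f"weight error: {np.linalg.norm(w - w_true):.3f}")  # ~0
```

The point of the construction: the "memory" of the free phase lives in the slow physical variable m, not in stored data, and the leak (forgetting) is exactly what keeps the comparison honest.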
2024
Life spends much effort trying to fix errors introduced by the forces of disorder; e.g., across the Central Dogma, molecular machines like the ribosome and polymerases have baroque error-correcting schemes, called proofreading, that double-check their work at the cost of time and energy. But imagine if trying to speed up a process could, on its own, force it to correct errors. It sounds counterintuitive, but we found exactly this: molecular processes pressured to go fast can evolve sophisticated error-correcting mechanisms like proofreading, even without any pressure to avoid mistakes. It's as if typing faster on a messy keyboard spontaneously taught your fingers to avoid typos, purely because typos slow you down so much. Our findings hint that life's earliest molecular machines could have stumbled onto error correction, and thus order, purely by racing against the clock.
Lesson: Sometimes, speeding things up can tidy things up.
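A toy calculation of the effect (rates invented for illustration, not the paper's model): wrong substrates bind as often as right ones but stall, so an ejection rate d (the proofreading knob) rescues the time wasted on stalled complexes. The fastest policy already has d > 0, and errors drop as a side effect.

```python
# Selection for speed discovers proofreading: sweep the ejection rate d and
# watch throughput and error. d=0 is both the slowest AND most error-prone.
import numpy as np

k_right, k_wrong, t_bind = 1.0, 0.01, 1.0   # wrong substrates stall (tiny k)

def throughput_and_error(d):
    # per binding attempt: half right, half wrong; bound complexes either
    # complete (rate k) or get ejected (rate d) and return to the pool
    p_complete = 0.5 * k_right / (k_right + d) + 0.5 * k_wrong / (k_wrong + d)
    t_attempt  = t_bind + 0.5 / (k_right + d) + 0.5 / (k_wrong + d)
    time_per_product = t_attempt / p_complete       # renewal-reward average
    error = (0.5 * k_wrong / (k_wrong + d)) / p_complete
    return time_per_product, error

for d in [0.0, 0.1, 0.5, 1.0, 5.0]:
    t, e = throughput_and_error(d)
    print(f"eject rate d={d:4.1f}:  time/product={t:6.1f}   error={e:.3f}")
# The speed-optimal d is nonzero; selecting for speed alone turns on ejection.
```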
Can a soup of molecules act like a neural network? Here, we experimentally demonstrate a molecular system that acts like an associative neural network - similar architecture, pattern-recognition behavior, the ability to expand in a Hebbian manner, phase diagrams, etc. But we didn't design these molecules to act like `neurons' in any sense - they are just molecules following the inevitable physics of nucleation, self-assembly and depletion. You can easily imagine such hidden computational power in molecular collectives being exploited by evolution. Lesson: Neural computation doesn't need networks of linear threshold devices (`neurons'); it can arise through the collective dynamics of other many-body systems.
2023
We usually train a system to solve a task—lift this, classify that. But what if we trained it to get better at learning? In this work, we show that even soft materials can "learn to learn" if trained the right way. Instead of repeating the same task, we challenge them to physically adapt to a sequence of different tasks. This pushes the system to find flexible strategies that make it easy to shift from one solution to another. We find materials that behave like proteins switching folds with a single tweak, or elastic networks that flip from expanding under pressure to compressing instead.
The learning-to-learn solutions we find here aren't about doing more - they're about positioning yourself so that the path from one task to the next is shorter. Even a blob of soft matter can become a nimble learner with the right kind of challenge.
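The "shorter path" intuition is the same one behind meta-learning algorithms like MAML. Here is a generic cartoon with quadratic tasks (not the paper's soft-matter model): choose a starting point that minimizes loss after one adaptation step on each task, rather than the average loss itself.

```python
# "Learning to learn" as positioning: with quadratic tasks, the meta-optimal
# start has a closed form and adapts faster than the naive multi-task
# compromise, even though it is worse before adaptation.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1                                   # inner adaptation step size
tasks = []                                    # (curvature A, optimum x*)
for _ in range(5):
    M = rng.normal(size=(2, 2))
    tasks.append((M @ M.T + 0.5 * np.eye(2), rng.normal(size=2)))

def post_step_loss(x):                        # mean loss after one step/task
    total = 0.0
    for A, xs in tasks:
        x1 = x - alpha * A @ (x - xs)         # one gradient step on this task
        total += 0.5 * (x1 - xs) @ A @ (x1 - xs)
    return total / len(tasks)

# Closed forms: the meta-optimum weights tasks by B_i = (I-aA_i)' A_i (I-aA_i);
# the naive multi-task optimum weights them by A_i alone.
I = np.eye(2)
B = [(I - alpha * A).T @ A @ (I - alpha * A) for A, _ in tasks]
x_meta  = np.linalg.solve(sum(B), sum(Bi @ xs for Bi, (_, xs) in zip(B, tasks)))
x_naive = np.linalg.solve(sum(A for A, _ in tasks),
                          sum(A @ xs for A, xs in tasks))

print(f"post-adaptation loss from meta start:  {post_step_loss(x_meta):.3f}")
print(f"post-adaptation loss from naive start: {post_step_loss(x_naive):.3f}")
```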
We review the incipient field of `physical learning' - when can a physical system physically learn desired behaviors by experiencing examples of such behavior? We categorize examples in the molecular and mechanical literature as unsupervised or supervised training. We highlight the main intellectual challenges - e.g., how can learning work despite learning rules having to be local in space and time? What does all of this have to do with adjacent mature fields like biological learning and neuromorphic computing?
Bifurcations are special states in mechanical systems where any linear approximation completely misses the picture, even for small deformations. How do we design the behavior near these highly non-linear points? We demonstrate experimentally that mechanical systems can `physically learn' desired behaviors at these points. We fill a creased sheet with soft epoxy and physically subject it to the desired behavior as the epoxy is setting; we find that the epoxy re-distributes itself in just the right way so as to learn the desired behavior at the bifurcation. No computers or design algorithms involved!
2022
Sam Schaffter and others in Schulman's lab created an amazing scalable platform for robust synthetic molecular circuits based on `genelets' (little modules of DNA + RNA + an RNA polymerase). Today, these elements can already be linked together to create multi-stable systems, generate pulses and the like. Tomorrow, maybe these chemical circuits can be combined with mechanical systems in Schulman's lab to reveal a whole new class of materials that can learn a la neural networks.
Multitasking is distracting and often makes you terrible at each of your tasks. But sometimes, if you are stuck on a given task, switching to a different task for a while can help you get unstuck. But how do you know when this will work and what kind of task you should switch to? If the `task' is finding the ground state of a Hamiltonian, we have one simple trick to get unstuck - an analytically specified alternative Hamiltonian that you should switch to every so often during minimization. Guaranteed* to help you get unstuck (*conditions apply).
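A generic cartoon of the switching trick (the paper derives a specific analytic alternative Hamiltonian; this toy just uses a smooth envelope as the stand-in): gradient descent on a rugged f(x) gets stuck, but periodically descending a second, smoother g(x) carries you across barriers between f's minima.

```python
# Task switching as an escape from local minima: alternate gradient descent
# between the rugged target f and a smooth surrogate g.
import numpy as np

f  = lambda x: 0.1 * x**2 + np.sin(5 * x)     # rugged landscape
df = lambda x: 0.2 * x + 5 * np.cos(5 * x)
dg = lambda x: 0.2 * x                        # smooth "alternative Hamiltonian"

def minimize(x, switch=False, lr=0.02, rounds=40):
    best = f(x)
    for _ in range(rounds):
        for _ in range(50):                   # work on the real task
            x -= lr * df(x)
            best = min(best, f(x))
        if switch:
            for _ in range(100):              # briefly switch tasks
                x -= lr * dg(x)
    return best

print(f"plain descent:  best f = {minimize(3.0):.3f}")
print(f"with switching: best f = {minimize(3.0, switch=True):.3f}")  # lower
```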
2021
Criss-crossed phone lines sound like a bad idea if you want to communicate from one specific place to another (at least back when phones had lines). Can it ever make sense to connect phone lines from different houses into one messy ball of wires and then fan them out to different destinations? It turns out that molecules can handle such a messy ball just fine - the collective dynamics of these molecules will eventually sort out messages to the right destinations. In fact, there are advantages to being this messy.
Active matter is seen as a metaphor for biology because active matter is out of equilibrium. But lots of physical phenomena are out of equilibrium and most are boring. What makes biological phenomena uniquely interesting is the ability to regulate the flow of energy in functional ways. Here, we ask if regulating activity in a space- and time-dependent way can create interesting organization in a minimal active matter system.
A review. Everyone talks about biology in time-varying environments, but only in some cases does a time-varying environment do something truly novel, i.e., produce behavior that cannot be understood in terms of any averaged or effective static environment. This review, by leaders in many distinct parts of biology, explores such conceptual questions across different areas, along with the promising theoretical and experimental approaches.
2020
We introduced the idea of “physical learning” in mechanical systems, i.e., training of physical systems in the physical world in ways analogous to how neural networks are trained on a computer. In this framework, a complex mechanical system is physically subjected to examples of a desired behavior, which leads to changes in the mechanical material (e.g., stiffening or softening in different places); these changes result in a `trained' material with the desired behavior. After learning through these physics-driven local rules, our sheets responded correctly even to patterns they had never encountered before, all without any digital memory or external guidance. This physical learning works because folding naturally reshapes the material's internal landscape, creating distinct responses to different patterns. Since this paper, a large body of work has explored such learning in mechanical systems. See also our work on learning in the molecular realm and our 2023 review on the field as a whole.
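A cartoon of training by purely local rules (a chain of springs and an invented "directed aging" rule, much simpler than the folded-sheet experiments): hold the material in the desired state while each element softens according to its own local strain energy; afterwards, the material reproduces the pattern on its own.

```python
# Physical learning by local rules: clamp a spring chain to a desired strain
# pattern while each spring softens with its own elastic energy. No spring
# ever sees the global goal, yet the trained chain reproduces the pattern.
import numpy as np

rng = np.random.default_rng(0)
n = 6
k = np.ones(n)                          # spring stiffnesses (the "material")
u_star = rng.uniform(0.2, 1.0, n)       # desired extension pattern (example)

for _ in range(400):                    # training: clamp to u_star, age locally
    energy = 0.5 * k * u_star**2        # each spring sees only its own strain
    k -= 0.02 * energy                  # local softening where strain is high
    k = np.clip(k, 0.05, None)

# Test: in series under one shared force F, each spring extends by F / k_i.
u_test = 1.0 / k
corr = np.corrcoef(u_test, u_star)[0, 1]
print(f"correlation of trained response with desired pattern: {corr:.2f}")
```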
2017
Can a physical system act like a neural network through its own natural physical dynamics? Surprisingly, yes! We showed that molecular systems, through the simple physics of self-assembly, can recognize and categorize complex patterns in the concentrations of molecules, much like neural networks. These molecules naturally assemble together into different structures based on subtle differences in their environment, effectively 'remembering' and reconstructing patterns—even noisy or incomplete ones. This process doesn't need sophisticated machinery, just basic interactions that follow from a local Hebbian-inspired learning rule. Lesson: Even molecules without brains can exhibit memory and pattern recognition—just add the right chemistry!
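The analogy made concrete with a standard Hopfield network (this code is the abstract neural model, not our molecular system; in the paper, the role of the update rule is played by physical nucleation and self-assembly): treat each species as present (+1) or absent (-1), wire pairwise interactions with a Hebbian rule, and let the dynamics settle.

```python
# Hebbian storage and associative recall: a corrupted pattern flows back to
# the stored one under the network's own dynamics.
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))   # "target concentration patterns"
J = (patterns.T @ patterns) / N               # Hebbian interaction matrix
np.fill_diagonal(J, 0)

s = patterns[0].copy()                        # start from pattern 0 ...
flip = rng.choice(N, 20, replace=False)       # ... with 20% of species wrong
s[flip] *= -1

for _ in range(5):                            # relax to the nearest memory
    for i in rng.permutation(N):              # asynchronous updates
        s[i] = 1 if J[i] @ s >= 0 else -1

print("overlap with stored pattern:", (s @ patterns[0]) / N)   # 1.0 = recall
```

Push the number of stored patterns past roughly 0.14 N and crosstalk destroys recall; that is the same overload that limits the self-assembling version in the 2014 entry below.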
2014
Inspired by associative memory in Hopfield's neural networks, we generalized the self-assembly framework to a soup of particles (proteins/DNA tiles) that can simultaneously 'store' the ability to assemble multiple different structures. Such a soup of particles can then assemble ('retrieve') any one of the stored structures ('memories') when presented with a signal vaguely reminiscent of one of those memories ('association'). However, store one too many memories and promiscuous interactions between particles prevent faithful retrieval of any memory. Secretly, such self-assembly is conceptually similar to hippocampal place-cell networks and equivalent spin-glass models.
Usually, non-equilibrium error correction is understood to increase the occupancy of the ground state and reduce the occupancy of all higher-energy states. However, we found that a proofreading mechanism can act differently in different energy bands, reducing the occupancy of unstable states in a given energy band while increasing the occupancy of less stable higher-energy states ('anti-proofreading'). And you can switch between these designer occupancies of states by simply changing the external driving forces.
2012
A new twist on the classic model of kinetic proofreading. Proofreading uses 'catastrophes' to slow down biochemical reactions while improving their fidelity. We introduced `rescues' that mitigate catastrophes and speed up reactions at the cost of increased errors. Surprisingly, we found a non-equilibrium phase transition as you tune the rescue rate. At this transition, you achieve, loosely speaking, 80% of the maximum possible error correction at only 20% of the maximum speed cost. Why would you go any further (as the traditional limit does) unless you really care about errors and really don't care about speed at all? We took the terms catastrophes and rescues from non-equilibrium microtubule growth. The connection to the 'dynamic instability' of microtubules suggests a broader context for proofreading as a stochastic search strategy, balancing exploration and exploitation.
2008
We showed that entanglement entropy, a concept traditionally studied in condensed matter physics, provided a sharp new theoretical tool for long-standing questions in particle physics on the confinement of quarks within protons/neutrons. By studying a geometric problem that was a holographic `dual' of the real problem, we saw a surprising transition: when you try to entangle two regions of space, their entanglement structure suddenly changes at a critical size, a signature of quarks becoming confined at a specific length scale. In the years since, entanglement entropy has proven useful in a variety of other contexts in string theory and high energy physics.