What is the entropy cost when two molecules form a complex?

Biology is driven by molecular interactions. Our understanding of the constant flux back and forth between molecules with different identities is largely a story about free energy differences between reactants and products, as all science students learn in their first chemistry course. However, the cursory introduction to these matters that most students experience casts aside a world of beautiful subtleties centering on the many ways in which the free energy of a molecular system changes as a result of molecular partnerships. Here we focus on the contribution to the free energy from the entropy changes that occur when molecules bind.

In this vignette, we address a simple conceptual question, namely, when two molecules A and B interact to form the complex AB, how large is the entropy change as a result of this interaction? The free energy has the generic form

G = H - TS,

where H is the enthalpy and S is the entropy.

We see that in the simple case in which there is no enthalpy change, the entire free energy balance is dictated by entropy. If a reaction increases the entropy, there is a corresponding negative free energy change, signaling the direction in which the reaction will spontaneously proceed. A deep though elusive insight into these abstract terms comes from one of the most important equations in all of science, namely,

S = kB ln W

which tells us how the entropy S of a system depends on the number of microstates available to it, as captured by the quantity W. An increase in entropy thus reflects an increase in the number of microstates of the system. Assuming the system has the same chance of being in any microstate, spontaneous jiggling through the space of possible states will indeed lead the system toward the condition with the most states, i.e., the highest entropy. At the risk of being clear only to those who had especially clear teachers (a substitute is Dill and Bromberg’s excellent book, “Molecular Driving Forces”), we note that even the term representing the enthalpy change in the free energy is actually an entropy term in disguise. Concretely, this term reflects the heat released outside the system, where it will create entropy. This effect is included in the calculation of the free energy because it is a compact way of computing the entropy change of the “whole world” while focusing only on the system of interest.
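
To make Boltzmann’s formula concrete, here is a minimal numerical sketch in Python, using a hypothetical lattice picture in which a single ligand in solution may occupy any of W solvent “sites” (the lattice model is our illustrative assumption, not part of the text above):

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

# S = kB ln W: the entropy of one ligand free to occupy any of W sites.
def entropy(W):
    return kB * np.log(W)

# Doubling the accessible volume doubles W and adds kB ln 2 of entropy,
# regardless of the absolute number of sites.
dS = entropy(2e6) - entropy(1e6)
print(dS / kB)  # ~0.693, i.e. ln 2, in units of kB
```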

A ubiquitous invocation of these far-reaching ideas is in understanding binding interactions. In these cases there is a competition between the entropy available to a ligand jiggling around in solution and the enthalpy released when it binds, for example, to a receptor. When a ligand has a dissociation constant of, say, 1 µM, it means that at that concentration half the receptors will be occupied by ligand. At this concentration, the energy released upon binding, an enthalpic gain that increases the number of states outside the system, exactly balances the loss of entropy that comes from the decrease in states within the system upon binding. When the ligand concentration is lower, the ligand in solution has a larger effective volume to occupy, and hence more configurations, so the entropic term outweighs the energy released in binding; as a result, the receptor or enzyme sits at a lower fractional occupancy. At the other extreme, when the ligand concentration is higher than the dissociation constant, the unbound ligand has a more limited space of configurations to explore in solution, the binding term prevails, and the occupancy of the bound state is higher. This is the statistical mechanical way of thinking about the free energy of binding as a strict competition between entropic and enthalpic terms.
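
The fractional occupancy that emerges from this competition follows the familiar 1:1 binding isotherm, p_bound = c/(c + Kd). A minimal Python sketch makes the numbers for the 1 µM example above explicit:

```python
def fraction_bound(c, Kd):
    """Fractional receptor occupancy for a ligand at concentration c
    with dissociation constant Kd (simple 1:1 binding isotherm)."""
    return c / (c + Kd)

Kd = 1e-6  # dissociation constant of 1 uM, as in the example above
for c in (1e-8, 1e-7, 1e-6, 1e-5, 1e-4):
    print(f"c = {c:.0e} M -> occupancy {fraction_bound(c, Kd):.2f}")
# At c = Kd = 1 uM exactly half the receptors are occupied;
# below Kd the entropic term wins, above Kd the binding term wins.
```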

What fundamentally governs the magnitude of the entropic term in these binding reactions? This is a subject notorious for its complexities, and we only touch on it briefly here. The entropy change upon binding is usually calculated with reference to the standard-state concentration c0 = 1 M (which can be thought of as a rough estimate for the effective concentration when bound) and is given by ΔS = -kB ln(c/c0), where c is the prevailing concentration of the ligand. Specifically, this formula compares the number of configurations available at the concentration of interest to that when one particle binds to the receptor at that same concentration. We now estimate the actual magnitude of the entropy change using the expression ΔS = -kB ln(c/c0). If ligand-receptor binding occurs at a concentration c = 10^-n M, the entropic cost is ΔS = n kB ln 10, i.e. TΔS ≈ 10-20 kBT for n ≈ 4-8, corresponding to concentrations of 10 nM-100 µM. Using more sophisticated theoretical tools, this entropy change has been estimated for ligands binding to proteins to have values ranging from ≈6-20 kBT ≈ 15-50 kJ/mol (BNID 109148, 111402, 111419), a range generally in line with the simple estimate sketched above. For protein-protein binding, a value under standard conditions of ≈40 kBT ≈ 100 kJ/mol has been estimated (BNID 109145, 109147). These calculations were partially derived from analyses of gases, because fully accounting for solvation effects remains a major unresolved challenge. Inferring the value from experiments is also challenging, but several efforts yield values of ≈6-10 kBT ≈ 15-25 kJ/mol (BNID 109146, 111402) for cases ranging from the polymerization of actin, tubulin and hemoglobin to the interaction of biotin with avidin.
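
A few lines of Python reproduce the back-of-the-envelope estimate ΔS = -kB ln(c/c0) across the concentration range quoted above:

```python
import numpy as np

# Entropic cost of binding at concentration c, relative to the
# c0 = 1 M standard state (in units of kB; multiply by T to get
# the free-energy penalty in kBT).
def entropy_cost_kB(c, c0=1.0):
    return -np.log(c / c0)

for n in (4, 6, 8):          # c = 10^-n M, i.e. 100 uM down to 10 nM
    c = 10.0 ** (-n)
    print(f"c = {c:.0e} M: T*dS ~ {entropy_cost_kB(c):.1f} kBT")
# Prints ~9.2, 13.8 and 18.4 kBT, matching the 10-20 kBT range above.
```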

As discussed above, binding is associated with an entropic cost that is offset by an enthalpic gain. An important consequence of this interplay is the ability to build extremely strong interactions out of several individually weak interactions with the same substrate. In the first interaction the entropic term offsets much of the binding energy, yielding only a modest dissociation constant. But if a second binding interaction of the very same substrate occurs concurrently with the first, the entropic term has already been “paid” and the associated free energy change is much more substantial. Consider the binding of an actin monomer to an actin filament built of two protofilaments, and thus two concurrent binding interactions. The binding to each protofilament is independently quite weak, with a dissociation constant of 0.1 M, yet the joint dissociation constant is ≈1 µM, because the ≈10 kBT entropic term offsets the binding energy not twice but only once. This effect, also referred to as avidity, is at the heart of the specific, tight binding of antibodies to antigens, as well as many other cases, including transcription factors binding to DNA and viral capsid formation.
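
To check that these numbers are self-consistent, here is a rough Python sketch of the avidity arithmetic. The split into enthalpic and entropic parts is our illustrative bookkeeping under the assumption stated above, namely that the ≈10 kBT entropic cost is paid only once:

```python
import numpy as np

# Avidity sketch (energies in units of kBT, standard state c0 = 1 M),
# assuming the ~10 kBT entropic cost is paid once while the enthalpic
# gain of each protofilament contact is earned twice.
T_dS = 10.0                      # entropic cost of localizing the monomer
Kd_single = 0.1                  # per-protofilament dissociation constant, M

dG_single = np.log(Kd_single)    # net free energy of one contact, ~ -2.3 kBT
dH_single = dG_single - T_dS     # its enthalpic part, ~ -12.3 kBT

dG_joint = 2 * dH_single + T_dS  # two contacts, one entropic payment
Kd_joint = np.exp(dG_joint)      # dissociation constant of the joint bond
print(f"joint Kd ~ {Kd_joint:.1e} M")  # ~4.5e-07 M, i.e. of order 1 uM
```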