The Beauty of Phase Spaces
Phase Spaces: Mapping Complex Systems
I’ve spent the day on cloud computing.
Yes, there will be a course on it at GMU this Fall of 2010. But cloud computing is simply a technology, a means of getting stuff done.
In and of itself, I think there are more exciting things in the world — such as phase spaces.
One of the classic nonlinear systems is the Ising spin glass model. This system is composed of only two kinds of particles: those in state A and those in state B. (That is, spin “up” or spin “down.”) Let’s say that particles in state A have some sort of “extra energy” compared with those in state B. We’ll also say that there is an interaction energy between nearest-neighbor particles: two neighboring particles in the same state have a negative interaction energy, which helps stabilize those particles in their respective states.
The simplest model for this system includes the enthalpy term (the energy of particles in state A, above the “rest state” of B), the interaction energy, and a simple entropy term. The resulting equation expresses the “reduced” free energy, which means that our formulation is independent of the total number of particles in the system: we express enthalpy and entropy in terms of the fraction of particles in each state, not the total number. We represent the fraction of particles in state A (the “higher energy state”) by x. We also divide through by various constants to reduce the total number of parameters. In this equation, we wind up with two simple parameters: ε1 and ε2.
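The original equation is not reproduced here. One conventional Bragg–Williams-style form consistent with the description above (a linear enthalpy in x, a like-neighbor interaction weighted by ε2, and an ideal-mixing entropy) would be the following; this is offered as a plausible sketch of the structure, not necessarily the author’s exact grouping of terms:

```latex
\bar{f}(x) \;=\; \varepsilon_1\, x
\;+\; \varepsilon_2\left[\, x^2 + (1-x)^2 \,\right]
\;+\; x \ln x \;+\; (1-x)\ln(1-x)
```

Here ε1 > 0 is the extra energy per particle in state A, ε2 < 0 is the (stabilizing) energy of like-neighbor pairs, whose fraction a mean-field estimate gives as x² + (1−x)², and the logarithmic terms are the negative of the mixing entropy, in units where kT = 1.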
The figure shows a phase space with seven distinct regions, labeled A-G; these regions are characterized in the accompanying table.
The important thing about this phase space diagram, and the distinct regions within it, is that it shows how a phase transition could happen between two very different states in a system: one where x is high (most units are in a very active or “on” state, x → 1), and another in which x is low (most units are in an inactive or “off” state, x → 0).
To see this possibility, we fix ε1; let’s take ε1 = 3. Then we examine the system for increasingly negative values of a = ε2/ε1.
This means we trace a vertical line, going from top to bottom, near the center of the diagram, at ε1 = 3. We pass through three different regions: A, then D, then F/G.
When a = -0.5, we are in Region A. This is a “low-x” region; most units are inactive, or “off.” When a becomes more negative, at -1.5, we move into Region D. There are two minima in this region; the system can sit at either a low or a high value of x. At a = -2.5, we move into Region F, which has a single minimum, with x close to 1.
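Since the post’s exact free-energy expression and region boundaries are not reproduced here, the sketch below uses a generic mean-field two-state free energy as a stand-in (a like-neighbor coupling J and a bias h between the two states; these are illustrative parameters, not the ε1 and ε2 of the diagram). It counts interior minima on a grid and shows the same qualitative progression as the walk through Regions A, D, and F: one minimum, then two, then one.

```python
import numpy as np

def free_energy(x, J=2.0, h=0.0):
    """Mean-field two-state free energy in units of kT.

    J > 0 rewards like-neighbor alignment (written via m = 2x - 1);
    h biases the system toward the "on" (x -> 1) state.
    """
    m = 2.0 * x - 1.0
    return -0.5 * J * m**2 - h * m + x * np.log(x) + (1 - x) * np.log(1 - x)

def count_minima(J, h, n=4001):
    """Count interior local minima of the free energy on a fine grid."""
    x = np.linspace(1e-4, 1 - 1e-4, n)
    f = free_energy(x, J, h)
    interior = (f[1:-1] < f[:-2]) & (f[1:-1] < f[2:])
    return int(np.sum(interior))

if __name__ == "__main__":
    # Sweeping the bias from negative to positive mirrors the
    # A -> D -> F walk: low-x well, coexisting wells, high-x well.
    for h in (-0.6, 0.0, 0.6):
        print(f"h = {h:+.1f}: {count_minima(J=2.0, h=h)} minimum/minima")
```

With these illustrative values, h = -0.6 yields a single low-x minimum, h = 0.0 yields two coexisting minima, and h = +0.6 yields a single high-x minimum.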
In sum, starting from the free energy equation given in this post, we obtain a “phase space” that includes both low-value and high-value equilibrium regions for x. There is also a middle region in which either a low or a high value of x is possible. That means the phase transition between the low-value and high-value states can be subject to hysteresis; it can be memory-dependent. In practical terms, this often means that if the system starts off in a “low-x” state, it will stay in that state until the corresponding free energy minimum completely disappears, and the system is forced to shift abruptly into a “high-x” state. Similarly, if it is in a “high-x” state, it may stay there (while the parameter a continues to change) all the way from Region F through Region D, until the system moves into Region A and is forced back into a “low-x” state once again.
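The hysteresis described above can be demonstrated numerically with the same kind of generic mean-field model (again an illustrative stand-in, not the post’s exact equation). We sweep a bias field up and then back down, at each step relaxing the system from its previous state; the system stays on its current branch until that branch’s minimum disappears, so the state at zero bias depends on the sweep direction.

```python
import numpy as np

def relax(m, J, h, iters=500):
    """Relax toward the nearest free-energy minimum by fixed-point
    iteration of the mean-field self-consistency m = tanh(J*m + h)."""
    for _ in range(iters):
        m = np.tanh(J * m + h)
    return m

def sweep(J=2.0, h_max=0.7, steps=141):
    """Sweep the bias h up and then back down, tracking the state m.

    Returns the state at h = 0 on the upward and downward passes.
    """
    hs = np.linspace(-h_max, h_max, steps)
    m = -1.0  # start deep in the "low" state
    up = []
    for h in hs:
        m = relax(m, J, h)      # each step starts from the previous state
        up.append(m)
    down = []
    for h in hs[::-1]:          # now sweep back down
        m = relax(m, J, h)
        down.append(m)
    down = down[::-1]           # align with hs
    i0 = steps // 2             # index where h = 0 (steps is odd)
    return up[i0], down[i0]

if __name__ == "__main__":
    m_up, m_down = sweep()
    print(f"state at h=0, sweeping up:   m = {m_up:+.3f}")
    print(f"state at h=0, sweeping down: m = {m_down:+.3f}")
```

On the upward pass the system is still in the low state at zero bias; on the downward pass it is still in the high state. The two branches only merge when the bias is strong enough to destroy the metastable minimum, which is exactly the memory-dependence described above.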
This equation is used to explain hysteresis as well as phase transitions in memory-dependent systems, such as ferromagnets. A ferromagnetic material that is disordered (its domains randomly aligned) becomes ordered (almost all of the domains align their magnetic fields) when placed in an external magnetic field. The domains will maintain their alignment for a long time. It takes a significant change in external circumstances (e.g., raising the temperature a great deal) to destroy the alignment and bring the ferromagnet back to a disordered state. Then, a significant magnetic field would be required to re-align the domains once again.
We know that this equation is good for describing certain physical systems. The question is: to what extent could we apply this to other, more complex systems? And if we did so, would we learn something interesting?