A Discrete Approach to Top-Down Modeling of Biochemical Networks

Reinhard Laubenbacher, Pedro Mendes, in Computational Systems Biology, 2006

VIII CONCLUSIONS

We have discussed several top-down modeling methods that yield time-discrete dynamical system models over finite state sets. Such models provide high-level information about a system that can serve as constraints in the construction of low-level models, whether top-down or bottom-up. Our method, based on polynomial dynamical systems over finite fields, has the advantage that its mathematical underpinning gives access to a variety of mathematical algorithms and symbolic computation software; in particular, it provides a mathematical basis for investigating questions such as "goodness" measures on data sets. The other modeling approaches discussed here have their own advantages, such as the capability of asynchronous update or the incorporation of discrete as well as continuous variables. Which method to use in a particular case will depend on the type of data and information available; for instance, the incorporation of asynchronous update is computationally feasible only with prior biological information on the pathways to be modeled. Ultimately, the performance of top-down modeling methods cannot be properly evaluated unless we understand what types of input data are required for optimal performance. That is, the "data must fit the models".

Experimental data sets suitable for the various modeling methods are still difficult to obtain, and the biochemical networks producing the data are typically too poorly understood to truly test modeling performance. An important resource in the field would be a collection of benchmark synthetic biochemical networks and the ability to generate from them data sets covering various types of networks, providing wild-type and perturbation time series. One possible tool for generating such networks and data is described by Mendes et al. (2003).

We believe that the field of system identification can serve as a blueprint for a mathematical top-down modeling program in systems biology. Based on a well-defined collection of model classes, from high-level statistical models down to ODE and PDE models, such a program must include the development of appropriate system identification methods for each model class and quality measures on data sets that can be used to develop confidence measures for the resulting models.

URL: https://www.sciencedirect.com/science/article/pii/B9780120887866500319

Analysis of Dynamic Models

Mark M. Meerschaert, in Mathematical Modeling (Fourth Edition), 2013

Abstract

In this chapter, we consider some of the most broadly applicable techniques for the analysis of discrete and continuous time dynamical systems: eigenvalue methods and phase portraits. These methods can provide important qualitative information about the behavior of dynamical systems, even when exact analytic solutions are not obtainable. The eigenvalue method is used to analyze the stability of nonlinear dynamical systems and is well suited to a computer algebra system; it can be applied to both continuous time and discrete time dynamical systems. The same ideas lead to the phase portrait, a graphical description of the dynamics over the entire state space. A phase portrait is mapped by a homeomorphism, a continuous function with a continuous inverse.

URL: https://www.sciencedirect.com/science/article/pii/B9780123869128500051

Introduction to Dynamic Models

Mark M. Meerschaert, in Mathematical Modeling (Fourth Edition), 2013

4.3 Discrete Time Dynamical Systems

In some problems it is most natural to model the time variable as discrete. When this happens, the usual differential equations are replaced by their discrete-time analog: difference equations. The relationship between discrete and continuous dynamics is the relationship between Δx/Δt and dx/dt, so it is often assumed that a dynamical system will behave roughly the same whether we take time to be continuous or discrete. However, this kind of logic overlooks one important point. There is a time delay built into every discrete-time dynamical system: the length of the time step Δt. For systems in which the dynamic forces are very strong, this time delay can lead to unexpected results.

Example 4.3. Astronauts in training are required to practice a docking maneuver under manual control. As a part of this maneuver, it is required to bring an orbiting spacecraft to rest relative to another orbiting craft. The hand controls provide for variable acceleration and deceleration, and there is a device on board that measures the rate of closing between the two vehicles. The following strategy has been proposed for bringing the craft to rest. First, look at the closing velocity. If it is zero, we are done. Otherwise, remember the closing velocity and look at the acceleration control. Move the acceleration control so that it is opposite to the closing velocity (i.e., if closing velocity is positive, we slow down, and we speed up if it is negative) and proportional in magnitude (i.e., we brake twice as hard if we find ourselves closing twice as fast). After a time, look at the closing velocity again and repeat the procedure. Under what circumstances will this strategy be effective?

We will use the five-step method. Let vn denote the closing velocity observed at time tn , the time of the nth observation. Let

Δv_n = v_{n+1} − v_n

denote the change in closing velocity as a result of our adjustments. We will denote the time between observations of the velocity indicator by

Δt_n = t_{n+1} − t_n.

This time interval naturally divides into two parts: the time it takes to adjust the velocity controls and the time between adjustment and the next observation of the velocity indicator. Write

Δt_n = c_n + w_n,

where cn is the time to adjust the controls and wn is the waiting time until the next observation. The parameter cn is a function of astronaut response time, and we are free to choose wn .

Let an denote the acceleration setting after the nth adjustment. Elementary physics yields

Δv_n = a_{n−1} c_n + a_n w_n.

The control law says to set acceleration proportional to (−vn ); hence

a_n = −k v_n.

The results of step 1 are summarized in Figure 4.7.

Figure 4.7. Results of step 1 of the docking problem.

Step 2 is to select the modeling approach. We will model this problem as a discrete-time dynamical system.

A discrete-time dynamical system consists of a number of state variables (x_1, …, x_n) defined on the state space S ⊆ ℝ^n and a system of difference equations

(4.9) Δx_1 = f_1(x_1, …, x_n)
      ⋮
      Δx_n = f_n(x_1, …, x_n).

Here Δxn represents the change in xn over one time step. It is common to take time steps of length 1, which just amounts to selecting appropriate units. If time steps are of variable length, or if the dynamics of the system vary over time, then we include time as a state variable. If we let

x = (x_1, …, x_n), F = (f_1, …, f_n),

then the equations of motion can be written in the form

Δx = F(x).

A solution to this difference equation model is a sequence of points

x(0), x(1), x(2), …

in the state space with

Δx(n) = x(n+1) − x(n) = F(x(n))

for all n. An equilibrium point x 0 is characterized by

F(x_0) = 0,

and the equilibrium is stable if

x(n) → x_0

whenever x(0) is sufficiently close to x 0. As in the continuous time case, many other difference equation models can be reduced to the form (4.9) by introducing additional state variables.

Think of a solution as a sequence of points in the state space. The vector F(x(n)) connects the point x(n) to the point x(n + 1). A graph of the vector field F(x) can reveal much about the behavior of a discrete-time dynamical system.
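The definition above translates directly into a few lines of code. The sketch below (our own illustration, not from the text) iterates Δx = F(x) and collects the solution sequence; the sample system Δx = −0.5x is the kind of linear contraction discussed in the next example.

```python
# Iterate the discrete-time dynamical system Delta x = F(x),
# i.e. x(n+1) = x(n) + F(x(n)), collecting the solution sequence.
# The helper name and the sample system are illustrative choices.

def iterate(F, x0, steps):
    """Return the solution sequence x(0), x(1), ..., x(steps)."""
    xs = [tuple(x0)]
    for _ in range(steps):
        x = xs[-1]
        dx = F(x)
        xs.append(tuple(xi + di for xi, di in zip(x, dx)))
    return xs

# Sample run: Delta x = -0.5 x, which contracts toward the
# equilibrium (0, 0) by a factor of 0.5 each step.
traj = iterate(lambda x: [-0.5 * xi for xi in x], [1.0, 1.0], 10)
```

Each element of `traj` is one point of the solution sequence, so plotting successive points (or the vectors connecting them) gives exactly the vector-field picture described above.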

Example 4.4. Let x = (x 1, x 2), and consider the difference equation

(4.10) Δx = −λx,

where λ > 0. What is the behavior of solutions near the equilibrium point x 0 = (0, 0)?

Figure 4.8 shows a graph of the vector field F(x) = −λx in the case where 0 < λ < 1. It is clear that x_0 = (0, 0) is a stable equilibrium; each step moves closer to x_0. Now let us consider what happens when λ becomes larger. Each of the vectors in Figure 4.8 will stretch as λ increases. For λ > 1 the vectors are so long that they overshoot the equilibrium. For λ > 2 they are so long that the terminal point x(n+1) is actually farther away from (0, 0) than the starting point x(n). In this case x_0 is an unstable equilibrium.

Figure 4.8. Vector field for Example 4.4.

This simple example clearly illustrates the fact that discrete-time dynamical systems do not always behave like their continuous-time analogs. Solutions to the differential equation

(4.11) dx/dt = −λx

are all of the form

x(t) = x(0) e^{−λt},

and the origin is a stable equilibrium regardless of λ > 0. The difference in behavior for the analogous difference equation in Eq. (4.10) is due to the inherent time delay. The approximation

dx/dt ≈ Δx/Δt

is only valid for small Δt, where the term small depends on the sensitivity of x to t. It should only be relied upon in cases where Δx represents a small relative change in x. When this is not the case, the difference in behavior between discrete and continuous systems can be dramatic.

We return now to the docking problem of Example 4.3. Step 3 of the five-step method is to formulate the model. We are modeling the docking problem as a discrete-time dynamical system. From Fig. 4.7 we obtain

v_{n+1} − v_n = −k v_{n−1} c_n − k v_n w_n.

Hence, the change in velocity over the nth time step depends on both v_n and v_{n−1}. To simplify the analysis, let us assume that c_n = c and w_n = w for all n. Then the length of each time step is

Δt = c + w

seconds, and we do not need to include time as a state variable. We do, however, need to include both v_n and v_{n−1}. Let

x_1(n) = v_n
x_2(n) = v_{n−1}.

Compute

(4.12) Δx_1 = −kw x_1 − kc x_2
       Δx_2 = x_1 − x_2.

The state space is (x_1, x_2) ∈ ℝ².

Step 4 is to solve the model. There is one equilibrium point (0, 0) found at the intersection of the two lines

kw x_1 + kc x_2 = 0
x_1 − x_2 = 0.

These are the steady-state equations obtained by setting Δx 1 = 0 and Δx 2 = 0.

Figure 4.9 shows a graph of the vector field

Figure 4.9. Graph of previous velocity x 2 versus current velocity x 1 showing vector field for the docking problem.

F(x) = (−kw x_1 − kc x_2, x_1 − x_2).

It appears as though solutions will tend toward equilibrium, but it is hard to be sure. If k, c, and w are large, then the equilibrium is probably unstable, but once again it is difficult to tell.
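Where the graphical analysis is inconclusive, a quick numerical experiment can help. The sketch below iterates Eq. (4.12) for parameter values of our own choosing; it is a numerical probe of stability, not a proof.

```python
# Iterate the docking model of Eq. (4.12):
#   Delta x1 = -k*w*x1 - k*c*x2,   Delta x2 = x1 - x2.
# The parameter values below are illustrative, not from the text.

def docking_run(k, c, w, steps, x1=1.0, x2=1.0):
    """Return the magnitude of the closing velocity after `steps` steps."""
    for _ in range(steps):
        dx1 = -k * w * x1 - k * c * x2
        dx2 = x1 - x2
        x1, x2 = x1 + dx1, x2 + dx2
    return abs(x1)

# Gentle control (k*w small): the closing velocity dies out.
assert docking_run(k=0.1, c=0.1, w=1.0, steps=200) < 1e-6
# Violent control (k*w large): the adjustments overshoot and grow.
assert docking_run(k=3.0, c=0.1, w=1.0, steps=50) > 1.0
```

Runs like these suggest what the simplified analysis below confirms: the equilibrium is stable when the control gain and time step are small, and unstable when their product is too large.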

In mathematics we often come across problems we cannot solve. Usually the best thing to do in such cases is to review our assumptions and consider whether we can reduce the problem to one that we can solve by making a further simplifying assumption. Of course this would be a meaningless and trivial exercise unless the simplified problem had some real significance.

In our docking problem we have expressed the change in velocity Δv_n as the sum of two components. One represents the change in velocity occurring between the time we read the velocity indicator and the time we adjust the acceleration controls. Suppose that we can do this very quickly. In particular, suppose that c is very much smaller than w. If v_n and v_{n−1} are not too different, the approximation

Δv_n ≈ −kw v_n

should be reasonably accurate. The difference equation

Δx_1 = −kw x_1

is a familiar one from Example 4.4, and we know that we will get a stable equilibrium for any kw < 2. If kw < 1, we will approach the equilibrium asymptotically without overshooting.

Step 5 is to answer the analysis question in plain English. Maybe we should just say in plain English that we don't know the answer. However, we probably can do better than that. Let us report that a completely satisfactory solution is not obtainable by elementary graphical methods. In other words, it will take more work using more sophisticated methods to determine exactly the conditions under which the proposed control strategy will work. It does seem that the strategy will be effective in most cases as long as the time interval between control adjustments is not too long and the magnitude of those adjustments is not too large. The problem is complicated by the fact that there is a time delay between reading the velocity indicator and adjusting the controls. Since the actual closing velocity may change during this interval, we are acting on dated and inaccurate information. This adds an element of uncertainty to our calculations. If we ignore the effects of this time delay (which may be permissible if the delay is small), we can then draw some general conclusions, which are as follows.

The control strategy will work so long as the control adjustments are not too violent. Furthermore, the longer the interval between adjustments, the lighter those adjustments must be. In addition, the relationship is one of proportion. If we go twice as long between adjustments, we can only use half as much control. To be specific, if we adjust the controls once every 10 seconds, then we can only set the acceleration controls at 1/10 of the velocity setting to avoid overshooting the target velocity of zero. In order to allow for human and equipment error, we should actually set the controls somewhat lower, say 1/15 or 1/20 of velocity. More frequent adjustments require more frequent observations of the closing velocity indicator and more concentration on the part of the operator, but they do allow for the successful administration of more thrusting power under control. Presumably, this would be advantageous.

Normally, we would conclude our discussion of this problem with a fairly comprehensive sensitivity analysis. In view of the fact that we have not yet found a way to solve this problem, we will defer that discussion to a later chapter.

URL: https://www.sciencedirect.com/science/article/pii/B978012386912850004X

Simulation of Dynamic Models

Mark M. Meerschaert, in Mathematical Modeling (Fourth Edition), 2013

6.4 Chaos and Fractals

One of the most exciting mathematical discoveries of the twentieth century is the chaotic behavior of some dynamic models. Chaos is characterized by the apparently random behavior of solutions, with extreme sensitivity to initial conditions. Chaotic dynamical systems can give rise to exotic limit sets called fractals. Chaotic dynamical system models have been used to study problems in turbulent fluid flow, ecosystems with aperiodic population fluctuations, cardiac arrhythmia, occasional reversals of the earth's magnetic poles, complex chemical reactions, lasers, and the stock market. Many of these applications are controversial, and their implications are still being explored. One of the most surprising things about chaos is the way it can emerge from simple nonlinear dynamic models.

Example 6.4. Reconsider the whale problem of Example 4.2, but now suppose that we use a discrete model to project population growth with a time step of several years. We know that for a time step of one year, the discrete and continuous time models behave in essentially the same manner. How large a time step can we use and still retain the same qualitative behavior as the continuous time model? What happens to the model if we use too large a time step?

We will use the five-step method. The results of step 1 are the same as in Figure 4.3. In step 2 we specify a continuous-time dynamical system model, which we will solve by simulation using the Euler method.

Consider a continuous-time dynamical system model

(6.12) dx/dt = F(x)

with x = (x_1, …, x_n) and F = (f_1, …, f_n), along with the initial condition x(t_0) = x_0. The Euler method uses a discrete time dynamical system

(6.13) Δx/Δt = F(x)

to approximate the behavior of the continuous time system. One reason to use a large step size is to make long-range predictions. For example, if time t is measured in years, then Δx = F(x)Δt is a simple projection of the change in the state variable x over the next Δt years, based on the current state information. If the step size Δt is chosen so that the relative change Δx/x remains small, then the discrete time system in Eq. (6.13) will behave much like the original continuous time system in Eq. (6.12). If the step size is too big, then the discrete time system can exhibit very different behavior.

Example 6.5. Consider the simple linear differential equation

(6.14) dx/dt = −x.

Compare the behavior of solutions to Eq. (6.14) to those of its discrete time analogue

(6.15) Δx/Δt = −x.

Solutions to Eq. (6.14) are all of the form

(6.16) x(t) = x(0) e^{−t}.

The origin is a stable equilibrium, and every solution curve decays exponentially fast to zero. For Eq. (6.15) the iteration function is

G(x) = x + Δx = x − x Δt = (1 − Δt) x.

Solutions to Eq. (6.15) are all of the form

x(n) = (1 − Δt)^n x(0).

If 0 < Δt < 1, then x(n) → 0 exponentially fast, and the behavior is much like that of the continuous time differential equation. If 1 < Δt < 2, then we still have x(n) → 0, but the sign of x(n) oscillates between positive and negative. Finally, if Δt > 2, then x(n) diverges to infinity as it oscillates in sign. In summary, the solutions of Eq. (6.15) behave much like those of Eq. (6.14) as long as the time step Δt is chosen so that the relative change Δx/x remains small. If the time step Δt is too large, then Eq. (6.15) exhibits behavior entirely different from that of its continuous time analogue Eq. (6.14).
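All three regimes follow directly from the closed form x(n) = (1 − Δt)^n x(0), and can be verified in a couple of lines (our own check):

```python
# Closed-form solution of the discrete analogue Eq. (6.15):
# x(n) = (1 - dt)**n * x(0).

def x_n(dt, n, x0=1.0):
    return (1.0 - dt) ** n * x0

assert 0 < x_n(0.5, 10) < 1             # 0 < dt < 1: decays, no sign change
assert x_n(1.5, 9) < 0 < x_n(1.5, 10)   # 1 < dt < 2: decays, oscillating in sign
assert abs(x_n(2.5, 10)) > 1            # dt > 2: oscillates and diverges
```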

For linear dynamical systems, the time delays inherent in discrete approximations can lead to unexpected behavior. A stable equilibrium can become unstable, and new oscillations can occur.

For linear systems, this is about the only way in which the behavior of the discrete approximation can differ from that of the original continuous system. For nonlinear continuous time dynamical systems, however, discrete approximations can also exhibit chaotic behavior. In a chaotic dynamical system, there is an extreme sensitivity to initial conditions, along with apparently random behavior of individual solutions. Chaos is usually associated with systems in which nearby solutions tend to diverge, but overall remain bounded. This combination of factors can only occur in a nonlinear system.

The study of chaos in discrete time dynamical systems is an active area of research. Some iteration functions produce extremely complex sample paths, including fractals. A typical fractal is a set of points in the state space that is self-similar and whose dimension is not an integer. Self-similar means that the object contains smaller pieces that are exact scaled-down replicas of the whole. One simple way to measure dimension is to count the number of boxes needed to cover the object. For a one-dimensional object, it takes n times as many boxes if they are 1/n times as large; for a two-dimensional object, it takes n² times as many, and so on. For an object with fractal dimension d, the number of boxes needed to cover it grows like n^d as the box size 1/n tends to zero.
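The box-counting recipe can be made concrete on the middle-thirds Cantor set, a standard example we add here for illustration: covering it with intervals of length 3^−k takes exactly 2^k intervals, so the box count grows like n^d with d = log 2 / log 3 ≈ 0.6309.

```python
import math

# Box counting for the middle-thirds Cantor set: at scale 1/n with
# n = 3**k, exactly 2**k boxes (intervals) are needed to cover it.

def boxes_needed(k):
    """Boxes of side 3**-k needed to cover the Cantor set."""
    return 2 ** k

# Estimate the dimension d from the growth of the box count between
# two scales: N(3**-10) / N(3**-5) should equal (3**5)**d.
d = math.log(boxes_needed(10) / boxes_needed(5)) / math.log(3 ** 5)
```

Because the Cantor set is exactly self-similar, this estimate recovers d = log 2 / log 3 exactly; for numerically sampled fractals the same ratio only approximates d.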

Step 3 is to formulate the model. We begin with the continuous time dynamical system model

(6.17) dx_1/dt = f_1(x_1, x_2) = 0.05 x_1 (1 − x_1/150,000) − α x_1 x_2
       dx_2/dt = f_2(x_1, x_2) = 0.08 x_2 (1 − x_2/400,000) − α x_1 x_2

on the state space x 1 ≥ 0, x 2 ≥ 0, where x 1 denotes the population of blue whales and x 2 the population of fin whales. In order to simulate this model, we will transform to a set of difference equations

(6.18) Δx_1 = f_1(x_1, x_2) Δt
       Δx_2 = f_2(x_1, x_2) Δt

over the same state space. Then, for example, Δx_1 represents the change in the population of blue whales over the next Δt years. We will assume that α = 10^{−8} to start with, and later on we will do a sensitivity analysis on α. Our objective is to determine the behavior of solutions to the discrete time dynamical system in Eq. (6.18) and compare them to what we know about solutions to the continuous time model in Eq. (6.17).

In step 4 we solve the problem by simulating the system in Eq. (6.17) using a computer implementation of the Euler method for several different values of h = Δt. We assume that x_1(0) = 5,000 and x_2(0) = 70,000, as in Example 6.2. Figure 6.27 illustrates the results of our simulation with N = 50 iterations and a time step of h = 1 year. In 50 years the fin whales grow back steadily but do not quite reach their eventual equilibrium level. In Figure 6.28 we increase the step size to h = 2 years. Now our simulation with N = 50 iterations shows the fin whale population approaching its equilibrium value. Using a larger time step h is an efficient way to project further into the future, but then something interesting happens.

Figure 6.27. Graph of fin whales x 2 versus time t for discrete time simulation of the whale problem with time step h = 1.

Figure 6.28. Graph of fin whales x 2 versus time t for discrete time simulation of the whale problem with time step h = 2.

Figure 6.29 shows the result of using a time step of h = 24 years. The solution still approaches its equilibrium, but now there is an oscillation. Figure 6.30 shows what happens when we use a time step of h = 27 years. Now the population actually diverges from the equilibrium and eventually settles into a discrete limit cycle of period two.

Figure 6.29. Graph of fin whales x 2 versus time t for discrete time simulation of the whale problem with time step h = 24.

Figure 6.30. Graph of fin whales x 2 versus time t for discrete time simulation of the whale problem with time step h = 27.

When h = 32, the solution settles down into a limit cycle of period four; see Figure 6.31. Figure 6.32 shows that when h = 37, the solution exhibits chaotic behavior. The effect is similar to that of a random number generator. When h = 40 (not shown), the solution quickly diverges to infinity. The behavior of the blue whale population is similar. Different initial conditions and different values of α produce similar results. In every case there is a transition from stability to instability as the step size h increases. As the equilibrium becomes unstable, first oscillations appear, then discrete limit cycles, and then chaos. Finally, if h is too large the solutions simply diverge.

Figure 6.31. Graph of fin whales x 2 versus time t for discrete time simulation of the whale problem with time step h = 32.

Figure 6.32. Graph of fin whales x 2 versus time t for discrete time simulation of the whale problem with time step h = 37.
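The period-doubling route described above is easy to reproduce. The sketch below iterates Eq. (6.18) with α = 10^−8 and the initial populations from the text; the step counts, the rough equilibrium values in the comments, and the tolerances are illustrative choices of ours.

```python
# Euler iteration of the whale model, Eq. (6.18), with alpha = 1e-8.

def whale_step(x1, x2, h, a=1e-8):
    f1 = 0.05 * x1 * (1 - x1 / 150000.0) - a * x1 * x2
    f2 = 0.08 * x2 * (1 - x2 / 400000.0) - a * x1 * x2
    return x1 + f1 * h, x2 + f2 * h

def simulate(h, n, x1=5000.0, x2=70000.0):
    """Apply n time steps of size h and return the final state."""
    for _ in range(n):
        x1, x2 = whale_step(x1, x2, h)
    return x1, x2

# Small step: both populations approach a stable equilibrium
# (roughly 138,000 blue whales and 393,000 fin whales).
eq = simulate(h=1.0, n=2000)

# Large step (h = 27): the same system settles into a period-2 cycle,
# so every other state repeats while successive states stay far apart.
s0 = simulate(h=27.0, n=1000)
s1 = whale_step(*s0, 27.0)
s2 = whale_step(*s1, 27.0)
```

Repeating the experiment with h near 32 and 37 reproduces the period-4 cycle and the chaotic regime shown in Figures 6.31 and 6.32.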

Step 5 is to answer the question. The discrete approximation to the continuous time model is useful as long as the time step is small enough to keep the relative change in the state variables small at each time step. Using a larger time step allows us to project further into the future, but when the time step becomes too large, the behavior of the discrete time system no longer resembles that of the original continuous time model. It is interesting to observe the strange behavior of the discrete time system when a larger time step is employed; however, this behavior has no apparent connection to the real-world situation we are trying to model.

Many population models are based on some variation of the logistic model x′ = rx(1−x/K). Most of these models exhibit chaos in the discrete approximation. It is typical for the dynamics to transition from stable equilibrium to limit cycles to chaos (and then to unstable divergence) as the time step is increased. At the transition from limit cycles to chaos, the limit set typically becomes a fractal. See Exercise 25 at the end of this chapter for an illustration. There are many interesting books and articles on chaos and fractals. Strogatz (1994) is one good reference at an advanced undergraduate to beginning graduate level.

The emergence of chaos from the discrete approximation in the whale problem is interesting, but seems to have nothing to do with the real world. At best it is a mathematical curiosity, at worst a numerical headache. The next example, however, shows that chaos and fractals can also emerge naturally from realistic models of physical situations.

Example 6.6. Consider a layer of air that is heated from the bottom. In certain situations the warmer air rising up interacts with the colder air sinking down to form turbulent convection rolls. The complete derivation of the dynamics of motion involves a system of partial differential equations, which can be solved by the method of Fourier transforms; see Lorenz (1963). A simplified representation involves three state variables. The variable x_1 represents the rate at which the convection rolls rotate, x_2 represents the temperature difference between the ascending and descending air currents, and x_3 represents the deviation from linearity of the vertical temperature profile, a positive value indicating that the temperature varies faster near the boundary. The equations of motion for this system are

(6.19) x_1′ = f_1(x_1, x_2, x_3) = −σ x_1 + σ x_2
       x_2′ = f_2(x_1, x_2, x_3) = −x_2 + r x_1 − x_1 x_3
       x_3′ = f_3(x_1, x_2, x_3) = −b x_3 + x_1 x_2,

and we will consider the realistic case where σ = 10 and b = 8/3. The remaining parameter r represents the temperature difference between the top and bottom of the air layer. Increasing r pumps more energy into the system, creating more vigorous dynamics. The dynamical system of Eq. (6.19) is called the Lorenz equations, after the meteorologist E. Lorenz, who analyzed them.

To find the equilibrium points of Eq. (6.19), we solve the system of equations

−σ x_1 + σ x_2 = 0
−x_2 + r x_1 − x_1 x_3 = 0
−b x_3 + x_1 x_2 = 0

for the three state variables. Obviously, (0, 0, 0) is one solution. The first equation implies x 1 = x 2. Substituting into the second equation, we obtain

−x_1 + r x_1 − x_1 x_3 = 0
x_1 (−1 + r − x_3) = 0

so that if x_1 ≠ 0, then x_3 = r − 1. Then, from the third equation, we obtain x_1² = b x_3 = b(r − 1). If 0 < r < 1, there are no real roots to this equation, and so the origin is the only equilibrium point. If r = 1, then x_3 = 0, and again the origin is the only equilibrium point. If r > 1, then there are three equilibrium points

E_0 = (0, 0, 0)
E_+ = (√(b(r − 1)), √(b(r − 1)), r − 1)
E_− = (−√(b(r − 1)), −√(b(r − 1)), r − 1).

Graphical analysis of the vector field is difficult, since we are now in three dimensions. Instead, we will perform an eigenvalue analysis to test the stability of these three equilibrium points. The matrix of partial derivatives is

DF = ( −σ        σ      0
       r − x_3   −1     −x_1
       x_2       x_1    −b ).

At the equilibrium E_0 = (0, 0, 0) with the parameter values σ = 10 and b = 8/3, this matrix becomes

A = ( −10   10    0
      r     −1    0
      0     0     −8/3 ),

which has three real eigenvalues

λ_1 = (−11 − √(81 + 40r)) / 2
λ_2 = (−11 + √(81 + 40r)) / 2
λ_3 = −8/3

for any value of r > 0. If 0 < r < 1, then all of these eigenvalues are negative, so that the origin is a stable equilibrium. If r > 1, then λ2 > 0, so the origin is an unstable equilibrium.
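These eigenvalue formulas are easy to sanity-check numerically. The sketch below (stdlib only, our own) verifies that the stated roots satisfy the characteristic equation of the upper 2×2 block of A and that the sign change occurs at r = 1:

```python
import math

# Eigenvalues of A at the origin for sigma = 10, b = 8/3:
# lambda = (-11 +/- sqrt(81 + 40r)) / 2, together with lambda = -8/3.

def origin_eigs(r):
    disc = math.sqrt(81.0 + 40.0 * r)
    return ((-11.0 - disc) / 2.0, (-11.0 + disc) / 2.0, -8.0 / 3.0)

def satisfies_char_eq(lam, r):
    # The upper 2x2 block of A has characteristic equation
    # lambda**2 + 11*lambda + 10*(1 - r) = 0.
    return abs(lam * lam + 11.0 * lam + 10.0 * (1.0 - r)) < 1e-9

l1, l2, _ = origin_eigs(0.5)
assert satisfies_char_eq(l1, 0.5) and satisfies_char_eq(l2, 0.5)
assert max(origin_eigs(0.5)) < 0   # 0 < r < 1: origin stable
assert max(origin_eigs(2.0)) > 0   # r > 1: origin unstable
```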

The eigenvalue analysis for the remaining two equilibria is rather messy. Fortunately, the eigenvalues are the same at both E_+ and E_−. For 1 < r < r_1 ≈ 1.35, all three eigenvalues are real and negative. For any value of r > r_1, there is one eigenvalue λ_1 < 0 and one complex conjugate pair λ_2 = α + iβ, λ_3 = α − iβ. The real part α is negative for r_1 < r < r_0 and positive for r > r_0, where r_0 ≈ 24.8. Thus, for 1 < r < r_0, there are two stable equilibria at E_+ and E_−, while for r > r_0, every equilibrium is unstable. Solutions near these two equilibria will behave much like those of the linear system x′ = Ax, where A is the matrix of partial derivatives DF evaluated at the equilibrium point. Every component of every linear solution can be written as a linear combination of terms of the form e^{λ_1 t}, e^{αt} cos(βt), and e^{αt} sin(βt). When r_1 < r < r_0, nearby solution curves spiral into the nonzero equilibrium points, and when r > r_0, they spiral outward. It also turns out that solutions do not diverge to infinity when r > r_0. We have seen this kind of behavior before, in the nonlinear RLC circuit of Example 5.4. In that case a computer simulation showed that the solution curves settled into a limit cycle. We will now simulate the dynamical system of Eq. (6.19) to determine the long-term behavior of solutions.

The Euler method for three state variables uses exactly the same algorithm as in Figure 6.19 except for the addition of another state variable. A computer implementation of that algorithm was used to simulate the solution of Eq. (6.19) in the case σ = 10 and b = 8/3. Figure 6.33 shows the results of a simulation using r = 8 and initial conditions (x 1, x 2, x 3) = (1, 1, 1). We graphed x 2 versus x 1 since these variables are the easiest to interpret. We used N = 500 and T = 5, so that the step size was h = 0.01. Additional sensitivity runs were made to ensure that increasing the simulation time T or decreasing the step size h led to essentially the same graph. As we expected from our earlier analysis, the solution curve spirals into the equilibrium point at x 1 = x 2 = 4.32 (and x 3 = 7). Recall that x 1 represents the rate at which the convection rolls rotate, and x 2 represents the temperature difference between the ascending and descending air currents. When r = 8, both of these quantities eventually settle down into a stable equilibrium. Figure 6.34 shows the simulation for the case r = 8 and initial condition (x 1, x 2, x 3) = (7, 1, 2). Keep in mind that this graph is actually a projection of a three-dimensional picture. Of course, the real solution curve does not cross itself. That would violate the uniqueness of solutions. Once again, the solution spirals into the equilibrium.

Figure 6.33. Graph of temperature differential x 2 versus convection rate x 1 for the weather problem with r = 8 and initial condition (x 1, x 2, x 3) = (1, 1, 1).

Figure 6.34. Graph of temperature differential x 2 versus convection rate x 1 for the weather problem with r = 8 and initial condition (x 1, x 2, x 3) = (7, 1, 2).

As we increase the value of r, the simulation becomes extremely sensitive to discretization. Figure 6.35 shows the results of a simulation using r = 18 and initial conditions (x_1, x_2, x_3) = (6.7, 6.7, 17). We used N = 500 and T = 2.5 for a step size of h = 0.005. The solution curve rotates rapidly about the equilibrium E_+ = (6.733, 6.733, 17) while it spirals in very slowly. Figure 6.36 shows the results of the same simulation using N = 500 and T = 5 for a slightly larger step size of h = 0.01. In this case the solution spirals outward, away from the equilibrium. Of course, this is not really what is happening in the continuous time model; it is just an artifact of our simulation method. Because the system is very near the boundary between stability and instability, we must be careful to perform sensitivity analysis on the parameters N and T to ensure that the behavior of the discrete time system really reflects what is going on in the continuous time model.

Figure 6.35. Graph of temperature differential x 2 versus convection rate x 1 for the weather problem with r = 18 and initial condition (x 1, x 2, x 3) = (6.7, 6.7, 17) using step size h = 0.005.

Figure 6.36. Graph of temperature differential x 2 versus convection rate x 1 for the weather problem with r = 18 and initial condition (x 1, x 2, x 3) = (6.7, 6.7, 17) using step size h = 0.01.

Finally, we consider the case r > r_0, where we know that the equilibrium points are all unstable. Figure 6.37 shows the results of a simulation using r = 28 and initial conditions (x_1, x_2, x_3) = (9, 8, 27). We used N = 500 and T = 10 for a step size of h = 0.02. A careful sensitivity analysis on N and T was performed to verify that this solution curve really represents the behavior of our continuous time model. At first the solution curve rotates rapidly about the equilibrium E_+ = (8.485, 8.485, 27) while it spirals outward very slowly. Eventually, the solution curve heads out toward the equilibrium E_− = (−8.485, −8.485, 27), where it spirals around for a while, and then eventually heads back toward E_+. Simulations using larger values of N and T indicate that the solution never repeats itself, yet remains bounded. It continues to cross over between the region around E_+ and the region around E_−.

Figure 6.37. Graph of temperature differential x2 versus convection rate x1 for the weather problem with r = 28 and initial condition (x1, x2, x3) = (9, 8, 27).

Other solutions with nearby initial conditions show essentially the same behavior. There is also an extreme sensitivity to initial conditions: solutions that begin very close together eventually move far apart. The solution curve represented in Figure 6.37 started at the point (x1, x2, x3) = (9, 8, 27) at time t = 0, and by time t = 10 it was spiraling around E−. A nearby solution curve, starting at the point (x1, x2, x3) = (9.01, 8, 27) at time t = 0, ends up spiraling around E+ by time t = 10. Figure 6.38 compares the path of the x1 coordinate for these two solution curves. By time t = 10 it would be impossible to guess that the two solutions began at almost the same initial condition. Yet if we let the simulation time T grow larger and larger, we see that both solution curves trace out almost exactly the same shape in the state space. This limiting set is called a strange attractor. Although it is bounded, its length is infinite, and its cross sections (for example, the collection of all points at which the curve intersects the plane x3 = 0) are typically fractals. In fact, one reasonable analytic approach to the strange attractor is to consider the mapping that sends a given intersection point to the next. This iteration function acts much like the one we analyzed in Example 6.4 for the logistic model of population growth. In this rather unexpected way, the analysis of discrete- and continuous-time dynamical systems finds a new connection.
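The sensitivity to initial conditions can be checked numerically. This sketch again assumes forward Euler with sigma = 10, b = 8/3; it uses a step size h = 0.005, smaller than the h = 0.02 quoted in the text, purely for numerical safety.

```python
# Sensitivity to initial conditions for r = 28: two orbits started 0.01 apart.
# Assumptions: forward Euler stepping; sigma = 10, b = 8/3; h = 0.005 is our
# choice for numerical safety, not the step size used in the text.
import math

SIGMA, B, R = 10.0, 8.0 / 3.0, 28.0

def euler_orbit(x, h, n_steps):
    x1, x2, x3 = x
    for _ in range(n_steps):
        # The right-hand side is evaluated at the old state before updating.
        x1, x2, x3 = (x1 + h * SIGMA * (x2 - x1),
                      x2 + h * (R * x1 - x2 - x1 * x3),
                      x3 + h * (x1 * x2 - B * x3))
    return (x1, x2, x3)

# Run both initial conditions to T = 10 (2000 steps of h = 0.005).
a = euler_orbit((9.0, 8.0, 27.0), 0.005, 2000)
b = euler_orbit((9.01, 8.0, 27.0), 0.005, 2000)
sep = math.dist(a, b)
print(sep)  # far larger than the initial separation of 0.01
```

The separation grows by orders of magnitude while both orbits remain bounded on the attractor, which is exactly the combination of divergence and boundedness described above.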

Figure 6.38. Graph of convection rate x1 versus time t for the weather problem with r = 28, comparing the two initial conditions (x1, x2, x3) = (9, 8, 27) and (x1, x2, x3) = (9.01, 8, 27).

URL: https://www.sciencedirect.com/science/article/pii/B9780123869128500063

Regularization for Dynamical ζ-Functions

V. Baladi, in Encyclopedia of Mathematical Physics, 2006

Introduction

If $A$ is a finite, say $N \times N$, matrix with complex coefficients, the following easy equality gives an expression for the polynomial $\prod_{k=1}^{N}(1 - z\lambda_k) = \det(\mathrm{Id} - zA)$:

[1] $\det(\mathrm{Id} - zA) = \exp\left(-\sum_{n=1}^{\infty} \frac{z^n}{n} \operatorname{tr} A^n\right)$

(here, Id denotes the identity matrix and tr is the trace of a matrix). Even in this trivial finite-dimensional case, the z-radius of convergence of the logarithm of the right-hand side only gives information about the spectral radius (the modulus of the largest eigenvalue) of A. The zeros of the left-hand side (i.e., the inverses $z = 1/\lambda_k$ of the nonzero eigenvalues of A) can only be located after extending the right-hand side holomorphically. The purpose of this article is to discuss some dynamical situations in which A is replaced by a bounded linear operator $\mathcal{L}$, acting on an infinite-dimensional space, and for which a dynamical determinant (or dynamical ζ-function), constructed from periodic orbits, takes the part of the right-hand side. In the examples presented, $\mathcal{L}$ will be a transfer operator associated to a weighted discrete-time dynamical system: given a transformation $f: M \to M$ on a compact manifold M and a function $g: M \to \mathbb{C}$, we set

[2] $\mathcal{L}\varphi = (g \cdot \varphi) \circ f^{-1}$

(If f is not invertible, it is understood, e.g., that f has at most finitely many inverse branches and that the right-hand side of [2] is the sum over these inverse branches; see the next section.) We let $\mathcal{L}$ act on a Banach space of functions or distributions φ on M. For suitable g (in particular $g = |\det Tf|^{-1}$ when this Jacobian makes sense), the spectrum of $\mathcal{L}$ is related to the fine statistical properties of the dynamics f: existence and uniqueness of equilibrium states (related to the maximal eigenvector of $\mathcal{L}$), decay of correlations (related to the spectral gap), limit laws, entropies, etc.; see, for example, Baladi (1998) or Cvitanović et al. (2005). The operator $\mathcal{L}$ is not always trace-class; indeed, it sometimes is not compact on any reasonable space. Even worse, its essential spectral radius may coincide with its spectral radius. (Recall that the essential spectral radius of a bounded linear operator $\mathcal{L}$ acting on a Banach space is the infimum of those $\rho > 0$ such that the spectrum of $\mathcal{L}$ outside of the disk of radius ρ is a finite set of eigenvalues of finite algebraic multiplicity.) However, various techniques allow us to prove that a suitable dynamically defined replacement for the right-hand side of [1] extends holomorphically to a disk in which its zeros describe at least part of the spectrum of $\mathcal{L}$. Some of these techniques have a "regularization" flavor, and we shall concentrate on them.
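Formula [1] can be verified numerically in the finite-dimensional setting. The sketch below checks it, with a hypothetical 2 × 2 matrix and a value of z small enough that the trace series converges, using only standard-library arithmetic.

```python
# Numerical check of [1]: det(Id - zA) = exp(-sum_{n>=1} z^n/n tr A^n),
# for an illustrative 2x2 matrix A and |z| below 1/(spectral radius of A).
import math

A = [[1.0, 2.0], [0.5, -1.0]]   # eigenvalues +/- sqrt(2), so |z*lambda| < 1
z = 0.1

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(X):
    return X[0][0] + X[1][1]

# Left-hand side: det(Id - zA), written out for the 2x2 case.
lhs = (1 - z * A[0][0]) * (1 - z * A[1][1]) - (z * A[0][1]) * (z * A[1][0])

# Right-hand side: truncate the trace series at n = 60 (terms decay like
# (z * sqrt(2))^n, so the truncation error is negligible).
rhs_exponent = 0.0
An = A
for n in range(1, 61):
    rhs_exponent += (z ** n) / n * trace(An)
    An = matmul(An, A)
rhs = math.exp(-rhs_exponent)
print(lhs, rhs)   # both equal det(Id - zA) = 0.98
```

As the text notes, the series on the right converges only for |z| below the inverse spectral radius; locating the zeros of the determinant requires holomorphic extension beyond that disk.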

In the following section, we present the simplest case: analytic expanding or hyperbolic dynamics, for which no regularization is necessary and the Grothendieck–Fredholm theory can be applied. Next, we consider analytic situations where finitely many neutral periodic orbits introduce branch cuts in the dynamical determinant, and see how to "regularize" them. Finally, we discuss a kneading operator regularization approach, inspired by the work of Milnor and Thurston, and applicable to dynamical systems with finite smoothness.
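As a concrete instance of the transfer operator [2] with finitely many inverse branches, one can take the doubling map. This toy example is ours, not from the article: $f(x) = 2x \bmod 1$ on $[0,1)$ with weight $g = 1/|f'| = 1/2$, whose two inverse branches are $x/2$ and $(x+1)/2$.

```python
# Transfer operator for the doubling map f(x) = 2x mod 1 with g = 1/|f'| = 1/2.
# Assumption: this concrete map and observable are illustrative choices.

def L(phi):
    """(L phi)(x) = sum over the two inverse branches y of f at x of g(y)*phi(y)."""
    return lambda x: 0.5 * (phi(x / 2.0) + phi((x + 1.0) / 2.0))

phi = lambda x: x * x        # a smooth observable; its integral over [0,1] is 1/3
psi = phi
for _ in range(15):          # iterate: L^n phi converges to the constant 1/3
    psi = L(psi)
print(psi(0.3))              # close to 1/3
```

The maximal eigenvalue 1 (with constant eigenfunction) corresponds to Lebesgue measure being f-invariant, and the geometric convergence of $\mathcal{L}^n \varphi$ to the constant $\int \varphi$ illustrates the spectral gap and decay of correlations mentioned above.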

Despite the terminology, none of the regularization techniques discussed below match the following "ζ-regularization" formula:

[3] $\prod_{k=1}^{\infty} a_k = \exp\left(-\frac{d}{ds}\left.\sum_{k=1}^{\infty} a_k^{-s}\right|_{s=0}\right)$

(For information about the above ζ-regularization and its applications to physics, we refer, e.g., to Elizalde (1995). See also Voros (1987) and Fried (1986) for more geometrical approaches and further references, e.g., to the work of Ray and Singer.)
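For finitely many factors, formula [3] reduces to an exact identity, since $-\frac{d}{ds} a^{-s}\big|_{s=0} = \ln a$ termwise; the interesting infinite case requires analytic continuation of the sum, which the following small check (with hypothetical factors) does not attempt.

```python
# Finite-product sanity check of [3]: exp(-d/ds sum_k a_k^{-s} at s=0)
# equals prod_k a_k. Assumption: the factors a_k are illustrative; the
# derivative is approximated by a central difference.
import math

a = [2.0, 3.0, 5.0, 7.0]

def zeta_like(s):
    return sum(ak ** (-s) for ak in a)

h = 1e-6
deriv_at_0 = (zeta_like(h) - zeta_like(-h)) / (2 * h)   # d/ds at s = 0
regularized = math.exp(-deriv_at_0)
product = math.prod(a)
print(regularized, product)   # both approximately 210
```

In the infinite case the sum $\sum_k a_k^{-s}$ typically diverges at $s = 0$, and the formula is given meaning by differentiating the analytic continuation instead, e.g., $\prod_{k \ge 1} k = e^{-\zeta'(0)} = \sqrt{2\pi}$ for the Riemann ζ-function.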

We do not cover all aspects of dynamical ζ-functions here. For more information and references, we refer to our survey Baladi (1998), to the more recent surveys by Pollicott (2001) and Ruelle (2002), and also to the exhaustive account by Cvitanović et al. (2005), which contains a rich array of physical applications.

URL: https://www.sciencedirect.com/science/article/pii/B0125126662000912

Random walks and flights over connected graphs and complex networks

D. Volchenkov, in Communications in Nonlinear Science and Numerical Simulation, 2011

It is well known that the evolution of densities $f \in \mathbb{R}^{N}$ such that $\sum_{v \in V} f(v) = 1$ in systems for which the dynamics are deterministic may be studied by the use of the linear Perron–Frobenius and Koopman operators [3]. Strongly connected directed graphs $G(V, E)$ can be interpreted as discrete-time dynamical systems specified by the dynamical law $S: V \to V$,

(108) $S(A) = \{\, w \in V \mid \exists\, v \in A,\ v \to w \,\}$

Therefore, the transition operators of random walks defined above can be readily related to the Perron–Frobenius and Koopman operators [31] (the reference has been given in [3]). In particular, the Perron–Frobenius operator $P^t$, $t \in \mathbb{Z}_+$, transports the density function f, supported on the set A, forward in time to a function supported on some subset of $S^t(A)$,

(109) $\sum_{v \in A} P^{t} f(v) = \sum_{v \in S^{-t}(A)} f(v)$

The Koopman operator, adjoint to the Perron–Frobenius one, is thought of as transporting the density f supported on A backward in time to a density supported on a subset of $S^{-t}(A)$:

(110) $(P^{\dagger})^{t} f(v) = f(S^{t}(v))$
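Relations (108)–(110) can be checked on a small deterministic example. The sketch below uses an illustrative dynamical law of our own choosing, $S(v) = (v + 2) \bmod 5$ on five vertices (a strongly connected cycle), rather than any graph from the article.

```python
# Sanity check of (108)-(110) for a deterministic dynamical law on a small
# vertex set. Assumption: S(v) = (v + 2) mod 5 and the density f are
# illustrative choices, not taken from the article.
from fractions import Fraction

V = range(5)
S = lambda v: (v + 2) % 5

def pf(f):
    """Perron-Frobenius: push the density forward, (Pf)(w) = sum_{S(v)=w} f(v)."""
    return [sum(f[v] for v in V if S(v) == w) for w in V]

def koopman(f):
    """Koopman (adjoint): pull back along the dynamics, (P'f)(v) = f(S(v))."""
    return [f[S(v)] for v in V]

f = [Fraction(k, 15) for k in (1, 2, 3, 4, 5)]   # density, sums to 1
t = 3
ft = f
for _ in range(t):
    ft = pf(ft)

A = {0, 3}
pre = {v for v in V if (v + 2 * t) % 5 in A}     # S^{-t}(A)
lhs = sum(ft[v] for v in A)
rhs = sum(f[v] for v in pre)
print(lhs == rhs)   # True: equation (109)
```

Exact rational arithmetic makes the balance in (109) an equality rather than a floating-point approximation, and iterating `koopman` t times gives $f(S^t(v))$ as in (110).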

URL: https://www.sciencedirect.com/science/article/pii/S1007570410001127