Tuesday, May 28, 2013

Simulation Class: Runge-Kutta Methods

Student Example from Last Week

Florian Hecht used the lessons from last week to create this inspiring example: Florian's Spring Solver

Higher-Order Time Steps

In our previous class, we improved the accuracy and stability of our simulation's time step by exploring a number of different integration techniques; the best performers were Euler-Cromer and Velocity Verlet. However, while stable, their solutions drifted from the theoretically correct answer over time.

In each of our previous examples, the system's equations were evaluated just once per time-step, or in some cases twice, which means the solutions were first- or second-order. To improve our estimate of the function we're solving for, we can make more than one or two evaluations per time-step, resulting in a higher-order approximation.

Runge-Kutta methods are a general class of explicit and implicit iterative methods for approximating the solutions of differential equations from their derivatives. To explore this class of techniques, let's quickly revisit the equations that define our simple harmonic oscillator. We have a mass attached to an ideal, frictionless spring with a certain stiffness, which produces a force equal to a constant times the mass's displacement from the spring's origin, in the direction opposite the displacement. That system is defined like this:

\( x'_t = v_t \\ v'_t = a_t \\ a_t = -\frac{k}{m} x_t \)
The idea behind all of the explicit Runge-Kutta methods is that we estimate a velocity, a position, and then an acceleration, and then we improve our estimates by using the previous ones to make better ones. We can do this any number of times, and the number of these estimation stages is closely tied to the "order" of the approximation (strictly speaking, the stage count and the order of accuracy are distinct things, but for the methods covered here they coincide). After all the estimation stages have been taken, the state of the system is finally updated using a weighted average of all the estimates. The only differences between the variations of the explicit Runge-Kutta methods are how the estimates are weighted, and how each estimate is used to produce the next.
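For reference, here is the general form of an s-stage explicit Runge-Kutta method for an ODE \( y' = f(t, y) \); this is the standard textbook formulation rather than anything specific to our spring system:

\( k_i = f\left( t + c_i \Delta t,\ y_t + \Delta t \sum_{j=1}^{i-1} a_{ij} k_j \right), \qquad y_{t+\Delta t} = y_t + \Delta t \sum_{i=1}^{s} b_i k_i \)

The coefficients \( a_{ij} \), \( b_i \), and \( c_i \) are exactly the weights described above; they are conventionally collected into a Butcher tableau, and each named Runge-Kutta variant is just a particular choice of tableau.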

Second-Order Runge-Kutta

To illustrate the concept simply, here is the traditional second-order Runge-Kutta method (RK2), applied to our simple system. In this notation, we use the \(\star\) superscript to indicate an estimate, and the numbered subscripts to indicate which iteration of estimation we're in. Note that each estimation iteration only references values with subscripts less than its own: this is what makes the method explicit; it computes new values only from previous values.

\( \begin{array}{ll} v^\star_1 = v_t, & a^\star_1 = -\frac{k}{m} x_t, \\ v^\star_2 = v_t + \Delta t\ a^\star_1, & a^\star_2 = -\frac{k}{m}( x_t + \Delta t\ v^\star_1 ), \\ x_{t+\Delta t} = x_t + \frac{\Delta t}{2}(v^\star_1 + v^\star_2), & v_{t+\Delta t} = v_t + \frac{\Delta t}{2}(a^\star_1 + a^\star_2) \end{array} \)

In Processing code, we can implement the RK2 timestep as follows:


// Acceleration from Position.
float A_from_X( float i_x )
{
    return -( Stiffness / BobMass ) * i_x;
}

// Time Step function (RK2).
void TimeStep( float i_dt )
{
    // Stage 1: derivatives evaluated at the current state.
    float vStar1 = State[StateVelocityX];
    float aStar1 = A_from_X( State[StatePositionX] );

    // Stage 2: derivatives evaluated after a full Euler step.
    float vStar2 = State[StateVelocityX] + ( i_dt * aStar1 );
    float xTmp = State[StatePositionX] + ( i_dt * vStar1 );
    float aStar2 = A_from_X( xTmp );

    // Final update: average the two stage estimates.
    State[StatePositionX] += ( i_dt / 2.0 ) * ( vStar1 + vStar2 );
    State[StateVelocityX] += ( i_dt / 2.0 ) * ( aStar1 + aStar2 );

    // Update current time.
    State[StateCurrentTime] += i_dt;
}

Note that we've extracted the calculation of acceleration from position into a separate function. This is to emphasize the structure of the system and not bog the implementation down with repeated instances of the spring and mass constants. In larger systems, this is usually the most expensive part of the evaluation, so this implementation helps us understand how much actual work we're doing. We can see above that we're evaluating the acceleration twice in this time step. Here's what that looks like running:

RK2 here is applied to both the position and velocity estimates. This particular variant, which averages stage estimates taken at the start and end of the full step, is also known as Heun's method; the closely related midpoint method is another common second-order Runge-Kutta variant.
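For comparison (not implemented in the class code), here is the midpoint variant applied to our system; its second stage is evaluated at the half-step, and only that stage is used for the final update:

\( \begin{array}{ll} v^\star_1 = v_t, & a^\star_1 = -\frac{k}{m} x_t, \\ v^\star_2 = v_t + \frac{\Delta t}{2}\ a^\star_1, & a^\star_2 = -\frac{k}{m}( x_t + \frac{\Delta t}{2}\ v^\star_1 ), \\ x_{t+\Delta t} = x_t + \Delta t\ v^\star_2, & v_{t+\Delta t} = v_t + \Delta t\ a^\star_2 \end{array} \)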

RK2 is not as bad as forward Euler, but it diverges from the solution quickly, and if you watch it long enough, you will see that it is not entirely stable and will eventually produce large amplitudes. We could apply the same forward-backward substitution that turned Euler into Euler-Cromer, which would at least restore stability, but the results would still not be a good fit. We are using the second-order formulation simply to illustrate the technique behind Runge-Kutta methods, so let's explore what happens at higher orders.

Fourth-Order Runge-Kutta

As we increase the number of estimations, the solution improves dramatically. The fourth-order Runge-Kutta approximation is very popular, and with good reason. It makes four sets of estimates, as you'd expect from a fourth-order method, and then uses an unevenly weighted average of those estimates to get the final result. The derivation of exactly how RK4 chooses its estimation timesteps, and how it weights the average of its estimates, is complex. If you're interested in working through the Taylor-series expansion, it is detailed here. For now, we'll trust the coefficients that are given and look at how it works. Here's the formulation, as applied to our system:

\( \begin{array}{ll} v^\star_1 = v_t, & a^\star_1 = -\frac{k}{m} x_t, \\ v^\star_2 = v_t + \frac{\Delta t}{2}\ a^\star_1, & a^\star_2 = -\frac{k}{m}( x_t + \frac{\Delta t}{2}\ v^\star_1 ), \\ v^\star_3 = v_t + \frac{\Delta t}{2}\ a^\star_2, & a^\star_3 = -\frac{k}{m}( x_t + \frac{\Delta t}{2}\ v^\star_2 ), \\ v^\star_4 = v_t + \Delta t\ a^\star_3, & a^\star_4 = -\frac{k}{m}( x_t + \Delta t\ v^\star_3 ), \\ x_{t+\Delta t} = x_t + \frac{\Delta t}{6} (v^\star_1 + 2 v^\star_2 + 2 v^\star_3 + v^\star_4), & v_{t+\Delta t} = v_t + \frac{\Delta t}{6} (a^\star_1 + 2 a^\star_2 + 2 a^\star_3 + a^\star_4) \end{array} \)
Here's what that looks like in Processing code, computing temporary positions at each sub-estimation, and again using the separate function for acceleration from position:


// Time Step function (RK4).
void TimeStep( float i_dt )
{
    // Stage 1: derivatives evaluated at the current state.
    float vStar1 = State[StateVelocityX];
    float aStar1 = A_from_X( State[StatePositionX] );

    // Stage 2: derivatives evaluated at a half-step taken with stage 1.
    float vStar2 = State[StateVelocityX] + ( ( i_dt / 2.0 ) * aStar1 );
    float xTmp2 = State[StatePositionX] + ( ( i_dt / 2.0 ) * vStar1 );
    float aStar2 = A_from_X( xTmp2 );

    // Stage 3: derivatives evaluated at a half-step taken with stage 2.
    float vStar3 = State[StateVelocityX] + ( ( i_dt / 2.0 ) * aStar2 );
    float xTmp3 = State[StatePositionX] + ( ( i_dt / 2.0 ) * vStar2 );
    float aStar3 = A_from_X( xTmp3 );

    // Stage 4: derivatives evaluated at a full step taken with stage 3.
    float vStar4 = State[StateVelocityX] + ( i_dt * aStar3 );
    float xTmp4 = State[StatePositionX] + ( i_dt * vStar3 );
    float aStar4 = A_from_X( xTmp4 );

    // Final update: weighted average with weights 1, 2, 2, 1.
    State[StatePositionX] += ( i_dt / 6.0 ) *
        ( vStar1 + (2.0*vStar2) + (2.0*vStar3) + vStar4 );
    State[StateVelocityX] += ( i_dt / 6.0 ) *
        ( aStar1 + (2.0*aStar2) + (2.0*aStar3) + aStar4 );

    // Update current time.
    State[StateCurrentTime] += i_dt;
}

Here's what RK4 looks like running in Processing:

Well hot-damn. The popularity of RK4 is easy to understand! However, it comes at a cost: four evaluations of acceleration per time step. Furthermore, there is a temporary storage requirement, which begins to be an issue in larger simulations where a temporary position or velocity may amount to millions of points. Also, RK4 is not unconditionally stable. In many production cases, the stability of Velocity Verlet or Euler-Cromer may be more attractive (especially given their fewer acceleration evaluations), and the additional accuracy provided by RK4 may not be visible. These are the trade-offs and design decisions simulation designers weigh when implementing simulation engines.
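To make "not unconditionally stable" concrete: a standard linear stability result (not derived in class) says that classical RK4 applied to our undamped oscillator only remains bounded while the time step satisfies roughly

\( \Delta t\ \sqrt{\frac{k}{m}} \le 2\sqrt{2} \approx 2.83, \)

beyond which the amplitude grows without bound.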

Interesting point: the "correct" solution to our simple system here is just a cosine function of time. Our numerical integrator achieves almost identical results without ever evaluating anything other than multiplications and additions. This illustrates how Taylor-series expansions are used to provide numerical approximations to mathematical functions.
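For reference, the exact solution for a bob released from rest at an initial displacement \( x_0 \) is

\( x(t) = x_0 \cos\left( \sqrt{\frac{k}{m}}\ t \right). \)

If you'd like to see the drift directly, here is a minimal sketch of a comparison; InitialX is a hypothetical constant holding the starting displacement, which the class code may name differently:


// Exact solution for a bob released from rest.
// NOTE: InitialX is an assumed constant, not part of the class code.
float ExactPositionX( float i_t )
{
    float omega = sqrt( Stiffness / BobMass );
    return InitialX * cos( omega * i_t );
}

// After each TimeStep() call, print the integrator's drift:
// println( abs( State[StatePositionX]
//             - ExactPositionX( State[StateCurrentTime] ) ) );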

So far, we've only been considering differential equations of a single variable, time. Now that we've achieved a pretty good time step, the next class will explore a system in which the equations feature derivatives in both time and space.

Download

All of the files associated with this class are available via a public github repository, found here: https://github.com/blackencino/SimClass
