Tuesday, May 28, 2013

Simulation Class: The Wave Equation

The Wave Equation

In our previous class, we improved our time integration to use the popular fourth-order Runge-Kutta method, and now our simple spring system is looking pretty good. It would be great if we could just use RK4 for any simulation problem we might encounter, and have it just work. Let's explore a system that's a little bit more complicated.

The 1D Wave Equation describes the motion of waves in a height field, and can be written simply as:

\( \frac{\partial^2 h}{\partial t^2} = c^2 \frac{\partial^2 h}{\partial x^2} \)

Just like in our simple spring example, we have an equation of motion in which the acceleration of position (height, in this case) is expressed as a function of the current value of position (again, height). However, we've added a complication! The function of position itself involves a second derivative of height, this time with respect to space rather than time. The constant \(c^2\) represents the square of the wave propagation speed, and is a parameter to the system. So now we've graduated from our previous simple Ordinary Differential Equations (ODEs) to the much more exciting Partial Differential Equations (PDEs).

The Wave Equation, in 1D, is extremely similar at its core to the simple spring equation, and indeed, if you explore the Wikipedia article, you'll see that it is actually derived by imagining a series of evenly spaced masses attached to each other via springs, as seen in this illustration:

State Definition for Wave Equation

We're going to try to use our RK4 implementation from the spring example directly, so we'll need to create space in the state vector for the temporary values created. RK4 sets its final estimate to a weighted average of the four estimates it creates along the way, and each estimate is created from the previous estimate. Therefore, rather than storing all four estimates separately, we can accumulate each estimate directly into our final sum. Here's what our state will look like:

int StateSize = 7;
float[][] State = new float[StateSize][ArraySize];
int StateHeight = 0;
int StateVel = 1;
int StateHeightPrev = 2;
int StateVelPrev = 3;
int StateVelStar = 4;
int StateAccelStar = 5;
int StateHeightTmp = 6;

Initial State

The initial state of the wave equation example is a bit more complex than the pendulum or the spring. For this example, we're going to use several octaves of Perlin noise to create a wave-like distribution of crests. Because our initial state is complex and large, we don't want to keep a copy of it around. If we need to reset, we'll simply call the SetInitialState function that we create. Here's that function, which is entirely subject to artistic interpretation:


void SetInitialState()
{
    noiseSeed( 0 );
    for ( int i = 0; i < ArraySize; ++i )
    {
        float worldX = 2341.17 + DX * ( float )i;
        State[StateHeight][i] = 0.5 * anoise( worldX * 0.0625 ) +
                         0.4 * anoise( worldX * 0.125 ) +
                         0.3 * anoise( worldX * 0.25 ) +
                         0.2 * anoise( worldX * 0.5 );
        State[StateVel][i] = 0.0;
    }
    EnforceBoundaryConditions( StateHeight );
    EnforceBoundaryConditions( StateVel );
    StateCurrentTime = 0.0;
}

Boundary Conditions

In order to compute the second derivative of height, spatially, we need to reference values at adjacent points to the left and right. When we reach the sides of the simulation, though, we have a problem - what value do we use when we're looking off the side of the simulation? For these areas, the simulation needs to have an answer without actually storing a state. The areas of the simulation where we already know the answer, or some aspect of the answer, are called Boundary Conditions. The area of the simulation where boundary conditions are applied is called, not surprisingly, the boundary. The boundary in our simulation is the edges of the wave to the left and right, but if we were to have collision objects, for example, the places where the simulation met the collision objects would be considered a boundary also.

There are many different types of boundary conditions, which place varyingly subtle constraints on the values at boundary points. In this simple wave equation simulation, we're going to assert that the values at the boundary are exactly equal to the simulated values just adjacent to the boundary. In other words, we are asserting that the derivative of any value across the boundary is zero. This explicit constraint on a derivative of a value is called a Neumann Boundary Condition. We implement it in code very simply, by just copying the values to the edges:


void EnforceBoundaryConditions( int io_a )
{
    State[io_a][0] = State[io_a][1];
    State[io_a][ArraySize-1] = State[io_a][ArraySize-2];
}

Note that in this implementation, we allow ourselves to affect any part of the state vector. The flexibility of our abstract state field naming again shows its power!
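
For contrast, a Dirichlet boundary condition would constrain the boundary values themselves, rather than their derivative - for example, pinning the ends of the wave to zero height. We don't use that here, but a hypothetical version of the function might look like this:


// Hypothetical Dirichlet version of the boundary conditions:
// pin the boundary values themselves to zero, rather than
// copying the adjacent simulated values. Not used in this sim.
void EnforceDirichletBoundaryConditions( int io_a )
{
    State[io_a][0] = 0.0;
    State[io_a][ArraySize-1] = 0.0;
}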

Integration Tools

In the simple spring example, we were able to simply create temporary floats to store temporary values, and we were able to use arithmetic operators to compute the integrated fields. Because our state values are now arrays of data, we need special functions for evaluating equations over the whole array at once. These are called "vectorized" functions. Let's break them down one by one. First, a utility function for copying an array from one part of the state vector to another:


void CopyArray( int i_src, int o_dst )
{
    for ( int i = 0; i < ArraySize; ++i )
    {
        State[o_dst][i] = State[i_src][i];
    }
}

As we move from time step to time step, we need to swap out our "current" height and velocity values into a "previous" array. Because of the way we've indexed our state vector with labels, we don't have to actually copy the values over. You could imagine that in a very large fluid simulation, this copying of millions of points of data would be very expensive. Instead, we just swap the labels:


void SwapHeight()
{
    int tmp = StateHeight;
    StateHeight = StateHeightPrev;
    StateHeightPrev = tmp;
}

void SwapVel()
{
    int tmp = StateVel;
    StateVel = StateVelPrev;
    StateVelPrev = tmp;
}

void SwapState()
{
    SwapHeight();
    SwapVel();
}

The RK4 technique involves, for every estimate other than the first, estimating a temporary value for height using the vStar estimate from the previous estimation iteration and the height of the previous time step. The time interval over which these temporary height estimates are made varies from iteration to iteration. In our spring example, this was a single line:


float xTmp = State[StatePositionX] + ( i_dt * vStar1 );

In our wave state, this operation on a single float gets expanded to work on the whole state array like so:


// Estimate temp height
void EstimateTempHeight( float i_dt )
{
    for ( int i = 0; i < ArraySize; ++i )
    {
        State[StateHeightTmp][i] = State[StateHeightPrev][i] + 
                ( i_dt * State[StateVelStar][i] );
    }
    EnforceBoundaryConditions( StateHeightTmp );
}

Following along, in the simple spring example we then had to estimate a new velocity, vStar, based on the previous iteration's aStar and the previous time step's velocity. In the spring example, that looks like this:


float vStar2 = State[StateVelocityX] + ( i_dt * aStar1 );

In our wave state, this operation is expanded as follows:


// Estimate vel star
void EstimateVelStar( float i_dt )
{
    for ( int i = 0; i < ArraySize; ++i )
    {
        State[StateVelStar][i] = State[StateVelPrev][i] + 
                ( i_dt * State[StateAccelStar][i] );
    }
    EnforceBoundaryConditions( StateVelStar );
}
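
There's one more helper the time step will rely on: a function that accumulates each weighted vStar and aStar estimate directly into the final height and velocity sums, as described at the top of this post. That function isn't listed separately here, but a minimal sketch of it, assuming the state labels defined above, would look something like this:


// Accumulate the current estimates into the final sums.
// Each call adds the weighted vStar estimate into the height,
// and the weighted aStar estimate into the velocity. The weight
// i_dt will be dt/6 or dt/3, depending on the RK4 stage.
void AccumulateEstimate( float i_dt )
{
    for ( int i = 0; i < ArraySize; ++i )
    {
        State[StateHeight][i] += i_dt * State[StateVelStar][i];
        State[StateVel][i] += i_dt * State[StateAccelStar][i];
    }
}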

Discretizing the Spatial Derivative of Height

Finally, we need to port the computation of acceleration from an estimated position. In the spring example, this was as simple as could be (on purpose):


// Acceleration from Position.
float A_from_X( float i_x )
{
    return -( Stiffness / BobMass ) * i_x;
}

Now that we have a spatial derivative in our setup, though, we need to calculate our acceleration based not only on the position at each point in the grid, but also on the neighboring points in the grid. In order to do that, we need to discretize the second spatial derivative, which we can do using finite differences, as we learned several classes ago. The finite difference approximation to the second derivative, in one dimension, looks like this:

\( \frac{\partial^2 h}{\partial x^2} \approx \frac{ ( \frac{ ( h_{i+1} - h_i ) }{\Delta x} - \frac{ ( h_i - h_{i-1} ) }{\Delta x} ) }{\Delta x} \)

Which simplifies to:

\( \frac{\partial^2 h}{\partial x^2} \approx \frac{ ( h_{i+1} + h_{i-1} - 2 h_i ) }{\Delta x^2} \)

We can implement this in code as follows:


void A_from_H( int i_h )
{
    for ( int i = 1; i < ArraySize-1; ++i )
    {
        float hLeft = State[i_h][i-1];
        float hCen = State[i_h][i];
        float hRight = State[i_h][i+1];

        float d2h_dx2 = ( hRight + hLeft - ( 2.0*hCen ) ) / sq( DX );

        State[StateAccelStar][i] = sq( WaveSpeed ) * d2h_dx2;
    }
    
    EnforceBoundaryConditions( StateAccelStar );
}

Note that we only iterate across the center values in the arrays - we skip the very first and the very last points, because they don't have values adjacent to themselves in one direction. Here is where the boundary conditions help to fill in the gaps, and you can see that they're applied at the last step.

The Time Step

Okay, so armed with all the pieces above, we can construct our TimeStep. It should look just like the spring solver's RK4 time-step, with the appropriate adjustments for the array data. Here was the old time step:


// Time Step function.
void TimeStep( float i_dt )
{
    float vStar1 = State[StateVelocityX];
    float aStar1 = A_from_X( State[StatePositionX] );

    float vStar2 = State[StateVelocityX] + ( ( i_dt / 2.0 ) * aStar1 );
    float xTmp2 = State[StatePositionX] + ( ( i_dt / 2.0 ) * vStar1 );
    float aStar2 = A_from_X( xTmp2 );

    float vStar3 = State[StateVelocityX] + ( ( i_dt / 2.0 ) * aStar2 );
    float xTmp3 = State[StatePositionX] + ( ( i_dt / 2.0 ) * vStar2 );
    float aStar3 = A_from_X( xTmp3 );

    float vStar4 = State[StateVelocityX] + ( i_dt * aStar3 );
    float xTmp4 = State[StatePositionX] + ( i_dt * vStar3 );
    float aStar4 = A_from_X( xTmp4 );

    State[StatePositionX] += ( i_dt / 6.0 ) * 
        ( vStar1 + (2.0*vStar2) + (2.0*vStar3) + vStar4 );
    State[StateVelocityX] += ( i_dt / 6.0 ) * 
        ( aStar1 + (2.0*aStar2) + (2.0*aStar3) + aStar4 );

    // Update current time.
    State[StateCurrentTime] += i_dt;
}

And here's the new one, for the wave system:


// Time Step function.
void TimeStep( float i_dt )
{
    // Swap state
    SwapState();
    
    // Initialize estimate. This just amounts to copying
    // the previous values into the current values.
    CopyArray( StateHeightPrev, StateHeight );
    CopyArray( StateVelPrev, StateVel );
    
    // Vstar1, Astar1
    CopyArray( StateVel, StateVelStar );
    A_from_H( StateHeight );
    // Accumulate
    AccumulateEstimate( i_dt / 6.0 );
    
    // Height Temp 2
    EstimateTempHeight( i_dt / 2.0 );
    // Vstar2, Astar2
    EstimateVelStar( i_dt / 2.0 );
    A_from_H( StateHeightTmp );
    // Accumulate
    AccumulateEstimate( i_dt / 3.0 );
    
    // Height Temp 3
    EstimateTempHeight( i_dt / 2.0 );
    // Vstar3, Astar3
    EstimateVelStar( i_dt / 2.0 );
    A_from_H( StateHeightTmp );
    // Accumulate
    AccumulateEstimate( i_dt / 3.0 );
    
    // Height Temp 4
    EstimateTempHeight( i_dt );
    // Vstar4, Astar4
    EstimateVelStar( i_dt );
    A_from_H( StateHeightTmp );
    // Accumulate
    AccumulateEstimate( i_dt / 6.0 );
    
    // Final boundary conditions on height and vel
    EnforceBoundaryConditions( StateHeight );
    EnforceBoundaryConditions( StateVel );

    // Update current time.
    StateCurrentTime += i_dt;
}

It's a little bit longer in code, but importantly, it doesn't actually contain more operations! If you compare this to the spring example, you'll see that it's basically the same structure. Let's see what that looks like running in Processing with it all added together:

That's not bad! Perhaps we could make it more interesting, by gathering some mouse input. We'll make it so that if you click in the view, it will set the height and velocity of the sim at that point to the height of the click, and zero, respectively. The input grabbing code looks like this:


void GetInput()
{
    if ( mousePressed && mouseButton == LEFT )
    {
        float mouseCellX = mouseX / ( ( float )PixelsPerCell );
        float mouseCellY = ( (height/2) - mouseY ) / ( ( float )PixelsPerCell );
        float simY = mouseCellY * DX;
        
        int iX = ( int )floor( mouseCellX + 0.5 );
        if ( iX > 0 && iX < ArraySize-1 )
        {
            State[StateHeight][iX] = simY;
            State[StateVel][iX] = 0.0;
        }
    }
}

And we just add that directly into our main draw loop, right before the time step:


void draw()
{
    background( 0.9 );
   
    GetInput();
   
    TimeStep( 1.0 / 24.0 );

    // Label.
    fill( 1.0 );
    text( "Wave Equation RK4", 10, 30 );
  
    DrawState();
}

And what happens now? Here it is running:

Uh oh.... Looks like stability has eluded us again. Next class, we'll look into how to fix stability for cases like this, involving multiple derivatives.

Download

All of the files associated with this class are available via a public github repository, found here: https://github.com/blackencino/SimClass

Simulation Class: Runge-Kutta Methods

Student example from Last Week

Florian Hecht used the lessons from last week to create this inspiring example: Florian's Spring Solver

Higher-Order Time Steps

In our previous class, we improved the accuracy and stability of our simulation's time step by exploring a number of different integration techniques, of which the best-performing were the Euler-Cromer and Velocity-Verlet techniques. However, while stable, their solutions drifted from the theoretically correct answer over time.

In each of our previous examples, the system's equations were evaluated just once per time step, or in some cases twice, which means the solutions were first- or second-order. In order to improve our estimation of the function we're trying to solve for, we can make more than one or two evaluations per time step, resulting in a higher-order approximation.

Runge-Kutta methods are a general class of explicit and implicit iterative methods for approximating functions from their derivatives. In order to explore this class of techniques, let's quickly revisit the equations which define our simple harmonic oscillator system. We have a mass attached to an ideal spring with a certain stiffness, without friction, resulting in a force equal to a constant times its displacement from the spring's origin, in the direction opposite the displacement. That system is defined like this:

\( x'_t = v_t \\ v'_t = a_t \\ a_t = -\frac{k}{m} x_t \)
The idea behind all of the explicit Runge-Kutta methods is that we will estimate a velocity, a position, and then an acceleration, and then we will improve our estimation by using the previous estimates to make better estimates. We can do this any number of times, and for the methods we'll look at here, the number of these estimation stages corresponds to the "order" of the approximation. After all the estimation steps have been taken, the state of the system is finally updated using a weighted average of all the estimates. The only difference between different variations of the explicit Runge-Kutta methods is how the different estimates are weighted, and how each estimate is used to produce the next estimate.

Second-Order Runge-Kutta

To illustrate the concept simply, here is the traditional second-order Runge-Kutta method, applied to our simple system. In this notation, we use the \(\star\) superscript to indicate an estimate, and the numbered subscripts to indicate which iteration of estimation we're in. Note that each estimation iteration only references values with subscripts less than its own: this is what makes the method explicit, it computes new values only from previous values.

\( \begin{array}{ll} v^\star_1 = v_t, & a^\star_1 = -\frac{k}{m} x_t, \\ v^\star_2 = v_t + \Delta t\ a^\star_1, & a^\star_2 = -\frac{k}{m}( x_t + \Delta t\ v^\star_1 ), \\ x_{t+\Delta t} = x_t + \frac{\Delta t}{2}(v^\star_1 + v^\star_2), & v_{t+\Delta t} = v_t + \frac{\Delta t}{2}(a^\star_1 + a^\star_2), \\ \end{array} \)

In processing code, we can implement the RK2 timestep as follows:


// Acceleration from Position.
float A_from_X( float i_x )
{
    return -( Stiffness / BobMass ) * i_x;
}

// Time Step function.
void TimeStep( float i_dt )
{
    float vStar1 = State[StateVelocityX];
    float aStar1 = A_from_X( State[StatePositionX] );

    float vStar2 = State[StateVelocityX] + ( i_dt * aStar1 );
    float xTmp = State[StatePositionX] + ( i_dt * vStar1 );
    float aStar2 = A_from_X( xTmp );

    State[StatePositionX] += ( i_dt / 2.0 ) * ( vStar1 + vStar2 );
    State[StateVelocityX] += ( i_dt / 2.0 ) * ( aStar1 + aStar2 );

    // Update current time.
    State[StateCurrentTime] += i_dt;
}

Note that we've extracted the calculation of acceleration from position into a separate function. This is to emphasize the structure of the system and not bog the implementation down by repeated instances of the spring and mass constants. In larger systems, this is usually the most expensive part of the evaluation, so this implementation helps us understand how much actual work we're doing. We can see above that we're evaluating the acceleration twice in this time step. Here's what that looks like running:

RK2 is basically the same as the midpoint method, applied to both the position and velocity estimates.

It's not as bad as Forward Euler, but it diverges from the correct solution quickly, and if you watch it long enough, you will see that it is not entirely stable, and will eventually produce large amplitudes. We could apply to RK2 the same kind of forward-backwards trick that turned Euler into Euler-Cromer; that would at least restore stability, but the results would still not be a good fit. We are simply using the second-order formulation to illustrate the technique behind Runge-Kutta methods, so let's explore what happens at higher orders.

Fourth-Order Runge-Kutta

As we increase the number of estimations, the solution improves dramatically. The fourth-order Runge-Kutta approximation is very popular, and with good reason. It makes four sets of estimates, as you'd expect from a fourth-order method, and then uses an unevenly weighted average of those estimates to get the final result. The derivation of exactly how RK4 chooses its estimation timesteps, and how it weights the average of its estimates, is complex. If you're interested in working through the Taylor-series expansion, it is detailed here. For now, we'll trust the coefficients that are given and look at how it works. Here's the formulation, as applied to our system:

\( \begin{array}{ll} v^\star_1 = v_t, & a^\star_1 = -\frac{k}{m} x_t, \\ v^\star_2 = v_t + \frac{\Delta t}{2}\ a^\star_1, & a^\star_2 = -\frac{k}{m}( x_t + \frac{\Delta t}{2}\ v^\star_1 ), \\ v^\star_3 = v_t + \frac{\Delta t}{2}\ a^\star_2, & a^\star_3 = -\frac{k}{m}( x_t + \frac{\Delta t}{2}\ v^\star_2 ), \\ v^\star_4 = v_t + \Delta t\ a^\star_3, & a^\star_4 = -\frac{k}{m}( x_t + \Delta t\ v^\star_3 ), \\ x_{t+\Delta t} = x_t + \frac{\Delta t}{6} (v^\star_1 + 2 v^\star_2 + 2 v^\star_3 + v^\star_4), & v_{t+\Delta t} = v_t + \frac{\Delta t}{6} (a^\star_1 + 2 a^\star_2 + 2 a^\star_3 + a^\star_4), \\ \end{array} \)
Here's what that looks like in Processing code, computing temporary positions at each sub-estimation, and again using the separate function for acceleration from position:


// Time Step function.
void TimeStep( float i_dt )
{
    float vStar1 = State[StateVelocityX];
    float aStar1 = A_from_X( State[StatePositionX] );

    float vStar2 = State[StateVelocityX] + ( ( i_dt / 2.0 ) * aStar1 );
    float xTmp2 = State[StatePositionX] + ( ( i_dt / 2.0 ) * vStar1 );
    float aStar2 = A_from_X( xTmp2 );

    float vStar3 = State[StateVelocityX] + ( ( i_dt / 2.0 ) * aStar2 );
    float xTmp3 = State[StatePositionX] + ( ( i_dt / 2.0 ) * vStar2 );
    float aStar3 = A_from_X( xTmp3 );

    float vStar4 = State[StateVelocityX] + ( i_dt * aStar3 );
    float xTmp4 = State[StatePositionX] + ( i_dt * vStar3 );
    float aStar4 = A_from_X( xTmp4 );

    State[StatePositionX] += ( i_dt / 6.0 ) * 
        ( vStar1 + (2.0*vStar2) + (2.0*vStar3) + vStar4 );
    State[StateVelocityX] += ( i_dt / 6.0 ) * 
        ( aStar1 + (2.0*aStar2) + (2.0*aStar3) + aStar4 );

    // Update current time.
    State[StateCurrentTime] += i_dt;
}
Here's what RK4 looks like running in Processing:

Well hot-damn. The popularity of RK4 is easy to understand! However, it comes at a cost - four evaluations of acceleration per time step. Furthermore, there is a temporary storage requirement, which begins to be an issue in larger simulations where a temporary position or velocity may amount to millions of points. Also, RK4 is not unconditionally stable. In many production cases, the stability of Velocity Verlet or Euler-Cromer may be more attractive (especially given the fewer calculations), and the additional accuracy provided by RK4 may not be visible. These are the trade-offs and design decisions made by simulation designers when implementing simulation engines.

Interesting point - the "correct" solution to our simple system here is just a cosine function of time. Our numerical integrator is achieving almost identical results without ever evaluating anything other than a multiplication or an addition. This illustrates how Taylor-series expansions are used to provide numerical approximations to mathematical functions.
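
A quick way to see this for yourself is to compare the integrated position against the closed-form cosine answer as the sim runs. Here's a hypothetical debug snippet (it assumes the InitState array and state labels from the spring posts) that prints the error each frame:


// Hypothetical debug check: compare the RK4 position against
// the exact solution x(t) = x0 cos(w t) + (v0/w) sin(w t).
void PrintErrorVsExact()
{
    float w = sqrt( Stiffness / BobMass );
    float t = State[StateCurrentTime];
    float exactX = ( InitState[StatePositionX] * cos( w * t ) ) +
                   ( ( InitState[StateVelocityX] / w ) * sin( w * t ) );
    println( "RK4 error: " + abs( State[StatePositionX] - exactX ) );
}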

So far, we've only been considering differential equations of a single variable, time. Now that we've achieved a pretty good time step, the next class will explore a system in which the equations feature derivatives in both time and space.

Download

All of the files associated with this class are available via a public github repository, found here: https://github.com/blackencino/SimClass

Monday, May 20, 2013

Simulation Class: Midpoint Method and Velocity Verlet

Higher-Order Difference Equations

In our previous class, we were able to fix the instability of our spring system by switching from the never-correct Forward Euler time step to the Euler-Cromer time step, which made our simulation stable, but did not achieve an accurate result over time when compared to the exact answer.

The solutions we've explored so far are primarily explicit time integrations, in that we're only using values from previous points in time to solve for points at future points in time. When we consider, later, implicit time integration, where values at the current time are solved self-referentially, things will get a lot more complex. So, we'd like to continue exploring techniques that are fundamentally explicit in nature.
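
To make that distinction concrete, here is the difference for a generic first-order equation \( x' = f( t, x ) \): the explicit (forward) Euler step computes the new value purely from quantities we already know, while the implicit (backward) Euler step has the unknown new value on both sides, and so requires solving an equation at every step:

\( x_{t+\Delta t} = x_t + \Delta t\ f( t, x_t ) \quad \text{(explicit)} \\ x_{t+\Delta t} = x_t + \Delta t\ f( t+\Delta t, x_{t+\Delta t} ) \quad \text{(implicit)} \)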

The approach that all of the higher-order explicit methods will take will be to use lower-order approximations to compute temporary values, which will be used to compute better derivatives that will then be used to integrate forward. The first, and most common higher-order method is called "The Midpoint Method".

The Midpoint Method

Let's update the position half-way through our time step using the current velocity, update the velocity with the acceleration as before, and then update the position the rest of the way using the new velocity, still using the same forward finite-difference approximation we used with Forward Euler:
\( a( t ) = f( t, x( t ) ) \\ x( t + \frac{\Delta t}{2} ) \approx x( t ) + \frac{\Delta t}{2} v( t ) \\ v( t + \Delta t ) \approx v( t ) + \Delta t\ a( t ) \\ x( t + \Delta t ) \approx x( t + \frac{\Delta t}{2} ) + \frac{\Delta t}{2} v( t + \Delta t ) \)
Now we've got one extra step of computation at each time step, computing a midpoint position - not a huge change from what we had before. Here's what it looks like in code:

// Time Step function.
void TimeStep( float i_dt )
{
    // Compute acceleration from current position.
    float A = ( -Stiffness / BobMass ) * State[StatePositionX];

    // Update position half-way.
    State[StatePositionX] += ( i_dt/2.0 ) * State[StateVelocityX];

    // Update velocity based on acceleration.
    State[StateVelocityX] += i_dt * A;
    
    // Update position half-way.
    State[StatePositionX] += ( i_dt/2.0 ) * State[StateVelocityX];

    // Update current time.
    State[StateCurrentTime] += i_dt;
}

And here that is:

So really we've actually made things worse with this simple higher-order approximation. An alternative midpoint-method approach is called Velocity Verlet, and it looks like this:

\( a( t ) = f( t, x( t ) ) \\ v( t + \frac{\Delta t}{2} ) \approx v(t) + \frac{\Delta t}{2} a( t ) \\ x( t + \Delta t ) \approx x( t ) + \Delta t\ v( t + \frac{\Delta t}{2} ) \\ a( t + \Delta t ) = f( t+\Delta t, x( t + \Delta t ) ) \\ v( t + \Delta t ) \approx v( t + \frac{\Delta t}{2} ) + \frac{\Delta t}{2} a( t + \Delta t ) \)
And here's what that looks like in code:


// Time Step function.
void TimeStep( float i_dt )
{
    // Compute acceleration from current position.
    float A = ( -Stiffness / BobMass ) * State[StatePositionX];

    // Update velocity half-way.
    State[StateVelocityX] += ( i_dt/2.0 ) * A;

    // Update position.
    State[StatePositionX] += i_dt * State[StateVelocityX];

    // Re-compute acceleration.
    A = ( -Stiffness / BobMass ) * State[StatePositionX];
    
    // Update velocity half-way.
    State[StateVelocityX] += ( i_dt/2.0 ) * A;

    // Update current time.
    State[StateCurrentTime] += i_dt;
}

And here it is running:

Not bad! Now we're cooking with gas! Still not perfect, though. The next class will explore the fourth-order Runge-Kutta approach, and begin to consider implicit methods.

Download

All of the files associated with this class are available via a public github repository, found here: https://github.com/blackencino/SimClass

Simulation Class: The Euler-Cromer Time Step

Forward Euler Problems

In the previous class, we observed that a Forward Euler time step quickly produced an unusable solution to our simple spring system. This illustration from the Wikipedia entry on Forward Euler shows how the method works, and also why its solutions drift away from the correct result.

In this illustration, you can see that at each point, the slope of the function is used to construct a line segment that estimates the position of the next point. The error in each estimate is accumulated with each subsequent estimate, and the solution veers off course. In order to study these systems better, we first need some debug tools.

Adding Debug Tools to our Spring Simulation

The Forward Euler Time Step that we added to the Spring Simulation in the previous class produced an oscillatory motion, but did so in a way that produced an unstable result, with the mass oscillating wildly off screen in a short period of time.

The instability shown by our simple spring simulation is a constant problem for most simulations. It's easy to see the problem in this simple example (which is why we're using it!), but the problem affects large-scale fluid and cloth sims, rigid body sims, and so on. Strangely, despite the general knowledge that Forward Euler is not a good time integration tool, it is still used in many commercial tools, including Maya and Houdini.

In order to analyze our improvements, we first need a measuring stick. To quote my good friend Adam Shand, "You can't improve what you can't measure." We chose the simple spring because it has an exact answer. Therefore, we'll add some code to our drawing routine that draws the right answer just above the simulated spring. Here's that code - note that this uses the expression for the explicit position of the spring that we established in the previous class. We'll also add some text to the top of the drawn image to remind us which time integration we're using.


void DrawState()
{
    // Compute end of arm.
    float SpringEndX = PixelsPerMeter * State[StatePositionX];

    // Compute the CORRECT position.
    float sqrtKoverM = sqrt( Stiffness / BobMass );
    float x0 = InitState[StatePositionX];
    float v0 = InitState[StateVelocityX];
    float t = State[StateCurrentTime];
    float CorrectPositionX = ( x0 * cos( sqrtKoverM * t ) ) +
        ( ( v0 / sqrtKoverM ) * sin( sqrtKoverM * t ) );
    
    // Compute draw pos for "correct"
    float CorrectEndX = PixelsPerMeter * CorrectPositionX;

    // Draw the spring.
    strokeWeight( 1.0 );
    line( 0.0, 0.0, SpringEndX, 0.0 );
          
    // Draw the spring pivot
    fill( 0.0 );
    ellipse( 0.0, 0.0, 
             PixelsPerMeter * 0.03, 
             PixelsPerMeter * 0.03 );
    
    // Draw the spring bob
    fill( 1.0, 0.0, 0.0 );
    ellipse( SpringEndX, 0.0, 
             PixelsPerMeter * 0.1, 
             PixelsPerMeter * 0.1 );

    // Draw the correct bob in blue
    fill( 0.0, 0.0, 1.0 );
    ellipse( CorrectEndX, -PixelsPerMeter * 0.25,
            PixelsPerMeter * 0.1,
            PixelsPerMeter * 0.1 );
}

And here that is:

I also added a "reset" capability to the simple spring code by using Processing's keyRelease function. In response to the key 'r' being released in the view (which has ascii-code 114), we copy the initial state onto the state. (See how handy state vectors are turning out to be!) Because we included the Current Time in our state vector, this is easy:


// Reset function. If the key 'r' is released in the display, 
// copy the initial state to the state.
void keyReleased()
{
    if ( key == 114 )
    {
        for ( int i = 0; i < StateSize; ++i )
        {
            State[i] = InitState[i];
        }
    }  
}

The Euler-Cromer Time Step

When we created the forward finite difference approximation to our ODE equation of motion in the previous class, we had to create two forward difference equations - one for velocity from acceleration, and then one for position from velocity. As you remember, we computed acceleration from the current position, then updated position based on the current velocity, and then finally updated velocity based on the acceleration. What if we updated velocity first, and then updated the position from the current velocity? That would look like this:
\( a( t ) = f( t, x( t ) ) = \frac{-k}{m} x( t ) \\ v( t + \Delta t ) \approx v( t ) + \Delta t a( t ) \\ x( t + \Delta t ) \approx x( t ) + \Delta t v( t + \Delta t ) \)
This seems almost trivially different from our Forward-Euler time step. This method is called many different things: Semi-Implicit Euler Method, Symplectic Euler, Semi-Explicit Euler, Euler-Cromer, and Newton–Størmer–Verlet (NSV) (at least). One of the things I've found in years of writing simulations is that just learning the names of all these different techniques, and learning that the different names often mean exactly the same thing, is a huge part of simplifying the giant mess of understanding.

The code changes to implement Euler-Cromer are trivial: we just reorder the time step - literally just swap the velocity update line and the position update line:


// Euler-Cromer Time Step function.
void TimeStep( float i_dt )
{
    // Compute acceleration from current position.
    float A = ( -Stiffness / BobMass ) * State[StatePositionX];

    // Update velocity based on acceleration.
    State[StateVelocityX] += i_dt * A;

    // Update position based on current velocity.
    State[StatePositionX] += i_dt * State[StateVelocityX];

    // Update current time.
    State[StateCurrentTime] += i_dt;
}

And let's see how this ONE LINE (okay, TWO-LINE) change affects the sim:

This is not bad! Our simulation seems to have become stable, which is good news. However, if we watch the simulation for a while, it becomes clear that the simulated positions diverge more and more over time from the correct positions. In order to fix this, we're going to need to explore even higher-order time integration techniques, which is the topic of the next class!

Download

All of the files associated with this class are available via a public github repository, found here: https://github.com/blackencino/SimClass

Sunday, May 19, 2013

Simulation Class: Simple Spring Time Step

Computing Changes in the Simulation State

Now that we've defined the simulation state in the previous class, and made an effort to make sure we can display our simulation state, it's time to get to the interesting bit : simulating a change in the system over time. Note that not all simulations involve change over time - for example the calculation of material stress in a crane supporting a heavy weight. But, for our purposes, we'll focus on examples that involve matter moving in time.

A Simple Spring System

In order to evaluate the behavior of different motion simulation techniques, we need to work with a system that has an exact solution we can easily calculate, so we can compare our simulation results to the theoretically correct results. A mass connected to a spring, with the spring initially stretched out, provides such a system. We'll create a state and draw routine that looks very similar to the simple pendulum state from the previous class. We have a stiffness for the spring and a mass for the weight at the end of the spring as configuration parameters. We restrict the mass to only move left and right, so we only store positions and velocities in the X dimension.

float Stiffness = 5.0;
float BobMass = 0.5;
int StateSize = 3;
float[] InitState = new float[StateSize];
float[] State = new float[StateSize];
int StateCurrentTime = 0;
int StatePositionX = 1;
int StateVelocityX = 2;

int WindowWidthHeight = 300;
float WorldSize = 2.0;
float PixelsPerMeter;
float OriginPixelsX;
float OriginPixelsY;

void setup()
{
    // Create initial state.
    InitState[StateCurrentTime] = 0.0;
    InitState[StatePositionX] = 0.65;
    InitState[StateVelocityX] = 0.0;

    // Copy initial state to current state.
    // notice that this does not need to know what the meaning of the
    // state elements is, and would work regardless of the state's size.
    for ( int i = 0; i < StateSize; ++i )
    {
        State[i] = InitState[i];
    }

    // Set up normalized colors.
    colorMode( RGB, 1.0 );
    
    // Set up the stroke color and width.
    stroke( 0.0 );
    //strokeWeight( 0.01 );
    
    // Create the window size, set up the transformation variables.
    size( WindowWidthHeight, WindowWidthHeight );
    PixelsPerMeter = (( float )WindowWidthHeight ) / WorldSize;
    OriginPixelsX = 0.5 * ( float )WindowWidthHeight;
    OriginPixelsY = 0.5 * ( float )WindowWidthHeight;
}

// Draw our State, with the unfortunate units conversion.
void DrawState()
{
    // Compute end of arm.
    float SpringEndX = PixelsPerMeter * State[StatePositionX];

    // Draw the spring.
    strokeWeight( 1.0 );
    line( 0.0, 0.0, SpringEndX, 0.0 );
          
    // Draw the spring pivot
    fill( 0.0 );
    ellipse( 0.0, 0.0, 
             PixelsPerMeter * 0.03, 
             PixelsPerMeter * 0.03 );
    
    // Draw the spring bob
    fill( 1.0, 0.0, 0.0 );
    ellipse( SpringEndX, 0.0, 
             PixelsPerMeter * 0.1, 
             PixelsPerMeter * 0.1 );
}

// Processing Draw function, called every time the screen refreshes.
void draw()
{
    background( 0.75 );

    // Translate to the origin.
    translate( OriginPixelsX, OriginPixelsY );

    // Draw the simulation
    DrawState();
}
Here's what that looks like running in Processing - this, again, is just a repeat of the previous class's state initialization and drawing ideas, extended a little to include the notion of an initial state as separate from the state itself.

Motion of a Simple Spring

The equation of motion for a simple spring is, um, simple:
\( F = -k x \)
Where F is force, k is a constant representing the stiffness of the spring, and x is the position of the mass. This simple system is known as a Harmonic Oscillator. As we almost always do in physics, unless we're those weird quantum folks, we consult Newton's Laws of Motion, specifically the second one, which, as the world's second most famous equation, says:
\( F = m a \)
Where F again, is force, m is mass, in this case the mass of the object at the end of our simple spring, and a is the acceleration of the object. By definition, acceleration is the rate of change of velocity over time, and velocity is the rate of change of position over time. In this simple system, that can be expressed in the following way, using derivatives:
\( v = \frac{dx}{dt} \\ a = \frac{dv}{dt} = \frac{d^2 x}{dt^2} \)
We often use the prime and double-prime notation as a shorthand for differentiation by time, which looks like this:
\( v = x' \\ a = v' = x'' \)
Using the equations above, and substituting for acceleration, the simple spring equation becomes:
\( x'' = \frac{-k}{m} x \)
What we see here is what is known as an Ordinary Differential Equation, or ODE for short. ODE's are equations which relate a variable (in our case the position, x), to one or more derivatives of that variable with respect to a single other variable, (in our case time, t). Most simulations of motion are numerical approximations of ODE's, or more commonly, Partial Differential Equations, in which multiple quantities will be equated to the partial derivatives of those quantities with respect to multiple other quantities, not just time. We call those PDE's for short, of course.

This equation expresses a relationship that holds true at any moment in time, and in this very simple case, there actually is an exact solution to the equation as a calculable function of time t, given an initial condition of position = x0 and velocity = v0:

\( x(0) = x_0 \\ x'(0) = v(0) = v_0 \\ x(t) = x_0 \cos( \sqrt{ \frac{k}{m} } t ) + \frac{v_0}{\sqrt{\frac{k}{m}}} \sin( \sqrt{ \frac{k}{m} } t ) \)
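It's worth checking that this really does satisfy our equation of motion: differentiating the expression twice with respect to time pulls a factor of \( -\frac{k}{m} \) out of both the cosine and sine terms, giving us back exactly the spring ODE:
\( x''(t) = -\frac{k}{m} \left( x_0 \cos( \sqrt{ \frac{k}{m} } t ) + \frac{v_0}{\sqrt{\frac{k}{m}}} \sin( \sqrt{ \frac{k}{m} } t ) \right) = -\frac{k}{m} x(t) \)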
We chose this simple case precisely because it did have an exact solution that we will be able to use to compare our simulation results to. However, it is not usually true that there is an explicit, closed-form equation of time like the one above that can be simply evaluated for the solution to the system at any given time. For example, the equation for the angular acceleration of the pendulum from our previous class is:
\( \theta'' = -\frac{g}{L} \sin( \theta ) \)
This equation does not have a simple solution, and that will be true for anything interesting enough to bother simulating. Therefore, in order to calculate the position of the pendulum at some specific time Tk, we'll have to start at our initial time T0, and slowly iterate the solution forward through small, discrete increments of time, with each new time's solution being based on the previous time's solution. These time increments are usually very small. In the film world we work in 24 frames per second (or 25, or 30, or 48, or 60...), and simulations often require smaller steps in order to compute good results that are stable (a concept we'll explore more later).

Numerical Solutions to Differential Equations

The idea of solving differential equations by evaluating approximations to the equations at discrete points is generally called Numerical Integration. The family of solution strategies that fall into this category rely on a linear approximation to derivatives and partial derivatives called a Finite Difference Equation.

Suppose we have a function of time, \(f(t)\). We can approximate the value of the derivative \(f'(t)\) in the following manner:

\( f'(t) \approx \frac{f(t+\Delta t) - f(t)}{\Delta t} \)
Suppose, as in a case like ours, we have an expression that we can evaluate for a derivative, but we cannot evaluate the function directly. (This is the general problem of dynamic simulation). We can rearrange the equation above so that we have an expression for the function at the new time, written in terms of the values at the current time:
\( f(t+\Delta t) \approx f(t) + \Delta t f'(t) \)
Assuming we know the value at just one single time t0, for the function f, the "initial value", we can use the equation above to find the next value, and then the next, and so on, like so:
\( f(t_0) = f_0 \\ f(t_0+\Delta t) = f_0 + \Delta t f'(t_0) \\ f(t_0+2\Delta t) = f(t_0+\Delta t) + \Delta t f'(t_0 + \Delta t) \\ ... \)
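As a tiny worked example of this marching process, suppose \( f'(t) = -2 f(t) \) and \( f(0) = 1 \), with \( \Delta t = 0.1 \). The first few steps give:
\( f(0.1) \approx 1 + 0.1 \cdot ( -2 \cdot 1 ) = 0.8 \\ f(0.2) \approx 0.8 + 0.1 \cdot ( -2 \cdot 0.8 ) = 0.64 \\ f(0.3) \approx 0.64 + 0.1 \cdot ( -2 \cdot 0.64 ) = 0.512 \)
The exact values are \( e^{-0.2} \approx 0.819 \), \( e^{-0.4} \approx 0.670 \), and \( e^{-0.6} \approx 0.549 \), so the estimates are in the right neighborhood, but they drift a little further off with every step.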
This approximation technique is called The Euler Method, or sometimes Forward Euler, because the derivative in the original expression was calculated from a point forward in time. We can apply it to our spring problem by applying it twice, since we have a second derivative. First, we calculate our acceleration from our current position, which we can do from the simple spring equation of motion we broke down above. Then, we update our position based on our current velocity, and finally we update our velocity based on the acceleration we calculated.
\( a(t) = \frac{-k}{m} x(t) \\ x(t+\Delta t) \approx x(t) + \Delta t\ v(t) \\ v(t+\Delta t) \approx v(t) + \Delta t\ a(t) \)
That's all we need to calculate new positions, so let's give it a try in our code.

Adding a Time Step to our Sim Code

Let's make a few small code additions to support time integration. Below the DrawState function, we'll add a new function called TimeStep, which will make the calculations above, and will also set the current time, based on a time increment DT. DT is a code-friendly way of writing \(\Delta t\), and is common practice.

// Time Step function.
void TimeStep( float i_dt )
{
    // Compute acceleration from current position.
    float A = ( -Stiffness / BobMass ) * State[StatePositionX];

    // Update position based on current velocity.
    State[StatePositionX] += i_dt * State[StateVelocityX];

    // Update velocity based on acceleration.
    State[StateVelocityX] += i_dt * A;

    // Update current time.
    State[StateCurrentTime] += i_dt;
}
Then, we simply update our draw function to call the TimeStep function right as it begins. We assume that each draw refresh is a single frame of time integration, which we'll call 1/24 of a second. Here's the new draw function:

void draw()
{
    // Time Step.
    TimeStep( 1.0/24.0 );

    // Clear the display to a constant color.
    background( 0.75 );

    // Translate to the origin.
    translate( OriginPixelsX, OriginPixelsY );

    // Draw the simulation
    DrawState();
}
And that's it! Here's the whole thing, now with motion!

Hey, we've got oscillating motion! But wait.... what the hell? Why is the ball shooting off the screen? Why is it getting faster and faster? Our sim is unstable! We're going to need a better approximation, because as it turns out, for an undamped oscillator like this one, Forward Euler is unconditionally unstable - meaning it will ALWAYS blow up sooner or later, no matter how small we make the time step, unless the results are manipulated.
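
For the curious, here's a sketch of why: one Forward Euler step of our spring can be written as a matrix acting on the state \( (x, v) \), and the magnitude of that matrix's (complex) eigenvalues tells us how the amplitude changes with each step:

\( \begin{pmatrix} x_{t+\Delta t} \\ v_{t+\Delta t} \end{pmatrix} = \begin{pmatrix} 1 & \Delta t \\ -\frac{k}{m} \Delta t & 1 \end{pmatrix} \begin{pmatrix} x_t \\ v_t \end{pmatrix}, \qquad |\lambda| = \sqrt{ 1 + \frac{k}{m} \Delta t^2 } \)

Since \( |\lambda| > 1 \) for any positive \( \Delta t \), the oscillation gets amplified a little bit on every single step, which is exactly the runaway behavior we see above.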

That's the subject of our next class!

Download

All of the files associated with this class are available via a public github repository, found here: https://github.com/blackencino/SimClass

Monday, May 13, 2013

Simulation Class: Smoke Solver State

Eulerian vs. Lagrangian Simulations

The types of physical simulations we do in computer graphics, visual effects, animation and games can be very coarsely categorized as either "Eulerian" or "Lagrangian". In simple terms, Eulerian simulations are ones where the locations in space at which the simulation is being performed do not move, though mass may pass through these points. In our simple state examples, the water height field is an Eulerian simulation - it has a grid, with a height at each point, and every time we draw the grid we draw each rectangle in the same horizontal location. The waves will move through the sim points, even as the sim points stay in place. The shorthand is "Eulerian = Grids".

By contrast, a Lagrangian simulation is one in which the physical locations at which the sim is calculated (and at which knowledge about the system resides) move with the simulation as it is calculated. So, basically, a particle system, in which the particle locations change from frame to frame, and the physics are performed by looking at the relationships between the particles. The shorthand is "Lagrangian = Particles".

Finally, "Hybrid" simulations are ones which use both moving and fixed elements, such as using particles and grids simultaneously. The Particle-Level-Set methods are good examples of a hybrid system, as is the popular "Flip" fluid simulation method.

An Eulerian Smoke Grid

We're going to create an Eulerian Grid simulation of smoke, in which a 2D grid of cells will contain velocities and densities. Just like in the simple state example where we aggregated our state, we'll create a single state object and use integer pointers to access different parts of it.

Smoke will require current and previous values for velocity and density, arrays to hold input data gathered from the mouse, and finally some scratch space to use while calculating. Here's what it looks like - it's not much different from our simple state example:



// Grid resolution per side. Rectangular
int NX = 62;
int NY = 62;

// The size of the sim, in "world" units.
float LX = 100.0;

// Size, in "world" units, of a grid cell.
// Our cells are uniform (square) so DX & DY are the same.
float DXY = LX / ( float )NX;

// Y size, keeping square cells.
float LY = DXY * ( float )NY;

// The size of each grid cell, in pixels.
// This is for drawing
int CellPixels = 8;

// The length of all of our (one-dimensional)
// arrays. We use 1d arrays rather than matrices
// mostly for efficiency reasons.
int GridArraySize = NX*NY;

// Our State Arrays
int NUM_ARRAYS = 12;
float[][] State = new float[NUM_ARRAYS][GridArraySize];
int GridPrevU = 0;
int GridU = 1;
int GridPrevV = 2;
int GridV = 3;
int GridPrevDensity = 4;
int GridDensity = 5;
int GridInputU = 6;
int GridInputV = 7;
int GridInputDensity = 8;
int GridTemp0 = 9;
int GridTemp1 = 10;
int GridTemp2 = 11;

float VstrokeAlpha = 0.5;
That's all for now!

Download

All of the files associated with this class are available via a public github repository, found here: https://github.com/blackencino/SimClass

Simulation Class: Visualizing the State

Visualizing the Simulation State

Once we've established the state vector for our system, the very first thing that a simulation designer should do is to establish a visualization method for as many parts of the simulation state as possible. The better the visualization tools, the better the simulation, and the less time you will spend tracking down bugs or chasing misbehaving physics.

In fact, good visualization tools will allow you to learn and understand how your simulation is working at a deep level, and you'll begin to establish an intuitive relationship with the measurements that form your simulation state.

Drawing the Pendulum

The first thing we do when preparing to draw is to define a transformation from the simulation coordinate system to the viewing coordinate system. This is a standard part of graphics pipelines and will be familiar to anyone who has used OpenGL - we're essentially defining the transformation and projection matrices. In our pendulum case (and indeed in many of our examples), we simply need to define a relationship between the working units of the sim, in this case meters, and the pixels of the display. We define these variables in the header:

int WindowWidthHeight = 300;
float WorldSize = 2.0;
float PixelsPerMeter;
float OriginPixelsX;
float OriginPixelsY;
and then in the setup function, we calculate their values. In this case I'm explicitly deciding to put the origin at the midpoint in X, and a quarter of the way down in Y.

PixelsPerMeter = (( float )WindowWidthHeight ) / WorldSize;
OriginPixelsX = 0.5 * ( float )WindowWidthHeight;
OriginPixelsY = 0.25 * ( float )WindowWidthHeight;
It is typical to try to construct viewing transformations, such as scaling and translating the view, independently of modeling transformations. However, processing.js has a bug with concatenated transformations, so unfortunately our code has to accommodate the transformation from sim coordinates in meters to pixel coordinates by itself.

We define a DrawState function that we call from Processing's 'draw' function, and draw the elements of our state. In this case we have a pendulum arm, a pendulum bob (the weight at the end), and the pivot point at the base of the pendulum. We use the angle of the pendulum from the state to determine the endpoint of the arm for drawing, thus visualizing our state:


// The DrawState function assumes that the coordinate space is that of the
// simulation - namely, meters, with the pendulum pivot placed at the origin.
// Draw the arm, pivot, and bob!
// There is currently a bug in processing.js which requires us to do the
// pixels-per-meter scaling ourselves.
void DrawState()
{
    // Compute end of arm.
    float ArmEndX = PixelsPerMeter * PendulumLength * sin( State[StatePendulumTheta] );
    float ArmEndY = PixelsPerMeter * PendulumLength * cos( State[StatePendulumTheta] );
  
    // Draw the pendulum arm.
    strokeWeight( 1.0 );
    line( 0.0, 0.0, ArmEndX, ArmEndY );
          
    // Draw the pendulum pivot
    fill( 0.0 );
    ellipse( 0.0, 0.0, 
             PixelsPerMeter * 0.03, 
             PixelsPerMeter * 0.03 );
    
    // Draw the pendulum bob
    fill( 1.0, 0.0, 0.0 );
    ellipse( ArmEndX, ArmEndY, 
             PixelsPerMeter * 0.1, 
             PixelsPerMeter * 0.1 );
}
And that's all we need to display the state of our simple pendulum simulation! Here it is running in Processing. Note that there's no movement yet, as we haven't added any time integration into the code - we've just defined our state vector and a display mechanism for it.

Drawing the Wave Field

Drawing our other simple state example, a 1D wave height field, is extremely similar to drawing the pendulum state. We have a DrawState function which, again due to bugs in Processing, does its own coordinate-space transformations.

In this case, we just draw a rectangle in the world for each column of the height field state. We leave a stroke around the rectangles so we can see each individual state location. Here's what that draw function looks like:


void DrawState()
{
    float OffsetY = 0.5 * ( float )WindowHeight;
    for ( int i = 0; i < ArraySize; ++i )
    {
        float SimX = DX *  ( float )i;
        float PixelsX = ( float )( i * PixelsPerCell );
        float SimY = StateHeight[i];
        float PixelsY = SimY * (( float )PixelsPerCell ) / DX;
        float PixelsMinY = OffsetY - PixelsY;
        float PixelsHeight = (( float )WindowHeight) - PixelsMinY;

        fill( 0.0, 0.0, 1.0 );   
        rect( PixelsX, OffsetY - PixelsY, PixelsPerCell, PixelsHeight );
    }
}
And here's what it looks like in Processing. Again, no movement yet - we're just working on creating a visualization for the state.


Download

All of the files associated with this class are available via a public github repository, found here: https://github.com/blackencino/SimClass