Introduction to Basic Numerical Integration


The Importance of Numerical Integration: A Crucial Tool in Computational Analysis

Numerical Integration stands at the forefront of modern computational methods, enabling us to simulate and control a myriad of processes in the real world. From the intricacies of economic models to the complexities of physical phenomena, and even extending to certain social issues, many problems are encapsulated in the form of differential equations. Numerical methods excel where traditional analytical solutions falter, offering a means to solve equations that are otherwise unsolvable by hand. More than just a solution method, numerical integration allows us to reframe these problems into formats that computers can address, achieving a level of speed and accuracy far beyond human capabilities.

A key application of numerical integration is in addressing initial value problems inherent in differential equations. A tangible example of this is in motion analysis: by integrating accelerometer output twice, we can determine an object’s position over time. This process involves navigating through a sequence of discrete data points, with each integration method making distinct assumptions about how signals vary between these points. As a general rule, the higher the order of the integration method, the more precise the results. However, this precision often comes with a cost – a higher computational burden due to an increased number of function evaluations needed for each subsequent timestep.
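To make this concrete, the snippet below sketches the double integration of sampled accelerometer data using the trapezoidal rule. It is a minimal sketch, not flight code: the constant sample period, the zero initial conditions, and the function name are all illustrative assumptions.

import numpy as np

def integrate_twice(accel, dt, v0=0.0, p0=0.0):
    # Integrate sampled acceleration to velocity, then velocity to position,
    # using the trapezoidal rule at a fixed sample period dt.
    velocity = v0 + np.concatenate(([0.0], np.cumsum((accel[1:] + accel[:-1]) / 2 * dt)))
    position = p0 + np.concatenate(([0.0], np.cumsum((velocity[1:] + velocity[:-1]) / 2 * dt)))
    return velocity, position

# Example: constant 1 m/s^2 acceleration sampled at 100 Hz for 1 second.
accel = np.ones(101)
v, p = integrate_twice(accel, dt=0.01)
print(v[-1], p[-1])  # ~1.0 m/s and ~0.5 m, matching v = a*t and p = a*t^2/2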

The journey from one timestep to another in numerical integration is fraught with potential pitfalls. The methods chosen can introduce errors and instability, particularly in complex systems or those with nonlinear inputs. The art, therefore, lies in selecting the right method for the task at hand. For instance, high-frequency systems demand careful method selection to avoid inaccuracies. In some cases, combining methods – such as using Euler’s method for acceleration-to-velocity integration, followed by the Tustin method for velocity-to-position integration – can yield superior results. Such strategies are particularly effective in solving state-space equations, where the dynamics of the system are represented in a structured mathematical format.


Understanding Integration

At its essence, integration is a fundamental concept in calculus and mathematics, focused on determining the areas under curves and the volumes under surfaces. This process is akin to summing up an infinite number of infinitesimally small data points to calculate a total value. Integration is not just about finding areas and volumes; it’s a powerful tool that allows us to understand and quantify the accumulation of quantities, changes, and trends over a continuum.

While the basic premise of integration may seem straightforward, its applications extend far beyond simple area calculations. Integration plays a pivotal role in various fields, including physics, engineering, economics, and beyond. It helps in understanding the dynamics of systems, calculating the total force exerted, determining the cumulative interest over time, or analyzing the changing trends in data.

The Essence of Numerical Integration

Numerical integration methods approximate the solution of differential equations using discrete time steps. To apply them, we first need to convert our functions, models, or equations into a discrete form. The solutions produced by numerical integration algorithms are always approximate and carry some type of error, so it is essential to understand the different methods, what errors each method might introduce, and how those errors trade off against computational cost.

For numerical integration methods, stability refers to the ability of a method to produce bounded solutions. A method's stability region is the set of values of \(h\lambda\) in the complex plane for which the solution remains bounded, where \(h\) is the step size and \(\lambda\) characterizes the problem being solved; the exact shape of the region depends on the method. Stability is analyzed using the scalar linear test equation
$$\frac{du}{dt}=\lambda u$$

Every numerical integration method comes with its trade-offs. Higher-order methods generally yield more accurate results but require more computational resources. This trade-off between precision and computational efficiency is a key consideration in method selection.

Explicit Methods in Numerical Integration: Simplicity and Efficiency

Explicit methods in numerical integration are characterized by their approach to solving differential equations. In these methods, the value of the function at the next point is calculated directly from known values at previous points. This direct approach makes explicit methods relatively simple to implement and understand, especially for those new to numerical integration.

  • The main advantage of explicit methods is their simplicity. They do not require solving complex systems of equations, making them easier to program and faster to execute in many cases. For problems where high precision is not the primary concern, or when dealing with less stiff equations, explicit methods can be more efficient than their implicit counterparts.

While explicit methods are beneficial for their ease of use and computational efficiency, they have limitations. The main drawback is their conditional stability, particularly when dealing with stiff equations or large time steps. In such scenarios, explicit methods can produce inaccurate results or even numerical instabilities.

Implicit Methods

Implicit methods in numerical integration are particularly suited to stiff systems. Unlike explicit methods, they calculate the future state of a system by involving the future state itself in the computation. This characteristic often makes them more stable, especially for stiff equations, but also more computationally intensive.

The General Form of Implicit Methods

The basic formula for an implicit integration method is represented as:

$$Y_{\text{new}}=Y_0+h\cdot f(Y_{\text{new}})$$

Alternatively, this can be expressed as:

$$x(n+1)-x(n)-h\cdot D\cdot x(n+1)=0$$

These equations highlight the key feature of implicit methods: the future state (\(Y_{\text{new}}\) or \(x(n+1)\)) is determined by solving an equation in which it already appears.

Implicit methods can be conceptualized as taking a ‘backward step’ from a future time value. This approach allows for greater stability in the integration process, as it accounts for the changes that happen in the next step.

A significant advantage of implicit methods, such as the implicit Euler method, is their extensive stability region. The stability region for the implicit Euler method encompasses the entire left half of the complex plane (and in fact everything outside a unit disk centered at \(+1\) in the \(h\lambda\) plane). This broad stability region means the implicit Euler method is unconditionally stable for decaying problems, ensuring bounded solutions for any step size \(h\).

Implicit methods excel at solving stiff problems, which are common in various scientific and engineering applications. Their stability characteristics make them a preferred choice in scenarios where explicit methods might fail to provide accurate results.

While implicit methods offer enhanced stability, they come with the downside of increased computational complexity. Each step in an implicit method typically involves solving a set of nonlinear equations, which can be computationally demanding and time-consuming.
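To make the idea of solving for a state that appears in its own update equation concrete, here is a minimal sketch of one backward (implicit) Euler step for a scalar problem, using Newton's method on \(g(y)=y-x-h\,f(y)=0\). The function names and the fixed iteration count are illustrative assumptions, not a production solver.

def backward_euler_step(f, df, x, h, iterations=5):
    # Solve g(y) = y - x - h*f(y) = 0 for the new state y with Newton's
    # method; df is the derivative of f with respect to the state.
    y = x + h * f(x)          # initial guess from an explicit Euler step
    for _ in range(iterations):
        g = y - x - h * f(y)
        dg = 1.0 - h * df(y)
        y -= g / dg           # Newton update
    return y

# Stiff decay dx/dt = -k*x: the implicit step stays bounded even when
# h > 2/k, where forward Euler would diverge.
k, h, x = 50.0, 0.1, 1.0
for _ in range(10):
    x = backward_euler_step(lambda s: -k * s, lambda s: -k, x, h)
print(x)  # decays monotonically toward zero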

Adaptive vs Fixed Timesteps

In numerical integration, the selection of timestep size plays a crucial role in balancing computational efficiency and solution accuracy. This balance can be managed through two primary approaches: adaptive and fixed timesteps.

Adaptive timestep methods adjust the step size dynamically during the computation. This flexibility allows larger timesteps in regions where the solution varies minimally, saving computational time, while in regions with greater variation the timestep is reduced to achieve higher accuracy. The adjustment is typically based on error estimation: if the estimated error is below a certain threshold, the timestep is increased, potentially doubling it for efficiency; if the error exceeds the acceptable limit, the timestep is reduced, possibly halving it, to improve accuracy.

The adaptive approach is particularly useful in problems with varying dynamics, where some intervals may require finer resolution than others.
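A minimal sketch of this doubling/halving heuristic, using step doubling with forward Euler to estimate the local error, is shown below. The thresholds and the recursive retry are illustrative choices rather than a canonical controller.

def adaptive_step(f, x, t, h, tol):
    # Estimate local error by comparing one full Euler step against two
    # half steps; halve h and retry on failure, double it when safe.
    full = x + h * f(x, t)
    half = x + (h / 2) * f(x, t)
    two_halves = half + (h / 2) * f(half, t + h / 2)
    error = abs(two_halves - full)
    if error > tol:              # too much error: retry with half the step
        return adaptive_step(f, x, t, h / 2, tol)
    if error < tol / 4:          # plenty of margin: grow the step next time
        return two_halves, t + h, 2 * h
    return two_halves, t + h, h

# Usage on dx/dt = -x from t = 0 to 5
x, t, h = 1.0, 0.0, 0.5
while t < 5.0:
    x, t, h = adaptive_step(lambda s, _t: -s, x, t, h, tol=1e-3)
print(x)  # close to exp(-5)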

In contrast, fixed-step methods involve a predetermined, constant step size throughout the computation. The main advantage of this approach is its simplicity and consistency, which can be beneficial in systems where a uniform timestep is sufficient or in scenarios where maintaining a specific timestep is crucial for the model’s fidelity.

However, the trade-off with fixed timesteps is in error management. Smaller fixed timesteps generally lead to more accurate solutions but at the cost of increased computational operations and time. Larger timesteps, while computationally efficient, can compromise the solution’s accuracy, especially in regions with high variability.

The choice between adaptive and fixed timesteps largely depends on the specific requirements of the problem being solved. Factors such as the desired accuracy, the nature of the system’s dynamics, and computational resource limitations play a significant role in this decision.

Numerical Integration in Practice

Numerical integration finds its applications in a variety of fields, from electrical engineering to epidemiology. Here are some key examples:

SPICE Circuits: Advanced Circuit Simulation

SPICE (Simulation Program with Integrated Circuit Emphasis) is a powerful tool for numerical circuit simulation. It primarily uses Newton's iteration to handle circuits with nonlinear elements and employs sparse matrix representations and implicit integration to solve the circuit equations. Developed by Laurence Nagel at UC Berkeley under Donald Pederson, with funding from NSF and ARO grants, SPICE is a cornerstone of circuit simulation. Note that while it was originally designed for low-voltage and low-current circuits, adjustments may be needed for high-power applications.

Orbital Simulations: Understanding Celestial Mechanics

Orbital simulations, which I have discussed in a previous article [[orbital dynamics]] showcase the application of numerical integration in calculating the trajectories and motions of celestial bodies. These simulations help in understanding complex gravitational interactions in space.

Inertial Simulations: Exploring Object Dynamics

In another article Tumbling Through Space: A Dive into Rotational Dynamic Simulations, I explored how numerical integration helps simulate the behavior of objects based on their inertia distribution. This type of simulation can reveal unique dynamics of moving bodies under various forces.

Spring-Mass-Damper and Pendulum Simulations: Classic Physics Models

Common in physics, the spring-mass-damper and pendulum models are perfect for testing both simulation and control methods. Numerical integration plays a crucial role in these simulations, allowing us to understand the dynamics of these systems accurately.

Opioid Epidemic Model: Epidemiological Analysis

Numerical integration is not limited to the physical sciences; it extends to the social sciences as well. We use differential models like the SIR (Susceptible, Infected, Recovered) model to simulate the spread of a disease and the impact of various policies.
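As a sketch of what such a model looks like in code, the snippet below integrates the standard SIR equations with forward Euler. The parameter values are arbitrary illustrations, not calibrated to any real epidemic.

def sir_derivatives(S, I, R, beta=0.3, gamma=0.1):
    # Standard SIR rates: infections scale with S*I contacts,
    # recoveries scale with I.
    N = S + I + R
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

# Forward Euler integration over 160 days
S, I, R, dt = 999.0, 1.0, 0.0, 0.1
for _ in range(int(160 / dt)):
    dS, dI, dR = sir_derivatives(S, I, R)
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
print(round(S), round(I), round(R))  # most of the population ends up recovered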

Embedded Applications

Numerical Integration implemented as a filter

Embedded digital systems are at the heart of most modern control systems for industrial, scientific, and commercial applications. These systems operate in discrete time steps, scheduled according to the priority and duration of each computational operation. A digital filter is used to modify the behavior of an input signal. Any control system that relies on rates must perform an integration computation to turn those rates into usable values. For instance, a coffee machine controller must perform an integration to know when to close the valve as the coffee cup fills. Autopilots for aircraft and missiles must likewise integrate rate data from gyroscopes to obtain an attitude angle.

Digital filters are a well-researched and somewhat standardized way to perform high-performance computations on time-series data. Numerical integration can be implemented as a filter, which can easily be incorporated into existing controller designs.
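For example, a trapezoidal (Tustin) integrator reduces to the one-line difference equation \(y[n]=y[n-1]+\frac{h}{2}(x[n]+x[n-1])\), which slots into a controller like any other digital filter. The class below is a minimal sketch of that idea; the class name and interface are illustrative.

class TustinIntegrator:
    # Trapezoidal (Tustin) integrator as a difference-equation filter:
    # y[n] = y[n-1] + (h/2) * (x[n] + x[n-1]).
    def __init__(self, h, y0=0.0):
        self.h = h
        self.y = y0
        self.x_prev = 0.0

    def update(self, x):
        self.y += self.h / 2 * (x + self.x_prev)
        self.x_prev = x
        return self.y

# Example: integrate a constant rate of 2.0 for one second at 100 Hz
integ = TustinIntegrator(h=0.01)
for _ in range(100):
    out = integ.update(2.0)
print(out)  # ~1.99: the integral of 2 over 1 s, minus a small startup transient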

Digital processing accuracy and speed

Control systems manage some process or plant dynamics by taking in information through sensors and then manipulating actuators/effectors. Depending on the type of plant being controlled, a minimum update rate is set for the controller. Using more efficient integration methods allows the controller to run at a higher update rate, which enables either better control of existing systems or control of systems with dynamics that were impossible to handle with slower algorithms/hardware.

Which Integration Method should I use?

Now we’ve reached the meat and potatoes of this subject. Knowing the strengths and weaknesses of different integration methods is key to selecting the proper one for your application. We will review some of the attributes discussed previously, such as timesteps and errors, and give examples and comparisons of some of the more common methods.

What equations are you trying to solve?

Stiffness in your equations is an important consideration when selecting an algorithm. Stiff equations have rapidly decaying components that produce large divergent variations near the stable solution. Any numerical solution that oscillates too far around the true solution may enter a feedback loop in which the magnitude of its deviations grows exponentially. This is called “blowing up”: the result increases until it reaches the precision limit of whatever system you are using and then becomes meaningless.

Implicit vs Explicit

Backward Euler is an implicit method, as are the MATLAB/Simulink fixed-step solvers ode1be and ode14x. Implicit methods handle stiff equations much better than explicit methods.

Fixed vs Adaptive Time-steps

Using a fixed timestep is simpler to implement, but it may waste computation time by calculating many solution iterations that change very little. The alternative is to estimate the local error at each step, for example by computing the step at more than one timestep size and comparing the results. This lets the algorithm increase the timestep in solution regions that change little and decrease it in regions with large changes in magnitude.

Practical Examples

Now we will run through a few different integration methods and compare their performance in terms of resulting error and computational time. We will use two toy problems, one being a circular vector field, and another being a stiff exponential vector field. For some methods we will examine special cases for which they would be helpful.

For our first problem, a simple function to use for comparing integration methods is the vector field that describes a rotating circle, \(f(x,y)=(y,-x)\). We know that the true value of any state should remain on a circle centered at the origin, and therefore the L2 norm of the state vector can be used as an error function, as it represents the radius of the circle.

A circular vector field is defined by the function
$$F(x,y)=\begin{bmatrix} y \\ -x \end{bmatrix}$$
The Jacobian of this field is
$$J=
\begin{bmatrix}
\frac{\partial F_1}{\partial x} & \frac{\partial F_1}{\partial y} \\
\frac{\partial F_2}{\partial x} & \frac{\partial F_2}{\partial y} \\
\end{bmatrix}=
\begin{bmatrix}
0 & 1\\
-1 & 0\\
\end{bmatrix}$$
where \(F_1=y\) and \(F_2=-x\).
The eigenvalues are \(i\) and \(-i\), with no real parts; therefore this is not a stiff set of equations.

For our second problem, an exponential decay vector field is used. This gives us a stiff set of equations that, unlike most stiff problems, has an exact solution we can compare our numerical integration results against.

A 2D exponential vector field can be defined by the following equation
$$F(x,y)=\begin{bmatrix} x \\ -ky \end{bmatrix}$$
The differential equations for this system are
$$\begin{matrix}
\frac{dx}{dt}=x \\
\frac{dy}{dt}=-ky
\end{matrix}$$
Which have the exact time-domain solutions of
$$\begin{matrix}
x(t)=C_1\cdot e^t \\
y(t)=C_2\cdot e^{-kt}
\end{matrix}$$

With the constants \(C_1\) and \(C_2\) determined by the initial conditions.

Euler’s Method

Euler’s method is a numerical integration method that uses the previous state and the current slope to calculate the new state. It is a single-step method with first-order accuracy (its local truncation error is second order). It has an implicit and an explicit version.

Forward Euler

The Forward Euler method is a method for solving differential equations. It can be thought of as propagating a state along the tangent of a vector field. We can approximate the derivative of a function using function evaluations at multiple timesteps.
The equation for a forward Euler integration of a differential equation \(f(x,t)\) is:
$$x_{k+1}=x_{k}+\Delta tf(x_k,t_k)$$
This comes from approximating the derivative with function evaluations at adjacent timesteps:
$$\dot{x}\approx\frac{x_{k+1}-x_k}{h}$$
where \(h=\Delta t\), the discrete time between timesteps. The forward Euler method gives an estimate of the state at timestep \(k+1\). It produces the most error of the methods discussed here, although that error can be reduced by using smaller timesteps, and it does not always preserve stability. Each Euler step carries a second-order numerical error: the \(O(h^2)\) difference between the numerical step and the complete Taylor series expansion. Accumulated over a fixed interval, this leaves a first-order global error, so halving the error requires taking twice as many timesteps (function evaluations).

Python Code

def forward_Euler(f, v, dt):
    # Propagate the state v along its current slope f(v) for one timestep dt.
    return v + dt * f(v)

2D Forward Euler

# Basic forward Euler step for a 2D vector field
def forward_euler(deriv_func, x, y, dt):
    delta = deriv_func(x, y)  # slope at the current state
    return x + delta[0] * dt, y + delta[1] * dt

Because Euler’s method amounts to a tangent approximation, if presented with a circular vector field, the solution degrades into a spiral. Reducing the timestep does not stop this behavior; it only slows the rate at which error accumulates.

Euler’s Method can be used to propagate a state through a clockwise rotating vector field.

We can see how propagating along the tangent of the circle leads to accumulated errors in our simulation.
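Reusing the forward_euler function defined above, a short loop makes the radius drift visible; this is a minimal sketch of the experiment behind that figure.

import math

def circle_field(x, y):
    return (y, -x)

x, y, dt = 1.0, 0.0, 0.01
for _ in range(int(2 * math.pi / dt)):  # roughly one revolution
    x, y = forward_euler(circle_field, x, y, dt)
print(math.hypot(x, y))  # ~1.03: the radius has drifted outward from 1.0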

Euler’s method can be numerically unstable, meaning it will not converge to a solution. Consider the following exponential decay function:
$$f=-kx$$
Euler’s method converges smoothly for small step sizes. When the step size is greater than \(1/k\), Euler’s method oscillates around zero. If the step size is increased beyond \(h=2/k\), Euler’s method diverges for this problem.

We can see that when the timestep is greater than \(1/k\), Euler’s method has an oscillatory response but eventually converges.

If the step size is greater than \(2/k\) we can then see that this method is no longer numerically stable and diverges.
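All three regimes are easy to reproduce. The sketch below applies forward Euler to \(dx/dt=-kx\), where each step simply multiplies the state by \((1-hk)\); the parameter values are illustrative.

def euler_decay(k, h, steps=20, x0=1.0):
    # Forward Euler on dx/dt = -k*x multiplies x by (1 - h*k) each step.
    x = x0
    for _ in range(steps):
        x = x + h * (-k * x)
    return x

k = 1.0
print(euler_decay(k, h=0.5))  # 0 < h < 1/k: smooth monotone decay
print(euler_decay(k, h=1.5))  # 1/k < h < 2/k: oscillates, still converges
print(euler_decay(k, h=2.5))  # h > 2/k: |1 - h*k| > 1, diverges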

Runge Kutta

The Runge-Kutta methods are numerical integration methods that use \(n\) slopes to calculate the new state. They fall under the category of single-step methods, and they are the workhorse of numerical integration. The second- and fourth-order schemes are widely used in simulation. An \(n\)-stage RK method has a local error of order \(n+1\) (up through fourth order; beyond that, more stages are needed per order of accuracy). You might notice that the first-order Runge-Kutta method is simply the forward Euler method.

RK4

The classic Runge-Kutta method, RK4, is one of the most popular solvers in the Runge-Kutta family. It was developed around the year 1900 by the mathematicians Carl Runge and Martin Wilhelm Kutta. It is more accurate than the forward Euler method, has been used for numerical integration for more than 100 years, and has served in control systems for decades. The RK4 method cancels out errors up to order \(\Delta t^5\). It is an explicit solver. The equations are:

$$\begin{matrix}
x_{k+1}=x_k+\frac{\Delta t}{6}(f_1+2f_2+2f_3+f_4)\\
f_1 = f(x_k, t_k) \\
f_2 = f(x_k+\frac{\Delta t}{2}f_1,t_k+\frac{\Delta t}{2}) \\
f_3 = f(x_k+\frac{\Delta t}{2}f_2, t_k+\frac{\Delta t}{2})\\
f_4 = f(x_k+\Delta t f_3, t_k+\Delta t)
\end{matrix}
$$
This method has a local error of order \(\Delta t^5\) and a global error of order \(\Delta t^4\).

Matlab’s ode4 solver is an implementation of the classic Runge-Kutta method.
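Below is a sketch of a single RK4 step written in the same autonomous-function style as the other snippets in this article; it mirrors the \(f_1\ldots f_4\) slopes defined above.

def RK4(f, x, dt):
    # Classic fourth-order Runge-Kutta step for dx/dt = f(x).
    f1 = f(x)
    f2 = f(x + dt / 2 * f1)
    f3 = f(x + dt / 2 * f2)
    f4 = f(x + dt * f3)
    return x + dt / 6 * (f1 + 2 * f2 + 2 * f3 + f4)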

RK2

This is also known as the Euler midpoint method. The second-order Runge-Kutta method evaluates the slope at a location halfway between the current timestep and the next one. It then makes a full forward Euler step using that intermediate slope instead of the original slope. It requires 2 function evaluations, but it gives third-order local accuracy. The equation for the second-order Runge-Kutta is:
$$x_{k+1}=x_k+\Delta tf(x_k+\frac{\Delta t}{2}f(x_k,t_k),t_k+\frac{\Delta t}{2})$$
This can be decomposed into:
$$\begin{matrix}
x_{k+1}=x_k+\Delta tf_2 \\
f_1 = f(x_k, t_k) \\
f_2 = f(x_k+\frac{\Delta t}{2}f_1,t_k+\frac{\Delta t}{2})
\end{matrix}
$$

The local error for this method is of order \(\Delta t^3\). The global error is of order \(\Delta t^2\) across the entire trajectory, because as you make \(\Delta t\) smaller you have to take more steps.

def RK2(function, v, ts):
    f1 = function(v)                # slope at the current state
    f2 = function(v + ts/2 * f1)    # slope at the midpoint
    return v + ts * f2              # full step using the midpoint slope

Euler’s midpoint method uses an additional term of the Taylor series, decreasing the local error to third order, \(O(h^3)\), compared with traditional Euler’s method.

Runge-Kutta methods work poorly for chaotic systems, stochastic differential equations, and stiff differential equations. Constrained dynamics, such as systems with many springs, or complicated systems like the point-kinetics model of a nuclear reactor, are stiff problems and do not work well with Runge-Kutta methods.

Runge-Kutta methods also have distinct stability regions that you must respect when using them for simulations.

The stability condition for the forward Euler method can be expressed mathematically as:

$$|1+h\lambda|\le 1$$

where \(h\) is the step size and \(\lambda\) is the eigenvalue of the problem being solved. This states that the magnitude of the solution to the recurrence relation produced by the forward Euler method remains bounded if and only if \(|1+h\lambda|\le 1\). In other words, the stability region for the forward Euler method is a disk of radius 1 centered at \(-1\) in the complex \(h\lambda\) plane; for a decaying real eigenvalue \(\lambda<0\), any step size greater than \(2/|\lambda|\) will cause the solution to become unstable.

Comparing Numerical Methods

Returning to the circular vector field \(f(x,y)=(y,-x)\), where the L2 norm of the state vector measures the error as the deviation of the circle’s radius, here we see a comparison between the forward Euler method and the second-order Runge-Kutta method.

We can see that the forward Euler method accumulates error at a much faster rate than the RK2 integration method. If we plot the errors on a log-scale we can see that the RK2 method accumulates almost 3 orders of magnitude less error per timestep for the function describing this vector field.

If we compare these two methods with the classic Runge-Kutta method, we can see the vast differences. These accumulated errors matter because if the equation that describes the vector field is used to represent a physical simulation, then any errors in evaluation will propagate through and corrupt the final simulation results. Here we see the vast differences in accuracy between the different integration methods.
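A compact, self-contained harness along these lines is shown below; it propagates the circular field through roughly one revolution with each method and reports the radius drift as the accumulated error. The step size and step count are illustrative.

import numpy as np

def field(s):
    # Circular vector field f(x, y) = (y, -x)
    return np.array([s[1], -s[0]])

def euler_step(s, h):
    return s + h * field(s)

def rk2_step(s, h):
    return s + h * field(s + h / 2 * field(s))

def rk4_step(s, h):
    f1 = field(s)
    f2 = field(s + h / 2 * f1)
    f3 = field(s + h / 2 * f2)
    f4 = field(s + h * f3)
    return s + h / 6 * (f1 + 2 * f2 + 2 * f3 + f4)

h, n = 0.01, 628  # roughly one revolution
for name, step in [("Euler", euler_step), ("RK2", rk2_step), ("RK4", rk4_step)]:
    s = np.array([1.0, 0.0])
    for _ in range(n):
        s = step(s, h)
    print(name, abs(np.linalg.norm(s) - 1.0))  # radius drift = accumulated error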


Adaptive Methods

For this example we are going to look at the circular vector field \(f(x,y)=(y,-x)\) again, which is represented as a clockwise rotating field.

We can run a simulation from 0 to 10 seconds with an adaptive algorithm that resizes the timestep to maintain a specified error value. With an error tolerance of 1e-3, we get the following integral curve.

The true solution should be a perfect circle, so we can see that there is some accumulated error in our method. The graph of our error shows us that this algorithm is maintaining our specified error tolerance throughout the simulation.

We can also plot the changes in step-size to see how the step size changes as the simulation progresses.

What we see is that our initial estimate for the timestep is too large for our specified error tolerance. To maintain our tolerance level of 1e-3 we need to have a step size of less than 0.03 seconds.

Let’s look at another example of our adaptive integration method. This time we will look at an exponential decay vector field. The error tolerance for our algorithm will be 1e-3, and we will adjust the end time of the simulation to remain within the bounds of the vector field visualization. We can observe the results for a starting state of (0.1, 5).

This looks like the integration curve tracks the field pretty well. We can also look at the size of the adaptive steps as we progress through the simulation.

This also shows that our initial estimate for the step size is too high given our error tolerance. It also shows that halfway through the simulation the state passes through an area where the solution changes less rapidly than elsewhere. The error there is smaller, which allows us to increase the step size as the state traverses that area of the vector field. When we plot the error values, we can see exactly where the error starts to decrease.

Conclusion

Numerical integration is key to most applications where computers are used in scientific, engineering, and economic domains. While this article is not at all exhaustive, I hope that diving into a few examples helps you understand how numerical integration works, when to use it, and most importantly, how to use it.