Scipy at lightspeed ⚡ Part 2

zenulabidin

Ali Sherief

Posted on April 20, 2020

So it looks like there will be more content about scipy functions. Here I'm going to cover the integration and optimization functions in scipy. This article should be much shorter because, well, there aren't many methods to explain in these mathematical subjects; it's mostly just the concepts of computing integrals and finding zeros of functions.

Also, as this is supposed to be an entry-level guide, I'm going to skip over large advanced parts of the library, such as the various optimization algorithms, partial derivatives, and Jacobian matrices.

Derivatives

The function scipy.misc.derivative(func, x0, dx=1.0, n=1) computes the nth derivative of func at the point x0, using a central difference formula with spacing dx. func must be a one-dimensional function that accepts only one parameter. If you want to differentiate a function that has multiple parameters, you are looking for partial derivatives, and those are not covered here.

Last time I checked, scipy does not return symbolic functions, and since everything is strictly numeric, we are limited to evaluating the derivative at a point or a set of points of a function.
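
Here's a minimal sketch of how this might be used (the function and point are made up for illustration):

from scipy.misc import derivative

def f(x):
    return x**3 + x**2

# First derivative of x^3 + x^2 at x = 1. The analytical answer is 3 + 2 = 5,
# and the central-difference result should agree to many decimal places.
print(derivative(f, 1.0, dx=1e-6))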

Integration

First I will describe the integration functions available. Most of you know that the notation is $\int f(x)\, dx$ for indefinite integrals and $\int_{a}^{b} f(x)\, dx$ for definite integrals. In scipy the function is called scipy.integrate.quad(func, a, b). This only does definite integrals, and it returns a tuple of the result and an estimate of the absolute error. func can be a lambda taking a single numeric argument and returning a numeric value, but it can also be a normal function.
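
For example, a minimal sketch (the integrand is made up for illustration):

from scipy import integrate

# Definite integral of x^2 from 0 to 2. The analytical answer is 8/3 ≈ 2.6667;
# quad() returns the result along with an estimate of the absolute error.
result, error = integrate.quad(lambda x: x**2, 0, 2)
print(result, error)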

For double integration, use scipy.integrate.dblquad(func, x1, x2, y1, y2). This integrates $\int_{x1}^{x2} \int_{y1}^{y2} f(y, x)\, dy\, dx$. The order of variables in the passed function is important: the y parameter comes before the x parameter. The inner limits y1 and y2 can also be given as functions of x.
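
Here's a hedged sketch of such a call (the integrand and limits are made up for illustration), with the y limits written as constant functions of x:

from scipy import integrate

# Integral of x*y over x in [0, 1] and y in [0, 2]. The analytical answer is 1.0.
# Note that the integrand takes y first, then x.
result, error = integrate.dblquad(lambda y, x: x * y, 0, 1,
                                  lambda x: 0, lambda x: 2)
print(result, error)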

Triple integration is similar; the function for that is scipy.integrate.tplquad(func, x1, x2, y1, y2, z1, z2), and it integrates $\int_{x1}^{x2} \int_{y1}^{y2} \int_{z1}^{z2} f(z, y, x)\, dz\, dy\, dx$.
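
A similar sketch for the triple case (again, purely illustrative): integrating f(z, y, x) = 1 over the unit cube should give its volume, 1.0.

from scipy import integrate

# Volume of the unit cube: integrate f(z, y, x) = 1 with x, y, z all in [0, 1].
# The y limits are functions of x, and the z limits are functions of x and y.
result, error = integrate.tplquad(lambda z, y, x: 1.0, 0, 1,
                                  lambda x: 0, lambda x: 1,
                                  lambda x, y: 0, lambda x, y: 1)
print(result, error)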

There is also a function that will perform an arbitrary number of integrations at once, although this can also be accomplished with repeated calls to quad(). Using scipy.integrate.nquad(func, ranges), you pass a function taking n variables in the order x, y, z, ..., last to func, and in ranges you pass a list of two-element bounds of the form [x1, x2], so that the ranges argument looks like [[x1, x2], [y1, y2], [z1, z2], ..., [last1, last2]]. For a specific variable, instead of passing a two-element range you can also pass a function that takes as arguments all the variables that come after it in func's parameter list (an empty parameter list for the last variable) and returns that variable's bounds. Here's an example:

>>> from scipy import integrate
>>> def f(x, y):
...     return x*y
...
>>> def bounds_y():
...     return [0, 0.5]
...
>>> def bounds_x(y):
...     return [0, 1-2*y]
...
>>> integrate.nquad(f, [bounds_x, bounds_y])
(0.010416666666666668, 4.101620128472366e-16)

A problem might arise where computing an integral analytically is too slow, or a closed-form antiderivative doesn't exist at all. To counter that, people numerically integrate such functions, which just means approximating the integral by a summation with multipliers (weights) applied to the function's value at selected points, essentially transforming the integral into a summation problem:

\int_{-1}^{1} f(x)\, dx \approx \sum_{i=1}^{n} w_{i} f(x_{i})

The domain that's usually integrated over in this case is [-1, 1]. The $w_{i}$ are weights, selectively chosen to make the integration as accurate as possible. Similarly, the $x_{i}$ are the points at which the function is evaluated, also selectively chosen.
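
To make the idea concrete, here's a minimal sketch that builds such a summation by hand using numpy's Gauss-Legendre points and weights (numpy.polynomial.legendre.leggauss); this is essentially the kind of rule that fixed_quad(), described next, applies for you:

import numpy as np

# Gauss-Legendre points and weights for n = 4 on [-1, 1].
points, weights = np.polynomial.legendre.leggauss(4)

# Approximate the integral of cos(x) over [-1, 1] as a weighted sum.
# The analytical answer is 2*sin(1) ≈ 1.6829.
approx = np.sum(weights * np.cos(points))
print(approx)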

However, scipy hides these complexities and presents the function scipy.integrate.fixed_quad(func, a, b, n) to perform this kind of numerical integration. It takes the same parameters as quad(), but in addition it takes the number of weights/points to use, n, which is also called the order of integration. This entire process is sometimes called fixed-order integration.

>>> from scipy import integrate
>>> import numpy as np
>>> f = lambda x: x**8
>>> integrate.fixed_quad(f, 0.0, 1.0, n=4)
(0.1110884353741496, None)
>>> integrate.fixed_quad(f, 0.0, 1.0, n=5)
(0.11111111111111102, None)
>>> print(1/9.0)  # analytical result
0.1111111111111111

>>> integrate.fixed_quad(np.cos, 0.0, np.pi/2, n=4)
(0.9999999771971152, None)
>>> integrate.fixed_quad(np.cos, 0.0, np.pi/2, n=5)
(1.000000000039565, None)
>>> np.sin(np.pi/2)-np.sin(0)  # analytical result
1.0

On the other hand, there is also a more advanced numerical integration function, scipy.integrate.quadrature(func, a, b, tol=1.49e-08, maxiter=50), that will try successively higher orders n until the numerical integration (summation) changes by less than tol between iterations. It stops trying if the order exceeds maxiter, and in that case it just returns the integration at that order. This function returns a tuple of the (result, error) of the numerical integration.

>>> from scipy import integrate
>>> import numpy as np
>>> f = lambda x: x**8
>>> integrate.quadrature(f, 0.0, 1.0)
(0.11111111111111106, 4.163336342344337e-17)
>>> print(1/9.0)  # analytical result
0.1111111111111111

>>> integrate.quadrature(np.cos, 0.0, np.pi/2)
(0.9999999999999536, 3.9611425250996035e-11)
>>> np.sin(np.pi/2)-np.sin(0)  # analytical result
1.0

Differential Equations

Recall that an ordinary differential equation (ODE) is defined as

F(x, y, y^{\prime}, y^{\prime\prime}, \ldots, y^{(n-1)}) = y^{(n)}

$F$ is an arbitrary function that defines the differential equation. The idea is to find the function $y$ that, together with its first $n-1$ derivatives, satisfies the equation for its $n$th derivative. The equation above is an ODE of the $n$th order. An ODE of the first order would have two terms:

F(x, y) = y^{\prime}

And this is what a second order ODE would look like:

F(x, y, y^{\prime}) = y^{\prime\prime}

But to get to the point, scipy has routines that can solve these equations. Given a function, a starting and ending value of x, and an initial value y0 such that y(x_start) = y0, scipy.integrate.solve_ivp() returns a set of x points and the values of the solution function at the corresponding x's. y0 can also be an array of initial values. In that case solve_ivp() simply solves the differential equation once for each starting value and returns all of the resulting solutions wrapped in an array.

>>> from scipy.integrate import solve_ivp
>>> def exponential_decay(t, y): return -0.5 * y
>>> sol = solve_ivp(exponential_decay, [0, 10], [2, 4, 8])
>>> print(sol.t)
[ 0.          0.11487653  1.26364188  3.06061781  4.81611105  6.57445806
  8.33328988 10.        ]
>>> print(sol.y)
[[2.         1.88836035 1.06327177 0.43319312 0.18017253 0.07483045
  0.03107158 0.01350781]
 [4.         3.7767207  2.12654355 0.86638624 0.36034507 0.14966091
  0.06214316 0.02701561]
 [8.         7.5534414  4.25308709 1.73277247 0.72069014 0.29932181
  0.12428631 0.05403123]]

The t values (where t is the x in our example) at which the solution was evaluated are the ones in sol.t. The first row of sol.y is the solution that starts at first_y0 (2): its value is 2 at the first t, 1.88836035 at the second t, 1.06327177 at the third, 0.43319312 at the fourth, and so on up to the last t. scipy then makes similar arrays for second_y0 (4) and third_y0 (8) using exactly the same process. If there were additional y0 values they would be evaluated like that too, and there would be more rows in the sol.y output.

Also notice that we didn't get to choose the values of t above; the solver picked its own intermediate points between the start and end. Usually we want to know the solution at specific values of t. The t_eval parameter (a numeric array) can be used alongside the start and end point to calculate the values of y only at the points in the t_eval array.

>>> sol = solve_ivp(exponential_decay, [0, 10], [2, 4, 8],
...                 t_eval=[0, 1, 2, 4, 10])
>>> print(sol.t)
[ 0  1  2  4 10]
>>> print(sol.y)
[[2.         1.21305369 0.73534021 0.27066736 0.01350938]
 [4.         2.42610739 1.47068043 0.54133472 0.02701876]
 [8.         4.85221478 2.94136085 1.08266944 0.05403753]]

This returns the solution at the points 0, 1, 2, 4, and 10.

Optimization

Before we proceed with describing the mathematical optimization functions, we need to know what the term "optimization" means. I quote from Wikipedia:

In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function.

So optimization is the process of finding the most extreme value of a function, be it the smallest or the largest, or more precisely, finding the argument values that produce that extreme output. There could also be several sets of parameter values which all produce the same most "extreme" value, and these parameter values are usually all returned together.

You will also hear the terms constrained and unconstrained. A constrained optimization problem is one where constraint functions limit the domain of solutions, excluding the ranges of argument values that violate them (in scipy these are usually written as expressions that must stay nonnegative or equal to zero). There may be multiple constraint functions. An unconstrained optimization problem has no such constraint functions.
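
Just to make the distinction concrete, here is a minimal sketch of a constrained minimization (the objective and constraint are made up for illustration):

from scipy.optimize import minimize

# Minimize (x - 3)^2 subject to x <= 1, written as the inequality 1 - x >= 0.
# The unconstrained minimum is at x = 3, but the constraint moves it to x = 1.
res = minimize(lambda x: (x[0] - 3)**2, x0=[0.0],
               constraints=[{'type': 'ineq', 'fun': lambda x: 1 - x[0]}])
print(res.x)  # should be close to [1.]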

For now we will concentrate on minimization of functions. The function for performing minimization in scipy is called scipy.optimize.minimize(). This function takes a method parameter, a string naming the optimization method to use. We will not concern ourselves with the plethora of optimization methods available, so don't worry too much about the value of the method parameter.

Optimization algorithms work by applying an update rule to the function several times in succession, in other words, in iterations. We saw this earlier with differential equations. The tol parameter is the largest error that you are willing to tolerate between two iterations. I like to call it the convergence factor, because the error is supposed to get smaller with each subsequent iteration, so eventually the process converges to the answer.

>>> from scipy.optimize import minimize, rosen, rosen_der
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0, method='Nelder-Mead', tol=1e-6)
>>> res.x
array([ 1.,  1.,  1.,  1.,  1.])

The res.x array has one value for each parameter passed to the function. Together, these values applied to the parameters give the smallest value of the entire function. In this case, the initial guess x0 led us to [1, 1, 1, 1, 1], which causes the Rosenbrock function (the one we are trying to minimize) to output its smallest value.

In this example we passed five parameters to the Rosenbrock function, but internally it sums a term over its parameters, so we can pass a different number of parameters. The function returns only one value; it does not return the same number of values as the parameters it was given.
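
You can check this directly; a quick sketch:

from scipy.optimize import rosen

# The Rosenbrock function accepts any number of parameters but returns one value.
# Its global minimum is 0, reached when every parameter equals 1.
print(rosen([1.0, 1.0]))                 # 0.0
print(rosen([1.0, 1.0, 1.0, 1.0, 1.0]))  # 0.0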

A simple trick to find the maximum value of a function is to negate the function you want to maximize and then pass the negated version to minimize(). Note that you can't literally write minimize(-rosen), since a Python function object can't be negated; wrap it instead, e.g. minimize(lambda x: -rosen(x), x0).
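
For instance, a minimal sketch that maximizes the made-up function f(x) = -(x - 2)^2 + 1 (names and numbers are just for illustration):

from scipy.optimize import minimize

# f has its maximum value 1 at x = 2; minimizing -f finds that point.
f = lambda x: -(x[0] - 2)**2 + 1
res = minimize(lambda x: -f(x), x0=[0.0])
print(res.x)  # should be close to [2.]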

Finally, I would like to describe how one would use scipy to find roots of functions. Root finding means finding the parameter values of a function for which the output is zero; a root is any parameter value that makes the function zero. So for the function f(x) = x - 3, the only root is 3, since f(3) = 0, but other functions can have many roots. It is similar to a minimization problem, but instead of finding parameters that give the smallest value, we find parameters that give zero. As with minimization, there could be multiple places in the function that give a zero value.

You can use scipy.optimize.root(func, initial_guess) for this task. If initial_guess is an array then it must be the same size as the number of parameters of func.

This example finds a root of $x + 2\cos(x) = 0$:

>>> import numpy as np
>>> from scipy.optimize import root
>>> def func(x):
...     return x + 2 * np.cos(x)
>>> sol = root(func, 0.3)
>>> sol.x
array([-1.02986653])

Closing words

In this article we learned about calculating derivatives, multiple integrals, numerical approximations of integrals, solutions to differential equations, minimization problems, and root finding. These are core pillars of higher mathematics and are useful not just in school but also when you're making production applications that have a mathematical part to them. Take car engine cylinders, for example: you need integrals to determine the volume of a cylinder, and acceleration is the derivative of velocity. And while auto makers (just to make a specific example) most likely aren't using scipy, or Python at all, for the job, you would at least be able to simulate the dynamics that are happening in there on your own systems.

If you see any errors in this post, please let me know so I can correct them.

Image by Craig Melville from Pixabay
