In this section, we will look at two other commonly used SciPy sub-packages: scipy.interpolate and scipy.integrate.

The simplest interpolation routine in scipy.interpolate is interp1d:

In [ ]:

```
from scipy.interpolate import interp1d
```

If we create a fake dataset:

In [ ]:

```
import numpy as np
x = np.array([0., 1., 3., 4.])
y = np.array([0., 4., 2.7, 2.08])
```

we can interpolate linearly by first creating an interpolating function:

In [ ]:

```
f = interp1d(x, y)
```

and we can then interpolate to any value of x within the original bounds:

In [ ]:

```
f(0.5)
```

In [ ]:

```
f(3.3)
```

It is also possible to interpolate to several values at the same time:

In [ ]:

```
f(np.array([0.5, 1.5, 2.5, 3.5]))
```

If the interpolating function is called outside the original range, an error is raised:

In [ ]:

```
f(-1.)
```

You can change this behavior by telling `interp1d` not to raise an error in this case, but to use a fixed fill value instead:

In [ ]:

```
f = interp1d(x, y, bounds_error=False, fill_value=-10.)
```

In [ ]:

```
f(-1.0)
```

In [ ]:

```
f(np.array([-1., 1., 3., 6.]))
```

By default, `interp1d` uses linear interpolation, but it is also possible to use e.g. cubic **spline** interpolation:

In [ ]:

```
f = interp1d(x, y, kind='cubic')
f(0.5)
```

Let's compare a few ways to interpolate:

In [ ]:

```
import matplotlib.pyplot as plt
%matplotlib inline
```

In [ ]:

```
xp = np.arange(0, 4, 0.1)
f = interp1d(x, y)          # reset to linear (f was redefined as cubic above)
g = interp1d(x, y, kind=2)  # equivalent to kind='quadratic'
h = interp1d(x, y, kind=3)  # equivalent to kind='cubic'
plt.plot(xp, f(xp), xp, g(xp), xp, h(xp))
plt.xlim(0, 6)
plt.legend(['linear', 'quadratic spline', 'cubic spline'])
```

In [ ]:

```
def orig(x):
    return 3.11 - 3.11*(x-1)*(x-1) + 0.5*x + 0.39*x*x*x

xp = np.arange(0, 4, 0.5)
print(xp)
print(orig(xp))
print(h(xp))
```

For more information, see the documentation for interp1d. There are also other interpolation functions available (for example for spline interpolation), which you can read about in the scipy.interpolate documentation.
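As a sketch of one of those routines (assuming a SciPy version that provides `CubicSpline`), a cubic spline can also be constructed directly, and the resulting object supports derivatives and integration of the spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0., 1., 3., 4.])
y = np.array([0., 4., 2.7, 2.08])

# CubicSpline builds a piecewise cubic polynomial through the points
cs = CubicSpline(x, y)

print(cs(0.5))                # interpolated value at x = 0.5
print(cs(0.5, 1))             # first derivative of the spline at x = 0.5
print(cs.integrate(0., 4.))   # integral of the spline over [0, 4]
```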

Wikipedia also has some information on splines.

The available integration functions are listed at scipy.integrate. See also the scipy.integrate tutorial. You will notice there are two kinds of functions: those that integrate actual Python functions, and those that integrate numerical functions defined by NumPy arrays.

First, we can take a look at one of the functions that can integrate actual Python functions. If we define a function:

In [ ]:

```
def simple_function(x):
    return 3. * x**2 + 2. * x + 1.
```

we can integrate it between limits using:

In [ ]:

```
from scipy.integrate import quad
print(quad(simple_function, 1., 2.))
```

As described in the documentation for quad, the first value returned is the integral, and the second is an estimate of the error on the integral. If we solve the integral analytically, we expect 11, so the result is correct. The name comes from *quadrature*: working out the area under a curve by splitting it up into known sub-areas.
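That expectation is easy to verify by hand: the antiderivative of $3x^2 + 2x + 1$ is $x^3 + x^2 + x$, so the integral from 1 to 2 is $14 - 3 = 11$. A quick check of quad against this value:

```python
from scipy.integrate import quad

def simple_function(x):
    return 3. * x**2 + 2. * x + 1.

def antiderivative(x):
    return x**3 + x**2 + x

exact = antiderivative(2.) - antiderivative(1.)   # 14 - 3 = 11
result, error = quad(simple_function, 1., 2.)
print(exact, result, abs(result - exact))
```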

We can also define functions numerically, as NumPy arrays:

In [ ]:

```
x = np.linspace(1., 2., 1000)
y = 3. * x**2 + 2. * x + 1.
```

And in this case we can use, for example, the simps function (renamed `simpson` in newer SciPy versions) to integrate using Simpson's rule:

In [ ]:

```
from scipy.integrate import simps
print(simps(y, x=x))
```

This can be very useful in cases where one wants to integrate actual data that cannot be represented as a simple function or when the function is only available numerically.

Note that there is an omission on the scipy.integrate page - there should also be a mention of the `trapz` function (renamed `trapezoid` in newer SciPy versions), which works similarly to `simps` but does trapezium integration:

In [ ]:

```
from scipy.integrate import trapz
print(trapz(y, x=x))
```
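Since the exact answer here is 11, the two rules are easy to compare. A small sketch on a deliberately coarse grid (with an import fallback for newer SciPy versions, where these functions are called `simpson` and `trapezoid`):

```python
import numpy as np

try:
    from scipy.integrate import simps, trapz
except ImportError:  # newer SciPy renamed these functions
    from scipy.integrate import simpson as simps, trapezoid as trapz

x = np.linspace(1., 2., 11)   # only 11 samples, to make the errors visible
y = 3. * x**2 + 2. * x + 1.

# Simpson's rule is exact for polynomials up to degree 3, so it reproduces
# the integral of this quadratic to rounding error; the trapezium rule is
# exact only for linear functions and shows a small error here.
print(abs(simps(y, x=x) - 11.))
print(abs(trapz(y, x=x) - 11.))
```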

a) Write a function that takes `x` and the parameters of a Gaussian (amplitude, displacement, width), and returns the value of the Gaussian at `x`:

In [ ]:

```
# your solution here
```

b) Use `quad` to compute the integral, and compare to what you would expect.

In [ ]:

```
# your solution here
```

c) Now create two arrays `x` and `y` that contain the Gaussian for fixed values of `x`, and try to compute the integral using `simps`.

In [ ]:

```
# your solution here
```

Compare this to what you found with `quad` and analytically.

An important task in scientific computing is to solve differential equations.

*scipy.integrate* also provides *odeint*, which we use here to solve the damped oscillation equation.

In [ ]:

```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.integrate import odeint
```

For the *odeint* solver, the differential equation needs to be split up into a system of first-order equations:

(1) $\dot{x} = v$

(2) $\dot{v} = -k x - 2 g v$

In [ ]:

```
def diffeq(x, t, k, g):
    """Differential equation for damped oscillation,
    split into a system of first-order equations:
        x. = v
        v. = -k x - 2 gamma v
    """
    # define/clear the derivative vector
    dxdt = np.zeros(2)
    dxdt[0] = x[1]
    dxdt[1] = -k*x[0] - 2*g*x[1]
    return dxdt

# constants
k = 0.1
g = 0.25
x = np.array([2.0, -0.8])            # initial position and velocity
time = np.linspace(0.0, 100.0, 100)  # evaluation times

# calling the solver
solution = odeint(diffeq, x, time, args=(k, g))
```

This is the corresponding analytical solution:

In [ ]:

```
def damped_oscillation(t, x, k, g):
    om = np.sqrt(k - g**2)
    A = (x[1] + g*x[0]) / om
    B = x[0]
    return (A*np.sin(om*t) + B*np.cos(om*t)) * np.exp(-g*t)
```

In [ ]:

```
plt.plot(time,solution[:,0],time,damped_oscillation(time,x,k,g))
plt.legend(['Numerical','Analytical'])
```
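For comparison, newer SciPy code usually uses `solve_ivp` instead of `odeint`; note that it expects the right-hand side as f(t, y), rather than odeint's f(y, t). A sketch with the same k, g and initial conditions as above (this assumes a SciPy version where `solve_ivp` accepts the `args` keyword):

```python
import numpy as np
from scipy.integrate import solve_ivp

def diffeq_ivp(t, x, k, g):
    # argument order (t, x): solve_ivp expects f(t, y), odeint expects f(y, t)
    return [x[1], -k * x[0] - 2. * g * x[1]]

k, g = 0.1, 0.25
time = np.linspace(0.0, 100.0, 100)
sol = solve_ivp(diffeq_ivp, (0., 100.), [2.0, -0.8], t_eval=time, args=(k, g))
print(sol.y[0, :5])   # first few positions
```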

a) Plot the motion of a critically damped oscillator, $\gamma = \sqrt{k}$

In [ ]:

```
# solution here
```

b) Extend the differential equation by a forcing term $f(t)=A_0 \cos (\omega_0 t)$ [choose $A_0$ and $\omega_0$]. Consider the initial behaviour of the system.

In [ ]:

```
# solution here
```