The examples are adapted from:
R. Johansson, "Numerical Python: Scientific Computing and Data Science Applications With Numpy, Scipy and Matplotlib", Apress, 2018
import numpy as np
import numpy.linalg
import numpy.random as rd
import scipy
from scipy import linalg, optimize, integrate
import matplotlib
import matplotlib.pyplot as plt
Solve the system of linear equations
$$\left\{ \begin{array}{l} 2x_0 +3x_1 = -4 \\ -x_0 +4x_1 = 2 \end{array}\right.$$In matrix form, set
$$A =\left[ \begin{array}{cc} 2 & 3 \\ -1 & 4 \end{array} \right], \quad b =\left[ \begin{array}{c} -4 \\ 2 \end{array} \right]$$and find
$$ x =\left[ \begin{array}{c} x_0 \\ x_1 \end{array} \right]$$so that
$$Ax = b$$
A = np.array([[2,3],[-1,4]])
A
array([[ 2,  3],
       [-1,  4]])
b = np.array([-4,2])
b
array([-4, 2])
x = linalg.solve(A,b)
x
array([-2., 0.])
Check that $x$ is indeed the solution
A@x - b
array([0., 0.])
The condition number should be as small as possible (i.e. close to $1$), otherwise small changes in $b$ result in large variations in $x$. If it is small, the system is well conditioned.
numpy.linalg.cond(A)
2.2907308206532337
What happens if the condition number is large?
A1 = np.array([[2,3],[4,5.9999]])
numpy.linalg.cond(A1)
324994.00004943175
linalg.solve(A1,[2,4])
array([1., 0.])
linalg.solve(A1,A1@np.array([2.01,4]))
array([2.01, 4. ])
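The condition number bounds how much a relative perturbation of $b$ can be amplified in the solution: $\|\delta x\|/\|x\| \leq \mathrm{cond}(A)\,\|\delta b\|/\|b\|$. A minimal sketch of this sensitivity (the perturbation size $10^{-4}$ is an arbitrary choice):
b1 = A1 @ np.array([1., 0.])                       # b1 = [2, 4]; the exact solution is [1, 0]
b2 = b1 + np.array([0., 1e-4])                     # tiny perturbation of the right-hand side
x1 = linalg.solve(A1, b1)
x2 = linalg.solve(A1, b2)
rel_b = numpy.linalg.norm(b2 - b1) / numpy.linalg.norm(b1)
rel_x = numpy.linalg.norm(x2 - x1) / numpy.linalg.norm(x1)
rel_x / rel_b                                      # large amplification, bounded above by cond(A1)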
Other useful quantities are the (Frobenius) norm and the rank of the matrix:
numpy.linalg.norm(A)
5.477225575051661
numpy.linalg.matrix_rank(A)
2
Least-squares fitting is useful when our data follow an underlying law that depends linearly on some parameters, such as:
$$ y(x) = a + b\cdot x + c \cdot x^2$$
x = np.linspace(-1,1,100)
a, b, c = 1, 2, 3
y_exact = a + b*x + c*x**2
Add some noise...
m = 100
X = 1 - 2*rd.rand(m)
Y = a + b*X + c*X**2 + rd.randn(m)
fig, ax = plt.subplots()
ax.plot(X,Y,'bo')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.grid(True)
The system to solve is this one:
$$\left[ \begin{array}{ccc} 1 & x_0 & x_0^2 \\ 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ \vdots & \vdots & \vdots \\ 1 & x_m & x_m^2 \end{array} \right] \left( \begin{array}{c} a \\ b \\ c \end{array} \right) = \left( \begin{array}{c} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_m \end{array} \right) $$
A = np.vstack([X**0, X**1, X**2])
sol, r, rank, sv = linalg.lstsq(A.T, Y)
sol
array([0.91471154, 1.93138163, 3.14262001])
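As an aside, the same design matrix can be built with np.vander (with increasing=True the columns are $x^0, x^1, x^2$):
np.allclose(np.vander(X, 3, increasing=True), A.T)   # True: identical design matrix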
y_fit = sol[0] + sol[1]*x + sol[2]*x**2
y_exact = a + b*x + c*x**2
fig, ax = plt.subplots()
ax.plot(X,Y,'bo',label='data')
ax.plot(x, y_fit, 'r', label='Least-squares fit')
ax.plot(x, y_exact, 'k', label='True values')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.legend()
ax.grid(True)
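As a cross-check, np.polyfit performs the same least-squares fit; it returns the coefficients from the highest degree down, so we reverse them to compare with sol:
np.polyfit(X, Y, 2)[::-1]    # should match sol = [a, b, c] up to roundoff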
Find the zeros (also called roots) of
$$ f(x) = e^x -2$$Note that
$$ f'(x) = e^x$$A zero is bracketed by two points where the function takes opposite signs.
f = lambda x: np.exp(x)-2
fprime = lambda x: np.exp(x)
x = np.linspace(-2.1,2.1,1000)
fig, ax = plt.subplots()
ax.plot(x,f(x),'b-')
ax.set_xlabel('$x$')
ax.set_ylabel('$f(x)$')
ax.grid(True)
We note that $f(0)$ and $f(1)$ differ in sign and so a zero must be between $0$ and $1$.
f(0)
-1.0
f(1)
0.7182818284590451
x_zero = optimize.bisect(f, 0, 1)
fig, ax = plt.subplots()
ax.plot(x,f(x),'b-')
ax.plot([x_zero],[f(x_zero)],'r*')
ax.set_xlabel('$x$')
ax.set_ylabel('$f(x)$')
ax.grid(True)
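For reference, the bisection step itself takes only a few lines; a minimal hand-rolled sketch (the tolerance $10^{-10}$ is an arbitrary choice):
lo, hi = 0.0, 1.0                  # bracket: f(lo) < 0 < f(hi)
while hi - lo > 1e-10:
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) < 0:         # the sign change is in the left half
        hi = mid
    else:                          # the sign change is in the right half
        lo = mid
0.5 * (lo + hi)                    # close to ln(2) = 0.6931...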
Using a first-order Taylor expansion
$$ f(x+dx) \approx f(x) + f'(x)\, dx $$and, starting from $x$, looking for $dx$ such that $f(x+dx)=0$ allows us to estimate $dx$ as:
$$ dx = - \frac{f(x)}{f'(x)}$$
optimize.newton(f, 1, fprime=fprime)
0.6931471805599453
The derivative can be omitted (it is then estimated numerically):
optimize.newton(f, 1)
0.6931471805599453
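The update rule above is easy to iterate by hand; a minimal sketch (the starting point and the fixed number of iterations are arbitrary choices):
xk = 1.0                           # initial guess
for _ in range(6):
    xk = xk - f(xk) / fprime(xk)   # dx = -f(x)/f'(x)
xk                                 # converges (quadratically) to ln(2)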
Brent's method combines bisection with secant and inverse quadratic interpolation, and is usually preferred when a bracketing interval is available:
optimize.brentq(f, 0, 1)
0.6931471805599453
%pdoc optimize.brentq
We have to solve this system
$$ \left\{ \begin{array}{l} y - x^3 -2 x^2 +1 = 0 \\ y + x^2 -1 =0 \end{array} \right.$$Graphically...
x = np.linspace(-3,2,100)
fig, ax = plt.subplots()
ax.plot(x,x**3+2*x**2-1,'b-', label='$y=x^3+2x^2-1$')
ax.plot(x,1-x**2,'r-',label='$y = -x^2+1$')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.legend()
ax.grid(True)
We define a vector function in two variables: $f:\mathbb{R}^{2} \to \mathbb{R}^2$
$$ f(x) = \left[ \begin{array}{l} f_0(x_0, x_1) \\ f_1(x_0, x_1) \end{array} \right] = \left[ \begin{array}{l} x_1 - x_0^3 -2 x_0^2 +1 \\ x_1 + x_0^2 -1 \end{array} \right] $$Note that $x_0 = x$ and $x_1=y$.
def f(x):
    return [x[1] - x[0]**3 - 2*x[0]**2 + 1,
            x[1] + x[0]**2 - 1]
An initial guess is needed, for example $x = \left[ 1, 0.5 \right]^T$
x_zero = optimize.fsolve(f, [1, 0.5])
x_zero
array([0.73205081, 0.46410162])
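Supplying the analytical Jacobian through the fprime argument can help convergence; a sketch for the system above:
def f_jac(x):
    # Jacobian of f: entry (i, j) is the derivative of f_i with respect to x_j
    return [[-3*x[0]**2 - 4*x[0], 1.],
            [ 2*x[0],             1.]]
optimize.fsolve(f, [1, 0.5], fprime=f_jac)   # same solution as above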
x = np.linspace(-3,2,100)
fig, ax = plt.subplots()
ax.plot(x,x**3+2*x**2-1,'b-', label='$y=x^3+2x^2-1$')
ax.plot(x,1-x**2,'r-',label='$y = -x^2+1$')
ax.plot([1],[0.5],'b*',label='Initial guess')
ax.plot([x_zero[0]],[x_zero[1]],'r*',label='Solution')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.legend()
ax.grid(True)
%pdoc optimize.fsolve
Compute the integral
$$\int_{-1}^{+1} e^{-x^2}\, dx$$
f = lambda x: np.exp(-x**2)
integrate.quad(f, -1, 1)
(1.493648265624854, 1.6582826951881447e-14)
The first element of the tuple is the estimate of the integral; the second is an estimate of the absolute error.
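quad also accepts infinite limits through np.inf; for example, the Gaussian integral $\int_{-\infty}^{+\infty} e^{-x^2}\, dx = \sqrt{\pi}$:
integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)   # first element close to np.sqrt(np.pi)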
Sometimes the function has a singularity.
$$\int_{-1}^{+1} \frac{1}{\sqrt{|x|}}\, dx$$
f = lambda x: 1./np.sqrt(np.abs(x))
integrate.quad(f, -1, 1)
RuntimeWarning: divide by zero encountered in double_scalars
IntegrationWarning: The maximum number of subdivisions (50) has been achieved. If increasing the limit yields no improvement it is advised to analyze the integrand in order to determine the difficulties. If the position of a local difficulty can be determined (singularity, discontinuity) one will probably gain from splitting up the interval and calling the integrator on the subranges. Perhaps a special-purpose integrator should be used.
(nan, nan)
Passing the location of the singularity through the points argument does the job.
integrate.quad(f, -1, 1, points=[0])
(3.999999999999999, 1.7319479184152442e-14)
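Alternatively, as the warning suggests, we can split the interval at the singularity and integrate the two halves separately (quad copes with an integrable singularity at an endpoint):
left, _  = integrate.quad(f, -1, 0)
right, _ = integrate.quad(f,  0, 1)
left + right                      # close to the exact value 4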
x = np.linspace(-1,1,200)
y = f(x)
Using the trapezoidal rule (linear interpolation)
integrate.trapz(y, x)
3.8787082658197445
Using Simpson's rule (quadratic interpolation)
integrate.simps(y, x)
3.8787167470986548
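Both sampled rules approach the exact value $4$ as the grid is refined, although slowly here because of the singularity; a quick check (even point counts keep $x=0$ off the grid):
for n in (200, 2000, 20000):
    xs = np.linspace(-1, 1, n)            # even n, so x = 0 is never sampled
    print(n, integrate.trapz(f(xs), xs))  # slowly increasing towards 4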