Find Lagrange polynomials that approximate $f(x) = x^3$:

- (a) Find the linear interpolation polynomial $P_1(x)$ using the nodes $x_0 = -1$ and $x_1 = 0$.
- (b) Find the quadratic interpolation polynomial $P_2(x)$ using the nodes $x_0 = -1$, $x_1 = 0$, and $x_2 = 1$.
- (c) Find the cubic interpolation polynomial $P_3(x)$ using the nodes $x_0 = -1$, $x_1 = 0$, $x_2 = 1$, and $x_3 = 2$.
- (d) Find the linear interpolation polynomial $P_1(x)$ using the nodes $x_0 = 1$ and $x_1 = 2$.
- (e) Find the quadratic interpolation polynomial $P_2(x)$ using the nodes $x_0 = 0$, $x_1 = 1$, and $x_2 = 2$.
According to the Lagrange form of the interpolating polynomial,
$$P_N(x)=\sum^{N}_{k=0}y_k L_{N,k}(x),$$
where
$$L_{N,k}(x)=\frac{\prod^{N}_{j=0,\,j\neq k}(x-x_j)}{\prod^{N}_{j=0,\,j\neq k}(x_k-x_j)},$$
we obtain:
(a) $P_1(x)=x$; (b) $P_2(x)=x$; (c) $P_3(x)=x^3$; (d) $P_1(x)=7x-6$; (e) $P_2(x)=3x^2-2x$.
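Before the numerical verification below, a quick symbolic cross-check of these answers (a minimal sketch of the Lagrange formula above using SymPy; the node lists mirror parts (a)-(e)):

from sympy import symbols, expand

x = symbols('x')

def lagrange_poly(nodes, f):
    # build P_N(x) = sum_k f(x_k) * L_{N,k}(x) from the formula above
    N = len(nodes)
    P = 0
    for k in range(N):
        L_k = 1
        for j in range(N):
            if j != k:
                L_k = L_k * (x - nodes[j]) / (nodes[k] - nodes[j])
        P += f(nodes[k]) * L_k
    return expand(P)

f = lambda t: t**3
for nodes in ([-1, 0], [-1, 0, 1], [-1, 0, 1, 2], [1, 2], [0, 1, 2]):
    print(nodes, '->', lagrange_poly(nodes, f))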
import matplotlib.pyplot as plt
import numpy as np

class polynomial:
    """ an object representing a polynomial a[0] + a[1]x + ... + a[n]x^n """
    def __init__(self, a):
        self._a = a

    def evaluate(self, _x):
        # Horner's method
        _p = 0.
        for i in reversed(self._a):
            _p = _p * _x + i
        return _p

    def plot(self, x0=-1., x1=1.):
        _x = np.linspace(x0, x1, 100)
        _y = self.evaluate(_x)
        plt.plot(_x, _y)

def interpolate(points):
    N = len(points)        # N is the number of data points
    x = np.empty((N, N))   # create an N x N Vandermonde matrix
    # fill the matrix
    j = 0
    for p in points:
        for i in range(N):
            x[j, i] = pow(p[0], i)
        j += 1
    # solve the linear system with NumPy
    return np.linalg.solve(x, points[:, 1])

def f_x_3(x):
    # given the abscissas x, return the point pairs (x, y) with y = x^3
    y = [i**3 for i in x]
    points = np.array(list(zip(x, y)))
    return points

def p_str(a):
    # format the coefficient list a as a human-readable polynomial string
    p = str(a[0])
    i = 1
    while i < len(a):
        p = '{}x^{}'.format(a[i], i) + '+' + p
        i += 1
    print('Collocation polynomial is y=' + p)
    return p
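As an aside, the Vandermonde matrix assembled in `interpolate` can be produced in one call with `np.vander`; a small sketch (the sample `points` array here is just for illustration):

import numpy as np

points = np.array([[-1., -1.], [0., 0.], [1., 1.]])  # sample (x, y) pairs for illustration
N = len(points)
V = np.vander(points[:, 0], N, increasing=True)  # columns are x^0, x^1, ..., x^(N-1)
print(np.linalg.solve(V, points[:, 1]))          # same coefficients as interpolate(points)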
(a) Solution
points = f_x_3([-1,0])
a = interpolate(points) # get the coefficients of the interpolating polynomial
p = polynomial(a) # construct a polynomial object
p_str(list(a))
p.plot(-1,2)
plt.plot(points[:,0], points[:,1], 'r*')
plt.show()
Collocation polynomial is y=1.0x^1+0.0
(b) Solution
points = f_x_3([-1,0,1])
a = interpolate(points) # get the coefficients of the interpolating polynomial
p = polynomial(a) # construct a polynomial object
p_str(list(a))
p.plot(-1,2)
plt.plot(points[:,0], points[:,1], 'r*')
plt.show()
Collocation polynomial is y=-0.0x^2+1.0x^1+0.0
(c) Solution
points = f_x_3([-1,0,1,2])
a = interpolate(points) # get the coefficients of the interpolating polynomial
p = polynomial(a) # construct a polynomial object
p_str(list(a))
p.plot(-1,2)
plt.plot(points[:,0], points[:,1], 'r*')
plt.show()
Collocation polynomial is y=1.0x^3+-0.0x^2+0.0x^1+0.0
(d) Solution
points = f_x_3([1,2])
a = interpolate(points) # get the coefficients of the interpolating polynomial
p = polynomial(a) # construct a polynomial object
p_str(list(a))
p.plot(-1,2)
plt.plot(points[:,0], points[:,1], 'r*')
plt.show()
Collocation polynomial is y=7.0x^1+-6.0
(e) Solution
points = f_x_3([0,1,2])
a = interpolate(points) # get the coefficients of the interpolating polynomial
p = polynomial(a) # construct a polynomial object
p_str(list(a))
p.plot(-1,2)
plt.plot(points[:,0], points[:,1], 'r*')
plt.show()
Collocation polynomial is y=3.0x^2+-2.0x^1+0.0
- The measured temperatures during a 5-hour period in a suburb of Los Angeles on November 8 are given in the following table.

| Time, P.M. | Degrees Fahrenheit |
| --- | --- |
| 1 | 66 |
| 2 | 66 |
| 3 | 65 |
| 4 | 64 |
| 5 | 63 |
| 6 | 63 |

a. Use Program 4.2 to construct a Newton interpolatory polynomial for the data in the table.
b. Use Algorithm 4.1(iii) (in Numerical Methods Using MATLAB, 4th) to estimate the average temperature during the given 5-hour period.
c. Graph the data in the table and the polynomial from part (a) on the same coordinate system. Discuss the possible error that can result from using the polynomial in part (a) to estimate the average temperature.
- In Program 4.2 the matrix D is used to store the divided-difference table.

a. Verify that the following modification of Program 4.2 is an equivalent way to compute the Newton interpolatory polynomial.

for k=0:N
$\quad$A(k)=Y(k);
end
for j=1:N
$\quad$for k=N:-1:j
$\quad\quad$A(k)=(A(k)-A(k-1))/(X(k)-X(k-j));
$\quad$end
end

b. Repeat Problem 1 using this modification of Program 4.2.
import numpy as np
import math
from sympy import *
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline

def Newton_Poly_1(X, Y, x):
    ''' X : abscissas X_i
        Y : ordinates f(X_i)
        x : evaluation point (a number or a sympy symbol)
        D : divided-difference table, stored as a full n x n matrix
    '''
    p = Y[0]
    n = len(X)
    D = np.zeros((n, n), dtype=float)
    D[:, 0] = Y
    D_sumator = 1.0
    for j in range(1, n):
        D_sumator = D_sumator * (x - X[j-1])  # product term (x-X_0)...(x-X_{j-1})
        for k in range(n-1, j-1, -1):
            D[k, j] = (D[k, j-1] - D[k-1, j-1]) / (X[k] - X[k-j])
        p += D_sumator * D[j, j]
    return p  # Newton polynomial

def Newton_Poly_2(X, Y, x):
    ''' Same interface as Newton_Poly_1, but the divided differences are
        overwritten in place in a one-dimensional array, as in the
        modification of Program 4.2.
    '''
    p = Y[0]
    n = len(X)
    D = [i for i in Y]
    D_sumator = 1.0
    for j in range(1, n):
        D_sumator = D_sumator * (x - X[j-1])  # product term (x-X_0)...(x-X_{j-1})
        for k in range(n-1, j-1, -1):
            D[k] = (D[k] - D[k-1]) / (X[k] - X[k-j])
        p += D_sumator * D[j]
    return p  # Newton polynomial
X = [1,2,3,4,5,6]
Y = [66,66,65,64,63,63]
x = symbols('x')
print('Polynomial using former program is',simplify(Newton_Poly_1(X,Y,x)).evalf(3))
print('Polynomial using new program is',simplify(Newton_Poly_2(X,Y,x)).evalf(3))
Polynomial using former program is 0.0167*x**5 - 0.292*x**4 + 2.0*x**3 - 6.71*x**2 + 9.98*x + 61.0
Polynomial using new program is 0.0167*x**5 - 0.292*x**4 + 2.0*x**3 - 6.71*x**2 + 9.98*x + 61.0
def get_coef(polynomial):
    # peel off the coefficients a_0, a_1, ... of a sympy polynomial in x
    A = []
    i = 0
    while True:
        A.append(polynomial.evalf(subs={x: 0.}))
        polynomial -= A[i]
        polynomial /= x
        polynomial = simplify(polynomial)
        if polynomial == 0:  # nothing left: all coefficients extracted
            break
        i += 1
    return A
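# Note: the same coefficients can be read off directly with sympy's Poly
# (a sketch; all_coeffs() lists the leading coefficient first, hence the reversal):
# A = Poly(polynomial, x).all_coeffs()[::-1]   # a_0, a_1, ..., a_n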
def Integral(A, x):
    # evaluate the antiderivative of a_0 + a_1 t + ... + a_n t^n at t = x,
    # i.e. sum_k a_k x^(k+1)/(k+1), with a Horner-like recurrence
    n = len(A) - 1
    H = [0] * (n + 2)
    H[n+1] = A[n] / (n + 1)
    for k in range(n, 0, -1):
        H[k] = A[k-1] / k + H[k+1] * x
    H[0] = H[1] * x
    return H[0]
polynomial = simplify(Newton_Poly_1(X,Y,x))
A = get_coef(polynomial)
mean_T = (Integral(A,6)-Integral(A,1))/(6-1)
print(mean_T)
64.5000000000000
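As a cross-check on `get_coef` and `Integral` (a sketch assuming `X`, `Y`, `x`, and `Newton_Poly_1` from the cells above are still in scope), the average can also be computed by integrating the interpolant symbolically:

from sympy import integrate, simplify

poly = simplify(Newton_Poly_1(X, Y, x))
print(integrate(poly, (x, 1, 6)) / 5)  # should agree with mean_T above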
(c) Graph the data in the table and the polynomial from part (a) on the same coordinate system. Discuss the possible error that can result from using the polynomial in part (a) to estimate the average temperature.
X = [1,2,3,4,5,6]
Y = [66,66,65,64,63,63]
xs = np.linspace(np.min(X),np.max(X),1000,endpoint = True)
ys = []
for x in xs:
    ys.append(Newton_Poly_2(X, Y, x))
plt.title("Newton Polynomial Interpolation")
plt.plot(X,Y,'s',label = "original values")
plt.plot(xs,ys,'r',label = "interpolation values")
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc=1) # place the legend in the top right corner
- The fitted curve deviates somewhat from the data points near both ends of the interval, a mild instance of the Runge phenomenon. This happens because there are only six interpolation nodes and they are equally spaced rather than clustered toward the endpoints.
- Part (b) integrates the power-form coefficients of the degree-5 interpolating polynomial, so the error of the estimated average is governed by the accuracy of the interpolant. Compared with direct numerical integration of the data, this is easy to compute, but its precision cannot be improved independently of the interpolation.
- Since the data in this problem are symmetric, the integral of the interpolant happens to coincide with the plain average of the data points (64.5).
- (b) Section 4.4: P&A 1, 2
- In Program 4.2 the matrix D is used to store the divided-difference table.
(a) Verify that the following modification of Program 4.2 is an equivalent way to compute the Newton interpolatory polynomial.
for k=0:N
$\quad$A(k)=Y(k);
end
for j=1:N
$\quad$for k=N:-1:j
$\quad\quad$A(k)=(A(k)-A(k-1))/(X(k)-X(k-j));
$\quad$end
end
(b) Repeat Problem 1 using this modification of Program 4.2.
import numpy as np
import math
import matplotlib.pyplot as plt

#@brief: Newton interpolation with a one-dimensional divided-difference array
def Newton_inter2(x):
    # the nodes sr_k and ordinates sr_yk are taken from the enclosing scope
    n = len(sr_k)
    D = np.zeros(n)
    D[0] = sr_yk[0]
    i = 0
    while i < n:
        j = n - 1
        while j > i:
            if i == 0:
                D[j] = (sr_yk[j] - sr_yk[j-1]) / (sr_k[j] - sr_k[j-1])
            else:
                D[j] = (D[j] - D[j-1]) / (sr_k[j] - sr_k[j-1-i])
            j -= 1
        i += 1
    result = D[0]
    Wa = 1.0
    for a in range(1, n):
        Wa *= (x - sr_k[a-1])
        result += D[a] * Wa
    return result
#@brief: evaluate the Newton interpolant at sample points
sr_k = np.linspace(0, 4, num=5)
sr_yk = [math.cos(i) for i in sr_k]  # the data points (k, yk) for f(x) = cos(x)
k = np.linspace(0, 4, num=41)
yk = [Newton_inter2(i) for i in k]   # dense grid for a smooth plot
x = [0.5, 0.75, 1.0, 1.25, 2.0]
y = [Newton_inter2(i) for i in x]    # the requested interpolated values (x, y)
print(y)
#draw the figure
plt.title("Newton Interpolation")
plt.plot(sr_k, sr_yk, linestyle='', marker='o', color='b', label="original values")  # blue markers: the original values
plt.plot(k, yk, linestyle='--', color='r', label='Interpolation line P_5')           # red dashed line: the interpolant
plt.plot(x, y, marker='o', color='g', label='Interpolation values P_5')              # green markers: the interpolated values
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc=1)
[0.9009455435886337, 0.7438761664112732, 0.5403023058681398, 0.3073990768640172, -0.41614683654714235]
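A quick error check of these values against the true function (a sketch assuming `Newton_inter2` and the node setup above are in scope):

for t in [0.5, 0.75, 1.0, 1.25, 2.0]:
    print(t, abs(Newton_inter2(t) - math.cos(t)))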
In problems a. through f., use Program 4.3 to compute the coefficients of the Chebyshev interpolating polynomial on $[-1, 1]$:

a. $e^x$; b. $\sin x$; c. $\cos x$; d. $\ln(x+2)$; e. $(x+2)^{1/2}$; f. $(x+2)^{x+2}$.
import matplotlib.pyplot as plt
import numpy as np
import math
from numpy.polynomial import Polynomial, Chebyshev

def Cheby(a, b, n, fun):
    # coefficients c_0..c_n of the Chebyshev interpolant of fun on [a, b];
    # fun is a string such as 'np.exp(x)' evaluated at the Chebyshev nodes
    d = np.pi/(2*n+2)
    C = np.zeros(n+1)
    X = np.zeros(n+1)
    for k in range(n+1):
        X[k] = np.cos((2*k+1)*d)   # Chebyshev nodes on [-1, 1]
    X = (b-a)*X/2 + (a+b)/2        # map the nodes to [a, b]
    x = X
    Y = eval(fun)
    for k in range(n+1):
        z = (2*k+1)*d
        for j in range(n+1):
            C[j] += Y[k]*math.cos(j*z)
    C = 2*C/(n+1)
    C[0] = C[0]/2
    return C

def YChebyshev(n, x, C_value):
    # evaluate sum_{k<n} C_value[k] * T_k(t) at each t in x (t in [-1, 1])
    y = []
    for t in x:
        s = 0.0
        for k in range(n):
            s += C_value[k]*np.cos(k*np.arccos(t))
        y.append(s)
    return y
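The `Chebyshev` class imported above is never used, but it offers a convenient cross-check (a sketch; `Chebyshev.interpolate` also interpolates at Chebyshev points of the first kind, so on $[-1, 1]$ its coefficients should agree closely with `Cheby`):

deg = 3
print(Cheby(-1, 1, deg, 'np.exp(x)'))
print(Chebyshev.interpolate(np.exp, deg, domain=[-1, 1]).coef)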
#(1)For exp(x)
a = -1
b = 1
n = 4
C_value = Cheby(a,b,n-1,'np.exp(x)')
x = np.linspace(a, b, num = 20)
y = YChebyshev(n, x, C_value)
#Draw the Chebyshev Interpolation figure
plt.title("Chebyshev Interpolation")
plt.plot(x,np.exp(x),'b',label="original values") # blue line: the original function
plt.plot(x,y,'ro',label='Interpolation values P_4') # red dots: the interpolated values
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc=2) # legend in the upper left corner
#(2)For sin(x)
a = -1
b = 1
n = 5
C_value = Cheby(a,b,n-1,'np.sin(x)')
x = np.linspace(a, b, num = 20)
y = YChebyshev(n, x, C_value)
#Draw the Chebyshev Interpolation figure
plt.title("Chebyshev Interpolation")
plt.plot(x,np.sin(x),'b',label="original values") # blue line: the original function
plt.plot(x,y,'ro',label='Interpolation values P_5') # red dots: the interpolated values
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc=2) # legend in the upper left corner
#(3)For cos(x)
a = -1
b = 1
n = 6
C_value = Cheby(a,b,n-1,'np.cos(x)')
x = np.linspace(a, b, num = 20)
y = YChebyshev(n, x, C_value)
#Draw the Chebyshev Interpolation figure
plt.title("Chebyshev Interpolation")
plt.plot(x,np.cos(x),'b',label="original values") # blue line: the original function
plt.plot(x,y,'ro',label='Interpolation values P_6') # red dots: the interpolated values
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc=8) # legend at the lower center
#(4)For ln(x+2)
a = -1
b = 1
n = 7
C_value = Cheby(a,b,n-1,'np.log(x+2)')
x = np.linspace(a, b, num = 20)
y = YChebyshev(n, x, C_value)
#Draw the Chebyshev Interpolation figure
plt.title("Chebyshev Interpolation")
plt.plot(x,np.log(x+2),'b',label="original values") # blue line: the original function
plt.plot(x,y,'ro',label='Interpolation values P_7') # red dots: the interpolated values
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc=2) # legend in the upper left corner
#(5)For (x+2)^(1/2)
a = -1
b = 1
n = 7
C_value = Cheby(a,b,n-1,'np.power((x+2),(1/2))')
x = np.linspace(a, b, num = 20)
y = YChebyshev(n, x, C_value)
#Draw the Chebyshev Interpolation figure
plt.title("Chebyshev Interpolation")
plt.plot(x,np.power((x+2),(1/2)),'b',label="original values") # blue line: the original function
plt.plot(x,y,'ro',label='Interpolation values P_7') # red dots: the interpolated values
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc=2) # legend in the upper left corner
#(6)For (x+2)^(x+2)
a = -1
b = 1
n = 7
C_value = Cheby(a,b,n-1,'np.power((x+2),(x+2))')
x = np.linspace(a, b, num = 20)
y = YChebyshev(n, x, C_value)
#Draw the Chebyshev Interpolation figure
plt.title("Chebyshev Interpolation")
plt.plot(x,np.power((x+2),(x+2)),'b',label="original values") # blue line: the original function
plt.plot(x,y,'ro',label='Interpolation values P_7') # red dots: the interpolated values
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc=2) # legend in the upper left corner
For each set of data, find the least-squares curve:

- (a) $f(x) = Ce^{Ax}$, by using the change of variables $X = x$, $Y = \ln(y)$, and $C = e^B$, from Table 5.6 (in Numerical Methods Using MATLAB, 4th), to linearize the data points: taking logarithms of $y = Ce^{Ax}$ gives $\ln y = Ax + \ln C$, i.e., $Y = AX + B$ with $B = \ln C$.
- (b) $f(x) = (Ax + B)^{-2}$, by using the change of variables $X = x$ and $Y = y^{-1/2}$, from Table 5.6, to linearize the data points.
- (c) Use $E_2(f)$ to determine which curve gives the best fit.

(i)

| $x_k$ | $y_k$ |
| --- | --- |
| -1 | 13.45 |
| 0 | 3.01 |
| 1 | 0.67 |
| 2 | 0.15 |

(ii)

| $x_k$ | $y_k$ |
| --- | --- |
| -1 | 13.65 |
| 0 | 1.38 |
| 1 | 0.49 |
| 3 | 0.15 |
#for (a)
import numpy as np
import matplotlib.pyplot as plt
import math

def linear_regression(x, y):
    # solve the normal equations for the least-squares line y = c0 + c1*x;
    # returns (c0, c1) = (intercept, slope)
    N = len(x)
    sumx = sum(x)
    sumy = sum(y)
    sumx2 = sum(x**2)
    sumxy = sum(x*y)
    A = np.array([[N, sumx], [sumx, sumx2]])
    b = np.array([sumy, sumxy])
    return np.linalg.solve(A, b)
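# For reference, np.polyfit solves the same least-squares line; note it returns
# (slope, intercept), the reverse of linear_regression (a sketch, not used below):
# slope, intercept = np.polyfit(x, y, 1)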
#(i)
xi = np.array([-1,0,1,2])
Xi = xi
yi = np.array([13.45,3.01,0.67,0.15])
Yi = [np.log(i) for i in yi]
B ,A = linear_regression(Xi, Yi)
_Xi = [-1, 2]
_Yi = [B + A * x for x in _Xi]
_xi = np.linspace(-1,2,num = 50)
_yi = [math.exp(B + A * x) for x in _xi]
print("(i)")
plt.plot(xi, yi, 'yo')
plt.plot(_xi, _yi, 'g', label= 'Fit exponential function')
plt.plot(Xi, Yi, 'ro')
plt.plot(_Xi, _Yi, 'b', label = 'Fit linearized function')
plt.legend()
plt.show()
print("The Fit linearized function Y = %f %fX" % (B,A))
print("The fit exponential function y = %f *exp(%fx)" % (math.exp(B),A))
#(ii)
xii = np.array([-1,0,1,3])
Xii = xii
yii = np.array([13.65,1.38,0.49,0.15])
Yii = [np.log(i) for i in yii]
D ,C = linear_regression(Xii, Yii)
_Xii = [-1, 3]
_Yii = [D + C * x for x in _Xii]
_xii = np.linspace(-1,3,num = 50)
_yii = [math.exp(D + C * x) for x in _xii]
print("\n(ii)")
plt.plot(xii, yii, 'yo')
plt.plot(_xii, _yii, 'g', label= 'Fit exponential function')
plt.plot(Xii, Yii, 'ro')
plt.plot(_Xii, _Yii, 'b', label = 'Fit linearized function')
plt.legend()
plt.show()
print("The Fit linearized function Y = %f %fX" % (D,C))
print("The fit exponential function y = %f *exp(%fx)" % (math.exp(D),C))
(i)
The Fit linearized function Y = 1.100366 -1.499071X
The fit exponential function y = 3.005266 *exp(-1.499071x)
(ii)
The Fit linearized function Y = 0.875264 -1.058567X
The fit exponential function y = 2.399508 *exp(-1.058567x)
#for (b)
#(i)
xi = np.array([-1,0,1,2])
Xi = xi
yi = np.array([13.45,3.01,0.67,0.15])
Yi = [math.pow(i, -0.5) for i in yi]
B ,A = linear_regression(Xi, Yi)
_Xi = [-1, 2]
_Yi = [B + A * x for x in _Xi]
_xi = np.linspace(-1,2,num = 50)
_yi = [math.pow(B + A * x, -2) for x in _xi]
print("(i)")
plt.plot(xi, yi, 'yo')
plt.plot(_xi, _yi, 'g', label='Fit power function')
plt.plot(Xi, Yi, 'ro')
plt.plot(_Xi, _Yi, 'b', label = 'Fit linearized function')
plt.legend()
plt.show()
print("The Fit linearized function Y = %f + %fX" % (B,A))
print("The fit exponential function y = (%f + %fx)^(-2)" % (B,A))
#(ii)
xii = np.array([-1,0,1,3])
Xii = xii
yii = np.array([13.65,1.38,0.49,0.15])
Yii = [math.pow(i, -0.5) for i in yii]
D ,C = linear_regression(Xii, Yii)
_Xii = [-1, 3]
_Yii = [D + C * x for x in _Xii]
_xii = np.linspace(-1,3,num = 50)
_yii = [math.pow(D + C * x,-2) for x in _xii]
print("\n(ii)")
plt.plot(xii, yii, 'yo')
plt.plot(_xii, _yii, 'g', label='Fit power function')
plt.plot(Xii, Yii, 'ro')
plt.plot(_Xii, _Yii, 'b', label = 'Fit linearized function')
plt.legend()
plt.show()
print("The Fit linearized function Y = %f + %fX" % (D,C))
print("The fit exponential function y = (%f + %fx)^(-2)" % (D,C))
(i)
The Fit linearized function Y = 0.784523 + 0.757326X
The fit power function y = (0.784523 + 0.757326x)^(-2)
(ii)
The Fit linearized function Y = 0.849877 + 0.577658X
The fit power function y = (0.849877 + 0.577658x)^(-2)
#(c)
def get_rmse(records_real, records_predict):
    # root-mean-square error E_2 between the data and the fitted values
    if len(records_real) == len(records_predict):
        return math.sqrt(sum([(x - y) ** 2 for x, y in zip(records_real, records_predict)]) / len(records_real))
    else:
        return None

def compare(a, b):
    print('(a) the fitting root-mean-square error is %f' % a)
    print('(b) the fitting root-mean-square error is %f' % b)
    if a < b:
        print('%f < %f' % (a, b))
        print('The method of (a) gives the better fit.\n')
    if a > b:
        print('%f > %f' % (a, b))
        print('The method of (b) gives the better fit.\n')
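# Equivalently, the same E_2 value can be computed with NumPy vectorization
# (a small sketch, not used by the comparison below):
def get_rmse_np(real, pred):
    real, pred = np.asarray(real), np.asarray(pred)
    return float(np.sqrt(np.mean((real - pred) ** 2)))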
xi = np.array([-1,0,1,2])
yi = np.array([13.45,3.01,0.67,0.15])
Yi = [np.log(i) for i in yi]
B ,A = linear_regression(xi, Yi)
_yi = [math.exp(B + A * x) for x in xi]
YI = [math.pow(i, -0.5) for i in yi]
B ,A = linear_regression(xi, YI)
_yI = [math.pow(B + A * x, -2) for x in xi]
mse_ai = get_rmse(yi, _yi)
mse_bi = get_rmse(yi, _yI)
print('For(i):')
compare(mse_ai,mse_bi)
xii = np.array([-1,0,1,3])
yii = np.array([13.65,1.38,0.49,0.15])
Yii = [np.log(i) for i in yii]
B ,A = linear_regression(xii, Yii)
_yii = [math.exp(B + A * x) for x in xii]
YII = [math.pow(i, -0.5) for i in yii]
B ,A = linear_regression(xii, YII)
_yII = [math.pow(B + A * x, -2) for x in YII]
mse_aii = get_rmse(yii, _yii)
mse_bii = get_rmse(yii, _yII)
print('For(ii):')
compare(mse_aii,mse_bii)
For(i):
(a) the fitting root-mean-square error is 0.003933
(b) the fitting root-mean-square error is 669.221864
0.003933 < 669.221864
The method of (a) gives the better fit.

For(ii):
(a) the fitting root-mean-square error is 3.409785
(b) the fitting root-mean-square error is 0.077670
3.409785 > 0.077670
The method of (b) gives the better fit.
The least-squares plane

The least-squares plane fits the data with the model
$$
z = Ax + By + C. \tag{1}
$$
The sum of the squared residuals is
$$
E = \sum_{k=1}^{N}(Ax_k + By_k + C - z_k)^2. \tag{2}
$$
Setting the partial derivatives to zero gives
$$
\begin{eqnarray}
\frac{\partial E}{\partial A} &=& 2\sum_{k=1}^{N}x_k(Ax_k + By_k + C - z_k) = 0, \tag{3}\\
\frac{\partial E}{\partial B} &=& 2\sum_{k=1}^{N}y_k(Ax_k + By_k + C - z_k) = 0, \tag{4}\\
\frac{\partial E}{\partial C} &=& 2\sum_{k=1}^{N}(Ax_k + By_k + C - z_k) = 0. \tag{5}
\end{eqnarray}
$$
Simplifying equations (3)-(5) yields the normal equations:
$$
\begin{eqnarray}
\Bigl(\sum_{k=1}^{N}x_k^2\Bigr)A + \Bigl(\sum_{k=1}^{N}x_k y_k\Bigr)B + \Bigl(\sum_{k=1}^{N}x_k\Bigr)C &=& \sum_{k=1}^{N}x_k z_k, \tag{6}\\
\Bigl(\sum_{k=1}^{N}x_k y_k\Bigr)A + \Bigl(\sum_{k=1}^{N}y_k^2\Bigr)B + \Bigl(\sum_{k=1}^{N}y_k\Bigr)C &=& \sum_{k=1}^{N}y_k z_k, \tag{7}\\
\Bigl(\sum_{k=1}^{N}x_k\Bigr)A + \Bigl(\sum_{k=1}^{N}y_k\Bigr)B + NC &=& \sum_{k=1}^{N}z_k. \tag{8}
\end{eqnarray}
$$
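A minimal sketch of solving the normal equations (6)-(8) with NumPy (the data array here is hypothetical, just to exercise the formulas):

import numpy as np

# hypothetical data points (x_k, y_k, z_k); these lie exactly on z = 1 + x + 2y
pts = np.array([[0., 0., 1.], [1., 0., 2.], [0., 1., 3.], [1., 1., 4.], [2., 1., 5.]])
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
N = len(pts)

# left-hand side of (6)-(8) and the right-hand side
M = np.array([[np.sum(x*x), np.sum(x*y), np.sum(x)],
              [np.sum(x*y), np.sum(y*y), np.sum(y)],
              [np.sum(x),   np.sum(y),   N        ]])
r = np.array([np.sum(x*z), np.sum(y*z), np.sum(z)])

A, B, C = np.linalg.solve(M, r)
print(A, B, C)  # coefficients of the least-squares plane z = Ax + By + C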
Supplement:
Use Definition 5.5 (in Numerical Methods Using MATLAB, 4th) to establish the formula
$$
tB_{i,N}(t) = \frac{i+1}{N+1}B_{i+1,N+1}(t).
$$
By Definition 5.5, $B_{i,N}(t)=\binom{N}{i}t^{i}(1-t)^{N-i}$. Multiplying by $t$,
$$
tB_{i,N}(t)=\binom{N}{i}t^{i+1}(1-t)^{(N+1)-(i+1)},
$$
and since
$$
\frac{i+1}{N+1}\binom{N+1}{i+1}=\frac{i+1}{N+1}\cdot\frac{(N+1)!}{(i+1)!\,(N-i)!}=\frac{N!}{i!\,(N-i)!}=\binom{N}{i},
$$
it follows that
$$
tB_{i,N}(t)=\frac{i+1}{N+1}\binom{N+1}{i+1}t^{i+1}(1-t)^{(N+1)-(i+1)}=\frac{i+1}{N+1}B_{i+1,N+1}(t).
$$
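A symbolic spot-check of this identity (a sketch using SymPy; the helper `B` below implements Definition 5.5):

from sympy import symbols, binomial, simplify, Rational

t = symbols('t')

def B(i, N):
    # Bernstein basis polynomial B_{i,N}(t) from Definition 5.5
    return binomial(N, i) * t**i * (1 - t)**(N - i)

for N in range(1, 5):
    for i in range(N + 1):
        assert simplify(t * B(i, N) - Rational(i + 1, N + 1) * B(i + 1, N + 1)) == 0
print("identity verified for N = 1..4")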