fix project 1 q1 bug
@@ -16,15 +16,15 @@ As $\epsilon \rightarrow 0$, the normal equation method becomes numerically unstable. A
Results:

| $\epsilon$ | Normal Equation Method `proj(A, b)` | SVD Method `proj_SVD(A, b)` | Difference (Normal Eq - SVD) |
|:---:|:---|:---|:---|
| **1.0** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[ 1.78e-15 -2.22e-15 -8.88e-16 8.88e-16 0.00e+00]` |
| **0.1** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[ 7.28e-14 -4.44e-16 -2.66e-14 -1.62e-14 -5.28e-14]` |
| **0.01** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[-5.45e-12 -7.28e-12 -3.45e-13 -4.24e-12 -4.86e-12]` |
| **1e-4** | `[1.85714297 1.00000012 3.14285716 1.28571442 3.85714302]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.11e-07 1.19e-07 1.92e-08 1.36e-07 1.67e-07]` |
| **1e-8** | **Error:** `LinAlgError: Singular matrix` | `[1.85714286 1. 3.14285714 1.28571428 3.85714286]` | `Could not compute difference due to previous error.` |
| **1e-16**| **Error:** `ValueError: Matrix A must be full rank` | `[1.81820151 1. 3.18179849 2.29149804 2.89030045]` | `Could not compute difference due to previous error.` |
| **1e-32**| **Error:** `ValueError: Matrix A must be full rank` | `[2. 1. 3. 2.5 2.5]` | `Could not compute difference due to previous error.` |
| **1.0** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[ 2.22e-16 1.55e-15 -4.44e-16 2.22e-16 -8.88e-16]` |
| **0.1** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[ 8.08e-14 5.87e-14 -8.88e-16 4.82e-14 -1.38e-14]` |
| **0.01** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[-4.77e-12 -1.84e-14 5.54e-13 -4.00e-12 1.50e-12]` |
| **1e-4** | `[1.85714282 1. 3.14285716 1.28571427 3.85714291]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[-3.60e-08 9.11e-13 1.94e-08 -1.20e-08 4.80e-08]` |
| **1e-8** | `[-1.87500007 0.99999993 -3.12499997 -2.62500007 3.49999996]` | `[1.85714286 1. 3.14285714 1.28571427 3.85714286]` | `[-3.73e+00 -7.45e-08 -6.27e+00 -3.91e+00 -3.57e-01]` |
| **1e-16**| **Error:** `ValueError: Matrix A must be full rank` | `[3.4 1. 1.6 1.8 1.8]` | `Could not compute difference due to previous error.` |
| **1e-32**| **Error:** `ValueError: Matrix A must be full rank` | `[3.4 1. 1.6 1.8 1.8]` | `Could not compute difference due to previous error.` |

Numerical experiments show that as $\epsilon$ becomes small, the difference between the two methods increases. When $\epsilon$ becomes very small (e.g., 1e-8), the normal equation method fails while the SVD method continues to provide a stable solution.
Numerical experiments show that as $\epsilon$ becomes small, the difference between the two methods increases. When $\epsilon$ becomes very small (e.g., 1e-8), the normal equation method no longer returns a valid result, and it fails outright once $\epsilon$ decreases further (e.g., 1e-16), while the SVD method continues to provide a stable solution.
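To make the failure mode concrete, here is a minimal, self-contained sketch of the experiment behind the table. It is not part of this commit: `proj_normal_eq`, `proj_svd`, and `make_A` are illustrative stand-ins for the repository's `proj`, `proj_SVD`, and `build_A`, and the nearly rank-deficient test matrix is an assumption, so the printed numbers will differ from those above.

```python
import numpy as np

def proj_normal_eq(A, b):
    # Normal-equation projector A (A^T A)^{-1} A^T b: forming A^T A squares
    # the condition number, so this breaks down for nearly rank-deficient A.
    return A @ np.linalg.solve(A.T @ A, A.T @ b)

def proj_svd(A, b):
    # SVD projector U_r U_r^T b, keeping only the left singular vectors whose
    # singular values sit above a relative tolerance.
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > s.max() * max(A.shape) * np.finfo(A.dtype).eps))
    return U[:, :r] @ U[:, :r].T @ b

def make_A(eps):
    # Illustrative 5x3 test matrix (Laeuchli-type); the repository's build_A may differ.
    A = np.zeros((5, 3))
    A[0, :] = 1.0
    A[1, 0] = A[2, 1] = A[3, 2] = eps
    return A

b = np.arange(1.0, 6.0)
for eps in (1e-2, 1e-4, 1e-8):
    A = make_A(eps)
    try:
        gap = np.linalg.norm(proj_normal_eq(A, b) - proj_svd(A, b))
        print(f"eps={eps:g}: |normal eq - SVD| = {gap:.2e}")
    except np.linalg.LinAlgError as err:
        print(f"eps={eps:g}: normal equations failed: {err}")
```

With this particular test matrix, $A^\top A = J + \epsilon^2 I$ (where $J$ is the all-ones matrix), so at $\epsilon = 10^{-8}$ the term $\epsilon^2 = 10^{-16}$ is lost to rounding and the normal equations see an exactly singular matrix. That is the same kind of breakdown the table reports: an exception or a wildly wrong projection, depending on how $A^\top A$ rounds, while the SVD projector is unaffected.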

***

@@ -4,8 +4,8 @@ Projection of b onto the column space of A(1): (Using proj function)
Projection of b onto the column space of A(1): (Using proj_SVD function)
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Difference between the two methods:
[ 1.77635684e-15 -2.22044605e-15 -8.88178420e-16 8.88178420e-16
0.00000000e+00]
[ 2.22044605e-16 1.55431223e-15 -4.44089210e-16 2.22044605e-16
-8.88178420e-16]

Question 1(b):

@@ -15,8 +15,8 @@ Projection of b onto the column space of A(1.0):
Projection of b onto the column space of A(1.0) using SVD:
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Difference between the two methods:
[ 1.77635684e-15 -2.22044605e-15 -8.88178420e-16 8.88178420e-16
0.00000000e+00]
[ 2.22044605e-16 1.55431223e-15 -4.44089210e-16 2.22044605e-16
-8.88178420e-16]

For ε = 0.1:
Projection of b onto the column space of A(0.1):
@@ -24,8 +24,8 @@ Projection of b onto the column space of A(0.1):
Projection of b onto the column space of A(0.1) using SVD:
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Difference between the two methods:
[ 7.28306304e-14 -4.44089210e-16 -2.66453526e-14 -1.62092562e-14
-5.28466160e-14]
[ 8.08242362e-14 5.87307980e-14 -8.88178420e-16 4.81836793e-14
-1.37667655e-14]

For ε = 0.01:
Projection of b onto the column space of A(0.01):
@@ -33,35 +33,37 @@ Projection of b onto the column space of A(0.01):
Projection of b onto the column space of A(0.01) using SVD:
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Difference between the two methods:
[-5.45297141e-12 -7.28239691e-12 -3.44613227e-13 -4.24371649e-12
-4.86499729e-12]
[-4.76907402e-12 -1.84297022e-14 5.53779245e-13 -4.00324218e-12
1.49613655e-12]

For ε = 0.0001:
Projection of b onto the column space of A(0.0001):
[1.85714297 1.00000012 3.14285716 1.28571442 3.85714302]
[1.85714282 1. 3.14285716 1.28571427 3.85714291]
Projection of b onto the column space of A(0.0001) using SVD:
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Difference between the two methods:
[1.11406275e-07 1.19209290e-07 1.91642426e-08 1.35516231e-07
1.67459071e-07]
[-3.59740875e-08 9.10937992e-13 1.94238092e-08 -1.19938544e-08
4.79721098e-08]

For ε = 1e-08:
LinAlgError for eps=1e-08: Singular matrix
Projection of b onto the column space of A(1e-08):
[-1.87500007 0.99999993 -3.12499997 -2.62500007 3.49999996]
Projection of b onto the column space of A(1e-08) using SVD:
[1.85714286 1. 3.14285714 1.28571428 3.85714286]
[1.85714286 1. 3.14285714 1.28571427 3.85714286]
Difference between the two methods:
Could not compute difference due to previous error.
[-3.73214294e+00 -7.45058057e-08 -6.26785711e+00 -3.91071435e+00
-3.57142909e-01]

For ε = 1e-16:
ValueError for eps=1e-16: Matrix A must be full rank.
Projection of b onto the column space of A(1e-16) using SVD:
[1.81820151 1. 3.18179849 2.29149804 2.89030045]
[3.4 1. 1.6 1.8 1.8]
Difference between the two methods:
Could not compute difference due to previous error.

For ε = 1e-32:
ValueError for eps=1e-32: Matrix A must be full rank.
Projection of b onto the column space of A(1e-32) using SVD:
[2. 1. 3. 2.5 2.5]
[3.4 1. 1.6 1.8 1.8]
Difference between the two methods:
Could not compute difference due to previous error.

@@ -38,9 +38,13 @@ def proj_SVD(A:np.ndarray, b:np.ndarray) -> np.ndarray:
    = U S V^* (V S^2 V^*)^(-1) V S U^* b
    = U S V^* V S^(-2) V^* V S U^* b
    = U U^* b
    If A = U S V^*, then the projection onto the column space of A is:
        proj_A(b) = U_r U_r^* b
    where U_r are the left singular vectors corresponding to nonzero singular values.
    """
    U_r = U[:, :np.linalg.matrix_rank(A)]  # Take only the relevant columns
    # Compute the projection using the SVD components
    projection = U @ U.conj().T @ b
    projection = U_r @ U_r.conj().T @ b
    return projection


def build_A(eps: float) -> np.ndarray:

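A note on the rank cut used in the fix: `np.linalg.matrix_rank` runs its own SVD and truncates at the default tolerance `S.max() * max(A.shape) * eps`. A small alternative sketch (a hypothetical helper, not what the commit does) reuses the singular values already computed for the projection, so the rank decision and the projection come from a single factorisation:

```python
import numpy as np

def proj_svd_tol(A: np.ndarray, b: np.ndarray, rtol=None) -> np.ndarray:
    """Project b onto the column space of A via one SVD.

    Hypothetical variant of proj_SVD: the numerical rank is taken from the
    same singular values used to build U_r, instead of a separate call to
    np.linalg.matrix_rank.
    """
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    if rtol is None:
        rtol = max(A.shape) * np.finfo(s.dtype).eps  # matrix_rank's default cutoff
    r = int(np.sum(s > rtol * s.max()))
    U_r = U[:, :r]  # left singular vectors spanning the numerical column space
    return U_r @ U_r.conj().T @ b
```

This should agree with the committed `proj_SVD` whenever `matrix_rank` and the explicit cutoff pick the same `r`; exposing `rtol` simply makes the $\epsilon$-dependent rank decision visible.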
@@ -38,26 +38,26 @@ As $\epsilon \rightarrow 0$, the normal equation method becomes numerically unst
\hline
\textbf{$\epsilon$} & \textbf{Normal Equation Method \texttt{proj(A,b)}} & \textbf{SVD Method \texttt{proj\_SVD(A,b)}} & \textbf{Difference (Normal Eq - SVD)} \\
\hline
\textbf{1.0} & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [ 1.78e-15 -2.22e-15 -8.88e-16 8.88e-16 0.00e+00] \\
\textbf{1.0} & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [ 2.22e-16 1.55e-15 -4.44e-16 2.22e-16 -8.88e-16] \\
\hline
\textbf{0.1} & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [ 7.28e-14 -4.44e-16 -2.66e-14 -1.62e-14 -5.28e-14] \\
\textbf{0.1} & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [ 8.08e-14 5.87e-14 -8.88e-16 4.82e-14 -1.38e-14] \\
\hline
\textbf{0.01} & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [-5.45e-12 -7.28e-12 -3.45e-13 -4.24e-12 -4.86e-12] \\
\textbf{0.01} & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [-4.77e-12 -1.84e-14 5.54e-13 -4.00e-12 1.50e-12] \\
\hline
\textbf{1e-4} & [1.85714297 1.00000012 3.14285716 1.28571442 3.85714302] & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [1.11e-07 1.19e-07 1.92e-08 1.36e-07 1.67e-07] \\
\textbf{1e-4} & [1.85714282 1. 3.14285716 1.28571427 3.85714291] & [1.85714286 1. 3.14285714 1.28571429 3.85714286] & [-3.60e-08 9.11e-13 1.94e-08 -1.20e-08 4.80e-08] \\
\hline
\textbf{1e-8} & \textbf{Error:} \texttt{LinAlgError: Singular matrix} & [1.85714286 1. 3.14285714 1.28571428 3.85714286] & Could not compute difference due to previous error. \\
\textbf{1e-8} & [-1.87500007 0.99999993 -3.12499997 -2.62500007 3.49999996] & [1.85714286 1. 3.14285714 1.28571427 3.85714286] & [-3.73e+00 -7.45e-08 -6.27e+00 -3.91e+00 -3.57e-01] \\
\hline
\textbf{1e-16} & \textbf{Error:} \texttt{ValueError: Matrix A must be full rank} & [1.81820151 1. 3.18179849 2.29149804 2.89030045] & Could not compute difference due to previous error. \\
\textbf{1e-16} & \textbf{Error:} \texttt{ValueError: Matrix A must be full rank} & [3.4 1. 1.6 1.8 1.8] & Could not compute difference due to previous error. \\
\hline
\textbf{1e-32} & \textbf{Error:} \texttt{ValueError: Matrix A must be full rank} & [2. 1. 3. 2.5 2.5] & Could not compute difference due to previous error. \\
\textbf{1e-32} & \textbf{Error:} \texttt{ValueError: Matrix A must be full rank} & [3.4 1. 1.6 1.8 1.8] & Could not compute difference due to previous error. \\
\hline
\end{tabular}
\caption{Comparison of results from the normal equation and SVD-based methods.}
\label{tab:proj_results}
\end{table}

\noindent Numerical experiments show that as $\epsilon$ becomes small, the difference between the two methods increases. When $\epsilon$ becomes very small (e.g., 1e-8), the normal equation method fails while the SVD method continues to provide a stable solution.
\noindent Numerical experiments show that as $\epsilon$ becomes small, the difference between the two methods increases. When $\epsilon$ becomes very small (e.g., 1e-8), the normal equation method no longer returns a valid result, and it fails outright once $\epsilon$ decreases further (e.g., 1e-16), while the SVD method continues to provide a stable solution.
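A one-line conditioning argument could back this observation up in the report (a suggested addition, not part of this commit): for a full-column-rank $A$, forming $A^{\top}A$ squares the condition number, so the normal equations exhaust double precision roughly once $\kappa_2(A)^2$ reaches about $10^{16}$, whereas the SVD projection $U_r U_r^{\top} b$ never forms $A^{\top}A$:
\[
  \kappa_2\left(A^{\top}A\right)
  = \kappa_2(A)^2
  = \left(\frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}\right)^{2}.
\]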

\section{QR Factorisation}