# MATH 6601 - Project 1 Report
Author: Zhe Yuan (yuan.1435)
Date: Sep 2025
### 1. Projections
**(a)**
The projection of vector $b$ onto the column space of a full-rank matrix $A$ is given by $p=A(A^*A)^{-1}A^*b$. A function `proj(A, b)` was implemented based on this formula (see `Project1_Q1.py`). For $\epsilon = 1$, the projection is:
`p ≈ [1.85714286, 1.0, 3.14285714, 1.28571429, 3.85714286]`
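
A minimal sketch of what such a function can look like with NumPy (the actual code is in `Project1_Q1.py`; the explicit rank check here is inferred from the `ValueError` reported in part (b)):

```python
import numpy as np

def proj(A, b):
    """Project b onto col(A) via the normal equations: p = A (A^*A)^{-1} A^* b."""
    if np.linalg.matrix_rank(A) < A.shape[1]:
        raise ValueError("Matrix A must be full rank")
    # Solve (A^* A) x = A^* b rather than forming the inverse explicitly.
    x = np.linalg.solve(A.conj().T @ A, A.conj().T @ b)
    return A @ x
```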
**(b)**

As $\epsilon \rightarrow 0$, the normal equation method becomes numerically unstable, since forming $A^*A$ squares the condition number of $A$. A more robust SVD-based method, `proj_SVD(A, b)`, was implemented to handle this (see `Project1_Q1.py`).
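
A sketch of the SVD-based variant (the rank tolerance below is an assumption, not taken from `Project1_Q1.py`):

```python
import numpy as np

def proj_SVD(A, b, tol=1e-12):
    """Project b onto col(A) using an orthonormal basis taken from the SVD."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))  # numerical rank of A
    Ur = U[:, :r]                    # orthonormal basis for col(A)
    return Ur @ (Ur.conj().T @ b)
```

Because the columns of `Ur` are orthonormal, no linear system involving $A^*A$ has to be solved, which is why this variant stays stable as $\epsilon \rightarrow 0$.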
Results:

| $\epsilon$ | Normal Equation Method `proj(A, b)` | SVD Method `proj_SVD(A, b)` | Difference (Normal Eq - SVD) |
|:---:|:---|:---|:---|
| **1.0** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[ 1.78e-15 -2.22e-15 -8.88e-16 8.88e-16 0.00e+00]` |
| **0.1** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[ 7.28e-14 -4.44e-16 -2.66e-14 -1.62e-14 -5.28e-14]` |
| **0.01** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[-5.45e-12 -7.28e-12 -3.45e-13 -4.24e-12 -4.86e-12]` |
| **1e-4** | `[1.85714297 1.00000012 3.14285716 1.28571442 3.85714302]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.11e-07 1.19e-07 1.92e-08 1.36e-07 1.67e-07]` |
| **1e-8** | **Error:** `LinAlgError: Singular matrix` | `[1.85714286 1. 3.14285714 1.28571428 3.85714286]` | `Could not compute difference due to previous error.` |
| **1e-16**| **Error:** `ValueError: Matrix A must be full rank` | `[1.81820151 1. 3.18179849 2.29149804 2.89030045]` | `Could not compute difference due to previous error.` |
| **1e-32**| **Error:** `ValueError: Matrix A must be full rank` | `[2. 1. 3. 2.5 2.5]` | `Could not compute difference due to previous error.` |
Numerical experiments show that as $\epsilon$ becomes small, the difference between the two methods increases. When $\epsilon$ becomes very small (e.g., 1e-8), the normal equation method fails while the SVD method continues to provide a stable solution.
***

### 2. QR Factorization
**(a, b, c)**
Four QR factorization algorithms were implemented to find an orthonormal basis $Q$ for the column space of the $n \times n$ Hilbert matrix (for $n=2$ to $20$):
1. Classical Gram-Schmidt (CGS, `cgs_q`)
2. Modified Gram-Schmidt (MGS, `mgs_q`)
3. Householder QR (`householder_q`)
4. Classical Gram-Schmidt Twice (CGS-Twice, `cgs_twice_q`)

For implementation details, please see the `Project1_Q2.py` file; a brief sketch of the two Gram-Schmidt variants is given below.
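
A simplified sketch of the two Gram-Schmidt variants (stand-ins for the real implementations, not the report's exact code):

```python
import numpy as np

def cgs_q(A):
    """Classical Gram-Schmidt: every inner product uses the ORIGINAL column a_j."""
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        # Subtract the projections of the original a_j onto q_0..q_{j-1} all at once.
        v = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def mgs_q(A):
    """Modified Gram-Schmidt: each projection is subtracted from the RUNNING
    residual; this single change is what makes MGS more stable than CGS."""
    Q = A.astype(float)
    _, n = Q.shape
    for j in range(n):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        for k in range(j + 1, n):
            Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
    return Q
```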
**(d)**

The quality of the computed $Q$ from each method was assessed by plotting the orthogonality loss, $\log_{10}(\|I - Q^T Q\|_F)$, versus the matrix size $n$.
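
This metric is a one-line computation, e.g.:

```python
import numpy as np

def orthogonality_loss(Q):
    """log10 of ||I - Q^T Q||_F: close to -16 in double precision when Q is
    numerically orthonormal, near 0 when orthogonality is fully lost."""
    n = Q.shape[1]
    return np.log10(np.linalg.norm(np.eye(n) - Q.T @ Q, 'fro'))
```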

A summary of the loss and computation time for n = 4, 9, 11 is provided below.
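
One way such numbers could be gathered (a hypothetical loop reusing `cgs_q`, `mgs_q`, and `orthogonality_loss` from the sketches above; the report's actual measurements come from `Project1_Q2.py`):

```python
import time
from scipy.linalg import hilbert

for n in (4, 9, 11):
    H = hilbert(n)
    for name, factorize in {"CGS": cgs_q, "MGS": mgs_q}.items():
        t0 = time.perf_counter()
        Q = factorize(H)
        elapsed = time.perf_counter() - t0
        print(f"n={n:2d} {name}: loss={orthogonality_loss(Q):7.2f}, time={elapsed:.2e}s")
```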
**(e)**
The speed and accuracy of the four methods were compared based on the plots.

- **Accuracy:** Householder is the most accurate and stable. MGS is significantly better than CGS. CGS-Twice improves on CGS, achieving accuracy comparable to MGS. CGS is highly unstable and loses orthogonality quickly.
- **Speed:** CGS and MGS are the fastest. Householder is slightly slower, and CGS-Twice is the slowest, since it runs the CGS pass twice.
***

### 3. Least Squares Problem
**(a)**

The problem is formulated as $Ax = b$, where $A$ is the $(N+1) \times (n+1)$ Vandermonde matrix with entries $A_{jk} = t_j^k$, $x$ is the $(n+1) \times 1$ vector of unknown polynomial coefficients $[a_0, ..., a_n]^T$, and $b$ is the $(N+1) \times 1$ vector of known function values $f(t_j)$.
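
A sketch of the assembly with NumPy; the interval $[-5, 5]$ for the nodes $t_j$ is an assumption (the actual nodes come from the assignment statement):

```python
import numpy as np

N, n = 30, 15                             # sizes used in part (c)
t = np.linspace(-5.0, 5.0, N + 1)         # assumed equally spaced nodes t_j
b = 1.0 / (1.0 + t**2)                    # known values f(t_j)
A = np.vander(t, n + 1, increasing=True)  # A[j, k] = t_j**k, matching [a_0, ..., a_n]
```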
**(b)**

The least squares problem was solved with the Householder QR factorization implemented for Question 2 (`householder_lstsq` in `Project1_Q3.py`) to find the coefficient vector $x$.
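
A sketch of the QR-based solve, with `np.linalg.qr` standing in for the hand-written Householder routine:

```python
import numpy as np

def householder_lstsq(A, b):
    """Solve min ||Ax - b||_2 from the reduced QR factorization A = QR."""
    Q, R = np.linalg.qr(A)              # LAPACK's QR uses Householder reflections
    return np.linalg.solve(R, Q.T @ b)  # back-substitute R x = Q^T b
```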
**(c)**
The function $f(t) = 1 / (1 + t^2)$ and its polynomial approximation $p(t)$ were plotted for $N=30$ and $n=5, 15, 30$.
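
A sketch of how the curves might be produced, reusing `t`, `b`, and `householder_lstsq` from the sketches above (the dense plotting grid is an assumption):

```python
import numpy as np
import matplotlib.pyplot as plt

tt = np.linspace(-5.0, 5.0, 1000)              # dense evaluation grid
plt.plot(tt, 1.0 / (1.0 + tt**2), "k--", label="f(t)")
for n in (5, 15, 30):
    A = np.vander(t, n + 1, increasing=True)   # Vandermonde system from part (a)
    x = householder_lstsq(A, b)
    plt.plot(tt, np.polyval(x[::-1], tt), label=f"n={n}")  # polyval wants a_n first
plt.legend()
plt.show()
```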

When $n=15$, the approximation is excellent. When $n=30$, the polynomial interpolates the points but exhibits wild oscillations between them.
**(d)**

No, $p(t)$ will not converge to $f(t)$ as $N = n$ tends to infinity. Polynomial interpolation of this function on equally spaced nodes diverges (Runge's phenomenon), as demonstrated by the severe oscillations in the $n=30$ case.
**(e)**
The error between the computed and true coefficients was plotted against the polynomial degree $n$.


The error in the computed coefficients grows significantly as $n$ increases, reflecting the rapidly worsening conditioning of the Vandermonde matrix.