first commit

2025-09-26 22:16:05 -04:00
commit 4b5b5eeb4c
13 changed files with 664 additions and 0 deletions

85 project1/Project1.md Normal file

@@ -0,0 +1,85 @@
# MATH 6601 - Project 1 Report
Author: Zhe Yuan (yuan.1435)
Date: Sep 2025
### 1. Projections
**(a)**
The projection of vector $b$ onto the column space of a full-rank matrix $A$ is given by $p=A(A^*A)^{-1}A^*b$. A function `proj(A, b)` was implemented based on this formula (see `Project1_Q1.py`). For $\epsilon = 1$, the projection is:
`p ≈ [1.85714286, 1.0, 3.14285714, 1.28571429, 3.85714286]`
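As a quick sanity check (an assumed illustration on a random matrix, not part of the assignment), the residual $b - p$ must be orthogonal to every column of $A$:
```python
import numpy as np

# Assumed example: project b onto col(A) for a random full-rank A and
# verify that the residual b - p is orthogonal to the columns of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
p = A @ np.linalg.solve(A.T @ A, A.T @ b)  # p = A (A^T A)^{-1} A^T b
print(A.T @ (b - p))                       # ~ 0 up to rounding error
```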
**(b)**
As $\epsilon \rightarrow 0$, the normal equation method becomes numerically unstable, since forming $A^*A$ squares the condition number of $A$. A more robust SVD-based method, `proj_SVD(A, b)`, was implemented to handle this (see `Project1_Q1.py`).
Results:
| $\epsilon$ | Normal Equation Method `proj(A, b)` | SVD Method `proj_SVD(A, b)` | Difference (Normal Eq - SVD) |
|:---:|:---|:---|:---|
| **1.0** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[ 1.78e-15 -2.22e-15 -8.88e-16 8.88e-16 0.00e+00]` |
| **0.1** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[ 7.28e-14 -4.44e-16 -2.66e-14 -1.62e-14 -5.28e-14]` |
| **0.01** | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[-5.45e-12 -7.28e-12 -3.45e-13 -4.24e-12 -4.86e-12]` |
| **1e-4** | `[1.85714297 1.00000012 3.14285716 1.28571442 3.85714302]` | `[1.85714286 1. 3.14285714 1.28571429 3.85714286]` | `[1.11e-07 1.19e-07 1.92e-08 1.36e-07 1.67e-07]` |
| **1e-8** | **Error:** `LinAlgError: Singular matrix` | `[1.85714286 1. 3.14285714 1.28571428 3.85714286]` | `Could not compute difference due to previous error.` |
| **1e-16**| **Error:** `ValueError: Matrix A must be full rank` | `[1.81820151 1. 3.18179849 2.29149804 2.89030045]` | `Could not compute difference due to previous error.` |
| **1e-32**| **Error:** `ValueError: Matrix A must be full rank` | `[2. 1. 3. 2.5 2.5]` | `Could not compute difference due to previous error.` |
Numerical experiments show that the difference between the two methods grows as $\epsilon$ decreases. Once $\epsilon$ is very small (e.g., $10^{-8}$), the normal equation method fails outright, while the SVD method continues to return a stable solution.
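The squaring of the condition number is easy to observe directly; a minimal sketch with a hypothetical ill-conditioned matrix (not the assignment's $A(\epsilon)$):
```python
import numpy as np

# Hypothetical ill-conditioned matrix: forming the normal equations
# squares the condition number, so A^T A loses twice as many digits.
A = np.vander(np.linspace(0, 1, 8), 6, increasing=True)
print(np.linalg.cond(A))        # cond(A)
print(np.linalg.cond(A.T @ A))  # roughly cond(A)^2
```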
***
### 2. QR Factorization
**(a, b, c)**
Four QR factorization algorithms were implemented to find an orthonormal basis $Q$ for the column space of the $n \times n$ Hilbert matrix (for $n=2$ to $20$):
1. Classical Gram-Schmidt (CGS, `cgs_q`)
2. Modified Gram-Schmidt (MGS, `mgs_q`)
3. Householder QR (`householder_q`)
4. Classical Gram-Schmidt Twice (CGS-Twice, `cgs_twice_q`)
For implementation details, please see the `Project1_Q2.py` file.
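As an external reference point (an assumption of this write-up, not required by the assignment), LAPACK's Householder-based `np.linalg.qr` can be scored with the same loss on the same matrices:
```python
import numpy as np

# Build the n x n Hilbert matrix H[i, j] = 1 / (i + j + 1) and measure
# the orthogonality loss of LAPACK's QR for comparison.
n = 11
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
Q, _ = np.linalg.qr(H)
print(np.log10(np.linalg.norm(np.eye(n) - Q.T @ Q, ord='fro')))
```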
**(d)**
The quality of the computed $Q$ from each method was assessed by plotting the orthogonality loss, $\log_{10}(\|I - Q^T Q\|_F)$, versus the matrix size $n$.
![Orthogonality Loss Plot](Project1_A2d_orthogonality_loss.png)
A summary of the loss and computation time for n = 4, 9, 11 is provided below.
**(e)**
The speed and accuracy of the four methods were compared based on the plots.
![Time Taken Plot](Project1_A2d_time_taken.png)
- **Accuracy:** Householder is the most accurate and stable. MGS is significantly better than CGS. CGS-Twice improves on CGS, achieving accuracy comparable to MGS. CGS is highly unstable and loses orthogonality quickly.
- **Speed:** CGS and MGS are the fastest. Householder is slightly slower, and CGS-Twice is the slowest.
***
### 3. Least Squares Problem
**(a)**
The problem is formulated as $Ax = b$, where $A$ is an $(N+1) \times (n+1)$ Vandermonde matrix, $x$ is the $(n+1) \times 1$ vector of unknown polynomial coefficients $[a_0, ..., a_n]^T$, and $b$ is the $(N+1) \times 1$ vector of known function values $f(t_j)$.
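A minimal sketch of this construction using `np.vander` (the report's `build_A` in `Project1_Q3.py` builds the same matrix with an explicit loop):
```python
import numpy as np

# Vandermonde least-squares setup for N = 30 nodes and degree n = 5.
N, n = 30, 5
t = np.linspace(-5, 5, N + 1)             # equally spaced nodes t_0, ..., t_N
A = np.vander(t, n + 1, increasing=True)  # columns 1, t, t^2, ..., t^n
b = 1.0 / (1.0 + t**2)                    # b_j = f(t_j)
```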
**(b)**
The least squares problem was solved using the implemented Householder QR factorization (`householder_lstsq` in `Project1_Q3.py`, the same function in Question 2) to find the coefficient vector $x$.
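As a cross-check (assumed, not part of the assignment), the QR-based coefficients can be compared with NumPy's SVD-based solver on the same setup:
```python
import numpy as np

# Reference solve; householder_lstsq(A, b) from Project1_Q3.py should
# agree with x_ref to near machine precision at this modest size.
t = np.linspace(-5, 5, 31)                # N = 30
A = np.vander(t, 6, increasing=True)      # n = 5
b = 1.0 / (1.0 + t**2)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_ref)
```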
**(c)**
The function $f(t) = 1 / (1 + t^2)$ and its polynomial approximation $p(t)$ were plotted for $N=30$ and $n=5, 15, 30$.
![Least Squares Plot](Project1_A3c_least_squares_approximation.png)
When $n=15$, the approximation is excellent. When $n=30$, the polynomial interpolates the points but exhibits wild oscillations between them.
**(d)**
No, $p(t)$ will not converge to $f(t)$ as $N = n$ tends to infinity. This is the Runge phenomenon: polynomial interpolation of $1/(1+t^2)$ on equally spaced nodes diverges near the ends of the interval as the degree grows, as the severe oscillations in the $n=30$ case already demonstrate.
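A small experiment (an assumed illustration using `np.polyfit` for the interpolation, not the report's own code) makes the divergence visible:
```python
import numpy as np

# Interpolate f on N = n equally spaced nodes and track the worst-case
# error on a fine grid; it grows with n instead of shrinking.
f = lambda t: 1.0 / (1.0 + t**2)
tt = np.linspace(-5, 5, 2001)
for n in (10, 20, 30):
    t = np.linspace(-5, 5, n + 1)
    c = np.polyfit(t, f(t), n)  # degree-n interpolant through all nodes
    print(n, np.max(np.abs(np.polyval(c, tt) - f(tt))))
```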
**(e)**
The error between the computed and true coefficients was plotted against the polynomial degree $n$.
![Coefficient Error Plot](Project1_A3e_coefficient_error.png)
The error in the computed coefficients grows significantly as $n$ increases, reflecting the rapidly worsening conditioning of the Vandermonde matrix.
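The growth is consistent with the conditioning of the Vandermonde matrix, which can be checked directly (assumed illustration):
```python
import numpy as np

# Condition number of the (2n+1) x (n+1) Vandermonde matrix used in (e);
# its rapid growth with n drives the coefficient error.
for n in range(4, 21, 2):
    t = np.linspace(-5, 5, 2 * n + 1)     # N = 2n
    A = np.vander(t, n + 1, increasing=True)
    print(n, np.linalg.cond(A))
```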

67 project1/Project1_A1.txt Normal file

@@ -0,0 +1,67 @@
Question 1(a):
Projection of b onto the column space of A(1): (Using proj function)
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Projection of b onto the column space of A(1): (Using proj_SVD function)
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Difference between the two methods:
[ 1.77635684e-15 -2.22044605e-15 -8.88178420e-16 8.88178420e-16
0.00000000e+00]
Question 1(b):
For ε = 1.0:
Projection of b onto the column space of A(1.0):
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Projection of b onto the column space of A(1.0) using SVD:
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Difference between the two methods:
[ 1.77635684e-15 -2.22044605e-15 -8.88178420e-16 8.88178420e-16
0.00000000e+00]
For ε = 0.1:
Projection of b onto the column space of A(0.1):
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Projection of b onto the column space of A(0.1) using SVD:
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Difference between the two methods:
[ 7.28306304e-14 -4.44089210e-16 -2.66453526e-14 -1.62092562e-14
-5.28466160e-14]
For ε = 0.01:
Projection of b onto the column space of A(0.01):
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Projection of b onto the column space of A(0.01) using SVD:
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Difference between the two methods:
[-5.45297141e-12 -7.28239691e-12 -3.44613227e-13 -4.24371649e-12
-4.86499729e-12]
For ε = 0.0001:
Projection of b onto the column space of A(0.0001):
[1.85714297 1.00000012 3.14285716 1.28571442 3.85714302]
Projection of b onto the column space of A(0.0001) using SVD:
[1.85714286 1. 3.14285714 1.28571429 3.85714286]
Difference between the two methods:
[1.11406275e-07 1.19209290e-07 1.91642426e-08 1.35516231e-07
1.67459071e-07]
For ε = 1e-08:
LinAlgError for eps=1e-08: Singular matrix
Projection of b onto the column space of A(1e-08) using SVD:
[1.85714286 1. 3.14285714 1.28571428 3.85714286]
Difference between the two methods:
Could not compute difference due to previous error.
For ε = 1e-16:
ValueError for eps=1e-16: Matrix A must be full rank.
Projection of b onto the column space of A(1e-16) using SVD:
[1.81820151 1. 3.18179849 2.29149804 2.89030045]
Difference between the two methods:
Could not compute difference due to previous error.
For ε = 1e-32:
ValueError for eps=1e-32: Matrix A must be full rank.
Projection of b onto the column space of A(1e-32) using SVD:
[2. 1. 3. 2.5 2.5]
Difference between the two methods:
Could not compute difference due to previous error.

15 project1/Project1_A2.txt Normal file

@@ -0,0 +1,15 @@
Table of Orthogonality Loss and Time for n=4, 9, 11:
n Method Loss Time (s)
4 Classical Gram-Schmidt -10.31105413525148 4.2772293090820314e-05
4 Modified Gram-Schmidt -12.374458030887125 5.888938903808594e-05
4 Householder -15.351739155108316 0.00014197826385498047
4 Classical Gram-Schmidt Twice -15.570756316229907 8.988380432128906e-05
9 Classical Gram-Schmidt 0.3894252720607535 0.00028896331787109375
9 Modified Gram-Schmidt -5.257678985289507 0.00034956932067871095
9 Householder -14.715202433670639 0.0006194353103637695
9 Classical Gram-Schmidt Twice -8.398044187428138 0.0006073951721191406
11 Classical Gram-Schmidt 0.6094906217544358 0.0005226373672485351
11 Modified Gram-Schmidt -2.0059498550218353 0.0005770444869995118
11 Householder -14.69705544935381 0.0008681297302246093
11 Classical Gram-Schmidt Twice -1.7450868860162283 0.0010661602020263672

Binary image file not shown (74 KiB).

Binary image file not shown (87 KiB).

Binary image file not shown (71 KiB).

Binary image file not shown (38 KiB).

107 project1/Project1_Q1.py Normal file

@@ -0,0 +1,107 @@
import numpy as np
def proj(A:np.ndarray, b:np.ndarray) -> np.ndarray:
"""
Project vector b onto the column space of matrix A using the formula:
proj_A(b) = A(A^* A)^(-1) A^* b
Parameters:
A (np.ndarray): The matrix whose column space we are projecting onto.
b (np.ndarray): The vector to be projected.
Returns:
np.ndarray: The projection of b onto the column space of A.
"""
# Check that A has full column rank (so that A^* A is invertible)
if np.linalg.matrix_rank(A) < A.shape[1]:
raise ValueError("Matrix A must be full rank.")
# Compute the projection
AhA_inv = np.linalg.inv(A.conj().T @ A)
projection = A @ AhA_inv @ A.conj().T @ b
return projection
def proj_SVD(A:np.ndarray, b:np.ndarray) -> np.ndarray:
"""
Project vector b onto the column space of matrix A using SVD.
Parameters:
A (np.ndarray): The matrix whose column space we are projecting onto.
b (np.ndarray): The vector to be projected.
Returns:
np.ndarray: The projection of b onto the column space of A.
"""
# A = U S V^*
U, S, Vh = np.linalg.svd(A, full_matrices=False)
"""
proj_A(b) = A(A^* A)^(-1) A^* b
= U S V^* (V S^2 V^*)^(-1) V S U^* b
= U S V^* V S^(-2) V^* V S U^* b
= U U^* b
"""
# Compute the projection using the SVD components
projection = U @ U.conj().T @ b
return projection
def build_A(eps: float) -> np.ndarray:
"""
Build the matrix A(ε) as specified in the problem statement.
"""
A = np.array([
[1.0, 1.0, 0.0, (1.0 - eps)],
[1.0, 0.0, 1.0, (eps + (1.0 - eps))],
[0.0, 1.0, 0.0, eps],
[1.0, 0.0, 0.0, (1.0 - eps)],
[1.0, 0.0, 0.0, (eps + (1.0 - eps))]
], dtype=float)
return A
def build_b() -> np.ndarray:
"""
Build the vector b as specified in the problem statement.
"""
b = np.array([2.0, 1.0, 3.0, 1.0, 4.0])
return b
def question_1a(A: np.ndarray, b: np.ndarray):
# Question 1(a)
print("Question 1(a):")
projection = proj(A, b)
print("Projection of b onto the column space of A(1): (Using proj function)")
print(projection)
projection_svd = proj_SVD(A, b)
print("Projection of b onto the column space of A(1): (Using proj_SVD function)")
print(projection_svd)
print("Difference between the two methods:")
print(projection - projection_svd)
def question_1b(A: np.ndarray, b: np.ndarray):
# Question 1(b)
print("\nQuestion 1(b):")
for eps in [1.0, 1e-1, 1e-2, 1e-4, 1e-8, 1e-16, 1e-32]:
print(f"\nFor ε = {eps}:")
A_eps = build_A(eps)
try:
projection_eps = proj(A_eps, b)
print(f"Projection of b onto the column space of A({eps}):")
print(projection_eps)
except np.linalg.LinAlgError as e:
print(f"LinAlgError for eps={eps}: {e}")
except ValueError as ve:
print(f"ValueError for eps={eps}: {ve}")
projection_svd_eps = proj_SVD(A_eps, b)
print(f"Projection of b onto the column space of A({eps}) using SVD:")
print(projection_svd_eps)
print("Difference between the two methods:")
try:
print(projection_eps - projection_svd_eps)
except (TypeError, NameError):
print("Could not compute difference due to previous error.")
projection_eps = None # Reset for next iteration
if __name__ == "__main__":
A = build_A(eps=1.0)
b = build_b()
question_1a(A, b)
question_1b(A, b)

209 project1/Project1_Q2.py Normal file

@@ -0,0 +1,209 @@
import numpy as np
import time
import matplotlib.pyplot as plt
def cgs_q(A: np.ndarray) -> np.ndarray:
"""
Perform Classical Gram-Schmidt orthogonalization on matrix A.
Parameters:
A (np.ndarray): The input matrix to be orthogonalized.
Returns:
np.ndarray: The orthogonalized matrix Q.
"""
m, n = A.shape
Q = np.zeros((m, n), dtype=A.dtype)
for j in range(n):
v = A[:, j]
for i in range(j):
# r_ij = q_i^* a_j
r_ij = np.dot(Q[:, i].conj(), A[:, j])
# v = a_j - r_ij * q_i
v = v - r_ij * Q[:, i]
# v = a_j - sum_{i=1}^{j-1} r_ij * q_i
# r_jj = ||v||
r_jj = np.linalg.norm(v)
#if r_jj < 1e-14:
# print(f"Warning: Vector {j} is close to zero after orthogonalization.")
Q[:, j] = v / r_jj
return Q
def mgs_q(A: np.ndarray) -> np.ndarray:
"""
Perform Modified Gram-Schmidt orthogonalization on matrix A.
Parameters:
A (np.ndarray): The input matrix to be orthogonalized.
Returns:
np.ndarray: The orthogonalized matrix Q.
"""
m, n = A.shape
Q = np.zeros((m, n), dtype=A.dtype)
V = A.copy()
for j in range(n):
for i in range(j):
# r_ij = q_i^* v_j (projects against the partially orthogonalized
# vector v_j rather than the original a_j -- the key difference from CGS)
r_ij = np.dot(Q[:, i].conj(), V[:, j])
# v_j = v_j - r_ij * q_i
V[:, j] = V[:, j] - r_ij * Q[:, i]
# r_jj = ||a_j||
r_jj = np.linalg.norm(V[:, j])
#if r_jj < 1e-14:
# print(f"Warning: Vector {j} is close to zero after orthogonalization.")
Q[:, j] = V[:, j] / r_jj
return Q
def householder_q(A: np.ndarray) -> np.ndarray:
"""
Perform Householder QR factorization on matrix A to obtain orthogonal matrix Q.
Parameters:
A (np.ndarray): The input matrix to be orthogonalized.
Returns:
np.ndarray: The orthogonalized matrix Q.
"""
m, n = A.shape
Q = np.eye(m, dtype=A.dtype)
R = A.copy()
for k in range(n):
# Extract the vector x
# Target: x -> [||x||, 0, 0, ..., 0]^T
x = R[k:, k] # x = [a_k, a_(k+1), ..., a_m]^T
s = np.sign(x[0]) if x[0] != 0 else 1
e = np.zeros_like(x) # e = [-s*||x||, 0, 0, ..., 0]^T
e[0] = -s * np.linalg.norm(x)
# Compute the Householder vector; reflecting x onto the opposite-signed
# multiple of e_1 keeps u = x - e free of cancellation (the stable choice)
u = x - e
if np.linalg.norm(u) < 1e-14:
continue # Skip the transformation if u is too small
v = u / np.linalg.norm(u)
# Apply the Householder transformation to R[k:, k:]
R[k:, k:] -= 2 * np.outer(v, v.conj().T @ R[k:, k:])
# Update Q
# Q_k = I - 2 v v^*
Q_k = np.eye(m, dtype=A.dtype)
Q_k[k:, k:] -= 2 * np.outer(v, v.conj().T)
Q = Q @ Q_k
return Q
def cgs_twice_q(A: np.ndarray) -> np.ndarray:
"""
Perform Classical Gram-Schmidt orthogonalization twice on matrix A.
Parameters:
A (np.ndarray): The input matrix to be orthogonalized.
Returns:
np.ndarray: The orthogonalized matrix Q.
"""
return cgs_q(cgs_q(A))
def ortho_loss(Q: np.ndarray) -> float:
"""
Compute the orthogonality loss of matrix Q.
Parameters:
Q (np.ndarray): The orthogonal matrix to be evaluated.
Returns:
float: The orthogonality loss defined as log10(||I - Q^* Q||_F).
"""
m, n = Q.shape
I = np.eye(n, dtype=Q.dtype)
E = I - Q.conj().T @ Q
loss = np.linalg.norm(E, ord='fro')
return np.log10(loss)
def build_hilbert(m: int, n: int) -> np.ndarray:
"""
Build an m x n Hilbert matrix.
Parameters:
m (int): Number of rows.
n (int): Number of columns.
Returns:
np.ndarray: The m x n Hilbert matrix.
"""
H = np.zeros((m, n), dtype=float)
for i in range(m):
for j in range(n):
H[i, j] = 1.0 / (i + j + 1)
return H
def question_2bcd(n_min=2, n_max=20):
# Question 2(b)
ns = list(range(n_min, n_max + 1))
methods = {
"Classical Gram-Schmidt": cgs_q,
"Modified Gram-Schmidt": mgs_q,
"Householder": householder_q,
"Classical Gram-Schmidt Twice": cgs_twice_q
}
losses = {name: [] for name in methods.keys()}
times = {name: [] for name in methods.keys()}
for n in ns:
H = build_hilbert(n, n)
# run ten times and take the average
times_temp = {name: [] for name in methods.keys()}
losses_temp = {name: [] for name in methods.keys()}
for _ in range(10):
for name, method in methods.items():
start_time = time.perf_counter()  # monotonic high-resolution timer for micro-benchmarks
Q = method(H)
end_time = time.perf_counter()
loss = ortho_loss(Q)
times_temp[name].append(end_time - start_time)
losses_temp[name].append(loss)
for name in methods.keys():
times[name].append(np.mean(times_temp[name]))
# average the raw Frobenius-norm losses, then return to the log10 scale
losses[name].append(np.log10(np.mean(np.power(10.0, losses_temp[name]))))
#print(f"n={n}, Method={name}, Loss={loss}, Time={end_time - start_time}s")
# Plot the orthogonality loss
plt.figure(figsize=(10, 6))
for name, loss in losses.items():
plt.plot(ns, loss, marker='o', label=name)
plt.xlabel('Matrix Size n')
plt.ylabel('Orthogonality Loss (log10 scale)')
plt.title('Orthogonality Loss vs Matrix Size for Different QR Methods')
# set x ticks to be integers
plt.xticks(ns)
plt.legend()
plt.grid(True, alpha=0.3)
plt.savefig('Project1_A2d_orthogonality_loss.png')
#plt.show()
# Plot the time taken
plt.figure(figsize=(10, 6))
for name, time_list in times.items():
plt.plot(ns, time_list, marker='o', label=name)
plt.xlabel('Matrix Size n')
plt.ylabel('Time Taken (seconds)')
plt.title('Time Taken vs Matrix Size for Different QR Methods')
# set x ticks to be integers
plt.xticks(ns)
plt.legend()
plt.grid(True, alpha=0.3)
plt.savefig('Project1_A2d_time_taken.png')
#plt.show()
# Table of results (n=4, 9, 11)
print("\nTable of Orthogonality Loss and Time for n=4, 9, 11:")
print(f"{'n':<5} {'Method':<30} {'Loss':<20} {'Time (s)':<10}")
for n in [4, 9, 11]:
for name in methods.keys():
idx = ns.index(n)
print(f"{n:<5} {name:<30} {losses[name][idx]:<20} {times[name][idx]:<10}")
if __name__ == "__main__":
question_2bcd()

166 project1/Project1_Q3.py Normal file

@@ -0,0 +1,166 @@
import numpy as np
import matplotlib.pyplot as plt
# householder function from question 2
def householder_qr(A: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
"""
Perform Householder QR factorization on matrix A, returning the orthogonal factor Q and the upper triangular factor R.
Parameters:
A (np.ndarray): The input matrix to be orthogonalized.
Returns:
tuple[np.ndarray, np.ndarray]: The orthogonal matrix Q and upper triangular matrix R such that A = QR.
"""
m, n = A.shape
Q = np.eye(m, dtype=A.dtype)
R = A.copy()
for k in range(n):
# Extract the vector x
# Target: x -> [||x||, 0, 0, ..., 0]^T
x = R[k:, k] # x = [a_k, a_(k+1), ..., a_m]^T
s = np.sign(x[0]) if x[0] != 0 else 1
e = np.zeros_like(x) # e = [-s*||x||, 0, 0, ..., 0]^T
e[0] = -s * np.linalg.norm(x)
# Compute the Householder vector; reflecting x onto the opposite-signed
# multiple of e_1 keeps u = x - e free of cancellation (the stable choice)
u = x - e
if np.linalg.norm(u) < 1e-14:
continue # Skip the transformation if u is too small
v = u / np.linalg.norm(u)
# Apply the Householder transformation to R[k:, k:]
R[k:, k:] -= 2 * np.outer(v, v.conj().T @ R[k:, k:])
# Update Q
# Q_k = I - 2 v v^*
Q_k = np.eye(m, dtype=A.dtype)
Q_k[k:, k:] -= 2 * np.outer(v, v.conj().T)
Q = Q @ Q_k
return Q, R
def householder_lstsq(A: np.ndarray, b: np.ndarray) -> np.ndarray:
"""
Solve the least squares problem Ax = b using Householder reflections.
Parameters:
A (np.ndarray): The input matrix A.
b (np.ndarray): The right-hand side vector b.
Returns:
np.ndarray: The solution vector x.
"""
"""
Ax = b, A = QR
Rx = Q^*b
x = R^{-1}Q^*b
R is upper triangular
Q is orthogonal
"""
m, n = A.shape # m is N + 1, n is n + 1
Q, R = householder_qr(A)
# Compute Q^*b
Qt_b = Q.conj().T @ b
# Solve Rx = Q^*b using back substitution
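# Only the first n rows of R and Q^*b enter the solve; for m > n the
# trailing entries of Q^*b measure the least-squares residual.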
x = np.zeros(n, dtype=A.dtype)
for i in range(n-1, -1, -1):
x[i] = (Qt_b[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
return x
def f1(t: float) -> float:
"""
f1(t) = 1 / (1 + t^2)
"""
return 1.0 / (1.0 + t**2)
def build_A(N: int, n: int, f: callable) -> tuple[np.ndarray, np.ndarray]:
"""
Build the Vandermonde matrix A and right-hand side b for the least squares problem.
Parameters:
N (int): Index of the last node; t_0, ..., t_N divide [-5, 5] equally.
n (int): Polynomial degree, so A has n + 1 columns.
f (callable): The function sampled at the nodes to form b.
Returns:
tuple[np.ndarray, np.ndarray]: The (N+1) x (n+1) matrix A and the (N+1)-vector b.
"""
"""
{t_j}j=0->N, t_0=-5, t_N=5, t_j divides [-5, 5] equally
A = [[1, t_0, t_0^2, t_0^3, ..., t_0^n],
[1, t_1, t_1^2, t_1^3, ..., t_1^n],
...
[1, t_N, t_N^2, t_N^3, ..., t_N^n]] (N + 1) x (n + 1)
b = [f(t_0), f(t_1), ..., f(t_N)]^T (N + 1) x 1
x = [a_0, a_1, a_2, ..., a_n]^T (n + 1) x 1
Ax = b, x is the coeffs of the polynomial
"""
A = np.zeros((N + 1, n + 1), dtype=float)
t = np.linspace(-5, 5, N + 1)
for i in range(N + 1):
A[i, :] = [t[i]**j for j in range(n + 1)]
b = np.array([f(t_i) for t_i in t])
return A, b
def question_3c():
Ns = [30, 30, 30]
ns = [5, 15, 30]
# p(t) = a_n t^n + a_(n-1) t^(n-1) + ... + a_1 t + a_0
# plot f(t) and p(t) on the same figure for each (N, n)
t = np.linspace(-5, 5, 1000)
f_t = f1(t)
plt.figure(figsize=(15, 5))
for i in range(len(Ns)):
N = Ns[i]
n = ns[i]
A, b = build_A(N, n, f1)
x = householder_lstsq(A, b)
p_t = sum([x[j] * t**j for j in range(n + 1)])
plt.subplot(1, 3, i + 1)
plt.plot(t, f_t, label='f(t)', color='blue')
plt.plot(t, p_t, label='p(t)', color='red', linestyle='--')
plt.title(f'N={N}, n={n}')
plt.xlabel('t')
plt.ylabel('y')
plt.legend()
plt.grid(True, alpha=0.3)
plt.suptitle('Least Squares Polynomial Approximation using Householder QR')
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
plt.savefig('Project1_A3c_least_squares_approximation.png')
#plt.show()
def f2(n:int, t: float) -> float:
"""
f2(t) = t^n + t^(n-1) + ... + t^2 + t + 1
"""
return sum([t**i for i in range(n + 1)])
def question_3e():
# n = 4, 6, 8, 10, ..., 20
ns = list(range(4, 21, 2))
# N = 2n
# plotted quantity: log10(||(a_0 - 1, ..., a_n - 1)||_2)
losses = []
for n in ns:
N = 2 * n
A, b = build_A(N, n, lambda t: f2(n, t))
x = householder_lstsq(A, b)
true_coeffs = np.ones(n + 1)
loss = np.linalg.norm(x - true_coeffs)
losses.append(np.log10(loss))
plt.figure(figsize=(10, 6))
plt.plot(ns, losses, marker='o')
plt.xlabel('Polynomial Degree n')
plt.ylabel('log10(||a - true_coeffs||_2)')
plt.title('Coefficient Error vs Polynomial Degree using Householder QR')
plt.xticks(ns)
plt.grid(True, alpha=0.3)
plt.savefig('Project1_A3e_coefficient_error.png')
#plt.show()
if __name__ == "__main__":
question_3c()
question_3e()

13 project1/README.md Normal file

@@ -0,0 +1,13 @@
### Create a virtual environment and install dependencies
```bash
conda create -n math6601_env python=3.10
conda activate math6601_env
pip install -r requirements.txt
```
### Question 1
```bash
python Project1_Q1.py > Project1_A1.txt
```

2 project1/requirements.txt Normal file

@@ -0,0 +1,2 @@
numpy
matplotlib