Problem of the Day
May 1 One of the people in this 1978 picture really knows how to zero the (2,1) entry. Name that person!
April 29 How would you use fminbnd to find the area of the largest triangle with vertices on the ellipse (x/a)^2 + (y/b)^2 = 1?
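One possible setup (a sketch, not necessarily the intended solution; it assumes, by symmetry, an isosceles triangle with one vertex at (a,0) and the other two at parameter angles t and -t, and uses example semi-axes):
a = 3; b = 2;                              % example semi-axes (assumed values)
area  = @(t) a*b*sin(t).*(1 - cos(t));     % area of the triangle (a,0), (a*cos(t), +/-b*sin(t))
tStar = fminbnd(@(t) -area(t), 0, pi);     % fminbnd minimizes, so negate the area
maxArea = area(tStar)                      % compare with (3*sqrt(3)/4)*a*b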
April 27 How would you solve TX + XT = F for X, where all the matrices are n-by-n, F is given, and T is given and upper triangular?
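One possible approach (a sketch; it assumes T(i,i) + T(k,k) is never zero, which is what makes the equation uniquely solvable): equating kth columns gives (T + T(k,k)*I)*X(:,k) = F(:,k) - X(:,1:k-1)*T(1:k-1,k), so the columns of X can be found in order, each from an upper triangular system.
n = size(T,1);
X = zeros(n,n);
for k = 1:n
    rhs = F(:,k) - X(:,1:k-1)*T(1:k-1,k);   % uses already-computed columns of X
    X(:,k) = (T + T(k,k)*eye(n)) \ rhs;     % upper triangular system, O(n^2) flops
end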
April 24 Suppose Q is an orthogonal matrix with the property that Q'SQ = diag(d1,...,dn) where S is the symmetric tridiagonal matrix with twos on the diagonal and minus ones on the sub- and superdiagonals. Assume that the eigenvalues are known and that solving linear systems with Q and Q' requires n*log n flops. Let A(alfa,beta) be the symmetric tridiagonal matrix with alfa's down the diagonal and beta's down the sub- and superdiagonals. How would you solve A(alfa,beta)x = b for many different choices of alfa and beta?
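A sketch of one approach: A(alfa,beta) = (alfa + 2*beta)*I - beta*S, so the same Q diagonalizes it, i.e., Q'*A(alfa,beta)*Q = (alfa + 2*beta)*I - beta*diag(d). Assuming hypothetical helpers applyQ and applyQt that multiply a vector by Q and Q' in about n*log n flops, each new (alfa,beta) then costs roughly 2*n*log n flops plus O(n):
c = applyQt(b);                                % c = Q'*b   (hypothetical fast routine)
x = applyQ(c ./ (alfa + 2*beta - beta*d));     % x = Q*((Q'*b)./(alfa + 2*beta - beta*d))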
April 22
The nonlinear function F = [F1(x) ; ...; Fn(x)] has the property that the kth component function Fk(x) depends only on x(k) and x(n), EXCEPT for Fn, which depends on x(1),...,x(n). Describe a Newton method step.
April 20
(1) Suppose A(theta) is a matrix that depends on a real parameter theta. Assume that A(theta) is symmetric positive definite for all theta and that G(theta) is its Cholesky factor (assumed to be differentiable). How would you compute the derivative of the (n,n) entry of G?
(2) Complete:
function F = AppxExp(A,j)
% Suppose R(z) = (1+z/2)/(1 - z/2).
% F = M^p where p = 2^j and M = R(A/p)
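One possible completion of AppxExp (a sketch): form M with a single linear solve and then square it j times, since M^(2^j) can be built by repeated squaring.
function F = AppxExp(A,j)
% A possible completion (sketch): M = R(A/p) with p = 2^j, then F = M^p
% by repeated squaring.
n = size(A,1);
p = 2^j;
B = A/(2*p);
M = (eye(n) - B) \ (eye(n) + B);    % M = inv(I - A/(2p))*(I + A/(2p)) = R(A/p)
F = M;
for k = 1:j
    F = F*F;                        % after k squarings, F = M^(2^k)
end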
April 17
Complete:
function y = PageRank(G,alfa,x,m)
% G is an n-by-n matrix of zeros and 1's with no zero columns.
% alfa is a positive number between 0 and 1.
% x is a column n-vector with x(i) > 0 and sum(x) = 1.
% y = M^m * x with
%     M = alfa*G*D + (1-alfa)*e*e'
% where D = inv(diag(sum(G))) and e = ones(n,1)
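One possible completion (a sketch): apply M to a vector m times without ever forming the dense matrix M.
function y = PageRank(G,alfa,x,m)
% A possible completion (sketch): y = M^m * x with M = alfa*G*D + (1-alfa)*e*e',
% applied to a vector m times without forming M.
n = length(x);
d = sum(G)';                  % column sums of G (nonzero by assumption)
e = ones(n,1);
y = x;
for k = 1:m
    y = alfa*(G*(y./d)) + (1-alfa)*(e*(e'*y));
end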
April 15
(1) Complete:
function [a,b] = TrigFit(tVals,y,omega)
% tVals and y are column m-vectors
% omega is a column n-vector with distinct values and n < m
% If f(t) = a(1)*cos(omega(1)*t) + b(1)*sin(omega(1)*t) + ...
%              + a(n)*cos(omega(n)*t) + b(n)*sin(omega(n)*t)
% then |f(tVals(1)) - y(1)|^2 + ... + |f(tVals(m)) - y(m)|^2 is minimized.
(2) Complete:
function [minPhi,maxPhi] = G(A,x,z)
% A is m-by-n
% x,z are n-by-1
% minPhi and maxPhi are respectively the smallest possible value and
% largest possible value of
% Phi(theta) = norm(cos(theta)*A*x + sin(theta)*A*z,2)
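For part (1) above, one possible completion (a sketch): this is a linear least squares problem in the 2n unknowns a(1:n) and b(1:n).
function [a,b] = TrigFit(tVals,y,omega)
% A possible completion (sketch).
n = length(omega);
C = cos(tVals*omega');        % m-by-n, C(i,k) = cos(omega(k)*tVals(i))
S = sin(tVals*omega');        % m-by-n, S(i,k) = sin(omega(k)*tVals(i))
ab = [C S] \ y;               % least squares solution via backslash
a = ab(1:n);
b = ab(n+1:2*n);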
April 13
Complete:
function X = TwoSidedLS(A,C,B)
% A is m-by-n with rank n
% C is p-by-q with rank q
% B is m-by-q
% X is n-by-p and minimizes norm(A*X*C - B,'fro')
[UA,SA,VA] = svd(A);
[UC,SC,VC] = svd(C);
April 10
(2) Complete:
function B = OneOne(A,mu,lambda)
% A is n-by-n and symmetric
% mu and lambda are m-by-1
% B is m-by-m with the property that B(i,j) is the (1,1) entry
% of the inverse of the matrix (A - mu(i)*I)(A - lambda(j)*I).
% Assume that these matrices are always nonsingular.
April 8 Suppose H is positive definite and Q'*H*Q = D is its Schur decomposition. Let s(mu) be the solution to the linear system (H + mu*I)s = -g. How would you compute mu so that norm(s(mu)) = delta where delta > 0? Assume that norm(s(0)) > delta. Are there multiple solutions? What solution minimizes .5*s'*H*s + s'*g?
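For the April 8 "compute mu" question, a possible computational sketch (it uses the problem's Q, D = diag(d), g, and delta, and only produces one value of mu):
d    = diag(D);
ghat = Q'*g;
phi  = @(mu) norm(ghat ./ (d + mu)) - delta;   % = norm(s(mu)) - delta, decreasing for mu >= 0
hi   = norm(g)/delta + 1;                      % then norm(s(hi)) <= norm(g)/hi < delta
mu   = fzero(phi, [0, hi]);                    % bracket works: phi(0) > 0 and phi(hi) < 0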
April 6. Suppose t and y are column m-vectors. We wish to fit data points (t(i),y(i)), i=1:m with a function
f(t) = a*exp(b*t).
Taking the least squares approach, the idea is to choose a and b so that norm(a*exp(b*t) - y,2) is minimized. Show how this problem could be solved using the 1-D minimizer fminbnd. Hint: for a fixed b, what is the best a? Suggest a heuristic for picking the initial bracketing interval.
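A sketch along the lines of the hint: for fixed b the best a is (E'*y)/(E'*E) with E = exp(b*t), which leaves a one-variable problem in b for fminbnd. Here bLo and bHi stand for some heuristically chosen bracketing interval (an assumption).
aOpt  = @(b) (exp(b*t)'*y) / (exp(b*t)'*exp(b*t));   % best a for a given b
phi   = @(b) norm(aOpt(b)*exp(b*t) - y);             % residual as a function of b alone
bStar = fminbnd(phi, bLo, bHi);                      % bLo, bHi: heuristic bracket (assumed)
aStar = aOpt(bStar);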
April 3 Suppose p(x) = a(1) + a(2)x + a(3)x^2 + a(4)x^3 + x^4 has 4 real roots a<b<c<d. We apply golden section search with initial interval [L,R]. Under what conditions will the process find a local minimum? Suppose F(x) is a function from Rn to Rn with the property that its Jacobian J(x) is tridiagonal for all x. Explain how one can generate a divided difference approximation of J with just four F-evaluations. Hint: for a cleverly chosen vector v, (F(x+hv)-F(x))/h will provide an approximation to J(:,1), J(:,4), J(:,7), etc.
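For the Jacobian question, a sketch of the hinted approach (it assumes F is available as a function handle and h is a chosen difference step): perturb along the three index patterns 1:3:n, 2:3:n, 3:3:n so that, within each evaluation, the columns touched do not overlap in any row.
F0 = F(x);  n = length(x);  J = zeros(n,n);
for g = 1:3
    v = zeros(n,1);  v(g:3:n) = 1;          % g = 1 perturbs x(1), x(4), x(7), ...
    dF = (F(x + h*v) - F0)/h;               % approximates the sum of columns J(:,g:3:n)
    for k = g:3:n
        rows = max(1,k-1):min(n,k+1);       % tridiagonal: column k is nonzero only in these rows
        J(rows,k) = dF(rows);
    end
end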
Apr 1 How would you use fminbnd to find the area of the largest triangle whose vertices are on the ellipse (x/a)^2 + (y/b)^2 = 1?
Mar 30 Suppose p(x) = a(1) + a(2)x + a(3)x^2 + a(4)x^3 + x^4 has 4 real roots a<b<c<d. We apply golden section search with initial interval [L,R]. Under what conditions will the process find a local minimum?
Mar 27 Consider applying the secant method to the function f(x) = x^2 - A where A>0. Show that
e(k+1) = e(k)*e(k-1)/(x(k) + x(k-1)) where e(j) = x(j) - sqrt(A).
From this we conclude that |e(k+1)/(e(k)*e(k-1))| converges to a constant C>0.
Mar 25 Write a function r = cRoots(a,b,c,d) that returns in the column 3-vector r all the roots of the cubic equation ax^3 + bx^2 + cx + d = 0. Make effective use of fzero.
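One possible cRoots (a sketch; it assumes a ~= 0): use fzero to locate one real root, which a real cubic always has, then deflate and finish with the quadratic formula.
function r = cRoots(a,b,c,d)
% A possible implementation (sketch).  Assumes a ~= 0.
p  = @(x) ((a*x + b).*x + c).*x + d;      % Horner evaluation of the cubic
r1 = fzero(p, 0);                         % one real root (a real cubic always has one)
B  = b + a*r1;                            % deflation:
C  = c + B*r1;                            %   a*x^3 + b*x^2 + c*x + d = (x - r1)*(a*x^2 + B*x + C)
disc = sqrt(B^2 - 4*a*C);                 % complex if the remaining roots are complex
r  = [r1; (-B + disc)/(2*a); (-B - disc)/(2*a)];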
Mar 23 The act of finding sqrt(A) is the act of "building a square" with area A. If we have a rectangle with base x and height A/x, then we can make it "more square" by replacing x with the average of the "current" base and current height, i.e., xNew = (x + A/x)/2. This is precisely the update recipe obtained when we apply Newton's method to get a zero of the function f(x) = x^2 - A.
To find cube roots we can apply Newton's method to f(x) = x^3 -A giving
xNew = x - (x^3 - A)/(3x^2) = (2x^3 + A)/(3x^2)
Can you derive this recipe using an intuitive geometric argument similar to the one above for square roots? Start with the fact that we want to build a cube with volume A. Develop an update that makes the "current cube" more cubical!
Mar 13 Let A = [lambda_1 m ; 0 lambda_2] and assume that the eigenvalues lambda_1 and lambda_2 are distinct. Determine X so that inv(X)*A*X = diag(lambda_1,lambda_2). Recall that if Ax = lambda*x and y'*A = lambda*y' then the condition of lambda is 1/abs(x'*y) assuming that x and y have unit 2-norm. What is the condition of lambda_1 and lambda_2 in the above 2-by-2 example? More generally, how would you compute the condition number of all the eigenvalues using [X,D] = eig(A) assuming that the eigenvalues of A are distinct?
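For the last question, a sketch: the rows of inv(X) supply left eigenvectors, and with Y = inv(X)' we have Y(:,k)'*X(:,k) = 1, so the condition of the kth eigenvalue is norm(X(:,k))*norm(Y(:,k)).
[X,D] = eig(A);
Y = inv(X)';                          % Y(:,k)'*A = D(k,k)*Y(:,k)' and Y(:,k)'*X(:,k) = 1
s = sqrt(sum(abs(X).^2,1)) .* sqrt(sum(abs(Y).^2,1));   % s(k) = condition of the kth eigenvalue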
Mar 11 A is a real matrix and mu is a complex eigenvalue. Suppose Az = mu*z. Show that the real and imaginary parts of the (complex) eigenvector z define a real, 2-dimensional invariant subspace. Suppose the dominant eigenvalue lambda_1 of a real matrix is complex. Note that the conjugate of lambda_1 is also an eigenvalue so the dominant eigenvalue is not unique. What happens if we apply the power method with a real starting vector? Assume that all the other eigenvalues are smaller in magnitude.
Mar 9 If A-sI = QR, why is B = RQ +sI similar to A?
Mar 6 Write a fragment to handle the update A = H'*A*H where H = I - 2*u*u' with u(1:r) = 0. Assume A is symmetric.
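A possible fragment (a sketch, written as a symmetric rank-two correction; since u(1:r) = 0, only rows and columns r+1:n of A actually change, and a flop-conscious version would update just that block):
p = A(:,r+1:n)*u(r+1:n);          % p = A*u, exploiting u(1:r) = 0
beta = u(r+1:n)'*p(r+1:n);        % beta = u'*A*u
v = 2*(p - beta*u);
A = A - u*v' - v*u';              % equals H'*A*H and stays symmetric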
Mar 4 Suppose Ax = lambda*x is the dominant eigenpair for a symmetric A. What happens if we apply the power method with the matrix A - lambda*x*x' ?
Mar 2 Show that after a Jacobi update that zeros A(p,q), then off(Anew)^2 = off(A)^2 - 2A(p,q)^2. Here, off(A)^2 is the sum of the squares of the off-diagonal entries.
Feb 27 If you have a function [c,s] = Schur2(A) that computes a rotation that diagonalizes a symmetric 2-by-2 A, show how it can be used to compute a 2-by-2 svd. Hint: How would you determine c and s so that [c s;-s c]'*[w x ; y z] is symmetric?
Feb 25 What is the 2-norm condition of A - mu*un*vn' where U'AV = S is the svd of an n-by-n matrix A and un = U(:,n) and vn = V(:,n).
Feb 23 Suppose A is m-by-n and rank deficient. How would you compute the minimizer of norm(Ax-b) that is closest to a given vector d? Hint: Use the SVD and the fact that if xStar is a minimizer then so is any vector of the form xStar + z where z is a linear combo of the singular vectors V(:,r+1),...,V(:,n).
Feb 20 Suppose f and g are column n-vectors. Show how to compute an orthogonal Q as a product of Givens rotations so that Q' * (f * g') * Q is zero everywhere except in the (1,1) and (1,2) entries. Hint: Put a lot of zeros into f and then put a lot of zeros into g. Hint: If (Q'*f)*(Q'*g)' has all those zeros, then Q'*f had better be zero in all but component 1 and Q'*g had better be zero everywhere except components 1 and 2.
Feb 18 Suppose u is a unit m-vector and u(1:k-1) = 0. Suppose A is an m-by-n matrix. Carefully show how you would overwrite A with H*A where H = I - 2*u*u'.
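A possible fragment (a sketch): since u(1:k-1) = 0, H*A only changes rows k:m of A.
[m,n] = size(A);
v = u(k:m);
A(k:m,:) = A(k:m,:) - 2*v*(v'*A(k:m,:));   % about 4*(m-k+1)*n flops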
Feb 16 Suppose the two smallest singular values of A (m-by-n) are less than delta. Assume we are given a nonzero n-vector z. How would you compute a unit 2-norm vector x such that (a) x and z are orthogonal and (b) norm(A*x,2) <= delta? Hint: let x be a linear combination of V(:,n-1) and V(:,n). Recall that if you have a pair of unit vectors f and g that are orthogonal to each other, then norm(alfa*f + beta*g)^2 = alfa^2 + beta^2.
Feb 13 For the 3-by-2 case, verify that the method of normal equations is equivalent to zeroing the gradient of
f(x) = (A*x - b)'*(A*x - b)
Feb 11 Using very little pencil and paper, what is the SVD of A = [10 20 ; 25 -8]?
Feb 9 For an n = 2 SVD, is it always possible to make both U and V rotations? Recall that a 2-by-2 orthogonal matrix is a rotation if it has the form [c s ; -s c].
Feb 6 A is symmetric positive definite and we compute A = LU. How would you compute lower triangular G so A = GG'?
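A sketch, assuming the factorization A = LU is computed without pivoting so that L is unit lower triangular; by symmetry U = diag(d)*L' with d = diag(U).
d = diag(U);               % positive because A is positive definite
G = L*diag(sqrt(d));       % lower triangular, and G*G' = L*diag(d)*L' = L*U = A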
Feb 4 If A is n-by-n, symmetric, and positive definite, explain why A(1:n-1,1:n-1) is symmetric and positive definite. Suppose b is a given column n-vector. How would you go about computing x and y so that Ax = b and A(1:n-1,1:n-1)y = b(1:n-1)?
Feb 2. How might you represent an n-by-n permutation P with an n-vector of integers? Assuming P is represented in this way and x is a column n-vector, how would you compute y = P*x and z = P'*x?
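One common convention (an assumption, not the only one): store the integer vector p with P = I(p,:), i.e., row i of P is row p(i) of the n-by-n identity. Then:
y = x(p);              % y = P*x, since (P*x)(i) = x(p(i))
z = zeros(n,1);
z(p) = x;              % z = P'*x  (P' applies the inverse permutation)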
Jan 30. Suppose U is n-by-n, nonsingular, and upper triangular. We want to solve U*U*x = b for x. What is the flop difference between x = (U*U)\b and x = U\(U\b)? Assume that Matlab uses back-substitution when backslash is applied to an upper triangular system.
Jan 28. How would you solve (A^k)x = b where A is n-by-n nonsingular, b is n-by-1, and k is a positive integer.
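One possible approach (a sketch): factor A once, then perform k pairs of triangular solves instead of ever forming A^k.
[L,U,piv] = lu(A,'vector');     % A(piv,:) = L*U, about (2/3)*n^3 flops, done once
x = b;
for j = 1:k
    x = U\(L\x(piv));           % each pass solves A*(new x) = (old x), about 2n^2 flops
end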
Jan 26 Suppose A is n-by-n and x is n-by-1. Show that norm(A*x,2) <= n*a* norm(x,2) where a = max |A(i,j)|
Jan 23. Suppose x and y are two positive, normalized floating point numbers. (Assume IEEE double precision.) Exactly how many floating point numbers are there in between x and y given that
floor(log2(x)) = e1 < e2 = floor(log2(y))
and that neither x nor y is an integral power of two?
Jan 21. Suppose A is m-by-n, B is n-by-p, C is p-by-q, and D is q-by-r. How would you compute F = ABCD if minimizing flops is the goal?
Jan 19. Suppose A is an initialized n-by-n matrix and y and z are initialized n-by-1 vectors. What is the dimension of B = z*y'*A and how would you compute it? How many flops would be required by your method?
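A sketch: B is n-by-n, and grouping the product as z*(y'*A) keeps the cost at O(n^2) flops.
w = y'*A;        % 1-by-n row vector, about 2n^2 flops
B = z*w;         % n-by-n outer product, about n^2 flops (roughly 3n^2 in all,
                 % versus O(n^3) if (z*y')*A were formed in that order)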