Math

This page covers the list of math topics provided here:

solutions of ordinary and partial differential equations, power series solutions, linear independence, elementary linear algebra, determinants, Taylor series, eigenvalue problems, orthogonality, Fourier series and integrals, integration by parts, vector algebra, complex numbers, special functions (e.g., Bessel functions and Legendre polynomials).

I have organized these topics in a way that progresses from basic mathematics to more difficult concepts:

  1. Review of basics
  2. Taylor series
  3. Fourier series and transforms
  4. Linear algebra
  5. Ordinary differential equations
  6. Orthogonality and special functions
  7. Vector algebra & vector calculus
  8. Partial differential equations


Review of basics

  1. State the fundamental theorem of algebra (d'Alembert's theorem). [answer]

    Every non-constant single-variable polynomial \(a_nx^n + a_{n-1}x^{n-1} + \dots+a_0\) with complex coefficients \(a_n, a_{n-1}, \dots, a_0\) has at least one complex root.

  2. State the rational root theorem. [answer]

    For a polynomial equation \(a_nx^n + a_{n-1}x^{n-1} + \dots+a_0 =0\) with integer coefficients, each rational root, written in lowest terms as \(x= p/q\), is such that \(p\) is an integer factor of \(a_0\) and \(q\) is an integer factor of \(a_n\).

  3. Solve \(x^3 - x^2 - 5x -3 = 0.\) [answer]

    \(x=-1,-1,3\)

  4. Solve \(x^3 - x^2 - 8 x - 6 = 0\). [answer]

    \(x=-1,1+\sqrt{7},1-\sqrt{7}\)

  5. Solve \(x^4 - 2x^3 - 3x^2 + 8x -4 = 0\). [answer]

    \(x=-2,1,1,2\)

  6. Decompose \(1/(x^2 + 2x-3)\) into partial fractions. [answer]

    \[\frac{1}{4}\bigg(\frac{-1}{x+3} + \frac{1}{x-1}\bigg)\]
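
    These decompositions are quick to double-check by machine; below is a minimal sketch, assuming SymPy is available (`apart` computes partial fractions):

```python
# Partial-fraction check with SymPy; x is a symbol defined here.
from sympy import symbols, apart

x = symbols('x')
print(apart(1 / (x**2 + 2*x - 3)))  # -> 1/(4*(x - 1)) - 1/(4*(x + 3))
```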

  7. Decompose \((x^3+16)/(x^3 - 4x^2 +8x)\) into partial fractions. [answer]

    See example 2 here.

  8. Decompose \(1/(x^3-1)\) into partial fractions. [answer]

    See example 5 here and note that the limit method need not be used.

  9. Find the roots of \(\exp(2z)=2i\). [answer]

    Write \(2i = 2e^{i\pi/2}\). Then take the log of both sides, giving \(2z = \ln 2 + i\frac{\pi}{2}\), or \(z =\frac{\ln 2}{2} + i\frac{\pi}{4}\) (plus integer multiples of \(i\pi\), since the exponential is \(2\pi i\)-periodic).

  10. Evaluate \((-i)^{1/3}\). [answer]

    \begin{align*} (-i)^{1/3} &= (e^{-i\pi/2})^{1/3} \\ &=e^{-i\pi/6} \\ &=(\cos \pi/6 - i\sin \pi/6)\\ &=(\sqrt{3}/2 - i/2) \end{align*}

  11. Find the real part of \(e^{-ix}/(1+e^{a+ib})\). [answer]

    \begin{align*} \frac{e^{-ix}}{1+ e^{a+ib}}&=\frac{\cos x - i\sin x}{1 + e^a(\cos b + i\sin b)}\\ &=\frac{\cos x -i\sin x}{1 + e^a(\cos b + i\sin b)}\cdot\frac{1 + e^a(\cos b - i\sin b)}{1 + e^a(\cos b - i\sin b)}\\ &= \frac{\cos x +e^a\cos x\cos b -e^a\sin x \sin b - i(\sin x + e^a\cos x\sin b + e^a\sin x\cos b)}{1 +2e^{a}\cos b + e^{2a}(\cos^2 b + \sin^2 b)} \end{align*} The real part is \[\frac{\cos x +e^a\cos x\cos b -e^a\sin x \sin b}{1 + 2e^{a}\cos b + e^{2a}} = \frac{\cos x + e^a\cos(x+b)}{1 + 2e^{a}\cos b + e^{2a}}\,.\]
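
    Because this manipulation is error-prone by hand (the denominator is easy to drop terms from), here is a quick numerical spot-check, assuming NumPy is available:

```python
# Spot-check the real part of e^{-ix}/(1 + e^{a+ib}) at random x, a, b.
import numpy as np

rng = np.random.default_rng(0)
x, a, b = rng.uniform(-2, 2, 3)
lhs = (np.exp(-1j * x) / (1 + np.exp(a + 1j * b))).real
rhs = (np.cos(x) + np.exp(a) * np.cos(x + b)) / (1 + 2 * np.exp(a) * np.cos(b) + np.exp(2 * a))
print(np.isclose(lhs, rhs))  # True
```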

  12. State the fundamental theorem of calculus. [answer]

    If \(f\) is continuous and \(F(x) = \int_a^x f(t)\,dt\), then \(F'(x) = f(x)\); conversely, \(\int_a^b f(x)\,dx = F(b) - F(a)\) for any antiderivative \(F\) of \(f\). Loosely, differentiation and integration are inverse operations.

  13. What is the relationship between differentiability and continuity? [answer]

    Differentiability implies continuity, but continuity does not imply differentiability. For an example of a continuous function that is not differentiable, consider \(\sqrt{|x|}\) at \(x= 0\).

  14. \(\lim_{x\to 0}\frac{e^x-1}{x^2+x}=\) [answer]

    \(1\) by L'Hopital's rule.

  15. \(\lim_{x\to 0}\frac{2\sin x - \sin 2x}{x - \sin x}=\) [answer]

    \(6\) by repeated use of L'Hopital's rule.

  16. \(\lim_{x\to \infty} x^n\, e^{-x} =\) [answer]

    \(0\) by L'Hopital's rule. Apply L'Hopital's rule \(n\) times, until the power of \(x\) in the numerator is reduced to \(x^0\).
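
    The three limits above can be confirmed symbolically; a minimal sketch, assuming SymPy is available:

```python
# Symbolic limits with SymPy (no manual L'Hopital needed).
from sympy import symbols, limit, exp, sin, oo

x = symbols('x')
print(limit((exp(x) - 1) / (x**2 + x), x, 0))             # 1
print(limit((2*sin(x) - sin(2*x)) / (x - sin(x)), x, 0))  # 6
print(limit(x**5 * exp(-x), x, oo))                       # 0 (the n = 5 case)
```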

  17. \(d\sinh x/dx = \) [answer]

    \(\cosh x\)

  18. \(d\cosh x/dx = \) [answer]

    \(\sinh x\)

  19. \(d\tan x/dx = \) [answer]

    \(\sec^2 x\)

  20. \(d\cot x/dx = \) [answer]

    \(-\csc^2 x\)

  21. \(d\sec x/dx = \) [answer]

    \(\sec x\tan x\)

  22. \(d\csc x/dx = \) [answer]

    \(-\cot x\csc x\)

  23. \(\int dx(x^4 + x^3 + x^2 + 1)/(x^2 + x - 2)=\) [answer]

    See example 6 here.

  24. Show that \(\int u dv = uv - \int v du \). [answer]

    Start with the product rule, \(\frac{d}{dx} (uv) = v\frac{du}{dx} + u\frac{dv}{dx}\), integrate both sides over \(x\), and rearrange.

  25. \(\int \ln x dx = \) [answer]

    \(x\ln (x) -x + C\)

  26. \( \int e^x \sin x dx = \) [answer]

    Integrate by parts twice to get \( \int e^x \sin x dx = e^x[\sin(x)- \cos(x)]/2 + C\).
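
    Both antiderivatives are quick to verify symbolically, assuming SymPy is available (`integrate` omits the constant of integration):

```python
# Symbolic antiderivatives with SymPy.
from sympy import symbols, integrate, exp, log, sin

x = symbols('x')
print(integrate(log(x), x))           # x*log(x) - x
print(integrate(exp(x) * sin(x), x))  # exp(x)*sin(x)/2 - exp(x)*cos(x)/2
```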

  27. What are some general guidelines about trigonometric substitution? [answer]

    If a quantity \(a^2-x^2\) is involved, set \(x = a\sin\theta\). If a quantity \(x^2+ a^2\) is involved, set \(x = a\tan\theta\). If a quantity \(x^2-a^2\) is involved, set \(x = a\sec\theta\).

  28. \( \int \sqrt{a^2-x^2} dx = \) [answer]

    \[\frac{a^2}{2}\arcsin{\frac{x}{a}} + \frac{a^2}{2}\frac{x}{a}[1-(x/a)^2]^{1/2} +C.\] See this page for more on trigonometric substitution.

  29. \( \int \frac{dx}{\sqrt{a^2-x^2}} = \) [answer]

    \[\arcsin(x/a) + C\]

  30. \( \int \frac{dx}{a^2+x^2} = \) [answer]

    \[\frac{1}{a}\arctan (x/a)+C \]

  31. ☸ \( \int \sqrt{a^2+x^2}dx = \) [answer]

    See this page for the solution. The integral of secant cubed is needed.

Taylor series

  1. What is the difference between a Taylor series and a Maclaurin series? [answer]

    A Taylor series of \(f(x)\) about \(x = a\) is \[f(x) = f(a) + f'(a)(x-a) + \frac{1}{2!}f''(a)(x-a)^2 + \dots,\] while the Maclaurin series is the special case for \(a= 0\).

  2. Write the first three nonzero terms of the Maclaurin series representation of \(e^x \), \(\sin x \), \(\cos x \), and \(\tan x \). [answer]

    \begin{align*} e^x&\simeq 1 + x + \frac{x^2}{2!}\\ \sin x &\simeq x - \frac{x^3}{3!} + \frac{x^5}{5!}\\ \cos x &\simeq 1 - \frac{x^2}{2!} + \frac{x^4}{4!}\\ \tan x & \simeq x + \frac{x^3}{3} \tag*{Sorry, I will go no higher.} \end{align*}
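
    A sketch for generating these expansions, assuming SymPy is available (`series` expands about \(x = 0\) here):

```python
# Maclaurin expansions through fifth order with SymPy.
from sympy import symbols, exp, sin, cos, tan, series

x = symbols('x')
for f in (exp(x), sin(x), cos(x), tan(x)):
    print(series(f, x, 0, 6))
```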

  3. Show that \( \cos x = (e^{jx} + e^{-jx})/2\), \( \sin x = (e^{jx} - e^{-jx})/2j\). [answer]

    Add and subtract the two Euler formulas \(e^{\pm jx} = \cos x \pm j\sin x\), then solve for \(\cos x\) and \(\sin x\), respectively.

  4. Write the first three terms of the Maclaurin series representation of \(\ln (1+x)\). [answer]

    \[\ln (1+x) \simeq x - \frac{x^2}{2} + \frac{x^3}{3} \text{ near } x = 0\]

  5. Write the first three terms of the Maclaurin series representation of \(\ln (1-x)\) [answer]

    \[\ln (1-x) \simeq -x - \frac{x^2}{2} - \frac{x^3}{3} \text{ near } x = 0\]

  6. Write the first three terms of the Taylor series representation of \(\ln (-x)\) about \(x= -1\). Verify your answer by checking that it can be obtained from the previous expansion for \(\ln (1-x)\) by shifting the latter one unit in the negative \(x\) direction. [answer]

    \[\ln (-x) \simeq -(x+1) - \frac{(x+1)^2}{2} - \frac{(x+1)^3}{3} \text{ near } x = -1\] This matches the previous result when the previous result is shifted one unit in the \(-x\) direction.

Fourier series and transforms

  1. The sine-cosine form of the Fourier series is \[f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty \big[a_n \cos(nx) + b_n\sin(nx) \big]\,. \] Why is \(a_0\) not included in the sum? [answer]

    \(a_0\) is not included in the sum to preserve a symmetric form of the definitions of \(a_n\) and \(b_n\).

  2. Use orthogonality to determine \(a_0\), \(a_n\), and \(b_n\). [answer]

    First multiply both sides of the form above by \(\cos(mx)dx\) and integrate from \(-\pi\) to \(\pi\); do the same with \(\sin(mx)dx\). Use the relations \begin{align*} \int_{-\pi}^\pi \sin mx \sin nx dx &= \delta_{nm}\pi\\ \int_{-\pi}^\pi \cos mx \cos nx dx &= \delta_{nm}\pi\quad (2\pi \text{ for } m=n=0)\\ \int_{-\pi}^\pi \sin mx \cos nx dx &=0 \end{align*} and obtain \begin{align*} a_0&=\frac{1}{\pi}\int_{-\pi}^\pi f(x) dx\\ a_n&=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\cos nx dx\\ b_n&=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\sin nx dx \end{align*}

  3. If \(f(x)\) is periodic in \(2\pi\), has a finite number of maxima, minima, and discontinuities, and if \(\int_{-\pi}^\pi |f(x)|dx \) is finite, what is the relationship between \(f(x)\) and its Fourier series? What are these conditions called? [answer]

    The Fourier series of \(f(x)\) will converge to \(f(x)\) at all points where \(f(x)\) is continuous. For points at which \(f(x)\) is discontinuous (i.e., has a jump), the Fourier series converges to the midpoint of the jump.

    These are the Dirichlet conditions.

  4. Obtain the Fourier series expansion coefficients of a periodic square wave, which is \(1\) for \(-\pi \leq x\leq 0 \) and \(-1\) for \(0 < x \leq \pi \). [answer]

    Using the definitions above, \(a_0=0\), \(a_n=0\), and \(b_n = -4/n\pi\) for odd \(n\) and \(0\) for even \(n\).
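
    A numerical check of these coefficients, assuming SciPy is available (the `points` argument tells `quad` about the jump at \(x=0\)):

```python
# Numerically compute b_n for the square wave: 1 on [-pi, 0], -1 on (0, pi].
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 if x <= 0 else -1.0
for n in range(1, 6):
    integral, _ = quad(lambda x: f(x) * np.sin(n * x), -np.pi, np.pi, points=[0])
    print(n, integral / np.pi, -4 / (n * np.pi) if n % 2 else 0.0)
```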

  5. Obtain the Fourier series expansion coefficients of a periodic sawtooth wave, which is \(x/\pi\) for \(-\pi \leq x\leq \pi \). [answer]

    Using the definitions above, \(a_0=0\), \(a_n=0\), and \(b_n = -(-1)^n 2/n\pi\).

  6. Obtain the Fourier series expansion coefficients for some of the periodic functions listed here. [answer]

    The expansion coefficients are listed on that page. The half-wave rectified sine appears in the Penn State math packet.

  7. The complex-exponential form of the Fourier series is \[f(x) = \sum_{n = -\infty}^{\infty} c_n e^{inx}\] Use orthogonality to determine \(c_n\). [answer]

    The orthogonality relation is \[\int_{-\pi}^{\pi} e^{inx}e^{-imx}dx = 2\pi \delta_{nm} \,.\] Multiplying both sides by \(e^{-imx}\) and integrating from \(-\pi\) to \(\pi\) gives \[c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}dx.\]

  8. Attempt some of these problems from Boas's Methods book, using the sine-cosine and complex exponential Fourier series. [answer]

    The answers are provided in that document.

  9. State Parseval's theorem. [answer]

    Parseval's theorem says that the average value of \(|f(x)|^2\) over a period is the sum of the magnitude-squared of the expansion coefficients. This is most straightforwardly written in terms of the complex exponential Fourier series (because there is only one kind of expansion coefficient): \[\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x)|^2\,dx = \sum_{-\infty}^{\infty} |c_n|^2 \,.\] Parseval's theorem is sometimes referred to as the "completeness relation," because if any one of the harmonics were left out, the left-hand side would exceed the sum: \(\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x)|^2\,dx > \sum_{-\infty}^{\infty} |c_n|^2\). The corresponding general statement with \(\geq\) is known as Bessel's inequality.

  10. In what sense is the Fourier transform a generalization of the Fourier series? What are the conditions for the convergence of the Fourier transform? [answer]

    The Fourier transform is the "continuum limit" of the Fourier series, expressing a function as the integral of a continuous spectrum of waves, rather than as just the sum of a discrete spectrum of waves. The Fourier transform applies to arbitrary functions (not necessarily periodic).

    The form of the Fourier transform can readily be derived by considering the complex-exponential form of the Fourier series, given above. In the limit that \(n\) is a continuous index \(k\), the series becomes an integral, \[f(x) = \int_{-\infty}^\infty c(k) e^{ikx}dk,\] and the coefficients \(c(k)\) are \[c(k) = \frac{1}{2\pi}\int_{-\infty}^\infty f(x) e^{-ikx}dx.\] Note that the appearance of \(1/2\pi\) in the expression for \(c(k)\) is merely a convention.

    The conditions for the convergence of the Fourier transform are the same as those for the Fourier series (the Dirichlet conditions), with the integrability condition now taken over the entire real line: \(\int_{-\infty}^{\infty} |f(x)|dx \) must be finite.

  11. Given the function \begin{align*} f(x) = \begin{cases} 1,\quad &x \in [-1,1]\\ 0,\quad & |x| \in (1,\infty) \end{cases} \end{align*} calculate its Fourier transform \(c(k)\). Write the integral representation of \(f(x)\) in terms of the calculated \(c(k)\). [answer]

    Calculating the Fourier transform in this case is straightforward: \begin{align*} c(k) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(x) e^{-ikx} dx &= \frac{1}{2\pi}\int_{-1}^{1} e^{-ikx} dx\\ &= \frac{i}{2\pi k} (e^{-ik}-e^{ik})\\ &= -\frac{i}{2\pi k} (e^{ik}-e^{-ik})\\ &= -\frac{i}{2\pi k} 2i \sin{k}\\ &= \frac{\sin{k}}{\pi k} \end{align*} So the integral representation of the given function is \(f(x) = \int_{-\infty}^{\infty} \frac{\sin k}{\pi k} e^{ikx}dk\).
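
    A numerical spot-check of \(c(k)\), assuming SciPy is available (the imaginary part of the integral vanishes by symmetry, so only the cosine part is integrated):

```python
# Compare direct numerical integration against c(k) = sin(k)/(pi k).
import numpy as np
from scipy.integrate import quad

def c(k):
    re, _ = quad(lambda x: np.cos(k * x), -1.0, 1.0)  # real part of the transform integral
    return re / (2 * np.pi)

for k in (0.5, 1.0, 2.0):
    print(c(k), np.sin(k) / (np.pi * k))
```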

  12. Given the function \begin{align*} f(x) = \begin{cases} x,\quad &x \in [0,1]\\ 0,\quad & \text{otherwise} \end{cases} \end{align*} calculate its Fourier transform \(c(k)\). Write the integral representation of \(f(x)\) in terms of the calculated \(c(k)\). [answer]

    \begin{align*} c(k) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(x) e^{-ikx} dx &= \frac{1}{2\pi}\int_{0}^{1} xe^{-ikx} dx\\ &= \frac{ixe^{-ikx}}{2\pi k} + \frac{e^{-ikx}}{2\pi k^2} \bigg\rvert_{x=0}^{1}\\ &=\frac{ie^{-ik}}{2\pi k} + \frac{e^{-ik}-1}{2\pi k^2} \end{align*} So the integral representation of the given function is \[f(x) = \int_{-\infty}^{\infty} \bigg( \frac{ie^{-ik}}{2\pi k} + \frac{e^{-ik}-1}{2\pi k^2} \bigg) e^{ikx}dk\,.\]

  13. What does a Gaussian pulse in the time domain, i.e., \(\exp (-t^2/T^2)\), look like in the frequency domain? [answer]

    Since the Fourier transform of a Gaussian is a Gaussian, the signal in the frequency domain is a Gaussian.

  14. ☸ Now consider a sine wave modulated by a Gaussian envelope in the time domain, i.e., \(\exp(j\omega_0 t) \exp (-t^2/T^2)\). What does this signal look like in the frequency domain? [answer]

    The Fourier transform of this signal is also a pure Gaussian, now centered at \(\omega_0\). I confirmed this analytically and numerically.

    At first, it was not intuitively clear to me why this signal is a pure Gaussian in the frequency domain, rather than a combination of a Gaussian and a delta function in the frequency domain. I was expecting a \(\delta\)-function to appear at the frequency \(\omega_0\). This can be rationalized by making use of the convolution theorem, which states that the Fourier transform of the product of two functions is equal to the convolution of their individual Fourier transforms. We wish to find the Fourier transform of a sinusoid \(\times\) a Gaussian, and we know that the Fourier transform of a sinusoid is a delta function, while that of a Gaussian is a Gaussian. By the convolution theorem, the Fourier transform of the product is the convolution of the delta function at \(\omega_0\) and the Gaussian, which is a single Gaussian shifted so that it is centered at \(\omega_0\).
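
    A numerical illustration, assuming NumPy is available: the magnitude spectrum shows a single peak at \(\omega_0\), with no separate \(\delta\)-like spike.

```python
# FFT of a Gaussian-windowed complex sinusoid; the spectrum peaks at w0.
import numpy as np

T, w0 = 1.0, 10.0
t = np.linspace(-20.0, 20.0, 4096)
signal = np.exp(1j * w0 * t) * np.exp(-t**2 / T**2)
spectrum = np.fft.fftshift(np.fft.fft(signal))
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
print(w[np.argmax(np.abs(spectrum))])  # approximately 10 = w0
```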

Linear algebra

These questions come from Introduction to Linear Algebra by Gilbert Strang and the appendix of Introduction to Quantum Mechanics by D. J. Griffiths.

Towards the end of this section there is some overlap with tensor algebra.

  1. Define linear independence. [answer]

    Vectors \(\vec{v}_1,\vec{v}_2,\vec{v}_3,\dots,\vec{v}_n\) are linearly independent iff the only solution of \[a_1 \vec{v}_1 + a_2 \vec{v}_2 + a_3 \vec{v}_3 + \dots + a_n \vec{v}_n = \vec{0}\] is \(a_1=a_2=a_3=\dots=a_n = 0\).

  2. The columns of an invertible matrix are _______ _______. [answer]

    linearly independent

  3. How can one easily check to see that \(n\) vectors, each one \(n\times 1\), are linearly independent? [answer]

    One can assemble a matrix of the \(n\) vectors as \(n\times 1\) columns and see if the determinant is nonzero. If so, the matrix can be inverted, and the columns are linearly independent. If the determinant is \(0\), the columns are not linearly independent.

  4. Determine whether the vectors \(\vec{v}_1 = (1,1)\) and \(\vec{v}_2 = (-3,2)\) are linearly independent. [answer]

    Apply the definition of linear independence, \begin{align*} a_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + a_2 \begin{pmatrix} -3 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\0 \end{pmatrix} \end{align*} This can be recast as \begin{align*} \begin{pmatrix} 1 & - 3\\ 1 & 2 \end{pmatrix} \begin{pmatrix} a_1 \\a_2 \end{pmatrix} = \begin{pmatrix} 0\\0 \end{pmatrix} \end{align*} Performing row reduction gives the identity matrix. Therefore the columns are linearly independent.
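
    The determinant test from question 3, applied to this pair; a minimal sketch, assuming NumPy is available:

```python
# Stack the vectors as columns and test the determinant.
import numpy as np

v1, v2 = np.array([1.0, 1.0]), np.array([-3.0, 2.0])
A = np.column_stack([v1, v2])
print(np.linalg.det(A))  # 5.0: nonzero, so v1 and v2 are linearly independent
```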

  5. Taking \(\vec{v}_1 = (1,1)\) and \(\vec{v}_2 = (-3,2)\) as basis vectors, what linear combination of \(\vec{v}_1\) and \(\vec{v}_2\) gives the vector \(\vec{v}_3 = (2,1)\)? [answer]

    The question at hand is to explore \begin{align*} a_1 \vec{v}_1 + a_2 \vec{v}_2 + a_3 \vec{v}_3 = \vec{0}. \end{align*} Performing row operations on \begin{align*} \begin{pmatrix} 1 & -3 & 2\\ 1 & 2 & 1 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \end{align*} gives \begin{align*} \begin{pmatrix} 1 & 0 & 7/5\\ 0 & 1 & -1/5 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \end{align*} Thus \(a_1 = -\frac{7}{5} a_3\) and \(a_2 = \frac{1}{5}a_3\). To answer the question at hand, set \(a_3 = -1\) to find that \(a_1 = \frac{7}{5}\) and \(a_2 = -\frac{1}{5}\). That is, \(\frac{7}{5} \vec{v}_1 - \frac{1}{5}\vec{v}_2 = \vec{v}_3\).

  6. Invert \begin{align*} {A} = \begin{pmatrix} 0 & 1 & 1\\ 0 & 0 & 1\\ 2 & 1 & 0 \end{pmatrix} \end{align*} using row operations. Check the result by using Cramer's rule for the inverse, \({A}^{-1} = \frac{1}{\det{A}} {C}^\mathsf{T}\), where \({C}\) is the matrix of cofactors, which is \((-1)^{i + j}\) times the determinant of \({A}\) when the \(i\)th row and \(j\)th column are crossed out. [answer]

    See here for the inverse taken using row operations.

  7. Invert \begin{align*} A = \begin{pmatrix} -2 & 1 & 0\\ -1 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix} \end{align*} using row operations. Check the result by using \({A}^{-1} = \frac{1}{\det{A}} {C}^\mathsf{T}\). [answer]

    See here for the solution.

  8. What is the rank of a matrix? How does one find the rank of a matrix? [answer]

    The rank of a matrix is the dimension of the vector space spanned by its columns (or equivalently, rows). To find the rank of a matrix, the number of linearly independent columns of the matrix must be determined.

  9. Find the rank of \begin{align*} \begin{pmatrix} 1 & 2 & 1 \\ -2 & -3 & 1 \\ 3 & 5 & 0 \end{pmatrix} \end{align*} using row operations. [answer]

    The rank is 2. See here.

  10. Find the rank of \begin{align*} \begin{pmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9\\ 1 & 5 & 3 & 1\\ 1 & 2 & 0 & 8 \end{pmatrix} \end{align*} What is the rank of the transpose of this matrix? [answer]

    Using row operations leads to \begin{align*} \begin{pmatrix} 1 & 0 & -2 & 1 \\ 0 & 1 & 1 & 1\\ 0 & 0 & 0 & -5\\ 0 & 0 & 0 & 0 \end{pmatrix}\,, \end{align*} from which it can be seen that the third column is \(-2\vec{v}_1 + \vec{v}_2\), where \(\vec{v}_1\) and \(\vec{v}_2\) are the column vectors of the original matrix. However, the rest of the column vectors are linearly independent. Thus the rank of the matrix is 3. In general the rank of a matrix equals the rank of its transpose, so the rank of the transpose of this matrix is also 3.
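
    Both ranks can be confirmed numerically, assuming NumPy is available:

```python
# Rank checks for the two matrices above.
import numpy as np

A = np.array([[1, 2, 1], [-2, -3, 1], [3, 5, 0]])
B = np.array([[1, 3, 1, 4], [2, 7, 3, 9], [1, 5, 3, 1], [1, 2, 0, 8]])
print(np.linalg.matrix_rank(A))    # 2
print(np.linalg.matrix_rank(B))    # 3
print(np.linalg.matrix_rank(B.T))  # 3: rank of the transpose is the same
```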

  11. ☸ What are the four fundamental subspaces corresponding to an \(m\times n\) matrix? State the fundamental theorem of linear algebra. [answer]

    The four subspaces of an \(m\times n\) matrix \(A\) are listed below:

    1. The column space is the vector space spanned by the linearly independent columns of \(A\). That is to say, the linearly independent columns of \(A\) form the basis of the column space.
    2. The nullspace is the vector space spanned by all the vectors \(\vec{x}\) that satisfy \(A\vec{x} =\vec{0}\).
    3. The row space is just the column space of \(A^\mathsf{T}\).
    4. The left nullspace is just the nullspace of \(A^\mathsf{T}\).

    The fundamental theorem of linear algebra provides the following relations between the four subspaces:

    • The column space and row space both have dimension \(r\), which is the rank of the matrix. The nullspace has dimension \(n-r\), and the left nullspace has dimension \(m-r\).
    • The nullspace is orthogonal to the row space, and the column space is orthogonal to the left nullspace.

    The first statement makes sense because if \(n\) is the rank of \(A\) (i.e., \(r=n\)), then the matrix is full rank, i.e., \(A\) is invertible. Thus its nullspace has dimension \(0\); indeed, \(n-r= n-n = 0\). Now suppose that the rank of \(A\) is less than \(n\), i.e., \(n > r \). Then, there are \(n-r>0\) free variables in the solution to \(A\vec{x} = \vec{0}\). That is to say, \(\vec{x}_1, \vec{x}_2,\dots,\vec{x}_{n-r}\) form a basis of dimension \(n-r\) in the nullspace of \(A\).

    The second statement says that if \(\vec{v}\) is in the nullspace of \(A\), and if \(\vec{w}\) is in the row space of \(A\), then \(\vec{v}\cdot \vec{w} = 0\). This makes sense because each row vector \(\vec{w}\) of \(A\), when multiplied by a vector \(\vec{v}\) from the nullspace, gives \(0\) on the right-hand side of the equation \(A\vec{v} = 0\). (A similar argument can be made for the column space and the left nullspace).

  12. Prove the triangle inequality, \(|\vec{u} + \vec{v}| \leq |\vec{u}| + |\vec{v}|\). [answer]

    Note that \begin{align} |\vec{u} + \vec{v}|^2 &= (\vec{u} + \vec{v})\cdot(\vec{u} + \vec{v})\notag\\ &=|\vec{u}|^2 + |\vec{v}|^2 + 2|\vec{u}||\vec{v}|\cos\theta\,. \label{triangle1}\tag{i} \end{align} Meanwhile note that \begin{align} (|\vec{u}| + |\vec{v}|)^2 &= |\vec{u}|^2 + |\vec{v}|^2 + 2|\vec{u}||\vec{v}|\,. \label{triangle2}\tag{ii} \end{align} Comparing equations (\ref{triangle1}) and (\ref{triangle2}) and noting that \(|\cos\theta|\leq 1\) results in \begin{align*} |\vec{u}|^2 + |\vec{v}|^2 + 2|\vec{u}||\vec{v}|\cos\theta\ \leq |\vec{u}|^2 + |\vec{v}|^2 + 2|\vec{u}||\vec{v}| \end{align*} which is equivalent to \begin{align} |\vec{u} + \vec{v}|^2 \leq (|\vec{u}| + |\vec{v}|)^2\label{triangle3}\tag{iii} \end{align} Taking the square root of equation (\ref{triangle3}) gives the triangle inequality, \begin{align*} |\vec{u} + \vec{v}| \leq |\vec{u}| + |\vec{v}|\,. \end{align*}

  13. \(a_x\vec{e}_x + a_y\vec{e}_y + a_z\vec{e}_z\) is a Cartesian 3-vector. Does the subset of all vectors with \(a_z = 0\) constitute a vector space? If so, what is its dimension; if not, why not? [answer]

    Yes, \(a_x\vec{e}_x + a_y\vec{e}_y\) constitutes a vector space in the \(x\)-\(y\) plane of dimension \(2\).

  14. Does the subset of all vectors with \(a_z = 1\) constitute a vector space? Explain why or why not. [answer]

    No, the subset of vectors with \(a_z = 1\) does not constitute a vector space, for two reasons. First, the subset is not closed: linear combinations of these vectors result in vectors outside the subset. For example, adding two vectors from this subset would result in a vector with \(a_z = 2\), which is not in the subset. Second, this subset does not contain the null vector, \((0,0,0)\).

  15. Does the subset of all vectors with \(a_x = a_y = a_z\) constitute a vector space? Explain why or why not. [answer]

    Yes, this subset constitutes a vector space with dimension 1.

  16. Consider the collection of all polynomials in \(x\) with complex coefficients of degree less than \(N\). Does this set constitute a "vector" space? If so, suggest a convenient basis and provide its dimensions. [answer]

    Yes, this set constitutes a vector space with dimension \(N\). A convenient basis is \(1, x, x^2, \dots , x^{N-1}\) (\(N\) basis elements).

  17. Do polynomials that are even functions constitute a "vector" space? What about polynomials that are odd functions? [answer]

    Yes, the even polynomials (of degree less than \(N\)) constitute a vector space; a convenient basis is \(1, x^2, x^4, \dots\). The odd polynomials likewise constitute a vector space, with convenient basis \(x, x^3, x^5, \dots\). For even \(N\), each space has dimension \(N/2\), with basis elements up to \(x^{N-2}\) and \(x^{N-1}\), respectively.

  18. Do polynomials whose leading coefficient is \(1\) constitute a "vector" space? [answer]

    No, because the sum of two such polynomials would have a leading coefficient not equal to one, and therefore the sum would not be in the space.

  19. Do polynomials whose value is \(0\) at \(x=1\) constitute a "vector" space? [answer]

    Yes; a convenient basis is \( x-1, (x-1)^2, \dots, (x-1)^{N-1}\) (note that the constant \(1\) is excluded, since it does not vanish at \(x=1\)). The dimension is \(N-1\).

  20. Do polynomials whose value is \(1\) at \(x=0\) constitute a "vector" space? [answer]

    No, because the sum of two such polynomials would have the value \(2\) (not \(1\)) at \(x=0\), and therefore the sum would not be in the space.

  21. Provide definitions of the following types of matrices: symmetric, Hermitian, skew-symmetric, skew-Hermitian, orthogonal, unitary. [answer]

    \begin{align*} \text{Symmetric: }\quad A&= A^\mathrm{T}.\\ \text{Hermitian: }\quad A&= A^\dagger.\\ \text{Skew-symmetric: }\quad A &= -A^\mathrm{T}.\\ \text{Skew-Hermitian: }\quad A&= -A^\dagger.\\ \text{Orthogonal: }\quad A^{-1}&= A^\mathrm{T}.\\ \text{Unitary: }\quad A^{-1}&= A^\dagger. \end{align*}

  22. ☸ Prove that a real symmetric matrix \(S\) has real eigenvalues and orthogonal eigenvectors. [answer]

    First the eigenvalues: the eigenvalue equation is \(S \vec{x} =\lambda \vec{x}\), which, upon taking the complex conjugate and noting that \(S\) is real, gives \(S \vec{x}^* = \lambda^* \vec{x}^*\). Now consider the quantity \begin{align*} \lambda \vec{x}^* \cdot \vec{x} &= \vec{x}^* \cdot \lambda \vec{x} \end{align*} Invoking the eigenvalue equation \(S \vec{x} = \lambda \vec{x}\) gives \begin{align*} \lambda \vec{x}^* \cdot \vec{x} &= \vec{x}^* \cdot S \vec{x} \end{align*} By the definition of the transpose, \begin{align*} \lambda \vec{x}^* \cdot \vec{x} &= (S^\mathsf{T}\vec{x}^*) \cdot \vec{x}\\ &=(S\vec{x}^*) \cdot \vec{x}, \end{align*} where the second equality holds because \(S\) is symmetric. Invoking the conjugated eigenvalue equation \(S \vec{x}^* = \lambda^* \vec{x}^*\) gives \begin{align*} \lambda \vec{x}^* \cdot \vec{x} &=\lambda^* \vec{x}^* \cdot \vec{x}\\ \lambda |\vec{x}|^2 &= \lambda^* |\vec{x}|^2, \end{align*} or \(\lambda = \lambda^*\), i.e., \(\lambda\) is real.

    Now for the eigenvectors. Consider two distinct eigenvalue-eigenvector pairs of \(S\): \begin{align*} S\vec{x} = \lambda \vec{x},\qquad S\vec{y} = \mu \vec{y} \end{align*} Then consider the quantity \(S \vec{x} \cdot \vec{y}\). Using the definition of the transpose and the fact that \(S\) is symmetric gives \begin{align*} S \vec{x} \cdot \vec{y} &= \vec{x}\cdot S^\mathsf{T}\vec{y}\\ &=\vec{x}\cdot S \vec{y}\\ \lambda \vec{x} \cdot \vec{y} &=\vec{x}\cdot \mu \vec{y}, \end{align*} where the eigenvalue equations have been used in the last equality. Rearranging terms gives \begin{align*} \lambda \vec{x} \cdot \vec{y} &=\mu \vec{x}\cdot \vec{y}\\ (\lambda -\mu) \vec{x}\cdot\vec{y} &= 0\,. \end{align*} By the zero product property, and by noting that \(\lambda \neq \mu\) (i.e., they're distinct), \(\vec{x}\cdot\vec{y}= 0\), which means the eigenvectors are orthogonal.

  23. ☸ When solving \(A\vec{x} = \lambda \vec{x}\) where \(\vec{x} \neq 0\), what is the meaning of setting \(\det (A-I\lambda) = 0\)? [answer]

    Note that the equation \(A\vec{x} = \lambda \vec{x}\) can equivalently be written as \((A-I\lambda) \vec{x} = 0\). Call \((A-I\lambda)\equiv B\). The situation \(B\vec{x} = \vec{0}\) needs to be considered.

    Recall two theorems:

    Thm. 1: \(B\) is invertible if and only if \(\text{Null}(B) =\{0\}\), i.e., the only solution to \(B\vec{x} = \vec{0}\) is \(\vec{x}=0\),
    and
    Thm. 2: \(B\) is invertible if and only if \(\det B \neq 0\).

    Combining Thm. 1 and 2 to eliminate the statement "\(B\) is invertible" gives

    The only solution to \(B\vec{x} = \vec{0}\) is \(\vec{x}=0\) if and only if \(\det{B} \neq 0\).

    The contrapositive of this statement (which is logically equivalent to any conditional statement) is:

    \(\det{B} = 0\) if and only if \(\vec{x}= 0\) is not the only solution to \(B\vec{x} = \vec{0}\).

    Since the statement is an iff statement, its converse is also true:

    \(\vec{x}= 0\) is not the only solution to \(B\vec{x} = \vec{0}\) if and only if \(\det{B} = 0\).

    We are interested in cases in which \(\vec{x} =0\) is not the only solution to \((A - I\lambda)\vec{x} = \vec{0}\), and the above statement says that \(\vec{x}=0\) is not the only solution if and only if \(\det (A - I\lambda) = 0\). Thus we evaluate \(\det (A - I\lambda) = 0\) and solve for the eigenvalues and eigenvectors.

  24. Suppose \(A\vec{x} = \lambda \vec{x}\) and \(A\) is not full rank (i.e., it has at least two columns that are linearly dependent). What then must be at least one of its eigenvalues? [answer]

    At least one of the eigenvalues of \(A\) must be \(0\) if \(A\) is not full rank.

  25. Find the eigenvalues and eigenvectors of \begin{align*} {A} = \begin{pmatrix} 2 & -1\\ -1 & 2 \end{pmatrix} \end{align*} as well as the eigenvalues and eigenvectors of \({A}^2\), \({A}^{-1}\), and \({A} + 4{I}\) (without actually computing the eigenvalues and eigenvectors of the latter four matrices). [answer]

    The eigenvalues and eigenvectors of \({A}\) are \begin{align*} \lambda &= 1, \quad \vec{x}_1=\begin{pmatrix} 1 \\1 \end{pmatrix}\\ \lambda &= 3, \quad \vec{x}_2=\begin{pmatrix} 1 \\ -1 \end{pmatrix}\,. \end{align*} \({A}^2\), \({A}^{-1}\), and \({A} + 4{I}\) have the same eigenvectors as \({A}\), but the eigenvalues are respectively squared (\(1, 9\)), inverted (\(1, 1/3\)), and increased by four (\(5, 7\)).

  26. Find the eigenvalues and eigenvectors of \begin{align*} {S} = \begin{pmatrix} 1 & -1 & 0\\ -1 & 2 & -1\\ 0 & -1 & 1 \end{pmatrix}\,. \end{align*} Does the relative orientation of the eigenvectors make sense? [answer]

    The eigenvalues and eigenvectors of \({S}\) are \begin{align*} \lambda &= 0, \quad \vec{x}_1 =\begin{pmatrix} 1 \\1 \\1 \end{pmatrix}\\ \lambda &= 1, \quad \vec{x}_2 = \begin{pmatrix} 1 \\0\\ -1 \end{pmatrix}\\ \lambda &= 3, \quad \vec{x}_3 = \begin{pmatrix} 1 \\-2\\ 1 \end{pmatrix} \end{align*} Since the matrix is symmetric, it makes sense that the eigenvalues are real and the eigenvectors are orthogonal (see the proof above).
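
    A numerical confirmation, assuming NumPy is available (`eigh` is the routine for symmetric/Hermitian matrices):

```python
# Eigenpairs of the symmetric matrix S, plus an orthonormality check.
import numpy as np

S = np.array([[1.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
vals, vecs = np.linalg.eigh(S)
print(vals)                         # approximately [0, 1, 3]
print(np.round(vecs.T @ vecs, 12))  # identity: the eigenvectors are orthonormal
```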

  27. ☸ \(A\vec{x} = \lambda\vec{x}\) is a vector equation because the left- and right-hand sides are both vectorial. Recast \(A\vec{x} = \lambda\vec{x}\) into a matrix form (in which the LHS and RHS are matrices), where \(X\) is a matrix whose columns consist of the eigenvectors \(\vec{x}\), and where \(\Lambda = I\lambda\). Solve the resulting matrix equation for \(A\). How does this equation simplify if \(A\) is a symmetric matrix? What is the result for the symmetric matrix called? [answer]

    Writing the eigenvalue problem \(A\vec{x} = \lambda\vec{x}\) as a matrix equation gives \[AX = \Lambda X.\] The right hand side can be written as \(\Lambda X = I\lambda X = IX \lambda = XI\lambda = X \Lambda\): \[AX = X\Lambda .\] Multiplying both sides by \(X^{-1}\) on the right solves for \(A\): \[A = X \Lambda X^{-1}.\] For a symmetric matrix, \(A = A^{\mathsf{T}}\), so \(A = (X\Lambda X^{-1})^T =(X^{-1})^\mathsf{T} \Lambda^{\mathsf{T}} X^{\mathsf{T}} \). Since \(X\) consists of orthonormal eigenvectors (remember, the eigenvectors of a symmetric matrix are orthonormal), the matrix itself is orthogonal, i.e., \(X^\mathsf{T} = X^{-1}\). Thus \(A = X\Lambda X^{\mathsf{T}} \) for a symmetric matrix \(A\). This is called spectral decomposition, and the fact that any symmetric matrix can be written this way is called the spectral theorem.

  28. ☸ What is the recasting of of a matrix \(A\) as \(\Lambda = X^{-1}AX\) called? For what matrices can this procedure be carried out? What is the significance of the matrix \(X\), and why would someone want to perform this operation? How does this particular operation relate to general changes of basis? [answer]

    This is called diagonalization. It can be carried out if \(X^{-1}\) exists. Since the columns of \(X\) are simply the eigenvectors of \(A\), the eigenvectors of \(A\) must be linearly independent for \(X^{-1}\) to exist. In other words, the determinant of \(X\) cannot be \(0\) for \(A\) to be diagonalizable.

    The significance of the matrix \(X\) is that it represents the linear transformation from one basis into a better basis. In the first basis, the matrix \(A\) may not be diagonal, but in the better basis, the matrix is diagonalized into \(\Lambda\). This linear transformation, into a basis in which the matrix is diagonal, is a special type of change of basis, which can be thought of as a "change to the best basis." The more general change-of-basis operation (to a basis that does not necessarily diagonalize a matrix) is considered in the final three problems of this section. The form of the general change of basis is still "transformation \(\times\) matrix in old basis \(\times\) inverse transformation."

    At this juncture, it is helpful to keep in mind the distinction between the absolute and the relative. \(\mathsf{A}\) is an absolute quantity (for example, a tensor or operator), and it is represented by a relative quantity (a matrix) that changes in different bases. In one basis, it is represented by \(A\), and in the better basis, it is represented by the diagonal matrix \(\Lambda\), where the eigenvalues of \(\mathsf{A}\) lie along the diagonal of \(\Lambda\). To get from the first basis to the better basis, one multiplies vectors by the linear transformation \(X\), and one diagonalizes matrices by calculating \(\Lambda = X^{-1} A X \).

  29. ☸ Find the eigenvalues and eigenvectors of \begin{align*} {M} = \begin{pmatrix} 2 & 0 & -2\\ -2i & i & 2i\\ 1 & 0 & -1 \end{pmatrix}\,. \end{align*} Is it possible to diagonalize the matrix, i.e., find the decomposition \(M = X\Lambda X^{-1}\)? Suppose there is a vector represented by \(\vec{v} = (1,0,0)\) in the first basis; calculate the representation of this vector in the new (eigen)basis. [answer]

    After a considerable amount of manipulation, it is found that \begin{align*} \lambda_1 &= 0,\quad \vec{x}_1 = \begin{pmatrix}1 \\ 0 \\ 1\end{pmatrix}\\ \lambda_2 &= 1,\quad \vec{x}_2 = \begin{pmatrix}2 \\ 1-i \\ 1\end{pmatrix}\\ \lambda_3 &= i,\quad \vec{x}_3 = \begin{pmatrix}0 \\ 1 \\ 0\end{pmatrix} \end{align*} The eigenvectors span the space, because a matrix whose columns consist of these vectors has a nonzero determinant. Therefore it is possible to diagonalize \(M\). To find the decomposition \(M = X\Lambda X^{-1}\), note that \(X\) consists of columns that are the eigenvectors of \(M\), \begin{align*} X = \begin{pmatrix}1 & 2 & 0\\ 0 & 1-i & 1\\ 1& 1 &0\end{pmatrix}, \end{align*} the inverse of which is \(C^\mathsf{T}/|X|\), where \(C\) is the cofactor matrix: \begin{align*} X^{-1} = \frac{1}{|X|}\begin{pmatrix} -1 & 1& i-1\\ 0 & 0& 1\\ 2 &-1 &1-i \end{pmatrix}^\mathsf{T} = \begin{pmatrix} -1 & 0& 2\\ 1 & 0& -1\\ i-1 &1 &1-i \end{pmatrix} \end{align*} The decomposition is thus \begin{align*} M &= X \Lambda X^{-1}\\ &= \begin{pmatrix}1 & 2 & 0\\ 0 & 1-i & 1\\ 1& 1 &0\end{pmatrix} \begin{pmatrix}0 & 0 & 0\\ 0 & 1 & 0\\ 0& 0 &i\end{pmatrix} \begin{pmatrix} -1 & 0& 2\\ 1 & 0& -1\\ i-1 &1 &1-i \end{pmatrix} \end{align*} which, upon multiplying, does indeed recover \begin{align*} {M} = \begin{pmatrix} 2 & 0 & -2\\ -2i & i & 2i\\ 1 & 0 & -1 \end{pmatrix}\,. \end{align*} To rotate \(\vec{v} = (1,0,0)\) into the new basis, multiply \(X^{-1} \vec{v} = (-1,1,i-1)\). Why is the multiplication by \(X^{-1}\) and not \(X\)? See question 36.

  30. Factor \begin{align*} A = \begin{pmatrix} 1 & 2 \\ 0 &3 \end{pmatrix} \end{align*} into \(A = X\Lambda X^{-1}\). Without actually performing the diagonalization again, find the factorization for \(A^3\) and \(A^{-1}\). [answer]

    The eigenvalues and eigenvectors are found to be \begin{align*} \lambda_1 &= 1,\quad \vec{x}_1 = \begin{pmatrix} 1\\ 0 \end{pmatrix}\\ \lambda_2 &= 3,\quad \vec{x}_2 = \begin{pmatrix} 1\\ 1 \end{pmatrix} \end{align*} The eigendecomposition for \(A\) is thus \begin{align*} A = X\Lambda X^{-1} = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0\\ 0 & 3 \end{pmatrix} \begin{pmatrix} 1 & -1\\ 0 & 1 \end{pmatrix}\,. \end{align*} The eigendecomposition for \(A^3\) is obtained by simply cubing the eigenvalues, \begin{align*} A^3 = X\Lambda^3 X^{-1} = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0\\ 0 & 27 \end{pmatrix} \begin{pmatrix} 1 & -1\\ 0 & 1 \end{pmatrix}\,, \end{align*} and the eigendecomposition for \(A^{-1}\) is obtained by simply inverting the eigenvalues, \begin{align*} A^{-1} = X\Lambda^{-1} X^{-1} = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0\\ 0 & 1/3 \end{pmatrix} \begin{pmatrix} 1 & -1\\ 0 & 1 \end{pmatrix}\,. \end{align*} I verified these calculations in MATLAB.
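
    A NumPy version of that check (the text used MATLAB); the elementwise power is valid here because \(\Lambda\) is diagonal:

```python
# Verify A, A^3, and A^-1 from the eigendecomposition X L X^-1.
import numpy as np

X = np.array([[1.0, 1.0], [0.0, 1.0]])
Xinv = np.array([[1.0, -1.0], [0.0, 1.0]])
L = np.diag([1.0, 3.0])
print(X @ L @ Xinv)                      # recovers A = [[1, 2], [0, 3]]
print(X @ L**3 @ Xinv)                   # A^3 (diagonal entries cubed)
print(X @ np.diag([1.0, 1/3]) @ Xinv)    # A^-1 (diagonal entries inverted)
```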

  31. ☸ Qualitatively, what is the difference between \(\mathsf{A}\) (sans-serif) and \(A\) (italicized)? Similarly, what is the difference between \(\mathsf{v}\) and \(\vec{v}\)? [answer]

    \(\mathsf{A}\) is a tensor, while \(A\) is a matrix. \(A\) is a representation of \(\mathsf{A}\) in a chosen basis. The difference between \(\mathsf{A}\) and \(A\) is analogous to the difference between a quantum mechanical operator \(\mathcal{H}\) and its matrix representation in a particular basis \(\hat{H}\).

    Similarly, \(\mathsf{v}\) is a vector, while \(\vec{v}\) is that vector evaluated in a particular basis. The difference between \(\mathsf{v}\) and \(\vec{v}\) is analogous to the difference between the ket \(|\psi\rangle\) and the wave function \(\psi(x)\) in quantum mechanics: the latter is a representation of the former in the basis of Euclidean 3-space. It is also analogous to the difference between the operator \(\text{curl}\) and its representation in Cartesian coordinates, \(\nabla \times\) (or any of the vector calculus operators, like divergence, gradient, and the Laplacian).

  32. Let \(\mathsf{A}\) and \(\mathsf{B}\) be two dyadic tensors (dyads). Show that \((\mathsf{A}\mathsf{B})^\mathsf{T} = \mathsf{B}^\mathsf{T} \mathsf{A}^\mathsf{T}\). Hint: introduce two auxiliary vectors \(\mathsf{u}\) and \(\mathsf{v}\) and use the definition of transpose, \((\mathsf{A}^\mathsf{T} \mathsf{u})\cdot \mathsf{v} = (\mathsf{A} \mathsf{v})\cdot \mathsf{u}\) twice. [answer]

    Apply the definition of the transpose twice: \begin{align*} [(\mathsf{A}\mathsf{B})^\mathsf{T} \mathsf{u}]\cdot \mathsf{v} &= [(\mathsf{A}\mathsf{B}) \mathsf{v}]\cdot \mathsf{u}\\ &=\mathsf{A}[\mathsf{B} \mathsf{v}]\cdot \mathsf{u}\\ &=(\mathsf{A}^\mathsf{T}\mathsf{u})\cdot[\mathsf{B} \mathsf{v}]\\ &=\mathsf{B} \mathsf{v}\cdot \mathsf{A}^\mathsf{T}\mathsf{u}\\ &=\mathsf{B}^\mathsf{T} (\mathsf{A}^\mathsf{T} \mathsf{u}) \cdot \mathsf{v}\\ &=\mathsf{B}^\mathsf{T} \mathsf{A}^\mathsf{T} \mathsf{u} \cdot \mathsf{v} \end{align*} Since \(\mathsf{u}\) and \(\mathsf{v}\) are arbitrary, the proof is complete.

  33. Let \(\mathsf{A}\) and \(\mathsf{B}\) be two invertible dyads. Show that \((\mathsf{A}\mathsf{B})^{-1} = \mathsf{B}^{-1} \mathsf{A}^{-1}\). [answer]

    Multiply \(\mathsf{u}\) on the right-hand side by the identity tensor \(\mathsf{I}\) twice: \begin{align*} (\mathsf{A}\mathsf{B})^{-1} \mathsf{u} &= (\mathsf{A}\mathsf{B})^{-1} \mathsf{I}\mathsf{I}\mathsf{u}\\ &=(\mathsf{A}\mathsf{B})^{-1} \mathsf{I}\mathsf{A}\mathsf{A}^{-1}\mathsf{u}\\ &=(\mathsf{A}\mathsf{B})^{-1} \mathsf{A}\mathsf{I}\mathsf{A}^{-1}\mathsf{u}\\ &=(\mathsf{A}\mathsf{B})^{-1} \mathsf{A}\mathsf{B}\mathsf{B}^{-1}\mathsf{A}^{-1}\mathsf{u}\\ &= \mathsf{B}^{-1}\mathsf{A}^{-1}\mathsf{u} \end{align*} Since \(\mathsf{u}\) is arbitrary, the proof is complete.

  34. Prove that any tensor \(\mathsf{T}\) can be written as the sum of a symmetric tensor \(\mathsf{S}\) and an antisymmetric tensor \(\mathsf{A}\). Similarly prove that any tensor can be written as the sum of a real tensor \(\mathsf{R}\) and an imaginary tensor \(\mathsf{M}\). Finally prove that any tensor can be written as the sum of a Hermitian tensor \(\mathsf{H}\) and a skew-Hermitian tensor \(\mathsf{K}\). [answer]

    Note that a symmetric matrix is constructed by adding \(\mathsf{T}\) and its transpose, while an antisymmetric matrix is constructed by subtracting the transpose from \(\mathsf{T}\). \begin{align*} \mathsf{S} &= \frac{1}{2}(\mathsf{T} + \mathsf{T}^\mathsf{T}) \\ \mathsf{A} &= \frac{1}{2}(\mathsf{T} - \mathsf{T}^\mathsf{T}) \\ \end{align*} Adding the above equations shows that \(\mathsf{T} = \mathsf{S} + \mathsf{A}\).

    Next, note that a real matrix is constructed by adding \(\mathsf{T}\) and its conjugate, while an imaginary matrix is constructed by subtracting the conjugate from \(\mathsf{T}\). \begin{align*} \mathsf{R} &= \frac{1}{2}(\mathsf{T} + \mathsf{T}^*) \\ \mathsf{M} &= \frac{1}{2}(\mathsf{T} - \mathsf{T}^*) \\ \end{align*} Adding the above equations shows that \(\mathsf{T} = \mathsf{R} + \mathsf{M}\).

    Finally, note that a Hermitian matrix is constructed by adding \(\mathsf{T}\) and its Hermitian conjugate, while a skew-Hermitian matrix is constructed by subtracting the Hermitian conjugate from \(\mathsf{T}\). \begin{align*} \mathsf{H} &= \frac{1}{2}(\mathsf{T} + \mathsf{T}^\dagger) \\ \mathsf{K} &= \frac{1}{2}(\mathsf{T} - \mathsf{T}^\dagger) \\ \end{align*} Adding the above equations shows that \(\mathsf{T} = \mathsf{H} + \mathsf{K}\).

  35. How does one test to see if \(\mathsf{A}\) and \(\mathsf{B}\) share the same independent eigenvectors? Provide examples in physics of two linear operators that share eigenvectors, and two linear operators that do not. [answer]

    They share the same independent eigenvectors iff \(\mathsf{AB}= \mathsf{BA}\), i.e., they commute. An example of linear operators that commute and therefore share eigenvectors is \(L^2\) and \(L_z\); an example of operators that do not commute and therefore do not share eigenvectors is \(L_x\) and \(L_z\). Here \(L\) is the orbital angular momentum operator of quantum mechanics, and \(L_n\) is its projection onto the \(n\)th Cartesian axis.

  36. ☸ For some tensorial linear equation \(\mathsf{A}\mathsf{x}= \mathsf{b}\), suppose there is one representation of this equation in the primed basis \('\), \[A'\vec{x}' = \vec{b}'\] and another representation in the unprimed basis, \[A \vec{x} = \vec{b}.\] Given the unprimed basis is a linear transformation of the primed basis, i.e., \(S\vec{v}' = \vec{v}\), find the matrix representation of \(\mathsf{A}\) in the unprimed basis in terms of \(A'\) and \(S\). What is the relationship between the eigenvalues of \(\mathsf{A}\), \(A\) and \(A'\)? What are \(A\) and \(A'\) called with respect to one another? How does this change of basis relate to diagonalization? [answer]

    See here for the manipulations that lead to \(A = SA'S^{-1}\). The relationship between the eigenvalues of \(\mathsf{A}\), \(A\) and \(A'\), is that they are all equal. The eigenvalues are independent of the basis, and the eigenvalues of \(\mathsf{A}\) can be calculated using the so-called principal invariants (see Stern's nonlinear continuum mechanics notes, ch. 1). Matrices \(A= SA'S^{-1}\) and \(A'\) are called similar.

    Compare the change-of-basis, \(A = S A' S^{-1}\), to diagonalization, \(\Lambda = X^{-1} A' X\). Usually the diagonal matrix \(\Lambda\) is the representation in the new basis. Thus \(S\) is analogous to \(X^{-1}\), i.e., when the columns of \(S^{-1}\) are composed of the eigenvectors of \(A'\), the change of basis recovers the eigendecomposition. Since vectors transform from the old basis to the new basis as \(S\vec{v}' = \vec{v}\), the transformation from the old (primed) basis \('\) to the new (unprimed) basis in which the representation of \(\mathsf{A}\) is diagonal is \(X^{-1}\vec{v}' = \vec{v}\).

  37. ☸ Show that the eigenvalues \(\lambda\) satisfying \(A\vec{x} = \lambda \vec{x}\) (unprimed basis) are invariant under the transformation to the primed \('\) basis, where the linear transformation between unprimed and primed bases is \(\vec{v} = S\vec{v}'\). In other words, show that \(A'\vec{x}' =\lambda \vec{x}'\) [answer]

    Start with the eigenvalue problem in the unprimed basis. Invoke the linear transformation between the unprimed and primed bases, \(\vec{x} = S\vec{x}'\). Multiply both sides by the inverse of the linear transformation, and identify \(S^{-1}A S = A'\) (see previous problem). Note that the equation is an eigenvalue problem now in the primed basis, with the same eigenvalue \(\lambda\). \begin{align*} A\vec{x} &= \lambda\vec{x}\\ AS \vec{x}' &= \lambda S \vec{x}'\\ S^{-1}A S \vec{x}' &= \lambda \vec{x}' \\ A' \vec{x}' &= \lambda \vec{x}' \end{align*} Thus the eigenvalues are invariant under linear transformations of matrices.

  38. Do the trace and determinant of a tensor depend on the basis? Provide expressions for the trace and determinant of a tensor \(\mathsf{A}\) (with an \(n\times n\) matrix representation) in terms of the eigenvalues of \(\mathsf{A}\). [answer]

    Like the eigenvalues themselves, the trace and the determinant are independent of basis. \(\text{Tr } \mathsf{A} = \sum_n \lambda_n \) and \(\text{det } {\mathsf{A}} = \prod_n \lambda_n \).

  39. ☸ Find the eigenvalues and eigenvectors of \begin{equation*} \begin{pmatrix} 1 & 1& 1\\ 1 & 1& 1\\ 1 & 1& 1 \end{pmatrix}\,. \end{equation*} Comment on the orientation of the eigenvectors. Are they orthogonal? Why or why not? If not, does this contradict the earlier claim that the eigenvectors of a real symmetric matrix (problem 22) are orthogonal? And if the eigenvectors are not orthogonal, can one make them orthogonal? If so, do this. [answer]

    See here for the solution. In summary, two of the eigenvalues \(\lambda_1\) and \(\lambda_2\) are degenerate, and their corresponding eigenvectors \(\vec{v}_1\) and \(\vec{v}_2\) are found to not be orthogonal. Meanwhile, both \(\vec{v}_1\) and \(\vec{v}_2\) are orthogonal to \(\vec{v}_3\). This is consistent with the claim made in problem 22, since that statement was restricted to non-degenerate eigenvalues. To orthogonalize the eigenvectors \(\vec{v}_1\) and \(\vec{v}_2\), the so-called Gram-Schmidt procedure is used. Griffiths notes that this procedure rarely needs to be carried out in practice, but that it can be done in principle.
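
    A minimal Gram-Schmidt sketch, assuming NumPy is available, applied to two non-orthogonal eigenvectors of the all-ones matrix for the degenerate eigenvalue \(0\) (any vector whose components sum to zero is such an eigenvector):

```python
# Orthonormalize a list of vectors by subtracting projections onto earlier ones.
import numpy as np

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v - sum((v @ b) * b for b in basis)  # remove components along earlier basis vectors
        basis.append(w / np.linalg.norm(w))
    return basis

v1, v2 = np.array([1.0, -1.0, 0.0]), np.array([1.0, 0.0, -1.0])
b1, b2 = gram_schmidt([v1, v2])
print(b1 @ b2)  # approximately 0: the pair is now orthogonal
```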

Ordinary differential equations

Each numbered equation in this section represents a unique type of differential equation. For a thorough review of this section, be sure to know how to solve each type of differential equation.

  1. Define a homogeneous function. Provide an example of a homogeneous function. [answer]

    Simply put, \[f(sx_1, \dots, sx_n) = s^k f(x_1,\dots, x_n)\] is a homogeneous function, where \(k\) is the "degree of homogeneity." For example, \(f(x,y,z) = x^5 y^2 z^3\) is homogeneous of degree 10 because \(f(\alpha x,\alpha y,\alpha z) = (\alpha x)^5 (\alpha y)^2 (\alpha z)^3 = \alpha^{10} f(x,y,z)\).

  2. Define a homogeneous polynomial and provide an example. [answer]

    Similar to a homogeneous function, a homogeneous polynomial is defined by \[P(sx_1, \dots, sx_n) = s^k P(x_1,\dots, x_n)\] where \(P\) is some multivariate polynomial. Simply put, a polynomial is homogeneous if its terms all have the same degree. An example of a homogeneous polynomial is \(x^2 + 2xy + y^2\).

  3. What defines the homogeneity of a rational function? [answer]

    A rational function is homogeneous if it consists of a ratio of two homogeneous polynomials.

  4. Define the linearity of a function \(f(x)\). Linear functions obey the _________ principle. [answer]

    The properties \(f(x+y) = f(x) + f(y)\) and \(f(ax) = af(x)\) for all \(a\) together define linearity. Linear functions obey the superposition principle.

  5. Is a homogeneous function always linear? Is a linear function always homogeneous? [answer]

    A homogeneous function is not always linear. For example, \(f(x,y,z) = x^5 y^2 z^3\) is homogeneous but not linear. However, a linear function is always homogeneous (of degree 1), because degree-1 homogeneity is one of the two defining properties of linear functions.

  6. What defines a homogeneous first-order ordinary differential equation? Provide an example of a first-order ODE that is homogeneous, as well as an example of one that is inhomogeneous. [answer]

    A homogeneous ordinary differential equation is of the form \(\frac{dy}{dx} = f(x,y) = f\bigg(\frac{y}{x}\bigg)\). An example of a first-order ODE that is homogeneous is \(\frac{dy}{dx} = \frac{\sin (y/x)}{y/x} + (y/x)^2\), while an example of a first-order ODE that is inhomogeneous is \(\frac{dy}{dx} = y+x\).

  7. Classify the differential equation \begin{equation}\label{ode1}\tag{1} \frac{dy}{dx} = a(x) y\,. \end{equation} Find the general solution to equation (\ref{ode1}). [answer]

    This is a linear, homogeneous, separable ordinary differential equation. The general solution is \[y = \pm C e^{\int a(x)dx}\,.\]

  8. Classify and solve \(\frac{dy}{dx} = [x + \cos(x) ]y\,.\) for \(y\). [answer]

    This is a linear, homogeneous, separable ordinary differential equation with \(a(x) = x + \cos x\). The general solution is \[y = C e^{\int a(x)dx} = Ce^{x^2/2 + \sin x}\,.\]

  9. Classify and solve \(\frac{dy}{dx} = (y-x)^2\). The solution can remain in an implicit form. Hint: is there a transformation that can make this ODE separable? [answer]

    As written, this is a nonlinear, homogeneous, inseparable ordinary differential equation. However, it can be made separable by making the substitution \(z = y-x\). Then, the ODE becomes \[\frac{dz}{dx} = z^2-1,\] and the general solution is \[\ln{\frac{z-1}{z+1}} = 2x + C\,,\] where \(z = y - x\).

  10. Classify and solve \(\frac{dy}{dt} = \cos(y-t)\). Follow the hint above. [answer]

    As written, this is a nonlinear, homogeneous, inseparable ordinary differential equation. However, it can be made separable by making the substitution \(z = y-t\). Then, the ODE becomes \[\frac{dz}{dt} = \cos(z)-1,\] and the general solution is found by performing the integral \[\int \frac{dz}{\cos z- 1} = t + C.\] I used Wolfram to calculate the integral (it is doable, but involves a half-angle identity). The solution is \[\cot{\frac{z}{2}} = t + C,\] or in an explicit form, \[z = 2 \text{arccot} (t + C)\,.\]

  11. Classify and solve \( y + \sqrt{xy} = x\frac{dy}{dx}\), where \(x>0\). Hint: is there a transformation that can make this ODE separable? This time it is a multiplicative transformation, unlike the additive transformations in the previous two examples. [answer]

    First the ODE is rearranged into the form, \begin{align*} x\frac{dy}{dx} &= y + \sqrt{xy}\\ \frac{dy}{dx} &= \frac{y}{x} + \frac{\sqrt{xy}}{x}\\ \frac{dy}{dx} &= \frac{y}{x} + {\sqrt{\frac{y}{x}}} \end{align*} From the last equation it can be seen that the ODE is homogeneous (because it is of the form \(dy/dx= f(y/x)\)). Making the substitution \(y/x = u\) results in \begin{align*} x\frac{du}{dx} + u &= u + \sqrt{u}\\ x\frac{du}{dx} &= \sqrt{u}\\ \int u^{-1/2} du &= \int \frac{dx}{x}\\ 2 u^{1/2} &= \ln{x} + C\\ y^{1/2}/x^{1/2} &= \frac{1}{2}(\ln{x} + C)\\ y &= x(\frac{1}{2}\ln{x} + C)^2 \end{align*}

  12. Classify the differential equation \begin{equation}\label{ode2}\tag{2} \frac{dy}{dx} = a(x) y + b(x)\,. \end{equation} Find the general solution to equation (\ref{ode2}) by assuming the form of solution to equation (\ref{ode1}), but with \(C = C(x)\). What is the name of this solution? [answer]

    This is a linear, inhomogeneous first-order ordinary differential equation.

    The form of solution is assumed to be \(y = C(x)e^{\int a(x)dx}\). Inserting this solution into equation (\ref{ode2}) and applying the product rule gives \begin{align*} e^{\int a(x)dx}\frac{d}{dx} C(x) + C(x) \frac{d}{dx} e^{\int a(x)dx} &= a(x)y + b(x)\\ \end{align*} Note that \(\frac{d}{dx} e^{\int a(x)dx} = e^{\int a(x)dx} \frac{d}{dx}\int a(x)dx = a(x)e^{\int a(x)dx} \). Therefore, \begin{align*} e^{\int a(x)dx}\frac{d}{dx} C(x) + C(x)a(x)e^{\int a(x)dx} &= a(x)y + b(x)\\ &= a(x)C(x)e^{\int a(x)dx} +b(x) \end{align*} Canceling the common term above gives \begin{align*} e^{\int a(x)dx}\frac{d\, C(x)}{dx} &=b(x) \end{align*} Solving for \(C(x)\) gives \begin{align*} C(x) &= \int b(x) e^{-\int a(x)dx} dx + C_0 \end{align*} Substituting this equation for \(C(x)\) into the assumed form of solution gives \begin{align*} y = e^{\int a(x) dx}\bigg[\int b(x) e^{-\int a(x)dx} dx + C_0\bigg] \end{align*} This is called the Cauchy equation; the trick of promoting the constant to \(C(x)\) is known as variation of the constant (or variation of parameters).

  13. Classify and solve \((1 + x^2)y' + 2xy = \cos x\). [answer]

    This is a linear, inhomogeneous first-order ordinary differential equation. Rewriting it as \(y' = \frac{\cos x}{1 + x^2} - \frac{2xy}{1 + x^2} \) renders it in the form of \(y' = a(x)y + b(x)\). Thus the Cauchy formula \begin{align*} y = e^{\int a(x) dx}\bigg[\int b(x) e^{-\int a(x)dx} dx + C\bigg] \end{align*} can be applied with \begin{align*} a(x)&= -\frac{2x}{1 + x^2}\\ b(x)&=\frac{\cos x}{1 + x^2}\,. \end{align*} Noting that \begin{align*} \int a(x) dx &= -\int \frac{2x}{1 + x^2} dx\\ &= -\ln(1+x^2) \quad\text{and}\\ \int b(x) e^{-\int a(x)dx}dx &= \int \frac{\cos(x)}{1 + x^2} e^{\ln(1+x^2)} dx\\ &=\int \cos{x}\, dx = \sin{x}\,, \end{align*} the Cauchy equation gives \begin{align*} y &= e^{\ln(1+x^2)^{-1}}(\sin x + C)\\ &=\frac{\sin x + C}{1+x^2}\,, \end{align*} where the single constant \(C\) absorbs the constants of integration.
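
    This result can be cross-checked with SymPy's `dsolve`, assuming SymPy is available:

```python
# Solve (1 + x^2) y' + 2 x y = cos(x) symbolically.
from sympy import Function, Eq, dsolve, symbols, cos

x = symbols('x')
y = Function('y')
ode = Eq((1 + x**2) * y(x).diff(x) + 2 * x * y(x), cos(x))
print(dsolve(ode, y(x)))  # y(x) = (C1 + sin(x))/(x**2 + 1)
```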

  14. Classify the differential equation \begin{equation}\label{bern}\tag{3} \frac{dy}{dx} = a(x)y + b(x)y^n \end{equation} where \(n\neq 0,1\) is real. What is the name of this equation? What substitution does one make to facilitate its solution? [answer]

    This is a nonlinear, inhomogeneous first-order ordinary differential equation. It is called the Bernoulli equation.

    To solve this equation, one should seek a substitution that makes the equation linear. Let \(y=z^\alpha\), and therefore \(dy/dx = \alpha z^{\alpha-1}dz/dx\). Then equation (\ref{bern}) becomes \begin{align*} \alpha \frac{dz}{dx} = a(x)z + b(x)z^{\alpha n -\alpha + 1} \end{align*} In order for the above equation to be reduced to a linear inhomogeneous equation [see equation (\ref{ode2})], one should set the exponent of \(z\) multiplying \(b(x)\) equal to \(0\), which results in \begin{align*} \alpha = \frac{1}{1-n} \end{align*}

  15. Classify and solve \(xy dy = (y^2 + x)dx\). [answer]

    Rearrange the equation into its standard form: \( \frac{dy}{dx} = \frac{y^2 + x}{xy} = \frac{y}{x} + \frac{1}{y}\). This is a nonlinear inhomogeneous ordinary differential equation, \(\frac{dy}{dx} = a(x)y + b(x)y^n\) with \(a(x) = \frac{1}{x}\), \(b(x) = 1\), and \(n = -1\). To transform the ODE into a linear one, set \(y=z^\alpha\), where \(\alpha = \frac{1}{1-n} = \frac{1}{2},\) resulting in \begin{align*} \alpha\frac{dz}{dx} &= a(x)z + b(x)\\ \frac{1}{2}\frac{dz}{dx} &= \frac{1}{x}z + 1\\ \frac{dz}{dx} &= \frac{2}{x}z + 2\\ \end{align*} \(\frac{dz}{dx} = \frac{2}{x}z + 2\) is now a linear inhomogeneous ordinary differential equation whose solution is given by the Cauchy equation: \begin{align*} z &= e^{\int \frac{2}{x}dx}\, \bigg( 2\int e^{-\int \frac{2}{x}dx}dx + C\bigg)\\ &= x^2\bigg(2\int \frac{dx}{x^2} + C\bigg)\\ &= x^2\bigg(\!-\frac{2}{x} + C\bigg)\\ &= Cx^2-2x \end{align*} The solution for \(y\) is thus \(y= \pm\sqrt{Cx^2-2x}\).

    In my ODE notes from UT Dallas, I had \(y= \pm\sqrt{Cx^2-2x}\), which agrees with the above. Note that the constant of integration in the Cauchy equation sits inside the brackets, where it is multiplied by \(x^2\); treating it as a purely additive constant would (incorrectly) give \(y= \pm\sqrt{C-2x}\).
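
    The same cross-check with SymPy's `dsolve` supports this answer (the printed form of the constant may differ), assuming SymPy is available:

```python
# Solve the Bernoulli equation x*y*y' = y^2 + x symbolically.
from sympy import Function, Eq, dsolve, symbols

x = symbols('x')
y = Function('y')
ode = Eq(x * y(x) * y(x).diff(x), y(x)**2 + x)
print(dsolve(ode, y(x)))  # both branches of y(x)**2 = C1*x**2 - 2*x
```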

  16. Classify the differential equation \begin{equation}\label{exact}\tag{4} M(x,y)dx + N(x,y)dy = 0\,,\quad \text{where } \frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}\,. \end{equation} What is the general solution of equation (\ref{exact}) in integral form? [answer]

    This is an exact equation, and the solution in integral form is \(F = \int \frac{\partial F}{\partial x}dx = \int M dx\). Equivalently, the solution is \(F = \int \frac{\partial F}{\partial y}dy = \int N dy\). One should take the integral over whichever variable is easier to integrate over. After taking the integral over one variable, the unknown function of the other variable is determined by taking the derivative of \(F\) with respect to the other variable, and setting this equal to \(M\) if the other variable is \(x\), or \(N\) if the other variable is \(y\).

  17. Classify and solve \((4x^3 + 3y)dx + (3x + 4y^3)dy = 0\). [answer]

    This is an exact differential equation because \begin{align*} \frac{\partial M}{\partial y} = \frac{\partial N}{\partial x} = 3 \end{align*} Therefore the solution is \begin{align*} F(x,y) = \int \frac{\partial F}{\partial x} dx &= \int (4x^3 + 3y)dx\\ &= x^4 + 3xy + C(y) \end{align*} To determine \(C(y)\), take the partial derivative with respect to \(y\) and set this equal to \(N = 3x + 4y^3\): \begin{align*} 3x + C'(y) &= 3x + 4y^3\\ C'(y) &= 4y^3\\ C(y) &= y^4 \end{align*} Thus the solution is \begin{align*} F(x,y) = y^4 + x^4 + 3xy\,. \end{align*}
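
    A symbolic check of the exactness test and the potential \(F\), assuming SymPy is available:

```python
# Verify exactness of (4x^3 + 3y)dx + (3x + 4y^3)dy = 0 and recover F(x, y).
from sympy import symbols, diff, integrate

x, y = symbols('x y')
M = 4 * x**3 + 3 * y
N = 3 * x + 4 * y**3
print(diff(M, y) == diff(N, x))      # True: the equation is exact
Fx = integrate(M, x)                 # x**4 + 3*x*y, plus an unknown C(y)
Cy = integrate(N - diff(Fx, y), y)   # recovers C(y) = y**4
print(Fx + Cy)                       # x**4 + 3*x*y + y**4
```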

  18. How does one solve \begin{equation}\label{integratingfactor}\tag{5} M(x,y)dx + N(x,y)dy = 0\,,\quad \text{but now where } \frac{\partial M}{\partial y} \neq \frac{\partial N}{\partial x}\,? \end{equation} What is the name of this method? [answer]

    One can solve equation (\ref{integratingfactor}) by multiplying through by \(\mu(x)\) or \(\mu(y)\) and then requiring that the mixed partials of \(F\) be equal. Note that multiplying through by \(\mu(x,y)\) leads to a partial differential equation, which is not desired in this context.

    In this derivation, multiply through by \(\mu(x)\), \begin{align*} \mu(x)M(x,y)dx + \mu(x)N(x,y)dy = 0\,, \end{align*} and then set \begin{align} \frac{\partial}{\partial y}[\mu(x)M(x,y)] &= \frac{\partial}{\partial x}[\mu(x)N(x,y)] \notag\\ \mu \frac{\partial M}{\partial y} &= \frac{d\mu}{dx} N + \mu \frac{\partial N}{\partial x}\notag\\ \frac{d\mu}{dx} N&= \mu \bigg[\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \bigg] \notag\\ \frac{1}{\mu}\frac{d\mu}{dx} &= \frac{\partial M/\partial y - \partial N/\partial x}{N}\tag{i}\label{interatheraway} \end{align} Equation (\ref{interatheraway}) can be integrated and solved for \(\mu\). Thus \(\mu\) is called the integrating factor, and this method is called the ''integrating factor method.''

    Note that the right-hand side of equation (\ref{interatheraway}) must be a function of \(x\) only, in the case above. If it is not (i.e., if it is a function of \(y\) as well), then try multiplying the original equation by \(\mu(y)\), for which \begin{align*} \frac{1}{\mu}\frac{d\mu}{dy} &= \frac{\partial N/\partial x- \partial M/\partial y}{M} \end{align*}

  19. Classify and solve \((x+2)\sin y\, dx + x \cos y \, dy=0\). [answer]

    This looks like it might be in the form of an exact equation, but since \(\partial M/\partial y = (x+2) \cos y \neq \partial N/\partial x = \cos y \), it requires the integrating factor method to turn it into an exact equation. An integrating factor \(\mu(x)\) is attempted. \begin{align*} \frac{1}{\mu}\frac{d\mu}{d x} &= \frac{(x+2) \cos y - \cos y}{x \cos y} = 1 + \frac{1}{x}\\ \int \frac{d\mu}{\mu} &= \int \bigg(1 + \frac{1}{x}\bigg)dx\\ \ln \mu &= x + \ln x\\ \mu &= e^x e^{\ln x}= xe^x \end{align*} Now the original ODE becomes \begin{align*} xe^x(x+2)\sin y\, dx + x^2 e^x \cos y \, dy=0 \end{align*} The solution is \begin{align*} F(x,y) = \int \frac{\partial F}{\partial x}dx = \dots \end{align*} However, before proceeding with this option, note that in this case it is much easier to find \begin{align*} F(x,y) = \int \frac{\partial F}{\partial y} dy = \int x^2 e^x \cos y\, dy = x^2 e^x \sin y + C(x)\,. \end{align*} The function \(C(x)\) is found by differentiating \(F\) with respect to \(x\) and setting this equal to \(x e^x (x+2)\sin y\): \begin{align*} \frac{\partial F}{\partial x} = (2x e^x + x^2 e^x) \sin y + C'(x) &= x e^x (x+2)\sin y \end{align*} Thus \[C'(x) = x^2 e^x \sin y + 2x e^x \sin y - (2x e^x + x^2 e^x) \sin y = 0,\] and thus \(C(x) = \text{const}\). The solution is therefore \(F(x,y) = x^2 e^x \sin y + \text{const}\).
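    The integrating factor can likewise be checked with SymPy (a sketch, assuming SymPy is available): after multiplying through by \(\mu = xe^x\), the mixed partials agree, and \(F = x^2e^x\sin y\) reproduces both coefficients.

        # Sketch: after multiplying by mu = x*exp(x), the equation is exact,
        # and F = x**2*exp(x)*sin(y) satisfies F_x = M and F_y = N.
        import sympy as sp

        x, y = sp.symbols('x y')
        mu = x*sp.exp(x)
        M = mu*(x + 2)*sp.sin(y)
        N = mu*x*sp.cos(y)
        print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))  # 0: now exact
        F = x**2*sp.exp(x)*sp.sin(y)
        print(sp.simplify(sp.diff(F, x) - M), sp.simplify(sp.diff(F, y) - N))  # 0 0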

  20. Classify the differential equation \begin{equation}\label{secondorder}\tag{6} ay'' + by' + cy = 0\,, \end{equation} where \(a\), \(b\), and \(c\) are constants, and where the prime \('\) will now be used for notational ease to signify derivative. How does one solve this equation? Three possible cases emerge. What are they? [answer]

    This is a linear, second-order ordinary differential equation with constant coefficients. It is solved by making the substitution \(y = e^{rx}\), which leads to a quadratic equation in \(r\). Note that this generalizes to higher derivatives, leading to polynomials of higher degree to solve.

    Three cases emerge.

    1. The solutions \(r\) are real and distinct (discriminant \(b^2-4ac>0\)). In this case, the solution to the 2nd-order ODE is exponential decay and/or growth.
    2. The solutions \(r\) are complex (discriminant \(b^2-4ac<0\)). In this case, the solution to the 2nd-order ODE is oscillatory, and can be written either as complex exponentials or as sines and cosines.
    3. The solutions \(r\) are equal, i.e., a double root (discriminant \(b^2-4ac=0\)). In this case, the second solution is \(x\) times the first. This will be proved later.

  21. Classify and solve \(y''' + 4y'' + 9y' + 10y = 0\). [answer]

    This is a linear, third-order ordinary differential equation. Follow the procedure above and obtain the characteristic cubic equation in \(r\), \(r^3 + 4r^2 + 9r + 10 = 0\). The three solutions of this equation are \( r= -2\), \( r= -1 + 2i\), and \( r= -1 -2i\). Thus the solution is \(y= C_1 e^{-2x}+ C_2 e^{-x}\cos(2x)+ C_3 e^{-x}\sin(2x) \).
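    The characteristic roots are easy to check numerically (a sketch assuming NumPy is available):

        # Sketch: roots of the characteristic polynomial r**3 + 4*r**2 + 9*r + 10.
        import numpy as np

        print(np.roots([1, 4, 9, 10]))  # approximately -1+2j, -1-2j, -2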

  22. Classify and solve \(y''' + 6y'' + 12y' + 8y = 0\). [answer]

    This is also a linear, third-order ordinary differential equation. Following the same procedure as above, obtain the characteristic cubic equation in \(r\), \(r^3 + 6r^2 + 12r + 8 = 0\), which has the triple root \( r= -2\). The solution is therefore \(y= C_1 e^{-2x}+ C_2 xe^{-2x} + C_3 x^2e^{-2x}\).

  23. What is the definition of the Wronskian? What information does it provide? [answer]

    The Wronskian \(W\) is defined as \begin{align*} W = \begin{vmatrix} y_1(x) & y_2(x)\\ y_1'(x) & y_2'(x) \end{vmatrix} =y_1 y_2' - y_2 y_1'\,, \end{align*} where \(y_1\) and \(y_2\) are two solutions to a second-order linear ordinary differential equation, and where \('\) signifies derivative. Two solutions of the same equation are called a fundamental pair if \(W \neq 0\).

    There is another way to calculate \(W\): Abel's formula for the Wronskian, \(W = Ce^{-\int p(x)\, dx}\), where the ODE is written as \(y'' + p(x)y' +q(x)y = 0\). However, I don't think it's worth reviewing.

  24. Determine whether \(y_1 = x\) and \(y_2 = \ln x\) are a fundamental pair, and if so, on what interval. What about \( y_1 = \arccos \frac{x}{\pi}\) and \(y_2 = \arcsin \frac{x}{\pi}\)? [answer]

    The Wronskian for the first case is \(W= y_1y_2' - y_2y_1' = 1 - \ln x \), which vanishes only at \(x = e\). Thus \(y_1 = x\) and \(y_2 = \ln x\) are a fundamental pair on any interval of \(x>0\) (where \(\ln x\) is defined) that excludes \(x = e\).

    The Wronskian for the second case is \(W = \frac{\arccos(x/\pi)}{\sqrt{\pi^2-x^2}} + \frac{\arcsin(x/\pi)}{\sqrt{\pi^2-x^2}} = \frac{\pi/2}{\sqrt{\pi^2-x^2}}\) (using \(\arccos t + \arcsin t = \pi/2\)), which is never zero. Thus the solutions are a fundamental pair for \(x\in (-\pi,\pi)\).
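    Both Wronskians can be checked with SymPy (a sketch, assuming SymPy is available):

        # Sketch: compute W = y1*y2' - y2*y1' for the two pairs above.
        import sympy as sp

        x = sp.symbols('x')

        def W(y1, y2):
            return y1*sp.diff(y2, x) - y2*sp.diff(y1, x)

        print(sp.simplify(W(x, sp.log(x))))  # 1 - log(x): vanishes only at x = e
        w2 = W(sp.acos(x/sp.pi), sp.asin(x/sp.pi))
        # rewriting acos in terms of asin exposes the identity acos t + asin t = pi/2
        print(sp.simplify(w2.rewrite(sp.asin)))  # pi/(2*sqrt(pi**2 - x**2))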

  25. ☸ Classify the differential equation \begin{equation}\label{homo2}\tag{7} a(x) y'' + b(x) y' + c(x) y = 0\,. \end{equation} Given one solution \(y_1\) to equation (\ref{homo2}), how can the other solution \(y_2\) be found? What is this method called? Hint: Let \(y_2 = uy_1\). Insert this into equation (\ref{homo2}). Introduce \(z= u'\), which reduces the order of the equation by one (this is where the name of the method comes from). Solve for \(z\), integrate to find \(u\), and then \(y_2\) follows from \(y_2 = uy_1\). I doubt this would be on the exam as it is too involved. [answer]

    Assume that \(y_2 = u y_1\). Then the derivatives of \(y_2\) are \begin{align*} y_2' &= u' y_1 + u y_1'\\ y_2'' &= u'' y_1 +u' y_1' + u' y_1' + u y_1''\,. \end{align*} Inserting these relations into equation (\ref{homo2}) results in \begin{align*} a(x)(u'' y_1 +u' y_1' + u' y_1' + u y_1'') + b(x) (u' y_1 + u y_1') + c(x) uy_1 &= 0\,. \end{align*} This equation is regrouped: \begin{align*} u[a(x) y_1'' + b(x) y_1' + c(x)y_1] + a(x)u'' y_1 + 2a(x)u' y_1' + b(x)u' y_1 &= 0\,. \end{align*} The first term in \([...]\) is \(0\) by equation (\ref{homo2}). Therefore, \begin{align*} a(x)u'' y_1 + u'[2a(x) y_1' + b(x)y_1 ] &= 0\,. \end{align*} Now, introduce the parameter \(z = u'\), which kicks all the derivatives above down one: \begin{align*} a(x) y_1 \frac{dz}{dx} + z[2a(x) y_1' + b(x)y_1 ] &= 0\,.\\ \frac{dz}{z} &= \bigg[-\frac{2 y_1'}{y_1} - \frac{b(x)}{a(x)} \bigg]dx \end{align*} Integrating results in \begin{align*} \ln z &= -2 \int \frac{ y_1'}{y_1}dx - \int \frac{b(x)}{a(x)}dx \\ &= -2 \int \frac{dy_1}{dx}\frac{1}{y_1}dx - \int \frac{b(x)}{a(x)}dx \\ &= -2 \int\frac{dy_1}{y_1} - \int \frac{b(x)}{a(x)}dx \\ &= \ln (y_1^{-2}) - \int \frac{b(x)}{a(x)}dx \,. \end{align*} Exponentiating gives \begin{align*} z &= y_1^{-2}e^{-\int \frac{b(x)}{a(x)} dx}\,.\\ u &= \int z dx = \int y_1^{-2}e^{-\int \frac{b(x)}{a(x)} dx} dx\,.\\ y_2 &= y_1u = y_1\int y_1^{-2}e^{-\int \frac{b(x)}{a(x)} dx} dx\,. \end{align*}

    This method is called reduction of order.

  26. Given that \(y_1 = x\) is a solution to \(x^2 y'' -x(x+2) y' + (x+2) y = 0\), classify this equation and find the general solution. [answer]

    This is a linear homogeneous second-order ordinary differential equation with nonconstant coefficients. Reduction of order should be used, with \(a(x) = x^2\) and \(b(x) = -x(x+2)\): \begin{align*} y_2 &= y_1\int y_1^{-2}e^{-\int \frac{b(x)}{a(x)} dx} dx\\ &= x \int x^{-2}e^{\int \frac{x(x+2)}{x^2} dx} dx\\ &= x \int x^{-2}e^{\int (1 + 2/x)dx} dx\\ &= x \int x^{-2}e^{x + \ln (x^{2})} dx\\ &= x \int e^{x} dx = xe^{x} \end{align*} The general solution is therefore \(y = C_1 x + C_2 xe^x\).
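    A quick SymPy check (a sketch, assuming SymPy is available) that both \(y_1 = x\) and \(y_2 = xe^x\) solve the equation:

        # Sketch: substitute y1 = x and y2 = x*exp(x) into
        # x**2*y'' - x*(x + 2)*y' + (x + 2)*y and confirm the residual is 0.
        import sympy as sp

        x = sp.symbols('x')
        for y in (x, x*sp.exp(x)):
            res = x**2*sp.diff(y, x, 2) - x*(x + 2)*sp.diff(y, x) + (x + 2)*y
            print(sp.simplify(res))  # 0 for both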

  27. Classify the differential equation \begin{equation}\label{inhomo2} \tag{8} ay'' + by' + cy = g(x)\,. \end{equation} List the two ways to solve this equation. [answer]

    This is a linear, inhomogeneous second-order ordinary differential equation with constant coefficients. The two methods to solve this are (1) the method of undetermined coefficients and (2) the variation of parameters:

    First let us discuss the method of undetermined coefficients, which works when the right-hand side \(g(x)\) is exponential, sinusoidal, polynomial, or a product or sum of exponentials, sinusoids, and polynomials.

    1. If \(g(x) = e^{kx}p(x)\), where \(p\) is a polynomial of degree \(n\), then use as the form of particular solution \(y_p(x) = e^{kx}q(x)\), where \(q\) is an \(n\)th-degree polynomial. The coefficients of \(q\) are then determined by substituting it into equation (\ref{inhomo2}).
    2. If \(g(x) = e^{kx}p(x)\cos(mx)\), or if \(g(x) = e^{kx}p(x)\sin(mx)\), where \(p\) is a polynomial of degree \(n\), then use as the form of particular solution \(y_p(x) = e^{kx}q(x)\cos (mx) + e^{kx}s(x)\sin (mx)\), where \(q\) and \(s\) are \(n\)th-degree polynomials with different coefficients.

    Importantly, in either case above, if any term of the particular solution is also a solution of the homogeneous equation, then the form of solution should be multiplied by \(x\) (if the corresponding characteristic root coincides once, i.e., a simple root), or by \(x^2\) (if it coincides twice, i.e., a double root). This is called resonance.

  28. Classify and solve \(y'' + 4y = e^{3x}\). [answer]

    This is a linear, inhomogeneous second-order ordinary differential equation with constant coefficients. Using the method of undetermined coefficients with \(y_p = Ae^{3x}\) gives \(9A + 4A = 1\), so \(y = y_h + y_p = C_1\cos 2x + C_2 \sin 2x + \frac{1}{13}e^{3x}\).

  29. Classify and solve \(y'' + y = \sin x\). [answer]

    See here for a video solution. There is a resonance here of multiplicity \(1\). The solution is \(y = C_1 \cos x + C_2 \sin x - \frac{x}{2}\cos x\).

  30. Classify and solve \(y'' - 4y' + 3y = 2xe^{x}\). [answer]

    The method of undetermined coefficients is used. However, one needs to be careful in this case, because the particular solution resonates with the homogeneous solution (the characteristic roots are \(r=1\) and \(r=3\), and \(g\) contains \(e^{x}\)). Thus the particular solution must be chosen to be \(y_p = xe^{x}(Ax +B)\). See here for the complete solution.

  31. Classify and solve \(y'' + 2y' + y = e^{-x}\). [answer]

    The method of undetermined coefficients is used. Again, one needs to be careful because the particular solution resonates with the homogeneous solution, this time with multiplicity 2. Thus the particular solution must be chosen to be \(y_p = Ax^2e^{-x}\). See here for the complete solution.

  32. Determine the form of trial solution for \(y'' -4y' + 13y = e^{2x}\cos 3x\). [answer]

    Note that the homogeneous solution has characteristic roots \(r = 2+3i\) and \(r = 2-3i\), which correspond to the homogeneous solutions \(e^{2x}\cos 3x\) and \(e^{2x}\sin 3x\); these coincide with the driving function \(e^{2x}\cos 3x\). Thus the form of solution should be the non-resonant guess multiplied by \(x\), i.e., \(xe^{2x}(A\cos 3x + B\sin 3x)\).

  33. What is the variation of parameters? Do not provide the full derivation, but provide the big picture (like, why is it called "variation of parameters"?). In what situations should the variation of parameters be used? [answer]

    Variation of parameters provides a more general way of solving \(y'' + p(x)y' + q(x)y= g(x)\), a linear 2nd-order inhomogeneous ordinary differential equation with non-constant coefficients. It can of course also be used to solve a linear 2nd-order inhomogeneous ordinary differential equation with constant coefficients, and in fact this is the method that must be used when the right-hand side \(g\) is more complicated than a sum or product of polynomials, trigonometric functions, or exponentials.

    To derive the variation of parameters, first one solves the homogeneous equation to obtain solutions \(y_1\) and \(y_2\). Then, one looks for a solution of the form \(y_p= C_1(x)y_1(x) + C_2(x)y_2(x),\) which gives the method its name: the "parameters" \(C_1\) and \(C_2\) are allowed to vary as functions rather than constants. An additional constraint \begin{equation}\label{constrainterers}\tag{i} C_1'y_1 + C_2' y_2 = 0 \end{equation} is imposed. Taking the derivatives of \(y_p\), substituting the result into the ODE, and simplifying results in \begin{equation}\label{solutitors}\tag{ii} C_1'y_1' + C_2'y_2' = g\,. \end{equation} Solving equations (\ref{constrainterers}) and (\ref{solutitors}) for \(C_1'\) and \(C_2'\) (by Cramer's rule) and integrating results in \begin{equation}\label{variationofparam}\tag{iii} y_p = -y_1 \int \frac{y_2 g}{W}dx + y_2 \int \frac{y_1 g}{W}dx\,. \end{equation} Just memorize equation (\ref{variationofparam}).

  34. Solve \(y'' + 4y = \frac{3}{\sin x}\). [answer]

    See here.

  35. Solve \(y'' - 2y' + y = \frac{e^t}{t^2 + 1}\). [answer]

    Using the variation of parameters with \(y_1 = e^t\), \(y_2 = te^t\), and \(W = e^{2t}\), the particular solution is found by taking the integrals \begin{align*} y_p &= -y_1 \int \frac{y_2 g}{W}dt + y_2 \int \frac{y_1 g}{W}dt\\ &= -e^t\int \frac{t}{t^2+1}dt + te^t \int \frac{dt}{t^2+1}\\ &= -\frac{e^t}{2}\ln (t^2 +1) + te^t \arctan t\,, \end{align*} and the general solution is \(y = C_1 e^t + C_2 t e^t -\frac{e^t}{2}\ln (t^2 +1) + te^t \arctan t\).
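    As a check (a SymPy sketch, assuming SymPy is available), the particular solution indeed satisfies the ODE:

        # Sketch: verify y_p = -(exp(t)/2)*log(t**2 + 1) + t*exp(t)*atan(t)
        # satisfies y'' - 2*y' + y = exp(t)/(t**2 + 1).
        import sympy as sp

        t = sp.symbols('t')
        yp = -sp.exp(t)/2*sp.log(t**2 + 1) + t*sp.exp(t)*sp.atan(t)
        res = sp.diff(yp, t, 2) - 2*sp.diff(yp, t) + yp - sp.exp(t)/(t**2 + 1)
        print(sp.simplify(res))  # 0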

  36. ☸ What is \begin{equation}\label{Cauchy-Euler}\tag{9} ax^2 y'' + bxy' + cy = 0 \end{equation} called? (Why is it a bad name?) What clever substitution does one make to go about solving it? [answer]

    Equation (\ref{Cauchy-Euler}) is known as Euler's equation. A better name is the "Cauchy-Euler equation,'' because Euler already has so many equations named after him (like \(\frac{\partial f}{\partial y}-\frac{d}{dx} \frac{\partial f}{\partial y'}=0\) from the calculus of variations and \(e^{i\pi}+1=0\), the identity containing the most important numbers of mathematics)!

    To solve equation (\ref{Cauchy-Euler}), one lets \(x\) be an exponential function of a new independent variable \(t\). That is, \(x=e^t\). Then the derivatives of \(y\) with respect to \(t\) are \begin{align*} \frac{dy}{dt} &= \frac{dy}{dx}\frac{dx}{dt} = \frac{dy}{dx} e^t = \frac{dy}{dx} x \\ \frac{d^2y}{dt^2} &= \frac{d}{dt} \frac{dy}{dt} = \frac{d}{dt} \bigg(\frac{dy}{dx} x\bigg) = x\frac{d^2y}{dtdx}+ \frac{dx}{dt}\frac{dy}{dx} \end{align*} Note that \(\frac{d^2y}{dtdx} = \frac{d^2y}{dxdt} = \frac{d}{dx}\big(\frac{dy}{dx}\frac{dx}{dt}\big) = \frac{d}{dx}\big(\frac{dy}{dx}\big)\frac{dx}{dt} =\frac{d}{dx}\big(\frac{dy}{dx}\big)x\). Therefore, \begin{align*} \frac{d^2y}{dt^2}&= x^2 \frac{d^2y}{dx^2} + x\frac{dy}{dx}\,. \end{align*} Solving the above relations for the derivatives with respect to \(x\) gives \begin{align*} \frac{dy}{dx} &= \frac{1}{x}\frac{dy}{dt}\\ \frac{d^2 y}{dx^2} &= \frac{1}{x^2}\bigg(\frac{d^2y}{dt^2} - x\frac{dy}{dx}\bigg) \end{align*} Thus equation (\ref{Cauchy-Euler}) is \begin{align*} ax^2 \frac{d^2y}{dx^2} + bx \frac{dy}{dx} + cy &=0\\ a\bigg(\frac{d^2y}{dt^2} - x\frac{dy}{dx}\bigg) + b\frac{dy}{dt} + cy &=0\\ a\bigg(\frac{d^2y}{dt^2} - \frac{dy}{dt}\bigg) + b\frac{dy}{dt} + cy &=0\\ a\frac{d^2y}{dt^2} + (b-a)\frac{dy}{dt} + cy &=0\\ \end{align*} Therefore, the introduction of \(x=e^t\) successfully converts equation (\ref{Cauchy-Euler}) into a constant-coefficient differential equation, where the independent variable is \(t\) instead of \(x\). Thus, the solution to \(a\frac{d^2y}{dt^2} + (b-a)\frac{dy}{dt} + cy =0\) is \(y(t)\) (and it is solved the same way as equation (\ref{secondorder}), only in \(t\), i.e., letting \(y = e^{rt}\)), which must be converted back to \(y(x)\) by letting \(t = \ln x\).

  37. What are the three cases that arise when solving equation (\ref{Cauchy-Euler})? [answer]

    It was shown in the previous problem that Euler's equation becomes a constant-coefficient differential equation in the independent variable \(t\): \[ay'' + (b-a)y' + cy = 0\,,\] where the primes now denote derivatives with respect to \(t\). Three cases arise:

    1. \(ay'' + (b-a)y' + cy = 0\) has real, distinct roots \(r_1\) and \(r_2\), and the solution to Euler's differential equation is \begin{align*} y(t) &= C_1 e^{r_1t} + C_2 e^{r_2t}\\ \implies y(x)&= C_1 x^{r_1} + C_2 x^{r_2}\,. \end{align*}
    2. \(ay'' + (b-a)y' + cy = 0\) has a double root \(r_1 = r_2 \equiv r\), and the solution to Euler's differential equation is \begin{align*} y(t) &= C_1 e^{rt} + C_2 t e^{rt}\\ \implies y(x)&= C_1 x^{r} + C_2 x^{r}\ln{x}\,. \end{align*}
    3. \(ay'' + (b-a)y' + cy = 0\) has complex roots \(r_1 = \alpha + i\beta\) and \( r_2 = \alpha - i\beta \), and the solution to Euler's differential equation is \begin{align*} y(t) &= C_1 e^{\alpha t}\cos \beta t + C_2 e^{\alpha t} \sin {\beta t}\\ \implies y(x) &= C_1 x^{\alpha}\cos (\beta \ln x) + C_2 x^{\alpha} \sin ({\beta \ln x})\,. \end{align*}
    Thus, to remember how to solve equation (\ref{Cauchy-Euler}), one must simply remember that the substitution \(x =e^t\) converts equation (\ref{Cauchy-Euler}) into \(ay'' + (b-a)y' + cy = 0\).
  38. Classify and solve \(2x^2 y'' + 3x y' -y = 0\). [answer]

    This is a Cauchy-Euler differential equation. Recalling that the substitution \(x =e^t\) converts equation (\ref{Cauchy-Euler}) into \(ay'' + (b-a)y' + cy = 0\), the coefficients \(a=2\), \(b = 3\), and \(c=-1\) are identified, and the constant-coefficient equation is \[2y'' + y' - y = 0,\] the solution of which is \begin{align*} y(t) = C_1 e^{t/2} + C_2 e^{-t} \end{align*} Making the substitution \(t = \ln x\) gives the solution of the Cauchy-Euler differential equation: \begin{align*} y(x) = C_1 x^{1/2} + C_2 x^{-1}\,. \end{align*}
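    A SymPy sketch (assuming SymPy is available) confirming the two solutions:

        # Sketch: check that sqrt(x) and 1/x solve 2*x**2*y'' + 3*x*y' - y = 0.
        import sympy as sp

        x = sp.symbols('x', positive=True)
        for y in (sp.sqrt(x), 1/x):
            res = 2*x**2*sp.diff(y, x, 2) + 3*x*sp.diff(y, x) - y
            print(sp.simplify(res))  # 0 for both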

  39. Let \(a_n\) be the expansion coefficients in a series solution \(y = \sum_n a_n x^n\). Define the radius of convergence of this sum. [answer]

    \begin{align*} R = \lim_{n\to\infty}\bigg\lvert\frac{a_n}{a_{n+1}}\bigg\rvert\,, \end{align*} provided the limit exists; the series converges for \(|x| < R\).

  40. Calculate the radius of convergence for \(e^x\). [answer]

    Noting that \(e^x = \sum_{n=0}\frac{x^n}{n!}\), the radius of convergence is \begin{align*} R &= \lim_{n\to\infty}\bigg\lvert\frac{(n+1)!}{n!}\bigg\rvert\\ &= \lim_{n\to\infty} (n+1) =\infty\,. \end{align*} That is, the series converges everywhere.

  41. Calculate the radius of convergence for \(\frac{1}{1-x}\). [answer]

    Noting that \(\frac{1}{1-x} = \sum_{n=0}x^n \) (so that \(a_n = 1\) for all \(n\)), the radius of convergence is \begin{align*} R &= \lim_{n\to\infty}\bigg\lvert\frac{a_n}{a_{n+1}}\bigg\rvert\\ &= \lim_{n\to\infty} 1 =1\,. \end{align*} That is, this series converges on the interval \((-1 ,1)\).

  42. Solve \(y'' + y = 0\) by series. [answer]

    Let \begin{align*} y &= \sum_{n=0}^{\infty} a_n x^n\\ y' &= \sum_{n=0}^{\infty}n a_n x^{n-1} = \sum_{n=1}^{\infty}n a_n x^{n-1} \\ y'' &= \sum_{n=0}^{\infty}n(n-1) a_n x^{n-2} = \sum_{n=2}^{\infty}n(n-1) a_n x^{n-2}\\ \end{align*} where it has been noted that the first term in the sum for \(y'\) is \(0\), and the first two terms in the sum for \(y''\) are \(0\). Substituting these sums into the ODE gives \begin{align*} \sum_{n=2}^{\infty}n(n-1) a_n x^{n-2} + \sum_{n=0}^{\infty} a_n x^n = 0 \end{align*} Now the index of the first summation is relabeled \(n \mapsto n+2\) (so that the sum starts at \(n=0\) and its power of \(x\) matches that of the second sum): \begin{align*} \sum_{n=0}^{\infty}(n+2)(n+1) a_{n+2} x^{n} + \sum_{n=0}^{\infty} a_n x^n = 0 \end{align*} The two summations may now be combined under the same sum: \begin{align*} \sum_{n=0}^{\infty}[(n+2)(n+1) a_{n+2} + a_n ]x^{n} = 0 \end{align*} Since this relation holds for every \(n\), one can identify a recurrence relation: \begin{align*} (n+2)(n+1) a_{n+2} + a_n &= 0\\ a_{n+2} &= -\frac{a_n}{(n+2)(n+1)} \end{align*} Now for the fun part, let \(a_0 = 1\) and \(a_1 = 0\). Then all \(a_n\) for \(n\) odd vanish, and \begin{align*} n&= 0:\quad a_{2} = -\frac{a_0}{(0 + 2)(0+1)}= -\frac{1}{1\cdot 2}\\ n&= 2:\quad a_{4} = -\frac{a_2}{(2+2)(2+1)} = \frac{1}{1\cdot 2 \cdot 3\cdot 4}\\ n&= 4:\quad a_{6} = -\frac{a_4}{(4+2)(4+1)} = -\frac{1}{1\cdot 2 \cdot 3\cdot 4 \cdot 5 \cdot 6}\\ \end{align*} Writing \(n = 2k\), the pattern is explicitly given by \(a_{2k} = (-1)^k/(2k)!\) and \(a_{2k+1} = 0\). The solution can be written as \[\sum_{k=0}^\infty \frac{(-1)^k}{(2k)!} x^{2k}\]

    Meanwhile, letting \(a_0 = 0\) and \(a_1 = 1\), all \(a_n\) for \(n\) even vanish, and \begin{align*} n&= 1:\quad a_{3} = -\frac{a_1}{(1 + 2)(1+1)}= -\frac{1}{2\cdot 3}\\ n&= 3:\quad a_{5} = -\frac{a_3}{(3+2)(3+1)} = \frac{1}{2 \cdot 3\cdot 4\cdot 5}\\ n&= 5:\quad a_{7} = -\frac{a_5}{(5+2)(5+1)} = -\frac{1}{2 \cdot 3\cdot 4 \cdot 5 \cdot 6\cdot 7}\\ \end{align*} Thus the pattern is explicitly given by \(a_{2k+1} = (-1)^k/(2k+1)!\) and \(a_{2k} = 0\). The solution can be written as \[\sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!} x^{2k+1}\]

    The general solution is therefore \[y = a_0 \sum_{k=0}^\infty \frac{(-1)^k}{(2k)!}x^{2k}+ a_1\sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!} x^{2k+1}\,,\] which we recognize as \(y = a_0 \cos x + a_1 \sin x\).
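    The recurrence relation is also easy to check numerically (a sketch in plain Python): iterating it from \((a_0, a_1) = (1,0)\) reproduces the Taylor coefficients of \(\cos x\).

        # Sketch: iterate a_{n+2} = -a_n/((n+2)*(n+1)) and compare with
        # the known coefficients (-1)**k/(2k)! of cos(x).
        from math import factorial

        a = [1.0, 0.0]
        for n in range(10):
            a.append(-a[n]/((n + 2)*(n + 1)))

        for k in range(6):
            print(a[2*k], (-1)**k/factorial(2*k))  # the columns agree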

  43. What is the name of the ordinary differential equation \(y'' - xy = 0\)? What are some of its applications? Solve it by series. [answer]

    This is the Airy equation, which was developed by Airy to describe caustics in optics, like in a rainbow. It is also used in quantum mechanics, at the turning point in the WKB approximation.

    As before, let \begin{align*} y &= \sum_{n=0}^{\infty} a_n x^n\\ y' &= \sum_{n=0}^{\infty}n a_n x^{n-1} = \sum_{n=1}^{\infty}n a_n x^{n-1} \\ y'' &= \sum_{n=0}^{\infty}n(n-1) a_n x^{n-2} = \sum_{n=2}^{\infty}n(n-1) a_n x^{n-2}\\ \end{align*} where again it has been noted that the first term in the sum for \(y'\) is \(0\), and the first two terms in the sum for \(y''\) are \(0\). Substituting these sums into the Airy equation gives \begin{align*} \sum_{n=2}^{\infty}n(n-1) a_n x^{n-2} - x\sum_{n=0}^{\infty} a_n x^{n} &= 0\\ \sum_{n=0}^{\infty}(n+2)(n+1) a_{n+2} x^{n} - \sum_{n=0}^{\infty} a_n x^{n+1} &= 0 \end{align*} Now the indices must be shifted so the powers of \(x\) match in both sums. This is done by relabeling the index of the second sum, \(n \mapsto n-1\), so that it starts at \(n=1\): \begin{align*} \sum_{n=0}^{\infty}(n+2)(n+1) a_{n+2} x^{n} - \sum_{n=1}^{\infty} a_{n-1} x^{n} = 0 \end{align*} In order to combine the sums, the lower index must also match. This is done by explicitly removing the \(n=0\) term of the sum on the left: \begin{align*} 2 a_{2} + \sum_{n=1}^{\infty}(n+2)(n+1) a_{n+2} x^{n} - \sum_{n=1}^{\infty} a_{n-1} x^{n} = 0 \end{align*} The two summations may now be combined under the same sum: \begin{align*} 2 a_{2} + \sum_{n=1}^{\infty}[(n+2)(n+1) a_{n+2} -a_{n-1}]x^{n} = 0 \end{align*} Since this relation holds for every \(n\), \(a_2 = 0\), and \begin{align*} (n+2)(n+1) a_{n+2} -a_{n-1} &= 0\\ a_{n+2} &= \frac{a_{n-1}}{(n+2)(n+1)} \end{align*} First, let \(a_0 = 1\) and \(a_1 = 0\). Then all \(a_n\) vanish except those with \(n\) a multiple of \(3\), and \begin{alignat*}{2} n&= 1:\quad a_{3} = \frac{a_0}{(1 + 2)(1+1)}= \frac{1}{2\cdot 3} &&\qquad k=1\\ n&= 2:\quad a_{4} = \frac{a_1}{(2+2)(2+1)} = 0 &&\\ n&= 3:\quad a_{5} = \frac{a_2}{(3+2)(3+1)} = 0 &&\\ n&= 4:\quad a_{6} = \frac{a_3}{(4+2)(4+1)} = \frac{1}{2\cdot 3\cdot 5\cdot 6} &&\qquad k=2 \end{alignat*} The pattern is explicitly given by \(a_{3k} = [2\cdot 3 \cdot 5\cdot 6 \cdot \dots \cdot (3k-1)(3k)]^{-1}\). The solution can be written as \[y_1 = 1 + \sum_{k=1}^\infty \frac{x^{3k}}{2\cdot 3 \cdot 5\cdot 6 \cdot \dots \cdot (3k-1)(3k)},\] where the term \(1\) appears because \(a_0 = 1\) is not included in the summation.

    Meanwhile, letting \(a_0 = 0\) and \(a_1 = 1\) gives \begin{alignat*}{2} n&= 1:\quad a_{3} = 0 &&\\ n&= 2:\quad a_{4} = \frac{1}{3\cdot 4} && \qquad k = 1\\ n&= 3:\quad a_{5} = 0 &&\\ n&= 4:\quad a_{6} = 0 && \\ n&= 5:\quad a_{7} = \frac{1}{3\cdot 4 \cdot 6\cdot 7} && \qquad k=2 \end{alignat*} The pattern is explicitly given by \(a_{3k+1} = [3\cdot 4 \cdot 6\cdot 7\cdot \dots \cdot (3k)(3k+1)]^{-1}\). The solution can be written as \[y_2 = x + \sum_{k=1}^\infty \frac{x^{3k+1}}{3\cdot 4 \cdot 6\cdot 7\cdot \dots \cdot (3k)(3k+1)},\] where the leading term \(x\) appears because \(a_1 = 1\) is not included in the summation.

    The general solution to Airy's equation is the linear combination \(y = a_0 y_1 + a_1 y_2\).
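    As a numerical check (a sketch assuming SciPy is available), note that \(y_1(0)=1,\ y_1'(0)=0\) and \(y_2(0)=0,\ y_2'(0)=1\), so by uniqueness \(\mathrm{Ai}(x) = \mathrm{Ai}(0)\,y_1(x) + \mathrm{Ai}'(0)\,y_2(x)\); partial sums of the two series should therefore reproduce SciPy's Airy function near \(x=0\):

        # Sketch: partial sums of y1 and y2, combined with Ai(0) and Ai'(0),
        # reproduce scipy's Ai(x) for small x.
        from scipy.special import airy

        def y1(x, K=20):
            term, total = 1.0, 1.0
            for k in range(1, K):
                term *= x**3/((3*k - 1)*(3*k))
                total += term
            return total

        def y2(x, K=20):
            term, total = x, x
            for k in range(1, K):
                term *= x**3/((3*k)*(3*k + 1))
                total += term
            return total

        x = 0.7
        Ai0, Aip0 = airy(0)[0], airy(0)[1]
        print(Ai0*y1(x) + Aip0*y2(x), airy(x)[0])  # the two values agree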

  44. Solve \(y'' + xy' +y = 0\) by series. [answer]

    See here for the solution.

  45. When solving equation (\ref{homo2}) (the linear, homogeneous second-order ordinary differential equation with non-constant coefficients \(a(x) y'' + b(x) y' + c(x)y = 0\)), what must one be wary of if there is an \(x_0\) such that \(a(x_0) = 0\)? What is the condition on \(x_0\) for equation (\ref{homo2}) to be solved by series? [answer]

    \(x_0\) is called a singular point. Specifically, if the following limits are finite, \(x_0\) is called a "regular singular point," and the method of Frobenius can be used to solve equation (\ref{homo2}). \begin{align*} \lim_{x\to x_0} & \frac{b(x)}{a(x)} (x-x_0)\\ \lim_{x\to x_0} &\frac{c(x)}{a(x)} (x-x_0)^2 \end{align*} If either of the limits above is not finite, \(x_0\) is called an "irregular singular point."

  46. Identify the differential equation \((1-x^2)y'' - 2xy' + \alpha(\alpha +1)y = 0\), and identify the singular point(s). Classify them as "regular singular" or "irregular singular." [answer]

    This is Legendre's equation. The singular points are \(x_0 = 1\) and \(x_0 =-1\). For \(x_0 = 1\), \begin{align*} \lim_{x\to 1} \frac{-2x}{1-x^2}(x-1) &=\lim_{x\to 1} \frac{2x}{x+1} = 1 \\ \lim_{x\to 1} \frac{\alpha(\alpha +1)}{1-x^2}(x-1)^2 &=\alpha(\alpha +1)\lim_{x\to 1} \frac{(x-1)(x-1)}{(1-x)(1+x)} = - \alpha(\alpha +1)\lim_{x\to 1} \frac{(x-1)}{(1+x)} = 0 \end{align*} Therefore the point \(x=1\) is a regular singular point. Meanwhile, for \(x_0 =-1\), \begin{align*} \lim_{x\to -1} \frac{-2x}{1-x^2}(x+1) &=\lim_{x\to -1} \frac{2x}{x-1} = \frac{-2}{-2} = 1\\ \lim_{x\to -1} \frac{\alpha(\alpha +1)}{1-x^2}(x+1)^2 &=\alpha(\alpha +1)\lim_{x\to -1} \frac{(x+1)(x+1)}{(1-x)(1+x)} = - \alpha(\alpha +1)\lim_{x\to -1} \frac{(x+1)}{(1-x)} = 0 \end{align*} Therefore the point \(x=-1\) is also a regular singular point.

  47. Solve \(2x^2 y'' + xy' - (1+x)y = 0\). [answer]

    See here for the solution.

  48. Identify and solve \(x^2 y'' + xy' + (x^2-\nu^2)y = 0\) for \(\nu=0\). What is the name of the solution? Where does it appear in acoustics? [answer]

    This is Bessel's equation, and the solutions for \(\nu=0\) are the Bessel function of the first kind \(J_0\) and the Neumann function \(N_0\) (the Bessel function of the second kind). This appears in acoustics as the radial eigenfunction of the Helmholtz equation in cylindrical coordinates for axisymmetric radiation. See here for the solution for \(J_0\).

Orthogonality and special functions

This section naturally picks up where the previous section on ODEs left off. Also included are problems involving Dirac delta functions.
  1. Derive the recursion relation for the power expansion coefficients that solve Bessel's equation \[x^2 y'' + xy' + (x^2-\nu^2)y = 0\] for arbitrary \(\nu\). What choice gives the power expansion coefficients for \(J_n\)? What choice gives the power expansion coefficients for \(N_n\)? [answer]

    One can easily show that \(x=0\) is a regular singular point. Thus the method of Frobenius is taken up: \begin{align*} y &= \sum_{n=0}^{\infty} a_n x^{n+r}\\ y' &= \sum_{n=0}^{\infty} a_n (n+r) x^{n+r - 1}\\ y'' &= \sum_{n=0}^{\infty} a_n (n+r)(n+r-1) x^{n+r-2} \end{align*} Insertion into Bessel's equation gives \begin{align*} x^2 \sum_{n=0}^{\infty} a_n (n+r)(n+r-1) x^{n+r-2} + x\sum_{n=0}^{\infty} a_n (n+r) x^{n+r - 1} + (x^2-\nu^2)\sum_{n=0}^{\infty} a_n x^{n+r} &= 0\\ \sum_{n=0}^{\infty} a_n (n+r)(n+r-1) x^{n+r} + \sum_{n=0}^{\infty} a_n (n+r) x^{n+r} - \nu^2\sum_{n=0}^{\infty} a_n x^{n+r} + \sum_{n=0}^{\infty} a_n x^{n+r+2}&= 0\,. \end{align*} It is desired for all the summations to be combined into one, and thus for all powers of \(x\) to match. For this, the final summation above is rewritten starting from \(n=2\): \begin{align*} \sum_{n=0}^{\infty} a_n (n+r)(n+r-1) x^{n+r} + \sum_{n=0}^{\infty} a_n (n+r) x^{n+r} - \nu^2\sum_{n=0}^{\infty} a_n x^{n+r} + \sum_{n=2}^{\infty} a_{n-2} x^{n+r}&= 0\,. \end{align*} Now, the \(n=0\) and \(n=1\) terms of the first three summations are written explicitly, and the remaining \(\sum_{2}^{\infty}\) is written as a single summation: \begin{align*} &[r(r-1) + r - \nu^2]a_0 x^{r} + [r(r+1) + r+1 - \nu^2]a_1 x^{r+1}\\ &\quad + \sum_{n=2}^{\infty}\big[a_n (n+r)(n+r-1) +a_n (n+r) -\nu^2 a_n + a_{n-2} \big]x^{n+r} = 0\,. \end{align*} Each term above must vanish, because the right-hand side is \(0\). Specifically, the coefficient of \(x^r\) gives the indicial equation and the values of \(r\): \[r = \pm \nu.\] (The coefficient of \(x^{r+1}\) then forces \(a_1 = 0\), since \((r+1)^2 - \nu^2 \neq 0\) for \(r = \pm\nu\), apart from the special case \(\nu = \pm 1/2\).) Meanwhile, setting the summand above to \(0\) gives the recurrence relation: \begin{align*} a_n = -\frac{a_{n-2}}{(n+r)^2 -\nu^2} \end{align*} The choice \(r=\nu\) gives the Bessel functions \(J_\nu(x)\), while the choice \(r=-\nu\) gives the second, linearly independent solution (\(J_{-\nu}\) for non-integer \(\nu\), from which the Neumann functions are constructed): \begin{align*} a_n &= -\frac{a_{n-2}}{n^2 + 2n\nu} \tag*{Bessel}\\ a_n &= -\frac{a_{n-2}}{n^2 - 2n\nu} \tag*{Neumann}\\ \end{align*}
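    The \(r=\nu=0\) recursion can be checked against the known \(J_0\) (a numerical sketch, assuming SciPy is available; for \(\nu = 0\) the conventional normalization is simply \(a_0 = 1\)):

        # Sketch: iterate a_n = -a_{n-2}/(n**2 + 2*n*nu) with nu = 0, a_0 = 1,
        # and compare the partial sum with scipy's j0.
        from scipy.special import j0

        x = 1.5
        a, total = 1.0, 1.0
        for n in range(2, 30, 2):
            a = -a/n**2      # n**2 + 2*n*nu with nu = 0
            total += a*x**n
        print(total, j0(x))  # the two values agree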

  2. The singular points \(x=\pm 1\) of Legendre's equation \[(1-x^2)y'' -2xy' + \lambda y = 0 \] were already found to be regular singular. Thus derive the recursion relation for the power expansion coefficients that solve Legendre's equation for arbitrary \(\lambda\). Why is there never much discussion about the second solution of Legendre's equation, \(Q_n\)? [answer]

    \begin{align*} y &= \sum_{n=0}^{\infty} a_n x^{n+r}\\ y' &= \sum_{n=0}^{\infty} a_n (n+r) x^{n+r - 1}\\ y'' &= \sum_{n=0}^{\infty} a_n (n+r)(n+r-1) x^{n+r-2} \end{align*} Insertion into Legendre's equation gives \begin{align*} %(1-x^2)\sum_{n=0}^{\infty} a_n (n+r)(n+r-1) x^{n+r-2} -2x\sum_{n=0}^{\infty} a_n (n+r) x^{n+r - 1} +n(n+1) \sum_{n=0}^{\infty} a_n x^{n+r} &= 0\\ &\sum_{n=0}^{\infty} a_n (n+r)(n+r-1) x^{n+r-2} - \sum_{n=0}^{\infty} a_n (n+r)(n+r-1) x^{n+r} \\&\quad- 2\sum_{n=0}^{\infty} a_n (n+r) x^{n+r} +\lambda \sum_{n=0}^{\infty} a_n x^{n+r} = 0 \end{align*} The crucial step that was at first throwing me off is to write the first two terms of the first summation explicitly. (This is different from the other Frobenius method problems I have done, where it is the last summation that has terms pulled out so as to match the index of the rest of the summations; this time, it is the first summation, because of the \(1-x^2\) coefficient of \(y''\) in Legendre's equation.) \begin{align*} &a_0r(r-1)x^{r-2} + a_1r(r+1)x^{r-1} + \sum_{n=2}^{\infty} a_{n} (n+r)(n+r-1) x^{n+r-2} - \sum_{n=0}^{\infty} a_n (n+r)(n+r-1) x^{n+r} \\&\quad- 2\sum_{n=0}^{\infty} a_n (n+r) x^{n+r} + \lambda \sum_{n=0}^{\infty} a_n x^{n+r} = 0 \end{align*} In order to write all the summations as a single sum, the first summation is now shifted down by two: \begin{align*} &a_0r(r-1)x^{r-2} + a_1r(r+1)x^{r-1}\\&\quad + \sum_{n=0}^{\infty} [a_{n+2} (n+r+2)(n+r+1) - a_n (n+r)(n+r-1)- 2 a_n (n+r) + \lambda a_n] x^{n+r} = 0 \end{align*} The indicial equation is found by setting the coefficient of the lowest power of \(x\) to zero, giving \begin{align*} r = 0, \quad r=1\,. \end{align*} Meanwhile, setting the summand above equal to \(0\) gives the recursion relation: \begin{align*} a_{n+2} &= \frac{ (n+r)(n+r-1) + 2 (n+r) - \lambda }{(n+r+2)(n+r+1)} a_n\\ &= \frac{ (n+r)^2 +n + r - \lambda }{(n+r+2)(n+r+1)} a_n \end{align*} For \(r=0\), the above recursion relation gives the Legendre polynomials (\(P_l\) once we pick \(\lambda = l(l+1)\)), but for \(r=1\), the recursion relation gives a solution that is not independent (it reproduces the odd-index series already contained in the \(r=0\) case). This is a reflection of Fuchs's theorem (see section 21 of Boas's Methods). To find the second independent solution of Legendre's equation, one can use reduction of order (see the derivation of reduction of order in the ODE section; also see Boas's ch. 12 section 2 problem 4 for the outline of how to find \(Q\)). It turns out that \(Q\) diverges at the poles, \(x=\pm1\), or \(\theta = 0^\circ\) and \(\theta = 180^\circ\) if the argument is \(x= \cos\theta\). Thus this second solution is not included in problems that include the poles (for the same reason that the Neumann functions are not included in spherical wave problems that include the origin: the Neumann functions diverge at the origin).

    An example that requires \(Q\) is modeling a toroidal bubble in spherical coordinates. The domain of the sound inside such a bubble does not include the poles. Whales and dolphins create these bubbles when they exhale.

  3. Given the recurrence relation for Legendre polynomials, \[(n+1)P_{n+1}(x) = (2n+1)xP_{n}(x)-nP_{n-1}(x),\] and the integral result \[\int_{-1}^1 P_n(x)P_m(x)dx = \frac{2}{2n+1}\delta_{nm},\] show that \[\int_{-1}^{1} x^2 P_{n+1}(x) P_{n-1}(x) dx = \frac{2n(n+1)}{(4n^2-1)(2n+3)}\,.\] [answer]

    Solve the first equation for \((2n+1)x P_n(x)\): \begin{align}\label{shifters}\tag{i} (2n+1)x P_n(x) = (n+1)P_{n+1}(x) + nP_{n-1}(x) \end{align} Shift the index \(n\) in equation (\ref{shifters}) to \(n+1\) and to \(n-1\): \begin{align} (2n+3)x P_{n+1}(x) &= (n+2)P_{n+2}(x) + (n+1)P_{n}(x) \tag{ii}\label{his}\\ (2n-1)x P_{n-1}(x) &= nP_{n}(x) + (n-1)P_{n-2}(x) \tag{iii}\label{hers} \end{align} Multiply equations (\ref{his}) and (\ref{hers}): \begin{align*} (2n-1)(2n+3)x^2 P_{n-1}(x)P_{n+1}(x) &= n(n+2)P_n(x)P_{n+2}(x) + n(n+1)P_n(x)P_n(x) \\&\quad+ (n-1)(n+2)P_{n+2}(x)P_{n-2}(x) + (n-1)(n+1)P_{n-2}(x)P_n(x) \end{align*} Integrate the above over \(x\) from \(-1\) to \(1\), and employ the orthogonality relation \begin{align*} \int_{-1}^{1}P_n(x)P_m(x)dx = \frac{2}{2n +1}\delta_{nm}\,. \end{align*} This results in \begin{align*} (2n-1)(2n+3) \int_{-1}^{1} P_{n-1}(x)P_{n+1}(x)x^2\, dx &= \frac{2n(n+1)}{2n+1} \end{align*} Dividing by \((2n-1)(2n+3)\) on both sides and noting that \((2n-1)(2n+1) = 4n^2-1\) gives the desired result, \begin{align*} \int_{-1}^{1} P_{n-1}(x)P_{n+1}(x)x^2\, dx = \frac{2n(n+1)}{(4n^2-1)(2n+3)} \end{align*}
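    The result can be spot-checked with SymPy (a sketch, assuming SymPy is available), e.g., for \(n=3\):

        # Sketch: check the integral of x**2 * P_{n+1} * P_{n-1} for n = 3.
        import sympy as sp

        x = sp.symbols('x')
        n = 3
        lhs = sp.integrate(x**2*sp.legendre(n + 1, x)*sp.legendre(n - 1, x), (x, -1, 1))
        rhs = sp.Rational(2*n*(n + 1), (4*n**2 - 1)*(2*n + 3))
        print(lhs, rhs, sp.simplify(lhs - rhs))  # 8/105  8/105  0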

  4. Given \(J_{p-1}(x)-J_{p+1}(x) = 2J_p'(x)\) and the integral relation \(J_0(x) = \frac{2}{\pi}\int_0^{\pi/2}\cos(x\sin\theta)d\theta\), show that (part a) \[J_1 = \frac{2}{\pi}\int_0^{\pi/2}\sin(x\sin\theta)\sin\theta d\theta.\] Then (part b) obtain \[x^{-1}J_1(x)= \frac{2}{\pi}\int_0^{\pi/2}\cos(x\sin\theta)\cos^2\theta d\theta\] by integrating the right-hand side of the first result by parts. [answer]

    Setting \(p=0\) and invoking the first identity gives \(J_{-1}(x) - J_{1}(x) = 2J'_0(x)\). Noting that \(J_{-1}(x) = -J_1(x)\) gives \(-2J_1(x) = 2J'_0(x),\) or \begin{equation}\label{integrateher}\tag{i} -J_1(x) = J'_0(x)\,. \end{equation} Attention is now turned to the integral relationship, \begin{equation}\label{defers}\tag{ii} J_0(x) = \frac{2}{\pi}\int_0^{\pi/2}\cos(x\sin\theta)d\theta\,. \end{equation} The derivative with respect to \(x\) of equation (\ref{defers}) is taken, giving \begin{align*} J_0' = \frac{d}{dx}J_0(x) &= \frac{d}{dx}\frac{2}{\pi}\int_0^{\pi/2}\cos(x\sin\theta)d\theta \end{align*} Invoking equation (\ref{integrateher}) gives \begin{align*} -J_1(x)&=\frac{2}{\pi}\int_0^{\pi/2}\frac{d}{dx}\cos(x\sin\theta)d\theta\\ -J_1(x)&=-\frac{2}{\pi}\int_0^{\pi/2}\sin(x\sin\theta)\sin\theta d\theta\,, \end{align*} or \begin{align*} J_1(x)=\frac{2}{\pi}\int_0^{\pi/2}\sin(x\sin\theta)\sin\theta d\theta\,. \end{align*}

    Part (b): To integrate \begin{align}\label{inters2}\tag{iii} J_1(x)&=\frac{2}{\pi}\int_0^{\pi/2}\sin(x\sin\theta)\sin\theta d\theta\,, \end{align} by parts, set \begin{align*} u &= \sin (x\sin \theta), \qquad &&du=\cos(x\sin\theta)x\cos\theta d\theta\\ v &= -\cos\theta,\qquad &&dv=\sin \theta d\theta\,. \end{align*} Equation (\ref{inters2}) then becomes \begin{align*} J_1(x)&= -\frac{2}{\pi}\sin (x\sin \theta)\cos\theta \bigg\rvert_{0}^{\pi/2}+ \frac{2}{\pi}\int_0^{\pi/2}x\cos\theta\cos(x\sin\theta)\cos\theta d\theta\\ &=\frac{2}{\pi}\int_0^{\pi/2}\cos(x\sin\theta)x\cos^2\theta d\theta \end{align*} (the boundary term vanishes because \(\cos\theta = 0\) at \(\theta = \pi/2\) and \(\sin(x\sin\theta) = 0\) at \(\theta =0\)). \(x\) is not an integration variable and can thus be divided through on both sides of the equation, giving the desired result, \begin{align*} x^{-1} J_1(x)=\frac{2}{\pi}\int_0^{\pi/2}\cos(x\sin\theta)\cos^2\theta d\theta\,. \end{align*} Sometimes when integrating by parts it is helpful to recall the trick "ILATE" (Inverse trig, Logarithmic, Algebraic, Trig, Exponential), which gives the priority of which function to set equal to \(u\). For this problem, I just guessed, first trying \(u = \sin\theta\) and then trying \(u = \sin(x\sin\theta)\). The second choice worked out.

  5. Prove that \[\delta(kx) = \frac{1}{|k|}\delta(x)\] where \(k\) is any nonzero constant. Hint: let \(y = kx\), and integrate a test function \(f(x) = f(y/k)\) times the Dirac delta function of \(y\) from \(-\infty\) to \(\infty\). [answer]

    \begin{align*} \int_{-\infty}^{\infty} f(x)\delta(kx) dx &= \frac{1}{|k|}\int_{-\infty}^{\infty} f(y/k)\delta(y) dy\\ &= \frac{1}{|k|}f(0) \\ &= \int_{-\infty}^{\infty} f(x)\frac{1}{|k|}\delta(x) dx \end{align*} where the absolute value has been included to account for the fact that the limits of integration are reversed if \(k < 0\). Comparing the first and last lines above gives the desired equality, \begin{align*} \delta(kx) = \frac{1}{|k|} \delta(x)\,. \end{align*}

  6. \(\int_{2}^{6} (3x^2 - 2x -1)\delta(x-3)dx = \) [answer]

    \(3(3)^2 - 2(3) - 1 = 20\)

  7. \(\int_{0}^{5} \cos x \delta(x-\pi)dx = \) [answer]

    \(\cos \pi = -1\)

  8. \(\int_{0}^{3} x^3\delta(x+1)dx = \) [answer]

    \(0 \) (because the limits do not include \(-1\)).

  9. \(\int_{-\infty}^{\infty} \ln(x+3)\delta(x+2)dx = \) [answer]

    \(\ln (-2 + 3) = 0\)

  10. \(\int_{-2}^{2} (2x+3)\delta(3x)dx = \) [answer]

    Here it should be noted that \(\delta(kx) = \delta(x)/|k|\). Therefore the integral evaluates to \(\frac{1}{3}[2(0) + 3] = 1\)

  11. \(\int_{0}^{2} (x^3 + 3x +2)\delta(1-x)dx = \) [answer]

    Since \(\delta(kx) = \delta(x)/|k|\), it follows that \(\delta(-x) = \delta(x)\). Thus \(\delta(1-x) = \delta[-(x-1)] = \delta(x-1)\). Thus the integral evaluates to \(1 + 3 +2 = 6\).

  12. \(\int_{-1}^{1} 9x^2\delta(3x+1)dx = \) [answer]

    The delta function can be written as \(\delta[3(x+1/3)] = \frac{1}{3}\delta{(x+1/3)}\). Thus the integral becomes \(\int_{-1}^{1} 3x^2\delta{(x+1/3)}dx = \frac{1}{3}\).
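    Several of these delta-function integrals can be reproduced with SymPy's DiracDelta (a sketch, assuming SymPy is available):

        # Sketch: a few of the integrals above, done with sympy.DiracDelta.
        import sympy as sp

        x = sp.symbols('x')
        d = sp.DiracDelta
        print(sp.integrate((3*x**2 - 2*x - 1)*d(x - 3), (x, 2, 6)))  # 20
        print(sp.integrate((2*x + 3)*d(3*x), (x, -2, 2)))            # 1
        print(sp.integrate(9*x**2*d(3*x + 1), (x, -1, 1)))           # 1/3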

  13. \(\int_{-\infty}^{a} \delta(x-b)dx = \) [answer]

    \(1\) for \(a\geq b\), and \(0\) for \(a < b\).

  14. ☸ Prove that \[x\frac{d}{dx}[\delta(x)] = -\delta(x).\] Hint: Integrate \(\int_{-\infty}^{\infty} f(x) x \frac{d}{dx}[\delta(x)] dx \) by parts. [answer]

    Following the hint, the following definitions are made, \begin{alignat*}{2} u&= f(x)x \quad &&v= \delta(x) \\ du&= \frac{d}{dx}[f(x)x]dx \quad &&dv= \frac{d}{dx}[\delta(x)]dx \,, \end{alignat*} and the integral is taken by parts. \begin{align} \int_{-\infty}^{\infty} f(x) x \frac{d}{dx}[\delta(x)] dx &= f(x)x\delta(x)\bigg\rvert_{-\infty}^{\infty} - \int_{-\infty}^{\infty} \delta(x) \frac{d}{dx}[f(x) x]dx\label{inters}\tag{i} \end{align} The first term on the right-hand side above is zero because \(\delta(\infty) = 0= \delta(-\infty)\), so equation (\ref{inters}) becomes \begin{align} \int_{-\infty}^{\infty} f(x) x \frac{d}{dx}[\delta(x)] dx &= - \int_{-\infty}^{\infty} \delta(x) \frac{d}{dx}[f(x) x]dx\label{int}\tag{ii} \end{align} Applying the product rule to the integral on the right-hand side of equation (\ref{int}) gives \begin{align} \int_{-\infty}^{\infty} f(x) x \frac{d}{dx}[\delta(x)] dx &= - \int_{-\infty}^{\infty} \delta(x) [f'(x) x + f(x)]dx\notag\\ &= - [0\cdot f'(0) + f(0)]\notag\\ &= -\int_{-\infty}^{\infty} \delta(x)f(x)dx \label{interss}\tag{iii} \end{align} where the definition of the delta function has been used in the second and third equalities above. Comparing the integrands of the left- and right-hand sides of equation (\ref{interss}) gives the desired equality, \[x\frac{d}{dx}[\delta(x)] = -\delta(x).\]

  15. ☸ Prove that \[\frac{d\theta}{dx} = \delta(x),\] where \(\theta\) is the Heaviside step function. Hint: Integrate \(\int_{-\infty}^{\infty} f(x) \frac{d\theta}{dx} dx \) by parts. [answer]

    Following the hint, the following definitions are made, \begin{alignat*}{2} u&= f \quad &&v= \theta \\ du&= \frac{df}{dx}dx \quad &&dv= \frac{d\theta}{dx}dx \,, \end{alignat*} and the integral is taken by parts. \begin{align} \int_{-\infty}^{\infty} f \frac{d\theta}{dx} dx &= f\theta\bigg\rvert_{-\infty}^\infty - \int_{-\infty}^{\infty}\theta \frac{df}{dx}dx \label{inty}\tag{i}\\ &=f(\infty)\theta(\infty) - f(-\infty)\theta(-\infty) - \int_{-\infty}^{0}0\, \frac{df}{dx}dx - \int_{0}^{\infty} 1\, \frac{df}{dx}dx \notag\\ &=f(\infty) - \int_{0}^{\infty} \frac{df}{dx}dx \notag\\ &=f(\infty) - f(\infty) + f(0)\notag\\ &= \int_{-\infty}^{\infty} \delta(x) f(x)dx \label{inters1}\tag{ii} \end{align} where the definition of the delta function has been used in the last line to write \(f(0)\). Lines (\ref{inty}) and (\ref{inters1}) give \begin{align*} \int_{-\infty}^{\infty} f(x) \frac{d\theta}{dx} dx = \int_{-\infty}^{\infty} \delta(x) f(x)dx \,. \end{align*} The integrands must be equal, which gives the desired result: \[\frac{d\theta}{dx} =\delta(x). \]

Vector algebra & vector calculus

Vector calculus (for whatever reason) is not listed as a topic on the math section. However, as essentially all of acoustics is formulated in terms of the calculus of vector fields, I think it is very worthy of my review.

An orthonormal vector basis may be assumed; no need to work out the proofs below in general curvilinear coordinates. Therefore, one can write \(\mathsf{v} = \vec{v}\).

Some of the problems below come from chapter 1 of Introduction to Electrodynamics by D. J. Griffiths.

  1. Suppose we have a barrel of fruit that contains \(a_x\) bananas, \(a_y\) pears, and \(a_z\) apples. Denoting \(\vec{e}_n\) as the unit vector in the \(n\) direction in space, is \(\vec{a} = a_x\vec{e}_x + a_y\vec{e}_y + a_z\vec{e}_z\), a vector? Explain. [answer]

    No, because \(\vec{a}\) does not obey coordinate transformations. For example, choosing a different set of axes does not turn a pear into a banana. By definition, "a vector is any set of three components that transforms in the same manner as a displacement when you change coordinates." (from Griffiths Introduction to Electrodynamics, section 1.1.5).

  2. How do the components \(a_x\), \(a_y\), and \(a_z\) of a vector \(\vec{a} = a_x \vec{e}_x + a_y \vec{e}_y + a_z \vec{e}_z \) transform under the translation of coordinates? \begin{align*} x' &= x\\ y' &= y-a\\ z' &= z \end{align*} In other words, what happens to \(a_x\), \(a_y\), and \(a_z\) when \(\vec{a}\) is written as \(\vec{a} = a_x \vec{e}_x' + a_y \vec{e}_y' + a_z \vec{e}_z' \)? [answer]

    The components of a vector are invariant under this transformation.

  3. How do the components of a vector transform under the inversion of coordinates? \begin{align*} x' &= -x\\ y' &= -y\\ z' &= -z \end{align*} In other words, what happens to \(a_x\), \(a_y\), and \(a_z\) when \(\vec{a}\) is written as \(\vec{a} = a_x \vec{e}_x' + a_y \vec{e}_y' + a_z \vec{e}_z' \)? [answer]

    The components are also inverted: \(a_x\mapsto -a_x\), \(a_y\mapsto -a_y\), and \(a_z\mapsto -a_z\).

  4. How does the cross product of two vectors \(\vec{u}\) and \(\vec{v}\) transform under the inversion of coordinates? Is the cross product of two vectors really a vector? [answer]

    The cross product \(\vec{w} = \vec{u} \times \vec{v}\) is invariant under the inversion, because \(\vec{w} = (-\vec{u}) \times (-\vec{v})\). Thus \(\vec{w}\) is a different kind of quantity than the vectors \(\vec{u}\) and \(\vec{v}\), whose components do change sign. It is called a pseudovector.

  5. How does the scalar triple product of \(\vec{w}\cdot(\vec{u} \times \vec{v})\) transform under the inversion of coordinates? Is the scalar triple product really a scalar? (Griffiths problem 1.10d) [answer]

    The scalar triple product transforms as \(-\vec{w}\cdot(-\vec{u} \times -\vec{v}) = -\vec{w}\cdot(\vec{u} \times \vec{v})\), i.e., the product changes sign when the coordinates are inverted. This is in contrast with the fact that scalars are invariant under coordinate inversions. Thus the scalar triple product is a different kind of quantity than an ordinary scalar. It is called a pseudoscalar.

  6. In what direction does the gradient of a function point? [answer]

    The gradient of a function points in the direction of the steepest ascent.

  7. Show that \( |\vec{u}\times \vec{v}|^2 + (\vec{u}\cdot \vec{v})^2 = |\vec{u}|^2|\vec{v}|^2 \). [answer]

    \begin{align*} |\vec{u}\times \vec{v}|^2 + (\vec{u}\cdot \vec{v})^2 &=\epsilon_{ijk}u_j v_k \epsilon_{ilm}u_l v_m + u_i v_i u_j v_j\\ &= \epsilon_{ijk}\epsilon_{ilm}u_j v_k u_l v_m + u_i v_i u_j v_j\\ &=(\delta_{jl}\delta_{km}-\delta_{jm}\delta_{kl})u_j v_k u_l v_m + u_i v_i u_j v_j\\ &=\delta_{jl}\delta_{km}u_j v_k u_l v_m -\delta_{jm}\delta_{kl}u_j v_k u_l v_m + u_i v_i u_j v_j \\ &=u_l u_l v_k v_k - u_m v_m v_k u_k + u_i v_i u_j v_j \\ &=u_l u_l v_k v_k - u_i v_i u_j v_j + u_i v_i u_j v_j \\ &=|\vec{u}|^2|\vec{v}|^2 \end{align*}

  8. ☸ Prove that \(\gradient (\vec{a} \cdot \vec{b}) = \vec{a}\times(\gradient \times \vec{b}) + \vec{b}\times(\gradient\times \vec{a}) + (\vec{a}\cdot\gradient)\vec{b} + (\vec{b} \cdot \gradient)\vec{a}\). [answer]

    It is much easier to go from the right-hand side to the left-hand side. \begin{align*} [\vec{a}\times(\gradient \times \vec{b}) &+ \vec{b}\times(\gradient\times \vec{a}) + (\vec{a}\cdot\gradient)\vec{b} + (\vec{b} \cdot \gradient)\vec{a}]_i\\ &=\epsilon_{ijk}a_j (\epsilon_{klm}\partial_l b_m) + \epsilon_{ijk}b_j (\epsilon_{klm}\partial_l a_m) + (a_j \partial_j)b_i + (b_j\partial_j)a_i\\ &= \epsilon_{kij}\epsilon_{klm}a_j \partial_l b_m + \epsilon_{kij}\epsilon_{klm} b_j \partial_l a_m + (a_j \partial_j)b_i + (b_j\partial_j)a_i\\ &= (\delta_{il}\delta_{jm}-\delta_{im}\delta_{lj})a_j \partial_l b_m + (\delta_{il}\delta_{jm}-\delta_{im}\delta_{lj}) b_j \partial_l a_m + (a_j \partial_j)b_i + (b_j\partial_j)a_i\\ &= a_m \partial_i b_m - a_l \partial_l b_i + b_j \partial_i a_j- b_l \partial_l a_i + (a_j \partial_j)b_i + (b_j\partial_j)a_i\\ &= a_j \partial_i b_j + b_j \partial_i a_j\\ &= \partial_i (a_j b_j)\\ &= [\gradient (\vec{a} \cdot \vec{b})]_i \end{align*}

    Since this holds for all three components, the proof is complete.
  9. Prove that \(\gradient\times(\vec{a}\times \vec{b}) = (\vec{b}\cdot \gradient)\vec{a} - (\vec{a}\cdot\gradient)\vec{b} + \vec{a}(\gradient\cdot\vec{b}) -\vec{b}(\gradient\cdot\vec{a})\). [answer]

    \begin{align*} [\gradient\times(\vec{a}\times \vec{b})]_i &= \epsilon_{ijk}\partial_j (\epsilon_{klm}a_l b_m)\\ &=\epsilon_{kij}\epsilon_{klm} \partial_j (a_l b_m)\\ &= (\delta_{il}\delta_{jm}- \delta_{im}\delta_{jl})\partial_j (a_l b_m)\\ &= \partial_j (a_i b_j)- \partial_j (a_j b_i)\\ &= b_j \partial_j a_i + a_i \partial_j b_j - b_i \partial_j a_j - a_j \partial_j b_i\\ &= [(\vec{b}\cdot \gradient)\vec{a} - (\vec{a}\cdot\gradient)\vec{b} + \vec{a}(\gradient\cdot\vec{b}) -\vec{b}(\gradient\cdot\vec{a})]_i \end{align*} Since this holds for each component \(i\), the identity follows.

  10. Prove that the divergence of the curl is 0. [answer]

    \begin{align} \gradient \cdot (\gradient\times\vec{u}) &= \partial_i (\epsilon_{ijk}\partial_j u_k)\notag\\ &= -\partial_i (\epsilon_{jik}\partial_j u_k) \tag{permute}\\ &= -\partial_j (\epsilon_{ijk}\partial_i u_k)\tag{relabel}\\ &= -\partial_i (\epsilon_{ijk}\partial_j u_k)\tag{equality of mixed pt'ls} \end{align} The first and last lines say that \(\partial_i (\epsilon_{ijk}\partial_j u_k) = -\partial_i (\epsilon_{ijk}\partial_j u_k)\), which is of the form \(a = -a\), so \(a=0\). This completes the proof: \(\gradient \cdot (\gradient\times\vec{u}) = 0\).

  11. Prove that the curl of the gradient is 0. [answer]

    \begin{align} [\gradient \times (\gradient f)]_i &= \epsilon_{ijk}\partial_j (\gradient f)_k\notag\\ &=\epsilon_{ijk}\partial_j \partial_k f\notag\\ &=-\epsilon_{ikj}\partial_j \partial_k f\tag{permute}\\ &=-\epsilon_{ijk}\partial_k \partial_j f\tag{relabel}\\ &=-\epsilon_{ijk}\partial_j \partial_k f\tag{equality of mixed pt'ls}\\ \end{align} Since \(-\epsilon_{ijk}\partial_j \partial_k f = \epsilon_{ijk}\partial_j \partial_k f\), it follows that \(\epsilon_{ijk}\partial_j \partial_k f=0\), which completes the proof: \(\gradient \times (\gradient f) = 0\).

  12. Show that \(\gradient\times(\gradient\times \vec{a}) = \gradient(\gradient \cdot\vec{a})-\nabla^2 \vec{a}\). [answer]

    \begin{align*} [\gradient\times(\gradient\times \vec{a})]_i &= \epsilon_{ijk} \partial_j (\gradient\times \vec{a})_k \\ &=\epsilon_{ijk} \partial_j \epsilon_{klm} \partial_l a_m \\ &=\epsilon_{ijk} \epsilon_{klm} \partial_j \partial_l a_m \\ &=\epsilon_{kij} \epsilon_{klm} \partial_j \partial_l a_m \\ &=(\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}) \partial_j \partial_l a_m \\ &=\delta_{il}\delta_{jm}\partial_j \partial_l a_m -\delta_{im}\delta_{jl}\partial_j \partial_l a_m \\ &= \partial_i\partial_j a_j -\partial_l \partial_l a_i \\ &= \partial_i(\gradient\cdot \vec{a}) -[\nabla^2 \vec{a}]_i \\ &=[\gradient(\gradient \cdot\vec{a})-\nabla^2 \vec{a}]_i \end{align*}

  13. In Cartesian coordinates, \(\vec{r} = x\vec{e}_x + y\vec{e}_y + z\vec{e}_z\) and thus \(r = \sqrt{x^2 + y^2 + z^2}\). Find \(\gradient r \). [answer]

    \begin{align*} \gradient r &= \gradient (x^2 + y^2 +z^2)^{1/2} \\ &= \frac{1}{r}(x\vec{e}_x + y\vec{e}_y + z\vec{e}_z)\\ &= \frac{\vec{r}}{r} \end{align*}

  14. Let \(\vec{R} = \vec{r}- \vec{r}'\) be the separation vector. Thus in Cartesian coordinates, \(R = \sqrt{(x-x')^2+(y-y')^2+(z-z')^2}\). Find \(\gradient R^2\). [answer]

    \begin{align*} \gradient R^2 = 2\vec{R} \end{align*}

  15. Find \(\gradient R^{-1}\). [answer]

    \begin{align*} \gradient (1/R) = -\vec{R}/R^3 = -\vec{e}_R/R^2 \end{align*}

  16. What is the coordinate-free definition of the divergence of a vector field \(\vec{F}\)? [answer]

    \[\gradient \cdot \vec{F} = \lim_{V\to 0}\frac{1}{V}\oint_S \vec{F}\cdot d\vec{a}\,.\]

  17. ☸ Find the divergence of \(\vec{e}_r/r^2 = \vec{r}/r^3\) in both Cartesian and spherical coordinates. Note that the \(r\) component of the divergence of \(\vec{v}\) in spherical coordinates is \(\frac{1}{r^2}\frac{\partial}{\partial r}(r^2 v_r)\). Explain the result. [answer]

    In Cartesian coordinates, \begin{align*} \gradient \cdot \frac{\vec{e}_r}{r^2}&= \gradient \cdot\frac{\vec{r}}{r^3} \\ &=\gradient \cdot \frac{x\vec{e}_x + y\vec{e}_y + z\vec{e}_z}{(x^2+y^2+z^2)^{3/2}}\\ &=\gradient \cdot ({x\vec{e}_x + y\vec{e}_y + z\vec{e}_z})(x^2+y^2+z^2)^{-3/2}\\ &=\frac{\partial}{\partial x} [x(x^2+y^2+z^2)^{-3/2}] +\frac{\partial}{\partial y} [y(x^2+y^2+z^2)^{-3/2}] +\frac{\partial}{\partial z} [z(x^2+y^2+z^2)^{-3/2}] \\ &= -\frac{3}{2}(x^2 + y^2 + z^2)^{-5/2}2x^2 + (x^2 + y^2 +z^2)^{-3/2} + \dots\\ &= \frac{3}{r^3} - \frac{3\cdot 2(x^2+y^2+z^2)}{2r^5}\\ &= \frac{3}{r^3} - \frac{3r^2}{r^5} = \frac{3}{r^3} - \frac{3}{r^3} = 0\,. \end{align*} In spherical coordinates, \begin{align*} \gradient \cdot \frac{\vec{e}_r}{r^2}&= \frac{1}{r^2} \frac{\partial}{\partial r} \bigg(r^2 \frac{1}{r^2}\bigg)\\ &= \frac{1}{r^2} \frac{\partial}{\partial r} (1) = 0\,.\\ \end{align*} It is a paradoxical result, because \(\vec{e}_r/r^2\) is a function that points radially away from the origin in all directions, falling off quadratically in amplitude. That is a pretty divergent function. The resolution to the paradox is that \(\vec{e}_r/r^2\to\infty\) at the origin: all of the divergence is concentrated there. Indeed, applying the divergence theorem over a sphere centered on the origin shows that \(\gradient\cdot(\vec{e}_r/r^2) = 4\pi\delta^3(\vec{r})\).

  18. What is the coordinate-free definition of the curl of a vector field \(\vec{F}\)? [answer]

    \[(\gradient \times \vec{F})\cdot\hat{n} = \lim_{S\to 0}\frac{1}{S}\oint_C \vec{F}\cdot d\vecell\,,\] where \(C\) is the boundary of the surface \(S\) and \(\hat{n}\) is its unit normal.

  19. Construct a non-constant vector function that has zero divergence and zero curl everywhere. [answer]

    The vector-valued function \(\vec{f} = x\vec{e}_x - y\vec{e}_y\) has zero divergence and zero curl everywhere. Another example is \(\vec{f} = y\vec{e}_x + x\vec{e}_y\).

  20. Calculate the line integral of the function \(\vec{v} = y^2 \vec{e}_x + 2x(y+1)\vec{e}_y\) from \(\vec{a} = (1,1,0)\) to \(\vec{b} = (2,2,0)\), following the path from \((x,y,z) = (1,1,0)\) to \((2,1,0)\) to \((2,2,0)\). Then calculate the line integral following the path from \(\vec{a} = (1,1,0)\) to \(\vec{b} = (2,2,0)\) directly. Finally calculate \(\oint \vec{v}\cdot d\vecell\) for the loop going from \((1,1,0)\) to \((2,1,0)\) to \((2,2,0)\) to \((1,1,0)\). [answer]

    The integral for the first path \(C_1\) from \((x,y,z) = (1,1,0)\) to \((2,1,0)\) (for which \(y= 1\)) and then to \((2,2,0)\) (for which \(x = 2\)) is \begin{align*} \int_{C_1} \vec{v}\cdot d\vecell &= \int_{C_1} [y^2 \vec{e}_x + 2x(y+1)\vec{e}_y]\cdot [dx\vec{e}_x + dy\vec{e}_y]\\ &=\int_{C_1} [y^2 dx + 2x(y+1)dy]\\ &= \int_1^2 1^2 dx + \int_{1}^2 2\cdot 2(y+1)dy\\ &= \int_1^2 dx + 4\int_{1}^2 (y+1)dy\\ &= 1 + 10 = 11\\ \end{align*} The integral for the second path \(C_2\) from \((x,y,z) = (1,1,0)\) to \((2,2,0)\) (for which \(y = x\)) is \begin{align*} \int_{C_2} \vec{v}\cdot d\vecell &= \int_{C_2} [y^2 \vec{e}_x + 2x(y+1)\vec{e}_y]\cdot [dx\vec{e}_x + dy\vec{e}_y]\\ &=\int_{C_2} [x^2 dx + 2y(y+1)dy]\\ &= \int_1^2 x^2 dx + \int_{1}^2 2y(y+1)dy\\ &= 10 \end{align*} The integral over the closed loop is easy to calculate, now that the individual paths forming the loop have been calculated. For the loop going from \((1,1,0)\) to \((2,1,0)\) to \((2,2,0)\) and finally back to \((1,1,0)\), \begin{align*} \oint \vec{v}\cdot d\vecell = \int_{C_1} \vec{v}\cdot d\vecell - \int_{C_2} \vec{v}\cdot d\vecell = 11-10 = 1 \end{align*}

  21. Calculate the surface integral of \(\vec{v} = 2xz \vec{e}_x + (x+2)\vec{e}_y + y(z^2-3)\vec{e}_z\) over five sides (excluding the bottom) of a cubical box whose bottom edge extends from \(x = 0\) to \(x = 2\). [answer]

    The surface integral \(\int_S \vec{v} \cdot d\vec{a}\) is to be calculated. It is given by the sum of five integrals: at \(x=2\) and \(x=0\) (for which \(d\vec{a} = \pm dydz\,\vec{e}_x\)), at \(y=2\) and \(y=0\) (for which \(d\vec{a}= \pm dxdz\,\vec{e}_y\)), and at the top \(z=2\) (for which \(d\vec{a} = dxdy\,\vec{e}_z\)). \begin{align*} \int_S \vec{v} \cdot d\vec{a}&= \int_S [2xz \vec{e}_x + (x+2)\vec{e}_y + y(z^2-3)\vec{e}_z] \cdot d\vec{a}\\ &=\int_{0}^{2}\int_{0}^{2} 4z\, dy dz + \int_{0}^{2}\int_{0}^{2} 0\, dy dz + \int_{0}^{2}\int_{0}^{2} (x+2)\, dx dz - \int_{0}^{2}\int_{0}^{2} (x+2)\, dx dz + \int_{0}^{2}\int_{0}^{2} y\, dx dy\\ &=16 + 0 + 12 - 12 + 4 = 20 \end{align*}

  22. Calculate the volume integral of the scalar-valued field \(T = xyz^2\) over a right triangular prism of height \(z= 3\), whose lower base is the triangle with vertices at the origin, \((x,y,z) = (1,0,0)\), and \((x,y,z) = (0,1,0)\). [answer]

    The volume integral \(\int_{V} T\, dV\) is to be calculated: \begin{align*} \int_{V} xyz^2\, dV &= \int_{z=0}^{z=3}\int_{x=0}^{x=1} \int_{y=0}^{y=1-x} xyz^2\, dy\, dx\, dz\\ &=\int_{z=0}^{z=3} z^2 dz \int_{x=0}^{x=1} x\, \frac{(1-x)^2}{2}\, dx\\ &=\frac{9}{2} \int_{x=0}^{x=1} (x-2x^2+x^3)\, dx \\ &=\frac{9}{2} \bigg(\frac{1}{2}-\frac{2}{3}+\frac{1}{4}\bigg) \\ &=\frac{9}{2}\cdot \frac{1}{12} = \frac{3}{8} \end{align*}
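    The same triple integral in SymPy (a sketch, assuming SymPy is available):

        # Sketch: integrate x*y*z**2 over the prism (innermost integral first).
        import sympy as sp

        x, y, z = sp.symbols('x y z')
        print(sp.integrate(x*y*z**2, (y, 0, 1 - x), (x, 0, 1), (z, 0, 3)))  # 3/8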

  23. State the gradient theorem. What is the meaning of a conservative field? Provide examples of conservative fields and nonconservative fields from physics. [answer]

    The gradient theorem states that \begin{align*} \int_{\vec{a}}^{\vec{b}} \gradient V \cdot d\vecell &= V(\vec{b}) - V(\vec{a}) \end{align*} A conservative field is one that is given by a scalar potential function. For example, \(\gradient V \) above is a conservative field. The gradient theorem says that the line integral of a conservative field is independent of the path. Examples of conservative fields are the gravitational and electrostatic fields, given by minus the gradient of the gravitational and electric potential, respectively. Meanwhile, the magnetic field is given by the curl of a vector potential, and is therefore not a conservative field.

  24. Calculate the line integral \(\int_{\vec{a}}^{\vec{b}} \gradient T \cdot d\vecell\) for \(T = x^2 + 4xy + 2yz^3\) where \(\vec{a} = (0,0,0)\) and \(\vec{b} = (1,1,1)\) for the path \((0,0,0)\to (1,0,0)\to (1,1,0)\to (1,1,1)\). Then calculate the integral for the path \((0,0,0)\to (0,0,1)\to (0,1,1)\to (1,1,1)\). Next, calculate the integral for the parabolic path \(z=x^2,\, y=x\). Finally, check the result with the gradient theorem. [answer]

    For the first path from \((0,0,0)\to (1,0,0)\to (1,1,0)\to (1,1,1)\), \begin{align*} \int_{\vec{a}}^{\vec{b}} \gradient T \cdot d\vecell&= \int_{\vec{a}}^{\vec{b}} \gradient (x^2 + 4xy + 2yz^3) \cdot (dx\vec{e}_x + dy\vec{e}_y + dz\vec{e}_z)\\ &=\int_{\vec{a}}^{\vec{b}} [(2x + 4y)\vec{e}_x + (4x +2z^3)\vec{e}_y + (6yz^2)\vec{e}_z] \cdot (dx\vec{e}_x + dy\vec{e}_y + dz\vec{e}_z)\\ &=\int_{\vec{a}}^{\vec{b}} [(2x + 4y)dx + (4x +2z^3)dy + (6yz^2)dz] \end{align*} The integral splits into three integrals. For \((0,0,0)\to (1,0,0)\), \(y=z=0\). For \((1,0,0)\to (1,1,0)\), \(x=1\) and \(z=0\). For \((1,1,0)\to (1,1,1)\), \(x=y=1\): \begin{align*} \int_{\vec{a}}^{\vec{b}} [(2x + 4y)dx + (4x +2z^3)dy + (6yz^2)dz]&= \int_{0}^{1} 2x\, dx + \int_{0}^{1} 4\, dy + \int_{0}^{1} 6z^2\,dz\\ &=1 + 4 + 2 = 7. \end{align*} For the path \((0,0,0)\to (0,0,1)\to (0,1,1)\to (1,1,1)\), the integral is again split into three parts. For \((0,0,0)\to (0,0,1)\), \(x=y=0\). For \((0,0,1)\to (0,1,1)\), \(x=0\) and \(z=1\). For \((0,1,1)\to (1,1,1)\), \(y=z=1\): \begin{align*} \int_{\vec{a}}^{\vec{b}} [(2x + 4y)dx + (4x +2z^3)dy + (6yz^2)dz]&= \int_{0}^{1} 0\, dz + \int_{0}^{1} 2\, dy + \int_{0}^{1} (2x+4)\,dx\\ &= 0 + 2 + 5 = 7 \end{align*} Next, for the parabolic path \(z=x^2,\, y=x\), the integration reduces to a single variable, chosen here to be \(x\). The differentials are then \(dz= 2x\,dx\) and \(dy=dx\), and the integral becomes \begin{align*} \int_{{x=0}}^{{x=1}} [(2x + 4x) + (4x +2x^6) + (6x^5)(2x)]\,dx&=\int_{{x=0}}^{{x=1}} (10x + 14x^6)\,dx\\ &=5x^2 + 2x^7\bigg\rvert_{x=0}^{x=1}\\ &= 5 + 2 = 7. \end{align*} Finally, by the gradient theorem the integral is simply \begin{align} T(\vec{b}) - T(\vec{a}) &= x^2 +4xy +2yz^3\bigg\rvert_{(0,0,0)}^{(1,1,1)}\notag\\ &=1 + 4 + 2 = 7.\tag{much easier!} \end{align}
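    The path independence can be verified with sympy by parametrizing the parabolic path as \((s,s,s^2)\) (a sketch; the parameter \(s\) is my own choice):

    ```python
    import sympy as sp

    x, y, z, s = sp.symbols('x y z s')
    T = x**2 + 4*x*y + 2*y*z**3
    grad = [sp.diff(T, var) for var in (x, y, z)]

    # Parabolic path r(s) = (s, s, s^2) for s in [0, 1], so dr/ds = (1, 1, 2s)
    path, velocity = {x: s, y: s, z: s**2}, [1, 1, 2*s]
    integrand = sum(g.subs(path)*v for g, v in zip(grad, velocity))

    print(sp.integrate(integrand, (s, 0, 1)))                       # expected: 7
    print(T.subs({x: 1, y: 1, z: 1}) - T.subs({x: 0, y: 0, z: 0}))  # also 7
    ```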

  25. ☸ State and prove the divergence theorem and Stokes's theorem. [answer]

    The divergence theorem is \[\oint \vec{F}\cdot d\vec{a} = \int \gradient\cdot\vec{F}\, dV\,,\] and Stokes's theorem is \[\oint \vec{F}\cdot d\vecell = \int (\gradient\times\vec{F}) \cdot d\vec{a}\,.\] They are immediate from the definitions of divergence and curl above, but to see what motivates these definitions, see here.
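    The divergence theorem can be spot-checked with sympy on a sample field over the unit cube (the field is my own choice, not from the text):

    ```python
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    F = sp.Matrix([x*y, y*z, z*x])  # a sample field

    div = sum(sp.diff(F[i], var) for i, var in enumerate((x, y, z)))
    vol = sp.integrate(div, (x, 0, 1), (y, 0, 1), (z, 0, 1))

    # Outward flux through the six faces of the unit cube
    flux  = sp.integrate(F[0].subs(x, 1) - F[0].subs(x, 0), (y, 0, 1), (z, 0, 1))
    flux += sp.integrate(F[1].subs(y, 1) - F[1].subs(y, 0), (x, 0, 1), (z, 0, 1))
    flux += sp.integrate(F[2].subs(z, 1) - F[2].subs(z, 0), (x, 0, 1), (y, 0, 1))

    print(vol, flux)  # both 3/2, as the theorem demands
    ```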

  26. List everything that can be concluded about a vector field \(\vec{F}\) that has a vanishing curl, i.e., \(\gradient\times \vec{F} = 0\). [answer]

    • The line integral \(\int_{\vec{a}}^{\vec{b}} \vec{F}\cdot d\vecell\) is independent of the path.
    • The line integral over any closed loop is zero, i.e., \(\oint_{\vec{a}}^{\vec{a}} \vec{F}\cdot d\vecell = 0\).
    • \(\vec{F}\) is given by the gradient of a scalar potential, i.e., \(\vec{F} = -\gradient V\).

  27. List everything that can be concluded about a vector field \(\vec{F}\) that has a vanishing divergence, i.e., \(\gradient\cdot \vec{F} = 0\). [answer]

    • The surface integral \(\int_{S} \vec{F}\cdot d\vec{a}\) is independent of the surface (but the line integral \(\int_{\vec{a}}^{\vec{b}} \vec{F}\cdot d\vecell\) still depends on the path).
    • The integral over any closed surface is zero, i.e., \(\oint_{S}\vec{F}\cdot d\vec{a} = 0\).
    • \(\vec{F}\) is given by the curl of a vector potential, i.e., \(\vec{F} = \gradient \times \vec{A}\).

Partial differential equations

  1. Provide the name of each of the following partial differential equations. Also list a few physical phenomena described by each: \begin{align} \nabla^2 u &= 0\label{Laplace} \tag{i}\\ \nabla^2 u &= f(\vec{r})\label{Poisson}\tag{ii}\\ \nabla^2 u &= \frac{1}{\alpha^2}\frac{\partial u }{\partial t}\label{Diffusion}\tag{iii}\\ \nabla^2 u &= \frac{1}{v^2}\frac{\partial^2 u}{\partial t^2}\label{Wave}\tag{iv}\\ \nabla^2 F + k^2 F&= 0\label{Helmholtz}\tag{v}\\ i\hbar \frac{\partial\Psi}{\partial t}&= -\frac{\hbar^2}{2m}\nabla^2 \Psi + V\Psi \label{Schrodinger}\tag{vi} \end{align} Which two equations above both reduce to equation (\ref{Helmholtz}) upon assuming time-harmonic solutions? [answer]

    Equation (\ref{Laplace}) is the Laplace equation, which describes the electric potential in a space that does not contain charge, the gravitational potential in a space that does not contain mass, and the velocity potential in an incompressible fluid in a space that does not contain vortices, sources, and sinks, to name a few examples.

    Equation (\ref{Poisson}) is the Poisson equation, which describes the electric potential in a space that contains charge, the gravitational potential in a space that contains mass, and the velocity potential in an incompressible fluid in a space that contains sources and sinks.

    Equation (\ref{Diffusion}) is the diffusion equation, which describes the temperature as a function of time in a space with no heat sources, or the concentration of a diffusing substance.

    Equation (\ref{Wave}) is the wave equation, which is the most beautiful PDE, describing light, sound, water waves, and gravitational waves, to name a few.

    Equation (\ref{Helmholtz}) is the Helmholtz equation, which is the spatial part of both the wave equation and the diffusion equation: substituting a time-harmonic solution \(u = F(\vec{r})e^{-i\omega t}\) into equation (\ref{Wave}) or equation (\ref{Diffusion}) reduces either one to the form of equation (\ref{Helmholtz}), with real \(k^2 = \omega^2/v^2\) in the first case and complex \(k^2 = i\omega/\alpha^2\) in the second. These are the two equations asked for.

    Equation (\ref{Schrodinger}) is the Schrodinger equation, which describes nonrelativistic quanta. The paraxial wave equation also has the form of the Schrodinger equation, only with the temporal derivative in equation (\ref{Schrodinger}) replaced with a spatial derivative (\(\partial/\partial z\), for example), and with the Laplacian in equation (\ref{Schrodinger}) replaced with the transverse Laplacian (\(\partial^2/\partial x^2 + \partial^2/\partial y^2\), for example).

  2. Solve Laplace's equation in 2D \(\frac{\partial^2 V}{\partial x^2} + \frac{\partial^2 V}{\partial y^2} = 0\) where \begin{align*} V&=0\quad \text{for}\quad y=0\\ V&=0\quad \text{for}\quad y=a\\ V&=V_0\quad \text{for}\quad x=-b\\ V&=V_0\quad \text{for}\quad x=b\,. \end{align*} are the boundary conditions. [answer]

    Separating variables gives \begin{align*} \frac{X''}{X} + \frac{Y''}{Y} = 0 \end{align*} Introducing a separation constant gives \begin{align} \frac{X''}{X} &= k^2 \label{equationforX}\tag{i}\\ \frac{Y''}{Y} &= -k^2 \label{equationforY}\tag{ii} \end{align} The sign in front of \(k^2\) determines which variable is harmonic and which is exponential. Since the boundary conditions on \(Y\) are zero, \(Y\) is chosen to be the harmonic solution. The boundary conditions on \(X\) are finite, and thus \(X\) is chosen to be the exponential solution (an exponential cannot satisfy a zero boundary condition, because exponentials \(\neq 0 \)). Equation (\ref{equationforX}) gives \begin{align*} X = Ae^{ kx} + Be^{-kx}, \end{align*} and equation (\ref{equationforY}) gives \begin{align*} Y = C\cos ky + D\sin ky \,. \end{align*} Thus the general solution is \begin{align*} V = XY = (Ae^{ kx} + Be^{-kx})(C\cos ky + D\sin ky ) \end{align*} Applying the boundary condition \(V=0\) for \(y=0\) sets \(C=0\), and applying the boundary condition \(V=0\) for \(y = a\) sets \(ka = n\pi\), where \(n = 1, 2, 3, \dots\). Meanwhile, applying the boundary condition \(V=V_0\) at both \(x = -b \) and \(x=b\) requires that \(A=B\). The solution is therefore \begin{align*} V = \sum_{n=1}^\infty C_n \cosh (n\pi x/a)\sin (n\pi y/a) \end{align*} where \(\cosh x = (e^{x} + e^{-x})/2\) has been used. To find the expansion coefficients \(C_n\), employ the remaining boundary condition, \(V=V_0\) at \(x=b\), and invoke the orthogonality of sines: \begin{align*} V_0 &= \sum_{n=1}^\infty C_n \cosh (n\pi b/a)\sin (n\pi y/a)\\ V_0 \int_0^{a}\sin (m\pi y/a)\,dy &= \sum_{n=1}^\infty C_n \cosh (n\pi b/a)\int_{0}^{a}\sin (n\pi y/a)\sin (m\pi y/a)\,dy\\ V_0 \frac{a}{m\pi}\big[1 - \cos m\pi\big] &= \sum_{n=1}^\infty C_n \cosh (n\pi b/a)\frac{a}{2}\delta_{nm}\\ C_n &= \frac{2V_0}{n\pi \cosh (n\pi b/a)}\big[1 - \cos n\pi\big]\\ &= \begin{cases} 0 \quad & n=2,4,6,\dots\\ \frac{4V_0}{n\pi\cosh(n\pi b/a)}\quad & n=1,3,5,\dots \end{cases} \end{align*} The solution is thus fully determined.
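    A quick numerical sanity check: at \(x=b\) the \(\cosh\) factors cancel and the series should reproduce \(V_0\). A sketch with parameter choices of my own (\(a = b = V_0 = 1\)):

    ```python
    import numpy as np

    a, b, V0 = 1.0, 1.0, 1.0
    y = np.linspace(0.1, 0.9, 5)
    V = np.zeros_like(y)
    for n in range(1, 400, 2):  # odd n only
        Cn = 4*V0/(n*np.pi*np.cosh(n*np.pi*b/a))
        V += Cn*np.cosh(n*np.pi*b/a)*np.sin(n*np.pi*y/a)  # evaluated at x = b
    print(V)  # each entry should be close to V0 = 1
    ```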

  3. Solve the 2D Laplace equation \(\frac{\partial^2 V}{\partial x^2} + \frac{\partial^2 V}{\partial y^2} = 0\) where now \begin{align*} V&=0\quad \text{for}\quad y=0\\ V&=f(x)\quad \text{for}\quad y=H\\ \frac{\partial V}{\partial x}&=0\quad \text{for}\quad x=0\\ \frac{\partial V}{\partial x}&=0\quad \text{for}\quad x=L \end{align*} are the boundary conditions. Then solve the problem for \(f(x) = V_0x/L\). [answer]

    Separating variables leads to \begin{align*} \frac{X''}{X} + \frac{Y''}{Y} &= 0 \end{align*} Introduce the separation constant \(k^2\). Now one has to choose whether to set \(k^2\) equal to \(\frac{X''}{X} \) or \(\frac{Y''}{Y}\). The one that equals \(k^2\) will have decay/growth solutions, and the other one (that equals \(-k^2\)) will have harmonic solutions. Since \(X'\) vanishes at both boundaries, \(\frac{X''}{X} = -k^2\), and thus \(\frac{Y''}{Y} = k^2\). This is a good rule of thumb: the solutions that must vanish at two boundaries are the harmonic ones. This choice gives the general solution \begin{align*} V = XY = (A\cos kx+ B\sin kx)(Ce^{ky}+ De^{-ky})\,. \end{align*} Now the boundary conditions are invoked. Since \(\frac{\partial V}{\partial x}=0\) for \(x=0\), \(B =0\), and since \(\frac{\partial V}{\partial x}=0\) for \(x=L\), \(\sin kL =0\), which means that \(kL = n\pi\), where \(n= 0, 1, 2,\dots\). Meanwhile, since \(V=0\) at \(y=0\), \(C = -D\). The \(n=0\) (i.e., \(k=0\)) case must be treated separately: there \(X''=0\) and \(Y''=0\), and the solution satisfying the three homogeneous boundary conditions is \(V \propto y\). Thus the general solution is \begin{align*} V(x,y) = A_0 y + \sum_{n=1}^{\infty}A_n \cos \frac{n\pi x}{L} \sinh \frac{n\pi y}{L}\,. \end{align*} Now the final boundary condition at \(y=H\) is applied, which allows for the determination of the coefficients using the orthogonality of cosines: \begin{align*} f(x) &= A_0 H + \sum_{n=1}^{\infty}A_n \cos \frac{n\pi x}{L} \sinh \frac{n\pi H}{L}\\ \implies A_0 &= \frac{1}{LH}\int_0^L f(x)\, dx\,,\qquad A_n= \frac{2}{L\sinh (n\pi H/L)}\int_{0}^{L}f(x)\cos \frac{n\pi x}{L}\, dx \quad (n\geq 1)\,. \end{align*} Therefore, the coefficients above, along with the general solution for \(V(x,y)\), solve the Laplace equation for the given boundary conditions.

    The coefficients are calculated for \(f(x) = V_0 x/L\). The average of \(f\) gives \(A_0 = V_0/2H\), and for \(n \geq 1\), \begin{align*} A_n&= \frac{V_0}{L}\frac{2}{L\sinh (n\pi H/L)}\int_{0}^{L} x\cos \frac{n\pi x}{L}\, dx \\ &= \begin{cases} -\frac{4V_0}{n^2\pi^2\sinh (n\pi H/L)} \quad \text{for } n = 1,3, 5,\dots\\ 0 \quad \text{for } n = 2, 4, 6,\dots \end{cases} \end{align*}
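    The integral in the first line can be delegated to sympy (a sketch; \(n\) is declared a positive integer so that \(\sin n\pi\) and \(\cos n\pi\) simplify):

    ```python
    import sympy as sp

    x, L, V0 = sp.symbols('x L V_0', positive=True)
    n = sp.symbols('n', positive=True, integer=True)

    In = sp.simplify(sp.integrate((V0*x/L)*sp.cos(n*sp.pi*x/L), (x, 0, L)))
    print(In)  # V0*L*((-1)**n - 1)/(n*pi)**2, i.e. -2*V0*L/(n*pi)**2 for odd n
    # Multiplying by 2/(L*sinh(n*pi*H/L)) reproduces A_n above.
    ```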

  4. Solve the 1D diffusion equation \(\kappa \frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial t}\) where \begin{align*} u&=0\quad \text{for}\quad x=-L\\ u&=0\quad \text{for}\quad x=L\\ u&=\begin{cases} 1,\quad x\geq0\\ 0,\quad x<0 \end{cases} \quad \text{at }t =0 \end{align*} are the boundary and initial conditions. [answer]

    Pick an orthogonal set of basis functions by writing \(u= X(x)T(t)\), giving \begin{align*} \kappa \frac{X''}{X} &= \frac{T'}{T} \end{align*} Next, introduce the separation constant \(-\kappa k^2\) (harmonic spatial solutions are expected, since \(u\) vanishes at both boundaries). This gives \begin{align*} X'' + k^2 X &= 0\\ \implies X&= A\cos kx + B\sin kx\,. \end{align*} Meanwhile, \begin{align*} T' &= -\kappa k^2T\\ \implies T&= e^{-\kappa k^2t}\,. \end{align*} Thus \begin{align*} XT = (A\cos kx + B\sin kx )e^{-\kappa k^2t}\,. \end{align*} Applying the first two BCs gives \(A \cos (-k L) + B \sin (-k L) =0\) and \(A \cos (k L) + B \sin (k L) =0\). The even and odd properties of cosine and sine can be used in the first equation to obtain a full-rank linear system of equations: \begin{align*} A \cos (k L) - B \sin (k L) &=0\\ A \cos (k L) + B \sin (k L) &=0 \end{align*} Adding the two equations gives \(\cos kL = 0\), i.e., \(k = (2n-1)\pi/2L\), and subtracting them gives \(\sin kL = 0\), i.e., \(k = n\pi/L\), where \(n=1,2,3,\dots\). Thus the general solution is in terms of two infinite series: \begin{align*} u = \sum_{n=1}^{\infty} \bigg\lbrace A_n \cos \frac{(2n-1)\pi x}{2L}\exp{\bigg[-\kappa\bigg(\frac{(2n-1)\pi}{2L}\bigg)^2t\bigg]} + B_n\sin \frac{n\pi x}{L}\exp{\bigg[-\kappa\Big(\frac{n\pi}{L}\Big)^2t\bigg]}\bigg\rbrace \end{align*} What a terrible looking equation. Anyhow, denoting the Heaviside step function as \(H(x)\), the initial condition is invoked: \begin{align*} H(x)= \sum_{n=1}^{\infty} \bigg\lbrace A_n \cos \frac{(2n-1)\pi x}{2L} + B_n\sin \frac{n\pi x}{L}\bigg\rbrace \end{align*} Orthogonality can be used to find \(A_n\) and \(B_n\): \begin{align*} \int_{-L}^{L}H(x)\cos \frac{(2m-1)\pi x}{2L}\, dx &= \sum_{n=1}^{\infty} A_n \int_{-L}^L \cos\frac{(2n-1)\pi x}{2L} \cos\frac{(2m-1)\pi x}{2L}\, dx\\ \int_{0}^{L}\cos \frac{(2m-1)\pi x}{2L}\, dx &= L A_m\\ A_n &= \frac{2}{\pi(2n-1)} \sin \frac{(2n-1)\pi x}{2L}\bigg\rvert_{x=0}^{x=L}\\ &=\frac{2}{\pi(2n-1)} \sin \frac{(2n-1)\pi}{2}\\ &=\frac{2(-1)^{n-1}}{\pi(2n-1)}\,, \end{align*} and \begin{align*} \int_{-L}^{L}H(x)\sin \frac{m\pi x}{L}\, dx &= \sum_{n=1}^{\infty} B_n \int_{-L}^L \sin\frac{n\pi x}{L} \sin\frac{m\pi x}{L}\, dx\\ \int_{0}^{L}\sin \frac{m\pi x}{L}\, dx &= LB_m\\ B_n &= -\frac{1}{n\pi}\cos\frac{n\pi x}{L}\bigg\rvert_{x=0}^{x=L}\\ &=-\frac{1}{n\pi}[\cos (n\pi) - 1] = -\frac{1}{n\pi}[(-1)^n- 1]\\ &= \begin{cases} 0,\quad n = 2,4,6,\dots\\ \frac{2}{n\pi},\quad n = 1,3,5,\dots \end{cases} \end{align*} Inserting the calculated values for \(A_n\) and \(B_n\) into the general solution above gives the complete solution.

    Note, however, that it is easier to shift the original coordinates by \(L\), i.e., \(x' = x + L\), so that the boundary and initial conditions are \begin{align*} u&=0\quad \text{for}\quad x'=0\\ u&=0\quad \text{for}\quad x'=2L\\ u&=\begin{cases} 1,\quad x'\geq L\\ 0,\quad x'<L \end{cases} \quad \text{at }t =0\,. \end{align*} Then one must simply remember to shift the coordinates back, i.e., \(x = x' - L\), when presenting the final solution.
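    As a sanity check on the series, the \(t=0\) sum should reproduce the step initial condition. A sketch with \(L = 1\) and 400 terms:

    ```python
    import numpy as np

    L = 1.0
    x = np.array([-0.5, -0.1, 0.1, 0.5])
    u0 = np.zeros_like(x)
    for n in range(1, 400):
        An = 2*(-1)**(n - 1)/(np.pi*(2*n - 1))
        u0 += An*np.cos((2*n - 1)*np.pi*x/(2*L))   # cosine (even) part -> 1/2
        if n % 2 == 1:                             # B_n vanishes for even n
            u0 += 2/(n*np.pi)*np.sin(n*np.pi*x/L)  # sine (odd) part -> sign(x)/2
    print(u0)  # close to H(x): ~0 for x < 0, ~1 for x > 0
    ```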

  5. Solve the Schrodinger equation for a potential that is 0 inside a cube of side length \(L\), and infinite at the boundaries. Discuss the degeneracy of modes. [answer]

    Let \(\Psi = XYZT\). Separating variables leads to \begin{align*} -\frac{\hbar^2}{2m}\bigg(\frac{X''}{X} + \frac{Y''}{Y} + \frac{Z''}{Z}\bigg) = i\hbar \frac{T'}{T} \end{align*} The time dependence is easily solved, introducing the separation constant \(E\): \begin{align*} i\hbar \frac{T'}{T}&= E\\ \implies T &= \exp (-iEt/\hbar) \end{align*} Meanwhile the spatial dependence is solved by writing \begin{align*} \frac{X''}{X} + \frac{Y''}{Y} + \frac{Z''}{Z} &= -\frac{2mE}{\hbar^2}\\ \frac{X''}{X} &= - k_x^2\\ \frac{Y''}{Y} &= - k_y^2\\ \frac{Z''}{Z} &= - k_z^2\,. \end{align*} Since the boundary condition is \(0\) at \(X,Y,Z = 0\text{ and }L\), one throws out the cosine solutions and obtains \begin{align*} k_x = \frac{n_x\pi}{L},\quad k_y = \frac{n_y\pi}{L},\quad k_z = \frac{n_z\pi}{L},\qquad n_x, n_y, n_z = 1, 2, 3,\dots \end{align*} Thus the eigenenergies are \begin{align*} E_{n_xn_yn_z} = \frac{\hbar^2(k_x^2 + k_y^2 + k_z^2)}{2m} = \frac{\hbar^2 \pi^2}{2mL^2}\big(n_x^2 + n_y^2 + n_z^2\big)\,. \end{align*} Any permutation of distinct \((n_x, n_y, n_z)\) leaves the energy unchanged, and there are also accidental degeneracies (e.g., \(3^2+3^2+3^2 = 1^2+1^2+5^2 = 27\)); there is no nice closed-form expression for the degeneracy in terms of the indices.
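    The degeneracies are easy to tabulate by brute force, which makes the accidental ones apparent (a sketch):

    ```python
    from collections import Counter
    from itertools import product

    # Tally E (in units of hbar^2 pi^2 / 2mL^2) = nx^2 + ny^2 + nz^2
    N = 12
    counts = Counter(nx*nx + ny*ny + nz*nz
                     for nx, ny, nz in product(range(1, N + 1), repeat=3))

    # 27 = 3^2+3^2+3^2 = 1^2+1^2+5^2: one state plus three permutations
    for E in (3, 6, 9, 27):
        print(E, counts[E])  # -> 1, 3, 3, 4
    ```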

  6. List the eigenfunctions that solve the Helmholtz equation in Cartesian coordinates, cylindrical coordinates, and spherical coordinates. Given this, which equations in problem (1) of this section are solved spatially? What are the time dependences for each of these equations? [answer]

    The eigenfunctions that solve the Helmholtz equation in Cartesian coordinates are exponentials of (almost always) imaginary arguments (i.e., sines and cosines), which describe spatially harmonic waves. However, note that exponentials of real arguments are also eigenfunctions of the Helmholtz equation in Cartesian coordinates. These correspond to decay and (less often) growth (evanescent waves). This is an important distinction between the Helmholtz equation and its paraxial approximation in acoustics and optics: the paraxial approximation does not contain evanescent waves.

    The eigenfunctions that solve the Helmholtz equation in cylindrical coordinates consist of radial, azimuthal, and axial functions. The radial eigenfunctions are Bessel and Neumann functions \(J_n\) and \(N_n\). The azimuthal eigenfunctions are harmonics in the angle (sines and cosines of \(n\theta\)). The axial eigenfunctions are likewise spatial harmonics (sines and cosines).

    The eigenfunctions that solve the Helmholtz equation in spherical coordinates consist of radial, polar, and azimuthal functions. The radial eigenfunctions are spherical Bessel and spherical Neumann functions \(j_n\) and \(n_n\). The polar eigenfunctions are the associated Legendre functions \(P^m_n\). The azimuthal eigenfunctions are spatial harmonics (sines and cosines).

    Given these eigenfunctions, the spatial parts of equation (\ref{Diffusion}) (the diffusion equation) and equation (\ref{Wave}) (the wave equation) are solved. Also, if \(V\) in equation (\ref{Schrodinger}) (the Schrodinger equation) is \(0\) in a box, sphere, or cylinder and \(\infty\) at the boundaries, then equation (\ref{Schrodinger}) takes the form of equation (\ref{Diffusion}) (with an imaginary diffusivity), and thus it too shares its spatial eigenfunctions with the Helmholtz equation. For example, a quantum particle in an infinite spherical well has the same spatial eigenfunctions as sound in a spherical enclosure, or temperature in a hollow sphere.

    Further, it is interesting to think of the time eigenfunctions of the diffusion equation, the wave equation, and the Schrodinger equation. All share the same spatial eigenfunctions, but the time eigenfunctions for the wave equation are harmonic, while those for the diffusion equation are exponential decay/growth. Meanwhile, what are the time eigenfunctions for the Schrodinger equation (regardless of \(V\))? They would be exponential decay/growth if it were not for the \(i\) in the Schrodinger equation, which makes them harmonic. Thus the Schrodinger equation is physically more like a wave equation than a diffusion equation, but is mathematically more like a diffusion equation than a wave equation.

    On this point, one should note that the diffusion equation does not obey time-reversal symmetry, while the wave equation does; the Schrodinger equation sits in between, in that it is time-reversal symmetric only if time reversal is accompanied by complex conjugation of \(\Psi\).

  7. ☸ One solution to the 1D wave equation \(\frac{\partial^2 p }{\partial x^2} - \frac{1}{c^2}\frac{\partial^2 p}{\partial t^2} = 0\) is \begin{align}\label{wavers}p = \begin{Bmatrix} e^{\alpha x}\\ e^{-\alpha x} \end{Bmatrix} \begin{Bmatrix} e^{\alpha c t}\\ e^{-\alpha c t} \end{Bmatrix} \tag{a} \end{align} What a bizarre-looking solution! If you do not believe it is a solution to the wave equation, you can check it for yourself by setting the separation constant equal to \(X''/X = \alpha^2\), which gives \(T''/T = (\alpha c)^2\)! Is this solution a wave? Meanwhile, a solution to the 1D diffusion equation \(\frac{\partial^2 p }{\partial x^2} - \kappa^2\frac{\partial p}{\partial t} = 0\) is \begin{align}\label{heaters}p = \begin{Bmatrix} \cos{k x}\\ \sin k x \end{Bmatrix} e^{-(k/\kappa)^2t} = \begin{Bmatrix} e^{ik x}\\ e^{-ik x} \end{Bmatrix} e^{-(k/\kappa)^2t} \tag{b} \end{align} Is this solution a wave? [answer]

    Equation (\ref{wavers}) describes solutions that grow (or decay) exponentially in space and time. At first glance, this is no wave! We are not used to such bizarre behaviours, i.e., waves that grow and decay exponentially in space and time.

    However, a wave need not be harmonic to be a wave: any disturbance of the form \(f(x\pm ct)\) solves the wave equation (equivalently, any such \(f\) can be synthesized from harmonic waves by Fourier analysis). A second glance at equation (\ref{wavers}) shows that it is in fact of this form, since \(e^{\alpha x}e^{\pm\alpha ct} = e^{\alpha(x\pm ct)}\). Physically, equation (\ref{wavers}) is a disturbance that propagates at a finite speed \(c\) and whose waveform is exponential. Thus equation (\ref{wavers}) indeed describes wave motion.

    Meanwhile, equation (\ref{heaters}) is not a solution to the wave equation because it cannot be written in the form \(f(x\pm ct)\). Spatially, the solutions are harmonic (wave-like), but that is not a sufficient criterion for a physical wave.
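    Both claims can be checked mechanically with sympy: (a) solves the wave equation as written, and (b) solves the diffusion equation as written:

    ```python
    import sympy as sp

    x, t = sp.symbols('x t')
    alpha, c, k, kappa = sp.symbols('alpha c k kappa', positive=True)

    p_wave = sp.exp(alpha*x)*sp.exp(alpha*c*t)  # = exp(alpha*(x + c*t)), i.e. f(x + ct)
    print(sp.simplify(sp.diff(p_wave, x, 2) - sp.diff(p_wave, t, 2)/c**2))  # 0

    p_heat = sp.sin(k*x)*sp.exp(-(k/kappa)**2*t)
    print(sp.simplify(sp.diff(p_heat, x, 2) - kappa**2*sp.diff(p_heat, t)))  # 0
    ```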

  8. List the eigenfunctions that solve the Laplace equation in Cartesian coordinates, cylindrical coordinates, and spherical coordinates. [answer]

    The eigenfunctions that solve the Laplace equation in Cartesian coordinates are equivalent to those that solve the Helmholtz equation in Cartesian coordinates, i.e., exponentials of real and imaginary arguments.

    Two of the three eigenfunctions that solve the Laplace equation in cylindrical coordinates are identical to those that solve the Helmholtz equation in cylindrical coordinates: Bessel and Neumann functions in \(r\) and sines and cosines in \(\theta\). However, the eigenfunctions of the Laplace equation for \(z\) are exponentials of real arguments, while those of the Helmholtz equation are exponentials of imaginary arguments.

    Similarly, two of the three eigenfunctions that solve the Laplace equation in spherical coordinates are equivalent to those that solve the Helmholtz equation in spherical coordinates: Legendre polynomials in \(\cos\theta\) and sines and cosines in \(\phi\) (Blackstock's \(\psi\)). However, the eigenfunctions of the Laplace equation for \(r\) are \(r^l \) and \(r^{-(l+1)}\), while those of the Helmholtz equation are spherical Bessel and Neumann functions \(j_n\) and \(n_n\).

  9. Solve the Laplace equation \(\nabla^2 V = 0\) (assuming polar symmetry) where \(V=V_0(\theta)\) on the boundary of a sphere of radius \(a\), where \[ \frac{\partial}{\partial r} \bigg(r^2 \frac{\partial V}{\partial r}\bigg) + \frac{1}{\sin \theta} \frac{\partial }{\partial \theta}\bigg(\sin\theta \frac{\partial V}{\partial \theta}\bigg) = 0 \] is Laplace's equation in spherical coordinates with no azimuthal dependence. Does the coordinate \(z= \cos\theta\) actually correspond to the \(z\) axis? [answer]

    Let \(V = R(r)\Theta(\theta)\) and substitute into the above: \begin{align*} \frac{\partial}{\partial r} (r^2 R'\Theta) + \frac{1}{\sin \theta} \frac{\partial }{\partial \theta}(\sin\theta R\Theta') &=0\,. \end{align*} Divide by \(R\Theta\): \begin{align*} \frac{1}{R}\frac{\partial}{\partial r} (r^2 R') + \frac{1}{\Theta}\frac{1}{\sin\theta} \frac{\partial }{\partial \theta}(\sin\theta \Theta') &=0\,. \end{align*} The variables have been separated in the two terms above. A separation constant \(l(l+1)\) is introduced. This looks like cheating, but it is done because we anticipate the solutions of the polar equation to be Legendre polynomials, which converge only when the separation constant has this form: \begin{align*} \frac{1}{\Theta}\frac{1}{\sin\theta} \frac{d }{d \theta}(\sin\theta \Theta') &= -l(l+1)\\ \frac{d }{d \theta}(\sin\theta \Theta') &= -l(l+1)\sin\theta\, \Theta \end{align*} This bizarre-looking equation looks more familiar if the change of variable \(z= \cos\theta\) is made. (This relation holds only on a unit sphere, so no, in general \(z\) does not correspond to the Cartesian \(z\) coordinate.) Then \(\frac{dz}{d\theta} = -\sin\theta\), and derivatives with respect to \(\theta\) become \begin{align*} \frac{d}{d\theta} &= \frac{dz}{d\theta}\frac{d}{dz} = -\sin\theta\frac{d}{dz}\,. \end{align*} In particular \(\Theta' = -\sin\theta \frac{d\Theta}{dz}\), so the differential equation becomes \begin{align*} -\sin\theta\frac{d }{d z}\bigg({-\sin^2\theta}\, \frac{d\Theta}{dz}\bigg) &= -l(l+1)\sin\theta\, \Theta\,. \end{align*} Dividing the equation above by \(\sin\theta\) and writing \(\sin^2\theta = 1-z^2\) gives \begin{align*} \frac{d }{d z}\bigg([1-z^2]\frac{d\Theta}{dz}\bigg) &= -l(l+1)\, \Theta\,. \end{align*} Expanding the derivatives and writing everything on the left-hand side gives \begin{align*} (1-z^2)\frac{d^2 \Theta}{d z^2} - 2z\frac{d\Theta}{dz} + l(l+1)\Theta&= 0 \end{align*} This is the canonical form of Legendre's equation. See the first question of the ODE section above for the solution. Meanwhile, the radial equation is \begin{align*} \frac{d}{d r} \bigg(r^2 \frac{dR}{dr}\bigg) &= l(l+1)R \end{align*} The general solution to this equation is (just memorize it): \begin{align*} R(r) = A_l r^l + \frac{B_l}{r^{l+1}} \end{align*} Thus the general solution to Laplace's equation in spherical coordinates with no azimuthal dependence is \begin{align*} V = \sum_{l = 0}^{\infty} \bigg( A_l r^l + \frac{B_l}{r^{l+1}} \bigg)P_l(\cos\theta) \end{align*}

    To satisfy the boundary condition that \(V= V_0(\theta)\) at \(r = a\), first set \(B_l = 0\), because those terms diverge at the origin. Then set \[\sum_{l=0}^\infty A_l a^l P_l(\cos\theta) = V_0(\theta).\] The coefficients \(A_l\) can be found using the orthogonality of the Legendre polynomials: \[A_l = \frac{2l+1}{2a^l} \int_0^\pi V_0(\theta) P_l(\cos\theta) \sin\theta\, d\theta.\]
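    The coefficient formula can be exercised with sympy for a sample boundary potential (my choice: \(V_0(\theta) = \cos^2\theta\), for which \(\cos^2\theta = (2P_2 + P_0)/3\)):

    ```python
    import sympy as sp

    theta = sp.symbols('theta')
    a = sp.symbols('a', positive=True)
    V0 = sp.cos(theta)**2  # sample boundary potential

    A = []
    for l in range(4):
        Pl = sp.legendre(l, sp.cos(theta))
        integral = sp.integrate(V0*Pl*sp.sin(theta), (theta, 0, sp.pi))
        A.append(sp.simplify(sp.Rational(2*l + 1, 2)/a**l*integral))
    print(A)  # expected: [1/3, 0, 2/(3*a**2), 0]
    ```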

    You can probably guess that the solution to Laplace's equation (with no symmetry assumed) is \begin{align*} V = \sum_{l = 0}^{\infty}\sum_{m=0}^{l} \bigg( A_{lm} r^l + \frac{B_{lm}}{r^{l+1}} \bigg)P^m_l(\cos\theta)(C_{lm}\cos m\phi + D_{lm}\sin m\phi), \end{align*} where \(\phi\) is the azimuthal angle.

  10. What is the solution to Poisson's equation \(\nabla^2 u = f(\vec{r})\) in terms of the appropriate Green's function, \(G(\vec{r}|\vec{r}')\)? [answer]

    The solution is \[u(\vec{r}) = \int_V G(\vec{r}|\vec{r}') f(\vec{r}') dV' \] where the Green's function \(G(\vec{r}|\vec{r}')\) is the free-space Green's function plus a solution of Laplace's equation, the latter chosen so that \(G\) satisfies the boundary conditions of the problem.

  11. Solve Laplace's equation \(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0\) for a semi-infinite plate satisfying the following boundary conditions: \begin{align*} u(x,\infty) &= 0\\ u(0,y) &=0\\ u(x,0) &=\begin{cases} T_0 \quad \text{for }\quad x\in (0,a]\\ 0\quad \text{for }\quad x>a\\ \end{cases} \end{align*} Note that \begin{align*} \begin{cases}f(x) &= \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} g(k) \sin kx\, dk\\ g(k) &= \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(x) \sin kx\, dx\,\end{cases} \end{align*} is the sine Fourier transform pair. [answer]

    Write the solution \(u\) as the product of basis functions \(XY\). The PDE becomes \begin{align*} \frac{X''}{X} + \frac{Y''}{Y} = 0\,. \end{align*} Note that the eigenfunctions \(Y\) must vanish at \(y=\infty\). Anticipating exponential solutions for \(Y\), set \(\frac{Y''}{Y} = k^2\). Thus \(Y \,\propto \, e^{-ky}\), where the growth solutions have been tossed because they are unphysical. Meanwhile, \(\frac{X''}{X} = -k^2\), so \(X = A\cos kx + B\sin kx\). Since \(X(0) = 0\), the \(\cos kx\) term cannot be included, so \(A =0\). This leaves the general solution \begin{align*} u(x,y) = Be^{-ky}\sin kx \,. \end{align*} Now the boundary condition at \(y=0\) needs to be used to find the coefficient \(B\). However, \(u\) at the boundary changes value from \(T_0\) to \(0\) at \(x=a\). Thus a continuous variable is needed to satisfy the boundary condition (rather than a discrete variable): \begin{align*} u(x,y) = \int_{0}^\infty B(k)e^{-ky}\sin (kx)\, dk\,. \end{align*} At \(y=0\), the solution is \begin{align}\tag{i}\label{invertitoffthatway} u(x,0) = \int_{0}^\infty B(k)\sin (kx)\, dk\,. \end{align} Now, finding the coefficients \(B(k)\) amounts to taking an inverse Fourier transform. The Fourier sine transform pair is \begin{align*} f(x) &= \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} g(k) \sin kx\, dk\\ g(k) &= \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(x) \sin kx\, dx\,. \end{align*} The pair is written so that the first equation above has the form of equation (\ref{invertitoffthatway}): \begin{align*} u(x,0) &= \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} \bigg[\sqrt{\frac{\pi}{2}}B(k)\bigg] \sin (kx)\, dk\\ B(k) &= \frac{2}{\pi} \int_{0}^{\infty} u(x,0) \sin (kx)\, dx\,. \end{align*} The second equation above gives the expansion coefficients, upon letting \(u(x,0)\) be \(T_0\) for \(x\leq a\) and \(0\) otherwise: \begin{align*} B(k) &= \frac{2T_0}{\pi} \int_{0}^{a} \sin kx\, dx\\ &= \frac{2T_0}{\pi k} (1 -\cos ka)\,. \end{align*} Thus the integral solution of the PDE is \begin{align*} u(x,y) = \frac{2T_0}{\pi}\int_{0}^\infty \frac{1 -\cos ka}{k} \sin (kx)\, e^{-ky}\, dk\,. \end{align*}
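    The final integral has no elementary closed form, but it is easy to evaluate numerically. A sketch using scipy's quad, with parameter choices of my own (\(T_0 = a = 1\)):

    ```python
    import numpy as np
    from scipy.integrate import quad

    T0, a = 1.0, 1.0

    def u(x, y):
        integrand = lambda k: (1 - np.cos(k*a))/k*np.sin(k*x)*np.exp(-k*y)
        value, _ = quad(integrand, 0, np.inf, limit=200)
        return 2*T0/np.pi*value

    print(u(0.5, 0.1))  # inside the strip, near the boundary: close to T0
    print(u(2.0, 0.1))  # outside the strip: close to 0
    print(u(0.5, 1.0))  # decays with increasing y
    ```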