This page covers the following list of math topics:

solutions of ordinary and partial differential equations, power series solutions, linear independence, elementary linear algebra, determinants, Taylor series, eigenvalue problems, orthogonality, Fourier series and integrals, integration by parts, vector algebra, complex numbers, special functions (e.g., Bessel functions and Legendre polynomials).

I have organized these topics in a way that progresses from basic mathematics to more difficult concepts:

- Review of basics
- Taylor series
- Fourier series and transforms
- Linear algebra
- Ordinary differential equations
- Orthogonality and special functions
- Vector algebra & vector calculus
- Partial differential equations

Concepts and problems I find challenging are denoted by the wheel of dharma, ☸.

- State the fundamental theorem of algebra (d'Alembert's theorem).
- State the rational root theorem.
- Solve \(x^3 - x^2 - 5x -3 = 0.\)
- Solve \(x^3 - x^2 - 8 x - 6 = 0\).
- Solve \(x^4 - 2x^3 - 3x^2 + 8x -4 = 0\).
- Decompose \(1/(x^2 + 2x-3)\) into partial fractions.
- Decompose \((x^3+16)/(x^3 - 4x^2 +8x)\) into partial fractions.
- Decompose \(1/(x^3-1)\) into partial fractions.
- Find the roots of \(\exp(2z)=2i\).
- Evaluate \((-i)^{1/3}\).
- Find the real part of \(e^{-ix}/(1+e^{a+ib})\).
- State the fundamental theorem of calculus.
- What is the relationship between differentiability and continuity?
- \(\lim_{x\to 0}\frac{e^x-1}{x^2+x}=\)
- \(\lim_{x\to 0}\frac{2\sin x - \sin 2x}{x - \sin x}=\)
- \(\lim_{x\to \infty} x^n\, e^{-x} =\)
- \(d\sinh x/dx = \)
- \(d\cosh x/dx = \)
- \(d\tan x/dx = \)
- \(d\cot x/dx = \)
- \(d\sec x/dx = \)
- \(d\csc x/dx = \)
- \(\int dx(x^4 + x^3 + x^2 + 1)/(x^2 + x - 2)=\)
- Show that \(\int u dv = uv - \int v du \).
- \(\int \ln x dx = \)
- \( \int e^x \sin x dx = \)
- What are some general guidelines about trigonometric substitution?
- \( \int \sqrt{a^2-x^2} dx = \)
- \( \int \frac{dx}{\sqrt{a^2-x^2}} = \)
- \( \int \frac{dx}{a^2+x^2} = \)
- ☸ \( \int \sqrt{a^2+x^2}dx = \)
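Several of the exercises above are easy to spot-check by machine. Here is a minimal sketch (assuming sympy is available; any CAS would do) for the first cubic and the first partial-fraction decomposition:

```python
# Quick checks of two of the above answers using sympy (not part of the
# original notes; worth working by hand first).
import sympy as sp

x = sp.symbols('x')

# Roots of x^3 - x^2 - 5x - 3 = 0: the rational root theorem suggests
# trying x = ±1, ±3. The factorization is (x - 3)(x + 1)^2.
roots = sp.solve(x**3 - x**2 - 5*x - 3, x)
print(sorted(roots))  # [-1, 3]

# Partial fractions of 1/(x^2 + 2x - 3); equals 1/(4(x-1)) - 1/(4(x+3)).
print(sp.apart(1/(x**2 + 2*x - 3), x))
```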

- What is the difference between a Taylor series and a Maclaurin series?
- Write the first three nonzero terms of the Maclaurin series representation of \(e^x \), \(\sin x \), \(\cos x \), and \(\tan x \).
- Show that \( \cos x = (e^{jx} + e^{-jx})/2\), \( \sin x = (e^{jx} - e^{-jx})/2j\).
- Write the first three terms of the Maclaurin series representation of \(\ln (1+x)\).
- Write the first three terms of the Maclaurin series representation of \(\ln (1-x)\).
- Write the first three terms of the Taylor series representation of \(\ln (-x)\) about \(x= -1\). Verify your answer by checking that it can be obtained from the previous expansion for \(\ln (1-x)\) by shifting the latter one unit in the negative \(x\) direction.
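These expansions can all be checked with sympy's `series()` (a sketch, assuming sympy; the printed output includes the order term):

```python
# Maclaurin expansions asked for above, plus the Taylor expansion of
# ln(-x) about x = -1, as a check on the hand calculations.
import sympy as sp

x = sp.symbols('x')

for f in (sp.exp(x), sp.sin(x), sp.cos(x), sp.log(1 + x), sp.log(1 - x)):
    print(f, '->', sp.series(f, x, 0, 4))

# About x = -1: with u = x + 1, ln(-x) = ln(1 - u) = -u - u^2/2 - ...
print(sp.series(sp.log(-x), x, -1, 3))
```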

- The sine-cosine form of the Fourier series is \[f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty \big[a_n \cos(nx) + b_n\sin(nx) \big]\,. \] Why is \(a_0\) not included in the sum?
- Use orthogonality to determine \(a_0\), \(a_n\), and \(b_n\).
- If \(f(x)\) is periodic with period \(2\pi\), has a finite number of maxima, minima, and discontinuities, and if \(\int_{-\pi}^\pi |f(x)|dx \) is finite, what is the relationship between \(f(x)\) and its Fourier series? What are these conditions called?
- Obtain the Fourier series expansion coefficients of a periodic square wave, which is \(1\) for \(-\pi \leq x\leq 0 \) and \(-1\) for \(0 < x \leq \pi \).
- Obtain the Fourier series expansion coefficients of a periodic sawtooth wave, which is \(x/\pi\) for \(-\pi \leq x\leq \pi \).
- Obtain the Fourier series expansion coefficients for some of the periodic functions listed here.
- The complex-exponential form of the Fourier series is \[f(x) = \sum_{n = -\infty}^{\infty} c_n e^{inx}\] Use orthogonality to determine \(c_n\).
- Attempt some of these problems from Boas's *Methods* book, using the sine-cosine and complex exponential Fourier series.
- State Parseval's theorem.
- In what sense is the Fourier transform a generalization of the Fourier series? What are the conditions for the convergence of the Fourier transform?
- Given the function \begin{align*} f(x) = \begin{cases} 1,\quad &x \in [-1,1]\\ 0,\quad & |x| \in (1,\infty) \end{cases} \end{align*} calculate its Fourier transform \(c(k)\). Write the integral representation of \(f(x)\) in terms of the calculated \(c(k)\).
- Given the function \begin{align*} f(x) = \begin{cases} x,\quad &x \in [0,1]\\ 0,\quad & \text{otherwise} \end{cases} \end{align*} calculate its Fourier transform \(c(k)\). Write the integral representation of \(f(x)\) in terms of the calculated \(c(k)\).
- What does a Gaussian pulse in the time domain, i.e., \(\exp (-t^2/T^2)\), look like in the frequency domain?
- ☸ Now consider a sine wave modulated by a Gaussian envelope in the time domain, i.e., \(\exp(j\omega_0 t) \exp (-t^2/T^2)\). What does this signal look like in the frequency domain?
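The sawtooth coefficients above can be checked symbolically. A sketch, assuming sympy, with \(n\) declared a positive integer so that \(\sin(n\pi)\) and \(\cos(n\pi)\) simplify:

```python
# Fourier coefficients of the sawtooth f(x) = x/pi on [-pi, pi].
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)

f = x / sp.pi

a_n = sp.simplify(sp.integrate(f * sp.cos(n*x), (x, -sp.pi, sp.pi)) / sp.pi)
b_n = sp.simplify(sp.integrate(f * sp.sin(n*x), (x, -sp.pi, sp.pi)) / sp.pi)

print(a_n)  # 0, since the sawtooth is odd
print(b_n)  # equals 2*(-1)**(n+1)/(pi*n), up to sympy's preferred form
```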

These questions come from *Introduction to Linear Algebra* by Gilbert Strang and the appendix of *Introduction to Quantum Mechanics* by D. J. Griffiths.

Towards the end of this section there is some overlap with *tensor* algebra.

- Define linear independence.
- The columns of an invertible matrix are _______ _______.
- How can one easily check to see that \(n\) vectors, each one \(n\times 1\), are linearly independent?
- Determine whether the vectors \(\vec{v}_1 = (1,1)\) and \(\vec{v}_2 = (-3,2)\) are linearly independent.
- Taking \(\vec{v}_1 = (1,1)\) and \(\vec{v}_2 = (-3,2)\) as basis vectors, what linear combination of \(\vec{v}_1\) and \(\vec{v}_2\) gives the vector \(\vec{v}_3 = (2,1)\)?
- Invert \begin{align*} {A} = \begin{pmatrix} 0 & 1 & 1\\ 0 & 0 & 1\\ 2 & 1 & 0 \end{pmatrix} \end{align*} using row operations. Check the result by using Cramer's rule for the inverse, \({A}^{-1} = \frac{1}{\det{A}} {C}^\mathsf{T}\), where \({C}\) is the matrix of cofactors, which is \((-1)^{i + j}\) times the determinant of \({A}\) when the \(i\)th row and \(j\)th column are crossed out.
- Invert \begin{align*} A = \begin{pmatrix} -2 & 1 & 0\\ -1 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix} \end{align*} using row operations. Check the result by using \({A}^{-1} = \frac{1}{\det{A}} {C}^\mathsf{T}\).
- What is the rank of a matrix? How does one find the rank of a matrix?
- Find the rank of \begin{align*} \begin{pmatrix} 1 & 2 & 1 \\ -2 & -3 & 1 \\ 3 & 5 & 0 \end{pmatrix} \end{align*} using row operations.
- Find the rank of \begin{align*} \begin{pmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9\\ 1 & 5 & 3 & 1\\ 1 & 2 & 0 & 8 \end{pmatrix} \end{align*} What is the rank of the transpose of this matrix?
- ☸ What are the four fundamental subspaces corresponding to an \(m\times n\) matrix? State the fundamental theorem of linear algebra.
- Prove the triangle inequality, \(|\vec{u} + \vec{v}| \leq |\vec{u}| + |\vec{v}|\).
- \(a_x\hat{x} + a_y\hat{y} + a_z\hat{z}\) is a Cartesian 3-vector. Does the subset of all vectors with \(a_z = 0\) constitute a vector space? If so, what is its dimension; if not, why not?
- Does the subset of all vectors with \(a_z = 1\) constitute a vector space? Explain why or why not.
- Does the subset of all vectors with \(a_x = a_y = a_z\) constitute a vector space? Explain why or why not.
- Consider the collection of all polynomials in \(x\) with complex coefficients of degree less than \(N\). Does this set constitute a "vector" space? If so, suggest a convenient basis and provide its dimension.
- Do polynomials that are even functions constitute a "vector" space? What about polynomials that are odd functions?
- Do polynomials whose leading coefficient is \(1\) constitute a "vector" space?
- Do polynomials whose value is \(0\) at \(x=1\) constitute a "vector" space?
- Do polynomials whose value is \(1\) at \(x=0\) constitute a "vector" space?
- Provide definitions of the following types of matrices: symmetric, Hermitian, skew-symmetric, skew-Hermitian, orthogonal, unitary.
- ☸ Prove that a real symmetric matrix \(S\) has real eigenvalues and orthogonal eigenvectors.
- ☸ When solving \(A\vec{x} = \lambda \vec{x}\) where \(\vec{x} \neq 0\), what is the meaning of setting \(\det (A-I\lambda) = 0\)?
- Suppose \(A\vec{x} = \lambda \vec{x}\) and \(A\) is not full rank (i.e., it has at least two columns that are linearly dependent). What must at least one of its eigenvalues be?
- Find the eigenvalues and eigenvectors of \begin{align*} {A} = \begin{pmatrix} 2 & -1\\ -1 & 2 \end{pmatrix} \end{align*} as well as \({A}^2\), \({A}^{-1}\), and \({A} + 4{I}\) (without actually computing the eigenvalues and eigenvectors of the latter four matrices).
- Find the eigenvalues and eigenvectors of \begin{align*} {S} = \begin{pmatrix} 1 & -1 & 0\\ -1 & 2 & -1\\ 0 & -1 & 1 \end{pmatrix}\,. \end{align*} Does the relative orientation of the eigenvectors make sense?
- ☸ \(A\vec{x} = \lambda\vec{x}\) is a vector equation because the left- and right-hand sides are both vectorial. Recast \(A\vec{x} = \lambda\vec{x}\) into a matrix form (in which the LHS and RHS are matrices), where \(X\) is a matrix whose columns consist of the eigenvectors \(\vec{x}\), and where \(\Lambda\) is the diagonal matrix of the corresponding eigenvalues. Solve the resulting matrix equation for \(A\). How does this equation simplify if \(A\) is a symmetric matrix? What is the result for the symmetric matrix called?
- ☸ What is the recasting of a matrix \(A\) as \(\Lambda = X^{-1}AX\) called? For what matrices can this procedure be carried out? What is the significance of the matrix \(X\), and why would someone want to perform this operation? How does this particular operation relate to general changes of basis?
- ☸ Find the eigenvalues and eigenvectors of \begin{align*} {M} = \begin{pmatrix} 2 & 0 & -2\\ -2i & i & 2i\\ 1 & 0 & -1 \end{pmatrix}\,. \end{align*} Is it possible to diagonalize the matrix, i.e., find the decomposition \(M = X\Lambda X^{-1}\)? Suppose there is a vector represented by \(\vec{v} = (1,0,0)\) in the first basis; calculate the representation of this vector in the new (eigen)basis.
- Factor \begin{align*} A = \begin{pmatrix} 1 & 2 \\ 0 &3 \end{pmatrix} \end{align*} into \(A = X\Lambda X^{-1}\). Without actually performing the diagonalization again, find the factorization for \(A^3\) and \(A^{-1}\).
- ☸ Qualitatively, what is the difference between \(\mathsf{A}\) (sans-serif) and \(A\) (italicized)? Similarly, what is the difference between \(\vec{\mathsf{v}}\) and \(\vec{v}\)?
- Let \(\mathsf{A}\) and \(\mathsf{B}\) be two dyadic tensors (dyads). Show that \((\mathsf{A}\mathsf{B})^\mathsf{T} = \mathsf{B}^\mathsf{T} \mathsf{A}^\mathsf{T}\). Hint: introduce two auxiliary vectors \(\vec{\mathsf{u}}\) and \(\vec{\mathsf{v}}\) and use the definition of transpose, \((\mathsf{A}^\mathsf{T} \vec{\mathsf{u}})\cdot \vec{\mathsf{v}} = (\mathsf{A} \vec{\mathsf{v}})\cdot \vec{\mathsf{u}}\) twice.
- Let \(\mathsf{A}\) and \(\mathsf{B}\) be two invertible dyads. Show that \((\mathsf{A}\mathsf{B})^{-1} = \mathsf{B}^{-1} \mathsf{A}^{-1}\).
- Prove that any tensor \(\mathsf{T}\) can be written as the sum of a symmetric tensor \(\mathsf{S}\) and an antisymmetric tensor \(\mathsf{A}\). Similarly prove that any tensor can be written as the sum of a real tensor \(\mathsf{R}\) and an imaginary tensor \(\mathsf{M}\). Finally prove that any tensor can be written as the sum of a Hermitian tensor \(\mathsf{H}\) and a skew-Hermitian tensor \(\mathsf{K}\).
- How does one test to see if \(\mathsf{A}\) and \(\mathsf{B}\) share the same independent eigenvectors? Provide examples in physics of two linear operators that share eigenvectors, and two linear operators that do not.
- ☸ For some tensorial linear equation \(\mathsf{A}\vec{\mathsf{x}}= \vec{\mathsf{b}}\), suppose there is one representation of this equation in the primed basis \('\),
\[A'\vec{x}' = \vec{b}'\]
and another representation in the unprimed basis,
\[A \vec{x} = \vec{b}.\]
Given the unprimed basis is a linear transformation of the primed basis, i.e., \(S\vec{v}' = \vec{v}\), find the matrix representation of \(\mathsf{A}\) in the unprimed basis in terms of \(A'\) and \(S\). What is the relationship between the eigenvalues of \(\mathsf{A}\), \(A\), and \(A'\)? What are \(A\) and \(A'\) called with respect to one another? How does this change of basis relate to *diagonalization*?
- ☸ Show that the eigenvalues \(\lambda\) satisfying \(A\vec{x} = \lambda \vec{x}\) (unprimed basis) are invariant under the transformation to the primed \('\) basis, where the linear transformation between unprimed and primed bases is \(\vec{v} = S\vec{v}'\). In other words, show that \(A'\vec{x}' =\lambda \vec{x}'\).
- Do the trace and determinant of a tensor depend on the basis? Provide expressions for the trace and determinant of a tensor \(\mathsf{A}\) (with an \(n\times n\) matrix representation) in terms of the eigenvalues of \(\mathsf{A}\).
- ☸ Find the eigenvalues and eigenvectors of \begin{equation*} \begin{pmatrix} 1 & 1& 1\\ 1 & 1& 1\\ 1 & 1& 1 \end{pmatrix}\,. \end{equation*} Comment on the orientation of the eigenvectors. Are they orthogonal? Why or why not? If not, does this contradict the earlier claim that the eigenvectors of a real symmetric matrix (problem 22) are orthogonal? And if the eigenvectors are not orthogonal, can one make them orthogonal? If so, do this.
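The eigendecomposition problems above can be sanity-checked numerically. A sketch for the \(2\times 2\) matrix \(A\) from earlier in this list (assuming numpy; the eigenvalues are \(1\) and \(3\)):

```python
# Verify A = X Λ X^{-1} and the eigenvalue maps for A², A⁻¹, and A + 4I.
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

lam, X = np.linalg.eig(A)   # columns of X are the eigenvectors
Lam = np.diag(lam)

# Reconstruct A from its eigendecomposition.
assert np.allclose(A, X @ Lam @ np.linalg.inv(X))

# A², A⁻¹, and A + 4I share the eigenvectors of A; their eigenvalues are
# λ², 1/λ, and λ + 4, respectively.
assert np.allclose(np.sort(np.linalg.eigvals(A @ A)), np.sort(lam**2))
assert np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))), np.sort(1/lam))
assert np.allclose(np.sort(np.linalg.eigvals(A + 4*np.eye(2))), np.sort(lam + 4))
print(np.sort(lam))  # [1. 3.]
```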

Each numbered equation in this section represents a unique type of differential equation. For a thorough review of this section, be sure to know how to solve each type of differential equation.

- Define a homogeneous function. Provide an example of a homogeneous function.
- Define a homogeneous polynomial and provide an example.
- What defines the homogeneity of a rational function?
- Define the linearity of a function \(f(x)\). Linear functions obey the _________ principle.
- Is a homogeneous function always linear? Is a linear function always homogeneous?
- What defines a homogeneous first-order ordinary differential equation? Provide an example of a first-order ODE that is homogeneous, as well as an example of one that is inhomogeneous.
- Classify the differential equation \begin{equation}\label{ode1}\tag{1} \frac{dy}{dx} = a(x) y\,. \end{equation} Find the general solution to equation (\ref{ode1}).
- Classify and solve \(\frac{dy}{dx} = [x + \cos(x)]y\) for \(y\).
- Classify and solve \(\frac{dy}{dx} = (y-x)^2\). The solution can remain in an implicit form. *Hint: is there a transformation that can make this ODE separable?*
- Classify and solve \(\frac{dy}{dt} = \cos(y-t)\). Follow the hint above.
- Classify and solve \( y + \sqrt{xy} = x\frac{dy}{dx}\), where \(x>0\). *Hint: is there a transformation that can make this ODE separable? This time it is a multiplicative transformation, unlike the additive transformations in the previous two examples.*
- Classify the differential equation \begin{equation}\label{ode2}\tag{2} \frac{dy}{dx} = a(x) y + b(x)\,. \end{equation} Find the general solution to equation (\ref{ode2}) by assuming the form of the solution to equation (\ref{ode1}), but with \(C = C(x)\). What is the name of this solution?
- Classify and solve \((1 + x^2)y' + 2xy = \cos x\).
- Classify the differential equation \begin{equation}\label{bern}\tag{3} \frac{dy}{dx} = a(x)y + b(x)y^n \end{equation} where \(n\neq 0,1\) is real. What is the name of this equation? What substitution does one make to facilitate its solution?
- Classify and solve \(xy dy = (y^2 + x)dx\).
- Classify the differential equation \begin{equation}\label{exact}\tag{4} M(x,y)dx + N(x,y)dy = 0\,,\quad \text{where } \frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}\,. \end{equation} What is the general solution of equation (\ref{exact}) in integral form?
- Classify and solve \((4x^3 + 3y)dx + (3x + 4y^3)dy = 0\).
- How does one solve \begin{equation}\label{integratingfactor}\tag{5} M(x,y)dx + N(x,y)dy = 0\,,\quad \text{but now where } \frac{\partial M}{\partial y} \neq \frac{\partial N}{\partial x}\,? \end{equation} What is the name of this method?
- Classify and solve \((x+2)\sin y\, dx + x \cos y \, dy=0\).
- Classify the differential equation \begin{equation}\label{secondorder}\tag{6} ay'' + by' + cy = 0\,, \end{equation} where \(a\), \(b\), and \(c\) are constants, and where the prime \('\) will now be used for notational ease to signify the derivative. How does one solve this equation? Three possible cases emerge. What are they?
- Classify and solve \(y''' + 4y'' + 9y' + 10y = 0\).
- Classify and solve \(y''' + 6y'' + 12y' + 8y = 0\).
- What is the definition of the Wronskian? What information does it provide?
- Determine whether \(y_1 = x\) and \(y_2 = \ln x\) are a fundamental pair, and if so, on what interval. What about \( y_1 = \arccos \frac{x}{\pi}\) and \(y_2 = \arcsin \frac{x}{\pi}\)?
- ☸ Classify the differential equation
\begin{equation}\label{homo2}\tag{7}
a(x) y'' + b(x) y' + c(x) y = 0\,.
\end{equation}
Given one solution \(y_1\) to equation (\ref{homo2}), how can the other solution \(y_2\) be found? What is this method called?
*Hint: Let \(y_2 = uy_1\). Insert this into equation (\ref{homo2}). Introduce \(z= u'\), thereby reducing the order of the equation (this is where the name of the method comes from). Solve for \(z\), integrate to obtain \(u\), and then \(y_2 = uy_1\). I doubt this would be on the exam, as it is too involved.*
- Given that \(y_1 = x\) is a solution to \(x^2 y'' -x(x+2) y' + (x+2) y = 0\), classify this equation and find the general solution.
- Classify the differential equation \begin{equation}\label{inhomo2} \tag{8} ay'' + by' + cy = g(x)\,. \end{equation} List the two ways to solve this equation.
- Classify and solve \(y'' + 4y = e^{3x}\).
- Classify and solve \(y'' + y = \sin x\).
- Classify and solve \(y'' - 4y' + 3y = 2xe^{x}\).
- Classify and solve \(y'' + 2y' + y = e^{-x}\).
- Determine the form of trial solution for \(y'' -4y' + 13y = e^{2x}\cos 3x\).
- What is the variation of parameters? Do not provide the full derivation, but provide the big picture (like, why is it called "variation of parameters"?). In what situations should the variation of parameters be used?
- Solve \(y'' + 4y = \frac{3}{\sin x}\).
- Solve \(y'' - 2y' + y = \frac{e^t}{t^2 + 1}\).
- ☸ What is
\begin{equation}\label{Cauchy-Euler}\tag{9}
ax^2 y'' + bxy' + cy = 0
\end{equation}
called? (*Why is it a bad name?*) What clever substitution does one make to go about solving it?
- What are the three cases that arise when solving equation (\ref{Cauchy-Euler})?
- Classify and solve \(2x^2 y'' + 3x y' -y = 0\).
- Let \(a_n\) be the expansion coefficients in a series solution \(y = \sum_n a_n x^n\). Define the radius of convergence of this sum.
- Calculate the radius of convergence for \(e^x\).
- Calculate the radius of convergence for \(\frac{1}{1-x}\).
- Solve \(y'' + y = 0\) by series.
- What is the name of the ordinary differential equation \(y'' - xy = 0\)? What are some of its applications? Solve it by series.
- Solve \(y'' + xy' +y = 0\) by series.
- When solving equation (\ref{homo2}) (the linear, homogeneous second-order ordinary differential equation with non-constant coefficients \(a(x) y'' + b(x) y' + c(x)y = 0\)), what must one be wary of if there is an \(x_0\) such that \(a(x_0) = 0\)? What is the condition on \(x_0\) for equation (\ref{homo2}) to be solved by series?
- Identify the differential equation \((1-x^2)y'' - 2xy' + \alpha(\alpha +1)y = 0\), and identify the singular point(s). Classify them as "regular singular" or "irregular singular."
- Solve \(2x^2 y'' + xy' - (1+x)y = 0\).
- Identify and solve \(x^2 y'' + xy' + (x^2-\nu^2)y = 0\) for \(\nu=0\). What is the name of the solution? Where does it appear in acoustics?
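Many of the ODEs above can be fed to sympy's `dsolve` as a check. A sketch for the Cauchy-Euler example \(2x^2 y'' + 3x y' - y = 0\) (sympy assumed; the trial solution \(y = x^m\) gives \(m = 1/2\) and \(m = -1\)):

```python
# Solve the Cauchy-Euler example with dsolve and verify by substitution.
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

ode = sp.Eq(2*x**2*y(x).diff(x, 2) + 3*x*y(x).diff(x) - y(x), 0)
sol = sp.dsolve(ode).rhs
print(sol)  # a combination of sqrt(x) and 1/x, matching m = 1/2 and m = -1

# Substitute the general solution back into the equation.
residual = 2*x**2*sol.diff(x, 2) + 3*x*sol.diff(x) - sol
assert sp.simplify(residual) == 0
```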

- Derive the recursion relation for the power expansion coefficients that solve Bessel's equation \[x^2 y'' + xy' + (x^2-\nu^2)y = 0\] for *arbitrary* \(\nu\). What choice gives the power expansion coefficients for \(J_n\)? What choice gives the power expansion coefficients for \(N_n\)?
- The singular points \(x=\pm 1\) of Legendre's equation \[(1-x^2)y'' -2xy' + \lambda y = 0 \] were already found to be regular singular. Thus derive the recursion relation for the power expansion coefficients that solve Legendre's equation for arbitrary \(\lambda\). Why is there never much discussion about the second solution of Legendre's equation, \(Q_n\)?
- Given the recurrence relation for Legendre polynomials, \[(n+1)P_{n+1}(x) = (2n+1)xP_{n}(x)-nP_{n-1}(x),\] and the integral result \[\int_{-1}^1 P_n(x)P_m(x)dx = \frac{2}{2n+1}\delta_{nm},\] show that \[\int_{-1}^{1} x^2 P_{n+1}(x) P_{n-1}(x) dx = \frac{2n(n+1)}{(4n^2-1)(2n+3)}\,.\]
- Given \(J_{p-1}(x)-J_{p+1}(x) = 2J_p'(x)\) and the integral relation \(J_0(x) = \frac{2}{\pi}\int_0^{\pi/2}\cos(x\sin\theta)d\theta\), show that (part a) \[J_1 = \frac{2}{\pi}\int_0^{\pi/2}\sin(x\sin\theta)\sin\theta d\theta.\] Then (part b) obtain \[x^{-1}J_1(x)= \frac{2}{\pi}\int_0^{\pi/2}\cos(x\sin\theta)\cos^2\theta d\theta\] by integrating the right-hand side of the first result by parts.
- Prove that \[\delta(kx) = \frac{1}{|k|}\delta(x)\] where \(k\) is any nonzero constant. *Hint: let \(y = kx\), and integrate a test function \(f(x) = f(y/k)\) times the Dirac delta function of \(y\) from \(-\infty\) to \(\infty\).*
- \(\int_{2}^{6} (3x^2 - 2x -1)\delta(x-3)dx = \)
- \(\int_{0}^{5} \cos x \delta(x-\pi)dx = \)
- \(\int_{0}^{3} x^3\delta(x+1)dx = \)
- \(\int_{-\infty}^{\infty} \ln(x+3)\delta(x+2)dx = \)
- \(\int_{-2}^{2} (2x+3)\delta(3x)dx = \)
- \(\int_{0}^{2} (x^3 + 3x +2)\delta(1-x)dx = \)
- \(\int_{-1}^{1} 9x^2\delta(3x+1)dx = \)
- \(\int_{-\infty}^{a} \delta(x-b)dx = \)
- ☸ Prove that \[x\frac{d}{dx}[\delta(x)] = -\delta(x).\] Hint: Integrate \(\int_{-\infty}^{\infty} f(x) x \frac{d}{dx}[\delta(x)] dx \) by parts.
- ☸ Prove that \[\frac{d\theta}{dx} = \delta(x),\]
where \(\theta\) is the Heaviside step function.
*Hint: Integrate \(\int_{-\infty}^{\infty} f(x) \frac{d\theta}{dx} dx \) by parts.*
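The delta-function integrals above make good machine checks, since the sifting property, the scaling rule \(\delta(kx) = \delta(x)/|k|\), and the support of the spike all come into play. A sketch assuming sympy's `DiracDelta`:

```python
# Spot-check three of the Dirac delta integrals above.
import sympy as sp

x = sp.symbols('x')

# Sifting: the integrand is picked out at x = 3, giving 27 - 6 - 1 = 20.
print(sp.integrate((3*x**2 - 2*x - 1)*sp.DiracDelta(x - 3), (x, 2, 6)))  # 20

# Scaling: δ(3x) = δ(x)/3, so the integral is (2·0 + 3)/3 = 1.
print(sp.integrate((2*x + 3)*sp.DiracDelta(3*x), (x, -2, 2)))  # 1

# The spike of δ(x+1) at x = -1 lies outside [0, 3], so the integral vanishes.
print(sp.integrate(x**3*sp.DiracDelta(x + 1), (x, 0, 3)))  # 0
```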

Vector calculus (for whatever reason) is not listed as a topic on the math section. However, as essentially all of acoustics is formulated in terms of the calculus of vector fields, I think it is very worthy of my review.

An orthonormal vector basis may be assumed; no need to work out the proofs below in general curvilinear coordinates. Therefore, one can write \(\vec{\mathsf{v}} = \vec{v}\).

Some of the problems below come from chapter 1 of *Introduction to Electrodynamics* by D. J. Griffiths.

- Suppose we have a barrel of fruit that contains \(a_x\) bananas, \(a_y\) pears, and \(a_z\) apples. Denoting \(\hat{n}\) as the unit vector in the \(n\) direction in space, is \(\vec{a} = a_x\hat{x} + a_y\hat{y} + a_z\hat{z}\), a vector? Explain.
- How do the components \(a_x\), \(a_y\), and \(a_z\) of a vector \(\vec{a} = a_x \hat{x} + a_y \hat{y} + a_z \hat{z} \) transform under the translation of coordinates? \begin{align*} x' &= x\\ y' &= y-a\\ z' &= z \end{align*} In other words, what happens to \(a_x\), \(a_y\), and \(a_z\) when \(\vec{a}\) is written as \(\vec{a} = a_x \hat{x}' + a_y \hat{y}' + a_z \hat{z}' \)?
- How do the components of a vector transform under the inversion of coordinates? \begin{align*} x' &= -x\\ y' &= -y\\ z' &= -z \end{align*} In other words, what happens to \(a_x\), \(a_y\), and \(a_z\) when \(\vec{a}\) is written as \(\vec{a} = a_x \hat{x}' + a_y \hat{y}' + a_z \hat{z}' \)?
- How does the cross product of two vectors \(\vec{u}\) and \(\vec{v}\) transform under the inversion of coordinates? Is the cross product of two vectors really a vector?
- How does the scalar triple product \(\vec{w}\cdot(\vec{u} \times \vec{v})\) transform under the inversion of coordinates? Is the scalar triple product really a scalar? (Griffiths problem 1.10d)
- In what direction does the gradient of a function point?
- Show that \( |\vec{u}\times \vec{v}|^2 + (\vec{u}\cdot \vec{v})^2 = |\vec{u}|^2|\vec{v}|^2 \).
- ☸ Prove that \(\vec{\nabla} (\vec{a} \cdot \vec{b}) = \vec{a}\times(\vec{\nabla} \times \vec{b}) + \vec{b}\times(\vec{\nabla}\times \vec{a}) + (\vec{a}\cdot\vec{\nabla})\vec{b} + (\vec{b} \cdot \vec{\nabla})\vec{a}\).
- Prove that \(\vec{\nabla}\times(\vec{a}\times \vec{b}) = (\vec{b}\cdot \vec{\nabla})\vec{a} - (\vec{a}\cdot\vec{\nabla})\vec{b} + \vec{a}(\vec{\nabla}\cdot\vec{b}) -\vec{b}(\vec{\nabla}\cdot\vec{a})\).
- Prove that the divergence of the curl is 0.
- Prove that the curl of the gradient is 0.
- Show that \(\vec{\nabla}\times(\vec{\nabla}\times \vec{a}) = \vec{\nabla}(\vec{\nabla} \cdot\vec{a})-\nabla^2 \vec{a}\).
- In Cartesian coordinates, \(\vec{r} = x\hat{x} + y\hat{y} + z\hat{z}\) and thus \(r = \sqrt{x^2 + y^2 + z^2}\). Find \(\vec{\nabla} r \).
- Let \(\vec{R} = \vec{r}- \vec{r}'\) be the separation vector. Thus in Cartesian coordinates, \(R = \sqrt{(x-x')^2+(y-y')^2+(z-z')^2}\). Find \(\vec{\nabla} R^2\).
- Find \(\vec{\nabla} R^{-1}\).
- What is the coordinate-free definition of the divergence of a vector field \(\vec{F}\)?
- ☸ Find the divergence of \(\hat{r}/r^2 = \vec{r}/r^3\) in both Cartesian and spherical coordinates. Note that the \(r\) component of the divergence of \(\vec{v}\) in spherical coordinates is \(\frac{1}{r^2}\frac{\partial}{\partial r}(r^2 v_r)\). Explain the result.
- What is the coordinate-free definition of the curl of a vector field \(\vec{F}\)?
- Construct a non-constant vector function that has zero divergence and zero curl everywhere.
- Calculate the line integral of the function \(\vec{v} = y^2 \hat{x} + 2x(y+1)\hat{y}\) from \(\vec{a} = (1,1,0)\) to \(\vec{b} = (2,2,0)\), following the path from \((x,y,z) = (1,1,0)\) to \((2,1,0)\) to \((2,2,0)\). Then calculate the line integral following the path from \(\vec{a} = (1,1,0)\) to \(\vec{b} = (2,2,0)\) directly. Finally calculate \(\oint \vec{v}\cdot d\vec{\ell}\) for the loop going from \((1,1,0)\) to \((2,1,0)\) to \((2,2,0)\) to \((1,1,0)\).
- Calculate the surface integral of \(\vec{v} = 2xz \hat{x} + (x+2)\hat{y} + y(z^2-3)\hat{z}\) over five sides (excluding the bottom) of a cubical box whose bottom edge extends from \(x = 0\) to \(x = 2\).
- Calculate the volume integral of the scalar-valued field \(T = xyz^2\) over a right triangular prism of height \(z= 3\), whose lower base is the triangle with vertices at the origin, \((x,y,z) = (1,0,0)\), and \((x,y,z) = (0,1,0)\).
- State the gradient theorem. What is the meaning of a conservative field? Provide examples of conservative fields and nonconservative fields from physics.
- Calculate the line integral \(\int_{\vec{a}}^{\vec{b}} \vec{\nabla}T \cdot d\vec{\ell}\) for \(T = x^2 + 4xy + 2yz^3\) where \(\vec{a} = (0,0,0)\) and \(\vec{b} = (1,1,1)\) for the path \((0,0,0)\to (1,0,0)\to (1,1,0)\to (1,1,1)\). Then calculate the integral for the path \((0,0,0)\to (0,0,1)\to (0,1,1)\to (1,1,1)\). Next, calculate the integral for the parabolic path \(z=x^2,\, y=x\). Finally, check the result with the gradient theorem.
- ☸ State and prove the divergence theorem and Stokes's theorem.
- List everything that can be concluded about a vector field \(\vec{F}\) that has a vanishing curl, i.e., \(\vec{\nabla}\times \vec{F} = 0\).
- List everything that can be concluded about a vector field \(\vec{F}\) that has a vanishing divergence, i.e., \(\vec{\nabla}\cdot \vec{F} = 0\).
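The path-dependence exercise above (the \(\vec{v} = y^2\hat{x} + 2x(y+1)\hat{y}\) problem) can be checked symbolically. A sketch assuming sympy, with each leg parametrized by \(t\):

```python
# Line integrals of v = y^2 x̂ + 2x(y+1) ŷ along the two paths above.
import sympy as sp

t = sp.symbols('t')

def line_integral(xt, yt, t0, t1):
    """Integrate v · dl along the parametrized path (x(t), y(t))."""
    vx, vy = yt**2, 2*xt*(yt + 1)
    integrand = vx*sp.diff(xt, t) + vy*sp.diff(yt, t)
    return sp.integrate(integrand, (t, t0, t1))

# Broken path (1,1) -> (2,1) -> (2,2), then the direct path y = x.
broken = line_integral(t, sp.Integer(1), 1, 2) + line_integral(sp.Integer(2), t, 1, 2)
direct = line_integral(t, t, 1, 2)

# The loop integral is their difference, since the loop closes by
# traversing the direct path in reverse.
print(broken, direct, broken - direct)  # 11 10 1
```

The nonzero loop integral confirms that this \(\vec{v}\) is not a gradient field.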

- Provide the name of each of the following partial differential equations. Also list a few physical phenomena described by each: \begin{align} \nabla^2 u &= 0\label{Laplace} \tag{i}\\ \nabla^2 u &= f(\vec{r})\label{Poisson}\tag{ii}\\ \nabla^2 u &= \frac{1}{\alpha^2}\frac{\partial u }{\partial t}\label{Diffusion}\tag{iii}\\ \nabla^2 u &= \frac{1}{v^2}\frac{\partial^2 u}{\partial t^2}\label{Wave}\tag{iv}\\ \nabla^2 F + k^2 F&= 0\label{Helmholtz}\tag{v}\\ i\hbar \frac{\partial\Psi}{\partial t}&= -\frac{\hbar^2}{2m}\nabla^2 \Psi + V\Psi \label{Schrodinger}\tag{vi} \end{align} Which two equations above both reduce to equation (\ref{Helmholtz}) upon assuming time-harmonic solutions?
- Solve Laplace's equation in 2D \(\frac{\partial^2 V}{\partial x^2} + \frac{\partial^2 V}{\partial y^2} = 0\) where \begin{align*} V&=0\quad \text{for}\quad y=0\\ V&=0\quad \text{for}\quad y=a\\ V&=V_0\quad \text{for}\quad x=-b\\ V&=V_0\quad \text{for}\quad x=b\,. \end{align*} are the boundary conditions.
- Solve the 2D Laplace equation \(\frac{\partial^2 V}{\partial x^2} + \frac{\partial^2 V}{\partial y^2} = 0\) where now \begin{align*} V&=0\quad \text{for}\quad y=0\\ V&=f(x)\quad \text{for}\quad y=H\\ \frac{\partial V}{\partial x}&=0\quad \text{for}\quad x=0\\ \frac{\partial V}{\partial x}&=0\quad \text{for}\quad x=L \end{align*} are the boundary conditions. Then solve the problem for \(f(x) = V_0x/L\).
- Solve the 1D diffusion equation \(\kappa \frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial t}\) where \begin{align*} u&=0\quad \text{for}\quad x=-L\\ u&=0\quad \text{for}\quad x=L\\ u&=\begin{cases} 1,\quad x\geq0\\ 0,\quad x<0 \end{cases} \quad \text{at }t =0 \end{align*} are the boundary and initial conditions.
- Solve the Schrödinger equation for a potential that is 0 inside a cube of side length \(l\), and infinite at the boundaries. Discuss the degeneracy of the modes.
- List the eigenfunctions that solve the Helmholtz equation in Cartesian coordinates, cylindrical coordinates, and spherical coordinates. Given this, which equations in problem (1) of this section are solved spatially? What are the time dependences for each of these equations?
- ☸ One solution to the 1D wave equation \(\frac{\partial^2 p }{\partial x^2} - \frac{1}{c^2}\frac{\partial^2 p}{\partial t^2} = 0\) is
\begin{align}\label{wavers}p =
\begin{Bmatrix}
e^{\alpha x}\\ e^{-\alpha x}
\end{Bmatrix}
\begin{Bmatrix}
e^{\alpha c t}\\ e^{-\alpha c t}
\end{Bmatrix} \tag{a}
\end{align}
What a bizarre-looking solution! If you do not believe it is a solution to the wave equation, you can check it for yourself by setting the separation constant equal to \(X''/X = \alpha^2\), which gives \(T''/T = (\alpha c)^2\)!
*Is this solution a wave?* Meanwhile, a solution to the 1D diffusion equation \(\frac{\partial^2 p }{\partial x^2} - \kappa^2\frac{\partial p}{\partial t} = 0\) is \begin{align}\label{heaters}p = \begin{Bmatrix} \cos{k x}\\ \sin k x \end{Bmatrix} e^{-(k/\kappa)^2t} = \begin{Bmatrix} e^{ik x}\\ e^{-ik x} \end{Bmatrix} e^{-(k/\kappa)^2t} \tag{b} \end{align} Is *this* solution a wave?
- List the eigenfunctions that solve the Laplace equation in Cartesian coordinates, cylindrical coordinates, and spherical coordinates.
- Solve the Laplace equation \(\nabla^2 V = 0\) (assuming azimuthal symmetry) where \(V=V_0(\theta)\) on the boundary of a sphere of radius \(a\), where \[ \frac{\partial}{\partial r} \bigg(r^2 \frac{\partial V}{\partial r}\bigg) + \frac{1}{\sin \theta} \frac{\partial }{\partial \theta}\bigg(\sin\theta \frac{\partial V}{\partial \theta}\bigg) = 0 \] is Laplace's equation in spherical coordinates with no azimuthal dependence. Does the coordinate \(z= \cos\theta\) actually correspond to the \(z\) axis?
- What is the solution to Poisson's equation \(\nabla^2 u = f(\vec{r})\) in terms of the appropriate Green's function, \(G(\vec{r}|\vec{r}')\)?
- Solve Laplace's equation \(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0\) for a semi-infinite plate satisfying the following boundary conditions: \begin{align*} u(x,\infty) &= 0\\ u(0,y) &=0\\ u(x,0) &=\begin{cases} T_0 \quad \text{for }\quad x\in (0,a]\\ 0\quad \text{for }\quad x>a\\ \end{cases} \end{align*} Note that \begin{align*} \begin{cases}f(x) &= \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} g(k) \sin kx\, dk\\ g(k) &= \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(x) \sin kx\, dx\,\end{cases} \end{align*} is the Fourier sine transform pair.
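The separated solutions used in the plate problems above are quick to verify. A sketch, assuming sympy, for the building block \(X(x)Y(y) = \cosh(kx)\sin(ky)\):

```python
# Check that cosh(kx) sin(ky) satisfies the 2D Laplace equation:
# X''/X = k^2 is balanced by Y''/Y = -k^2.
import sympy as sp

x, y, k = sp.symbols('x y k')

V = sp.cosh(k*x) * sp.sin(k*y)
laplacian = V.diff(x, 2) + V.diff(y, 2)
print(sp.simplify(laplacian))  # 0
```

The same check works for any pairing of \(\{\cosh, \sinh\}\) in one coordinate with \(\{\cos, \sin\}\) in the other, which is why the boundary conditions, not the PDE, select the particular combination.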