Identities (1) and (3) are proved in general curvilinear coordinates—see Ref. [8]. For a less general (but more straightforward) treatment, these identities are also proved in index notation for orthonormal coordinates. Note that the outer product is defined as \((\vec{u} \otimes \vec{v}) \cdot \vec{w} = (\vec{v} \cdot \vec{w}) \vec{u}\).
\(\divergence(\vec{u}\otimes\vec{v})=(\gradient \vec{u})\cdot\vec{v}+ (\divergence \vec{v})\vec{u}\). This product rule is shown below in general curvilinear coordinates, where \(\vec{g}_k\) is a covariant basis vector and \(\vec{g}^k\) is the corresponding contravariant basis vector, where \(_{,k}\) denotes differentiation with respect to the \(k^\mathrm{th}\) coordinate in the basis, and where \(\vec{u} = u^k \vec{g}_k\) means \(\sum_{k=1}^3 u^k \vec{g}_k = u^1 \vec{g}_1 + u^2 \vec{g}_2 + u^3 \vec{g}_3\). Evaluating the left-hand side of the identity to be proved yields \begin{align*} \divergence(\vec{u}\otimes\vec{v})&= (\vec{u}\otimes\vec{v})_{,k}\cdot \vec{g}^k\\ &= (\vec{u}_{,k}\otimes \vec{v} + \vec{u}\otimes \vec{v}_{,k}) \cdot \vec{g}^k \\ &= (\vec{u}_{,k}\otimes \vec{v}) \cdot \vec{g}^k + (\vec{u}\otimes \vec{v}_{,k}) \cdot \vec{g}^k \\ &=(\vec{u}_{,k}\otimes\vec{g}^k) \cdot \vec{v} + (\vec{v}_{,k}\cdot \vec{g}^k) \vec{u} \\ &=(\gradient\vec{u}) \cdot \vec{v} + (\divergence\vec{v})\vec{u}\,, \end{align*} where the first line holds by the definition of the divergence of a tensor, the second line by the product rule, the third line by the distributive property, the fourth line by the definition of the outer product, and the last line by the definitions of the gradient and divergence of a vector. Note: This was a homework problem in Ref. [8].
Below is a less general version of the proof, included for readers unfamiliar with Ricci calculus. In orthonormal index notation, the identity is proved in one line by the product rule: \begin{align*} (u_iv_j)_{,j} &= u_{i,j} v_j + u_i v_{j,j}\,. \end{align*}
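The identity can also be spot-checked componentwise with symbolic differentiation in Cartesian coordinates. The particular fields \(\vec{u}\) and \(\vec{v}\) below are arbitrary choices for illustration, not taken from the text.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

# Arbitrary smooth vector fields (hypothetical examples)
u = [x*y, sp.sin(z), x + z**2]
v = [z, x*y**2, sp.cos(x)]

# Left-hand side: [div(u (x) v)]_i = (u_i v_j)_{,j}, summed over j
lhs = [sum(sp.diff(u[i]*v[j], X[j]) for j in range(3)) for i in range(3)]

# Right-hand side: u_{i,j} v_j + v_{j,j} u_i
div_v = sum(sp.diff(v[j], X[j]) for j in range(3))
rhs = [sum(sp.diff(u[i], X[j])*v[j] for j in range(3)) + div_v*u[i]
       for i in range(3)]

# All three components agree identically
assert all(sp.simplify(lhs[i] - rhs[i]) == 0 for i in range(3))
```

A symbolic check of one example does not prove the identity, but it catches sign and index errors quickly.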
\(\divergence(\phi\dyad{T})= \phi\,\divergence\dyad{T}+ \dyad{T}\cdot (\gradient \phi)\), where rank-2 tensors on this website are denoted by being underlined. The left-hand side of the identity is manipulated: \begin{align*} \divergence(\phi\dyad{T}) &= (\phi\dyad{T})_{,k} \cdot \vec{g}^k \\ &= \phi\dyad{T}_{,k} \cdot \vec{g}^k + \phi_{,k}\dyad{T}\cdot \vec{g}^k \\ &= \phi\dyad{T}_{,k} \cdot \vec{g}^k + \dyad{T} \cdot (\phi_{,k} \vec{g}^k) \\ &= \phi\,\divergence\dyad{T}+ \dyad{T} \cdot (\gradient \phi)\,. \end{align*} The first line holds by the definition of the divergence of a tensor, the second line holds by the product rule, the third line by the commutativity of scalar multiplication, and the fourth line by the definitions of the divergence of a tensor and the gradient of a scalar.
For a less general version of this proof, appeal to orthonormal index notation. As in Item 1, the statement is proved in one line by the product rule: \begin{align*} (\phi T_{ij})_{,j} = \phi T_{ij,j} + T_{ij} \phi_{,j}\,. \end{align*}
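As with Item 1, this identity can be spot-checked symbolically in Cartesian coordinates. The scalar field \(\phi\) and tensor field \(\dyad{T}\) below are arbitrary choices for illustration.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

# Arbitrary scalar and rank-2 tensor fields (hypothetical examples)
phi = x*y + z**2
T = [[x*z, y, 1],
     [y**2, sp.sin(z), x],
     [z, x*y, sp.cos(y)]]

# Left-hand side: [div(phi T)]_i = (phi T_ij)_{,j}, summed over j
lhs = [sum(sp.diff(phi*T[i][j], X[j]) for j in range(3)) for i in range(3)]

# Right-hand side: phi T_ij,j + T_ij phi_,j
rhs = [phi*sum(sp.diff(T[i][j], X[j]) for j in range(3))
       + sum(T[i][j]*sp.diff(phi, X[j]) for j in range(3))
       for i in range(3)]

assert all(sp.simplify(lhs[i] - rhs[i]) == 0 for i in range(3))
```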
For two time-harmonic functions \(f\) and \(g\) represented by the real parts of the complex-valued functions \(\tilde{f}(t) = f_0 e^{-i(\omega t + \phi_f)} = \tilde{f}_\omega e^{-i\omega t}\) and \(\tilde{g}(t) = g_0 e^{-i(\omega t + \phi_g)} = \tilde{g}_\omega e^{-i\omega t}\) (where \(\tilde{f}_\omega = f_0e^{-i\phi_f}\) and \(\tilde{g}_\omega = g_0 e^{-i\phi_g}\)), the time average of their product, \(\langle fg\rangle\), is given by \(\langle \Re (\tilde{f}) \,\,\Re (\tilde{g}) \rangle = \frac{1}{2} \Re (\tilde{f}_\omega \, \tilde{g}^*_\omega) = \frac{1}{2} \Re (\tilde{f}_\omega^*\, \tilde{g}_\omega)\), where "\(\Re\)" denotes "real part".
To show this, note that \(\Re(\tilde{f}) = f_0 \cos(\omega t + \phi_f)\) and \(\Re(\tilde{g}) = g_0 \cos(\omega t + \phi_g)\). Thus \begin{align} \langle \Re (\tilde{f}) \, \Re (\tilde{g}) \rangle &= \langle f_0 \cos(\omega t + \phi_f) \, g_0 \cos(\omega t + \phi_g)\rangle \notag\\ &= f_0 g_0 \langle\cos(\omega t + \phi_f) \, \cos(\omega t + \phi_g)\rangle\,. \label{eq:id:avg:simplify:1} \end{align} Since \(\cos A \cos B = \cos(A+B) + \sin A\sin B\), Eq. \eqref{eq:id:avg:simplify:1} becomes (by letting \(A = \omega t + \phi_f\) and \(B= \omega t + \phi_g\)) \begin{align} \langle \Re (\tilde{f}) \, \Re (\tilde{g}) \rangle &= f_0 g_0 \langle \cos(2\omega t + \phi_f + \phi_g) + \sin(\omega t + \phi_f)\sin(\omega t + \phi_g)\rangle\,.\label{eq:id:avg:simplify:2} \end{align} Noting that \(\sin A \sin B = \tfrac{1}{2} [\cos(A-B) - \cos (A+B)]\), Eq. \eqref{eq:id:avg:simplify:2} becomes \begin{align} \langle \Re (\tilde{f}) \, \Re (\tilde{g}) \rangle &= f_0 g_0 \langle \cos(2\omega t + \phi_f + \phi_g) - \tfrac{1}{2} \cos(2\omega t + \phi_f + \phi_g) + \tfrac{1}{2}\cos(\phi_f - \phi_g)\rangle \notag\\ &= f_0 g_0 \langle \tfrac{1}{2} \cos(2\omega t + \phi_f + \phi_g) + \tfrac{1}{2}\cos(\phi_f - \phi_g)\rangle\,.\label{eq:id:avg:simplify:3} \end{align} The time-averaging operation amounts to an integral, which is a linear operation. Thus Eq. \eqref{eq:id:avg:simplify:3} becomes \begin{align*} \langle \Re (\tilde{f}) \, \Re (\tilde{g}) \rangle &= \tfrac{1}{2} f_0 g_0 \langle \cos(2\omega t + \phi_f + \phi_g)\rangle + \tfrac{1}{2} f_0 g_0 \langle\cos(\phi_f - \phi_g)\rangle\,. \end{align*} The first term on the right-hand side is zero, since the average of a sinusoid over a full period vanishes.
Meanwhile, the second term does not depend on time, and therefore its time average is itself: \begin{align} \langle \Re (\tilde{f}) \, \Re (\tilde{g}) \rangle &= \tfrac{1}{2} f_0 g_0 \cos(\phi_f - \phi_g)\,.\label{eq:id:avg:simplify} \end{align} Noting that \(f_0 g_0 \cos(\phi_f - \phi_g) \) is \(\Re [f_0 g_0 e^{-i(\phi_f - \phi_g)}]\), which by the relations \(\tilde{f}_\omega = f_0e^{-i\phi_f}\) and \(\tilde{g}_\omega = g_0 e^{-i\phi_g}\) is \(\Re(\tilde{f}_\omega \, \tilde{g}_\omega^*)\), Eq. \eqref{eq:id:avg:simplify} becomes \begin{align} \langle fg \rangle = \langle \Re (\tilde{f}) \, \Re (\tilde{g}) \rangle &= \tfrac{1}{2} \Re(\tilde{f}_\omega\tilde{g}_\omega^*) = \tfrac{1}{2} \Re(\tilde{f}_\omega^*\tilde{g}_\omega) \,,\label{eq:id:avg} \end{align} where the final equality holds by noting that \(\cos (\phi_f - \phi_g) = \cos(\phi_g - \phi_f)\).
A consequence of this relation is that the time-averaged intensity of a time-harmonic acoustic field is \[\langle \vec{I}\rangle = \langle p \vec{v} \rangle = \frac{1}{2} \Re(\tilde{p}_\omega \tilde{\vec{v}}_\omega^* ) = \frac{1}{2} \Re(\tilde{p}_\omega ^*\tilde{\vec{v}}_\omega )\,,\] as can be seen by letting \(\Re(\tilde{f}) = p\) and \(\Re(\tilde{\vec{g}})= \vec{v}\) in Eq. \eqref{eq:id:avg}.
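The time-average relation is easy to check numerically: average the product of the two real signals over one period and compare with the complex-amplitude formula. The amplitudes and phases below are arbitrary illustrative values.

```python
import numpy as np

# Hypothetical amplitudes and phases for the two time-harmonic signals
f0, g0 = 1.3, 0.7
phi_f, phi_g = 0.4, -1.1
omega = 2*np.pi          # angular frequency; the period is 1

# Direct time average of Re(f) Re(g), using uniform samples
# of the periodic integrand over one full period
N = 4096
t = np.arange(N) / N
avg = np.mean(f0*np.cos(omega*t + phi_f) * g0*np.cos(omega*t + phi_g))

# Complex-amplitude formula: (1/2) Re(f_w conj(g_w))
f_w = f0*np.exp(-1j*phi_f)
g_w = g0*np.exp(-1j*phi_g)
pred = 0.5*np.real(f_w*np.conj(g_w))

assert abs(avg - pred) < 1e-12
```

Uniform sampling over a full period averages the \(\cos(2\omega t + \phi_f + \phi_g)\) term to zero exactly, so the two results agree to machine precision.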
\(\vec{u} \times (\vec{v} \times \vec{w}) = (\vec{u} \cdot \vec{w}) \vec{v} - (\vec{u}\cdot \vec{v}) \vec{w}\), known as the vector triple product. It is straightforward to prove this in index notation for orthonormal bases: \begin{align} \lbrack\vec{u} \times (\vec{v} \times \vec{w})\rbrack_i &= \epsilon_{ijk}u_j(\vec{v}\times\vec{w})_k\notag\\ &= \epsilon_{ijk}\epsilon_{kmn}u_jv_m w_n\notag\\ &= \epsilon_{kij}\epsilon_{kmn}u_jv_m w_n\notag\\ &= \delta_{im}\delta_{jn}u_jv_m w_n-\delta_{in}\delta_{jm}u_jv_m w_n\notag\\ &= u_nv_i w_n- u_mv_m w_i\notag\\ &= (\vec{u} \cdot \vec{w}) v_i - (\vec{u}\cdot\vec{v}) w_i\,.\notag \end{align} Since this applies to \(i=1,2,3\), the above statement becomes \(\vec{u} \times (\vec{v} \times \vec{w}) = (\vec{u} \cdot \vec{w}) \vec{v} - (\vec{u}\cdot \vec{v}) \vec{w}\), as desired.
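The triple-product identity can be spot-checked numerically with arbitrary vectors; the random draws below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))   # three arbitrary vectors

# u x (v x w) versus (u . w) v - (u . v) w
lhs = np.cross(u, np.cross(v, w))
rhs = np.dot(u, w)*v - np.dot(u, v)*w

assert np.allclose(lhs, rhs)
```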
Definition of the axial vector of a skew-symmetric tensor: the axial vector \(\vec{w} = \mathrm{ax}(\dyad{W})\) of a skew-symmetric tensor \(\dyad{W}\) (i.e., \(\dyad{W}^\mathrm{T} = -\dyad{W}\)) is defined by \begin{align}\label{eq:id:axial} \dyad{W} \cdot \vec{u} = \mathrm{ax} (\dyad{W}) \times \vec{u} = \vec{w} \times \vec{u}\,. \end{align} To show that such a vector exists, note that the skew tensor \(\vec{a} \otimes \vec{b} - (\vec{a} \otimes \vec{b})^\mathrm{T}\) satisfies \begin{align*} [\vec{a} \otimes \vec{b} - (\vec{a} \otimes \vec{b})^\mathrm{T}] \cdot \vec{u} &= (\vec{b} \cdot \vec{u}) \vec{a} - (\vec{a} \cdot \vec{u}) \vec{b}\notag\\ &= (\vec{b} \times \vec{a}) \times \vec{u}\,, \end{align*} where Item 6 has been used in the second equality. By comparison of the above to Eq. \eqref{eq:id:axial}, it can be seen that \[\mathrm{ax}[\vec{a} \otimes \vec{b} - (\vec{a} \otimes \vec{b})^\mathrm{T}] = \vec{b}\times \vec{a}\,.\] Since any skew-symmetric tensor \(\dyad{W}\) can be written as a linear combination of tensors of the form \(\vec{a} \otimes \vec{b} - (\vec{a} \otimes \vec{b})^\mathrm{T}\), all skew-symmetric tensors have an axial vector satisfying Eq. \eqref{eq:id:axial}.
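The claim \(\mathrm{ax}[\vec{a} \otimes \vec{b} - (\vec{a} \otimes \vec{b})^\mathrm{T}] = \vec{b}\times \vec{a}\) can be checked numerically; the random vectors below are arbitrary illustrative inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, u = rng.standard_normal((3, 3))   # arbitrary vectors

W = np.outer(a, b) - np.outer(a, b).T   # skew-symmetric by construction
w = np.cross(b, a)                      # claimed axial vector: ax(W) = b x a

# W . u should equal w x u for every u
assert np.allclose(W @ u, np.cross(w, u))
```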
Equation \eqref{eq:id:axial} is defined only in terms of a skew-symmetric tensor \(\dyad{W}\). To extend the definition of the axial vector to any tensor \(\dyad{A}\), let \[\mathrm{ax} \,\dyad{A} = \mathrm{ax} \tfrac{1}{2} (\dyad{A} - \dyad{A}^\mathrm{T})\,.\] This discussion was adapted from Ref. [8], p. 18, Eqs. (1.2.16) and (1.2.17).
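The extension to an arbitrary tensor can be demonstrated in the same way: take the skew part of a random matrix, read off its axial vector componentwise, and verify Eq. \eqref{eq:id:axial}. The random inputs below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))         # arbitrary (non-skew) tensor
u = rng.standard_normal(3)

W = 0.5*(A - A.T)                       # skew part of A
# Axial-vector components read off the skew part: w = (W_32, W_13, W_21)
w = np.array([W[2, 1], W[0, 2], W[1, 0]])

# The skew part acts on u as a cross product with its axial vector
assert np.allclose(W @ u, np.cross(w, u))
```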