12.2. Error Propagation#

Further Reading: §3.2, §3.3, and §3.4 in Navidi (2015)

12.2.1. Learning Objectives#

After studying this notebook, completing the activities, engaging in class, and reading the book, you should be able to:

  • Apply error propagation formulas to predict uncertainty in calculations using experimental data

  • Derive linear error propagation formulas from definitions of expected value and variance

  • Derive general nonlinear error propagation formulas from Taylor series approximation

  • Use the multivariate error propagation formula

import numpy as np
import matplotlib.pyplot as plt

As engineers, we are rarely interested in simply characterizing the uncertainty in measurements. Instead, we often use the measurement values to perform calculations. Naturally, we want to propagate the uncertainty from the measurements through to the calculated result. This is known as error propagation. I promise you will use it in multiple laboratory classes at Notre Dame and as a practicing engineer after graduation.

Recall, we will model the outcome of measurements with random variables. As such, I will write the error propagation formulas as random variables. The formulas that follow are given for two measured quantities \(X\) and \(Y\), but generalize to any dimensions. We assume we know the measurement uncertainties \(\sigma_X\) and \(\sigma_Y\) as well as the covariance \(\sigma_{X,Y}\). We want to estimate the uncertainty \(\sigma_Z\) for the computed quantity \(Z\).

First, let’s consider either adding or subtracting two measured quantities.

12.2.2. Addition Rule#

Consider \(Z = c_1 X + c_2 Y\):

\[\sigma_Z^2 = c_1^2 \sigma_X^2 + c_2^2 \sigma_Y^2 + 2 c_1 c_2 \sigma_{X,Y}\]

12.2.3. Subtraction Rule#

Consider \(Z = c_1 X - c_2 Y\):

\[\sigma_Z^2 = c_1^2 \sigma_X^2 + c_2^2 \sigma_Y^2 - 2 c_1 c_2 \sigma_{X,Y}\]
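Because the addition and subtraction rules are exact, we can check them numerically. The sketch below uses hypothetical values for the coefficients, standard deviations, and covariance, then compares the rule's prediction against a Monte Carlo estimate from correlated samples:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical coefficients and measurement uncertainties
c1, c2 = 2.0, 3.0
sigma_X, sigma_Y, cov_XY = 0.5, 0.8, 0.2

# addition rule prediction for Z = c1*X + c2*Y
var_Z = c1**2 * sigma_X**2 + c2**2 * sigma_Y**2 + 2 * c1 * c2 * cov_XY

# Monte Carlo check: draw correlated X, Y and compute Z directly
cov = np.array([[sigma_X**2, cov_XY],
                [cov_XY, sigma_Y**2]])
samples = rng.multivariate_normal([10.0, 5.0], cov, size=200_000)
Z = c1 * samples[:, 0] + c2 * samples[:, 1]

print("rule:", var_Z)          # exact formula
print("Monte Carlo:", Z.var()) # should agree closely
```

Subtraction is the same calculation with `c2` negated, which flips the sign of the covariance term.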

12.2.4. Derivation#

Notice these formulas are exact. We’ll explore that more during class.

\[Var[Z] = \sum_{i} c_i^2 \, Var[X_i] + \sum_{i}\sum_{j\neq i} c_i c_j \, Cov[X_i,X_j] = c^T \Sigma c\]
\[\begin{split}Cov = \Sigma = \begin{bmatrix} \sigma_1^2 & \sigma_{1,2} & \cdots & \sigma_{1,n}\\ \sigma_{2,1} & \sigma_2^2 & \cdots & \sigma_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n,1} & \sigma_{n,2} & \cdots & \sigma_n^2 \end{bmatrix} \end{split}\]
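The matrix form \(c^T \Sigma c\) is convenient in code because it handles any number of correlated measurements at once. Here is a minimal sketch with a hypothetical three-measurement covariance matrix:

```python
import numpy as np

# hypothetical coefficients for Z = c1*X1 + c2*X2 + c3*X3
c = np.array([1.0, -2.0, 0.5])

# hypothetical (symmetric) covariance matrix of the measurements
Sigma = np.array([[0.04,  0.01,  0.00],
                  [0.01,  0.09, -0.02],
                  [0.00, -0.02,  0.25]])

# variance of Z via the quadratic form c^T Sigma c
var_Z = c @ Sigma @ c
print(var_Z)
```

If the measurements are uncorrelated, `Sigma` is diagonal and this reduces to the familiar sum of squared terms.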

Note

Home Activity (Video): Derive the addition/subtraction error propagation rules from the properties of random variables.

12.2.5. Multiplication Rule#

Consider \(Z = X \cdot Y\):

\[\sigma_Z^2\approx \left(E[Z]\right)^2 \left[ \left(\frac{\sigma_{X}}{E[X]} \right)^2 + \left(\frac{\sigma_{Y}}{E[Y]} \right)^2 + \frac{2 \sigma_{X,Y}}{E[X] E[Y]} \right]\]

12.2.6. Division Rule#

Consider \(Z = X / Y\):

\[\sigma_Z^2\approx \left(E[Z]\right)^2 \left[ \left(\frac{\sigma_{X}}{E[X]} \right)^2 + \left(\frac{\sigma_{Y}}{E[Y]} \right)^2 - \frac{2 \sigma_{X,Y}}{E[X] E[Y]} \right]\]
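Unlike the addition rule, the multiplication rule is only approximate, but it is quite accurate when the relative uncertainties are small. The sketch below applies the rule with hypothetical means and standard deviations (assuming uncorrelated measurements) and compares against a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical uncorrelated measurements
mu_X, sigma_X = 4.0, 0.1
mu_Y, sigma_Y = 2.0, 0.05
cov_XY = 0.0  # assume no correlation

# multiplication rule for Z = X * Y
EZ = mu_X * mu_Y
var_Z = EZ**2 * ((sigma_X / mu_X)**2
                 + (sigma_Y / mu_Y)**2
                 + 2 * cov_XY / (mu_X * mu_Y))

# Monte Carlo check: sample X and Y, multiply directly
X = rng.normal(mu_X, sigma_X, 500_000)
Y = rng.normal(mu_Y, sigma_Y, 500_000)

print("rule:", var_Z)
print("Monte Carlo:", (X * Y).var())
```

Try increasing `sigma_X` to, say, half of `mu_X`: the approximation degrades as the relative uncertainty grows, which previews why the rule comes from a first-order Taylor expansion.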

12.2.7. Derivation#

Notice these formulas are approximations. We’ll see these formulas come from the more general differentiable function rule.

Note

Study Activity: Derive the multiplication and division rules from the differentiable function rule.

Next, let’s consider any differentiable function.

12.2.8. Differentiable Function Rule#

Consider \(Z = g(X,Y)\):

\[\sigma_Z^2 \approx \left| \frac{\partial g}{\partial X} \right|^2 \sigma_X^2 + \left| \frac{\partial g}{\partial Y} \right|^2 \sigma_Y^2 + 2 \frac{\partial g}{\partial X} \frac{\partial g}{\partial Y} \sigma_{X,Y}\]
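To apply the rule, evaluate the partial derivatives at the measured means and plug into the formula. Here is a sketch for the hypothetical function \(g(X,Y) = X^2 Y\) with made-up uncertainties:

```python
import numpy as np

# hypothetical measurements: means, standard deviations, covariance
mu_X, sigma_X = 3.0, 0.1
mu_Y, sigma_Y = 1.5, 0.2
cov_XY = 0.01

# partial derivatives of g(X, Y) = X^2 * Y, evaluated at the means
dg_dX = 2 * mu_X * mu_Y   # d(X^2 Y)/dX = 2 X Y
dg_dY = mu_X**2           # d(X^2 Y)/dY = X^2

# differentiable function rule
var_Z = (dg_dX**2 * sigma_X**2
         + dg_dY**2 * sigma_Y**2
         + 2 * dg_dX * dg_dY * cov_XY)
sigma_Z = np.sqrt(var_Z)
print(sigma_Z)
```

The same recipe works for any differentiable \(g\); only the two derivative expressions change.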

12.2.9. Derivation#

Notice this formula is an approximation. We’ll see it comes from a Taylor series approximation.

Taylor Series:

\[Z = g(X,Y) \approx g(\mu_X,\mu_Y) + \frac{\partial g}{\partial X} (X-\mu_X) + \frac{\partial g}{\partial Y} (Y-\mu_Y)\]

\[ Var[Z] \approx Var\left[g(\mu_X,\mu_Y)+ \frac{\partial g}{\partial X} (X-\mu_X) + \frac{\partial g}{\partial Y} (Y-\mu_Y)\right]\]

Apply linear error propagation rule.

\[\sigma_Z^2 \approx \left(\frac{\partial g}{\partial X} \bigg\vert_{\mu_X,\mu_Y}\right)^2 \sigma_X^2 + \left(\frac{\partial g}{\partial Y} \bigg\vert_{\mu_X,\mu_Y}\right)^2 \sigma_Y^2 + 2 \left(\frac{\partial g}{\partial X} \frac{\partial g}{\partial Y}\right) \bigg\vert_{\mu_X,\mu_Y} \sigma_{X,Y}\]

Note

Home Activity (Video): Derive the differential function error propagation rule from the properties of random variables and a truncated Taylor series expansion.

12.2.10. Multivariate Case#

\[\overrightarrow{x}: \textrm{ vector}; \; \overrightarrow{x} \in \mathbb{R}^n\]
\[f(\overrightarrow{x}): \textrm{ returns a scalar}; \; f(\overrightarrow{x}) \in \mathbb{R}\]
\[\begin{split}\nabla_x f = \begin{bmatrix} \frac{\partial{f} }{ \partial{x_1}} \\ \vdots \\ \frac{\partial{f}}{\partial{x_n}} \end{bmatrix}\end{split}\]
\[\Sigma_x \in \mathbb{R}^{n \times n}: \textrm{ covariance matrix}\]

Tying this into the error propagation formula…

\[ \sigma_f^2 \approx (\nabla f)^T \, \Sigma_x \, (\nabla f)\]
\[ \textrm{dimensions: } (1 \times n)(n \times n)(n \times 1) = 1 \times 1\]
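The multivariate formula is a one-line computation in NumPy. Here is a sketch for the hypothetical function \(f(x) = x_1 x_2 + x_3^2\) with a made-up diagonal covariance matrix (i.e., uncorrelated measurements):

```python
import numpy as np

# hypothetical measured point
x = np.array([2.0, 3.0, 1.0])

# gradient of f(x) = x1*x2 + x3^2, evaluated at x
grad_f = np.array([x[1],        # df/dx1 = x2
                   x[0],        # df/dx2 = x1
                   2 * x[2]])   # df/dx3 = 2 x3

# hypothetical covariance matrix (diagonal: uncorrelated measurements)
Sigma_x = np.diag([0.01, 0.04, 0.0025])

# (1 x n)(n x n)(n x 1) quadratic form -> scalar variance
var_f = grad_f @ Sigma_x @ grad_f
print(var_f)
```

Note this reuses the same quadratic form as the linear case; the only new step is evaluating the gradient at the measured point.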