GLOSSARY
Additivity property of definite integrals
The additivity property of definite integrals refers to ‘addition of intervals of integration’: `int_a^b f(x) dx + int_b^c f(x) dx = int_a^c f(x) dx` for any numbers `a`, `b` and `c`, and any function `f(x)` for which these integrals exist.
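A quick numerical check of this property, as a minimal Python sketch using a midpoint Riemann sum (the function `f(x) = x^2` and the points `0`, `1`, `2` are arbitrary choices):

```python
# Numerical check of the additivity property using a midpoint Riemann sum.
def integral(f, a, b, n=10_000):
    """Approximate the definite integral of f from a to b."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x**2
left = integral(f, 0, 1)    # int_0^1 f(x) dx
right = integral(f, 1, 2)   # int_1^2 f(x) dx
whole = integral(f, 0, 2)   # int_0^2 f(x) dx
print(abs((left + right) - whole) < 1e-6)  # True, up to the rule's error
```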
Algebraic properties of exponential functions
The algebraic properties of exponential functions are the index laws: `a^x a^y = a^(x + y)`, `a^(-x) = 1/a^x`, `(a^x)^y = a^(xy)`, `a^0 = 1`, for any real numbers `x`, `y`, and `a`, with `a > 0`.
Algebraic properties of logarithms
The algebraic properties of logarithms are the rules: `log_a (xy) = log_a x + log_a y`, `log_a (1/x) = -log_a x`, and `log_a 1 = 0`, for any positive real numbers `x`, `y` and `a`.
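These rules can be spot-checked numerically, for instance with Python's `math.log` (the base `10` and the values `3.5` and `7.2` below are arbitrary):

```python
import math

# Spot check of the logarithm laws for a = 10, x = 3.5, y = 7.2.
a, x, y = 10, 3.5, 7.2
print(math.isclose(math.log(x * y, a), math.log(x, a) + math.log(y, a)))  # True
print(math.isclose(math.log(1 / x, a), -math.log(x, a)))                  # True
print(math.log(1, a) == 0.0)                                              # True
```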
Antidifferentiation
An anti-derivative, primitive or indefinite integral of a function `f(x)` is a function `F(x)` whose derivative is `f(x)`, i.e. `F'(x) = f(x)`.
The process of solving for anti-derivatives is called anti-differentiation.
Anti-derivatives are not unique. If `F(x)` is an anti-derivative of `f(x)`, then so too is the function `F(x) + c` where `c` is any number. We write `int f(x) dx = F(x) + c` to denote the set of all anti-derivatives of `f(x)`. The number `c` is called the constant of integration. For example, since `d/dx (x^3) = 3x^2`, we can write `int 3x^2 dx = x^3 + c`.
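The example can be reproduced symbolically, for instance with the third-party SymPy library (a sketch; note that `integrate` returns a single anti-derivative and omits the constant of integration):

```python
import sympy

# One anti-derivative of 3x^2, and a check that differentiating recovers it.
x = sympy.symbols('x')
F = sympy.integrate(3 * x**2, x)
print(F)                 # x**3
print(sympy.diff(F, x))  # 3*x**2
```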
Asymptote
A straight line is an asymptote of the function `y = f(x)` if the graph of `y = f(x)` gets arbitrarily close to the straight line. An asymptote can be horizontal, vertical or oblique. For example, the line with equation `x = pi/2` is a vertical asymptote to the graph of `y = tan x`, and the line with equation `y = 0` is a horizontal asymptote to the graph of `y = 1/x`.
Bernoulli random variable
A Bernoulli random variable has two possible values, namely `0` and `1`. The parameter associated with such a random variable is the probability `p` of obtaining a `1`.
Bernoulli trial
A Bernoulli trial is a chance experiment with two possible outcomes, typically labelled ‘success’ and ‘failure’.
Binomial distribution
The binomial distribution with parameters `n` and `p` is the distribution of the number `X` of ‘successes’ in `n` independent Bernoulli trials, each with probability `p` of success: `P(X = r) = ((n),(r)) p^r (1 - p)^(n - r)` for `r = 0, 1, ..., n`.
The expansion `(x + y)^n = x^n + ((n),(1))x^(n-1)y + ... + ((n),(r))x^(n-r)y^r + ... + y^n` is known as the binomial theorem. The numbers `((n),(r)) = (n!)/(r!(n - r)!) = (n xx (n - 1) xx ... xx (n - r + 1))/(r xx (r - 1) xx ... xx 2 xx 1)` are called binomial coefficients.
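A short Python sketch of both ideas, using `math.comb` for the binomial coefficients (the values `n = 5`, `x = 2`, `y = 3` are arbitrary):

```python
import math

# Binomial coefficients for n = 5, and a spot check of the binomial theorem.
n, x, y = 5, 2, 3
print([math.comb(n, r) for r in range(n + 1)])  # [1, 5, 10, 10, 5, 1]
expansion = sum(math.comb(n, r) * x**(n - r) * y**r for r in range(n + 1))
print(expansion == (x + y)**n)  # True: both equal 5**5 = 3125
```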
Central Limit Theorem
There are various forms of the Central Limit Theorem, a result of fundamental importance in statistics. For the purposes of this course, it can be expressed as follows: ‘If `bar X` is the mean of `n` independent values of a random variable `X` which has a finite mean `mu` and a finite standard deviation `sigma`, then as `n -> oo`, the distribution of `(bar X - mu)/(sigma/sqrt n)` approaches the standard normal distribution.’
In the special case where `X` is a Bernoulli random variable with parameter `p`, `bar X` is the sample proportion `hat p`, `mu = p` and `sigma = sqrt(p(1 - p))`. In this case, the Central Limit Theorem is a statement that as `n -> oo` the distribution of `(hat p - p)/sqrt((p(1 - p))/n)` approaches the standard normal distribution.
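A simulation sketch of this special case in Python (the parameter `p = 0.3`, the sample size `n = 500` and the `2000` repetitions are arbitrary choices): for large `n`, the standardised sample proportions should have mean near `0` and standard deviation near `1`.

```python
import random
import statistics

# Simulate standardised sample proportions (p_hat - p)/sqrt(p(1 - p)/n).
random.seed(1)
p, n, reps = 0.3, 500, 2000
sigma = (p * (1 - p)) ** 0.5
z = []
for _ in range(reps):
    p_hat = sum(random.random() < p for _ in range(n)) / n
    z.append((p_hat - p) / (sigma / n**0.5))
print(round(statistics.mean(z), 1), round(statistics.stdev(z), 1))  # approx 0.0 1.0
```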
Chain rule
The chain rule relates the derivative of the composite of two functions to the functions and their derivatives. If `h(x) = f@g(x)`, then `h'(x) = (f@g)'(x) = f'(g(x))g'(x)`. In Leibniz notation, if `y = g(x)` and `z = f(y)`, then `dz/dx = dz/dy dy/dx`.
Circular measure
The measure of an angle as an amount of rotation, typically expressed in radians or degrees.
Composition of functions
If `y = g(x)` and `z = f(y)` for functions `f` and `g`, then `z` is a composite function of `x`. We write `z = f@g(x) = f(g(x))`. For example, `z = sqrt (x^2 + 3)` expresses `z` as a composite of the functions `f(y) = sqrt y` and `g(x) = x^2 + 3`.
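Composition is easy to express in code; a minimal Python sketch mirroring the example above:

```python
import math

# compose(f, g) returns the composite function x -> f(g(x)).
def compose(f, g):
    return lambda x: f(g(x))

f = math.sqrt               # f(y) = sqrt(y)
g = lambda x: x**2 + 3      # g(x) = x^2 + 3
z = compose(f, g)
print(z(1))  # 2.0, since sqrt(1^2 + 3) = 2
```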
Concave up and concave down
A graph of `y = f(x)` is concave up at a point `P` if points on the graph near `P` lie above the tangent at `P`. The graph is concave down at `P` if points on the graph near `P` lie below the tangent at `P`.
Effect of linear change
The effects of linear changes of scale and origin on the mean and variance of a random variable are summarised as follows:
If `X` is a random variable and `Y = aX + b`, where `a` and `b` are constants, then `E(Y) = aE(X) + b` and `Var(Y) = a^2 Var(X)`.
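A quick numerical check of these two equations for a small discrete random variable (all of the numbers below are arbitrary choices):

```python
import math

# Check E(Y) = aE(X) + b and Var(Y) = a^2 Var(X) for Y = 3X - 2,
# where X takes the value 1 with probability 0.25 and 0 otherwise.
xs, ps = [0, 1], [0.75, 0.25]
a, b = 3, -2
E_X = sum(p * x for p, x in zip(ps, xs))               # 0.25
Var_X = sum(p * (x - E_X)**2 for p, x in zip(ps, xs))  # 0.1875
ys = [a * x + b for x in xs]
E_Y = sum(p * y for p, y in zip(ps, ys))
Var_Y = sum(p * (y - E_Y)**2 for p, y in zip(ps, ys))
print(math.isclose(E_Y, a * E_X + b))     # True
print(math.isclose(Var_Y, a**2 * Var_X))  # True
```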
Euler’s number
Euler’s number `e` is an irrational number whose decimal expansion begins `e = 2.7182818284590452353602874713527...`.
It is the base of the natural logarithms, and can be defined in various ways including: `e = 1 + 1/(1!) + 1/(2!) + 1/(3!) + ...` and `e = lim_(n -> oo) (1 + 1/n)^n`.
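Both definitions are easy to evaluate numerically in Python; the choice of `20` series terms and `n = 10^6` below is arbitrary:

```python
import math

# Two approximations of e, compared with math.e.
series = sum(1 / math.factorial(k) for k in range(20))  # 1 + 1/1! + 1/2! + ...
limit = (1 + 1 / 10**6) ** 10**6                        # (1 + 1/n)^n with n = 10^6
print(series)  # 2.718281828459045...
print(limit)   # 2.71828046... (this limit converges slowly)
print(math.e)  # 2.718281828459045
```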
Expected value
The expected value `E(X)` of a random variable `X` is a measure of the central tendency of its distribution.
If `X` is discrete, `E(X) = sum_i p_i x_i`, where the `x_i` are the possible values of `X` and `p_i = P(X = x_i)`.
If `X` is continuous, `E(X) = int_-oo^oo xp(x) dx`, where `p(x)` is the probability density function of `X`.
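For a discrete example, the expected value of a fair six-sided die (a standard illustration):

```python
# E(X) for a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
print(sum(p * x for p, x in zip(probs, values)))  # 3.5 (up to rounding)
```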
Function
A function `f` is a rule such that for each `x`-value there is only one corresponding `y`-value. This means that if `(a, b)` and `(a, c)` are ordered pairs, then `b = c`.
Gradient (Slope)
The gradient of the straight line passing through points `(x_1, y_1)` and `(x_2, y_2)` is the ratio `(y_2 - y_1)/(x_2 - x_1)`. Slope is a synonym for gradient.
Graph of a function
The graph of a function `f` is the set of all points `(x, y)` in the Cartesian plane where `x` is in the domain of `f` and `y = f(x)`.
Index laws
The index laws are the rules: `a^x a^y = a^(x + y)`, `a^(-x) = 1/(a^x)`, `(a^x)^y = a^(xy)`, `a^0 = 1`, and `(ab)^x = a^x b^x`, where `a`, `b`, `x` and `y` are real numbers, with `a > 0` and `b > 0`.
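A numerical spot check of the laws in Python (the values below are arbitrary, with `a, b > 0`):

```python
import math

# Spot check of the index laws for a = 2.5, b = 4.0, x = 1.3, y = -0.7.
a, b, x, y = 2.5, 4.0, 1.3, -0.7
print(math.isclose(a**x * a**y, a**(x + y)))  # True
print(math.isclose(a**(-x), 1 / a**x))        # True
print(math.isclose((a**x)**y, a**(x * y)))    # True
print(a**0 == 1)                              # True
print(math.isclose((a * b)**x, a**x * b**x))  # True
```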
Level of confidence
The level of confidence associated with a confidence interval for an unknown population parameter is the probability that a random confidence interval will contain the parameter.
Linearity property of the derivative
The linearity property of the derivative is summarised by the equations: `d/dx(ky) = k dy/dx` for any constant `k` and `d/dx (y_1 + y_2) = dy_1/dx + dy_2/dx`.
Local and global maximum and minimum
A stationary point on the graph `y = f(x)` of a differentiable function is a point where `f'(x) = 0`.
We say that `f(x_0)` is a local maximum of the function `f(x)` if `f(x) <= f(x_0)` for all values of `x` near `x_0`.
We say that `f(x_0)` is a global maximum of the function `f(x)` if `f(x) <= f(x_0)` for all values of `x` in the domain of `f`.
We say that `f(x_0)` is a local minimum of the function `f(x)` if `f(x) >= f(x_0)` for all values of `x` near `x_0`.
We say that `f(x_0)` is a global minimum of the function `f(x)` if `f(x) >= f(x_0)` for all values of `x` in the domain of `f`.
Margin of error
The margin of error of a confidence interval of the form `f - E < p < f + E` is `E`, the half-width of the confidence interval. It is the maximum difference between `f` and `p` if `p` is actually in the confidence interval.
Mean of a random variable
The mean of a random variable is another name for its expected value.
Variance of a random variable
The variance `Var(X)` of a random variable `X` is a measure of the ‘spread’ of its distribution.
If `X` is discrete, `Var(X) = sum_i p_i(x_i - mu)^2`, where `mu = E(X)` is the expected value.
If `X` is continuous, `Var(X) = int_-oo^oo (x - mu)^2 p(x)dx`.
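Continuing the die example from the expected value entry, the variance of a fair six-sided die:

```python
# Var(X) for a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
mu = sum(p * x for p, x in zip(probs, values))              # E(X) = 3.5
print(sum(p * (x - mu)**2 for p, x in zip(probs, values)))  # 2.9166... = 35/12
```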
Non-routine problems
Problems solved using procedures not regularly encountered in learning activities.
Pascal’s triangle
Pascal’s triangle is a triangular arrangement of binomial coefficients. The `n^"th"` row consists of the binomial coefficients `((n),(r))` for `0 <= r <= n`; each interior entry is the sum of the two entries above it, and the sum of the entries in the `n^"th"` row is `2^n`.
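A short Python sketch that generates the first rows and verifies the row-sum property:

```python
import math

# The first five rows of Pascal's triangle, checking each row sums to 2^n.
for n in range(5):
    row = [math.comb(n, r) for r in range(n + 1)]
    print(row, sum(row) == 2**n)
# [1] True
# [1, 1] True
# [1, 2, 1] True
# [1, 3, 3, 1] True
# [1, 4, 6, 4, 1] True
```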
Period of a function
The period of a function `f(x)` is the smallest positive number `p` with the property that `f(x + p) = f(x)` for all `x`. The functions `sin x` and `cos x` both have period `2pi`, and `tan x` has period `pi`.
Point and interval estimates
In statistics estimation is the use of information derived from a sample to produce an estimate of an unknown probability or population parameter. If the estimate is a single number, this number is called a point estimate. An interval estimate is an interval derived from the sample that, in some sense, is likely to contain the parameter.
A simple example of a point estimate of the probability `p` of an event is the relative frequency `f` of the event in a large number of Bernoulli trials. An example of an interval estimate for `p` is a confidence interval centred on the relative frequency `f`.
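A sketch of such an interval estimate in Python, using the usual normal approximation (the data, 53 ‘successes’ in 100 trials, is invented for illustration, and `1.96` is the standard normal quantile for a 95% level):

```python
# Approximate 95% confidence interval for p, centred on the relative frequency f.
r, n = 53, 100                       # invented example data
f = r / n                            # point estimate of p
E = 1.96 * (f * (1 - f) / n) ** 0.5  # margin of error (half-width)
print(f"{f - E:.3f} < p < {f + E:.3f}")  # approx 0.432 < p < 0.628
```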
Point of inflection
A point on a curve at which the curve changes from being concave (concave downward) to convex (concave upward), or vice versa.
Probability density function
The probability density function of a continuous random variable is a function that describes the relative likelihood that the random variable takes a particular value. Formally, if `p(x)` is the probability density of the continuous random variable `X`, then the probability that `X` takes a value in some interval `[a, b]` is given by `int_a^b p(x) dx`.
Probability distribution
The probability distribution of a discrete random variable is the set of probabilities for each of its possible values.
Product rule
The product rule relates the derivative of the product of two functions to the functions and their derivatives.
If `h(x) = f(x) g(x)`, then `h'(x) = f(x) g'(x) + f'(x) g(x)`, and in Leibniz notation: `d/dx(uv) = u (dv)/dx + (du)/dx v`.
Quadratic formula
If `ax^2 + bx + c = 0` with `a != 0`, then `x = (-b +- sqrt(b^2 - 4ac))/(2a)`. This formula for the roots is called the quadratic formula.
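The formula translates directly into code; a minimal Python sketch for the case of real roots:

```python
import math

# Roots of ax^2 + bx + c = 0 via the quadratic formula
# (assumes a != 0 and b^2 - 4ac >= 0, i.e. real roots).
def quadratic_roots(a, b, c):
    d = math.sqrt(b**2 - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -5, 6))  # (3.0, 2.0), since x^2 - 5x + 6 = (x - 2)(x - 3)
```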
Quantile
A quantile `t_alpha` for a continuous random variable `X` is defined by `P(X > t_alpha) = alpha`, where `0 < alpha < 1`.
The median `m` of `X` is the quantile corresponding to `alpha = 0.5`: `P(X > m) = 0.5`.
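For a standard normal random variable these quantiles can be computed with Python's `statistics.NormalDist`; note that under the upper-tail convention above, `t_alpha` is the value of the inverse CDF at `1 - alpha`:

```python
from statistics import NormalDist

# Quantiles of a standard normal X under the convention P(X > t_alpha) = alpha.
X = NormalDist()                      # mean 0, standard deviation 1
print(round(X.inv_cdf(1 - 0.05), 3))  # 1.645, i.e. t_0.05
print(X.inv_cdf(1 - 0.5))             # 0.0, the median
```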
Quotient rule
The quotient rule relates the derivative of the quotient of two functions to the functions and their derivatives.
If `h(x) = f(x)/g(x)`, then `h'(x) = (g(x) f'(x) - f(x) g'(x))/(g(x))^2`.
Radian measure
The radian measure `theta` of an angle in a sector of a circle is defined by `theta = l/r`, where `r` is the radius and `l` is the arc length. Thus, an angle whose degree measure is `180` has radian measure `pi`.
Random variable
A random variable is a numerical quantity, the value of which depends on the outcome of a chance experiment. For example, the proportion of heads observed in 100 tosses of a coin.
A discrete random variable is one which can take only a countable number of values, usually whole numbers.
A continuous random variable is one whose set of possible values is all of the real numbers in some interval.
Relative frequency
If an event `E` occurs `r` times in `n` trials of a chance experiment, the relative frequency of `E` is `r/n`.
Routine problems
Problems solved using procedures regularly encountered in learning activities.
Secant
A secant of the graph of a function is a straight line passing through two points on the graph. The line segment between the two points is called a chord.
Second derivative test
According to the second derivative test, if `f'(x) = 0`, then `f(x)` is a local maximum of `f` if `f''(x) < 0`, and `f(x)` is a local minimum if `f''(x) > 0`.
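A symbolic illustration using the third-party SymPy library (the function `f(x) = x^3 - 3x` is an arbitrary example):

```python
import sympy

# Second derivative test applied to f(x) = x^3 - 3x.
x = sympy.symbols('x')
f = x**3 - 3 * x
for s in sympy.solve(sympy.diff(f, x), x):    # stationary points: x = -1, 1
    print(s, sympy.diff(f, x, 2).subs(x, s))  # -1 -> -6 (local max), 1 -> 6 (local min)
```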
Sine and cosine functions
In the unit circle definition of cosine and sine, `cos theta` and `sin theta` are the `x` and `y` coordinates of the point on the unit circle corresponding to the angle `theta` measured as a rotation from the ray `OX`. If `theta` is measured in the counter-clockwise direction, then it is said to be positive; otherwise it is said to be negative.
Standard deviation of a random variable
The standard deviation of a random variable is the square root of its variance.
Tangent line
The tangent line (or simply the tangent) to a curve at a given point `P` can be described intuitively as the straight line that ‘just touches’ the curve at that point. At the point where the tangent touches the curve, the curve has ‘the same direction’ as the tangent line. In this sense it is the best straight-line approximation to the curve at the point.
The fundamental theorem of calculus
The fundamental theorem of calculus relates differentiation and definite integrals. It has two forms: `d/dx (int_a^x f(t) dt) = f(x)` and `int_a^b f'(x) dx = f(b) - f(a)`.
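Both forms can be illustrated numerically; a Python sketch using a midpoint rule and a central difference, with `f(x) = cos x` as an arbitrary test function:

```python
import math

def integral(f, a, b, n=10_000):
    """Midpoint-rule approximation of the definite integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, x, h = 0.0, 1.0, 1e-5
# First form: d/dx int_a^x cos(t) dt = cos(x).
deriv = (integral(math.cos, a, x + h) - integral(math.cos, a, x - h)) / (2 * h)
print(round(deriv, 4), round(math.cos(x), 4))  # both 0.5403
# Second form: int_0^1 (d/dx sin x) dx = sin(1) - sin(0).
print(round(integral(math.cos, 0, 1), 4), round(math.sin(1), 4))  # both 0.8415
```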
The linearity property of anti-differentiation
The linearity property of anti-differentiation is summarised by the equations: `int kf(x) dx = k int f(x) dx` for any constant `k`, and `int (f_1(x) + f_2(x)) dx = int f_1(x) dx + int f_2(x) dx` for any two functions `f_1(x)` and `f_2(x)`.
Similar equations describe the linearity property of definite integrals: `int_a^b kf(x)dx = kint_a^b f(x)dx` for any constant `k`, and `int_a^b(f_1(x) + f_2(x))dx = int_a^b f_1(x)dx + int_a^b f_2(x)dx` for any two functions `f_1(x)` and `f_2(x)`.
Uniform continuous random variable
A uniform continuous random variable `X` is one whose probability density function `p(x)` has constant value on the range of possible values of `X`. If the range of possible values is the interval `[a, b]`, then `p(x) = 1/(b - a)` if `a <= x <= b`, and `p(x) = 0` otherwise.
Vertical line test
A relation between two real variables `x` and `y` is a function, with `y = f(x)` for some function `f`, if and only if each vertical line, i.e. each line parallel to the `y`-axis, intersects the graph of the relation in at most one point. This test for determining whether a relation is a function is known as the vertical line test.