Thesaurus of Terms Used in Microbial Risk Assessment: 5.9  Modeling, Statistics, and Math Terms


accuracy
  1. The measure of the correctness of data, as given by the difference between the measured value and the true or standard value.  (EPA 1997a, EPA 2004)
  2. Degree of agreement between average predictions of a model or the average of measurements and the true value of the quantity being predicted or measured. (FAO/WHO 2003b)
  3. The degree to which a measurement (e.g., the mean estimate of a treatment effect) is true or correct.  An estimate can be accurate, yet not be precise, if it is based upon an unbiased method that provides observations having great variation (i.e., not close in magnitude to each other).  (NLM/NICHSR 2004)
  4. The degree of agreement between a measured value and the true value; usually expressed as +/- percent of full scale.  (RAIS 2004, SRA 2004)
    RELATED TERMS: contrast with precision
Akaike's information criterion          (ACRONYM: AIC)

Criteria used in model selection to choose the best model from a set of plausible models.  One model is better than another model if it has a smaller AIC (or Bayesian information criterion (BIC)) value.  AIC is based on Kullback-Leibler distance in information theory, and BIC is based on integrated likelihood in Bayesian theory.  If the complexity of the true model does not increase with the size of the data set, BIC is the preferred criterion; otherwise, AIC is preferred.  (Burnham and Anderson 1998)
RELATED TERMS: Bayesian information criterion
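
As a minimal sketch of how the two criteria are compared, using the standard formulas AIC = 2k - 2 ln L and BIC = k ln(n) - 2 ln L; the two fitted "models" below are hypothetical numbers, not the output of a real fit:

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2 ln L (smaller is better)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k ln(n) - 2 ln L (smaller is better)."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical fits to the same n = 50 observations:
# model A has 2 parameters (ln L = -120.0), model B has 5 (ln L = -117.5).
aic_a, aic_b = aic(-120.0, 2), aic(-117.5, 5)
bic_a, bic_b = bic(-120.0, 2, 50), bic(-117.5, 5, 50)
print(aic_a, aic_b)   # 244.0 245.0 -> model A preferred by AIC
print(bic_a < bic_b)  # True: BIC penalizes model B's extra parameters more
```

Note how BIC's k ln(n) penalty grows with sample size while AIC's 2k does not, which is why the two criteria can disagree on larger data sets.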

aleatory uncertainty

Aleatory is of or pertaining to natural or accidental causes and cannot be explained with mechanistic theory.  Generally interpreted to be the same as stochastic variability.  (FAO/WHO 2003b)

algorithm
  1. A logical, step-by-step procedure used to solve problems in mathematics and computer programming.  (AAS 1999)
  2. A process or set of rules by which a calculation or process can be carried out usually referring to calculations that will be done by a computer.  (CancerWEB 2005) 
  3. A computable set of steps to achieve a desired result.  (NIST 2005)
  4. A specific set of instructions for carrying out a procedure or solving a problem, usually with the requirement that the procedure terminate at some point.  Specific algorithms sometimes also go by the name method, procedure, or technique.  (Weisstein/MathWorld 2002)
analysis of variance

A statistical technique that isolates and assesses the contribution of categorical factors to variation in the mean of a continuous outcome variable.  The data are divided into categories based on their values for each of the independent variables, and the differences between the mean outcome values of these categories are tested for statistical significance.  (Last 1983)
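
A bare-bones sketch of the computation behind one-way analysis of variance, with made-up data; the F statistic is the ratio of between-category to within-category mean squares:

```python
# One-way ANOVA on hypothetical data: three categories of an independent
# variable, four observations of a continuous outcome in each.
groups = [
    [6.1, 5.8, 6.4, 6.0],
    [7.2, 7.0, 6.8, 7.4],
    [5.1, 5.5, 4.9, 5.3],
]

n = sum(len(g) for g in groups)      # total observations (12)
k = len(groups)                      # number of categories (3)
grand_mean = sum(sum(g) for g in groups) / n

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

# F = (between-group mean square) / (within-group mean square).
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 1))  # a large F suggests the category means really differ
```

In practice the F statistic would then be compared against an F distribution with (k-1, n-k) degrees of freedom.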

Anderson-Darling goodness of fit

The Anderson-Darling test is used to test if a sample of data came from a population with a specific distribution.  It is a modification of the Kolmogorov-Smirnov (K-S) test and gives more weight to the tails than does the K-S test.  The K-S test is distribution free in the sense that the critical values do not depend on the specific distribution being tested.  The Anderson-Darling test makes use of the specific distribution in calculating critical values.  This has the advantage of allowing a more sensitive test and the disadvantage that critical values must be calculated for each distribution.  The Anderson-Darling test is also an alternative to the chi-squared goodness-of-fit test.  (NIST/SEMATECH 2005a)
RELATED TERMS: Kolmogorov-Smirnov test
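
For a fully specified null distribution, the A-squared statistic itself is straightforward to compute; the sketch below (made-up data, standard normal null) shows the (2i-1) weighting of the ordered observations that gives the tails more influence than in the K-S test:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def anderson_darling(sample, cdf):
    """A^2 = -n - (1/n) * sum_{i=1..n} (2i-1) * [ln F(y_i) + ln(1 - F(y_{n+1-i}))],
    where y_1 <= ... <= y_n are the sorted observations."""
    y = sorted(sample)
    n = len(y)
    s = sum((2 * i - 1) * (math.log(cdf(y[i - 1])) + math.log(1 - cdf(y[n - i])))
            for i in range(1, n + 1))
    return -n - s / n

sample = [-1.2, -0.4, 0.1, 0.3, 0.8, 1.5]   # hypothetical observations
a2 = anderson_darling(sample, normal_cdf)
print(round(a2, 3))  # a small value is consistent with the N(0,1) null
```

As the entry notes, the critical value against which A^2 is compared depends on the specific distribution being tested.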



approximation

An approach to a correct estimate, calculation, or conception, or to a given quantity, quality, etc.; a continual approach or coming nearer to a result; as, to solve an equation by approximation.  A value that is nearly but not exactly correct.  (CancerWEB 2005)

arithmetic mean
  1. The sum of all the measurements in a data set divided by the number of measurements in the data set.  (EPA 1992)
  2. A measure of central tendency.  It is computed by adding all the individual values together and dividing by the number in the group.  (Last 1983)

association

Statistical relationship between two or more events, characteristics, or other variables.  (CDC 2005)

Bayes’ theorem

A result that allows new information to be used to update the conditional probability of an event.  (STEPS 1997)

Bayesian inference

Inference is using data to learn about some uncertain quantity.  Bayes’ theorem describes how to update a prior distribution about the uncertain quantity using a model (expressing likelihood of observed data) to obtain a posterior distribution.  Bayesian inference allows incorporation of prior beliefs and can handle problems with insufficient data for frequentist inference.  (FAO/WHO 2003b)
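
A minimal sketch of Bayesian updating using the conjugate beta-binomial pair, with hypothetical numbers: if the prior for a contamination probability theta is Beta(alpha, beta), then after observing x positives in n samples the posterior is Beta(alpha + x, beta + n - x):

```python
# Prior belief: contamination is rare (prior mean 1/(1+9) = 0.1).
alpha_prior, beta_prior = 1, 9
# Hypothetical data: 4 positive results out of 20 samples.
x, n = 4, 20

# Conjugate update: posterior parameters just add the counts.
alpha_post = alpha_prior + x
beta_post = beta_prior + (n - x)

prior_mean = alpha_prior / (alpha_prior + beta_prior)
post_mean = alpha_post / (alpha_post + beta_post)
print(prior_mean, round(post_mean, 4))  # 0.1 -> ~0.1667: data pull the estimate up
```

The posterior blends the prior belief with the observed frequency (4/20 = 0.2), landing between the two, which is the behavior the FAO/WHO definition describes.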

Bayesian information criterion          (ACRONYM: BIC)

SEE: Akaike's information criterion

Bayesian methods

An approach, founded on Bayes’ theorem, that forms one of the two main schools of statistics.  Bayesian inference is very strong when only subjective data are available and is useful for using data to improve one’s estimate of a parameter.  (FAO/WHO 2003b)

Bernoulli distribution

SEE: binomial distribution


Bernoulli trial

A single random event for which there are two and only two possible outcomes that are mutually exclusive and have a priori fixed (and complementary) probabilities of resulting.  The trial is the realization of this process.  Conventionally one outcome is termed a success and is assigned the score 1, the other is a failure and has the score zero.  (CancerWEB 2005) 
RELATED TERMS: binomial distribution

best case

The situation or input for which an algorithm or data structure takes the least time or resources.  (NIST 2005)

beta distribution

The general formula for the probability density function of the beta distribution is

    f(x) = (x - a)^(p-1) (b - x)^(q-1) / [B(p,q) (b - a)^(p+q-1)],   a ≤ x ≤ b

where p and q are the shape parameters, a and b are the lower and upper bounds, respectively, of the distribution, and B(p,q) is the beta function.  (NIST/SEMATECH 2005a)
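
The density can be evaluated directly using the identity B(p,q) = Γ(p)Γ(q)/Γ(p+q); a small sketch with illustrative parameter values:

```python
import math

def beta_pdf(x, p, q, a=0.0, b=1.0):
    """General beta density on [a, b]; B(p,q) = Gamma(p)Gamma(q)/Gamma(p+q)."""
    if not a <= x <= b:
        return 0.0
    B = math.gamma(p) * math.gamma(q) / math.gamma(p + q)
    return ((x - a) ** (p - 1) * (b - x) ** (q - 1)) / (B * (b - a) ** (p + q - 1))

# With p = q = 2 on [0, 1] the density reduces to 6x(1-x), peaking at x = 0.5.
print(beta_pdf(0.5, 2, 2))  # 1.5
```

The bounded support and two shape parameters are why the beta is a common choice for modeling uncertain proportions in risk assessment.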

beta error
  1. The probability of a type II (false-negative) error.  In hypothesis testing, β is the probability of concluding incorrectly that an intervention is not effective when it has true effect.  (1-β) is the power to detect an effect of an intervention if one truly exists.  (NLM/NICHSR 2004)
  2. In a hypothesis test, a beta (or type II) error occurs when the null hypothesis, H0, is not rejected when it is in fact false.  For example, in a clinical trial of a new drug, the null hypothesis might be that the new drug is no better, on average, than the current drug; i.e., H0: there is no difference between the two drugs on average.  A type II error would occur if it was concluded that the two drugs produced the same effect, i.e., there is no difference between the two drugs on average, when in fact they produced different ones.  A type II error is frequently due to sample sizes being too small.  The probability of a type II error is generally unknown, but is symbolised by β and written P(Type II error) = β.  A type II error can also be referred to as an error of the second kind.  (STEPS 1997)
    RELATED TERMS: null hypothesis, power, type II error
bias
  1. Systematic error introduced into sampling or analysis by selecting or encouraging one outcome or answer over others.  (EPA 2004)
  2. A set of standard criteria for deciding whether a person has a particular disease or health-related condition, by specifying clinical criteria and limitations on time, place, and person.  (CDC 2005)
  3. A term that refers to how far the average statistic lies from the parameter it is estimating, that is, the error that arises when estimating a quantity.  It is also referred to as "systematic error."  It is the difference between the mean of a model prediction or of a set of measurements and the true value of the quantity being predicted or measured.  (FAO/WHO 2003b)
  4. In general, any factor that distorts the true nature of an event or observation.  In clinical investigations, a bias is any systematic factor other than the intervention of interest that affects the magnitude of (i.e., tends to increase or decrease) an observed difference in the outcomes of a treatment group and a control group.  Bias diminishes the accuracy (though not necessarily the precision) of an observation.  Randomization is a technique used to decrease this form of bias.  Bias also refers to a prejudiced or partial viewpoint that would affect someone’s interpretation of a problem.  Double blinding is a technique used to decrease this type of bias.  (NLM/NICHSR 2004)
  5. A systematic error arising from faulty study design, data collection, analysis, interpretation, etc.  (NZ 2002)
  6. Any difference between the true value and that actually obtained due to all causes other than sampling variability.  (RAIS 2004, SRA 2004)
  7. Bias is a term that refers to how far the average statistic lies from the parameter it is estimating, that is, the error which arises when estimating a quantity.  Errors from chance will cancel each other out in the long run, those from bias will not.  (STEPS 1997)
    RELATED TERMS: parameter


binomial distribution
  1. The probability distribution associated with two mutually exclusive outcomes; used to model cumulative incidence rates and prevalence rates.  The Bernoulli distribution is a special case of binomial distribution.  (CancerWEB 2005)
  2. The binomial distribution is used when there are exactly two mutually exclusive outcomes of a trial.  These outcomes are appropriately labeled "success" and "failure."  The binomial distribution is used to obtain the probability of observing x successes in N trials, with the probability of success on a single trial denoted by p.  The binomial distribution assumes that p is fixed for all trials.  The formula for the binomial probability mass function is

    P(x; p, N) = [N! / (x! (N - x)!)] p^x (1 - p)^(N-x),   x = 0, 1, ..., N

    The binomial distribution is probably the most commonly used discrete distribution.  (NIST/SEMATECH 2005a)
  3. An important theoretical distribution used to model discrete events, especially the count of defectives.  The binomial distribution depends on two parameters, n and p.  Here n is the total number of trials; for each trial, the chance of observing the event of interest is p, and of not observing it, 1-p.  The binomial distribution assumes each trial’s outcome is independent of that of any other trial, and models the sum of events observed.  Unlike the Poisson distribution, the binomial distribution sets a maximum number of events, n, the sample size that can be observed.  Unlike the hypergeometric distribution, the binomial distribution assumes the events it counts are independent.  (NIST/SEMATECH 2005b)
    RELATED TERMS: Poisson distribution, hypergeometric distribution
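
The probability mass function above can be evaluated directly; a small sketch with hypothetical numbers:

```python
import math

def binomial_pmf(x, n, p):
    """P(X = x) = C(n, x) * p^x * (1-p)^(n-x)."""
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

# Probability of exactly 2 contaminated samples out of 10, if each sample
# is independently contaminated with probability 0.1 (made-up figures):
print(round(binomial_pmf(2, 10, 0.1), 4))  # 0.1937

# The pmf sums to 1 over x = 0..n, as any probability distribution must.
print(round(sum(binomial_pmf(x, 10, 0.1) for x in range(11)), 6))  # 1.0
```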

bootstrap

A numerical method—also referred to as bootstrap simulation—for inferring sampling distributions and confidence intervals for statistics of random variables.  The methodology to estimate uncertainty involves generating subsets of the data on the basis of random sampling with replacement.  Such re-sampling means that each datum is equally represented in the randomization scheme (statistics).  (Marsh 1996)
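
A sketch of the percentile bootstrap for a confidence interval on the mean; the data are made up, and the number of replicates and the percentile method are one common choice among several:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=10000, level=0.95, seed=1):
    """Percentile bootstrap confidence interval for a statistic (a sketch)."""
    rng = random.Random(seed)
    replicates = sorted(
        stat([rng.choice(data) for _ in data])   # resample with replacement
        for _ in range(n_boot)
    )
    lo = replicates[int((1 - level) / 2 * n_boot)]
    hi = replicates[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

data = [2.1, 2.4, 1.9, 2.8, 2.2, 3.1, 2.0, 2.5]   # hypothetical measurements
lo, hi = bootstrap_ci(data)
print(round(lo, 2), round(hi, 2))  # interval for the mean, straddling 2.375
```

Each resample is the same size as the original data set, so every datum has equal probability of appearing in each draw, as the definition describes.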

bounding estimate
  1.   An estimate of exposure, dose, or risk that is higher than that incurred by the person in the population with the currently highest exposure, dose, or risk. Bounding estimates are useful in developing statements that exposures, doses, or risks are not greater than an estimated value.  (EPA 2005b)
  2.   An estimate of exposure, dose, or risk that is higher than that incurred by the person with the highest exposure, dose, or risk in the population being assessed.  Bounding estimates are useful in developing statements that exposures, doses, or risks are "not greater than" the estimated value.  (IPCS 2004)
chi-squared distribution
  1. A distribution in which a variable is distributed like the sum of the squares of any given independent random variable, each of which has a normal distribution with mean of zero and variance of one.  (CancerWEB 2005)
  2. The chi-square distribution results when ν independent variables with standard normal distributions are squared and summed.  The formula for the probability density function of the chi-square distribution is

    f(x) = x^(ν/2 - 1) e^(-x/2) / [2^(ν/2) Γ(ν/2)],   x ≥ 0

where ν is the shape parameter and Γ is the gamma function.  The formula for the gamma function is

    Γ(a) = ∫_0^∞ t^(a-1) e^(-t) dt

In a testing context, the chi-square distribution is treated as a "standardized distribution" (i.e., no location or scale parameters).  However, in a distributional modeling context (as with other probability distributions), the chi-square distribution itself can be transformed with a location parameter, µ, and a scale parameter, σ.  (NIST/SEMATECH 2005a)

Also called "chi-square" distribution, even though "chi-squared" is technically correct.


chi-squared goodness-of-fit test
  1. The chi-square test is used to test if a sample of data came from a population with a specific distribution.  An attractive feature of the chi-square goodness-of-fit test is that it can be applied to any univariate distribution for which you can calculate the cumulative distribution function.  The chi-square goodness-of-fit test is applied to binned data (i.e., data put into classes).  This is actually not a restriction since for non-binned data one can simply calculate a histogram or frequency table before generating the chi-square test. However, the value of the chi-square test statistic is dependent on how the data is binned.  Another disadvantage of the chi-square test is that it requires a sufficient sample size in order for the chi-square approximation to be valid.  (NIST/SEMATECH 2005a)
  2. The chi-squared goodness of fit test is a test for comparing a theoretical distribution, such as a Normal, Poisson etc, with the observed data from a sample.  (STEPS 1997)
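
The test statistic itself is simple: X^2 = Σ (observed - expected)^2 / expected over the bins, compared against a chi-squared distribution with (bins - 1 - fitted parameters) degrees of freedom.  A sketch with hypothetical binned counts:

```python
# Hypothetical counts in 5 bins (n = 200) and the bin probabilities
# predicted by the candidate distribution being tested.
observed = [18, 55, 68, 42, 17]
probs = [0.10, 0.25, 0.30, 0.25, 0.10]
expected = [p * sum(observed) for p in probs]   # [20, 50, 60, 50, 20]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 2))  # small X^2 -> data consistent with the candidate model
```

The dependence on how the data are binned, noted in the first definition, is visible here: regrouping the counts into different bins would change the statistic.
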
cluster analysis

A set of statistical methods used to group variables or observations into strongly inter-related subgroups.  In epidemiology, it may be used to analyze a closely grouped series of events or cases of disease or other health-related phenomenon with well-defined distribution patterns in relation to time or place or both.  (CancerWEB 2005) 

coefficient of variation          (ACRONYM: CV)
  1. A dimensionless measure of dispersion, equal to the standard deviation divided by the mean, often expressed as a percentage.  (EPA 2004)
  2. The ratio of the standard deviation to the mean. This is meaningful only if the variable is measured on a ratio scale. (Last 1983)
confidence interval
  1. A range of values that has a specified probability (e.g., 95 percent) of containing the statistical parameter (i.e., a quantity such as a mean or variance that describes a statistical population) in question.  The confidence limit refers to the upper or lower value of the range.  (EPA 2004)
  2. A range of values for a variable of interest, e.g., a rate, constructed so that this range has a specified probability of including the true value of the variable.  The specified probability is called the confidence level, and the end points of the confidence interval are called the confidence limits.  (CDC 2005)
  3. A range of values inferred or believed to enclose the actual or true value of an uncertain quantity with a specified degree of probability.  Confidence intervals may be inferred based upon sampling distributions for a statistic.  (FAO/WHO 2003b)
  4. Depicts the range of uncertainty about an estimate of a treatment effect.  It is calculated from the observed differences in outcomes of the treatment and control groups and the sample size of a study.  The confidence interval (CI) is the range of values above and below the point estimate that is likely to include the true value of the treatment effect.  The use of CIs assumes that a study provides one sample of observations out of many possible samples that would be derived if the study were repeated many times.  Investigators typically use CIs of 90%, 95%, or 99%.  For instance, a 95% CI indicates that there is a 95% probability that the CI calculated from a particular study includes the true value of a treatment effect.  If the interval includes a null treatment effect (usually 0.0, but 1.0 if the treatment effect is calculated as an odds ratio or relative risk), the null hypothesis of no true treatment effect cannot be rejected.  (NLM/NICHSR 2004)
  5. (95%) Under the assumption of a given statistical model, the range of values for an estimated statistic constructed so that under repeated sampling its true value will lie within such intervals for 95% of the time.  For a particular interval one cannot say with 95% confidence that the true value lies within the interval—unless one adopts a Bayesian approach, in which case a particular prior distribution will have been (probably unwittingly) adopted.  This is then called a credibility interval.  Confidence and credibility intervals usually have the same numerical limits if the prior distribution posits that all values are equally likely (and this may not be a tenable assumption).  (NZ 2002)
  6. A range of values (a1 < a < a2) determined from a sample of definite rules so chosen that, in repeated random samples from the hypothesized population, an arbitrarily fixed proportion of that range will include the true value, a, of an estimated parameter.  The limits, a1 and a2, are called confidence limits; the relative frequency with which these limits include a is called the confidence coefficient; and the complementary probability is called the confidence level.  As with significance levels, confidence levels are commonly chosen as 0.05 or 0.01, the corresponding confidence coefficients being 0.95 or 0.99.  Confidence intervals should not be interpreted as implying that the parameter itself has a range of values; it has only one value, a.  On the other hand, the confidence limits (a1, a2), being derived from a sample, are random variables, the values of which on a particular sample either do or do not include the true value a of the parameter.  However, in repeated samples, a certain proportion of these intervals will include a provided that the actual population satisfied the initial hypothesis.  (RAIS 2004, SRA 2004)
  7. A confidence interval gives an estimated range of values that is likely to include an unknown population parameter, the estimated range being calculated from a given set of sample data.  If independent samples are taken repeatedly from the same population, and a confidence interval calculated for each sample, then a certain percentage (confidence level) of the intervals will include the unknown population parameter.  Confidence intervals are usually calculated so that this percentage is 95%, but we can produce 90%, 99%, 99.9% (or whatever) confidence intervals for the unknown parameter.  The width of the confidence interval gives us some idea about how uncertain we are about the unknown parameter (see precision).  A very wide interval may indicate that more data should be collected before anything very definite can be said about the parameter.  Confidence intervals are more informative than the simple results of hypothesis tests (where we decide "reject H0" or "don’t reject H0") since they provide a range of plausible values for the unknown parameter.  (STEPS 1997)
    RELATED TERMS: parameter
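
A sketch of the most common construction, a large-sample normal-approximation interval for a mean (made-up data; z = 1.96 corresponds to the 95% level):

```python
import math
import statistics

def normal_ci(sample, z=1.96):
    """Approximate 95% CI for the mean, assuming large-sample normality."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))   # standard error
    return m - z * se, m + z * se

sample = [4.8, 5.1, 5.3, 4.9, 5.6, 5.0, 5.2, 4.7, 5.4, 5.0]  # hypothetical data
lo, hi = normal_ci(sample)
print(round(lo, 2), round(hi, 2))  # interval constructed to cover the true mean
```

Per the definitions above, the correct reading is about the procedure, not the one interval: across repeated samples, roughly 95% of intervals built this way would cover the true mean.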


contagious distribution

A probability distribution describing a stochastic process consisting of a combination of two or more processes.  Also referred to as a "mixture distribution" (statistics).  (FAO/WHO 2003b)

continuous time model

A model in which the system changes continuously over time. Derivatives (e.g., dY/dt) are the mathematical formalism for describing such continuous change. The differential equation which embodies a model provides the values of these derivatives at any particular time point; calculus or a computer can then be used to move the state of the model forwards in time. Continuous models have the advantage over discrete time models in that they are more amenable to algebraic manipulation, although they are slightly harder to implement on a computer. The same as a differential equation model.  (Swinton 1999)
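
The "calculus or a computer" step can be sketched with the simplest numerical scheme, Euler integration of a hypothetical growth model dY/dt = rY (rate and initial count are made up):

```python
# Continuous-time model dY/dt = r * Y, stepped forward with Euler's method.
r = 0.5        # per-hour growth rate (assumed)
y = 100.0      # initial count
dt = 0.001     # small step approximating continuous change
t = 0.0
while t < 2.0:             # integrate from t = 0 to t = 2 hours
    y += r * y * dt        # dY/dt = r*Y  =>  exact solution Y(t) = Y0 * e^(r*t)
    t += dt
print(round(y))  # close to 100 * e^1 ~= 272, with a small Euler discretization error
```

Shrinking dt moves the numerical solution toward the exact continuous solution, which is the sense in which the model "changes continuously over time."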

controllable variability

Sources of heterogeneity of values of time, space or different members of a population that can be modified in part—in principle, at least—by intervention, such as a control strategy.  For example, variability in the time and temperature history of food storage among storage devices influences variability in pathogen growth among food servings and in principle could be modified through a control strategy.  For both population and individual risk, controllable variability is a component of overall variability.  (FAO/WHO 2003b)


correlation

Relationship that results when a change in one variable is consistently associated with a change in another one.  (EDVCB 2000)

cumulative distribution function          (ACRONYM: CDF)
  1. Cumulative distribution functions are particularly useful for describing the likelihood that a variable will fall within different ranges of x.  F(x) (i.e., the value of y at x in a CDF plot) is the probability that a variable will have a value less than or equal to x.  (EPA 1998a)
  2. The CDF is alternatively referred to in the literature as the distribution function, cumulative frequency function, or the cumulative probability function.  The cumulative distribution function, F(x), expresses the probability the random variable X assumes a value less than or equal to some value x, F(x) = Prob(X ≤ x).  For continuous random variables, the cumulative distribution function is obtained from the probability density function by integration, or by summation in the case of discrete random variables.  (EPA 2004)
  3. All random variables (discrete and continuous) have a cumulative distribution function.  It is a function giving the probability that the random variable X is less than or equal to x, for every value x.  Formally, the cumulative distribution function F(x) is defined to be
    F(x) = P(X ≤ x) for -infinity < x < infinity
    For a discrete random variable, the cumulative distribution function is found by summing up the probabilities.  For a continuous random variable, the cumulative distribution function is the integral of its probability density function.  (STEPS 1997)
    RELATED TERMS: probability density function
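
For observed data, the empirical CDF makes the definition concrete: F(x) is the fraction of observations less than or equal to x.  A sketch with made-up discrete observations:

```python
def ecdf(sample):
    """Empirical CDF: F(x) = (number of observations <= x) / n."""
    data = sorted(sample)
    n = len(data)
    return lambda x: sum(1 for v in data if v <= x) / n

F = ecdf([3, 1, 4, 1, 5, 9, 2, 6])   # hypothetical observations
print(F(4))   # 0.625: five of the eight values are <= 4
print(F(0))   # 0.0: F starts at 0 below the smallest value
print(F(9))   # 1.0: F reaches 1 at the largest value
```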


decision tree
  1. A hierarchy of rules within a computer program, represented by a tree-like structure, that enables a set of data to be classified.  A series of selection criteria classify the data into smaller and smaller categories.  (AAS 1999)
  2. Alternative choices available at each stage of deciding how to manage a clinical problem, displayed graphically; at each branch or decision node, the probabilities of each outcome that can be predicted are shown; the relative worth of each outcome is described in terms of its utility or quality of life, e.g., as measured by probability of life expectancy or freedom from disability.  (CancerWEB 2005)
dependent variable
  1. In experiments, a variable that is influenced by or dependent upon changes in the independent variable.  (CancerWEB 2005)
  2. In a statistical analysis, the outcome variable(s) or the variable(s) whose values are a function of other variable(s) (called independent variable(s) in the relationship under study).  (CDC 2005)
    RELATED TERMS: contrast with independent variable

deterministic

A methodology relying on point (i.e., exact) values as inputs to estimate risk; this obviates quantitative estimates of uncertainty and variability.  Results are also presented as point values.  Uncertainty and variability may be discussed qualitatively, or semi-quantitatively by multiple deterministic risk estimates.  (EPA 2004)

deterministic analysis

Calculation and expression of health risks as single numerical values or "single point" estimates of risk. In risk assessments, the uncertainty and variability are discussed in a qualitative manner. (EPA 2005e)

deterministic model

A mathematical model in which the parameters and variables are not subject to random fluctuations, so that the system is at any time entirely defined by the initial conditions chosen. Contrast with a stochastic model.  (Swinton 1999)

distribution
  1. A set of values derived from a specific population or set of measurements that represents the range and array of data for the factor being studied.  (EPA 1997a)
  2. In epidemiology, the frequency and pattern of health-related characteristics and events in a population.  In statistics, the observed or theoretical frequency of values of a variable.  (CDC 2005)
  3. A series of values or a mathematical equation describing a series of values.  (FDA 2002)
  4. The arrangement in space and time of a specific microorganism or disease caused by that microorganism.  (ILSI 2000)
  5. The complete summary of the frequencies of the values or categories of a measurement made on a group of persons.  The distribution tells either how many or what proportion of the group were found to have each value (or each range of values) out of all the possible values that the quantitative measure can have. (Last 1983)
  6. A representation of the frequency of occurrence of values of a variable, especially of a response.  (NIST/SEMATECH 2005b)
    RELATED TERMS: sample, sampling distribution
distribution-free method

A method of testing a hypothesis or of setting up a confidence interval that does not depend on the form of the underlying distribution; in particular, it does not depend upon the variable following a normal distribution.  (Last 1983)


dose-response, linear-quadratic

A linear-quadratic dose response is a relationship between dose and biological response that is curved.  This implies that the rate of change in response is different at different doses.  The response may change slowly at low doses, for example, but rapidly at high doses.  A linear-quadratic dose response is written mathematically as follows: if Y represents the expected, or average, response and D represents dose, then Y = aD + bD^2, where a is the linear coefficient (or slope) and b is the quadratic coefficient (or curvature).  (RERF 1999)
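
A sketch of the formula with hypothetical coefficients, showing the behavior described above: the linear term dominates at low dose, the quadratic curvature at high dose:

```python
# Y = a*D + b*D^2 with made-up coefficients a = 0.02, b = 0.01.
def response(dose, a=0.02, b=0.01):
    return a * dose + b * dose ** 2

print(round(response(1), 4))   # 0.03: at D = 1, mostly the linear term
print(round(response(10), 4))  # 1.2:  at D = 10, the quadratic term contributes 1.0
```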

dynamic model

Considers the individual within a community rather than the isolated individual.  Time-dependent elements such as secondary transmission, host immunity, and animal reservoirs are included.  (ILSI 2000)


empirical

Pertaining to, or founded upon, experiment or experience; depending upon the observation of phenomena; versed in experiments.  (CancerWEB 2005)

empirical distribution

A representation of observed values or data of a series or population.  (FDA 2002)

Erlang distribution

SEE: gamma distribution

error
  1. Any discrepancy between a computed, observed, or measured quantity and the true, specified, or theoretically correct value of that quantity. (1) random error – in statistics, an error that can be predicted only on a statistical basis; (2) systematic error – in statistics, an error which results from some bias in the measurement process and is not due to chance, in contrast to random error. (AIHA 2000)
  2. A false or mistaken result obtained in a study or experiment. Several kinds of error can occur in epidemiology, for example, due to bias:
    1. Random error (sampling error) is that due to chance, when the result obtained in the sample differs from the result that would be obtained if the entire population ("universe") were studied. Two varieties of sampling error are type I, or alpha error, and type II, or beta error. In an experiment, if the experimental procedure does not in reality have any effect, an apparent difference between experimental and control groups may nevertheless be observed by chance, a phenomenon known as type I error. Another possibility is that the treatment is effective but by chance the difference is not detected on statistical analysis—type II error. In the theory of testing hypotheses, rejecting a null hypothesis when it is actually true is called "type I error."  Accepting a null hypothesis when it is incorrect is called "type II error."
    2. Systematic error is that due to factors other than chance, such as faulty measuring instruments. It is further considered in bias.  (Last 1983)
extrapolation
  1. In risk assessment, this process entails postulating a biologic reality based on observable responses and developing a mathematical model to describe this reality.  The model may then be used to extrapolate to response levels that cannot be directly observed.  (MERREA 2005, RAIS 2004, SRA 2004)
  2. Extrapolation is when the value of a variable is estimated at times which have not yet been observed.  This estimate may be reasonably reliable for short times into the future, but for longer times, the estimate is liable to become less accurate.  (STEPS 1997)


fate and transport modeling

The mathematical equations simulating a physical system which are used to assess and predict the movement and behavior of chemicals in the environment.  (EPA 2004)

fault tree

A method of analyzing potential outcomes which starts with the final event and works backwards, identifying all causes of the event, the contributing factors of those causes, etc., back to the basic events. Probabilities can be assigned to the basic events, and a likelihood thus estimated for the final event.  (AIHA 2000)

fault tree analysis

A technique by which many events that interact to produce other events can be related using simple logical relationships permitting a methodical building of a structure that represents the system.  (SRA 2004)

frequency distribution
  1. A distribution describing the rate or frequency of occurrence of a value in a series or population arranged in ascending or descending order.  (FDA 2002)
  2. A complete summary of the frequencies of the values or categories of a variable; often displayed in a two column table: the left column lists the individual values or categories, the right column indicates the number of observations in each category.  (CDC 2005)
gamma distribution

The general formula for the probability density function of the gamma distribution is

    f(x) = [(x - µ)/β]^(γ-1) exp[-(x - µ)/β] / [β Γ(γ)],   x ≥ µ

where γ is the shape parameter, µ is the location parameter, β is the scale parameter, and Γ is the gamma function, which has the formula

    Γ(a) = ∫_0^∞ t^(a-1) e^(-t) dt

The case where µ = 0 and β = 1 is called the standard gamma distribution.  The equation for the standard gamma distribution reduces to

    f(x) = x^(γ-1) e^(-x) / Γ(γ),   x ≥ 0
The gamma is a flexible life distribution model that may offer a good fit to some sets of failure data.  It is not, however, widely used as a life distribution model for common failure mechanisms.  A common use of the gamma model occurs in Bayesian reliability applications.  (NIST/SEMATECH 2005a)


Gaussian distribution

SEE: normal distribution

Gaussian distribution model

A commonly used assumption about the distribution of values for a parameter, also called the normal distribution.  For example, a Gaussian air dispersion model is one in which the pollutant is assumed to spread in air according to such a distribution and described by two parameters, the mean and standard deviation of the normal distribution.  (MERREA 2005, RAIS 2004, SRA 2004)
RELATED TERMS: normal distribution

geometric distribution

Geometric distributions model (some) discrete random variables.  Typically, a geometric random variable is the number of trials required to obtain the first failure, for example, the number of tosses of a coin until the first "tail" is obtained, or a process where components from a production line are tested, in turn, until the first defective item is found.  A discrete random variable X is said to follow a geometric distribution with parameter p, written X ~ Ge(p), if it has probability distribution

P(X = x) = p^(x−1)·(1 − p)


x = 1, 2, 3, ...
p = success probability; 0 < p < 1

The trials must meet the following requirements:

  1. the total number of trials is potentially infinite;
  2. there are just two outcomes of each trial; success and failure;
  3. the outcomes of all the trials are statistically independent;
  4. all the trials have the same probability of success.

The geometric distribution has expected value E(X) = 1/(1 − p) and variance V(X) = p/(1 − p)^2.  The geometric distribution is related to the binomial distribution in that both are based on independent trials in which the probability of success is constant and equal to p.  However, a geometric random variable is the number of trials until the first failure, whereas a binomial random variable is the number of successes in n trials.  (STEPS 1997)
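Under the definition above (X = number of trials up to and including the first failure, with per-trial success probability p), the pmf and its mean can be checked numerically; a minimal sketch:

```python
def geometric_pmf(x, p):
    """P(X = x) for the geometric distribution as defined above:
    X = number of trials up to and including the first failure,
    where p is the per-trial success probability."""
    return p ** (x - 1) * (1 - p)

p = 0.7
# E(X) = 1/(1 - p); approximate it by a long partial sum of x * P(X = x).
mean = sum(x * geometric_pmf(x, p) for x in range(1, 1000))
print(round(mean, 6))  # ≈ 1/(1 - 0.7) = 3.333333
```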

geometric mean
  1. The nth root of the product of n values.  (EPA 1997a)
  2. A measure of central tendency. Calculable only for positive values. It is calculated by taking the logarithms of the values, calculating their arithmetic mean, then converting back by taking the antilogarithm.  (Last 1983)
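Definition 2 (take logs, average, then take the antilog) translates directly into code; a minimal sketch:

```python
import math

def geometric_mean(values):
    """Geometric mean via logarithms: the antilog of the arithmetic
    mean of the logs.  Defined only for positive values."""
    logs = [math.log(v) for v in values]
    return math.exp(sum(logs) / len(logs))

# nth root of the product of n values: sqrt(2 * 8) = 4
print(geometric_mean([2, 8]))
```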

goodness of fit

Degree of agreement between an empirically observed distribution and a mathematical or theoretical distribution.  (CancerWEB 2005 )

goodness-of-fit test

A procedure for critiquing and evaluating the potential inadequacies of a probability distribution model with respect to its fitness to represent a particular set of observations.  (FAO/WHO 2003b)

histogram
  1. A graphic columnar or bar representation to compare the magnitudes of frequencies or numbers of items; graphical representation of the frequency distribution of a variable, in which rectangles are drawn with their bases on a uniform linear scale representing intervals, and their heights are proportional to the values within each of the intervals.  (CancerWEB 2005)
  2. A graphic representation of the frequency distribution of a continuous variable.  Rectangles are drawn in such a way that their bases lie on a linear scale representing different intervals, and their heights are proportional to the frequencies of the values within each of the intervals.  (CDC 2005)
  3. The purpose of a histogram is to graphically summarize the distribution of a univariate data set.  The histogram graphically shows the following:
    1. center (i.e., the location) of the data;
    2. spread (i.e., the scale) of the data;
    3. skewness of the data;
    4. presence of outliers; and
    5. presence of multiple modes in the data.
    These features provide strong indications of the proper distributional model for the data.  The probability plot or a goodness-of-fit test can be used to verify the distributional model.  (NIST/SEMATECH 2005a)
  4. A graphical display of a statistical distribution; a form of bar chart.  One axis (usually x) is the scale of the values observed, the second (usually y) is the frequency with which observations occur at (approximately) that value.  (NIST/SEMATECH 2005b)
  5. A histogram is a way of summarizing data that are measured on an interval scale (either discrete or continuous).  It is often used in exploratory data analysis to illustrate the major features of the distribution of the data in a convenient form. It divides up the range of possible values in a data set into classes or groups.  For each group, a rectangle is constructed with a base length equal to the range of values in that specific group, and an area proportional to the number of observations falling into that group.  This means that the rectangles might be drawn of non-uniform height.  The histogram is only appropriate for variables whose values are numerical and measured on an interval scale.  It is generally used when dealing with large data sets (>100 observations), when stem and leaf plots become tedious to construct.  A histogram can also help detect any unusual observations (outliers), or any gaps in the data set.  (STEPS 1997)
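The binning step underlying every histogram can be sketched in a few lines (a plotting library such as matplotlib would normally draw the rectangles):

```python
def histogram_counts(data, bin_edges):
    """Count observations falling in each half-open interval
    [edge[i], edge[i+1]); the last bin also includes its right edge."""
    counts = [0] * (len(bin_edges) - 1)
    for v in data:
        for i in range(len(counts)):
            last = i == len(counts) - 1
            if bin_edges[i] <= v < bin_edges[i + 1] or (last and v == bin_edges[-1]):
                counts[i] += 1
                break
    return counts

data = [1.2, 1.9, 2.5, 3.1, 3.7, 3.9, 4.0]
print(histogram_counts(data, [1, 2, 3, 4]))  # [2, 1, 4]
```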
hypergeometric distribution

An important distribution used to model discrete events, especially the count of defectives when sampling without replacement.  The hypergeometric distribution depends on three parameters: N, the known and finite population size; n, the known sample size (constrained to be less than or equal to N); and D, the unknown number of defectives.  Unlike the binomial distribution, the hypergeometric distribution assumes sampling is without replacement, and that its parameters are all integer-valued.  (NIST/SEMATECH 2005b)
RELATED TERMS: binomial distribution
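The hypergeometric probabilities can be written in terms of binomial coefficients, P(X = x) = C(D, x)·C(N−D, n−x)/C(N, n); a minimal sketch using Python's `math.comb`:

```python
import math

def hypergeom_pmf(x, N, n, D):
    """P(exactly x defectives in a sample of n drawn without
    replacement from a population of N containing D defectives)."""
    return math.comb(D, x) * math.comb(N - D, n - x) / math.comb(N, n)

# Drawing 2 items from a population of 5 containing 2 defectives:
print(hypergeom_pmf(1, N=5, n=2, D=2))  # C(2,1)*C(3,1)/C(5,2) = 6/10 = 0.6
```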

independent variable
  1. A characteristic being measured or observed that is hypothesised to influence another event or manifestation (the dependent variable) within a defined area of relationships under study; that is, the independent variable is not influenced by the event or manifestation, but may cause it or contribute to its variation.  (CancerWEB 2005) 
  2. An exposure, risk factor, or other characteristic being observed or measured that is hypothesized to influence an event or manifestation (the dependent variable).  (CDC 2005)
    RELATED TERMS: contrast with dependent variable


iteration

One computational cycle of a set of instructions (e.g., formula, algorithm) that is repeated a specified number of times.  (FDA 2002)

Kolmogorov-Smirnov test          (ACRONYM: K-S test)
  1. The Kolmogorov-Smirnov (K-S) test is used to decide if a sample comes from a population with a specific distribution.  An attractive feature of this test is that the distribution of the K-S test statistic itself does not depend on the underlying cumulative distribution function being tested.  Another advantage is that it is an exact test (the chi-square goodness-of-fit test depends on an adequate sample size for the approximations to be valid).  Despite these advantages, the K-S test has several important limitations:  (1) it only applies to continuous distributions, (2) it tends to be more sensitive near the center of the distribution than at the tails, and (3) perhaps the most serious limitation is that the distribution must be fully specified; that is, if location, scale, and shape parameters are estimated from the data, the critical region of the K-S test is no longer valid.  It typically must be determined by simulation.  Due to limitations 2 and 3 above, many analysts prefer to use the Anderson-Darling goodness-of-fit test.  (NIST/SEMATECH 2005a)
  2. For a single sample of data, the Kolmogorov-Smirnov test is used to test whether or not the sample of data is consistent with a specified distribution function.  When there are two samples of data, it is used to test whether or not these two samples may reasonably be assumed to come from the same distribution.  The Kolmogorov-Smirnov test does not require the assumption that the population is normally distributed.  (STEPS 1997)
    RELATED TERMS: Anderson-Darling goodness of fit test
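The K-S statistic itself (though not its critical values) is straightforward to compute: it is the largest absolute gap between the empirical CDF and the hypothesized CDF. A minimal sketch for a single sample against a fully specified distribution:

```python
def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest absolute
    difference between the empirical CDF and the hypothesized CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # The empirical CDF jumps from i/n to (i+1)/n at x; check both sides.
        d = max(d, abs((i + 1) / n - cdf(x)), abs(cdf(x) - i / n))
    return d

# Against the uniform(0, 1) CDF, F(x) = x:
print(ks_statistic([0.1, 0.4, 0.6, 0.9], lambda x: x))
```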

likelihood

The probability of the observed data for various values of the unknown model parameters (statistics).  (Last 1995)

linear correlation coefficient

A measure of the tendency for there to be a linear relationship between (X, Y) pairs of data.  Measured by Pearson's coefficient (r), usually just called the "correlation coefficient" (so the unwary may not realize that it measures only a linear relationship).  (NZ 2002)
RELATED TERMS: rank correlation coefficient
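Pearson's r can be computed from the centered sums of squares and cross-products; a minimal sketch:

```python
import math

def pearson_r(xs, ys):
    """Pearson's linear correlation coefficient r."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# A perfect linear relationship gives r = 1; note that a strong
# *nonlinear* relationship can still give a small r.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```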

linear dose response
  1. A pattern of frequency or severity of biological response that varies directly with the amount of dose of an agent.  (EPA 2003)
  2. A linear dose response is a relationship between dose and biological response that is a straight line. In other words, the rate of change (slope) in the response is the same at any dose. A linear dose response is written mathematically as follows: if Y represents the expected, or average response and D represents dose, then Y = aD where a is the slope, also called the linear coefficient. (RERF 1999)
linear model
  1. Statistical models in which the value of a parameter for a given value of a factor is assumed to be equal to a + bx, where a and b are constants.  The models predict a linear regression.  (CancerWEB 2005)
  2. A statistical model of a dependent variable y as a function of a factor, x: y = a + bx + E, where E represents random variation.  (Last 1983)

lognormal distribution
  1. If a variable y is such that x = log y, it is said to have a lognormal distribution; this is a skew distribution.  (CancerWEB 2005)
  2. If a variable Y is such that X = log Y is normally distributed, it is said to have lognormal distribution. This is a skew distribution. (Last 1983)
  3. A variable X is lognormally distributed if Y = LN(X) is normally distributed, with "LN" denoting the natural logarithm.  The general formula for the probability density function of the lognormal distribution is

f(x) = exp{−[ln((x − Θ)/m)]^2 / (2σ^2)} / [(x − Θ)·σ·sqrt(2π)],   x ≥ Θ; m, σ > 0

where σ is the shape parameter, Θ is the location parameter and m is the scale parameter. The case where Θ = 0 and m = 1 is called the standard lognormal distribution.  The case where Θ equals zero is called the 2-parameter lognormal distribution.  The equation for the standard lognormal distribution is

f(x) = exp[−(ln x)^2 / (2σ^2)] / (x·σ·sqrt(2π)),   x ≥ 0; σ > 0


log-probit model

A dose-response model which assumes that each animal has its own threshold dose, below which no response occurs and above which a tumor [or other effect] is produced by exposure to a chemical.  (MERREA 2005, RAIS 2004, SRA 2004)

logistic distribution

The continuous distribution with parameters m and b > 0 having probability density and distribution functions:

P(x) = e^(−(x − m)/b) / {b·[1 + e^(−(x − m)/b)]^2}

D(x) = 1 / [1 + e^(−(x − m)/b)]

The distribution function is similar in form to the solution to the continuous logistic equation, giving the distribution its name.  (Weisstein/MathWorld 2003)

logistic model

A dose-response model used for low-dose extrapolation, of the form:

P(d) = γ + (1 − γ) / {1 + exp[−(α + β·ln d)]}

where: P(d) = probability of cancer from lifetime, continuous exposure at dose rate d, and

α, β = fitted parameters; and

γ = background incidence rate. (EPA 2003)

logit model

A dose-response model which, like the probit model, leads to an S-shaped dose-response curve, symmetrical about the 50% response point.  The logit model leads to lower "very safe doses" than the probit model, even when both models are equally descriptive of the data in the observable range.  (MERREA 2005, RAIS 2004, SRA 2004)

low-dose extrapolation

An estimation of the dose-response relationship at doses less than the lowest dose studied experimentally.  (EPA 2004)

low-dose linear model

A low-dose-linear model is one whose slope is greater than zero at a dose of zero.  A low-dose-linear model approximates a straight line only at very low doses; at higher doses near the observed data, a low-dose-linear model can display curvature.  The term "low-dose-linear" is often abbreviated "linear," although a low-dose-linear model is not linear at all doses.  Use of nonlinear approaches does not imply a biological threshold dose below which the response is zero.  Estimating thresholds can be problematic; for example, a response that is not statistically significant can be consistent with a small risk that falls below an experiment’s power of detection.  (EPA 2005a)

Markov chain Monte Carlo
  1. A general method of sampling arbitrary high-dimensional probability distributions by taking a random walk through configuration space.  One changes the state of the system randomly according to a fixed transition rule, thus generating a random walk through state space, s0, s1, s2, ... .  The definition of a Markov process is that the next step is chosen from a probability distribution that depends only on the present position.  This makes it very easy to describe mathematically.  The process is often called the drunkard's walk (statistics).  (FAO/WHO 2003b)
  2. A type of quantitative modeling that involves a specified set of mutually exclusive and exhaustive states (e.g., of a given health status), and for which there are transition probabilities of moving from one state to another (including of remaining in the same state).  Typically, states have a uniform time period, and transition probabilities remain constant over time.  (NLM/NICHSR 2004)
    RELATED TERMS: Monte-Carlo simulation
mathematical modeling

A representation of aspects of the behavior of a system by creating an approximate (mathematical) description based on theory or phenomenon that accounts for its known or inferred properties.  (FDA 2002)

maximum likelihood estimate          (ACRONYM: MLE)

Statistical method for estimating model parameters.  Generally provides a mean or central tendency estimate, as opposed to a confidence limit on the estimate.  (EPA 2003)

maximum likelihood method          (ACRONYM: ML method)

SEE: maximum likelihood estimate

mean
  1. mean, arithmetic: The sum of numbers divided by the number of numbers.  (NZ 2002)
  2. mean, geometric: The nth root of the product of n numbers; the same as the antilog of the mean of the logarithms.  Tends to be a better measure of central tendency for skewed distributions, but is zero if any datum is zero.  Estimates the geometric mean of a lognormal distribution, but with some bias (which can be corrected).  (NZ 2002)
measures of central tendency

A general term for several characteristics of the distribution of a set of values or measurements around a value or values at or near the middle of the set.  The principal measures of central tendency are the mean (average), median, and mode.  (Last 1983)

median
  1. The middle value in a population distribution, above and below which lie an equal number of individual values; midpoint. (CARB 2000)
  2. A measure of central tendency. The simplest division of a set of measurements is into two parts—the lower and the upper half. The point on the scale that divides the group in this way is called the "median." (Last 1983)
  3. The midpoint value obtained by ranking all values from highest to lowest and choosing the value in the middle. The median divides a population into two equal halves. (NIAID 2000)
  4. The middle value of n numbers (if n is even it is the arithmetic mean of the middle two numbers).  (NZ 2002)
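Definition 4 (the middle value, or the arithmetic mean of the middle two when n is even) can be sketched directly:

```python
def median(values):
    """Middle value of the sorted data; the mean of the middle two
    values when the count is even."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

print(median([7, 1, 3]))     # 3
print(median([7, 1, 3, 5]))  # (3 + 5) / 2 = 4.0
```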
median dose          (ACRONYM: N50 )

This value is a parameter used in microbial dose-response models (e.g., exponential and Poisson) and has been used in microbial risk assessment to determine the probability of infection. (EPA 2005f)

meta-analysis
  1. A family of statistical methods that quantitatively combine the results of separate investigations into a single statement of overall significance.  (NIST/SEMATECH 2005b)
  2. Systematic methods that use statistical techniques for combining results from different studies to obtain a quantitative estimate of the overall effect of a particular intervention or variable on a defined outcome.  This combination may produce a stronger conclusion than can be provided by any individual study.  (NLM/NICHSR 2004)
    RELATED TERMS: systematic review
metamodel calibration

The practice of determining unknown parameters of a model by the following steps:

  1. Run a computer experiment by varying the unknown parameters, and recording the expected responses.
  2. Fit a model of general form, especially a neural network, using the responses of the computer experiment as inputs and the factors as the outputs.
  3. Extract the unknown parameters as the outputs that result from this model when the inputs are taken to be the empirically observed values.  (NIST/SEMATECH 2005b)
mode
  1. The value in the data set that occurs most frequently.  (EPA 1992)
  2. One of the measures of central tendency.  The most frequently occurring value in a set of observations.  (Last 1983)
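A one-line illustration of the mode using the standard library's `collections.Counter`:

```python
from collections import Counter

def mode(values):
    """Most frequently occurring value (the first such value on ties)."""
    return Counter(values).most_common(1)[0][0]

print(mode([2, 3, 3, 5, 3, 2]))  # 3
```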
model
  1. A mathematical function with parameters that can be adjusted so the function closely describes a set of empirical data.  A mechanistic model usually reflects observed or hypothesized biological or physical mechanisms, and has model parameters with real world interpretation.  In contrast, statistical or empirical models selected for particular numerical properties are fitted to data; model parameters may or may not have real world interpretation.  When data quality is otherwise equivalent, extrapolation from mechanistic models (e.g., biologically based dose-response models) often carries higher confidence than extrapolation using empirical models (e.g., logistic model).  (EPA 2003)
  2. A mathematical representation of a natural system intended to mimic the behavior of the real system, allowing description of empirical data, and predictions about untested states of the system.  (EPA 2004)
  3. A set of constraints restricting the possible joint values of several quantities.  A hypothesis or system of belief regarding how a system works or responds to changes in its inputs.  The purpose of a model is to represent a particular system of interest as accurately and precisely as necessary with respect to particular decision objectives.  (FAO/WHO 2003b)
  4. A representation of something, often idealized or modified to make it conceptually easier to understand.  (CancerWEB 2005)
  5. A framework for thinking and acting.  (MERREA 2005)
  6. A mathematical statement of the relation(s) among variables.  Models can be of two basic types, or have two basic parts: statistical models, which predict a measured quantity; probability models, which predict the relative frequency of different random outcomes.  (NIST/SEMATECH 2005b)

model boundaries

Designated areas of competence of the model, including time, space, pathogens, pathways and exposed populations, and acceptable ranges of values for each input and jointly among all inputs for which the model meets data quality objectives.  (FAO/WHO 2003b)

model detail

Level of simplicity or detail associated with the functional relationships assumed in the model compared to the actual but unknown relationships in the system being modeled.  (FAO/WHO 2003b)

model structure

A set of assumptions and inference options upon which a model is based, including underlying theory as well as specific functional relationships.  (FAO/WHO 2003b)

model uncertainty
  1. Uncertainty due to necessary simplification of real-world processes, misspecification of the model structure, model misuse, or use of inappropriate surrogate variables or inputs.  (EPA 2004)
  2. Bias or imprecision associated with compromises made or lack of adequate knowledge in specifying the structure and calibration (parameter estimation) of a model.  (FAO/WHO 2003b)
model validation

The process by which the model is evaluated by comparing a model prediction with empirical data.  (FDA 2002)


modeling

An investigative technique using a mathematical or physical representation of a system or theory that accounts for all or some of its known properties.  (EPA 2004)

modeling node

In air quality modeling, the location where impacts are predicted.  (EPA 2004)

modifying factor          (ACRONYM: MF)

A factor used in the derivation of a reference dose or reference concentration.  The magnitude of the MF reflects the scientific uncertainties of the study and database not explicitly treated with standard uncertainty factors (e.g., the completeness of the overall database).  A MF is greater than zero and less than or equal to 10, and the default value for the MF is 1.  (EPA 2003)

Monte-Carlo analysis

A statistical technique which uses random numbers to determine whether a set of data values is random.  A Monte-Carlo analysis simulates situations where elements of risk are affected by variations in the value of a variable to determine which of these variations have the greatest influence on the various elements of risk.  This technique also can be used for comparing alternatives involving elements of risk or alternative economic decisions.  (NYS 1998)

Monte-Carlo sampling

A computer experimental method that uses random numbers in order to estimate distributions of simulator outputs.  (NIST/SEMATECH 2005b)

Monte-Carlo simulation
  1. A process for making repeated calculations with minor variations of the same mathematical equation, usually with the use of a computer.  May be used to integrate variability in the predicted results for a population or the uncertainty of a predicted result.  A two-dimensional Monte Carlo simulation may be used to do both.  (FDA 2002)
  2. A technique used in computer simulations that uses sampling from a random number sequence to simulate characteristics or events or outcomes with multiple possible values. For example, this can be used to represent or model many individual patients in a population with ranges of values for certain health characteristics or outcomes.  In some cases, the random components are added to the values of a known input variable for the purpose of determining the effects of fluctuations of this variable on the values of the output variable.  (NLM/NICHSR 2004)
  3. A technique that can provide a probability function of estimated exposure using distributed values of exposure factors in an exposure scenario.  The Monte Carlo simulation involves assigning a joint probability distribution to the input variables (i.e., exposure factors) of an exposure scenario.  Next, a large number of independent samples from the assigned joint distribution are taken and the corresponding outputs calculated.  This is accomplished by repeated computer runs (i.e., ≥ 1,000 iterations), using random numbers to assign values to the exposure factors.  The simulated output represents a sample from the true output distribution.  Methods of statistical inference are used to estimate, from the exposure output sample, some parameters of the exposure distribution, such as percentiles, mean, variance, and confidence intervals.  The Monte Carlo simulation can also be used to test the effect that an input parameter has on the output distribution.  (REAP 1995)
    RELATED TERMS: Markov chain Monte Carlo
Monte Carlo technique
  1. A repeated random sampling from the distribution of values for each of the parameters in a generic (exposure or dose) equation to derive an estimate of the distribution of (exposures or doses in) the population.  (EPA 1997a)
  2. A repeated random sampling from the distribution of values for each of the parameters in a calculation (e.g., lifetime average daily exposure), to derive a distribution of estimates (of exposures) in the population.  (EPA 2003)
  3. A repeated random sampling from the distribution of values for each of the parameters in a generic exposure or risk equation to derive an estimate of the distribution of exposures or risks in the population.  (EPA 2004)
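The repeated-random-sampling idea in the definitions above can be sketched with a *hypothetical* exposure equation, dose = C × IR / BW. The input ranges below are illustrative assumptions, not authoritative values; a real assessment would use fitted distributions for each exposure factor:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Hypothetical exposure equation: dose = C * IR / BW, with illustrative
# (not authoritative) uniform ranges for each input.
def simulate_dose():
    c = random.uniform(0.5, 2.0)     # concentration, mg/L (assumed range)
    ir = random.uniform(1.0, 3.0)    # intake rate, L/day (assumed range)
    bw = random.uniform(50.0, 90.0)  # body weight, kg (assumed range)
    return c * ir / bw

# Repeated computer runs (>= 1,000 iterations) give a sample from the
# output distribution, from which percentiles and means are estimated.
doses = sorted(simulate_dose() for _ in range(10_000))
mean = sum(doses) / len(doses)
p95 = doses[int(0.95 * len(doses))]
print(f"mean dose ≈ {mean:.4f} mg/kg-day, 95th percentile ≈ {p95:.4f}")
```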
multiple regression

Multiple linear regression aims to find a linear relationship between a response variable and several possible predictor variables.  (STEPS 1997)
RELATED TERMS: regression analysis

multistage model
  1. A mathematical function used to extrapolate the probability of cancer from animal bioassay data, using the form:

P(d) = 1 − exp[−(q0 + q1·d + q2·d^2 + . . . + qk·d^k)]

P(d) = probability of cancer from a continuous, lifetime exposure rate d;
qi = fitted dose coefficients of model; i = 0, 1, . . ., k; and
k = number of stages selected through best fit of the model, no greater than one less than the number of available dose groups.  (EPA 2003, EPA 2004)

  2. A carcinogenesis dose-response model where it is assumed that cancer originates as a "malignant" cell, which is initiated by a series of somatic-like mutations occurring in finite steps.  It is also assumed that each mutational stage can be depicted as a Poisson process in which the transition rate is approximately linear in dose rate.  (RAIS 2004, SRA 2004)
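The multistage form P(d) = 1 − exp[−(q0 + q1·d + … + qk·d^k)] is easy to evaluate once the coefficients are given; the coefficients below are illustrative placeholders, not fitted values:

```python
import math

def multistage_p(d, q):
    """Multistage model: P(d) = 1 - exp(-(q0 + q1*d + ... + qk*d^k)),
    where q holds illustrative (not fitted) dose coefficients."""
    s = sum(qi * d ** i for i, qi in enumerate(q))
    return 1 - math.exp(-s)

q = [0.0, 0.01, 0.002]  # hypothetical q0, q1, q2
print(multistage_p(1.0, q))  # 1 - exp(-0.012)
```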

multistage Weibull model

A dose-response model for low-dose extrapolation which includes a term for decreased survival time associated with tumor incidence:

P(d,t) = 1 − exp[−(q0 + q1·d + . . . + qk·d^k)·(t − t0)^z]

P(d,t) = the probability of a tumor (or other response) from lifetime, continuous exposure at dose d until age t (when tumor is fatal);
qi = fitted dose parameters, i = 0, 1, . . ., k;
k = no greater than the number of dose groups − 1;
t0 = the time between when a potentially fatal tumor becomes observable and when it causes death (t0 ≥ 1); and
z = fitted time parameter (also called the "Weibull" parameter).

(EPA 2003, EPA 2004)

normal distribution
  1. A distribution of data points with most values close to the center or norm, and fewer and fewer occurring as we move further and further out from the norm.  If a particular data set closely fits a normal distribution curve, the implication is that nothing is going on in that situation except random variation around a norm.  (Brooks 2001)
  2. Continuous frequency distribution of infinite range.  Its properties are as follows: (1) continuous, symmetrical distribution with both tails extending to infinity; (2) arithmetic mean, mode, and median identical; and (3) shape completely determined by the mean and standard deviation.  (CancerWEB 2005) 
  3. The symmetrical clustering of values around a central location.  The properties of a normal distribution include the following: (1) It is a continuous, symmetrical distribution; both tails extend to infinity; (2) the arithmetic mean, mode, and median are identical; and, (3) its shape is completely determined by the mean and standard deviation.  (CDC 2005)
  4. A symmetric distribution with one high point or mode, sometimes also called the bell curve.  The average is one of many statistical calculations that, even for only a moderate amount of data, tend to have distributions that resemble the normal curve.  (NIST/SEMATECH 2005b)
  5. Normal distributions model (some) continuous random variables.  Strictly, a Normal random variable should be capable of assuming any value on the real line, though this requirement is often waived in practice.  For example, height at a given age for a given gender in a given racial group is adequately described by a Normal random variable even though heights must be positive.  A continuous random variable X, taking all real values in the range (−∞, ∞) is said to follow a Normal distribution with parameters µ and σ if it has probability density function

f(x) = {1/sqrt(2·π·σ^2)}·exp[−½·{(x − µ)/σ}^2]

We write X ~ N(µ, σ^2).

    This probability density function (pdf) is a symmetrical, bell-shaped curve, centered at its expected value µ.  The variance is σ^2.  Many distributions arising in practice can be approximated by a Normal distribution.  Other random variables may be transformed to normality.  The simplest case of the normal distribution, known as the standard normal distribution, has expected value zero and variance one.  This is written as N(0,1).  (STEPS 1997)
    RELATED TERMS: probability density function, Gaussian distribution, Gaussian distribution model

null hypothesis
  1. The first step in testing for statistical significance in which it is assumed that the exposure is not related to disease.  (CDC 2005)
  2. In hypothesis testing, the hypothesis that an intervention has no effect, i.e., that there is no true difference in outcomes between a treatment group and a control group.  Typically, if statistical tests indicate that the P value is at or above the specified α-level (e.g., 0.01 or 0.05), then any observed treatment effect is not statistically significant, and the null hypothesis cannot be rejected.  If the P value is less than the specified α-level, then the treatment effect is statistically significant, and the null hypothesis is rejected.  If a confidence interval (e.g., of 95% or 99%) includes zero treatment effect, then the null hypothesis cannot be rejected.  (NLM/NICHSR 2004)
  3. The null hypothesis, H0, represents a theory that has been put forward, either because it is believed to be true or because it is to be used as a basis for argument, but has not been proved.  For example, in a clinical trial of a new drug, the null hypothesis might be that the new drug is no better, on average, than the current drug.  We would write H0: there is no difference between the two drugs on average.  We give special consideration to the null hypothesis.  This is due to the fact that the null hypothesis relates to the statement being tested, whereas the alternative hypothesis relates to the statement to be accepted if / when the null is rejected.  The final conclusion once the test has been carried out is always given in terms of the null hypothesis.  We either "Reject H0 in favor of H1" or "Do not reject H0"; we never conclude "Reject H1," or even "Accept H1."   If we conclude "Do not reject H0," this does not necessarily mean that the null hypothesis is true, it only suggests that there is not sufficient evidence against H0 in favor of H1.  Rejecting the null hypothesis then, suggests that the alternative hypothesis may be true.  (STEPS 1997)
    RELATED TERMS: beta error, type II error, p-value
one-hit model
  1. A dose-response model based on a mechanistic argument that there is a response after a target site has been hit by a single biologically effective unit of dose within a given time period.  The form of the model, a special case of the gamma, multistage, and Weibull models, is given by:

P(d) = 1 − exp(−λd)

P(d) = probability of cancer from lifetime continuous exposure at dose rate d, and
λ = fitted dose coefficient.  (EPA 2003)

  2. The basic dose-response model based on the concept that a tumor can be induced by a single receptor that has been exposed to a single quantum or effective dose unit of a chemical.  (RAIS 2004, SRA 2004)
  3. A mathematical model based on the biological theory that a single "hit" of some minimum critical amount of a carcinogen at a cellular target such as DNA can start an irreversible series of events leading to a tumor.  (EPA 2005b)
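The one-hit form P(d) = 1 − exp(−λd) is nearly linear at low doses, P(d) ≈ λd; a minimal sketch with an illustrative (not fitted) coefficient:

```python
import math

def one_hit_p(d, lam):
    """One-hit model: P(d) = 1 - exp(-lambda * d)."""
    return 1 - math.exp(-lam * d)

lam = 0.05  # hypothetical fitted dose coefficient
# At low doses the model is approximately linear: P(d) ≈ lambda * d.
print(one_hit_p(0.01, lam), lam * 0.01)
```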
outlier
  1. Observations whose value is so extreme that they appear not to be consistent with the rest of the dataset.  In a process monitor, outliers indicate that assignable or special causes are present.  The deletion of a particular outlier from a data analysis is easiest to justify when such an unusual cause has been identified.  (NIST/SEMATECH 2005b)
  2. An outlier is an observation in a data set that is far removed in value from the others in the data set.  It is an unusually large or an unusually small value compared to the others. An outlier might be the result of an error in measurement, in which case it will distort the interpretation of the data, having undue influence on many summary statistics, for example, the mean.  If an outlier is a genuine result, it is important because it might indicate an extreme of behavior of the process under study.  For this reason, all outliers must be examined carefully before embarking on any formal analysis.  Outliers should not routinely be removed without further justification.  (STEPS 1997)

p-value
  1. The probability value (p-value) of a statistical hypothesis test is the probability of getting a value of the test statistic as extreme as or more extreme than that observed by chance alone, if the null hypothesis H0, is true.  It is the probability of wrongly rejecting the null hypothesis if it is in fact true.  It is equal to the significance level of the test for which we would only just reject the null hypothesis.  The p-value is compared with the actual significance level of our test and, if it is smaller, the result is significant. That is, if the null hypothesis were to be rejected at the 5% significance level, this would be reported as "p < 0.05."  Small p-values suggest that the null hypothesis is unlikely to be true.  The smaller it is, the more convincing is the rejection of the null hypothesis.  It indicates the strength of evidence for say, rejecting the null hypothesis H0, rather than simply concluding "Reject H0" or "Do not reject H0."  (STEPS 1997)
  2. In hypothesis testing, the probability that an observed difference between the intervention and control groups is due to chance alone if the null hypothesis is true.  If P is less than the α-level (typically 0.01 or 0.05) chosen prior to the study, then the null hypothesis is rejected.  (NLM/NICHSR 2004)
  3. A probability calculated from a test statistic; it is the probability of obtaining data at least as extreme as has been obtained if the tested hypothesis were true.  (NZ 2002)
    RELATED TERMS: null hypothesis
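As a concrete illustration of definition 1, a two-sided p-value for a z statistic can be computed directly from the standard normal distribution (a minimal sketch; the function name and the choice of a z-test are illustrative, not part of the cited definitions):

```python
import math

def two_sided_p_value(z):
    """P(|Z| >= |z|) when the null hypothesis is true and Z is standard normal."""
    return math.erfc(abs(z) / math.sqrt(2))

# An observed statistic of z = 1.96 gives p of about 0.05, the boundary of
# significance at the conventional 5% level.
print(round(two_sided_p_value(1.96), 3))
```

Smaller p-values (larger |z|) correspond to stronger evidence against the null hypothesis, as the definitions above describe.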
parameter
  1. A quantity used to calibrate or specify a model, such as parameters of a probability model (e.g., mean and standard deviation for a normal distribution).  Parameter values are often selected by fitting a model to a calibration data set.  (FAO/WHO 2003b)
  2. A parameter is a value, usually unknown (and which therefore has to be estimated), used to represent a certain population characteristic.  For example, the population mean is a parameter that is often used to indicate the average value of a quantity.  Within a population, a parameter is a fixed value which does not vary.  Each sample drawn from the population has its own value of any statistic that is used to estimate this parameter. For example, the mean of the data in a sample is used to give information about the overall mean in the population from which that sample was drawn.  Parameters are often assigned Greek letters (e.g., σ), whereas statistics are assigned Roman letters (e.g., s).  (STEPS 1997)
    RELATED TERMS: population
Pareto distribution

The distribution with probability density function and distribution function:

P(x) = a·b^a / x^(a+1),   x ≥ b                                                                        (1)

D(x) = 1 − (b/x)^a,   x ≥ b                                                                   (2)

(Weisstein/MathWorld 2005a)
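The two formulas can be cross-checked numerically: integrating the density from b up to x should reproduce the distribution function (a sketch with illustrative parameter values):

```python
def pareto_pdf(x, a, b):
    """Pareto probability density P(x) = a * b**a / x**(a + 1), for x >= b."""
    return a * b ** a / x ** (a + 1)

def pareto_cdf(x, a, b):
    """Pareto distribution function D(x) = 1 - (b / x)**a, for x >= b."""
    return 1 - (b / x) ** a

# Midpoint-rule integration of the density from b to x recovers the
# distribution function, confirming the two formulas are consistent.
a, b, x = 2.0, 1.0, 3.0
n = 100_000
h = (x - b) / n
integral = sum(pareto_pdf(b + (i + 0.5) * h, a, b) * h for i in range(n))
print(round(integral, 4), round(pareto_cdf(x, a, b), 4))
```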

percentile
  1. Any one of the points dividing a distribution of values into parts, each of which contains 1/100 of the values.  For example, the 75th percentile is a value such that 75 percent of the values are less than or equal to it.  (EPA 2004)
  2. Values that divide the rank order of data into 100 equal parts.  (NZ 2002)
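The 75th-percentile example in definition 1 can be checked with a minimal sketch (the helper name and its nearest-rank convention are illustrative; other interpolation conventions exist):

```python
import math

def percentile(data, p):
    """Smallest data value such that at least p percent of the values
    are less than or equal to it (nearest-rank convention)."""
    values = sorted(data)
    k = max(1, math.ceil(p / 100 * len(values)))
    return values[k - 1]

# With the values 1..100, 75 percent of the values are <= 75.
print(percentile(range(1, 101), 75))
```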
point estimate

A single numerical value resulting from calculation(s). (AIHA 2000)

Poisson distribution
  1. Poisson distributions model (some) discrete random variables (i.e., variables that may take on only a countable number of distinct values, such as 0, 1, 2, 3, 4, ....).  Typically, a Poisson random variable is a count of the number of events that occur in a certain time interval or spatial area (statistics).  (FAO/WHO 2003b, STEPS 1997)
  2. The Poisson distribution is used to model the number of events occurring within a given time interval.  The formula for the Poisson probability mass function is

    P(X = x) = e^(−λ) · λ^x / x!,   x = 0, 1, 2, ...

λ is the shape parameter, which indicates the average number of events in the given time interval.  (NIST/SEMATECH 2005a)

  3. An important theoretical distribution used to model discrete events, especially the count of defects in an area.  The Poisson distribution depends on one parameter, lambda, which represents the average defect density per observation area (or volume, time interval, etc.). The Poisson distribution assumes that the counts of defects in two non-overlapping observation units are independent.  Further, the Poisson distribution assumes that the distribution of defect counts depends only on the area in which they are observed. Unlike the binomial distribution, the Poisson distribution in principle sets no limit to the number of defects that can be observed in any area.  Of particular interest to the semiconductor industry, the Poisson probability of observing zero defects in a region of area A, exp{-lambda A}, is useful for yield modeling.  (NIST/SEMATECH 2005b)
    RELATED TERMS: binomial distribution
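The probability mass function in definition 2 is simple to evaluate directly (a sketch; the helper name is illustrative):

```python
import math

def poisson_pmf(x, lam):
    """P(X = x) = exp(-lam) * lam**x / x!  for x = 0, 1, 2, ..."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

# With an average of lam = 2 events per interval, the chance of observing
# exactly 0, 1, or 2 events in an interval:
for x in range(3):
    print(x, round(poisson_pmf(x, 2.0), 4))
```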
power
  1. The probability of detecting a treatment effect of a given magnitude when a treatment effect of at least that magnitude truly exists.  For a true treatment effect of a given magnitude, power is the probability of avoiding Type II error, and is generally defined as (1 - β).  (NLM/NICHSR 2004)
  2. The power of a statistical hypothesis test measures the test’s ability to reject the null hypothesis when it is actually false; that is, to make a correct decision.  In other words, the power of a hypothesis test is the probability of not committing a type II error. It is calculated by subtracting the probability of a type II error from 1, usually expressed as:

    Power = 1 - P(type II error) = (1 - β)

The maximum power a test can have is 1, the minimum is 0.  Ideally we want a test to have high power, close to 1.  (STEPS 1997)
    RELATED TERMS: beta error, type II error
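The relationship Power = 1 − β can be made concrete for a one-sided z-test (an assumed test setup; the effect size, sample size, and α = 0.05 are illustrative):

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def z_test_power(effect, n, sigma=1.0, z_alpha=1.6449):
    """Power = 1 - beta for a one-sided z-test of H0: mu = 0 at alpha = 0.05
    (z_alpha is the 95th percentile of the standard normal)."""
    return 1.0 - normal_cdf(z_alpha - effect * math.sqrt(n) / sigma)

# With no true effect the "power" collapses to alpha itself; a real effect
# of half a standard deviation with n = 25 gives power of roughly 0.8.
print(round(z_test_power(0.0, 25), 3), round(z_test_power(0.5, 25), 3))
```

Power increases with sample size, so the same effect with n = 100 would be detected far more reliably.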

probabilistic model

A type of statistical modeling approach used to assess the expected frequency and magnitude of a parameter by running repetitive simulations using statistically selected inputs for the determinants of that parameter (e.g., rainfall, pollutants, flows, temperature).  (EPA 2004)

probabilistic uncertainty analysis

Technique that assigns a probability density function to each input parameter, then randomly selects values from each of the distributions and inserts them into the exposure equation.  Repeated calculations produce a distribution of predicted values, reflecting the combined impact of variability in each input to the calculation.  Monte Carlo is a common type of probabilistic uncertainty analysis.  (EPA 1997a)
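The repeated-sampling procedure described above can be sketched as a small Monte Carlo simulation (the exposure equation, input distributions, and parameter values are all hypothetical):

```python
import random

random.seed(1)  # fixed seed so repeated runs give the same summary

def simulate_dose():
    """One draw from a hypothetical exposure equation: dose = C * IR / BW."""
    concentration = random.lognormvariate(0.0, 0.5)   # mg/L, illustrative
    intake_rate = random.uniform(1.0, 3.0)            # L/day, illustrative
    body_weight = random.normalvariate(70.0, 10.0)    # kg, illustrative
    return concentration * intake_rate / body_weight  # mg/kg-day

# Repeated calculations produce a distribution of predicted doses.
doses = sorted(simulate_dose() for _ in range(10_000))
print(f"median = {doses[5000]:.4f}  95th percentile = {doses[9500]:.4f}")
```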

probability
  1. Depending on philosophical perspective:
    (a) The frequency with which we obtain samples within a specified range or for a specified category (e.g., the probability that an average individual with a particular mean dose will develop an illness).
    (b) Degree of belief regarding the likelihood of a particular range or category.  (FAO/WHO 2003b)
  2. The subjective assignment of likelihood of future events; or the frequency of occurrence of an experimental or observational outcome.  It refers to the uncertainty or partial knowledge associated with decision-making.  (FDA 2002)
  3. A basic concept that may be considered undefinable, expressing "degree of belief."  Alternatively, it is the limit of the relative frequency of an event in a sequence of n random trials as n approaches infinity: (Number of occurrences of the event) / n.  (Last 1983)
  4. The chance that a particular event will occur given the population of all possible events. (RAIS 2004)
  5. A probability assignment is a numerical encoding of the relative state of knowledge.  (SRA 2004)
probability density function          (ACRONYM: PDF)
  1. Probability density functions are particularly useful in describing the relative likelihood that a variable will have different particular values of x.  The probability that a variable will have a value within a small interval around x can be approximated by multiplying f(x) (i.e., the value of y at x in a PDF plot) by the width of the interval.  (EPA 1998a)
  2. The probability density function of a continuous random variable is a function that can be integrated to obtain the probability that the random variable takes a value in a given interval.  (STEPS 1997)
    RELATED TERMS: sampling distribution
probability distribution
  1. A function that for each possible value of a discrete random variable takes on the probability of that value occurring, or a curve which specifies by means of the area under the curve over an interval the probability that a continuous random variable falls within the interval (the probability density function).  (FAO/WHO 2003b)
  2. Portrays the relative likelihood that a range of values is the true value of a treatment effect.  This distribution often appears in the form of a bell-shaped curve.  An estimate of the most likely true value of the treatment effect is the value at the highest point of the distribution.  The area under the curve between any two points along the range gives the probability that the true value of the treatment effect lies between those two points.  Thus, a probability distribution can be used to determine an interval that has a designated probability (e.g., 95%) of including the true value of the treatment effect.  (NLM/NICHSR 2004)
  3. The probability distribution of a discrete random variable is a list of probabilities associated with each of its possible values.  It is also sometimes called the probability function or the probability mass function.  (STEPS 1997)
    RELATED TERMS: sampling distribution

probable error

The magnitude of error which is estimated to have been made in determination of results.  (RAIS 2004, SRA 2004)

probit analysis

A statistical transformation which will make the cumulative normal distribution linear.  In analysis of dose-response, when the data on response rate as a function of dose are given as probits, the linear regression line of these data yields the best estimate of the dose-response curve.  The probit unit is y = 5 + Z(p) , where p = the prevalence of response at each dose level and Z(p) = the corresponding value of the standard cumulative normal distribution.  (RAIS 2004, SRA 2004)

probit model

A dose-response model of the form:

    P(d) = γ + (1 − γ) · Φ(α + β·d)

where Φ is the standard normal cumulative distribution function;
P(d) = the probability that an individual selected at random will respond at dose d, assuming a normal distribution of tolerances;
α, β = fitted parameters; and
γ = background response rate. (EPA 2003)
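Assuming the common tolerance-distribution form P(d) = γ + (1 − γ)·Φ(α + β·d), with Φ the standard normal cumulative distribution function (an illustrative reading of the parameters above, not necessarily EPA's exact specification), the model can be evaluated directly:

```python
import math

def std_normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def probit_response(d, alpha, beta, gamma):
    """P(d) = gamma + (1 - gamma) * Phi(alpha + beta * d)   (assumed form)."""
    return gamma + (1 - gamma) * std_normal_cdf(alpha + beta * d)

# At the dose where alpha + beta*d = 0, the extra risk is half its maximum,
# on top of the background rate gamma: 0.05 + 0.95 * 0.5 = 0.525.
print(round(probit_response(0.0, 0.0, 1.0, 0.05), 3))
```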

process criteria

The process control parameters (e.g., time, temperature, dose) at a specified step that can be applied to achieve a performance criterion.  (CAC 2002)


quartile

Median of the bottom half of the data (lower quartile), or of the top half of the data (upper quartile). That is, values that divide the rank order of data into fourths.  (NZ 2002)

random error
  1. Unexplainable but characterizable variations in repeated measurements of a fixed true value resulting from processes that are random or statistically independent of each other, such as imperfections in measurement techniques.  Some random errors could be reduced by developing improved techniques.  (FAO/WHO 2003b)
  2. Indefiniteness of result due to finite precision of experiment. Measure of fluctuation in result upon repeated experimentation.  (RAIS 2004, SRA 2004)
random sample
  1. Sample selected from a statistical population such that each sample has an equal probability of being selected.  (EPA 1997a)
  2. A sample that is arrived at by selecting sample units such that each possible unit has a fixed and determinate probability of selection.  (Last 1983)
random variable

A quantity which can take on any number of values but whose exact value cannot be known before a direct observation is made.  For example, the outcome of the toss of a pair of dice is a random variable, as is the height or weight of a person selected at random from a city phone book.  (EPA 2004)

range
  1. The difference between the largest and smallest values in a measurement data set.  (EPA 1997a)
  2. In statistics, the difference between the largest and smallest values in a distribution.  In common use, the span of values from smallest to largest.  (CDC 2005)
rank correlation coefficient

A measure of the tendency for a Y value to increase as its associated X value increases.  It allows for monotonic non-linear relationships.  Measured by Spearman’s coefficient (r_s), sometimes called "Spearman’s rho" (ρ).  (NZ 2002)
RELATED TERMS: linear correlation coefficient
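Spearman's coefficient can be sketched with the no-ties formula r_s = 1 − 6·Σd² / [n(n² − 1)], where d is the difference between paired ranks (helper names are illustrative; tied values would need the averaged-rank refinement):

```python
def ranks(values):
    """Rank of each value, 1 = smallest (assumes no tied values)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rho: 1 - 6 * sum(d^2) / (n * (n^2 - 1)), no ties assumed."""
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# A monotonic but strongly non-linear relationship still scores rho = 1.
x = [1, 2, 3, 4, 5]
y = [v ** 3 for v in x]
print(spearman_rho(x, y))  # → 1.0
```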

ratio
  1. The relation which one quantity or magnitude has to another of the same kind.  It is expressed by the quotient of the division of the first by the second.  (CancerWEB 2005)
  2. The value obtained by dividing one quantity by another.  (CDC 2005).
regression analysis
  1. Procedures for finding the mathematical function which best describes the relationship between a dependent variable and one or more independent variables.  In linear regression the relationship is constrained to be a straight line and least-squares analysis is used to determine the best fit.  In logistic regression the dependent variable is qualitative rather than continuously variable and likelihood functions are used to find the best relationship.  In multiple regression the dependent variable is considered to depend on more than a single independent variable.  (CancerWEB 2005)
  2. The analysis of the relationship between a dependent variable and one or more independent variables.  Its purpose is to determine whether a relationship exists and the strength of the relationship.  It is also used to determine the mathematical relationship between the variables, predict the values of the dependent variable and control other independent variables when evaluating the effect of one or more independent variables.  (ESOMAR 2001)
    RELATED TERMS: multiple regression
sensitivity analysis
  1. Process of changing one variable while leaving the others constant to determine its effect on the output.  This procedure fixes each uncertain quantity at its credible lower and upper bounds (holding all others at their nominal values, such as medians) and computes the results of each combination of values.  The results help to identify the variables that have the greatest effect on exposure estimates and help focus further information gathering efforts.  (EPA 1997a)
  2. A method used to examine the behavior of a model by measuring the variation in its outputs resulting from changes to its inputs.  (CAC 1999, FAO/WHO 2003b)
  3. A means to determine the robustness of a mathematical model or analysis (such as a cost-effectiveness analysis or decision analysis) that tests a plausible range of estimates of key independent variables (e.g., costs, outcomes, probabilities of events) to determine if such variations make meaningful changes in the results of the analysis.  Sensitivity analysis also can be performed for other types of study; e.g., clinical trials analysis (to see if inclusion/exclusion of certain data changes results) and meta-analysis (to see if inclusion/exclusion of certain studies changes results).  (NLM/NICHSR 2004)
    RELATED TERMS: contrast with uncertainty analysis
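The one-at-a-time procedure in definition 1 can be sketched as follows (the model, nominal values, and bounds are all hypothetical):

```python
# One-at-a-time sensitivity sketch for a hypothetical exposure model.
def dose(concentration, intake, body_weight):
    return concentration * intake / body_weight  # mg/kg-day

nominal = {"concentration": 1.0, "intake": 2.0, "body_weight": 70.0}
bounds = {
    "concentration": (0.5, 2.0),
    "intake": (1.0, 3.0),
    "body_weight": (50.0, 90.0),
}

# Vary one input across its credible bounds while holding the others at
# their nominal values; the output swing ranks the inputs by influence.
swings = {}
for name, (low, high) in bounds.items():
    outputs = [dose(**{**nominal, name: low}), dose(**{**nominal, name: high})]
    swings[name] = max(outputs) - min(outputs)

for name in sorted(swings, key=swings.get, reverse=True):
    print(f"{name}: output swing = {swings[name]:.4f}")
```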
sigma g          (ACRONYM: σg)

Geometric standard deviation.  (EPA 2003)

standard deviation

  1. The most widely used measure of dispersion of a frequency distribution, equal to the positive square root of the variance.  (CDC 2005)
  2. A measure of spread or dispersion of a distribution.  It estimates the square root of the average squared deviation from the distribution average, sometimes called the root-mean-square.  Among all measures of dispersion, the standard deviation is the most efficient for normally distributed data.  Also, unlike the range, it converges to a single value as more data from the distribution is gathered.  (NIST/SEMATECH 2005b)
  3. A measure of dispersion or variation, usually taken as the square root of the variance.  (RAIS 2004, SRA 2004)
  4. Standard deviation is a measure of the spread or dispersion of a set of data.  It is calculated by taking the square root of the variance and is symbolised by s.d., or s.  In other words

    s = √[ Σ(x − x̄)² / (n − 1) ]

The more widely the values are spread out, the larger the standard deviation.  (STEPS 1997)
    RELATED TERMS: distribution, normal distribution
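A minimal sketch of the n − 1 ("sample") form of the calculation (the helper name is illustrative):

```python
import math

def sample_sd(data):
    """Square root of the sample variance (n - 1 in the denominator)."""
    n = len(data)
    mean = sum(data) / n
    return math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# The same center with values spread more widely gives a larger s.d.
narrow = [4, 5, 5, 5, 6]
wide = [1, 3, 5, 7, 9]
print(round(sample_sd(narrow), 3), round(sample_sd(wide), 3))
```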
standard error
  1. A statistical index of the probability that a difference between two sample means is greater than zero.  (CancerWEB 2005) 
  2. The standard deviation of a theoretical distribution of sample means about the true population mean.  (CDC 2005)
standard geometric deviation

Measure of dispersion of values about a geometric mean; the portion of the frequency distribution that is one standard geometric deviation to either side of the geometric mean; accounts for 68% of the total samples.  (RAIS 2004, SRA 2004)

standard normal deviation

Measure of dispersion of values about a mean value; the positive square root of the average of the squares of the individual deviations from the mean.  (RAIS 2004, SRA 2004)


standardization

A set of techniques used to remove as far as possible the effects of differences in the age or other confounding variables when comparing two or more populations.  (CancerWEB 2005)

statistic
  1. A function of a random sample of data (e.g., mean, standard deviation, distribution parameters).  (FAO/WHO 2003b)
  2. A value calculated from sample data.  (NIST/SEMATECH 2005b)
  3. A statistic is a quantity that is calculated from a sample of data.  It is used to give information about unknown values in the corresponding population.  For example, the average of the data in a sample is used to give information about the overall average in the population from which that sample was drawn.  It is possible to draw more than one sample from the same population and the value of a statistic will in general vary from sample to sample.  For example, the average value in a sample is a statistic.  The average values in more than one sample, drawn from the same population, will not necessarily be equal.  Statistics are often assigned Roman letters (e.g., m and s), whereas the equivalent unknown values in the population (parameters) are assigned Greek letters (e.g., µ and σ).  (STEPS 1997)
    RELATED TERMS: population, sample, sampling distribution

statistical significance
  1. An inference that the probability is low that the observed difference in quantities being measured could be due to variability in the data rather than an actual difference in the quantities themselves. The inference that an observed difference is statistically significant is typically based on a test to reject one hypothesis and accept another.  (EPA 1992)
  2. The probability that a result is not likely to be due to chance alone.  By convention, a difference between two groups is usually considered statistically significant if chance could explain it only 5% of the time or less.  Study design considerations may influence the a priori choice of a different level of statistical significance.  (EPA 2003)
  3. Conclusion that an intervention has a true effect, based upon observed differences in outcomes between the treatment and control groups that are sufficiently large so that these differences are unlikely to have occurred due to chance, as determined by a statistical test. Statistical significance indicates the probability that the observed difference was due to chance if the null hypothesis is true; it does not provide information about the magnitude of a treatment effect.  (NLM/NICHSR 2004)
  4. The condition in which the p-value is smaller than an a priori value (the significance level, α).  If p < α one can then say that a result is unlikely to have arisen merely by chance, and that the result is "statistically significant."  (NZ 2002)
  5. The statistical significance determined by using appropriate standard techniques of statistical analysis with results interpreted at the stated confidence level and based on data relating species which are present in sufficient numbers at control areas to permit a valid statistical comparison with the areas being tested.  (RAIS 2004, SRA 2004)
statistics
  1. A branch of mathematics that deals with collecting, reviewing, summarizing, and interpreting data or information.  Statistics are used to determine whether differences between study groups are meaningful.  (ATSDR 2004)
  2. The science and art of collecting, summarizing, and analyzing data that are subject to random variation. The term is also applied to the data themselves and to the summarization of the data.  (CancerWEB 2005)
stochastic model

A mathematical model which takes into consideration the presence of some randomness in one or more of its parameters or variables. The predictions of the model therefore do not give a single point estimate but a probability distribution of possible estimates. Contrast with deterministic. We might distinguish demographic stochasticity that arises from the discreteness of individuals and individual events such as birth, and environmental stochasticity arising from more-or-less unpredictable interactions with the outside world.  (Swinton 1999)

stochastic uncertainty

Also referred to as random error, q.v. (FAO/WHO 2003b)

stochastic variability

Sources of heterogeneity of values associated with members of a population that are a fundamental property of a natural system and that in practical terms cannot be modified, stratified or reduced by any intervention.  For example, variation in human susceptibility to illness for a given dose for which there is no predictive capability to distinguish the response of a specific individual from that of another.  Stochastic variability contributes to overall variability for measures of individual risk and for population risk.  (FAO/WHO 2003b)
RELATED TERMS: aleatory uncertainty

subjective probability distribution

A probability distribution that represents an individual’s or group’s belief about the range and likelihood of values for a quantity, based upon that person’s or group’s expert judgment, q.v.  (FAO/WHO 2003b)


t-distribution

The name given by Gosset (under his pseudonym "Student") to a distribution that arises in the study of small samples, and in various other ways as well.  (Brooks 2001)

triangular distribution

  1. The triangular distribution is a continuous distribution defined on the range a ≤ x ≤ c, with mode b, and probability density function

    P(x) = 2(x − a) / [(b − a)(c − a)]   for a ≤ x ≤ b
    P(x) = 2(c − x) / [(c − b)(c − a)]   for b < x ≤ c

and distribution function

    D(x) = (x − a)² / [(b − a)(c − a)]   for a ≤ x ≤ b
    D(x) = 1 − (c − x)² / [(c − b)(c − a)]   for b < x ≤ c

(Weisstein/MathWorld 2005b)

  2. A statistical distribution which requires the identification of high, low, and most likely values for each selected variable.  The resultant data points form the basis for the triangular, or three-point distribution.  (USDOT 1996)
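The high, low, and most-likely values in the USDOT definition map onto a triangular density (a sketch; assumes low < mode < high):

```python
def triangular_pdf(x, a, b, c):
    """Density on [a, c] with mode b (assumes a < b < c), zero elsewhere."""
    if a <= x <= b:
        return 2 * (x - a) / ((b - a) * (c - a))
    if b < x <= c:
        return 2 * (c - x) / ((c - b) * (c - a))
    return 0.0

# low = 10, most likely = 20, high = 40: the density peaks at the mode
# with height 2 / (c - a), and is zero outside [low, high].
low, mode, high = 10.0, 20.0, 40.0
print(triangular_pdf(mode, low, mode, high))  # peak height = 2 / 30
```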
type II error

SEE: beta error

uncertainty
  1. Uncertainty represents a lack of knowledge about factors affecting exposure/toxicity assessments and risk characterization and can lead to inaccurate or biased estimates of risk and hazard.  Some of the types of uncertainty include scenario uncertainty, parameter uncertainty, and model uncertainty.  (EPA 1997a, EPA 2004)
  2. Uncertainty occurs because of a lack of knowledge.  It is not the same as variability.  For example, a risk assessor may be very certain that different people drink different amounts of water but may be uncertain about how much variability there is in water intakes within the population.  Uncertainty can often be reduced by collecting more and better data, whereas variability is an inherent property of the population being evaluated.  Variability can be better characterized with more data but it cannot be reduced or eliminated.  Efforts to clearly distinguish between variability and uncertainty are important for both risk assessment and risk characterization.  (EPA 2003)
  3. Lack of knowledge regarding the true value of a quantity, such as a specific characteristic (e.g., mean, variance) of a distribution for variability, or regarding the appropriate and adequate inference options to use to structure a model or scenario.  These are also referred to as model uncertainty and scenario uncertainty.  Lack of knowledge uncertainty can be reduced by obtaining more information through research and data collection, such as through research on mechanisms, larger sample sizes or more representative samples.  (FAO/WHO 2003b)
  4. An expression of the lack of knowledge, usually given as a range or group of plausible alternatives.  (FDA 2002)
  5. Ambiguity in microbial risk assessment arising from lack of knowledge about specific factors, parameters, or models.  (ILSI 2000)
  6. Imperfect knowledge concerning the present or future state of an organism, system or (sub) population under consideration.  (IPCS/OECD 2004)
  7. Imprecision and inaccuracy of an assessment or monitoring method.  (KIWA 2004)
  8. A term for the fuzzy concept of qualifying statement of what is known or concluded with quantitative statements of probability.  Uncertainty usually has two aspects:  (1) that created by the experimental (i.e., observational) error associated with taking observations, and (2) that implied by the use of imperfect models.  (NIST/SEMATECH 2005b)
    RELATED TERMS: variability

uncertainty analysis
  1. A detailed examination of the systematic and random errors of a measurement or estimate (in this case a risk or hazard estimate); an analytical process to provide information regarding the uncertainty.  (EPA 2004)
  2. A method used to estimate the uncertainty associated with model inputs, assumptions, and structure/form.  (CAC 1999)
  3. A detailed examination of the systematic and random errors of a measurement or estimate; an analytical process to provide information regarding the uncertainty.  (RAIS 2004, SRA 2004)
uncertainty factor          (ACRONYM: UF)
  1. One of several, generally 10-fold, default factors used in operationally deriving the RfD and RfC from experimental data.  The factors are intended to account for (1) variation in susceptibility among the members of the human population (i.e., inter-individual or intraspecies variability); (2) uncertainty in extrapolating animal data to humans (i.e., interspecies uncertainty); (3) uncertainty in extrapolating from data obtained in a study with less-than-lifetime exposure (i.e., extrapolating from subchronic to chronic exposure); (4) uncertainty in extrapolating from a LOAEL rather than from a NOAEL; and (5) uncertainty associated with extrapolation when the database is incomplete.  (EPA 2003, EPA 2004)
  2. Mathematical adjustments for reasons of safety when knowledge is incomplete. For example, factors used in the calculation of doses that are not harmful (adverse) to people. These factors are applied to the lowest-observed-adverse-effect-level (LOAEL) or the no-observed-adverse-effect-level (NOAEL) to derive a minimal risk level (MRL).  Uncertainty factors are used to account for variations in people’s sensitivity, for differences between animals and humans, and for differences between a LOAEL and a NOAEL.  Scientists use uncertainty factors when they have some, but not all, the information from animal or human studies to decide whether an exposure will cause harm to people [also sometimes called a safety factor].  (ATSDR 2004)
  3. Uncertainty Factor: Reductive factor by which an observed or estimated no-observed-adverse effect level (NOAEL) is divided to arrive at a criterion or standard that is considered safe or without appreciable risk.  (IPCS/OECD 2004)
  4. One of several, generally 10-fold factors, used in operationally deriving the Reference Dose (RfD) from experimental data.  UFs are intended to account for (1) the variation in sensitivity among the members of the human population; (2) the uncertainty in extrapolating animal data to the case of humans; (3) the uncertainty in extrapolating from data obtained in a study that is of less-than-lifetime exposure; and (4) the uncertainty in using LOAEL data rather than NOAEL data.  (RAIS 2004)
    RELATED TERMS: assessment factor, safety factor
uniform distribution
  1. The general formula for the probability density function of the uniform distribution is

    f(x) = 1 / (B − A)   for A ≤ x ≤ B

where A is the location parameter and (B − A) is the scale parameter.  The case where A = 0 and B = 1 is called the standard uniform distribution.  The equation for the standard uniform distribution is

    f(x) = 1   for 0 ≤ x ≤ 1

The uniform distribution defines equal probability over a given range for a continuous distribution.  For this reason, it is important as a reference distribution.  (NIST/SEMATECH 2005a)

  2. Uniform distributions model (some) continuous random variables and (some) discrete random variables. The values of a uniform random variable are uniformly distributed over an interval. For example, if buses arrive at a given bus stop every 15 minutes, and you arrive at the bus stop at a random time, the time you wait for the next bus to arrive could be described by a uniform distribution over the interval from 0 to 15.  A discrete random variable X is said to follow a Uniform distribution with parameters a and b, written X ~ Un(a,b), if it has probability distribution

    P(X = x) = 1/(b − a + 1)   for x = a, a+1, ..., b

A discrete uniform distribution has equal probability at each of its n values.  A continuous random variable X is said to follow a Uniform distribution with parameters a and b, written X ~ Un(a,b), if its probability density function is constant within a finite interval [a,b], and zero outside this interval (with a less than or equal to b).  The uniform distribution has expected value E(X) = (a + b)/2 and variance (b − a)²/12.  (STEPS 1997)
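A sketch of the continuous uniform density and its moments (parameter values are illustrative):

```python
def uniform_pdf(x, A, B):
    """f(x) = 1 / (B - A) for A <= x <= B, and 0 outside the interval."""
    return 1.0 / (B - A) if A <= x <= B else 0.0

# Standard uniform (A = 0, B = 1): density 1 everywhere on [0, 1].
print(uniform_pdf(0.3, 0.0, 1.0))  # → 1.0

# Expected value and variance of the continuous uniform on [a, b]:
a, b = 2.0, 6.0
mean, variance = (a + b) / 2, (b - a) ** 2 / 12
print(mean, variance)
```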

upper bound

A plausible upper limit to the true value of a quantity.  This is usually not a true statistical confidence limit.  (EPA 2003)

upper percentile

Values at the upper end of the distribution of values for a particular set of data.  (EPA 1997a)

validation
  1. Comparison of predictions of a model to independently estimated or observed values of the quantity or quantities being predicted, and quantification of biases in mean prediction and precision of predictions.  (FAO/WHO 2003b)
  2. A process by which a simulation model is evaluated for its accuracy in representing a system.  (FDA 2002)
  3. Process by which the reliability and relevance of a particular approach, method, process, or assessment is established for a defined purpose.  Different parties define "reliability" as establishing the reproducibility of the outcome of the approach, method, process, or assessment over time.  "Relevance" is defined as establishing the meaningfulness and usefulness of the approach, method, process, or assessment for the defined purpose.  (IPCS/OECD 2004)
  4. Obtaining evidence that the elements of the Water Safety Plan are effective.  (KIWA 2004)
validity
  1. The degree to which a measurement actually measures or detects what it is supposed to measure.  (CDC 2005)
  2. The extent to which a measure accurately reflects the concept that it is intended to measure.  (NLM/NICHSR 2004)
variability
  1. Variability arises from true heterogeneity across people, places, or time and can affect the precision of exposure estimates and the degree to which they can be generalized.  The types of variability include: spatial, temporal, and inter-individual.  (EPA 1997a)
  2. Variability refers to true heterogeneity or diversity.  For example, among a population that drinks water from the same source and with the same contaminant concentration, the risks from consuming the water may vary.  This may be due to differences in exposure (i.e., different people drinking different amounts of water and having different body weights, different exposure frequencies, and different exposure durations) as well as differences in response (e.g., genetic differences in resistance to a chemical dose).  Those inherent differences are referred to as variability.  Differences among individuals in a population are referred to as inter-individual variability, differences for one individual over time is referred to as intra-individual variability.  (EPA 2003)
  3. Refers to the observed differences attributable to true heterogeneity or diversity in a population or exposure parameter.  Examples include human physiological variation (e.g., natural variation in body weight, height, breathing rate, drinking water intake rate), weather variability, variation in soil types and differences in contaminant concentrations in the environment.  Variability is usually not reducible by further measurement or study, but it can be better characterized.  (EPA 2004)
  4. Observed differences attributable to true heterogeneity or diversity in a population or exposure parameter.  Variability implies real differences among members of that population.  For example, different individuals have different intakes and susceptibility.  Differences over time for a given individual are referred to as intra-individual variability.  Differences over members of a population at a given time are referred to as inter-individual variability.  Variability in microbial risk assessment cannot be reduced but only more precisely characterized.  (FAO/WHO 2003b)
  5. A description of differences among the individual members of a series or population.  (FDA 2002)
  6. Observed differences attributable to true heterogeneity or diversity in a population or exposure parameter.  Variability in microbial risk assessment cannot be reduced but only more precisely characterized.  (ILSI 2000)
  7. Intrinsic heterogeneity in a process or parameter.  (KIWA 2004)
    RELATED TERMS: uncertainty

variable

Any characteristic or attribute that can be measured.  (CDC 2005)

variance
  1. Law:
    Government permission for a delay or exception in the application of a given law, ordinance, or regulation.  (EPA 2005b)
    Permission granted for a limited time (under stated conditions) for a person or company to operate outside the limits prescribed in a regulation.  (CARB 2000)
  2. Statistics:
    A measure of the variation shown by a set of observations, defined by the sum of the squares of deviations from the mean, divided by the number of degrees of freedom in the set of observations.  (Last 1983)

variation

  1. The difference among individual outputs of a process.  The causes of variation can be grouped into two major classes—common causes and special causes; the common cause variation can often be decomposed into variance components.  (NIST/SEMATECH 2005b)
  2. Differences between individuals within a population or among populations.  (WHO 1999)

Weibull model

A dose-response model of the form:

    P(d) = γ + (1 − γ) · [1 − exp(−β·d^α)]
P(d) = the probability of a tumor (or other response) from lifetime, continuous exposure at dose d until age t (when tumor is fatal);
α = fitted dose parameter (sometimes called "Weibull" parameter);
β = fitted dose parameter;
γ  = background response rate.

(EPA 2003)
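Assuming the common extra-risk form P(d) = γ + (1 − γ)·[1 − exp(−β·d^α)] (an illustrative reading of the parameters above, not necessarily EPA's exact specification), the model behaves as follows:

```python
import math

def weibull_response(d, alpha, beta, gamma):
    """P(d) = gamma + (1 - gamma) * (1 - exp(-beta * d**alpha))   (assumed form)."""
    return gamma + (1 - gamma) * (1 - math.exp(-beta * d ** alpha))

# At dose 0 the response equals the background rate gamma, and the
# response rises monotonically toward 1 as dose increases.
print(weibull_response(0.0, 1.5, 0.2, 0.01))  # → 0.01
```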

zero order analysis

The simplest approach to quantification of a risk with a limited treatment of each risk component (e.g. source terms, transport, health effects, etc.).  (RAIS 2004, SRA 2004)
