# Asymptotic Bayesian Theory of Quickest Change Detection for Hidden Markov Models

###### Abstract

In the 1960s, Shiryaev developed a Bayesian theory of change-point detection in the i.i.d. case, which was generalized at the beginning of the 2000s by Tartakovsky and Veeravalli for general stochastic models assuming a certain stability of the log-likelihood ratio process. Hidden Markov models represent a wide class of stochastic processes that are very useful in a variety of applications. In this paper, we investigate the performance of the Bayesian Shiryaev change-point detection rule for hidden Markov models. We propose a set of regularity conditions under which the Shiryaev procedure is first-order asymptotically optimal in a Bayesian context, minimizing moments of the detection delay up to a certain order asymptotically as the probability of false alarm goes to zero. The developed theory for hidden Markov models is based on a Markov chain representation of the likelihood ratio and $r$-quick convergence for Markov random walks. In addition, applying Markov nonlinear renewal theory, we present a high-order asymptotic approximation for the expected delay to detection of the Shiryaev detection rule. Asymptotic properties of another popular change detection rule, the Shiryaev–Roberts rule, are studied as well. Some interesting examples are given for illustration.

Keywords: Bayesian Change Detection Theory; Hidden Markov Models; Quickest Change-point Detection; Shiryaev Procedure; Shiryaev–Roberts Rule

## 1 Introduction

Sequential change-point detection problems deal with detecting changes in the state of a random process via observations that are obtained sequentially, one at a time. If the state is normal, then one wants to continue the observations. If the state changes and becomes abnormal, one is interested in detecting this change as rapidly as possible. In such a problem, there is always a tradeoff between false alarms and the speed of detection, which have to be balanced in a reasonable way. A conventional criterion is to minimize the expected delay to detection while controlling a risk associated with false detections. The optimality criterion and the solution depend heavily on what is known about the models for the observations and for the change point.

As suggested by Tartakovsky et al. [27], there are four main problem formulations of a sequential change-point detection problem that differ by assumptions on the point of change and optimality criteria. In this paper, we are interested in a Bayesian criterion assuming that the change point is random with a given prior distribution. We would like to find a detection rule that minimizes an average delay to detection, or more generally, higher moments of the detection delay in the class of rules with a given false alarm probability. At the beginning of the 1960s, Shiryaev [23] developed a Bayesian change-point detection theory in the i.i.d. case when the observations are independent with one distribution before the change and with another distribution after the change and when the prior distribution of the change point is geometric. Shiryaev found an optimal Bayesian detection rule, which prescribes comparing the posterior probability of the change point to a constant threshold. Throughout the paper, we refer to this detection rule as the Shiryaev rule even in a more general non-i.i.d. case. Unfortunately, finding an optimal rule in a general case of dependent data does not seem feasible. The only known generalization, due to the work of Yakir [32], is for the homogeneous finite-state Markov chain. Yakir [32] proved that the rule, based on thresholding the posterior probability of the change point with a random threshold that depends on the current state of the chain, is optimal. 
Since in general developing a strictly optimal detection rule is problematic, Tartakovsky and Veeravalli [29] considered the asymptotic problem of minimizing the average delay to detection as the probability of a false alarm becomes small and proved that the Shiryaev rule is asymptotically optimal as long as the log-likelihood ratio process (between the “change” and “no change” hypotheses) has certain stability properties expressed via the strong law of large numbers and its strengthening into $r$-quick convergence. A general Bayesian asymptotic theory of change detection in continuous time was developed by Baron and Tartakovsky [2].

While several examples related to Markov and hidden Markov models were considered in [2, 29], these are only very particular cases where the main condition on the $r$-quick convergence of the normalized log-likelihood ratio was verified. Moreover, even these particular examples show that the verification of this condition typically represents a hard task. At the same time, there is a class of very important stochastic models – hidden Markov models (HMM) – that find extraordinary applications in a wide variety of fields such as speech recognition [13, 20]; handwriting recognition [12, 14]; computational molecular biology and bioinformatics, including DNA and protein modeling [4]; human activity recognition [33]; target detection and tracking [3, 30, 31]; and modeling, rapid detection and tracking of malicious activity of terrorist groups [21, 22], to name a few. Our first goal is to focus on this class of models and specify the general results of Tartakovsky and Veeravalli [29] for HMMs, finding a set of general conditions under which the Shiryaev change-point detection procedure is asymptotically optimal as the probability of false alarm goes to zero. Our approach for hidden Markov models is based on the Markov chain representation of the likelihood ratio, proposed by Fuh [5], and $r$-quick convergence for Markov random walks (cf. Fuh and Zhang [11]). In addition, by making use of the uniform Markov renewal theory and the Markov nonlinear renewal theory developed in [6, 7], we achieve our second goal by providing a high-order asymptotic approximation to the expected delay to detection of the Shiryaev detection rule. We also study asymptotic operating characteristics of the (generalized) Shiryaev–Roberts procedure in the Bayesian context.

The remainder of the paper is organized as follows. In Section 2, we provide a detailed overview of previous results in change-point detection and give some basic results in the general change detection theory for dependent data that are used in subsequent sections for developing a theory for HMMs. In Section 3, a formulation of the problem for finite state HMMs is given. We develop a change-point detection theory for HMMs in Section 4, where we prove that under a set of quite general conditions on the finite-state HMM, the Shiryaev rule is asymptotically optimal (as the probability of false alarm vanishes), minimizing moments of the detection delay up to a certain order $r$. Section 5 studies the asymptotic performance of the generalized Shiryaev–Roberts procedure. In Section 6, using Markov nonlinear renewal theory, we provide higher order asymptotic approximations to the average detection delay of the Shiryaev and Shiryaev–Roberts detection procedures. Section 7 includes a number of interesting examples that illustrate the general theory. Certain useful auxiliary results are given in the Appendix, where we also present a simplified version of the Markov nonlinear renewal theory that helps solve our problem.

## 2 Overview of the Previous Work and Preliminaries

Let $\{X_n\}_{n \ge 1}$ denote the series of random observations defined on the complete probability space $(\Omega, \mathcal{F}, \mathsf{P})$, $\mathcal{F} = \bigvee_{n \ge 0} \mathcal{F}_n$, where $\mathcal{F}_n$ is the $\sigma$-algebra generated by the observations. Let $\mathsf{P}_\infty$ and $\mathsf{P}_0$ be two probability measures defined on this probability space. We assume that these measures are mutually locally absolutely continuous, i.e., the restrictions $\mathsf{P}_\infty^{(n)}$ and $\mathsf{P}_0^{(n)}$ of the measures $\mathsf{P}_\infty$ and $\mathsf{P}_0$ to the $\sigma$-algebras $\mathcal{F}_n$, $n \ge 1$, are absolutely continuous with respect to each other. Let $\mathbf{X}_0^n = (X_0, X_1, \dots, X_n)$ denote the vector of the observations with an attached initial value $X_0$, which is not a real observation but rather an initialization generated by a “system” in order to guarantee some desired property of the observed sequence $\{X_n\}_{n \ge 1}$. Since we will consider asymptotic behavior, this assumption will not affect our results. Let $p_\infty(\mathbf{X}_0^n)$ and $p_0(\mathbf{X}_0^n)$ denote densities of $\mathsf{P}_\infty^{(n)}$ and $\mathsf{P}_0^{(n)}$ with respect to a $\sigma$-finite measure. Suppose now that the observations initially follow the measure $\mathsf{P}_\infty$ (normal regime) and at some point in time $\nu = k$ something happens and they switch to $\mathsf{P}_0$ (abnormal regime). For a fixed $\nu = k$, the change induces a probability measure $\mathsf{P}_k$ with density $p_k(\mathbf{X}_0^n)$, which can also be written as

$$
p_k(\mathbf{X}_0^n) = \prod_{i=1}^{k-1} f_0\big(X_i \mid \mathbf{X}_0^{i-1}\big) \times \prod_{i=k}^{n} f_1\big(X_i \mid \mathbf{X}_0^{i-1}\big), \qquad n \ge k,
\tag{2.1}
$$

where $f_0(X_i \mid \mathbf{X}_0^{i-1})$ and $f_1(X_i \mid \mathbf{X}_0^{i-1})$ stand for the pre- and post-change conditional densities of $X_i$ given the past history $\mathbf{X}_0^{i-1}$. Note that in general the post-change conditional densities $f_1(X_i \mid \mathbf{X}_0^{i-1})$, $i \ge k$, may depend on the change point $k$, which is often the case for hidden Markov models. Model (2.1) can cover this case as well, allowing $f_1$ to depend on $k$ for $i \ge k$. Of course, the densities $f_0$ and $f_1$ may also depend on the time index $i$.

In the present paper, we are interested in the Bayesian setting where the change point $\nu$ is a random variable. In general, $\nu$ may be dependent on the observations. This situation was discussed in Tartakovsky and Moustakides [26] and Tartakovsky et al. [27, Sec 6.2.2] in detail, and we only summarize the idea here. Let $\pi_k(n)$, $k \ge 0$, be probabilities that depend on the observations up to time $n$, i.e., $\pi_k(n) = \mathsf{P}(\nu = k \mid \mathcal{F}_n)$ for $k \le n$, so that the sequence $\{\pi_k(n)\}_{n \ge 0}$ is $\{\mathcal{F}_n\}$-adapted. This allows for a very general modeling of the change-point mechanisms, including the case where $\nu$ is a stopping time adapted to the filtration generated by the observations (see Moustakides [17]). However, in the rest of this paper, we limit ourselves to the case where the prior distribution is deterministic and known. In other words, we follow the Bayesian approach proposed by Shiryaev [23] assuming that $\nu$ is a random variable independent of the observations with a known prior distribution $\pi_k = \mathsf{P}(\nu = k)$, $k \ge 0$.

A sequential change-point detection rule is a stopping time $\tau$ adapted to the filtration $\{\mathcal{F}_n\}_{n \ge 0}$, i.e., $\{\tau = n\} \in \mathcal{F}_n$, $n \ge 0$. To avoid triviality we always assume that $\tau < \infty$ with probability 1.

Define the probability measure $\mathsf{P}^\pi(\cdot) = \sum_{k=0}^{\infty} \pi_k \mathsf{P}_k(\cdot)$ and let $\mathsf{E}^\pi$ stand for the expectation with respect to $\mathsf{P}^\pi$. The false alarm risk is usually measured by the (weighted) probability of false alarm $\mathrm{PFA}(\tau) = \mathsf{P}^\pi(\tau < \nu)$. Taking into account that $\tau < \infty$ with probability 1 and that the event $\{\tau < k\}$ depends only on the observations made before the change point, so that $\mathsf{P}_k(\tau < k) = \mathsf{P}_\infty(\tau < k)$, we obtain

$$
\mathrm{PFA}(\tau) = \mathsf{P}^\pi(\tau < \nu) = \sum_{k=1}^{\infty} \pi_k\, \mathsf{P}_\infty(\tau < k).
\tag{2.2}
$$

Usually the speed of detection is expressed by the average detection delay (ADD)

$$
\mathrm{ADD}(\tau) = \mathsf{E}^\pi\big[\tau - \nu \mid \tau \ge \nu\big].
\tag{2.3}
$$

Since for any stopping time $\tau$ that is finite with probability 1, $\mathsf{E}^\pi(\tau - \nu)^+ = \mathsf{E}^\pi[\tau - \nu \mid \tau \ge \nu]\,\mathsf{P}^\pi(\tau \ge \nu)$, it follows from (2.3) that

$$
\mathrm{ADD}(\tau) = \frac{\mathsf{E}^\pi(\tau - \nu)^+}{1 - \mathrm{PFA}(\tau)}.
$$

An optimal Bayesian detection scheme is a rule for which the $\mathrm{ADD}$ is minimized in the class of rules $\mathbb{C}_\alpha = \{\tau : \mathrm{PFA}(\tau) \le \alpha\}$ with the $\mathrm{PFA}$ constrained to be below a given level $\alpha \in (0, 1)$, i.e., the optimal change-point detection rule is the stopping time $\tau_{\mathrm{opt}} = \arg\min_{\tau \in \mathbb{C}_\alpha} \mathrm{ADD}(\tau)$. Shiryaev [23] considered the case of $\nu$ having a zero-modified geometric distribution

$$
\mathsf{P}(\nu = 0) = \pi, \qquad \mathsf{P}(\nu = k) = (1 - \pi)\,\rho\,(1 - \rho)^{k-1}, \quad k \ge 1,
\tag{2.4}
$$

where $\pi \in [0, 1]$, $\rho \in (0, 1)$. Note that when $\pi = 1$, there is a trivial solution since we can stop at $\tau = 0$. Thus, in the following, we assume that $\pi < 1$. Shiryaev [23, 24] proved that in the i.i.d. case (i.e., when $f_j(X_i \mid \mathbf{X}_0^{i-1}) = f_j(X_i)$ for $j = 0, 1$ in (2.1)) the optimal Bayesian detection procedure exists and has the form

$$
\tau_{\mathrm{opt}} = \inf\big\{n \ge 1 : \mathsf{P}(\nu \le n \mid \mathcal{F}_n) \ge A\big\},
\tag{2.5}
$$

where the threshold $A = A(\alpha)$ is chosen to satisfy $\mathrm{PFA}(\tau_{\mathrm{opt}}) = \alpha$.

Consider a general non-i.i.d. model (2.1) and a general, not necessarily geometric, prior distribution $\pi_k = \mathsf{P}(\nu = k)$ for $k \ge 0$, where $\pi_k \ge 0$ and $\sum_{k=0}^{\infty} \pi_k = 1$. Write $L_i = f_1(X_i \mid \mathbf{X}_0^{i-1}) / f_0(X_i \mid \mathbf{X}_0^{i-1})$ for the conditional likelihood ratio for the $i$-th sample. We take a convention that $L_0$ can be any random or deterministic number, in particular $L_0 = 1$ if the initial value $X_0$ is not available, i.e., before the observations become available we have no information except for the prior probability $\pi_0 = \mathsf{P}(\nu = 0)$.

Applying the Bayes formula, it is easy to see that

$$
\mathsf{P}(\nu \le n \mid \mathcal{F}_n) = \frac{R_{\pi, n}}{1 + R_{\pi, n}},
\tag{2.6}
$$

where

$$
R_{\pi, n} = \frac{1}{\mathsf{P}(\nu > n)} \sum_{k=0}^{n} \pi_k\, \Lambda_k^n
\tag{2.7}
$$

with

$$
\Lambda_k^n = \prod_{i=k}^{n} L_i.
\tag{2.8}
$$

If $\pi_0 = 0$, then $R_{\pi, 0} = 0$.

It is more convenient to rewrite the stopping time (2.5) in terms of the statistic $R_{\pi, n}$, i.e.,

$$
\tau_A = \inf\big\{n \ge 1 : R_{\pi, n} \ge A\big\},
\tag{2.9}
$$

where the threshold $A = A(\alpha) > 0$ is in one-to-one correspondence with the threshold in (2.5). Note that $\Lambda_k^n$ is the likelihood ratio between the hypotheses that a change occurs at the point $\nu = k$ and $\nu = \infty$ (no-change). Therefore, $R_{\pi, n}$ can be interpreted as an average (weighted) likelihood ratio.
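For a geometric prior with no mass at zero, the weighted likelihood ratio admits the simple recursion $R_n = (R_{n-1} + \rho)\,L_n/(1-\rho)$ with $R_0 = 0$, so the rule (2.9) can be run online. The following Python sketch simulates this; the Gaussian densities, observation values, and threshold are illustrative assumptions, not taken from the paper:

```python
import math

def shiryaev_rule(xs, rho, A, f0, f1):
    """Run the Shiryaev statistic R_n = (R_{n-1} + rho) * L_n / (1 - rho)
    (geometric prior with parameter rho, no mass at zero) and stop at the
    first n with R_n >= A.  Returns (stopping time, final statistic);
    the stopping time is None if the threshold is never crossed."""
    R = 0.0
    for n, x in enumerate(xs, start=1):
        L = f1(x) / f0(x)                 # one-sample likelihood ratio
        R = (R + rho) * L / (1.0 - rho)
        if R >= A:
            return n, R
    return None, R

# Hypothetical example: N(0,1) pre-change vs N(1,1) post-change.  The common
# normalizing constant cancels in the ratio, so unnormalized densities suffice.
phi = lambda x, m: math.exp(-0.5 * (x - m) ** 2)
f0 = lambda x: phi(x, 0.0)
f1 = lambda x: phi(x, 1.0)

# Change at nu = 6: five pre-change-like samples, then post-change-like ones.
xs = [0.1, -0.3, 0.2, -0.1, 0.0, 1.2, 0.9, 1.1, 1.3, 0.8, 1.0, 1.2]
tau, R = shiryaev_rule(xs, rho=0.1, A=10.0, f0=f0, f1=f1)
```

With these illustrative numbers the statistic stays small before the change and grows geometrically after it, triggering an alarm a few samples past the change point.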

Although for general non-i.i.d. models no exact optimality properties are available (similar to the i.i.d. case), there exist asymptotic results. Define the exponential rate of convergence of the prior distribution,

$$
d = -\lim_{k \to \infty} \frac{1}{k} \log \mathsf{P}(\nu > k),
\tag{2.10}
$$

assuming that the corresponding limit exists. If $d > 0$, then the prior distribution has an (asymptotically) exponential right tail. If $d = 0$, then this amounts to a heavy-tailed distribution. Note that for the geometric distribution (2.4), $d = |\log(1 - \rho)|$.
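As a quick numerical sanity check of (2.10) — the parameter values below are illustrative assumptions — a geometric tail yields a strictly positive rate, while a polynomial (heavy) tail yields a rate that vanishes:

```python
import math

rho = 0.2  # geometric prior parameter (illustrative value)

# Geometric prior (2.4): P(nu > k) = (1 - rho)^k, so for every k the
# finite-k rate -(1/k) * log P(nu > k) already equals |log(1 - rho)|.
def geometric_rate(k):
    return -math.log((1.0 - rho) ** k) / k

# Heavy polynomial tail P(nu > k) ~ k^(-2): the rate 2*log(k)/k tends to 0.
def polynomial_rate(k):
    return -math.log(k ** -2.0) / k

d_geom = geometric_rate(200)
d_poly = polynomial_rate(200)
```

So a geometric prior contributes the extra drift term $d = |\log(1-\rho)|$ in the asymptotics below, while heavy-tailed priors contribute nothing ($d = 0$).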

To study asymptotic properties of change-point detection rules we need the following definition.

###### Definition 2.1.

Let $\{Z_n\}_{n \ge 1}$ be a sequence of random variables and let $r > 0$. For $\varepsilon > 0$, define the last entry time $L_\varepsilon = \sup\{n \ge 1 : |Z_n| > \varepsilon\}$ (with $\sup \varnothing = 0$). We say that $Z_n$ converges to zero $r$-quickly as $n \to \infty$ if $\mathsf{E}[L_\varepsilon]^r < \infty$ for all $\varepsilon > 0$.

The last entry time $L_\varepsilon$ plays an important role in the strong law of large numbers (SLLN). Indeed, $\mathsf{P}(L_\varepsilon < \infty) = 1$ for all $\varepsilon > 0$ implies that $Z_n \to 0$ $\mathsf{P}$-a.s. as $n \to \infty$. Also, by Lemma 2.2 in [25], $\mathsf{E}[L_\varepsilon]^r < \infty$ for all $\varepsilon > 0$ and some $r > 0$ implies

$$
n^r\, \mathsf{P}\Big( \sup_{m \ge n} |Z_m| > \varepsilon \Big) \xrightarrow[n \to \infty]{} 0 \quad \text{for all } \varepsilon > 0,
$$

which defines the rate of convergence in the strong law. If $Z_n = n^{-1} \sum_{i=1}^{n} \zeta_i$ and the $\zeta_i$ are i.i.d. zero-mean, then the necessary and sufficient condition for the $r$-quick convergence of $Z_n$ to zero is the finiteness of the $(r+1)$-th moment, $\mathsf{E}|\zeta_1|^{r+1} < \infty$. To study the first order asymptotic optimality of the Shiryaev change-point detection rule in HMMs, we will extend this idea to Markov chains.
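To make the last entry time concrete, the following Monte Carlo sketch estimates $\mathsf{E}[L_\varepsilon]$ for the sample mean of i.i.d. Gaussian increments; the distribution, horizon, tolerance, and seed are all illustrative assumptions. Finiteness of the moments $\mathsf{E}[L_\varepsilon]^r$ of exactly this quantity is the $r$-quick convergence used above:

```python
import random

def last_entry_time(increments, mu, eps):
    """Last entry time L = sup{n >= 1 : |S_n/n - mu| > eps} along one sample
    path (0 if the running average never leaves the eps-tube on the horizon)."""
    S, L = 0.0, 0
    for n, z in enumerate(increments, start=1):
        S += z
        if abs(S / n - mu) > eps:
            L = n
    return L

random.seed(7)
mu, eps, horizon, reps = 0.5, 0.25, 2000, 200

# Monte Carlo estimate of E[L_eps] for i.i.d. N(mu, 1) increments.
Ls = [last_entry_time((random.gauss(mu, 1.0) for _ in range(horizon)), mu, eps)
      for _ in range(reps)]
est = sum(Ls) / reps
```

Because the Gaussian has all moments, the empirical last entry times concentrate far below the simulation horizon, illustrating $r$-quick convergence for every $r$.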

Let $\lambda_k^n$ denote the log-likelihood ratio between the hypotheses $\nu = k$ and $\nu = \infty$,

$$
\lambda_k^n = \log \Lambda_k^n = \sum_{i=k}^{n} \log L_i.
\tag{2.11}
$$

Assuming, for every $k$, the validity of a strong law of large numbers, i.e., convergence of $n^{-1} \lambda_k^{k+n}$ to a positive constant $q$ as $n \to \infty$, with a suitable rate, Tartakovsky and Veeravalli [29] proved that the Shiryaev procedure (2.9) with threshold $A = A_\alpha$ is first-order asymptotically (as $\alpha \to 0$) optimal. Specifically, they proved that the Shiryaev procedure minimizes asymptotically as $\alpha \to 0$ in class $\mathbb{C}_\alpha$ the moments of the detection delay $\mathsf{E}^\pi[(\tau - \nu)^m \mid \tau \ge \nu]$ for $0 < m \le r$ whenever $n^{-1} \lambda_k^{k+n}$ converges to $q$ $r$-quickly. Since this result is fundamental in the following study for HMMs, we now present an exact statement that summarizes the general asymptotic Bayesian theory. Recall that $\tau_A$ denotes the Shiryaev change-point detection rule defined in (2.9). It is easy to show that for an arbitrary general model, $\mathrm{PFA}(\tau_A) \le 1/(1 + A)$ (cf. [29]). Hence, selecting $A = A_\alpha = (1 - \alpha)/\alpha$ implies that $\mathrm{PFA}(\tau_{A_\alpha}) \le \alpha$ (i.e., $\tau_{A_\alpha} \in \mathbb{C}_\alpha$) for any $\alpha \in (0, 1)$. For $\varepsilon > 0$, define the last entry time

$$
L_\varepsilon^{(k)} = \sup\Big\{ n \ge 1 : \Big| \frac{1}{n} \lambda_k^{k+n} - q \Big| > \varepsilon \Big\}, \qquad \sup \varnothing = 0.
\tag{2.12}
$$

###### Theorem 2.1 (Tartakovsky and Veeravalli [29]).

Let $r \ge 1$. Let the prior distribution of the change point satisfy condition (2.10) with $d \ge 0$ and, in the case of $d = 0$, let in addition $\sum_{k=0}^{\infty} \pi_k |\log \pi_k|^r < \infty$. Assume that $n^{-1} \lambda_k^{k+n}$ converges $r$-quickly as $n \to \infty$ under $\mathsf{P}_k$ to some positive and finite number $q$, i.e., $\mathsf{E}_k[L_\varepsilon^{(k)}]^r < \infty$ for all $\varepsilon > 0$ and all $k \ge 0$, and that

$$
\sum_{k=0}^{\infty} \pi_k\, \mathsf{E}_k\big[L_\varepsilon^{(k)}\big]^r < \infty \quad \text{for all } \varepsilon > 0.
\tag{2.13}
$$

(i) Then for all $0 < m \le r$

$$
\mathsf{E}^\pi\big[(\tau_A - \nu)^m \mid \tau_A \ge \nu\big] \sim \Big( \frac{\log A}{q + d} \Big)^m \quad \text{as } A \to \infty.
\tag{2.14}
$$

(ii) If the threshold $A_\alpha$ is selected so that $\mathrm{PFA}(\tau_{A_\alpha}) \le \alpha$ and $\log A_\alpha \sim |\log \alpha|$ as $\alpha \to 0$, in particular $A_\alpha = (1 - \alpha)/\alpha$, then the Shiryaev rule is asymptotically optimal as $\alpha \to 0$ in class $\mathbb{C}_\alpha$ with respect to moments of the detection delay up to order $r$, i.e., for all $0 < m \le r$,

$$
\inf_{\tau \in \mathbb{C}_\alpha} \mathsf{E}^\pi\big[(\tau - \nu)^m \mid \tau \ge \nu\big] \sim \mathsf{E}^\pi\big[(\tau_{A_\alpha} - \nu)^m \mid \tau_{A_\alpha} \ge \nu\big] \sim \Big( \frac{|\log \alpha|}{q + d} \Big)^m \quad \text{as } \alpha \to 0.
\tag{2.15}
$$

###### Remark 2.1.

The first goal of the present paper is to specify these results for hidden Markov models. That is, we prove that the assertions of the above theorem hold for HMMs under certain regularity conditions. Moreover, by making use of the specific structure of HMMs and of Markov nonlinear renewal theory, we also give a higher order asymptotic approximation to the average detection delay, which is our second goal.

## 3 Problem Formulation for Hidden Markov Models

In this section, we define a finite state hidden Markov model as a Markov chain in a Markovian random environment, in which the underlying environmental Markov chain can be viewed as latent variables. To be more precise, let $\{Y_n, n \ge 0\}$ be an ergodic (positive recurrent, irreducible and aperiodic) Markov chain on a finite state space $D = \{1, \dots, d\}$ with transition probability matrix $[p(x, y)]_{x, y \in D}$ and stationary distribution $\pi = (\pi(x))_{x \in D}$. Suppose that a random sequence $\{\xi_n, n \ge 0\}$, taking values in $\mathbb{R}^m$, is adjoined to the chain such that $\{(Y_n, \xi_n), n \ge 0\}$ is a Markov chain on $D \times \mathbb{R}^m$ satisfying $\mathsf{P}(Y_1 = y \mid Y_0 = x, \xi_0 = s) = \mathsf{P}(Y_1 = y \mid Y_0 = x)$ for $s \in \mathbb{R}^m$. Moreover, conditioning on the full $Y$ sequence, we have

$$
\mathsf{P}\big( \xi_{n+1} \in B \mid \{Y_k\}_{k \ge 0},\, \{\xi_k\}_{k \le n} \big) = \mathsf{P}\big( \xi_{n+1} \in B \mid Y_{n+1},\, \xi_n \big)
\tag{3.1}
$$

for each $n \ge 0$ and $B \in \mathcal{B}(\mathbb{R}^m)$, the Borel $\sigma$-algebra of $\mathbb{R}^m$. Furthermore, let $f(\cdot \mid y; s)$ be the transition probability density of $\xi_{n+1}$ given $Y_{n+1} = y$ and $\xi_n = s$ with respect to a $\sigma$-finite measure $Q$ on $\mathbb{R}^m$, such that

$$
\mathsf{P}\big( \xi_{n+1} \in B \mid Y_{n+1} = y,\, \xi_n = s \big) = \int_B f(t \mid y; s)\, Q(dt)
\tag{3.2}
$$

for $B \in \mathcal{B}(\mathbb{R}^m)$. We also assume that the Markov chain $\{(Y_n, \xi_n), n \ge 0\}$ has a stationary probability with probability density function $\gamma(y, s)$ with respect to $Q$.

Now we give a formal definition of the hidden Markov model.

###### Definition 3.1.

The process $\{\xi_n, n \ge 0\}$ is called a finite state hidden Markov model (HMM) if there is an unobserved ergodic Markov chain $\{Y_n, n \ge 0\}$ on a finite state space $D$ such that the pair $\{(Y_n, \xi_n), n \ge 0\}$ satisfies (3.1) and (3.2).

We are interested in the change-point detection problem for the HMM, which is of course a particular case of the general stochastic model described in (2.1). In other words, for $j = 0, 1$, let $[p_j(x, y)]_{x, y \in D}$ be the transition probability, $\pi_j$ be the stationary probability, and $f_j(\cdot \mid y; s)$ be the transition probability density of the HMM in Definition 3.1. In the change-point problem, we suppose that the conditional density and the transition probability change at an unknown time $\nu$ from $\big(p_0(x, y), f_0(\cdot \mid y; s)\big)$ to $\big(p_1(x, y), f_1(\cdot \mid y; s)\big)$.

Let $\boldsymbol{\xi}_0^n = (\xi_0, \xi_1, \dots, \xi_n)$ be the sample obtained from the HMM and denote

$$
p_j(\boldsymbol{\xi}_0^n) = \sum_{y_0 \in D} \cdots \sum_{y_n \in D} \pi_j(y_0)\, f_j(\xi_0 \mid y_0) \prod_{i=1}^{n} p_j(y_{i-1}, y_i)\, f_j(\xi_i \mid y_i; \xi_{i-1}), \qquad j = 0, 1,
\tag{3.3}
$$

so that $\mathrm{LR}_n = p_1(\boldsymbol{\xi}_0^n) / p_0(\boldsymbol{\xi}_0^n)$ is the likelihood ratio. By (2.1), for $1 \le k \le n$, the likelihood ratio of the hypothesis $\nu = k$ against $\nu = \infty$ for the sample $\boldsymbol{\xi}_0^n$ is given by

$$
\Lambda_k^n = \frac{p_k(\boldsymbol{\xi}_0^n)}{p_\infty(\boldsymbol{\xi}_0^n)},
\tag{3.4}
$$

where $p_\infty(\boldsymbol{\xi}_0^n) = p_0(\boldsymbol{\xi}_0^n)$ is the pre-change likelihood and $p_k(\boldsymbol{\xi}_0^n)$ is the likelihood of the sample when the change occurs at $\nu = k$.

Recall that in Section 2 we assumed that only the sample $(\xi_1, \dots, \xi_n)$ can be observed and the initial value $\xi_0$ is used for producing the observed sequence with the desirable property. The initialization affects the initial value of the likelihood ratio, $L_0$, which can be either random or deterministic. In turn, this influences the behavior of $\Lambda_k^n$ for $k \ge 1$. Using the sample $\boldsymbol{\xi}_0^n$ in (3.3) and (3.4) is convenient for Markov and hidden Markov models, which can be initialized either randomly or deterministically. If $\xi_0$ cannot be observed (or properly generated), then we assume $L_0 = 1$, which is equivalent to setting $f_1(\xi_0 \mid y_0) = f_0(\xi_0 \mid y_0)$ for all $y_0$ in (3.3). This is also the case when the change cannot occur before the observations become available, i.e., when $\pi_0 = \mathsf{P}(\nu = 0) = 0$.

Of course, the probability measure (likelihood ratio) defined in (3.4) is one of several possible ways of representing the LR when the change occurs at time $\nu = k$. For instance, when the post-change hidden state comes from the pre-change hidden state with the new transition probability, then the joint marginal $\mathsf{P}_k$-distribution of $\boldsymbol{\xi}_0^n$ (with $k \le n$) becomes

$$
\begin{aligned}
p_k(\boldsymbol{\xi}_0^n) &= \sum_{y_0 \in D} \cdots \sum_{y_n \in D} \pi_0(y_0)\, f_0(\xi_0 \mid y_0) \prod_{i=1}^{k-1} p_0(y_{i-1}, y_i)\, f_0(\xi_i \mid y_i; \xi_{i-1}) \prod_{i=k}^{n} p_1(y_{i-1}, y_i)\, f_1(\xi_i \mid y_i; \xi_{i-1}) \\
&\approx p_0(\boldsymbol{\xi}_0^{k-1}) \sum_{y_{k-1} \in D} \cdots \sum_{y_n \in D} \pi_1(y_{k-1}) \prod_{i=k}^{n} p_1(y_{i-1}, y_i)\, f_1(\xi_i \mid y_i; \xi_{i-1}).
\end{aligned}
\tag{3.5}
$$

Note that the first equation in (3.5), the joint marginal distribution formulation, is an alternative expression of (3.4). In the second equation of (3.5), we approximate the distribution of the hidden state $Y_{k-1}$ at the change point by the associated stationary distribution $\pi_1$ of the post-change regime.

###### Remark 3.1.

In the following sections, we investigate the Shiryaev change-point detection rule defined in (2.7) and (2.9). We now give certain preliminary results required for this study. Since the detection statistic involves $\Lambda_k^n$ defined in (3.4) and (3.5), we explore the structure of the likelihood ratio in (3.3) first. For this purpose, we represent (3.3) as the ratio of $L_1$-norms of products of Markovian random matrices. This device has been proposed by Fuh [5] to study a recursive CUSUM change-point detection procedure in HMMs. Here we carry out the same idea to obtain a representation of the likelihood ratio $\mathrm{LR}_n$. Specifically, given a column vector $u = (u_1, \dots, u_d)^{\top} \in \mathbb{R}^d$, where $\top$ denotes the transpose of the underlying vector in $\mathbb{R}^d$, define the $L_1$-norm of $u$ as $\|u\| = \sum_{x=1}^{d} |u_x|$. The likelihood ratio then can be represented as

$$
\mathrm{LR}_n = \frac{p_1(\boldsymbol{\xi}_0^n)}{p_0(\boldsymbol{\xi}_0^n)} = \frac{\big\| M_n^1 \cdots M_1^1 M_0^1\, u \big\|}{\big\| M_n^0 \cdots M_1^0 M_0^0\, u \big\|},
\tag{3.6}
$$

where

$$
M_0^j = \Big[ \pi_j(x)\, f_j(\xi_0 \mid x)\, \mathbb{1}\{x = y\} \Big]_{x, y \in D},
\tag{3.7}
$$

$$
M_i^j = \Big[ p_j(y, x)\, f_j(\xi_i \mid x; \xi_{i-1}) \Big]_{x, y \in D}
\tag{3.8}
$$

for $1 \le i \le n$, $j = 0, 1$, and

$$
u = (1, 1, \dots, 1)^{\top} \in \mathbb{R}^d.
\tag{3.9}
$$

Note that each component of $M_i^j$ involves the transition probability $p_j(y, x)$ and the conditional density $f_j(\xi_i \mid x; \xi_{i-1})$, and $\xi_i$ is a random variable with transition probability density $f_j(\cdot \mid y; s)$ for $j = 0, 1$ and $i \ge 1$. Therefore the $M_i^j$ are random matrices. Since $\{(Y_n, \xi_n), n \ge 0\}$ is a Markov chain by (3.1) and (3.2), this implies that $\{M_i^j, i \ge 1\}$ is a sequence of Markov random matrices for $j = 0, 1$. Hence, $\mathrm{LR}_n$ is the ratio of the $L_1$-norms of the products of Markov random matrices via representation (3.6). Note that $u$ is fixed in (3.6).
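The representation above is easy to check numerically. The sketch below uses a toy two-state HMM with binary emissions that are conditionally independent given the hidden state (a simplification of (3.3), dropping the dependence on the previous observation); all parameter values are illustrative assumptions, not taken from the paper. It computes the likelihood as the $L_1$-norm of a product of random matrices and verifies it against brute-force summation over all hidden paths:

```python
import itertools
import numpy as np

def likelihood_matrix_product(xs, P, B, pi0):
    """Likelihood of the observations xs as the l1-norm of a product of
    random matrices: the step-i matrix has entries P[prev, cur] * B[cur, xs[i]]."""
    v = pi0 * B[:, xs[0]]            # initial step: pi(y0) * f(x0 | y0)
    for x in xs[1:]:
        M = P * B[:, x]              # M[prev, cur] = P[prev, cur] * B[cur, x]
        v = v @ M                    # accumulating the matrix product
    return v.sum()                   # l1-norm (all entries are nonnegative)

def likelihood_brute_force(xs, P, B, pi0):
    """Same likelihood by explicit summation over all hidden state paths."""
    K, total = P.shape[0], 0.0
    for path in itertools.product(range(K), repeat=len(xs)):
        p = pi0[path[0]] * B[path[0], xs[0]]
        for i in range(1, len(xs)):
            p *= P[path[i - 1], path[i]] * B[path[i], xs[i]]
        total += p
    return total

# Illustrative pre-change (j=0) and post-change (j=1) parameters.
P0 = np.array([[0.9, 0.1], [0.2, 0.8]]); B0 = np.array([[0.7, 0.3], [0.4, 0.6]])
P1 = np.array([[0.5, 0.5], [0.5, 0.5]]); B1 = np.array([[0.2, 0.8], [0.1, 0.9]])
pi0 = np.array([2.0 / 3.0, 1.0 / 3.0])  # stationary distribution of P0
xs = [0, 1, 1, 0, 1]

LR = likelihood_matrix_product(xs, P1, B1, pi0) / likelihood_matrix_product(xs, P0, B0, pi0)
```

The matrix-product form is exactly the HMM forward recursion, which is why its $L_1$-norm matches the exponential-cost sum over hidden paths.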

Let $\{(Y_n, \xi_n), n \ge 0\}$ be the Markov chain defined in (3.1) and (3.2). Denote $X_n = (Y_n, \xi_n)$ and $\mathcal{X} = D \times \mathbb{R}^m$. Define $\mathrm{GL}(d, \mathbb{R})$ as the set of invertible $d \times d$ matrices with real entries. For given $j = 0, 1$ and $i \ge 0$, let $M_i^j$ be the random matrix driven by $\mathcal{X}$ with values in $\mathrm{GL}(d, \mathbb{R})$, as defined in (3.7) and (3.8). For each $n \ge 0$, let

$$
S_n^j = M_n^j M_{n-1}^j \cdots M_1^j M_0^j.
\tag{3.10}
$$

Then the system $\{(X_n, S_n^j), n \ge 0\}$ is called a product of Markov random matrices on $\mathcal{X} \times \mathrm{GL}(d, \mathbb{R})$. Denote by $\mathsf{P}_x^j$ the probability distribution of $\{(X_n, S_n^j), n \ge 0\}$ with $X_0 = x$, and by $\mathsf{E}_x^j$ the expectation under $\mathsf{P}_x^j$.

Let $u$ be a $d$-dimensional vector, $\bar{u} = u / \|u\|$ the normalization of $u$ ($\|u\| \ne 0$), and denote $\mathcal{P}(\mathbb{R}^d)$ as the projection space of $\mathbb{R}^d$ which contains all elements $\bar{u}$. For given $j = 0, 1$ and $M \in \mathrm{GL}(d, \mathbb{R})$, denote $M \cdot \bar{u} = \overline{M u}$ and $\bar{S}_n^j = S_n^j \cdot \bar{u}$, for $n \ge 0$. Let

$$
W_n = \big( X_n,\, \bar{S}_n^0,\, \bar{S}_n^1 \big), \qquad n \ge 0.
\tag{3.11}
$$

Then, $\{W_n, n \ge 0\}$ is a Markov chain on the state space $\bar{\mathcal{X}} = \mathcal{X} \times \mathcal{P}(\mathbb{R}^d) \times \mathcal{P}(\mathbb{R}^d)$ with the transition kernel

$$
\mathsf{P}(w, A) = \mathsf{P}\big( W_1 \in A \mid W_0 = w \big)
\tag{3.12}
$$

for all $w \in \bar{\mathcal{X}}$ and $A \in \mathcal{B}(\bar{\mathcal{X}})$, the $\sigma$-algebra of $\bar{\mathcal{X}}$. For simplicity, we let $\mathsf{P}_w$ denote the probability with $W_0 = w$ and denote $\mathsf{E}_w$ as the expectation under $\mathsf{P}_w$. Since the Markov chain $\{(Y_n, \xi_n), n \ge 0\}$ has transition probability density and the random matrix $M_n^j$ is driven by $(Y_n, \xi_n)$, it implies that the induced transition probability $\mathsf{P}(w, \cdot)$ has a density. Denote the density as $p(w, w')$ for simplicity. According to Theorem 1(iii) in Fuh [5], under conditions C1 and C2 given below, the stationary distribution of $\{W_n, n \ge 0\}$ exists. Denote it by $\Gamma$.

The crucial observation is that the log-likelihood ratio $\log \mathrm{LR}_n$ can now be written as an additive functional of the Markov chain $\{W_n, n \ge 0\}$. That is,

$$
\log \mathrm{LR}_n = \log \frac{\| M_0^1 u \|}{\| M_0^0 u \|} + \sum_{i=1}^{n} g(W_{i-1}, W_i),
\tag{3.13}
$$

where

$$
g(W_{i-1}, W_i) = \log \frac{\| M_i^1\, \bar{S}_{i-1}^1 \|}{\| M_i^0\, \bar{S}_{i-1}^0 \|}.
\tag{3.14}
$$

In the following sections, we show that the Shiryaev procedure $\tau_A$ with a certain threshold $A = A_\alpha$ is asymptotically first-order optimal as $\alpha \to 0$ for a large class of prior distributions and provide a higher order approximation to the average detection delay for the geometric prior.

Regarding the prior distribution $\{\pi_k\}$, we will assume throughout that condition (2.10) holds for some $d \ge 0$. A case where a fixed positive $d$ is replaced with a value $d_\alpha$ that depends on $\alpha$ and vanishes as $\alpha \to 0$ with a certain appropriate rate will also be handled.

## 4 First Order Asymptotic Optimality

For ease of notation, let $\bar{\mathcal{X}} = \mathcal{X} \times \mathcal{P}(\mathbb{R}^d) \times \mathcal{P}(\mathbb{R}^d)$ be the state space of the Markov chain $\{W_n, n \ge 0\}$ defined in (3.11). Denote by $\mathsf{P}_w$ and $\mathsf{E}_w$ the probability and the expectation, where $w$ is the initial state of $W_0$ taken from $\bar{\mathcal{X}}$. To prove first-order asymptotic optimality of the Shiryaev rule and to derive a high-order asymptotic approximation to the average detection delay for HMMs, the conditions C1–C2 set below are assumed throughout this paper. Before that we need the following definitions and notations.

Abusing the notation a little bit, a Markov chain $\{W_n, n \ge 0\}$ on a general state space $\bar{\mathcal{X}}$ is called $V$-uniformly ergodic if there exists a measurable function $V : \bar{\mathcal{X}} \to [1, \infty)$, with $\int V\, d\Gamma < \infty$, such that

$$
\lim_{n \to \infty}\, \sup_{w \in \bar{\mathcal{X}}}\, \frac{1}{V(w)} \sup_{|g| \le V} \Big| \mathsf{E}_w\big[ g(W_n) \big] - \int g\, d\Gamma \Big| = 0.
\tag{4.1}
$$

Under the irreducibility and aperiodicity assumption, $V$-uniform ergodicity implies that $\{W_n, n \ge 0\}$ is Harris recurrent in the sense that there exist a recurrent set $R \in \mathcal{B}(\bar{\mathcal{X}})$, a probability measure $\varphi$ on $R$ and an integer $n_0$ such that $\mathsf{P}_w(W_n \in R \text{ for some } n \ge 1) = 1$ for all $w \in \bar{\mathcal{X}}$, and there exists $\delta > 0$ such that

$$
\mathsf{P}_w\big( W_{n_0} \in A \big) \ge \delta\, \varphi(A)
\tag{4.2}
$$

for all $w \in R$ and $A \in \mathcal{B}(\bar{\mathcal{X}})$.

for all and . Under (4.2), Athreya and Ney [1] proved that admits a regenerative scheme with i.i.d. inter-regeneration times for an augmented Markov chain, which is called the “split chain”. Recall that is defined in (3.13). Let be the first time to reach the atom of the split chain, and define for , where is an initial distribution on . Assume that

(4.3) |

Ney and Nummelin [19] showed that is an open set and that for the transition kernel has a maximal simple real eigenvalue , where is the unique solution of the equation with the corresponding eigenfunction