Questions tagged [markov-chains]

75 questions
9 votes · 2 answers

What are Markov chains?

I'm currently reading some papers about Markov chain lumping and I'm failing to see the difference between a Markov chain and a plain directed weighted graph. For example, in the article Optimal state-space lumping in Markov chains, they provide the…
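A common way to frame the answer: a finite Markov chain is a weighted directed graph whose outgoing edge weights at every node sum to 1, together with the random process the graph induces. A minimal numpy sketch (the 3-state chain is a made-up example):

```python
import numpy as np

# A Markov chain on states {0, 1, 2} is a weighted digraph whose
# outgoing edge weights at each node form a probability distribution.
P = np.array([
    [0.9, 0.1, 0.0],
    [0.5, 0.3, 0.2],
    [0.0, 0.4, 0.6],
])
assert np.allclose(P.sum(axis=1), 1.0)  # row-stochastic: the extra constraint

# The "chain" part is the random process the graph induces:
rng = np.random.default_rng(0)
state = 0
for _ in range(5):
    state = rng.choice(3, p=P[state])   # next state drawn from row `state`
```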
9 votes · 2 answers

Why are HMMs appropriate for speech recognition when the problem doesn't seem to satisfy the Markov property?

I'm learning about HMMs and their applications and trying to understand their uses. My knowledge is a bit spotty, so please correct any incorrect assumptions I'm making. The specific example I'm wondering about is using HMMs for speech…
7 votes · 1 answer

Can the solution to a POMDP be found using linear programming?

It is known that Markov decision processes (MDPs) can be solved using linear programming (see page 24 of Carlos Guestrin's PhD dissertation). The linear program is: $$\min_{V(x)} \sum_x \alpha(x)V(x)\\ \text{subject to: } V(x) \ge R(x,a) +…
jonem
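For reference, this LP has a direct implementation. Below is a hedged sketch; the 2-state, 2-action MDP, the discount factor $\gamma$, and the uniform weights $\alpha$ are all made-up assumptions, and `scipy.optimize.linprog` stands in for a generic LP solver:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 2-state, 2-action MDP; all numbers are hypothetical.
gamma = 0.9
P = np.array([                    # P[a, x, x2] = Pr(x2 | x, a)
    [[0.8, 0.2], [0.1, 0.9]],     # action 0
    [[0.5, 0.5], [0.3, 0.7]],     # action 1
])
R = np.array([[1.0, 0.0],         # R[x, a]
              [0.0, 2.0]])
alpha = np.ones(2) / 2            # state-relevance weights alpha(x)

# One constraint per (x, a):
#   V(x) - gamma * sum_x2 P(x2|x,a) V(x2) >= R(x, a)
# linprog wants A_ub @ V <= b_ub, so negate both sides.
A_ub, b_ub = [], []
for a in range(2):
    for x in range(2):
        A_ub.append(-(np.eye(2)[x] - gamma * P[a, x]))
        b_ub.append(-R[x, a])

res = linprog(c=alpha, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * 2)  # V(x) is free in sign
print(res.x)                              # optimal value function V*
```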
6 votes · 4 answers

What are the uses of Markov Chains in CS?

We all know that Markov Chains can be used for generating real-looking text (or real-sounding music). I've also heard that Markov Chains have some applications in image processing; is that true? What are some other uses of MCs in CS?
Daniil
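As a concrete instance of the text-generation use the excerpt mentions, a minimal word-level Markov chain text generator (plain Python; the tiny corpus string is a placeholder):

```python
import random
from collections import defaultdict

# Build a first-order word-level Markov chain from a corpus (placeholder text).
corpus = "the cat sat on the mat the cat ate the rat".split()
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)   # repeated entries encode transition frequencies

# Generate: each word depends only on the previous word (Markov property).
random.seed(1)
word = "the"
out = [word]
for _ in range(8):
    # Fall back to a random corpus word if `word` has no recorded successor.
    word = random.choice(chain[word] or corpus)
    out.append(word)
print(" ".join(out))
```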
6 votes · 1 answer

Clarification of the definition of a POMDP

From what I understand, an $\mathrm{MDP}=(G, A, P, R)$ (Markov decision process) is represented as: a complete directed graph $G=(V, E)$; a set of actions $A_u$ for each vertex $u \in V$; a reward function $R$ that maps any vertex to some reward, i.e., $R…
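One way to render that tuple as a data structure; a sketch only, with hypothetical field names:

```python
from dataclasses import dataclass
from typing import Dict, Set, Tuple

# Sketch of the MDP tuple from the excerpt: a digraph on vertices V,
# per-vertex action sets A_u, transition probabilities P, and a
# per-vertex reward R. (All field names are illustrative.)
@dataclass
class MDP:
    vertices: Set[int]                        # V
    actions: Dict[int, Set[int]]              # A_u for each u in V
    P: Dict[Tuple[int, int, int], float]      # P[(u, a, v)] = Pr(v | u, a)
    R: Dict[int, float]                       # reward of each vertex
```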
6 votes · 1 answer

Understanding simulated annealing information theoretically

So I recently rediscovered simulated annealing through a path that others seem to be well aware of. I was aware of Metropolis-Hastings as a sampling algorithm that creates a Markov chain whose stationary distribution is the distribution produced by…
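For context, a minimal Metropolis-Hastings sketch; the 1-D target $p(x) \propto e^{-x^2/2}$ and the Gaussian random-walk proposal are assumptions for illustration. The resulting chain has $p$ as its stationary distribution:

```python
import math
import random

random.seed(0)

def p_unnorm(x):
    # Unnormalized target density, here exp(-x^2 / 2) (standard normal).
    return math.exp(-x * x / 2)

x, samples = 0.0, []
for _ in range(10_000):
    y = x + random.gauss(0, 1.0)                  # symmetric random-walk proposal
    if random.random() < min(1.0, p_unnorm(y) / p_unnorm(x)):
        x = y                                     # accept; otherwise keep x
    samples.append(x)

# The empirical mean should be near 0 for the standard normal target.
print(sum(samples) / len(samples))
```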
4 votes · 1 answer

Absorbing Markov Chains: An efficient algorithmic approach

Following this procedure, I have successfully written a program to calculate the probability of ending in a given absorbing state, given the initial state. The procedure is as follows: given the transition matrix ($P$), swap rows and columns until the…
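The excerpt's procedure appears to be the standard canonical-form computation: write $P = \begin{pmatrix} Q & R \\ \mathbf{0} & I \end{pmatrix}$ with transient states first; the fundamental matrix is then $N = (I-Q)^{-1}$, absorption probabilities are $B = NR$, and expected steps to absorption are $t = N\mathbf{1}$. A numpy sketch on a made-up chain:

```python
import numpy as np

# Canonical form P = [[Q, R], [0, I]]: transient states first, absorbing last.
# Hypothetical chain: 2 transient states, 2 absorbing states.
Q = np.array([[0.5, 0.2],
              [0.3, 0.4]])
R = np.array([[0.3, 0.0],
              [0.0, 0.3]])

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix: expected visit counts
B = N @ R                         # B[i, j] = Pr(absorbed in state j | start i)
t = N @ np.ones(2)                # expected steps until absorption from each state
print(B, t)
```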
4 votes · 2 answers

Average vs Worst-Case Hitting Time

Consider a simple random walk on an undirected graph and let $H_{ij}$ be the hitting time from $i$ to $j$. How much bigger can $$ H_{\rm max} = \max_{i,j} H_{ij}, $$ be compared to $$ H_{\rm ave} = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n H_{ij}.$$…
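As a concrete baseline, hitting times can be computed exactly by solving one linear system per target state: fix $j$, set $H_{jj}=0$, and solve $H_{ij} = 1 + \sum_k P_{ik} H_{kj}$ for $i \neq j$. A numpy sketch on an illustrative 3-state walk:

```python
import numpy as np

P = np.array([[0.0, 0.5, 0.5],   # transition matrix of a small example walk
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
n = len(P)

H = np.zeros((n, n))
for j in range(n):
    # Solve h_i = 1 + sum_{k != j} P[i, k] h_k for i != j, with h_j = 0:
    # (I - P) restricted to rows/columns != j, right-hand side all ones.
    idx = [i for i in range(n) if i != j]
    A = np.eye(n - 1) - P[np.ix_(idx, idx)]
    h = np.linalg.solve(A, np.ones(n - 1))
    H[idx, j] = h

print(H.max(), H.mean())   # H_max vs. H_ave (the mean includes H_ii = 0)
```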
4 votes · 1 answer

Algorithm for computing $Pr[s \vDash C \bigcup^{\geq n} B]$ for probabilistic verification

I'm having some difficulty trying to come up with an algorithm for computing $Pr[s \vDash C ~\bigcup^{\geq n} B]$ given a finite Markov chain where $S$ is the set of states, $s \in S$, $B,C \subseteq S$, and $n \in \mathbb{N}$ where $n \geq 1$. I…
3 votes · 1 answer

Perturbing a Markov chain to be closer to a target stationary distribution

Suppose we are given a Markov chain $A_0 \in \mathbb{R}^{n \times n}$ and a desired stationary probability vector $\pi_0 \in \mathbb{R}^n$. I would like to find a Markov chain that is as close as possible to $A_0$ and whose stationary probability…
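One natural (hedged) formalization of this question, not necessarily the asker's: find the nearest row-stochastic matrix with the prescribed stationary vector,

$$\min_{A \in \mathbb{R}^{n \times n}} \|A - A_0\|_F \quad \text{s.t.} \quad \pi_0^\top A = \pi_0^\top, \quad A\mathbf{1} = \mathbf{1}, \quad A \ge 0 \ \text{(entrywise)}.$$

Since all constraints are linear, this is a convex quadratic program when the distance is the Frobenius norm.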
3 votes · 1 answer

How many possible policies in a Markov Decision Process?

If a policy yields an action for a state, how come a 3-state MDP with 2 possible actions, i.e. $S = \{Hot, Mild, Cold\}$, $A = \{East, West\}$, has 8 possible policies? Isn't it 6 if there are 2 possible actions for every state?
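For the record, the count follows from the standard definition: a deterministic stationary policy is a function $\pi: S \to A$, so the number of policies is

$$|A|^{|S|} = 2^3 = 8;$$

the value $6 = |S| \cdot |A|$ counts state-action pairs rather than functions.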
3 votes · 0 answers

Form of conditional observation probabilities in a POMDP

Consider a partially observable Markov decision process (POMDP), see here for a complete definition. My question is in relation to the conditional observation probabilities (denoted by $O(o|s',a)$ in the above link). This represents the probability…
jonem
3 votes · 0 answers

Classifying the Partitions of the Cycle Decomposition Problem for Markov Chains

The book Cycle Representations of Markov Processes solves the problem of mapping stochastic matrices induced by a Markov chain into partitions, using a $\lambda$-preserving ($\lambda$ is the Lebesgue measure) transformation of the interval $f_{t} =…
R. S.
3 votes · 2 answers

The transition function in a Markov decision process

A Markov decision process is typically described as a tuple $\langle A,U,T,R \rangle$, where $A$ is the state space, $U$ is the action space, $T: A \times U \times A \mapsto [0,\infty)$ is the state transition probability function, and $R: A \times U…
Astrid
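A condition implicit in "state transition probability function" (standard convention, stated here for clarity, using the excerpt's naming where $A$ is the state space and $U$ the action space): for each state-action pair, $T$ must define a probability distribution over successor states,

$$\sum_{a' \in A} T(a, u, a') = 1 \qquad \text{for all } a \in A,\ u \in U.$$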
3 votes · 0 answers

Multicommodity flows with minimum congestion: NP-hard?

I have a question related to a paper by Chen, Lovasz and Pak [1]. The paper concerns the construction of a Markov chain with optimal mixing time on an arbitrary graph. They prove that the optimal bound (the conductance bound) can be reached by "lifting"…
smapers