
I'm attempting to model probability for a finite state machine. The probability of the next state depends on the current state. However, I'd like to handle the case where I don't know the starting state.

Originally I just averaged all the output values, but I realized they'd need to be weighted by the probability of being in a given state. I feel like I can construct a system of linear equations, but I'm not as comfortable in this area of mathematics (also, please correct me if I'm making a faulty assumption or using the wrong terminology).

To hopefully help with my explanation, I'll try a simple case. Say there are two states, A and B.

State A:
  0.8 → A
  0.2 → B
State B:
  0.5 → A
  0.5 → B

In this system, B is clearly the less likely state, but what is the probability of going into B from an unknown state? Is this the same thing as asking: given no information, what is the probability we are in state B? Am I wrong and it is as simple as averaging the probabilities?
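If "an unknown state" is taken to mean "after the chain has been running for a while", the standard object is the stationary distribution π solving π = πP. A quick power-iteration sketch in plain Python (the matrix is hard-coded from the table above):

```python
# Transition matrix for the example: rows are the current state,
# columns the next state, in the order (A, B).
P = [[0.8, 0.2],   # from A
     [0.5, 0.5]]   # from B

dist = [0.5, 0.5]  # any starting guess works; uniform here
for _ in range(1000):
    # one step of the chain: dist <- dist * P
    dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

print(dist)  # converges to [5/7, 2/7], roughly [0.714, 0.286]
```

Note this long-run answer, about a 28.6% chance of B, differs from the naive average (0.2 + 0.5)/2 = 0.35, which is why the two questions above are not quite the same.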

Any help you can offer is appreciated (or directions to find the answer on my own)

EDIT for more details:

Specifically, I'm writing a computer program that calculates the likelihood of being in a certain state based on discrete observable results (using a simple Bayesian calculation). However, since this is fundamentally a state machine (and therefore the previous state affects the likelihood of the current state), I'm trying to find a general probability distribution over the states to use when I don't have information about the last state.

It didn't occur to me that it'd be helpful, but I do know which state the system starts on, though I don't know how many iterations through the system have occurred.
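For concreteness: with a known start state, one can push that point distribution through the transition matrix repeatedly. After a handful of steps the result is essentially the chain's long-run distribution, so not knowing the exact iteration count may matter very little. A sketch using the two-state example above (plain Python, numbers hard-coded):

```python
# Known start state A, unknown number of steps n: propagate the belief
# forward and watch it settle, after which the exact n is nearly moot.
P = [[0.8, 0.2],   # from A
     [0.5, 0.5]]   # from B

dist = [1.0, 0.0]  # certainly in A at step 0
for n in range(20):
    dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]
    # each extra step shrinks the distance to [5/7, 2/7] by a factor of 0.3

print(dist)  # practically [5/7, 2/7] already
```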

2 Answers


This idea of an entirely unknown state is problematic. To determine the probability of any given next state, we have to have some idea of the probability of the current state. If we make the natural assumption that each of the possible initial states is equally likely, then you really can just average the probabilities. Without any assumption on the initial probability distribution of the states, you can't say anything.
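Concretely, with the question's numbers, "average the probabilities" means weighting each row of the transition table by the uniform prior; a minimal sketch in plain Python:

```python
# Under a uniform prior over the current state, Pr(next = B) is the
# prior-weighted average of the B columns: 0.5*0.2 + 0.5*0.5.
P = {'A': {'A': 0.8, 'B': 0.2},
     'B': {'A': 0.5, 'B': 0.5}}

prior = {s: 1 / len(P) for s in P}              # uniform: 0.5 each
next_B = sum(prior[s] * P[s]['B'] for s in P)
print(next_B)  # 0.35, up to float rounding
```

So under a uniform prior, the chance that the next state is B is 0.35, the plain average of 0.2 and 0.5.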

  • What if I know the first state, but I don't know how many times the state has changed? Would that be enough to start with something? – Daphoa Apr 11 '20 at 02:39

You could pick a prior over the initial state and use the known properties of the system to compute a likelihood function based on its observed behavior. Since there are a finite number of states, you could then pick the prior that maximizes the likelihood of the observed history. With enough data, you would get a very tight estimate.

Another, more general approach is the Hidden Markov Model. You introduce hidden states to the system, and the observer guesses which regime the system is in from its actions. This allows more general uncertainty about not just the initial state, but subsequent states as well.
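A minimal sketch of the HMM idea (the forward/filtering recursion) built on the question's transition matrix. The emission table `emit` and the observations `'x'`/`'y'` are invented placeholders standing in for the question's "discrete observable results":

```python
# Forward (filtering) recursion for a 2-state HMM on the question's chain.
trans = {'A': {'A': 0.8, 'B': 0.2},
         'B': {'A': 0.5, 'B': 0.5}}
emit = {'A': {'x': 0.9, 'y': 0.1},   # hypothetical observation model
        'B': {'x': 0.3, 'y': 0.7}}

def filter_states(observations, prior):
    """Return Pr(current state | observations so far)."""
    belief = dict(prior)
    for obs in observations:
        # predict: push the belief through the transition matrix
        predicted = {s: sum(belief[r] * trans[r][s] for r in belief)
                     for s in trans}
        # update: reweight by how well each state explains the observation
        unnorm = {s: predicted[s] * emit[s][obs] for s in predicted}
        total = sum(unnorm.values())
        belief = {s: p / total for s, p in unnorm.items()}
    return belief

print(filter_states(['x', 'y', 'y'], {'A': 0.5, 'B': 0.5}))
```

The same recursion works for any prior over the initial state, which is exactly the quantity the first answer says you must assume.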

  • I'm sorry, this is not a subject where I have enough knowledge to be super familiar with terminology. I'm not really sure what a prior is in this context and quick googling didn't help me. Is there a quick way to describe the context in other words, or another phrase I could use when researching? – Daphoa Apr 11 '20 at 02:48