The Markov assumption in an HMM states that the state at time T + 1 is conditionally independent of all states up to T − 1, given the state at time T.
Your option 2 is close to what I would suggest, except that you use the maximum-likelihood assignment for the last state. Instead, compute the full distribution over the latent state of the last element in the sequence. This boils down to replacing the "maxima" with "sums" in the Viterbi algorithm. (See https://www.coursera.org/course/pgm and look for the sum-product algorithm, also known as belief propagation.)
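As a rough sketch, the "sums instead of maxima" idea is just the forward pass of the forward-backward algorithm with per-step normalization. The names below (`pi`, `A`, `B`, `last_state_distribution`) are illustrative, not from any particular library:

```python
import numpy as np

def last_state_distribution(pi, A, B, obs):
    """Forward pass: replacing Viterbi's max with a sum yields
    P(z_T | x_1..x_T), the posterior over the last hidden state,
    instead of the single most likely state path.

    pi:  (K,)   initial state probabilities
    A:   (K, K) transition matrix, A[i, j] = P(z_{t+1}=j | z_t=i)
    B:   (K, M) emission matrix, B[k, m] = P(x=m | z=k)
    obs: sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()              # normalize to avoid underflow
    for x in obs[1:]:
        alpha = (alpha @ A) * B[:, x] # sum over previous states, not max
        alpha /= alpha.sum()
    return alpha                      # P(z_T = k | x_1..x_T)
```

The only change from Viterbi is that each step sums over the previous state (the matrix product) rather than taking the maximizing predecessor.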
Then, to sample the future, you first sample the last state from that distribution. Next, sample the following hidden state using the transition matrix, and repeat. Since you have no actual observations after the last point in the sequence, you simply sample from the Markov chain. This gives you samples of the future conditioned on everything you know from the partial sequence. The reason this differs from Viterbi is that even the most likely assignment of the hidden variables given the partial sequence may itself have low probability. By using the entire distribution over the last state, you get a much better estimate of the subsequent (unobserved future) states.
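A minimal sketch of that sampling loop, assuming `alpha_T` is the posterior over the last hidden state (e.g. as computed by a forward pass) and `A` is the transition matrix; the function name is hypothetical:

```python
import numpy as np

def sample_future_states(alpha_T, A, n_steps, rng=None):
    """Draw one sample trajectory of future hidden states.

    alpha_T: (K,)   posterior P(z_T | x_1..x_T) over the last state
    A:       (K, K) transition matrix, A[i, j] = P(z_{t+1}=j | z_t=i)
    n_steps: number of future steps to sample
    """
    rng = np.random.default_rng() if rng is None else rng
    K = len(alpha_T)
    z = rng.choice(K, p=alpha_T)      # sample the last observed-time state
    future = []
    for _ in range(n_steps):
        z = rng.choice(K, p=A[z])     # step the Markov chain forward
        future.append(z)
    return future
```

Repeating this many times gives Monte Carlo samples of the future; if you only sampled from the single Viterbi state instead of `alpha_T`, you would discard all the posterior uncertainty about where the chain currently is.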