[SGM Markov Chain]
Introduction
A Markov chain is a mathematical model that describes a system evolving over time among a finite number of states. This model is based on the assumption that the future state of the system depends only on the current state and not on previous states, the so-called Markov property. In the context of financial markets, Markov chains can be used to model transitions between different market conditions, for example, the probability of a price going up after going up, or going down after going down.
Script Description
This script uses a Markov chain to calculate closing-price transition probabilities across the entire accessible chart. It displays the probabilities of the following transitions (a minimal counting sketch follows the feature list):
- Up after Up (HH): Probability that the price goes up after going up.
- Down after Down (BB): Probability that the price goes down after going down.
- Up after Down (HB): Probability that the price goes up after going down.
- Down after Up (BH): Probability that the price goes down after going up.
Features
- Color customization: Choose colors for each transition type.
- Table Position: Select the position of the probability display table (top/left, top/right, bottom/left, bottom/right).
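For reference, the counting behind these probabilities is simple to reproduce. The following is a minimal Pine Script sketch, not the published source; variable names are illustrative, and flat closes are lumped in with down moves by assumption:

```pine
//@version=5
// Minimal sketch of closing-price transition counting; not the published
// script. Flat closes are counted as "down" here, an assumption.
indicator("Markov transition probabilities (sketch)")

var int hh = 0 // up after up
var int bb = 0 // down after down
var int hb = 0 // up after down
var int bh = 0 // down after up

if bar_index > 1
    bool up     = close    > close[1]
    bool upPrev = close[1] > close[2]
    if up and upPrev
        hh += 1
    else if not up and not upPrev
        bb += 1
    else if up and not upPrev
        hb += 1
    else
        bh += 1

// Relative frequencies estimate the four transition probabilities.
float total = hh + bb + hb + bh
plot(total > 0 ? hh / total : na, "P(HH)", color.green)
plot(total > 0 ? bb / total : na, "P(BB)", color.red)
plot(total > 0 ? hb / total : na, "P(HB)", color.teal)
plot(total > 0 ? bh / total : na, "P(BH)", color.orange)
```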
MarkovAlgorithm
Library "MarkovAlgorithm"
Markov algorithm is a string rewriting system that uses grammar-like rules to operate on strings of
symbols. Markov algorithms have been shown to be Turing-complete, which means that they are suitable as a
general model of computation and can represent any mathematical expression from its simple notation.
~ Wikipedia
reference:
en.wikipedia.org
rosettacode.org
parse(rules, separator)
Parameters:
rules (string)
separator (string)
Returns: - `array<rule> _rules`: List of rules.
---
Usage:
- `parse("|0 -> 0|| 1 -> 0| 0 -> ")`
apply(expression, rules)
Applies rules to an expression.
Parameters:
expression (string) : `string`: Text expression to be formatted by the rules.
rules (rule[]) : `array<rule>`: Parsed rules to apply to the expression.
Returns: - `string _result`: Formatted expression.
---
Usage:
- `apply("101", parse("|0 -> 0|| 1 -> 0| 0 -> "))`
apply(expression, rules)
Parameters:
expression (string)
rules (string)
Returns: - `string _result`: Formatted expression.
---
Usage:
- `apply("101", parse("|0 -> 0|| 1 -> 0| 0 -> "))`
rule
String pair that represents `pattern -> replace`; each rule may be ordinary or terminating.
Fields:
pattern (series string) : Pattern to replace.
replacement (series string) : Replacement patterns.
termination (series bool) : Termination rule.
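A hypothetical usage sketch for this library follows; the import path and version number are assumptions, and the rule set is taken from the usage examples above:

```pine
//@version=5
indicator("MarkovAlgorithm demo (sketch)")
// Import path and version are assumptions; adjust to the published library.
import RicardoSantos/MarkovAlgorithm/1 as ma

if barstate.islast
    // String-rules overload: the rule string is parsed internally.
    string result = ma.apply("101", "|0 -> 0|| 1 -> 0| 0 -> ")
    label.new(bar_index, high, "101 -> " + result)
```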
MarkovChain
Library "MarkovChain"
Generic Markov Chain type functions.
---
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the
probability of each event depends only on the state attained in the previous event.
---
reference:
Understanding Markov Chains, Examples and Applications. Second Edition. Book by Nicolas Privault.
en.wikipedia.org
www.geeksforgeeks.org
towardsdatascience.com
github.com
stats.stackexchange.com
timeseriesreasoning.com
www.ris-ai.com
github.com
gist.github.com
github.com
gist.github.com
writings.stephenwolfram.com
kevingal.com
towardsdatascience.com
spedygiorgio.github.io
github.com
www.projectrhea.org
method to_string(this)
Translate a Markov Chain object to a string format.
Namespace types: MC
Parameters:
this (MC) : `MC` . Markov Chain object.
Returns: string
method to_table(this, position, text_color, text_size)
Namespace types: MC
Parameters:
this (MC)
position (string)
text_color (color)
text_size (string)
method create_transition_matrix(this)
Namespace types: MC
Parameters:
this (MC)
method generate_transition_matrix(this)
Namespace types: MC
Parameters:
this (MC)
new_chain(states, name)
Parameters:
states (state[])
name (string)
from_data(data, name)
Parameters:
data (string[])
name (string)
method probability_at_step(this, target_step)
Namespace types: MC
Parameters:
this (MC)
target_step (int)
method state_at_step(this, start_state, target_state, target_step)
Namespace types: MC
Parameters:
this (MC)
start_state (int)
target_state (int)
target_step (int)
method forward(this, obs)
Namespace types: HMC
Parameters:
this (HMC)
obs (int[])
method backward(this, obs)
Namespace types: HMC
Parameters:
this (HMC)
obs (int[])
method viterbi(this, observations)
Namespace types: HMC
Parameters:
this (HMC)
observations (int[])
method baumwelch(this, observations)
Namespace types: HMC
Parameters:
this (HMC)
observations (int[])
Node
Target node.
Fields:
index (series int) : . Key index of the node.
probability (series float) : . Probability rate of activation.
state
State reference.
Fields:
name (series string) : . Name of the state.
index (series int) : . Key index of the state.
target_nodes (Node[]) : . List of index references and probabilities to target states.
MC
Markov Chain reference object.
Fields:
name (series string) : . Name of the chain.
states (state[]) : . List of state nodes, each with its name, index, targets, and transition probabilities.
size (series int) : . Number of unique states.
transitions (matrix<float>) : . Transition matrix.
HMC
Hidden Markov Chain reference object.
Fields:
name (series string) : . Name of the hidden chain.
states_hidden (state[]) : . List of hidden state nodes, each with its name, index, targets, and transition probabilities.
states_obs (state[]) : . List of observable state nodes, each with its name, index, targets, and transition probabilities.
transitions (matrix<float>) : . Transition matrix.
emissions (matrix<float>) : . Emission matrix.
initial_distribution (float[]) : . Initial probability distribution for the hidden states.
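A hypothetical end-to-end sketch: build a chain from labelled price moves with `from_data` and render it with `to_table`. The import path, version, and the `MC` return type of `from_data` are assumptions:

```pine
//@version=5
indicator("MarkovChain demo (sketch)", overlay = true)
// Import path and version are assumptions; adjust to the published library.
import RicardoSantos/MarkovChain/1 as mc

if barstate.islast and bar_index > 101
    // Label the last 100 bar-to-bar moves and let the library infer the
    // states and transition probabilities from the sequence.
    array<string> data = array.new<string>()
    for i = 0 to 99
        array.push(data, close[i] > close[i + 1] ? "up" : "down")
    mc.MC chain = mc.from_data(data, "price moves") // return type assumed
    chain.to_table(position.top_right, color.white, size.small)
```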
FunctionProbabilityViterbi
Library "FunctionProbabilityViterbi"
The Viterbi Algorithm calculates the most likely sequence of hidden states *(called Viterbi path)*
that results in a sequence of observed events.
viterbi(observations, transitions, emissions, initial_distribution)
Calculates the most probable path in a Markov model.
Parameters:
observations (int[]) : array<int> . Observation states data.
transitions (matrix<float>) : matrix<float> . Transition probability table, (HxH, H: Hidden states).
emissions (matrix<float>) : matrix<float> . Emission probability table, (OxH, O: Observed states).
initial_distribution (float[]) : array<float> . Initial probability distribution for the hidden states.
Returns: array<int>. Most probable path.
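A hypothetical call sketch for a two-state, two-symbol model; the import path, version, and every probability value are illustrative assumptions:

```pine
//@version=5
indicator("Viterbi demo (sketch)")
// Import path and version are assumptions; adjust to the published library.
import RicardoSantos/FunctionProbabilityViterbi/1 as vit

if barstate.islast
    matrix<float> a = matrix.new<float>(2, 2, 0.0) // transitions (HxH)
    matrix.set(a, 0, 0, 0.7)
    matrix.set(a, 0, 1, 0.3)
    matrix.set(a, 1, 0, 0.4)
    matrix.set(a, 1, 1, 0.6)
    matrix<float> b = matrix.new<float>(2, 2, 0.5) // emissions (OxH)
    array<float> pi = array.from(0.6, 0.4)         // initial distribution
    array<int> obs = array.from(0, 1, 1, 0, 1)     // observed events
    // Documented to return the most probable hidden-state path.
    array<int> path = vit.viterbi(obs, a, b, pi)
    string s = ""
    for v in path
        s += str.tostring(v) + " "
    label.new(bar_index, high, "Viterbi path: " + s)
```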
FunctionBaumWelch
Library "FunctionBaumWelch"
Baum-Welch Algorithm, also known as the Forward-Backward Algorithm, uses the well-known EM algorithm to find the maximum likelihood estimate of the parameters of a hidden Markov model given a set of observed feature vectors.
---
### Function List:
> `forward (array<float> pi, matrix<float> a, matrix<float> b, array<int> obs)`
> `forward (array<float> pi, matrix<float> a, matrix<float> b, array<int> obs, bool scaling)`
> `backward (matrix<float> a, matrix<float> b, array<int> obs)`
> `backward (matrix<float> a, matrix<float> b, array<int> obs, array<float> c)`
> `baumwelch (array<int> observations, int nstates)`
> `baumwelch (array<int> observations, array<float> pi, matrix<float> a, matrix<float> b)`
---
### Reference:
> en.wikipedia.org
> github.com
> en.wikipedia.org
> www.rdocumentation.org
> www.rdocumentation.org
forward(pi, a, b, obs)
Computes forward probabilities for state `X` up to observation at time `k`, defined as the probability of observing the sequence of observations `e_1 ... e_k` and that the state at time `k` is `X`.
Parameters:
pi (float[]) : Initial probabilities.
a (matrix<float>) : Transitions, hidden transition matrix `a` or alpha = transition probability matrix of changing states given a state. The matrix is of size (M x M), where M is the number of states.
b (matrix<float>) : Emissions, matrix of observation probabilities `b` or beta = observation probabilities given a state. The matrix is of size (M x O), where M is the number of states and O is the number of different possible observations.
obs (int[]) : List with actual state observation data.
Returns: - `matrix<float> _alpha`: Forward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first dimension refers to the state and the second dimension to time.
forward(pi, a, b, obs, scaling)
Computes forward probabilities for state `X` up to observation at time `k`, defined as the probability of observing the sequence of observations `e_1 ... e_k` and that the state at time `k` is `X`.
Parameters:
pi (float[]) : Initial probabilities.
a (matrix<float>) : Transitions, hidden transition matrix `a` or alpha = transition probability matrix of changing states given a state. The matrix is of size (M x M), where M is the number of states.
b (matrix<float>) : Emissions, matrix of observation probabilities `b` or beta = observation probabilities given a state. The matrix is of size (M x O), where M is the number of states and O is the number of different possible observations.
obs (int[]) : List with actual state observation data.
scaling (bool) : Normalize `alpha` scale.
Returns: - #### Tuple with:
> - `matrix<float> _alpha`: Forward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first dimension refers to the state and the second dimension to time.
> - `array<float> _c`: Array with normalization scale.
backward(a, b, obs)
Computes backward probabilities for state `X` and observation at time `k`, defined as the probability of observing the sequence of observations `e_k+1, ... , e_n` under the condition that the state at time `k` is `X`.
Parameters:
a (matrix<float>) : Transitions, hidden transition matrix `a` or alpha = transition probability matrix of changing states given a state. The matrix is of size (M x M), where M is the number of states.
b (matrix<float>) : Emissions, matrix of observation probabilities `b` or beta = observation probabilities given a state. The matrix is of size (M x O), where M is the number of states and O is the number of different possible observations.
obs (int[]) : Array with actual state observation data.
Returns: - `matrix<float> _beta`: Backward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first dimension refers to the state and the second dimension to time.
backward(a, b, obs, c)
Computes backward probabilities for state `X` and observation at time `k`, defined as the probability of observing the sequence of observations `e_k+1, ... , e_n` under the condition that the state at time `k` is `X`.
Parameters:
a (matrix<float>) : Transitions, hidden transition matrix `a` or alpha = transition probability matrix of changing states given a state. The matrix is of size (M x M), where M is the number of states.
b (matrix<float>) : Emissions, matrix of observation probabilities `b` or beta = observation probabilities given a state. The matrix is of size (M x O), where M is the number of states and O is the number of different possible observations.
obs (int[]) : Array with actual state observation data.
c (float[]) : Array with normalization scaling coefficients.
Returns: - `matrix<float> _beta`: Backward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first dimension refers to the state and the second dimension to time.
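A hypothetical sketch chaining the two passes, where the scaling coefficients returned by `forward` feed `backward`; the import path, version, and all probability values are assumptions:

```pine
//@version=5
indicator("Forward-backward demo (sketch)")
// Import path and version are assumptions; adjust to the published library.
import RicardoSantos/FunctionBaumWelch/1 as fbw

if barstate.islast
    array<float> pi = array.from(0.5, 0.5)         // initial probabilities
    matrix<float> a = matrix.new<float>(2, 2, 0.5) // transitions (M x M)
    matrix<float> b = matrix.new<float>(2, 2, 0.5) // emissions (M x O)
    array<int> obs = array.from(0, 1, 0, 0, 1)     // observation data
    // Scaled forward pass returns the alphas plus the scaling array.
    [alpha, c] = fbw.forward(pi, a, b, obs, true)
    // Backward pass reuses the same scaling coefficients.
    beta = fbw.backward(a, b, obs, c)
    string s = "alpha[0][0]: " + str.tostring(matrix.get(alpha, 0, 0))
    s += ", beta[0][0]: " + str.tostring(matrix.get(beta, 0, 0))
    label.new(bar_index, high, s)
```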
baumwelch(observations, nstates)
**(Random Initialization)** The Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the unknown parameters of a hidden Markov model (HMM). It makes use of the forward-backward algorithm to compute the statistics for the expectation step.
Parameters:
observations (int[]) : List of observed states.
nstates (int) : Number of hidden states.
Returns: - #### Tuple with:
> - `array<float> _pi`: Initial probability distribution.
> - `matrix<float> _a`: Transition probability matrix.
> - `matrix<float> _b`: Emission probability matrix.
---
requires: `import RicardoSantos/WIPTensor/2 as Tensor`
baumwelch(observations, pi, a, b)
The Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the unknown parameters of a hidden Markov model (HMM). It makes use of the forward-backward algorithm to compute the statistics for the expectation step.
Parameters:
observations (int[]) : List of observed states.
pi (float[]) : Initial probability distribution.
a (matrix<float>) : Transitions, hidden transition matrix `a` or alpha = transition probability matrix of changing states given a state. The matrix is of size (M x M), where M is the number of states.
b (matrix<float>) : Emissions, matrix of observation probabilities `b` or beta = observation probabilities given a state. The matrix is of size (M x O), where M is the number of states and O is the number of different possible observations.
Returns: - #### Tuple with:
> - `array<float> _pi`: Initial probability distribution.
> - `matrix<float> _a`: Transition probability matrix.
> - `matrix<float> _b`: Emission probability matrix.
---
requires: `import RicardoSantos/WIPTensor/2 as Tensor`
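A hypothetical sketch of the random-initialization overload, estimating a two-state HMM from up/down closes; the import path and version are assumptions (note the `WIPTensor` dependency above):

```pine
//@version=5
indicator("Baum-Welch demo (sketch)")
// Import path and version are assumptions; adjust to the published library.
import RicardoSantos/FunctionBaumWelch/1 as fbw

if barstate.islast and bar_index > 50
    // Encode the last 50 bar-to-bar moves as a binary observation sequence.
    array<int> obs = array.new<int>()
    for i = 0 to 49
        array.push(obs, close[i] > close[i + 1] ? 1 : 0)
    // Estimate a 2-state HMM (pi, a, b) from the observations alone.
    [pi, a, b] = fbw.baumwelch(obs, 2)
    label.new(bar_index, high, "a[0][0]: " + str.tostring(matrix.get(a, 0, 0)))
```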
FunctionSMCMC
Library "FunctionSMCMC"
Methods to implement Markov Chain Monte Carlo simulation (MCMC).
markov_chain(weights, actions, target_path, position, last_value)
A basic implementation of the Markov chain algorithm.
Parameters:
weights : float array, weights of the Markov Chain.
actions : float array, actions of the Markov Chain.
target_path : float array, target path array.
position : int, index of the path.
last_value : float, base value to increment.
Returns: void, updates the target array.
mcmc(weights, actions, start_value, n_iterations)
Uses a Monte Carlo algorithm to simulate a Markov chain at each step.
Parameters:
weights : float array, weights of the Markov Chain.
actions : float array, actions of the Markov Chain.
start_value : float, base value to start simulation.
n_iterations : integer, number of iterations to run.
Returns: float array with path.
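A hypothetical usage sketch; the import path, version, and the weight/action values are illustrative assumptions:

```pine
//@version=5
indicator("MCMC demo (sketch)")
// Import path and version are assumptions; adjust to the published library.
import RicardoSantos/FunctionSMCMC/1 as smcmc

if barstate.islast
    // Three possible step actions with their selection weights.
    array<float> weights = array.from(0.25, 0.50, 0.25)
    array<float> actions = array.from(-1.0, 0.0, 1.0)
    // Simulate a 100-step path starting from the current close.
    array<float> path = smcmc.mcmc(weights, actions, close, 100)
    label.new(bar_index, high, "path end: " + str.tostring(array.last(path)))
```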
FunctionDecisionTree
Library "FunctionDecisionTree"
Method to generate a decision tree based on weights.
decision_tree(weights, depth)
Method to generate a decision tree based on weights.
Parameters:
weights : float array, weights for decision consideration.
depth : int, depth of the tree.
Returns: int array
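A hypothetical usage sketch; the import path, version, and the weights are illustrative assumptions:

```pine
//@version=5
indicator("Decision tree demo (sketch)")
// Import path and version are assumptions; adjust to the published library.
import RicardoSantos/FunctionDecisionTree/1 as dt

if barstate.islast
    // Two weighted outcomes, tree of depth 3.
    array<float> weights = array.from(0.4, 0.6)
    array<int> decisions = dt.decision_tree(weights, 3)
    string s = ""
    for d in decisions
        s += str.tostring(d) + " "
    label.new(bar_index, high, "decisions: " + s)
```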
SMA Stochastic Forecast
This tool uses a discrete-time non-Markovian martingale stochastic process under the hood (please do not confuse it with the strategy of the same name) to forecast the future behaviour of the Simple Moving Average, up to 28 bars ahead (customizable). The longer the average period, the more accurate the forecast.
The common use cases are:
- You can apply two instances of this indicator to your chart to obtain a crossover forecast.
- You can decrease the interval between forecasts to obtain a bunch of possible trajectories.
- Decreasing the forecast interval for two instances, you will get the Kraken.
This is a further improvement of my research work on forecasting.
Mr. @syrinxflunki was the only one who provided clear and useful feedback after testing, so he gets free lifetime access. I respect fair play.
If you have any questions, you can contact me via private messages.
Good luck.