Seasonality Monte Carlo Forecaster [BackQuant]
Plain-English overview
This tool projects a cone of plausible future prices by combining two ideas that traders already use intuitively: seasonality and uncertainty. It watches how your market typically behaves around this calendar date, turns that seasonal tendency into a small daily “drift,” then runs many randomized price paths forward to estimate where price could land tomorrow, next week, or a month from now. The result is a probability cone with a clear expected path, plus optional overlays that show how past years tended to move from this point on the calendar. It is a planning tool, not a crystal ball: the goal is to quantify ranges and odds so you can size, place stops, set targets, and time entries with more realism.
What Monte Carlo is and why quants rely on it
• Definition. Monte Carlo simulation is a way to answer “what might happen next?” when there is randomness in the system. Instead of producing a single forecast, it generates thousands of alternate futures by repeatedly sampling random shocks and adding them to a model of how prices evolve.
• Why it is used. Markets are noisy. A single point forecast hides risk. Monte Carlo gives a distribution of outcomes so you can reason in probabilities: the median path, the 68% band, the 95% band, tail risks, and the chance of hitting a specific level within a horizon.
• Core strengths in quant finance.
– Path-dependent questions: “What is the probability we touch a stop before a target?” “What is the expected drawdown on the way to my objective?”
– Pricing and risk: Useful for path-dependent options, Value-at-Risk (VaR), expected shortfall (CVaR), stress paths, and scenario analysis when closed-form formulas are unrealistic.
– Planning under uncertainty: Portfolio construction and rebalancing rules can be tested against a cloud of plausible futures rather than a single guess.
• Why it fits trading workflows. It turns gut feel like “seasonality is supportive here” into quantitative ranges: “median path suggests +X% with a 68% band of ±Y%; stop at Z has only ~16% odds of being tagged in N days.”
How this indicator builds its probability cone
1) Seasonal pattern discovery
The script builds two day-of-year maps as new data arrives:
• A return map where each calendar day stores an exponentially smoothed average of that day’s log return (yesterday→today). The smoothing (90% old, 10% new) behaves like an EWMA, letting older seasons matter while adapting to new information.
• A volatility map that tracks the typical absolute return for the same calendar day.
It calculates the day-of-year carefully (with leap-year adjustment) and indexes into a 365-slot seasonal array so “March 18” is compared with past March 18ths. This becomes the seasonal bias that gently nudges simulations up or down on each forecast day.
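The indicator itself is written in Pine Script, but the update logic is simple enough to sketch. Below is a minimal Python illustration of the 90/10 EWMA update into a 365-slot day-of-year map as described above; the names, the leap-day folding, and the example values are illustrative assumptions, not the script's actual code.

```python
import math

SLOTS = 365
seasonal_return = [0.0] * SLOTS   # EWMA of each calendar day's log return
seasonal_vol    = [0.0] * SLOTS   # EWMA of each calendar day's absolute return

def update_seasonal_maps(day_of_year, close_today, close_yesterday):
    """Online EWMA update: 90% weight on the stored seasonal value, 10% on today."""
    # one simple way to fold a leap day into a 365-slot map (the script's exact adjustment may differ)
    idx = min(day_of_year, SLOTS) - 1
    log_ret = math.log(close_today / close_yesterday)
    seasonal_return[idx] = 0.9 * seasonal_return[idx] + 0.1 * log_ret
    seasonal_vol[idx]    = 0.9 * seasonal_vol[idx]    + 0.1 * abs(log_ret)

# Example: March 18 is day 77 of a non-leap year
update_seasonal_maps(77, 101.2, 100.0)
```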
2) Choice of randomness engine
You can pick how the future shocks are generated:
• Daily mode uses a Gaussian draw with the seasonal bias as the mean and a volatility that comes from realized returns, scaled down to avoid over-fitting. It relies on the Box–Muller transform internally to turn two uniform random numbers into one normal shock.
• Weekly mode uses bootstrap sampling from the seasonal return history (resampling actual historical daily drifts and then blending in a fraction of the seasonal bias). Bootstrapping is robust when the empirical distribution has asymmetry or fatter tails than a normal distribution.
Both modes seed their random draws deterministically per path and day, which makes plots reproducible bar-to-bar and avoids flickering bands.
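For readers who want to see the two engines side by side, here is a hedged Python sketch of the shock generation described above. The Box–Muller conversion and the bootstrap resampling follow the description; the blend fraction, sigma value, and the per-path/per-day seeding scheme are illustrative assumptions rather than the script's actual parameters.

```python
import math, random

def gaussian_shock(seasonal_bias, sigma, rng):
    """Daily mode: Box-Muller turns two uniform draws into one normal shock,
    centred on the seasonal bias."""
    u1 = max(rng.random(), 1e-12)          # guard against log(0)
    u2 = rng.random()
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    return seasonal_bias + sigma * z

def bootstrap_shock(seasonal_history, seasonal_bias, blend, rng):
    """Weekly mode: resample an actual historical daily drift and blend in a
    fraction of the seasonal bias."""
    return rng.choice(seasonal_history) + blend * seasonal_bias

# Deterministic seeding per (path, day) keeps the cone stable from bar to bar
path, day = 3, 7
rng = random.Random(path * 10_000 + day)   # illustrative seeding scheme
print(gaussian_shock(0.0004, 0.012, rng))
print(bootstrap_shock([0.001, -0.002, 0.0005], 0.0004, 0.25, rng))
```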
3) Volatility scaling to current conditions
Markets do not always live in average volatility. The engine computes a simple volatility factor from ATR(20)/price and scales the simulated shocks up or down within sensible bounds (clamped between 0.5× and 2.0×). When the current regime is quiet, the cone narrows; when ranges expand, the cone widens. This prevents the classic mistake of projecting calm markets into a storm or vice versa.
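As a rough sketch of how such a scaling factor can be computed (the 2% baseline ratio below is an assumption for illustration; the script's reference level is not stated), consider:

```python
def volatility_factor(atr20, price, baseline=0.02, lo=0.5, hi=2.0):
    """Compare current ATR(20)/price to a baseline ratio and clamp the
    result to [0.5, 2.0] so the cone can widen or narrow, but not explode."""
    raw = (atr20 / price) / baseline
    return max(lo, min(hi, raw))

# Quiet regime: ATR is 1% of price -> cone shrinks to half width
print(volatility_factor(atr20=1.0, price=100.0))   # 0.5
# Stormy regime: ATR is 6% of price -> capped at 2x
print(volatility_factor(atr20=6.0, price=100.0))   # 2.0
```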
4) Many futures, summarized by percentiles
The model generates a matrix of price paths (capped at 100 runs for performance inside TradingView), each path stepping forward for your selected horizon. For each forecast day it sorts the simulated prices and pulls key percentiles:
• 5th and 95th → approximate 95% band (outer cone).
• 16th and 84th → approximate 68% band (inner cone).
• 50th → the median or “expected path.”
These are drawn as polylines so you can immediately see central tendency and dispersion.
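Putting the pieces together, here is a compact Python sketch of the simulate-then-summarize step: run a fixed number of paths, evolve price multiplicatively with exp() of the shocks, then sort each forecast day's simulated prices and read off the 5/16/50/84/95 percentiles. The path count, bias, and sigma values are placeholders, not the indicator's internals.

```python
import math, random

def simulate_paths(spot, biases, sigmas, n_paths=100):
    """biases/sigmas hold one drift and one volatility value per forecast day."""
    paths = []
    for p in range(n_paths):
        rng = random.Random(p)                     # deterministic per path
        price, path = spot, []
        for bias, sigma in zip(biases, sigmas):
            u1, u2 = max(rng.random(), 1e-12), rng.random()
            z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
            price *= math.exp(bias + sigma * z)    # log-return applied via exp()
            path.append(price)
        paths.append(path)
    return paths

paths = simulate_paths(100.0, biases=[0.0004] * 20, sigmas=[0.012] * 20)
day10 = sorted(path[9] for path in paths)          # all simulated prices for day 10
pct = lambda q: day10[min(len(day10) - 1, int(q / 100 * len(day10)))]
bands = {q: pct(q) for q in (5, 16, 50, 84, 95)}   # outer cone, inner cone, median
```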
5) A historical overlay (optional)
Turn on the overlay to sketch a dotted path of what a purely seasonal projection would look like for the next ~30 days using only the return map, no randomness. This is not a forecast; it is a visual reminder of the seasonal drift you are biasing toward.
Inputs you control and how to think about them
Monte Carlo Simulation
• Price Series for Calculation. The source series, typically close.
• Enable Probability Forecasts. Master switch for simulation and drawing.
• Simulation Iterations. Requested number of paths to run. Internally capped at 100 to protect performance, which is generally enough to estimate the percentiles for a trading chart. If you need ultra-smooth bands, shorten the horizon.
• Forecast Days Ahead. The length of the cone. Longer horizons dilute seasonal signal and widen uncertainty.
• Probability Bands. Draw all bands, just 95%, just 68%, or a custom level (display logic remains 68/95 internally; the custom number is for labeling and color choice).
• Pattern Resolution. Daily leans on day-of-year effects like “turn-of-month” or holiday patterns. Weekly biases toward day-of-week tendencies and bootstraps from history.
• Volatility Scaling. On by default so the cone respects today’s range context.
Plotting & UI
• Probability Cone. Plots the outer and inner percentile envelopes.
• Expected Path. Plots the median line through the cone.
• Historical Overlay. Dotted seasonal-only projection for context.
• Band Transparency/Colors. Customize primary (outer) and secondary (inner) band colors and the mean path color. Use higher transparency for cleaner charts.
What appears on your chart
• A cone starting at the most recent bar, fanning outward. The outer lines are the ~95% band; the inner lines are the ~68% band.
• A median path (default blue) running through the center of the cone.
• An info panel on the final historical bar that summarizes simulation count, forecast days, number of seasonal patterns learned, the current day-of-year, expected percentage return to the median, and the approximate 95% half-range in percent.
• Optional historical seasonal path drawn as dotted segments for the next 30 bars.
How to use it in trading
1) Position sizing and stop logic
The cone translates “volatility plus seasonality” into distances.
• Put stops outside the inner band if you want only ~16% odds of a stop-out due to noise before your thesis can play.
• Size positions so that a test of the inner band is survivable and a test of the outer band is rare but acceptable.
• If your target sits inside the 68% band at your horizon, the payoff is likely modest; outside the 68% but inside the 95% can justify “one-good-push” trades; beyond the 95% band is a low-probability flyer—consider scaling plans or optionality.
2) Entry timing with seasonal bias
When the median path slopes up from this calendar date and the cone is relatively narrow, a pullback toward the lower inner band can be a high-quality entry with a tight invalidation. If the median slopes down, fade rallies toward the upper band or step aside if it clashes with your system.
3) Target selection
Project your time horizon to N bars ahead, then pick targets around the median or the opposite inner band depending on your style. You can also anchor dynamic take-profits to the moving median as new bars arrive.
4) Scenario planning & “what-ifs”
Before events, glance at the cone: if the 95% band already spans a huge range, trade smaller, expect whips, and avoid placing stops at obvious band edges. If the cone is unusually tight, consider breakout tactics and be ready to add if volatility expands beyond the inner band with follow-through.
5) Options and vol tactics
• When the cone is tight: Prefer long gamma structures (debit spreads) only if you expect a regime shift; otherwise premium selling may dominate.
• When the cone is wide: Debit structures benefit from range; credit spreads need wider wings or smaller size. Align with your separate IV metrics.
Reading the probability cone like a pro
• Cone slope = seasonal drift. Upward slope means the calendar has historically favored positive drift from this date, downward slope the opposite.
• Cone width = regime volatility. A widening fan tells you that uncertainty grows fast; a narrow cone says the market typically stays contained.
• Mean vs. price gap. If spot trades well above the median path and the upper band, mean-reversion risk is high. If spot presses the lower inner band in an up-sloping cone, you are in the “buy fear” zone.
• Touches and pierces . Touching the inner band is common noise; piercing it with momentum signals potential regime change; the outer band should be rare and often brings snap-backs unless there is a structural catalyst.
Methodological notes (what the code actually does)
• Log returns are used for additivity and better statistical behavior: sim_ret is applied via exp(sim_ret) to evolve price.
• Seasonal arrays are updated online with EWMA (90/10) so the model keeps learning as each bar arrives.
• Leap years are handled; indexing still normalizes into a 365-slot map so the seasonal pattern remains stable.
• Gaussian engine (Daily mode) centers shocks on the seasonal bias with a conservative standard deviation.
• Bootstrap engine (Weekly mode) resamples from observed seasonal returns and adds a fraction of the bias, which captures skew and fat tails better.
• Volatility adjustment multiplies each daily shock by a factor derived from ATR(20)/price, clamped between 0.5 and 2.0 to avoid extreme cones.
• Performance guardrails: simulations are capped at 100 paths; the probability cone uses polylines (no heavy fills) and only draws on the last confirmed bar to keep charts responsive.
• Prerequisite data: at least ~30 seasonal entries are required before the model will draw a cone; otherwise it waits for more history.
Strengths and limitations
• Strengths:
– Probabilistic thinking replaces single-point guessing.
– Seasonality adds a small but meaningful directional bias that many markets exhibit.
– Volatility scaling adapts to the current regime so the cone stays realistic.
• Limitations:
– Seasonality can break around structural changes, policy shifts, or one-off events.
– The number of paths is performance-limited; percentile estimates are good for trading, not for academic precision.
– The model assumes tomorrow’s randomness resembles recent randomness; if regime shifts violently, the cone will lag until the EWMA adapts.
– Holidays and missing sessions can thin the seasonal sample for some assets; be cautious with very short histories.
Tuning guide
• Horizon: 10–20 bars for tactical trades; 30+ for swing planning when you care more about broad ranges than precise targets.
• Iterations: The default 100 is enough for stable 5/16/50/84/95 percentiles. If you crave smoother lines, shorten the horizon or run on higher timeframes.
• Daily vs. Weekly: Daily for equities and crypto where month-end and turn-of-month effects matter; Weekly for futures and FX where day-of-week behavior is strong.
• Volatility scaling: Keep it on. Turn off only when you intentionally want a “pure seasonality” cone unaffected by current turbulence.
Workflow examples
• Swing continuation: Cone slopes up, price pulls into the lower inner band, your system fires. Enter near the band, stop just outside the outer line for the next 3–5 bars, target near the median or the opposite inner band.
• Fade extremes: Cone is flat or down, price gaps to the upper outer band on news, then stalls. Favor mean-reversion toward the median, size small if volatility scaling is elevated.
• Event play: Before CPI or earnings on a proxy index, check cone width. If the inner band is already wide, cut size or prefer options structures that benefit from range.
Good habits
• Pair the cone with your entry engine (breakout, pullback, order flow). Let Monte Carlo do range math; let your system do signal quality.
• Do not anchor blindly to the median; recalc after each bar. When the cone’s slope flips or width jumps, the plan should adapt.
• Validate seasonality for your symbol and timeframe; not every market has strong calendar effects.
Summary
The Seasonality Monte Carlo Forecaster wraps institutional risk planning into a single overlay: a data-driven seasonal drift, realistic volatility scaling, and a probabilistic cone that answers “where could we be, with what odds?” within your trading horizon. Use it to place stops where randomness is less likely to take you out, to set targets aligned with realistic travel, and to size positions with confidence born from distributions rather than hunches. It will not predict the future, but it will keep your decisions anchored to probabilities—the language markets actually speak.
DCA Simulation for CryptoCommunity v1.1
Overview
This script provides a detailed simulation of a Dollar-Cost Averaging (DCA) strategy tailored for crypto traders. It allows users to visualize how their DCA strategy would perform historically under specific parameters. The script is designed to help traders understand the mechanics of DCA and how it influences average price movement, budget utilization, and trade outcomes.
Key Features:
Combines Interval and Safety Order DCA:
Interval DCA: Regular purchases based on predefined time intervals.
Safety Order DCA: Additional buys triggered by percentage price drops.
Interactive Visualization:
Displays buy levels, average price, and profit-taking points on the chart.
Allows traders to assess how their strategy adapts to price movements.
Comprehensive Dashboard:
Tracks money spent, contracts acquired, and budget utilization.
Shows maximum amounts used if profit-taking is active.
Dynamic Safety Orders:
Resets safety orders when a new higher high is established.
Customizable Parameters:
Adjustable buy frequency, safety order settings, and profit-taking levels.
Suitable for traders with varying budgets and risk tolerances.
Default Strategy Settings:
Account Size: Default account size is set to $10,000 to represent a realistic budget for the average trader.
Commission & Slippage: Includes realistic trading fees and slippage assumptions to ensure accurate backtesting results.
Risk Management: Defaults to risking no more than 5% of the account balance per trade.
Sample Size: Optimized to generate a minimum of 100 trades for meaningful statistical analysis. Users can adjust parameters to fit longer timeframes or different datasets.
Usage Instructions:
Configure Your Strategy: Set the base order, safety order size, and buy frequency based on your preferred DCA approach.
Analyze Historical Performance: Use the chart and dashboard to understand how the strategy performs under different market conditions.
Optimize Parameters: Adjust settings to align with your risk tolerance and trading objectives.
Important Notes:
This script is for educational and simulation purposes. It is not intended to provide financial advice or guarantee profitability.
If the strategy's default settings do not meet your needs, feel free to adjust them while keeping risk management in mind.
TradingView limits the number of open trades to 999, so reduce the buy frequency if necessary to fit longer timeframes.
Dual price forecast with Projection Zone [FXSMARTLAB]
The Dual Price Forecast with Projection Zone indicator is built to simulate potential future price paths based on historical price movements over two defined lookback periods. By running multiple trials (or simulations) on these historical price movements, the indicator achieves a more robust forecast, incorporating the inherent variability of price behavior.
Key Components and Calculation Details
1. Lookback Periods and Historical Price Movements
Lookback Period 1 and Lookback Period 2 specify the range of past data used to generate each projection. For each period, the indicator calculates the price variations (differences between the closing and opening prices) and stores these in arrays.
These historical price variations capture the volatility and price patterns within each period, serving as templates for future price behavior.
2. Trials: Purpose and Function
The trials are a critical element in the projection calculation. Each trial represents a single simulation of possible future price movements, derived from a random reordering of the historical price variations in each lookback period.
By running multiple trials, the indicator explores various sequences of historical movements, simulating different possible future paths. Each trial adds to the projection’s robustness by capturing a unique potential price path based on past behavior.
Running these multiple trials allows the indicator to account for randomness in price behavior, making the projections more comprehensive by covering a range of scenarios rather than relying on a single deterministic forecast.
3. Reverse Option
The reverse option allows the indicator to invert the direction of price movements within each lookback period. When enabled, historical uptrends are treated as downtrends, and vice versa.
This feature is particularly valuable in scenarios where traders expect a potential reversal in market direction. By enabling the reverse option, the indicator can simulate what might happen if past trends inverted, providing an alternative forecast path that considers possible market reversals.
This allows traders to assess both continuation and reversal scenarios, giving them a more balanced view of potential future price paths and helping them prepare for either market direction.
4. Generating the Average Projection Path
Once the trials are complete, the indicator calculates an average projected price path for each lookback period by averaging the results of all trials. This average represents the most likely price trend based on historical data and provides a smoothed projection that mitigates extreme outliers.
By averaging across all trial paths, the indicator generates a more reliable and balanced forecast line, smoothing out the fluctuations that might appear if only one trial or a small number of trials were used.
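To make the trial-and-average mechanics concrete, here is a small Python sketch under the same idea: shuffle the stored close-minus-open variations, cumulate them into one candidate path per trial, and average across trials (the reverse option simply flips the sign of every variation). Function names, the trial count, and the sample values are illustrative, not the indicator's code.

```python
import random

def average_projection(variations, horizon, trials=200, reverse=False, seed=0):
    """Average of `trials` shuffled cumulative paths built from historical
    close-open variations; returned values are offsets from the last close."""
    rng = random.Random(seed)
    sums = [0.0] * horizon
    for _ in range(trials):
        sample = [-v for v in variations] if reverse else list(variations)
        rng.shuffle(sample)
        level = 0.0
        for step in range(horizon):
            level += sample[step % len(sample)]    # reuse the pool if horizon > lookback
            sums[step] += level
    return [s / trials for s in sums]

# Hypothetical variations from two short lookback windows
projection_1 = average_projection([0.4, -0.2, 0.1, -0.5, 0.3], horizon=10)
projection_2 = average_projection([0.8, -0.6, 0.2, -0.1], horizon=10)
```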
5. Projection Zone Visualization
The indicator plots the two average projection paths (one for each lookback period) as Projection 1 and Projection 2, each in a user-defined color.
The Projection Zone is the area between these two lines, filled with a semi-transparent color. This zone visually represents the potential range of future price movement, highlighting where prices are likely to oscillate if historical trends persist.
The Projection Zone effectively functions as a potential support and resistance boundary, providing traders with a visual reference for possible price fluctuations within a specific range.
6. Display of Lookback Zones
To give context to the projections, the indicator can also display colored lookback zones on the chart. These zones correspond to Lookback Period 1 and Lookback Period 2 and are color-coded to match their respective projection lines.
These zones allow traders to see the sections of historical data used in the calculation, helping them understand which past price behaviors influenced the current projections.
Benefits of the Indicator
The "Dual Price Forecast with Projection Zone" indicator provides a multi-scenario forecast based on past price dynamics. Its use of trials ensures that projections are not based on a single deterministic path but on a range of possible scenarios that better reflect the inherent randomness in financial markets.
By generating a probabilistic forecast within a defined zone, the indicator helps traders to:
Anticipate potential price ranges and areas of support/resistance based on historical trends.
Understand the influence of different timeframes (short-term and long-term lookbacks) on future price behavior.
Make informed decisions by visualizing the likely variability of future prices within a controlled projection zone.
Prepare for both continuation and reversal scenarios, thanks to the reverse option. This feature is especially useful in markets where trends may change direction, as it allows traders to explore what might happen if the prevailing trend reverses.
Anchored Monte Carlo Shuffled Projection [LuxAlgo]
The Anchored Monte Carlo Shuffled Projection tool randomly simulates future price points based on historical bar movements made before a user-anchored point in time.
By anchoring our data and projections to a single point in time, users can better understand and reflect on how the price played out while taking into consideration our random simulations.
🔶 USAGE
After selecting the indicator to apply to the chart, you will be prompted to "Set the Anchor Point". Do so by clicking on the desired location on your chart; only the time coordinate of the click is used as the anchor point.
Note: To select a new anchor point when applied to the chart, click on the 'More' dropdown next to the indicator status bar (○○○), then select "Reset points...".
Alternate Method: You are also able to click and drag the vertical line that displays on the anchor point bar when the indicator is highlighted.
By randomly simulating bar movements, a range is developed of potential price action which could be utilized to locate future price development as well as potential support/resistance levels.
Performing numerous simulations and taking the average at each step will converge toward the result highlighted by the "Average Line", and can point out where the price might develop, assuming the trend and amount of volatility persist.
Because shuffling does not change the sum of the sampled changes, every default-mode simulation ends at the same value: current closing price + sum of changes in the calculation window.
This constraint causes the simulations to always display an endpoint consistent with the current lookback's slope.
While this may be helpful to some traders, this indicator includes an option to produce a less biased range, as seen below:
🔶 DETAILS
The Anchored Monte Carlo Shuffled Projection tool creates simulations based on prices within a user-set lookback window originating at the specified anchor point. Simulations are done as follows:
Collect each bar's price changes in the user-set window.
Randomize the order of each change in the window.
Project the cumulative sum of the shuffled changes from the current closing price.
Collect data on each point along the way.
This is the process for the Default calculation; in the 'Randomize Direction' calculation, every other change has its sign inverted as it is added, which randomizes the endpoint of each simulation.
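A minimal Python sketch of one such simulation, under the reading that 'Randomize Direction' flips the sign of every other shuffled change (names and sample values are illustrative, not the script's code):

```python
import random

def shuffled_projection(changes, last_close, randomize_direction=False, seed=0):
    """Shuffle the window's bar-to-bar changes and project their cumulative
    sum from the current close; optionally sign-flip every other change so
    the endpoint is no longer pinned to the lookback's total move."""
    rng = random.Random(seed)
    sample = list(changes)
    rng.shuffle(sample)
    if randomize_direction:
        sample = [c if i % 2 == 0 else -c for i, c in enumerate(sample)]
    path, level = [], last_close
    for c in sample:
        level += c
        path.append(level)
    return path

# 1000 simulations from a toy set of changes
sims = [shuffled_projection([0.6, -0.3, 0.2, -0.8, 0.5], 100.0, seed=s) for s in range(1000)]
```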
The script contains each simulation's data for that bar, with a maximum of 1000 simulations.
To get a glimpse behind the scenes, each simulation (up to 99) can be viewed using the 'Visualize Simulations' Options, as seen below.
Because the script holds the full simulation data, it can also compute statistics on this data, such as standard deviations.
In this script, the standard deviation lines are the average of all standard deviations across the vertical data groups; this provides a single value that can be displayed at a set distance from the simulation center line.
🔶 SETTINGS
Lookback: Sets the number of Bars to include in calculations.
Simulation Count: Sets the number of randomized simulations to calculate. (Max 1000)
Randomize Direction: See Details Above. Creates a more 'Normalized' distribution.
Visualize Simulations: See Details Above. Turns on visualizations; colors are randomly generated. The visualized max does not cap the calculated max: if 1000 simulations are used, the data will be from 1000 simulations; however, only the last 99 simulations will be visualized.
🔹 Standard Deviations
Standard Deviation Multiplier: Sets the multiplier to use for the Standard Deviation distance away from the center line.
🔹 Style
Extend Lines: Extends the Simulated Value Lines into the future for further reference and analysis.
Monte Carlo Shuffled Projection [LuxAlgo]
The Monte Carlo Shuffled Projection tool randomly simulates future price points based on historical bar movements made within a user-selected window.
The tool shows potential paths price might take in the future, as well as highlighting potential support/resistance levels.
Note that simulations and their resulting elements are subject to slight changes over time.
🔶 USAGE
By randomly simulating bar movements, a range is developed of potential price action which could be utilized to locate future price development as well as potential support/resistance levels.
Performing a large number of simulations and taking the average at each step will converge toward the result highlighted by the "Average Line", and can point out where the price might develop assuming the trend and amount of volatility persist.
Because shuffling does not change the sum of the sampled changes, every default-mode simulation ends at the same value: current closing price + sum of changes in the calculation window.
This constraint will cause the simulations to always display an endpoint consistent with the current lookback's slope.
While this may be helpful to some traders, this indicator includes an option to produce a less biased range as seen below:
🔶 DETAILS
The Monte Carlo Shuffled Projection tool creates simulations based on the most recent prices within a user-set window. Simulations are done as follows:
Collect each bar's price changes in the user-set window.
Randomize the order of each change in the window.
Project the cumulative sum of the shuffled changes from the current closing price.
Collect data on each point along the way.
This is the process for the Default calculation; in the 'Randomize Direction' calculation, every other change has its sign inverted as it is added, which randomizes the endpoint of each simulation.
The script contains each simulation's data for that bar with a maximum of 1000 simulations.
To get a glimpse behind the scenes, each simulation (up to 99) can be viewed using the 'Visualize Simulations' option, as seen below.
Because the script holds the full simulation data, it can also compute statistics on this data, such as standard deviations.
In this script, the standard deviation lines are the average of all standard deviations across the vertical data groups; this provides a single value that can be displayed at a set distance from the simulation center line.
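As a rough illustration of that aggregation step, the Python sketch below takes the standard deviation of the simulated values at each forecast step (the "vertical data groups"), averages those into one number, scales it by the multiplier, and offsets it from the center line. The simulation helper and sample values are placeholders, not the script's code.

```python
import random
from statistics import mean, pstdev

def one_sim(changes, last_close, seed):
    """One shuffled cumulative path from the window's changes."""
    rng = random.Random(seed)
    sample = list(changes)
    rng.shuffle(sample)
    path, level = [], last_close
    for c in sample:
        level += c
        path.append(level)
    return path

sims = [one_sim([0.6, -0.3, 0.2, -0.8, 0.5], 100.0, seed=s) for s in range(500)]
steps = len(sims[0])

center = [mean(sim[t] for sim in sims) for t in range(steps)]          # average line
per_step_sd = [pstdev(sim[t] for sim in sims) for t in range(steps)]   # vertical groups
offset = 2.0 * mean(per_step_sd)                                       # multiplier = 2.0
upper, lower = [c + offset for c in center], [c - offset for c in center]
```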
🔶 SETTINGS
Color and Toggle Options are Provided throughout.
Lookback: Sets the number of Bars to include in calculations.
Simulation Count: Sets the number of randomized simulations to calculate. (Max 1000)
Randomize Direction: See Details Above. Creates a more 'Normalized' distribution.
Visualize Simulations: See Details Above. Turns on visualizations; colors are randomly generated. The visualized max does not cap the calculated max: if 1000 simulations are used, the data will be from 1000 simulations; however, only the last 99 simulations will be visualized.
Standard Deviation Multiplier: Sets the multiplier to use for the Standard Deviation distance away from the center line.
Monte Carlo Range Forecast [DW]
This is an experimental study designed to forecast the range of price movement from a specified starting point using a Monte Carlo simulation.
Monte Carlo experiments are a broad class of computational algorithms that utilize random sampling to derive real world numerical results.
These types of algorithms have a number of applications in numerous fields of study including physics, engineering, behavioral sciences, climate forecasting, computer graphics, gaming AI, mathematics, and finance.
Although the applications vary, there is a typical process behind the majority of Monte Carlo methods:
-> First, a distribution of possible inputs is defined.
-> Next, values are generated randomly from the distribution.
-> The values are then fed through some form of deterministic algorithm.
-> And lastly, the results are aggregated over some number of iterations.
In this study, the Monte Carlo process used generates a distribution of aggregate pseudorandom linear price returns summed over a user defined period, then plots standard deviations of the outcomes from the mean outcome to generate forecast regions.
The pseudorandom process used in this script relies on a modified Wichmann-Hill pseudorandom number generator (PRNG) algorithm.
Wichmann-Hill is a hybrid generator that uses three linear congruential generators (LCGs) with different prime moduli.
Each LCG within the generator produces an independent, uniformly distributed number between 0 and 1.
The three generated values are then summed and modulo 1 is taken to deliver the final uniformly distributed output.
Because of its long cycle length, Wichmann-Hill is a fantastic generator to use on TV since it's extremely unlikely that you'll ever see a cycle repeat.
The resulting pseudorandom output from this generator has a minimum repetition cycle length of 6,953,607,871,644.
Fun fact: Wichmann-Hill is a widely used PRNG in various software applications. For example, Excel 2003 and later uses this algorithm in its RAND function, and it was the default generator in Python up to v2.2.
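For reference, the textbook Wichmann-Hill generator looks like the Python sketch below: three LCGs with prime moduli, summed modulo 1. The script uses a modified version with its own multi-stage seeding pipeline, so treat this purely as an illustration of the base algorithm.

```python
class WichmannHill:
    """Classic Wichmann-Hill (1982): three linear congruential generators
    with prime moduli; the fractional part of their summed outputs is
    approximately uniform on [0, 1)."""
    def __init__(self, s1=1, s2=2, s3=3):
        self.s1, self.s2, self.s3 = s1, s2, s3

    def random(self):
        self.s1 = (171 * self.s1) % 30269
        self.s2 = (172 * self.s2) % 30307
        self.s3 = (170 * self.s3) % 30323
        return (self.s1 / 30269 + self.s2 / 30307 + self.s3 / 30323) % 1.0

rng = WichmannHill(123, 456, 789)
uniforms = [rng.random() for _ in range(5)]
```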
The generation algorithm in this script takes the Wichmann-Hill algorithm, and uses a multi-stage transformation process to generate the results.
First, a parent seed is selected. This can either be a fixed value, or a dynamic value.
The dynamic parent value is produced by taking advantage of Pine's timenow variable behavior. It produces a variable parent seed by using a frozen ratio of timenow/time.
Because timenow always reflects the current real time when frozen and the time variable reflects the chart's beginning time when frozen, the ratio of these values produces a new number every time the cache updates.
After a parent seed is selected, its value is then fed through a uniformly distributed seed array generator, which generates multiple arrays of pseudorandom "children" seeds.
The seeds produced in this step are then fed through the main generators to produce arrays of pseudorandom simulated outcomes, and a pseudorandom series to compare with the real series.
The main generators within this script are designed to (at least somewhat) model the stochastic nature of financial time series data.
The first step in this process is to transform the uniform outputs of the Wichmann-Hill into outputs that are normally distributed.
In this script, the transformation is done using an estimate of the normal distribution quantile function.
Quantile functions, otherwise known as percent-point or inverse cumulative distribution functions, specify the value of a random variable such that the probability of the variable being within the value's boundary equals the input probability.
The quantile equation for a normal probability distribution is μ + σ(√2)erf^-1(2(p - 0.5)) where μ is the mean of the distribution, σ is the standard deviation, erf^-1 is the inverse Gauss error function, and p is the probability.
Because erf^-1() does not have a simple, closed form interpretation, it must be approximated.
To keep things lightweight in this approximation, I used a truncated Maclaurin Series expansion for this function with precomputed coefficients and rolled out operations to avoid nested looping.
This method provides a decent approximation of the error function without completely breaking floating point limits or sucking up runtime memory.
Note that there are plenty of more robust techniques to approximate this function, but their memory needs vary. I chose this method specifically because of runtime favorability.
To generate a pseudorandom approximately normally distributed variable, the uniformly distributed variable from the Wichmann-Hill algorithm is used as the input probability for the quantile estimator.
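A hedged Python sketch of that uniform-to-normal step: a truncated Maclaurin series for erf^-1 (four terms shown here; the script's coefficient count is not stated) fed into the quantile formula μ + σ√2·erf^-1(2p − 1).

```python
import math

def erfinv_approx(x):
    """Truncated Maclaurin series of the inverse error function.
    Reasonable for moderate |x|; accuracy degrades toward +/-1."""
    coeffs = [1.0,
              math.pi / 12,
              7 * math.pi ** 2 / 480,
              127 * math.pi ** 3 / 40320]
    return (math.sqrt(math.pi) / 2) * sum(c * x ** (2 * k + 1) for k, c in enumerate(coeffs))

def normal_quantile(p, mu=0.0, sigma=1.0):
    """Map a uniform probability p in (0, 1) to an approximately normal value."""
    return mu + sigma * math.sqrt(2.0) * erfinv_approx(2.0 * p - 1.0)

z = normal_quantile(0.8413)   # roughly +1 sigma (slightly low due to truncation)
```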
Now from here, we get a pretty decent output that could be used itself in the simulation process. Many Monte Carlo simulations and random price generators utilize a normal variable.
However, if you compare the outputs of this normal variable with the actual returns of the real time series, you'll find that the variability in shocks (random changes) doesn't quite behave like it does in real data.
This is because most real financial time series data is more complex. Its distribution may be approximately normal at times, but the variability of its distribution changes over time due to various underlying factors.
In light of this, I believe that returns behave more like a convoluted product distribution rather than just a raw normal.
So the next step to get our procedurally generated returns to more closely emulate the behavior of real returns is to introduce more complexity into our model.
Through experimentation, I've found that a return series more closely emulating real returns can be generated in a three step process:
-> First, generate multiple independent, normally distributed variables simultaneously.
-> Next, apply pseudorandom weighting to each variable ranging from -1 to 1, or some limits within those bounds. This modulates each series to provide more variability in the shocks by producing product distributions.
-> Lastly, add the results together to generate the final pseudorandom output with a convoluted distribution. This adds variable amounts of constructive and destructive interference to produce a more "natural" looking output.
In this script, I use three independent normally distributed variables multiplied by uniform product distributed variables.
The first variable is generated by multiplying a normal variable by one uniformly distributed variable. This produces a bit more tailedness (kurtosis) than a normal distribution, but nothing too extreme.
The second variable is generated by multiplying a normal variable by two uniformly distributed variables. This produces moderately greater tails in the distribution.
The third variable is generated by multiplying a normal variable by three uniformly distributed variables. This produces a distribution with heavier tails.
For additional control of the output distributions, the uniform product distributions are given optional limits.
These limits control the boundaries for the absolute value of the uniform product variables, which affects the tails. In other words, they limit the weighting applied to the normally distributed variables in this transformation.
All three sets are then multiplied by user defined amplitude factors to adjust presence, then added together to produce our final pseudorandom return series with a convoluted product distribution.
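The three-step recipe above can be sketched in a few lines of Python: draw three independent normals, weight each by one, two, or three uniform factors (optionally limited), scale by the amplitude coefficients, and sum. The parameter values are illustrative, not the script's defaults.

```python
import random

def convoluted_return(rng, amplitudes=(1.0, 1.0, 1.0), limit=1.0):
    """Sum of three normal variables weighted by 1, 2 and 3 uniform factors
    drawn from [-limit, limit]; more factors -> heavier tails."""
    total = 0.0
    for n_factors, amp in enumerate(amplitudes, start=1):
        weight = 1.0
        for _ in range(n_factors):
            weight *= rng.uniform(-limit, limit)
        total += amp * rng.gauss(0.0, 1.0) * weight
    return total

rng = random.Random(42)
returns = [convoluted_return(rng) for _ in range(10_000)]   # heavier-tailed than a plain normal
```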
Once we have the final, more "natural" looking pseudorandom series, the values are recursively summed over the forecast period to generate a simulated result.
This process of generation, weighting, addition, and summation is repeated over the user defined number of simulations with different seeds generated from the parent to produce our array of initial simulated outcomes.
After the initial simulation array is generated, the max, min, mean and standard deviation of this array are calculated, and the values are stored in holding arrays on each iteration to be called upon later.
Reference difference series and price values are also stored in holding arrays to be used in our comparison plots.
In this script, I use a linear model with simple returns rather than compounding log returns to generate the output.
The reason for this is that in generating outputs this way, we're able to run our simulations recursively from the beginning of the chart, then apply scaling and anchoring post-process.
This allows a greater conservation of runtime memory than the alternative, making it more suitable for doing longer forecasts with heavier amounts of simulations in TV's runtime environment.
From our starting time, the previous bar's price, volatility, and optional drift (expected return) are factored into our holding arrays to generate the final forecast parameters.
After these parameters are computed, the range forecast is produced.
The basis value for the ranges is the mean outcome of the simulations that were run.
Then, quarter standard deviations of the simulated outcomes are added to and subtracted from the basis up to 3σ to generate the forecast ranges.
All of these values are plotted and colorized based on their theoretical probability density. The most likely areas are the warmest colors, and least likely areas are the coolest colors.
An information panel is also displayed at the starting time which shows the starting time and price, forecast type, parent seed value, simulations run, forecast bars, total drift, mean, standard deviation, max outcome, min outcome, and bars remaining.
The interesting thing about simulated outcomes is that although the probability distribution of each simulation is not normal, the distribution of different outcomes converges to a normal one with enough steps.
In light of this, the probability density of outcomes is highest near the initial value + total drift, and decreases the further away from this point you go.
This makes logical sense since the central path is the easiest one to travel.
Given the ever changing state of markets, I find this tool to be best suited for shorter term forecasts.
However, if the movements of price are expected to remain relatively stable, longer term forecasts may be equally as valid.
There are many possible ways for users to apply this tool to their analysis setups. For example, the forecast ranges may be used as a guide to help users set risk targets.
Or, the generated levels could be used in conjunction with other indicators for meaningful confluence signals.
More advanced users could even extrapolate the functions used within this script for various purposes, such as generating pseudorandom data to test systems on, perform integration and approximations, etc.
These are just a few examples of potential uses of this script. How you choose to use it to benefit your trading, analysis, and coding is entirely up to you.
If nothing else, I think this is a pretty neat script simply for the novelty of it.
----------
How To Use:
When you first add the script to your chart, you will be prompted to confirm the starting date and time, number of bars to forecast, number of simulations to run, and whether to include drift assumption.
You will also be prompted to confirm the forecast type. There are two types to choose from:
-> End Result - This uses the values from the end of the simulation throughout the forecast interval.
-> Developing - This uses the values that develop from bar to bar, providing a real-time outlook.
You can always update these settings after confirmation as well.
Once these inputs are confirmed, the script will boot up and automatically generate the forecast in a separate pane.
Note that if there is no bar of data at the time you wish to start the forecast, the script will automatically detect and use the next available bar after the specified start time.
From here, you can now control the rest of the settings.
The "Seeding Settings" section controls the initial seed value used to generate the children that produce the simulations.
In this section, you can control whether the seed is a fixed value, or a dynamic one.
Since selecting the dynamic parent option will change the seed value every time you change the settings or refresh your chart, there is a "Regenerate" input built into the script.
This input is a dummy input that isn't connected to any of the calculations. The purpose of this input is to force an update of the dynamic parent without affecting the generator or forecast settings.
Note that because we're running a limited number of simulations, different parent seeds will typically yield slightly different forecast ranges.
When using a small number of simulations, you will likely see a higher amount of variance between differently seeded results because smaller numbers of sampled simulations yield a heavier bias.
The more simulations you run, the smaller this variance will become since the outcomes become more convergent toward the same distribution, so the differences between differently seeded forecasts will become more marginal.
When using a dynamic parent, pay attention to the dispersion of ranges.
When you find a set of ranges that is dispersed how you like with your configuration, set your fixed parent value to the parent seed that shows in the info panel.
This will allow you to replicate that dispersion behavior again in the future.
An important thing to note when setting alerts on the plotted levels, or using them as components for signals in other scripts, is to decide on a fixed value for your parent seed to avoid minor repainting due to seed changes.
When the parent seed is fixed, no repainting occurs.
The "Amplitude Settings" section controls the amplitude coefficients for the three differently tailed generators.
These amplitude factors will change the difference series output for each simulation by controlling how aggressively each series moves.
When "Adjust Amplitude Coefficients" is disabled, all three coefficients are set to 1.
Note that if you expect volatility to significantly diverge from its historical values over the forecast interval, try experimenting with these factors to match your anticipation.
The "Weighting Settings" section controls the weighting boundaries for the three generators.
These weighting limits affect how tailed the distributions in each generator are, which in turn affects the final series outputs.
The maximum absolute value range for the weights is -1 to 1. When "Limit Generator Weights" is disabled, this is the range that is automatically used.
The last set of inputs is the "Display Settings", where you can control the visual outputs.
From here, you can select to display either "Forecast" or "Difference Comparison" via the "Output Display Type" dropdown tab.
"Forecast" is the type displayed by default. This plots the end result or developing forecast ranges.
There is an option with this display type to show the developing extremes of the simulations. This option is enabled by default.
There's also an option with this display type to show one of the simulated price series from the set alongside actual prices.
This allows you to visually compare simulated prices alongside the real prices.
"Difference Comparison" allows you to visually compare a synthetic difference series from the set alongside the actual difference series.
This display method is primarily useful for visually tuning the amplitude and weighting settings of the generators.
There are also info panel settings on the bottom, which allow you to control size, colors, and date format for the panel.
It's all pretty simple to use once you get the hang of it. So play around with the settings and see what kinds of forecasts you can generate!
----------
ADDITIONAL NOTES & DISCLAIMERS
Although I've done a number of things within this script to keep runtime demands as low as possible, the fact remains that this script is fairly computationally heavy.
Because of this, you may get random timeouts when using this script.
This could be due to either random drops in available runtime on the server, using too many simulations, or running the simulations over too many bars.
If it's just a random drop in runtime on the server, hide and unhide the script, re-add it to the chart, or simply refresh the page.
If the timeout persists after trying this, then you'll need to adjust your settings to a less demanding configuration.
Please note that no specific claims are being made in regards to this script's predictive accuracy.
It must be understood that this model is based on randomized price generation with assumed constant drift and dispersion from historical data before the starting point.
Models like these do not consider the real world factors that may influence price movement (economic changes, seasonality, macro-trends, instrument hype, etc.), nor the changes in sample distribution that may occur.
In light of this, it's perfectly possible for price data to exceed even the most extreme simulated outcomes.
The future is uncertain, and becomes increasingly uncertain with each passing point in time.
Predictive models of any type can vary significantly in performance at any point in time, and nobody can guarantee any specific type of future performance.
When using forecasts in making decisions, DO NOT treat them as any form of guarantee that values will fall within the predicted range.
When basing your trading decisions on any trading methodology or utility, predictive or not, you do so at your own risk.
No guarantee is being issued regarding the accuracy of this forecast model.
Forecasting is very far from an exact science, and the results from any forecast are designed to be interpreted as potential outcomes rather than anything concrete.
With that being said, when applied prudently and treated as "general case scenarios", forecast models like these may very well be potentially beneficial tools to have in the arsenal.
Particle Physics Moving Average
This indicator simulates the physics of a particle attracted by a distance-dependent force towards the evolving value of the series it's applied to.
Its parameters include:
The mass of the particle
The exponent of the force function f=d^x
A "medium damping factor" (viscosity of the universe)
Compression/extension damping factors (for simulating spring-damping functions)
This implementation also adds a second set of all of these parameters, and tracks 16 particles evenly interpolated between the two sets.
It's a kind of Swiss Army Knife of Moving Average-type functions; for instance, because the position and velocity of the particle include a "historical knowledge" of the series, it turns out that the Exponential Moving Average function simply "falls out" of the algorithm in certain configurations. Instead of being configured by defining a period of samples over which to calculate an Exponential Moving Average, in this derivation it is tuned by changing the mass and/or medium damping parameters.
But the algorithm can do much more than simply replicate an EMA... A particle acted on by a force that is a linear function of distance (force exponent=1) simulates the physics of a sprung-mass system, with a mass-dependent resonant frequency. By altering the particle mass and damping parameters, you can simulate something like an automobile suspension, letting your particle track a stock's price like a Cadillac or a Corvette (or both, including intermediates) depending on your setup. Particles have a natural resonance with a frequency that depends on their mass... A higher mass particle (i.e. higher inertia) will resonate at a lower frequency than one with a lower mass (and of course, in this indicator, you can display particles that interpolate through a range of masses.)
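To make the mechanics concrete, here is a small Python sketch of one possible per-bar particle update under these ideas. The exact force and damping formulation in the indicator is not published in this description, so the way medium damping and compression/extension damping are combined below is an assumption for illustration only.

```python
def particle_step(position, velocity, target, mass, exponent,
                  medium_damping, compress_damping, extend_damping, dt=1.0):
    """One bar: the particle is pulled toward the series value by a force
    proportional to distance**exponent, opposed by damping terms."""
    d = target - position
    force = (abs(d) ** exponent) * (1.0 if d >= 0 else -1.0)
    # assumed convention: one damping factor while the gap is closing, another while it opens
    spring_damping = compress_damping if d * velocity > 0 else extend_damping
    accel = force / mass - (medium_damping + spring_damping) * velocity
    velocity += accel * dt
    position += velocity * dt
    return position, velocity

# Track a toy price series with one particle
pos, vel = 100.0, 0.0
for price in [100, 101, 103, 102, 104, 107, 106]:
    pos, vel = particle_step(pos, vel, price, mass=4.0, exponent=1.0,
                             medium_damping=0.3, compress_damping=0.1, extend_damping=0.1)
```

Raising the mass in this sketch slows the response (lower resonant frequency), while raising the damping factors suppresses overshoot, which mirrors the Cadillac-versus-Corvette analogy above.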
The real beauty of this general-purpose algorithm is that the force function can be extended with other components, affecting the trajectory of the particle; for instance, "volume" could be factored into the current distance-based force function, strengthening or weakening the impulse accordingly. (I'll probably provide updates to the script that incorporate different ideas I come up with.)
As currently pictured above, the indicator is interpolating between a medium-damped EMA-like configuration (red) and a more extension-damped suspension-like configuration (blue).
This indicator is merely a tool that provides a space to explore such a simulation, to let you see how tweaking parameters affects the simulations. It doesn't provide buy or sell signals, although you might find that it could be adapted into an MACD-like signal generator... But you're on your own for that.