chrono_utils
Library "chrono_utils"
📝 Description
Collection of objects and common functions related to datetime windows, session days, and time ranges. The main purpose of this library is to handle time-related functionality and make it easy to reason about a future bar, checking whether it will be part of a predefined session and/or inside a datetime window. All existing session functionality I found in the documentation, e.g. "not na(time(timeframe, session, timezone))", is not suitable for strategy scripts, since the execution of the orders is delayed by one bar because the script executes at the bar close. Moreover, a history operator with a negative value that looks forward is not allowed in any Pine Script expression. So, a prediction for the next bar using the bars_back argument of "time()" and "time_close()" was necessary. Thus, I created this library to overcome this small but very important limitation. In the meantime, I added useful functionality to handle session-based behavior. An interesting utility that emerged from this development is data anomaly detection, where the prediction is compared with the actual value. If those two values differ, a data inconsistency exists between the prediction bar and the actual bar (probably due to a holiday, a half session day, a timezone change, etc.)
🤔 How to Guide
To use the functionality this library provides in your script you have to import it first!
Copy the import statement of the latest release by pressing the copy button below and then paste it into your script. Give a short name to this library so you can refer to it later on. The import statement should look like this:
import jason5480/chrono_utils/2 as chr
To check if a future bar will be inside a window first of all you have to initialize a DateTimeWindow object.
A code example is the following:
var dateTimeWindow = chr.DateTimeWindow.new().init(fromDateTime = timestamp('01 Jan 2023 00:00'), toDateTime = timestamp('01 Jan 2024 00:00'))
Then you have to "ask" the dateTimeWindow whether the future bar, defined by an offset (the default of 1 corresponds to the next bar), will be inside that window:
// Filter bars outside of the datetime window
bool dateFilterApproval = dateTimeWindow.is_bar_included()
You can visualize the result by drawing the background of the bars that are outside the given window:
bgcolor(color = dateFilterApproval ? na : color.new(color.fuchsia, 90), offset = 1, title = 'Datetime Window Filter')
In the same way, you can "ask" the Session whether the future bar defined by an offset will be inside that session.
First of all, you should initialize a Session object.
A code example is the following:
var sess = chr.Session.new().from_sess_string(sess = '0800-1700:23456', refTimezone = 'UTC')
Then check whether the given bar, defined by the offset (the default of 1 corresponds to the next bar), will be inside the session like that:
// Filter bars outside the sessions
bool sessionFilterApproval = sess.is_bar_included()
You can visualize the result by drawing the background of the bars that are outside the given session:
bgcolor(color = sessionFilterApproval ? na : color.new(color.red, 90), offset = 1, title = 'Session Filter')
In case you want to visualize multiple session ranges you can create a SessionView object like that:
var view = chr.SessionView.new().init(chr.SessionDays.new().from_sess_string('2345'), array.from(chr.SessionTimeRange.new().from_sess_string('0800-1600'), chr.SessionTimeRange.new().from_sess_string('1300-2200')), array.from('London', 'New York'), array.from(color.blue, color.orange))
and then call the draw method of the SessionView object like that:
view.draw()
🏋️♂️ Please refer to the "EXAMPLE DATETIME WINDOW FILTER" and "EXAMPLE SESSION FILTER" regions of the script for more advanced code examples of how to utilize the full potential of this library, including user input settings and advanced visualization!
⚠️ Caveats
As I mentioned in the description there are some cases that the prediction of the next bar is not accurate. A wrong prediction will affect the outcome of the filtering. The main reasons this could happen are the following:
Public holidays when the market is closed
Half trading days usually before public holidays
Change in the daylight saving time (DST)
A data anomaly of the chart, where there are missing and/or inconsistent data.
A bug in this library (Please report by PM sending the symbol, timeframe, and settings)
Special thanks to @robbatt and @skinra for the constructive feedback 🏆. Without them, the exposed API of this library would be very lengthy and complicated to use. Thanks to them, now the user of this library will be able to get the most, with only a few lines of code!
STD/C-Filtered, N-Order Power-of-Cosine FIR Filter [Loxx]
STD/C-Filtered, N-Order Power-of-Cosine FIR Filter is a Discrete-Time, FIR Digital Filter that uses the Power-of-Cosine family of FIR filters. This is an N-order algorithm that turns the following indicator from a static maximum of 16 orders into N orders, limited to 50 in code. You can change the top-end value if you wish to use orders higher than 50, but the signal is likely too noisy at that level. This indicator also includes a clutter and standard deviation filter.
See the static order version of this indicator here:
STD/C-Filtered, Power-of-Cosine FIR Filter
Amplitudes for STD/C-Filtered, N-Order Power-of-Cosine FIR Filter:
What are FIR Filters?
In discrete-time signal processing, windowing is a preliminary signal shaping technique, usually applied to improve the appearance and usefulness of a subsequent Discrete Fourier Transform. Several window functions can be defined, based on a constant (rectangular window), B-splines, other polynomials, sinusoids, cosine-sums, adjustable, hybrid, and other types. The windowing operation consists of multiplying the given sampled signal by the window function. For trading purposes, these FIR filters act as advanced weighted moving averages.
What is Power-of-Sine Digital FIR Filter?
Also called Cos^alpha Window Family. In this family of windows, changing the value of the parameter alpha generates different windows.
f(n) = math.pow(math.cos(math.pi * n / N), alpha), 0 ≤ |n| ≤ N/2
where alpha takes on integer values and N is an even number
General expanded form:
alpha0 - alpha1 * math.cos(2 * math.pi * n / N)
+ alpha2 * math.cos(4 * math.pi * n / N)
- alpha3 * math.cos(6 * math.pi * n / N)
+ alpha4 * math.cos(8 * math.pi * n / N)
- ...
Special Cases for alpha:
alpha = 0: Rectangular window, this is also just the SMA (not included here)
alpha = 1: MLT sine window (not included here)
alpha = 2: Hann window (raised cosine = cos^2)
alpha = 4: Alternative Blackman (maximized roll-off rate)
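To make this concrete, here is a minimal Pine v5 sketch that applies the cosine-power kernel as a normalized weighted moving average (illustrative only, not the published code; alpha = 2 reproduces the Hann case above):
//@version=5
indicator("Power-of-Cosine FIR sketch", overlay = true)
N     = input.int(20, "Window length (even)", minval = 2)
alpha = input.float(2.0, "Alpha (cosine power)", minval = 0.0)
// w(n) = cos^alpha(pi * n / N) for n = -N/2 .. N/2, applied as a normalized weighted average
float num = 0.0
float den = 0.0
for i = 0 to N
    n = i - N / 2.0                    // center the window so that |n| <= N/2
    w = math.pow(math.cos(math.pi * n / N), alpha)
    num += close[i] * w
    den += w
plot(den != 0 ? num / den : na, "Power-of-Cosine FIR", color.teal, 2)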
This indicator contains a binomial expansion algorithm to handle N orders of a cosine power series. You can read about how this is done here: The Binomial Theorem
What is Pascal's Triangle and how was it used here?
In mathematics, Pascal's triangle is a triangular array of the binomial coefficients that arises in probability theory, combinatorics, and algebra. In much of the Western world, it is named after the French mathematician Blaise Pascal, although other mathematicians studied it centuries before him in India, Persia, China, Germany, and Italy.
The rows of Pascal's triangle are conventionally enumerated starting with row n = 0 at the top (the 0th row). The entries in each row are numbered from the left beginning with k=0 and are usually staggered relative to the numbers in the adjacent rows. The triangle may be constructed in the following manner: In row 0 (the topmost row), there is a unique nonzero entry 1. Each entry of each subsequent row is constructed by adding the number above and to the left with the number above and to the right, treating blank entries as 0. For example, the initial number in the first (or any other) row is 1 (the sum of 0 and 1), whereas the numbers 1 and 3 in the third row are added to produce the number 4 in the fourth row.
Rows of Pascal's Triangle
0 Order: 1
1 Order: 1 1
2 Order: 1 2 1
3 Order: 1 3 3 1
4 Order: 1 4 6 4 1
5 Order: 1 5 10 10 5 1
6 Order: 1 6 15 20 15 6 1
7 Order: 1 7 21 35 35 21 7 1
8 Order: 1 8 28 56 70 56 28 8 1
9 Order: 1 9 36 84 126 126 84 36 9 1
10 Order: 1 10 45 120 210 252 210 120 45 10 1
11 Order: 1 11 55 165 330 462 462 330 165 55 11 1
12 Order: 1 12 66 220 495 792 924 792 495 220 66 12 1
13 Order: 1 13 78 286 715 1287 1716 1716 1287 715 286 78 13 1
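As a small illustration of the construction rule described earlier, a hypothetical Pine v5 helper (not taken from the indicator) that builds any row by summing adjacent entries of the previous row:
// Returns row n of Pascal's triangle (row 0 = [1]) as a float array.
pascal_row(int n) =>
    row = array.from(1.0)
    if n > 0
        for r = 1 to n
            newRow = array.new_float(r + 1, 1.0)      // both ends of every row are 1
            if r > 1
                for k = 1 to r - 1
                    // each inner entry is the sum of the two entries above it
                    array.set(newRow, k, array.get(row, k - 1) + array.get(row, k))
            row := newRow
    row
// Example: pascal_row(4) returns [1, 4, 6, 4, 1]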
For a 12th order Power-of-Cosine FIR Filter
1. We take the coefficients from the Left side of the 12th row
1 13 78 286 715 1287 1716 1716 1287 715 286 78 13 1
2. We slice those in half to
1 13 78 286 715 1287 1716
3. We reverse the array
1716 1287 715 286 78 13 1
This is our array of alphas: alpha1, alpha2, ... alphaN
4. We then pull alpha0 from the previous order, order 11, the middle value
11 Order: 1 11 55 165 330 462 462 330 165 55 11 1
The middle value is 462, this value becomes our alpha0 in the calculation
5. We apply these alphas to the cosine calculations
example: + alpha4 * math.cos(8 * math.pi * n / N)
6. We then divide by the sum of the alphas to derive our final coefficient weighting kernel
This is only useful for orders that are EVEN. If you use odd ordering, the resulting coefficients aren't useful, since they cancel each other out and produce a value of zero. See below for an odd-numbered order and compare it with the even-order amplitude graphic posted above:
What is a Standard Deviation Filter?
If price or output or both don't move more than the (standard deviation) * multiplier then the trend stays the previous bar trend. This will appear on the chart as "stepping" of the moving average line. This works similar to Super Trend or Parabolic SAR but is a more naive technique of filtering.
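A minimal Pine v5 sketch of that stepping behavior, assuming an EMA as the filtered output (names and defaults are illustrative):
//@version=5
indicator("StdDev step filter sketch", overlay = true)
len  = input.int(50, "StdDev length")
mult = input.float(1.0, "StdDev multiplier")
ma   = ta.ema(close, 20)                          // the output being filtered (EMA assumed for the demo)
threshold = ta.stdev(ma, len) * mult
var float filt = na
// hold the previous value unless the move exceeds stdev * multiplier, producing visible "stepping"
filt := na(filt) or math.abs(ma - filt) > threshold ? ma : filt
plot(filt, "Filtered MA", color.orange, 2)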
What is a Clutter Filter?
For our purposes here, this is a filter that compares the slope of the trading filter output to a threshold to determine whether to shift trends. If the slope, whether up or down, doesn't exceed the threshold, the color is gray, indicating a chop zone. If either the up or down slope exceeds the threshold, the trend turns green for up and red for down. For demonstration purposes, an EMA is used as the moving average. This acts to reduce the noise in the signal.
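And a similar sketch of the clutter-filter coloring; the ATR scaling of the slope threshold is an assumption made for this example:
//@version=5
indicator("Clutter filter sketch", overlay = true)
ma    = ta.ema(close, 20)                         // EMA used for demonstration, as in the description
slope = ma - ma[2]                                // slope of the filter output
thr   = input.float(0.25, "Slope threshold (ATR units)") * ta.atr(14)
col   = slope > thr ? color.green : slope < -thr ? color.red : color.gray
plot(ma, "Clutter-filtered EMA", col, 2)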
Included
Bar coloring
Loxx's Expanded Source Types
Signals
Alerts
STD/C-Filtered, Power-of-Cosine FIR Filter [Loxx]
STD/C-Filtered, Power-of-Cosine FIR Filter is a Discrete-Time, FIR Digital Filter that uses the Power-of-Cosine family of FIR filters. This indicator also includes a clutter and standard deviation filter.
Amplitudes
What are FIR Filters?
In discrete-time signal processing, windowing is a preliminary signal shaping technique, usually applied to improve the appearance and usefulness of a subsequent Discrete Fourier Transform. Several window functions can be defined, based on a constant (rectangular window), B-splines, other polynomials, sinusoids, cosine-sums, adjustable, hybrid, and other types. The windowing operation consists of multiplying the given sampled signal by the window function. For trading purposes, these FIR filters act as advanced weighted moving averages.
What is Power-of-Sine Digital FIR Filter?
Also called Cos^alpha Window Family. In this family of windows, changing the value of the parameter alpha generates different windows.
f(n) = math.pow(math.cos(math.pi * n / N), alpha), 0 ≤ |n| ≤ N/2
where alpha takes on integer values and N is an even number
General expanded form:
alpha0 - alpha1 * math.cos(2 * math.pi * n / N)
+ alpha2 * math.cos(4 * math.pi * n / N)
- alpha3 * math.cos(6 * math.pi * n / N)
+ alpha4 * math.cos(8 * math.pi * n / N)
- ...
Special Cases for alpha:
alpha = 0: Rectangular window, this is also just the SMA (not included here)
alpha = 1: MLT sine window (not included here)
alpha = 2: Hann window (raised cosine = cos^2)
alpha = 4: Alternative Blackman (maximized roll-off rate)
For this indicator, I've included alpha values from 2 to 16
What is a Standard Deviation Filter?
If price or output or both don't move more than the (standard deviation) * multiplier then the trend stays the previous bar trend. This will appear on the chart as "stepping" of the moving average line. This works similar to Super Trend or Parabolic SAR but is a more naive technique of filtering.
What is a Clutter Filter?
For our purposes here, this is a filter that compares the slope of the trading filter output to a threshold to determine whether to shift trends. If the slope, whether up or down, doesn't exceed the threshold, the color is gray, indicating a chop zone. If either the up or down slope exceeds the threshold, the trend turns green for up and red for down. For demonstration purposes, an EMA is used as the moving average. This acts to reduce the noise in the signal.
Included
Bar coloring
Loxx's Expanded Source Types
Signals
Alerts
HorizonSigma Pro [CHE]
HorizonSigma Pro
Disclaimer
Not every timeframe will yield good results. Very short charts are dominated by microstructure noise, spreads, and slippage; signals can flip and the tradable edge shrinks after costs. Very high timeframes adapt more slowly, provide fewer samples, and can lag regime shifts. When you change timeframe, you also change the ratios between horizon, lookbacks, and correlation windows—what works on M5 won’t automatically hold on H1 or D1. Liquidity, session effects (overnight gaps, news bursts), and volatility do not scale linearly with time. Always validate per symbol and timeframe, then retune horizon, z-length, correlation window, and either the neutral band or the z-threshold. On fast charts, “components” mode adapts quicker; on slower charts, “super” reduces noise. Keep prior-shift and calibration enabled, monitor Hit Rate with its confidence interval and the Brier score, and execute only on confirmed (closed-bar) values.
For example, what do “UP 61%” and “DOWN 21%” mean?
“UP 61%” is the model’s estimated probability that the close will be higher after your selected horizon—directional probability, not a price target or profit guarantee. “DOWN 21%” still reports the probability of up; here it’s 21%, which implies 79% for down (a short bias). The label switches to “DOWN” because the probability falls below your short threshold. With a neutral-band policy, for example ±7%, signals are: Long above 57%, Short below 43%, Neutral in between. In z-score mode, fixed z-cutoffs drive the call instead of percentages. The arrow length on the chart is an ATR-scaled projection to visualize reach; treat it as guidance, not a promise.
Part 1 — Scientific description
Objective.
The indicator estimates the probability that price will be higher after a user-defined horizon (a chosen number of bars) and emits long, short, or neutral decisions under explicit thresholds. It combines multi‑feature, z‑normalized inputs, adaptive correlation‑based weighting, a prior‑shifted sigmoid mapping, optional rolling probability calibration, and repaint‑safe confirmation. It also visualizes an ATR‑scaled forward projection and prints a compact statistics panel.
Data and labeling.
For each bar, the target label is whether price increased over the past chosen horizon. Learning is deliberately backward‑looking to avoid look‑ahead: features are associated with outcomes that are only known after that horizon has elapsed.
Feature engineering.
The feature set includes momentum, RSI, stochastic %K, MACD histogram slope, a normalized EMA(20/50) trend spread, ATR as a share of price, Bollinger Band width, and volume normalized by its moving average. All features are standardized over rolling windows. A compressed “super‑feature” is available that aggregates core trend and momentum components while penalizing excessive width (volatility). Users can switch between a “components” mode (weighted sum of individual features) and a “super” mode (single compressed driver).
Weighting and learning.
Weights are the rolling correlations between features (evaluated one horizon ago) and realized directional outcomes, smoothed by an EMA and optionally clamped to a bounded range to stabilize outliers. This produces an adaptive, regime‑aware weighting without explicit machine‑learning libraries.
Scoring and probability mapping.
The raw score is either the weighted component sum or the weighted super‑feature. The score is standardized again and passed through a sigmoid whose steepness is user‑controlled. A “prior shift” moves the sigmoid’s midpoint to the current base rate of up moves, estimated over the evaluation window, so that probabilities remain well‑calibrated when markets drift bullish or bearish. Probabilities and standardized scores are EMA‑smoothed for stability.
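A minimal sketch of that mapping in Pine, assuming the prior shift is applied as an offset so that a zero score maps to the recent base rate of up moves (names are illustrative, not the script's internals):
// z         : the standardized (z-scored) blended score
// steepness : user-controlled sigmoid slope
// baseRate  : share of up moves over the evaluation window, e.g. 0.55
prior_shifted_prob(float z, float steepness, float baseRate) =>
    // shift the midpoint so that a zero score maps to the base rate
    shift = -math.log(1.0 / baseRate - 1.0) / steepness
    1.0 / (1.0 + math.exp(-steepness * (z + shift)))
// e.g. probUp = ta.ema(prior_shifted_prob(z, 1.0, baseRate), 3)   // EMA-smoothed for stability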
Decision policy.
Two modes are supported:
- Neutral band: go long if the probability is above one half plus a user‑set band; go short if it is below one half minus that band; otherwise stay neutral.
- Z‑score thresholds: use symmetric positive/negative cutoffs on the standardized score to trigger long/short.
Repaint protection.
All values used for decisions can be locked to confirmed (closed) bars. Intrabar updates are available as a preview, but confirmed values drive evaluation and stats.
Calibration.
An optional rolling linear calibration maps past confirmed probabilities to realized outcomes over the evaluation window. The mapping is clipped to the unit interval and can be injected back into the decision logic if desired. This improves reliability (probabilities that “mean what they say”) without necessarily improving raw separability.
Evaluation metrics.
The table reports: hit rate on signaled bars; a Wilson confidence interval for that hit rate at a chosen confidence level; Brier score as a measure of probability accuracy; counts of long/short trades; average realized return by side; profit factor; net return; and exposure (signal density). All are computed on rolling windows consistent with the learning scheme.
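For reference, minimal Pine sketches of the two probability metrics named above; the exact windowing used by the script may differ:
// Wilson confidence interval for a hit rate p observed over n signals (z ≈ 1.96 for ~95%)
wilson_bounds(float p, float n, float z) =>
    denom  = 1.0 + z * z / n
    center = (p + z * z / (2.0 * n)) / denom
    radius = z * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n)) / denom
    [center - radius, center + radius]
// Brier score: the average of (stated probability - realized 0/1 outcome)^2 over the window;
// lower is better, and 0.25 is what a constant 50% forecast would score.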
Visualization.
On the chart, an arrowed projection shows the predicted direction from the current bar to the chosen horizon, with magnitude scaled by ATR (optionally scaled by the square‑root of the horizon). Labels display either the decision probability or the standardized score. Neutral states can display a configurable icon for immediate recognition.
Computational properties.
The design relies on rolling means, standard deviations, correlations, and EMAs. Per‑bar cost is constant with respect to history length, and memory is constant per tracked series. Graphical objects are updated in place to obey platform limits.
Assumptions and limitations.
The method is correlation‑based and will adapt after regime changes, not before them. Calibration improves probability reliability but not necessarily ranking power. Intrabar previews are non‑binding and should not be evaluated as historical performance.
Part 2 — Trader‑facing description
What it does.
This tool tells you how likely price is to be higher after your chosen number of bars and converts that into Long / Short / Neutral calls. It learns, in real time, which components—momentum, trend, volatility, breadth, and volume—matter now, adjusts their weights, and shows you a probability line plus a forward arrow scaled by volatility.
How to set it up.
1) Choose your horizon. Intraday scalps: 5–10 bars. Swings: 10–30 bars. The default of 14 bars is a balanced starting point.
2) Pick a feature mode.
- components: granular and fast to adapt when leadership rotates between signals.
- super: cleaner single driver; less noise, slightly slower to react.
3) Decide how signals are triggered.
- Neutral band (probability based): intuitive and easy to tune. Widen the band for fewer, higher‑quality trades; tighten to catch more moves.
- Z‑score thresholds: consistent numeric cutoffs that ignore base‑rate drift.
4) Keep reliability helpers on. Leave prior shift and calibration enabled to stabilize probabilities across bullish/bearish regimes.
5) Smoothing. A short EMA on the probability or score reduces whipsaws while preserving turns.
6) Overlay. The arrow shows the call and a volatility‑scaled reach for the next horizon. Treat it as guidance, not a promise.
Reading the stats table.
- Hit Rate with a confidence interval: your recent accuracy with an uncertainty range; trust the range, not only the point.
- Brier Score: lower is better; it checks whether a stated “70%” really behaves like 70% over time.
- Profit Factor, Net Return, Exposure: quick triage of tradability and signal density.
- Average Return by Side: sanity‑check that the long and short calls each pull their weight.
Typical adjustments.
- Too many trades? Increase the neutral band or raise the z‑threshold.
- Missing the move? Tighten the band, or switch to components mode to react faster.
- Choppy timeframe? Lengthen the z‑score and correlation windows; keep calibration on.
- Volatility regime change? Revisit the ATR multiplier and enable square‑root scaling of horizon.
Execution and risk.
- Size positions by volatility (ATR‑based sizing works well).
- Enter on confirmed values; use intrabar previews only as early signals.
- Combine with your market structure (levels, liquidity zones). This model is statistical, not clairvoyant.
What it is not.
Not a black‑box machine‑learning model. It is transparent, correlation‑weighted technical analysis with strong attention to probability reliability and repaint safety.
Suggested defaults (robust starting point).
- Horizon 14; components mode; weight EMA 10; correlation window 500; z‑length 200.
- Neutral band around seven percentage points, or z‑threshold around one‑third of a standard deviation.
- Prior shift ON, Calibration ON, Use calibrated for decisions OFF to start.
- ATR multiplier 1.0; square‑root horizon scaling ON; EMA smoothing 3.
- Confidence setting equivalent to about 95%.
Disclaimer
No indicator guarantees profits. HorizonSigma Pro is a decision aid; always combine with solid risk management and your own judgment. Backtest, forward test, and size responsibly.
The content provided, including all code and materials, is strictly for educational and informational purposes only. It is not intended as, and should not be interpreted as, financial advice, a recommendation to buy or sell any financial instrument, or an offer of any financial product or service. All strategies, tools, and examples discussed are provided for illustrative purposes to demonstrate coding techniques and the functionality of Pine Script within a trading context.
Any results from strategies or tools provided are hypothetical, and past performance is not indicative of future results. Trading and investing involve high risk, including the potential loss of principal, and may not be suitable for all individuals. Before making any trading decisions, please consult with a qualified financial professional to understand the risks involved.
By using this script, you acknowledge and agree that any trading decisions are made solely at your discretion and risk.
Enhance your trading precision and confidence 🚀
Best regards
Chervolino
Relational Quadratic Kernel Channel [Vin]
The Relational Quadratic Kernel Channel (RQK-Channel-V) is designed to provide more valuable potential price extremes or continuation points in the price trend.
Example:
Usage:
Lookback Window: Adjust the "Lookback Window" parameter to control the number of previous bars considered when calculating the Rational Quadratic Estimate. Longer windows capture longer-term trends, while shorter windows respond more quickly to price changes.
Relative Weight: The "Relative Weight" parameter allows you to control the importance of each data point in the calculation. Higher values emphasize recent data, while lower values give more weight to historical data.
Source: Choose the data source (e.g., close price) that you want to use for the kernel estimate.
ATR Length: Set the length of the Average True Range (ATR) used for channel width calculation. A longer ATR length results in wider channels, while a shorter length leads to narrower channels.
Channel Multipliers: Adjust the "Channel Multiplier" parameters to control the width of the channels. Higher multipliers result in wider channels, while lower multipliers produce narrower channels. The indicator provides three sets of channels, each with its own multiplier for flexibility.
Details:
Rational Quadratic Kernel Function:
The Rational Quadratic Kernel Function is a type of smoothing function used to estimate a continuous curve or line from discrete data points. It is often used in time series analysis to reduce noise and emphasize trends or patterns in the data.
The formula for the Rational Quadratic Kernel Function is generally defined as:
K(x) = (1 + (x^2) / (2 * α * β))^(-α)
Where:
x represents the distance or difference between data points.
α and β are parameters that control the shape of the kernel. These parameters can be adjusted to control the smoothness or flexibility of the kernel function.
In the context of this indicator, the Rational Quadratic Kernel Function is applied to a specified source (e.g., close prices) over a defined lookback window. It calculates a smoothed estimate of the source data, which is then used to determine the central value of the channels. The kernel function allows the indicator to adapt to different market conditions and reduce noise in the data.
The specific parameters (length and relativeWeight) in this indicator allow you to fine-tune how the Rational Quadratic Kernel Function is applied, providing flexibility in capturing both short-term and long-term trends in the data.
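A minimal Pine v5 sketch of how such a kernel estimate is commonly computed as a weighted average over the lookback window, assuming beta plays the role of the squared lookback (an illustration of the formula above, not necessarily this indicator's exact implementation):
//@version=5
indicator("Rational Quadratic Kernel sketch", overlay = true)
src            = input.source(close, "Source")
length         = input.int(25, "Lookback Window")
relativeWeight = input.float(1.0, "Relative Weight")     // plays the role of alpha
// K(x) = (1 + x^2 / (2 * alpha * beta))^(-alpha), with x = bar distance and beta = length^2 (assumed)
float num = 0.0
float den = 0.0
for i = 0 to length - 1
    w = math.pow(1.0 + (i * i) / (2.0 * relativeWeight * length * length), -relativeWeight)
    num += src[i] * w
    den += w
plot(num / den, "RQK Estimate", color.yellow, 2)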
To learn more about unsupervised ML implementations, I highly recommend following the users @jdehorty and @LuxAlgo
Optimizing the parameters:
Lookback Window (length): The lookback window determines how many previous bars are considered when calculating the kernel estimate.
For shorter-term trading strategies, you may want to use a shorter lookback window (e.g., 5-10).
For longer-term trading or investing, consider a longer lookback window (e.g., 20-50).
Relative Weight (relativeWeight): This parameter controls the importance of each data point in the calculation.
A higher relative weight (e.g., 2 or 3) emphasizes recent data, which can be suitable for trend-following strategies.
A lower relative weight (e.g., 1) gives more equal importance to historical and recent data, which may be useful for strategies that aim to capture both short-term and long-term trends.
ATR Length (atrLength): The length of the Average True Range (ATR) affects the width of the channels.
Longer ATR lengths result in wider channels, which may be suitable for capturing broader price movements.
Shorter ATR lengths result in narrower channels, which can be helpful for identifying smaller price swings.
Channel Multipliers (channelMultiplier1, channelMultiplier2, channelMultiplier3): These parameters determine the width of the channels relative to the ATR.
Adjust these multipliers based on your risk tolerance and desired channel width.
Higher multipliers result in wider channels, which may lead to fewer signals but potentially larger price movements.
Lower multipliers create narrower channels, which can result in more frequent signals but potentially smaller price movements.
Machine Learning BBPct [BackQuant]
Machine Learning BBPct
What this is (in one line)
A Bollinger Band %B oscillator enhanced with a simplified K-Nearest Neighbors (KNN) pattern matcher. The model compares today’s context (volatility, momentum, volume, and position inside the bands) to similar situations in recent history and blends that historical consensus back into the raw %B to reduce noise and improve context awareness. It is informational and diagnostic—designed to describe market state, not to sell a trading system.
Background: %B in plain terms
Bollinger %B measures where price sits inside its dynamic envelope: 0 at the lower band, 1 at the upper band, ~ 0.5 near the basis (the moving average). Readings toward 1 indicate pressure near the envelope’s upper edge (often strength or stretch), while readings toward 0 indicate pressure near the lower edge (often weakness or stretch). Because bands adapt to volatility, %B is naturally comparable across regimes.
Why add (simplified) KNN?
Classic %B is reactive and can be whippy in fast regimes. The simplified KNN layer builds a “nearest-neighbor memory” of recent market states and asks: “When the market looked like this before, where did %B tend to be next bar?” It then blends that estimate with the current %B. Key ideas:
• Feature vector. Each bar is summarized by up to five normalized features:
– %B itself (normalized)
– Band width (volatility proxy)
– Price momentum (ROC)
– Volume momentum (ROC of volume)
– Price position within the bands
• Distance metric. Euclidean distance ranks the most similar recent bars.
• Prediction. Average the neighbors’ prior %B (lagged to avoid lookahead), inverse-weighted by distance.
• Blend. Linearly combine raw %B and KNN-predicted %B with a configurable weight; optional filtering then adapts to confidence.
This remains “simplified” KNN: no training/validation split, no KD-trees, no scaling beyond windowed min-max, and no probabilistic calibration.
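To make the mechanics tangible, here is a deliberately simplified two-feature sketch in Pine, assuming the normalized feature histories and the lagged %B targets are already stored in parallel arrays (illustrative, not the published code):
// featA/featB : normalized feature histories; targetHist : lagged %B history (to avoid lookahead)
// curA/curB   : current feature values; k : number of neighbours
knn_estimate(float[] featA, float[] featB, float[] targetHist, float curA, float curB, int k) =>
    n     = array.size(targetHist)
    dists = array.new_float()
    for i = 0 to n - 1
        dA = curA - array.get(featA, i)
        dB = curB - array.get(featB, i)
        array.push(dists, math.sqrt(dA * dA + dB * dB))   // Euclidean distance
    float num = 0.0
    float den = 0.0
    // inverse-distance weighted average of the k closest neighbours' lagged %B
    for j = 0 to k - 1
        idx  = -1
        best = 1e10
        for i = 0 to n - 1
            d = array.get(dists, i)
            if d >= 0 and d < best
                best := d
                idx  := i
        if idx >= 0
            w = 1.0 / math.max(best, 0.0001)              // strong cap for near-zero distances
            num += array.get(targetHist, idx) * w
            den += w
            array.set(dists, idx, -1.0)                   // mark this neighbour as used
    den > 0 ? num / den : na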
How the script is organized (by input groups)
1) BBPct Settings
• Price Source – Which price to evaluate (%B is computed from this).
• Calculation Period – Lookback for SMA basis and standard deviation.
• Multiplier – Standard deviation width (e.g., 2.0).
• Apply Smoothing / Type / Length – Optional smoothing of the %B stream before ML (EMA, RMA, DEMA, TEMA, LINREG, HMA, etc.). Turning this off gives you the raw %B.
2) Thresholds
• Overbought/Oversold – Default 0.8 / 0.2 (inside the 0–1 range).
• Extreme OB/OS – Stricter zones (e.g., 0.95 / 0.05) to flag stretch conditions.
3) KNN Machine Learning
• Enable KNN – Switch between pure %B and hybrid.
• K (neighbors) – How many historical analogs to blend (default 8).
• Historical Period – Size of the search window for neighbors.
• ML Weight – Blend between raw %B and KNN estimate.
• Number of Features – Use 2–5 features; higher counts add context but raise the risk of overfitting in short windows.
4) Filtering
• Method – None, Adaptive, Kalman-style (first-order), or Hull smoothing.
• Strength – How aggressively to smooth. “Adaptive” uses model confidence to modulate its alpha: higher confidence → stronger reliance on the ML estimate.
5) Performance Tracking
• Win-rate Period – Simple running score of past signal outcomes based on target/stop/time-out logic (informational, not a robust backtest).
• Early Entry Lookback – Horizon for forecasting a potential threshold cross.
• Profit Target / Stop Loss – Used only by the internal win-rate heuristic.
6) Self-Optimization
• Enable Self-Optimization – Lightweight, rolling comparison of a few canned settings (K = 8/14/21 via simple rules on %B extremes).
• Optimization Window & Stability Threshold – Governs how quickly preferred K changes and how sensitive the overfitting alarm is.
• Adaptive Thresholds – Adjust the OB/OS lines with volatility regime (ATR ratio), widening in calm markets and tightening in turbulent ones (bounded 0.7–0.9 and 0.1–0.3).
7) UI Settings
• Show Table / Zones / ML Prediction / Early Signals – Toggle informational overlays.
• Signal Line Width, Candle Painting, Colors – Visual preferences.
Step-by-step logic
A) Compute %B
Basis = SMA(source, len); dev = stdev(source, len) × multiplier; Upper/Lower = Basis ± dev.
%B = (price − Lower) / (Upper − Lower). Optional smoothing yields standardBB .
B) Build the feature vector
All features are min-max normalized over the KNN window so distances are in comparable units. Features include normalized %B, normalized band width, normalized price ROC, normalized volume ROC, and normalized position within bands. You can limit to the first N features (2–5).
C) Find nearest neighbors
For each bar inside the lookback window, compute the Euclidean distance between current features and that bar’s features. Sort by distance, keep the top K .
D) Predict and blend
Use inverse-distance weights (with a strong cap for near-zero distances) to average neighbors’ prior %B (lagged by one bar). This becomes the KNN estimate. Blend it with raw %B via the ML weight. The variance of neighbor %B around the prediction becomes an uncertainty proxy; combined with a stability score (how long parameters remain unchanged), it forms mlConfidence ∈ [0, 1]. The Adaptive filter optionally transforms that confidence into a smoothing coefficient.
E) Adaptive thresholds
Volatility regime (ATR(14) divided by its 50-bar SMA) nudges OB/OS thresholds wider or narrower within fixed bounds. The aim: comparable extremeness across regimes.
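A small sketch of that regime nudge, clamped to the bounds quoted earlier (0.7–0.9 and 0.1–0.3); the linear 0.1 scaling is an assumption made for illustration:
// Volatility regime: ATR(14) relative to its own 50-bar average (>1 = turbulent, <1 = calm)
volRegime  = ta.atr(14) / ta.sma(ta.atr(14), 50)
// turbulence pulls the bands toward the midline, calm markets push them out
adaptiveOB = math.min(0.9, math.max(0.7, 0.8 - 0.1 * (volRegime - 1.0)))
adaptiveOS = math.min(0.3, math.max(0.1, 0.2 + 0.1 * (volRegime - 1.0)))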
F) Early entry heuristic
A tiny two-step slope/acceleration probe extrapolates finalBB forward a few bars. If it is on track to cross OB/OS soon (and slope/acceleration agree), it flags an EARLY_BUY/SELL candidate with an internal confidence score. This is explicitly a heuristic—use as an attention cue, not a signal by itself.
G) Informational win-rate
The script keeps a rolling array of trade outcomes derived from signal transitions + rudimentary exits (target/stop/time). The percentage shown is a rough diagnostic , not a validated backtest.
Outputs and visual language
• ML Bollinger %B (finalBB) – The main line after KNN blending and optional filtering.
• Gradient fill – Greenish tones above 0.5, reddish below, with intensity following distance from the midline.
• Adaptive zones – Overbought/oversold and extreme bands; shaded backgrounds appear at extremes.
• ML Prediction (dots) – The KNN estimate plotted as faint circles; becomes bright white when confidence > 0.7.
• Early arrows – Optional small triangles for approaching OB/OS.
• Candle painting – Light green above the midline, light red below (optional).
• Info panel – Current value, signal classification, ML confidence, optimized K, stability, volatility regime, adaptive thresholds, overfitting flag, early-entry status, and total signals processed.
Signal classification (informational)
The indicator does not fire trade commands; it labels state:
• STRONG_BUY / STRONG_SELL – finalBB beyond extreme OS/OB thresholds.
• BUY / SELL – finalBB beyond adaptive OS/OB.
• EARLY_BUY / EARLY_SELL – forecast suggests a near-term cross with decent internal confidence.
• NEUTRAL – between adaptive bands.
Alerts (what you can automate)
• Entering adaptive OB/OS and extreme OB/OS.
• Midline cross (0.5).
• Overfitting detected (frequent parameter flipping).
• Early signals when early confidence > 0.7.
These are purely descriptive triggers around the indicator’s state.
Practical interpretation
• Mean-reversion context – In range markets, adaptive OS/OB with ML smoothing can reduce whipsaws relative to raw %B.
• Trend context – In persistent trends, the KNN blend can keep finalBB nearer the mid/upper region during healthy pullbacks if history supports similar contexts.
• Regime awareness – Watch the volatility regime and adaptive thresholds. If thresholds compress (high vol), “OB/OS” comes sooner; if thresholds widen (calm), it takes more stretch to flag.
• Confidence as a weight – High mlConfidence implies neighbors agree; you may rely more on the ML curve. Low confidence argues for de-emphasizing ML and leaning on raw %B or other tools.
• Stability score – Rising stability indicates consistent parameter selection and fewer flips; dropping stability hints at a shifting backdrop.
Methodological notes
• Normalization uses rolling min-max over the KNN window. This is simple and scale-agnostic but sensitive to outliers; the distance metric will reflect that.
• Distance is unweighted Euclidean. If you raise featureCount, you increase dimensionality; consider keeping K larger and lookback ample to avoid sparse-neighbor artifacts.
• Lag handling intentionally uses neighbors’ previous %B for prediction to avoid lookahead bias.
• Self-optimization is deliberately modest: it only compares a few canned K/threshold choices using simple “did an extreme anticipate movement?” scoring, then enforces a stability regime and an overfitting guard. It is not a grid search or GA.
• Kalman option is a first-order recursive filter (fixed gain), not a full state-space estimator.
• Hull option derives a dynamic length from 1/strength; it is a convenience smoothing alternative.
Limitations and cautions
• Non-stationarity – Nearest neighbors from the recent window may not represent the future under structural breaks (policy shifts, liquidity shocks).
• Curse of dimensionality – Adding features without sufficient lookback can make genuine neighbors rare.
• Overfitting risk – The script includes a crude overfitting detector (frequent parameter flips) and will fall back to defaults when triggered, but this is only a guardrail.
• Win-rate display – The internal score is illustrative; it does not constitute a tradable backtest.
• Latency vs. smoothness – Smoothing and ML blending reduce noise but add lag; tune to your timeframe and objectives.
Tuning guide
• Short-term scalping – Lower len (10–14), slightly lower multiplier (1.8–2.0), small K (5–8), featureCount 3–4, Adaptive filter ON, moderate strength.
• Swing trading – len (20–30), multiplier ~2.0, K (8–14), featureCount 4–5, Adaptive thresholds ON, filter modest.
• Strong trends – Consider higher adaptive_upper/lower bounds (or let volatility regime do it), keep ML weight moderate so raw %B still reflects surges.
• Chop – Higher ML weight and stronger Adaptive filtering; accept lag in exchange for fewer false extremes.
How to use it responsibly
Treat this as a state descriptor and context filter. Pair it with your execution signals (structure breaks, volume footprints, higher-timeframe bias) and risk management. If mlConfidence is low or stability is falling, lean less on the ML line and more on raw %B or external confirmation.
Summary
Machine Learning BBPct augments a familiar oscillator with a transparent, simplified KNN memory of recent conditions. By blending neighbors’ behavior into %B and adapting thresholds to volatility regime—while exposing confidence, stability, and a plain early-entry heuristic—it provides an informational, probability-minded view of stretch and reversion that you can interpret alongside your own process.
Volume Profile + VAH, VAL, and POC
What it is
A clean, on-chart volume profile that approximates your visible range using a configurable Bars Back window. It builds a horizontal histogram of volume by price, splits each price bin into Buy vs Sell volume, draws POC, and computes Value Area High/Low (VAH/VAL). A Stealth Mode toggle switches to a subtle grayscale palette for low-key charts.
Why this instead of the built-in VPVR?
Buy/Sell split per bin: See which prices were defended by buyers vs sellers, not just total volume.
Value Area from POC outward: Classic expansion method until the selected % of total volume (default 70%).
Sleek borders & Stealth Mode: Crisp bin outlines and a one-click professional colorway.
Deterministic & fast: No sessions or anchors needed—set your Bars Back and go.
How it works (under the hood)
Window selection – Pine can’t read your viewport, so we approximate it with Bars Back (user input).
Binning – The window’s price range is divided into N bins.
Volume allocation – For each bar in the window:
Distribute Across Hi–Lo (optional): Spread volume across all bins the bar overlaps, weighted by overlap; or
Single-price mode: Assign all volume to one bin using a representative price (hlc3).
Buy/Sell split (two methods):
Body Proportional (recommended): Split by relative up/down body size (|close−open|).
Up/Down Candle: 100% buy if close ≥ open, else 100% sell.
POC & VA: Point of Control is the bin with max total volume. VAH/VAL expands from POC toward the higher-volume neighbor until the selected % of total volume is included (see the sketch below).
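A minimal Pine sketch of that expansion step, assuming the per-bin volumes are already accumulated in a float array (names are illustrative):
// binVol : total volume per price bin; returns [pocIdx, vaHighIdx, vaLowIdx]
value_area(float[] binVol, float pct) =>
    n      = array.size(binVol)
    pocIdx = 0
    for i = 1 to n - 1
        if array.get(binVol, i) > array.get(binVol, pocIdx)
            pocIdx := i
    target  = array.sum(binVol) * pct
    covered = array.get(binVol, pocIdx)
    up      = pocIdx
    dn      = pocIdx
    // expand from the POC toward whichever neighbouring bin holds more volume
    while covered < target and (up < n - 1 or dn > 0)
        volUp = up < n - 1 ? array.get(binVol, up + 1) : -1.0
        volDn = dn > 0 ? array.get(binVol, dn - 1) : -1.0
        if volUp >= volDn
            up      += 1
            covered += volUp
        else
            dn      -= 1
            covered += volDn
    [pocIdx, up, dn]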
Reading the visuals
Horizontal bars (right side): Total volume per price bin.
Left sub-segment = Sell volume
Right sub-segment = Buy volume
POC line: Price level with peak total volume.
VAH / VAL (dashed): Upper and lower bounds of the selected Value Area.
Borders: Each bin has a clean outer outline so the profile looks tight and organized.
Stealth Mode: Grayscale palette that preserves contrast without loud colors.
Key inputs (organized for clarity)
Theme
Stealth Mode: Toggles the grayscale look.
Core
Price Bins: Vertical resolution of the profile.
Lookback (Bars): Approximates your visible range.
Style
Profile Width (bars): How far the histogram extends to the right.
Bin Border Width: Outline thickness.
Markers & Lines
Show POC, Show VAH/VAL, Value Area %, VA line width.
Advanced
Distribute Volume Across Hi–Lo: More accurate, heavier compute.
Buy/Sell Split Method: Body Proportional (realistic) or Up/Down (simple).
Tips & best practices
Start with Body Proportional + Distribute Across ON for intraday accuracy.
If the chart lags, reduce Price Bins or Bars Back, or switch off distribution.
For small windows, fewer bins often looks cleaner (e.g., 30–60).
Stealth Mode plays nicely with both dark and light chart themes.
Limitations & notes
Viewport: Pine can’t access the actual visible bars; Bars Back is a practical stand-in.
Buy/Sell split: This is an approximation from candle bodies, not true bid/ask delta.
Designed for overlay; profile renders to the right of the latest bar.
Meta-LR Forecast
This indicator builds a forward-looking projection from the current bar by combining twelve time-compressed “mini forecasts.” Each forecast is a linear-regression-based outlook whose contribution is adaptively scaled by trend strength (via ADX) and normalized to each timeframe’s own volatility (via that timeframe’s ATR). The result is a 12-segment polyline that starts at the current price and extends one bar at a time into the future (1× through 12× the chart’s timeframe). Alongside the plotted path, the script computes two summary measures:
* Per-TF Bias% — a directional efficiency × R² score for each micro-forecast, expressed as a percent.
* Meta Bias% — the same score, but applied to the final, accumulated 12-step path. It summarizes how coherent and directional the combined projection is.
This tool is an indicator, not a strategy. It does not place orders. Nothing here is trade advice; it is a visual, quantitative framework to help you assess directional bias and trend context across a ladder of timeframe multiples.
The core engine fits a simple least-squares line on a normalized price series for each small forecast horizon and extrapolates one bar forward. That “trend” forecast is paired with its mirror, an “anti-trend” forecast, constructed around the current normalized price. The model then blends between these two wings according to current trend strength as measured by ADX.
ADX is transformed into a weight (w) in [-1, +1] using an adaptive band centered on the rolling mean (μ) with width derived from the standard deviation (σ) of ADX over a configurable lookback. When ADX is deeply below the lower band, the weight approaches -1, favoring anti-trend behavior. Inside the flat band, the weight is near zero, producing neutral behavior. Clearly above the upper band, the weight approaches +1, favoring a trend-following stance. The transitions between these regions are linear so the regime shift is smooth rather than abrupt.
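A minimal sketch of that piecewise-linear mapping (the exponent shaping described next is omitted; names are illustrative):
// Maps ADX into a weight in [-1, +1] around an adaptive band centered on mu with width from sigma.
// flatW = flat half-width in sigmas, tailW = tail width beyond the flat band in sigmas.
adx_weight(float adx, float mu, float sigma, float flatW, float tailW) =>
    upperFlat = mu + flatW * sigma
    lowerFlat = mu - flatW * sigma
    upperTail = upperFlat + tailW * sigma
    lowerTail = lowerFlat - tailW * sigma
    float w = 0.0                                   // neutral inside the flat band
    if adx >= upperTail
        w := 1.0                                    // fully trend-following
    else if adx <= lowerTail
        w := -1.0                                   // fully anti-trend
    else if adx > upperFlat
        w := (adx - upperFlat) / (upperTail - upperFlat)
    else if adx < lowerFlat
        w := (adx - lowerFlat) / (lowerFlat - lowerTail)
    w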
You can shape how quickly the model commits to either wing using two exponents. One exponent controls how aggressively positive weights lean into the trend forecast; the other controls how aggressively negative weights lean into the anti-trend forecast. Raising these exponents makes the response more gradual; lowering them makes the shift more decisive. An optional switch can force full anti-trend behavior when ADX registers a deep-low condition far below the lower tail, if you prefer a categorical stance in very flat markets.
A key design choice is volatility normalization. Every micro-forecast is computed in ATR units of its own timeframe. The script fetches that timeframe’s ATR inside each security call and converts normalized outputs back to price with that exact ATR. This avoids scaling higher-timeframe effects by the chart ATR or by square-root time approximations. Using “ATR-true” for each timeframe keeps the cross-timeframe accumulation consistent and dimensionally correct.
Bias% is defined as directional efficiency multiplied by R², expressed as a percent. Directional efficiency captures how much net progress occurred relative to the total path length; R² captures how well the path aligns with a straight line. If price meanders without net progress, efficiency drops; if the variation is well-explained by a line, R² rises. Multiplying the two penalizes choppy, low-signal paths and rewards sustained, coherent motion.
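A minimal sketch of that score for a single path; the sign convention here is an assumption:
// Bias% = directional efficiency (net progress / total path length) * R² of a linear fit, as a percent.
bias_pct(float src, int len) =>
    netMove = src - src[len]
    float pathLen = 0.0
    for i = 0 to len - 1
        pathLen += math.abs(src[i] - src[i + 1])
    efficiency = pathLen > 0 ? math.abs(netMove) / pathLen : 0.0
    r = ta.correlation(src, bar_index, len)         // correlation with time; r * r is the linear-fit R²
    100.0 * efficiency * r * r * math.sign(netMove)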
The forward path is built by converting each per-timeframe Bias% into a small ATR-sized delta, then cumulatively adding those deltas to form a 12-step projection. This produces a polyline anchored at the current close and stepping forward one bar per timeframe multiple. Segment color flips by slope, allowing a quick read of the path’s direction and inflection.
Inputs you can tune include:
* Max Regression Length. Upper bound for each micro-forecast’s regression window. Larger values smooth the trend estimate at the cost of responsiveness; smaller values react faster but can add noise.
* Price Source. The price series analyzed (for example, close or typical price).
* ADX Length. Period used for the DMI/ADX calculation.
* ATR Length (normalization). Window used for ATR; this is applied per timeframe inside each security call.
* Band Lookback (for μ, σ). Lookback used to compute the adaptive ADX band statistics. Larger values stabilize the band; smaller values react more quickly.
* Flat half-width (σ). Width of the neutral band on both sides of μ. Wider flats spend more time neutral; narrower flats switch regimes more readily.
* Tail width beyond flat (σ). Distance from the flat band edge to the extreme trend/anti-trend zone. Larger tails create a longer ramp; smaller tails reach extremes sooner.
* Polyline Width. Visual thickness of the plotted segments.
* Negative Wing Aggression (anti-trend). Exponent shaping for negative weights; higher values soften the tilt into mean reversion.
* Positive Wing Aggression (trend). Exponent shaping for positive weights; lower values make trend commitment stronger and sooner.
* Force FULL Anti-Trend at Deep-Low ADX. Optional hard switch for extremely low ADX conditions.
On the chart you will see:
* A 12-segment forward polyline starting from the current close to bar_index + 1 … +12, with green segments for up-steps and red for down-steps.
* A small label at the latest bar showing Meta Bias% when available, or “n/a” when insufficient data exists.
Interpreting the readouts:
* Trend-following contexts are characterized by ADX above the adaptive upper band, pushing w toward +1. The blended forecast leans toward the regression extrapolation. A strongly positive Meta Bias% in this environment suggests directional alignment across the ladder of timeframes.
* Mean-reversion contexts occur when ADX is well below the lower tail, pushing w toward -1 (or forcing anti-trend if enabled). After a sharp advance, a negative Meta Bias% may indicate the model projects pullback tendencies.
* Neutral contexts occur when ADX sits inside the flat band; w is near zero, the blended forecast remains close to current price, and Meta Bias% tends to hover near zero.
These are analytical cues, not rules. Always corroborate with your broader process, including market structure, time-of-day behavior, liquidity conditions, and risk limits.
Practical usage patterns include:
* Momentum confirmation. Combine a rising Meta Bias% with higher-timeframe structure (such as higher highs and higher lows) to validate continuation setups. Treat the 12th step’s distance as a coarse sense of potential room rather than as a target.
* Fade filtering. If you prefer fading extremes, require ADX to be near or below the lower ramp before acting on counter-moves, and avoid fades when ADX is decisively above the upper band.
* Position planning. Because per-step deltas are ATR-scaled, the path’s vertical extent can be mentally mapped to typical noise for the instrument, informing stop distance choices. The script itself does not compute orders or size.
* Multi-timeframe alignment. Each step corresponds to a clean multiple of your chart timeframe, so the polyline visualizes how successively larger windows bias price, all referenced to the current bar.
House-rules and repainting disclosures:
* Indicator, not strategy. The script does not execute, manage, or suggest orders. It displays computed paths and bias scores for analysis only.
* No performance claims. Past behavior of any measure, including Meta Bias%, does not guarantee future results. There are no assurances of profitability.
* Higher-timeframe updates. Values obtained via security for higher-timeframe series can update intrabar until the higher-timeframe bar closes. The forward path and Meta Bias% may change during formation of a higher-timeframe candle. If you need confirmed higher-timeframe inputs, consider reading the prior higher-timeframe value or acting only after the higher-timeframe close.
* Data sufficiency. The model requires enough history to compute ATR, ADX statistics, and regression windows. On very young charts or illiquid symbols, parts of the readout can be unavailable until sufficient data accumulates.
* Volatility regimes. ATR normalization helps compare across timeframes, but unusual volatility regimes can make the path look deceptively flat or exaggerated. Judge the vertical scale relative to your instrument’s typical ATR.
Tuning tips:
* Stability versus responsiveness. Increase Max Regression Length to steady the micro-forecasts but accept slower response. If you lower it, consider slightly increasing Band Lookback so regime boundaries are not too jumpy.
* Regime bands. Widen the flat half-width to spend more time neutral, which can reduce over-trading tendencies in chop. Shrink the tail width if you want the model to commit to extremes sooner, at the cost of more false swings.
* Wing shaping. If anti-trend behavior feels too abrupt at low ADX, raise the negative wing exponent. If you want trend bias to kick in more decisively at high ADX, lower the positive wing exponent. Small changes have large effects.
* Forced anti-trend. Enable the deep-low option only if you explicitly want a categorical “markets are flat, fade moves” policy. Many users prefer leaving it off to keep regime decisions continuous.
Troubleshooting:
* Nothing plots or the label shows “n/a.” Ensure the chart has enough history for the ADX band statistics, ATR, and the regression windows. Exotic or illiquid symbols with missing data may starve the higher-timeframe computations. Try a more liquid market or a higher timeframe.
* Path flickers or shifts during the bar. This is expected when any higher-timeframe input is still forming. Wait for the higher-timeframe close for fully confirmed behavior, or modify the code to read prior values from the higher timeframe.
* Polyline looks too flat or too steep. Check the chart’s vertical scale and recent ATR regime. Adjust Max Regression Length, the wing exponents, or the band widths to suit the instrument.
Integration ideas for manual workflows:
* Confluence checklist. Use Meta Bias% as one of several independent checks, alongside structure, session context, and event risk. Act only when multiple cues align.
* Stop and target thinking. Because deltas are ATR-scaled at each timeframe, benchmark your proposed stops and targets against the forward steps’ magnitude. Stops that are much tighter than the prevailing ATR often sit inside normal noise.
* Session context. Consider session hours and microstructure. The same ADX value can imply different tradeability in different sessions, particularly in index futures and FX.
This indicator deliberately avoids:
* Fixed thresholds for buy or sell decisions. Markets vary and fixed numbers invite overfitting. Decide what constitutes “high enough” Meta Bias% for your market and timeframe.
* Automatic risk sizing. Proper sizing depends on account parameters, instrument specifications, and personal risk tolerance. Keep that decision in your risk plan, not in a visual bias tool.
* Claims of edge. These measures summarize path geometry and trend context; they do not ensure a tradable edge on their own.
Summary of how to think about the output:
* The script builds a 12-step forward path by stacking linear-regression micro-forecasts across increasing multiples of the chart timeframe.
* Each micro-forecast is blended between trend and anti-trend using an adaptive ADX band with separate aggression controls for positive and negative regimes.
* All computations are done in ATR-true units for each timeframe before reconversion to price, ensuring dimensional consistency when accumulating steps.
* Bias% (per-timeframe and Meta) condenses directional efficiency and trend fidelity into a compact score.
* The output is designed to serve as an analytical overlay that helps assess whether conditions look trend-friendly, fade-friendly, or neutral, while acknowledging higher-timeframe update behavior and avoiding prescriptive trade rules.
Use this tool as one component within a disciplined process that includes independent confirmation, event awareness, and robust risk management.
# ABS NR — Fail-Safe Confirm (v4.2.2)
## What it is (quick take)
**ABS NR FS** is a **non-repainting “arm → confirm” entry framework** for intraday and swing execution. It blends:
* **Regime** (EMA stack + 60-min slope),
* **Location** (Keltner basis/edges),
* **Stretch** (session-anchored **VWAP Z-score**),
* **Momentum gating** (TSI cross/slope),
* **Guards** (session window, minimum ATR%, gap filter, optional market alignment).
You’ll see a **small dot** when a setup is **armed** (candidate) and a **triangle** when that setup **confirms** within a user-defined number of bars. A **gray “X”** marks a timeout (candidate canceled).
> Tip: This entry tool works best when paired with a trend context filter and a dedicated exit tool.
---
## How to use it (operational workflow)
1. **Read the regime**
* **Bull trend**: fast > slow > long EMA **and** 60-min slope up.
* **Bear trend**: fast < slow < long EMA **and** 60-min slope down.
* **Range**: neither bull nor bear.
2. **Wait for a candidate (dot)**
Two families:
* **Reclaim (trend-following):** price crosses the **KC basis** with acceptable |Z| (not overstretched) and passes the TSI gate.
* **Fade (range-revert):** price **pokes a KC band**, prints a **reversal wick**, |Z| is stretched, and TSI gate agrees.
3. **Trade the confirmation (triangle)**
The confirm must occur **within N bars** and follow your chosen **Confirm mode** logic (see Inputs). If confirmation doesn’t arrive in time, an **X** cancels the candidate.
4. **Use guards to avoid junk**
Session windows (US focus), minimum ATR%, gap guard, and optional **market alignment** (e.g., SPY above EMA20 for longs).
5. **Manage the position**
* Entries: take **triangles** in the direction of your playbook (reclaims with trend; fades in clean ranges).
* Filters and exits: use your own process or pair with a trend/exit companion.
---
## Visual semantics & alerts
* **Candidate L / S (dot)** → a setup armed on this bar.
* **CONFIRM L / S (triangle)** → actionable signal that met confirm rules within your time window.
* **Cancel L / S (X)** → candidate expired without confirmation; ignore the dot.
**Alerts (stable names for automation):**
* **ABS FS — Confirmed** → fires on confirmed long or short.
* **ABS FS — Candidate Armed** → fires as a candidate arms.
---
## Non-repainting behavior (why signals don’t repaint)
* All HTF requests use **lookahead\_off**.
* With **Strict NR = true**, the 60-min slope uses the **prior completed** 60-min bar and arming/confirming only occurs on confirmed bars.
* Confirmation triangles finalize on bar close.
* If you disable strictness, signals may appear slightly earlier but with more intrabar sensitivity.
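A minimal Pine v5 sketch of the standard non-repainting pattern described here: it requests the prior completed 60-minute value with lookahead off (illustrative; the indicator's internal code may differ):
//@version=5
indicator("Non-repainting HTF request sketch", overlay = true)
htfLen = input.int(50, "HTF EMA Len (60m)")
// [1] asks for the previous *completed* 60-minute bar, so the value cannot repaint intrabar
htfEma     = request.security(syminfo.tickerid, "60", ta.ema(close, htfLen)[1], lookahead = barmerge.lookahead_off)
htfSlopeUp = htfEma > nz(htfEma[1], htfEma)
plot(htfEma, "Prior 60m EMA", htfSlopeUp ? color.green : color.red)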
---
## Inputs reference (what each control does and the trade-offs)
### A) Behavior / Modes
**Mode** (`Turbo / Aggressive / Balanced / Conservative`)
Changes multiple internal thresholds:
* **Turbo** → most signals; relaxes prior-bar break & VWAP-side checks and time/vol/gap guards. Highest frequency, highest noise.
* **Aggressive** → more signals than Balanced, fewer than Turbo.
* **Balanced** → default; steady trade-off of frequency vs. quality.
* **Conservative** → tightens |Z| and other checks; fewest but cleanest signals.
**Strict NR (bar close + prior HTF 60m)**
* **true** = safer: uses prior 60-min slope; arms/confirms on confirmed bars → **fewer/cleaner** signals.
* **false** = earlier and more reactive; slightly noisier.
---
### B) Keltner Channel (location engine)
* **KC EMA Length (`kcLen`)**
Higher → smoother basis (fewer basis crosses). Lower → snappier basis (more crosses).
* **ATR Length (`atrLen`)**
Higher → steadier band width; Lower → more reactive band width.
* **KC ATR Mult (`kcMult`)**
Higher → wider bands (fewer edge pokes → fewer fades). Lower → narrower (more fades).
---
### C) Trend & HTF slope
* **Trend EMA Fast/Slow/Long (`emaFastLen / emaSlowLen / emaLongLen`)**
Larger = slower regime flips (fewer reclaims); smaller = faster flips (more reclaims).
* **HTF EMA Len (60m) (`htfLen`)**
Larger = steadier HTF slope (fewer signals); smaller = more sensitive (more signals).
---
### D) VWAP Z-Score (stretch / mean-revert logic)
* **VWAP Z-Length (`zLen`)**
Window for Z over session-anchored VWAP distance. Larger = smoother |Z| (fewer fades/re-entries). Smaller = more reactive (more).
* **Range Fade |Z| (base) (`zFadeBase`)**
Minimum |Z| to allow **fades** in ranges. Raise to demand more stretch (fewer fades). Lower to take more fades.
* **Max |Z| Trend Re-entry (base) (`maxZTrendBase`)**
Caps how stretched price can be and still permit **reclaims** with trend. Lower = stricter (avoid chases). Higher = will chase further.
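As a rough illustration of how these three inputs interact, here is a hypothetical Pine fragment (not the script's code; `ta.vwap` is the session-anchored built-in, and the default values are placeholders):

```pine
//@version=5
indicator("VWAP Z-score sketch (illustrative)")

zLen          = input.int(50, "VWAP Z-Length")
zFadeBase     = input.float(2.0, "Range Fade |Z| (base)")
maxZTrendBase = input.float(1.5, "Max |Z| Trend Re-entry (base)")

// Z-score of the distance between price and the session-anchored VWAP.
dist = close - ta.vwap
z    = (dist - ta.sma(dist, zLen)) / ta.stdev(dist, zLen)

fadeStretchOK   = math.abs(z) >= zFadeBase      // stretched enough to allow a fade
reclaimNotChase = math.abs(z) <= maxZTrendBase  // not too stretched to allow a reclaim

plot(z, "VWAP Z")
hline(0)
```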
---
### E) TSI Momentum Gate
* **TSI Long/Short/Signal (`tsiLong / tsiShort / tsiSig`)**
Larger = smoother/laggier momentum; smaller = snappier.
* **TSI gate (`CrossOnly / CrossOrSlope / Off`)**
* **CrossOnly**: require TSI cross of its signal (strict).
* **CrossOrSlope**: cross *or* favorable slope (balanced default).
* **Off**: no momentum gate (most signals, most noise).
---
### F) Guards (filters to avoid low-quality tape)
* **US focus 09:35–10:30 & 14:00–15:45 (base) (`useTimeBase`)**
`true` limits to high-quality windows. `false` trades all session.
* **Skip N bars after 09:30 ET (`skipFirst`)**
Skips the open scramble. Larger = skip longer.
* **Min volatility ATR% (base)** = `useVolMinBase` + `atrPctMinBase`
Requires `ATR(10)/Close*100 ≥ atrPctMinBase`. Raise threshold to avoid dead tape; lower to accept quieter sessions.
* **Gap guard (base)** = `gapGuardBase` + `gapMul`
Blocks signals when the opening gap exceeds `gapMul * ATR`. Increase `gapMul` to allow more gapped opens; decrease to be stricter.
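A hedged sketch of how guards like these are commonly written in Pine (the session string matches the windows named above; the gap-reference ATR length and the defaults are assumptions, not the script's values):

```pine
//@version=5
indicator("Guards sketch (illustrative)", overlay = true)

atrPctMinBase = input.float(0.08, "Min ATR% (base)")
gapMul        = input.float(0.75, "Gap guard (x ATR)")

atrVol = ta.atr(10)
atrGap = ta.atr(14)   // gap-reference ATR length is an assumption

// Volatility floor: ATR(10) as a percent of price must clear the minimum.
volOK = atrVol / close * 100 >= atrPctMinBase

// Gap guard: flag the whole session if the opening gap exceeds gapMul * ATR.
var bool gapBlocked = false
if session.isfirstbar
    gapBlocked := math.abs(open - close[1]) > gapMul * atrGap

// US-focus windows, evaluated in the exchange's timezone.
inWindow = not na(time(timeframe.period, "0935-1030,1400-1545"))

guardsOK = volOK and not gapBlocked and inWindow
bgcolor(guardsOK ? na : color.new(color.gray, 88))
```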
---
### G) Visuals & Sides
* **Plot Keltner (`plotKC`)** → show/hide basis & bands.
* **Show Longs / Show Shorts** → enable/disable each side.
---
### H) Fail-Safe Confirmation
* **Confirm mode (`BreakHighOnly / BreakHigh+Hold / TwoBarImpulse`)**
* **BreakHighOnly**: confirm by taking out the armed bar’s extreme. Fastest, most frequent.
* **BreakHigh+Hold**: must **break**, have **body ≥ X·ATR**, **and** hold above/below the basis → higher quality, fewer signals.
* **TwoBarImpulse**: decisive follow-through vs. prior bar with **body ≥ X·ATR** → momentum-biased confirmations.
* **Confirm within N bars (`confirmBars`)**
Confirmation window size. Smaller = faster validation; larger = more patience (can be later).
* **Impulse body ≥ X·ATR (`impulseBodyATR`)**
Raise for stronger confirmations (fewer weak triangles). Lower to accept lighter pushes.
* **Require market alignment (`needMarket`) + `marketTicker`**
When enabled: Longs require **market > EMA20 (5m)**; Shorts require **market < EMA20 (5m)**.
* **Diagnostics: Show debug letters (`debug`)**
Tiny “B/C” audit marks for base/confirm while tuning.
---
## Tuning recipes (quick, practical)
* **If you’re getting chopped:**
* Set **Mode = Conservative**
* **Confirm mode = BreakHigh+Hold**
* Raise **impulseBodyATR** (e.g., 0.45)
* Keep **needMarket = true**
* Keep **Strict NR = true**
* **If you need more signals:**
* **Mode = Aggressive** (or Turbo if you accept more noise)
* **Confirm mode = BreakHighOnly**
* Lower **impulseBodyATR** (0.25–0.30)
* Increase **confirmBars** to 3
* **Range-day focus (fades):**
* Keep session guard on
* Raise **zFadeBase** to demand real stretch
* Keep **maxZTrendBase** moderate (don’t chase)
* **Trend-day focus (reclaims):**
* Slightly **lower `maxZTrendBase`** (avoid chasing excessive stretch)
* Use **CrossOrSlope** TSI gating
* Consider turning **needMarket** on
---
## Best practices & notes
* **Instrument specificity:** Tune Z, TSI, and guards per symbol and timeframe.
* **Session awareness:** Session filter uses **exchange-local** time; adjust for non-US markets.
* **Automation:** Use the two provided alert names; they’re stable.
* **Risk management:** Confirmation improves quality but doesn’t remove risk. Always pre-define stop/size logic.
---
## Suggested starting point (balanced profile)
* **Mode = Balanced**
* **Strict NR = true**
* **Confirm mode = BreakHigh+Hold**
* **confirmBars = 2**
* **impulseBodyATR ≈ 0.35**
* **needMarket = off** (turn on for extra confluence)
* Leave Keltner/TSI defaults; then nudge `zFadeBase` and `maxZTrendBase` to match your symbol.
---
*This tool is a signal generator, not a broker or strategy. Validate on your markets/timeframes and integrate with your risk plan.*
Price Imbalance as Consecutive Levels of Averages
Overview
The Price Imbalance as Consecutive Levels of Averages indicator is an advanced technical analysis tool designed to identify and visualize price imbalances in financial markets. Unlike traditional moving average (MA) indicators that update continuously with each new price bar, this indicator employs moving averages calculated over consecutive, non-overlapping historical windows. This unique approach leverages comparative historical data to provide deeper insights into trend strength and potential reversals, offering traders a more nuanced understanding of market dynamics and reducing the likelihood of false signals or fakeouts.
Key Features
Consecutive Rolling Moving Averages: Utilizes three distinct simple moving averages (SMAs) calculated over consecutive, non-overlapping windows to capture different historical segments of price data.
Dynamic Color-Coded Visualization: SMA lines change color and style based on the relationship between the averages, highlighting both extreme and normal market conditions.
Median and Secondary Median Lines: Provides additional layers of price distribution insight during normal trend conditions through the plotting of primary and secondary median lines.
Fakeout Prevention: Filters out short-term volatility and sharp price movements by requiring consistent historical alignment of multiple moving averages.
Customizable Parameters: Offers flexibility to adjust SMA window lengths and line extensions to align with various trading strategies and timeframes.
Real-Time Updates with Historical Context: Continuously recalculates and updates SMA lines based on comparative historical windows, ensuring that the indicator reflects both current and past market conditions.
Inputs & Settings
Rolling Window Lengths:
Window 1 Length (Most Recent) Bars: Number of bars used to calculate the most recent SMA. (Default: 5, Range: 2–300)
Window 2 Length (Preceding) Bars: Number of bars for the second SMA, shifted by Window 1. (Default: 8, Range: 2–300)
Window 3 Length (Third Rolling) Bars: Number of bars for the third SMA, shifted by the combined lengths of Window 1 and Window 2. (Default: 13, Range: 2–300)
Horizontal Line Extension:
Horizontal Line Extension (Bars): Determines how far each SMA line extends horizontally on the chart. (Default: 10 bars, Range: 1–100)
Functionality and Theory
1. Calculating Consecutive Simple Moving Averages (SMAs):
The indicator calculates three SMAs, each based on distinct and consecutive historical windows of price data. This approach contrasts with traditional MAs that continuously update with each new price bar, offering a static view of past trends rather than an ongoing one.
Mean1 (SMA1): Calculated over the most recent Window 1 Length bars. Represents the short-term trend.
\text{Mean1} = \frac{\sum_{i=1}^{N_1} \text{Close}_i}{N_1}
Where N_1 is the length of Window 1.
Mean2 (SMA2): Calculated over the preceding Window 2 Length bars, shifted back by Window 1 Length bars. Represents the medium-term trend.
\text{Mean2} = \frac{\sum_{i=1}^{N_2} \text{Close}_{i + N_1}}{N_2}
Where N_2 is the length of Window 2.
Mean3 (SMA3): Calculated over the third rolling Window 3 Length bars, shifted back by the combined lengths of Window 1 and Window 2 bars. Represents the long-term trend.
\text{Mean3} = \frac{\sum_{i=1}^{N_3} \text{Close}_{i + N_1 + N_2}}{N_3}
Where N_3 is the length of Window 3.
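For illustration, a minimal Pine v5 sketch of the three non-overlapping window means is shown below; it plots them as ordinary series, whereas the indicator itself draws them as horizontal levels with a configurable extension. The names n1, n2 and n3 stand in for the three window-length inputs.
//@version=5
indicator("Consecutive window SMAs (sketch)", overlay = true)

n1 = input.int(5,  "Window 1 Length (Most Recent)")
n2 = input.int(8,  "Window 2 Length (Preceding)")
n3 = input.int(13, "Window 3 Length (Third Rolling)")

// Offsetting the source shifts each average onto its own block of history:
// Mean1 covers the latest n1 bars, Mean2 the n2 bars before those,
// and Mean3 the n3 bars before both.
mean1 = ta.sma(close, n1)
mean2 = ta.sma(close[n1], n2)
mean3 = ta.sma(close[n1 + n2], n3)

plot(mean1, "Mean1", color.green)
plot(mean2, "Mean2", color.orange)
plot(mean3, "Mean3", color.gray)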
2. Determining Market Conditions:
The relationship between the three SMAs categorizes the market condition into either extreme or normal states, enabling traders to quickly assess trend strength and potential reversals.
Extreme Bullish:
Mean1 > Mean2 > Mean3
Indicates a strong and sustained upward trend. SMA lines are colored gold and styled as dashed lines.
Normal Bullish:
Mean1 > Mean2 and not in the extreme bullish condition
Indicates a standard upward trend. SMA lines are colored green and styled as solid lines.
Extreme Bearish:
Mean3 > Mean2 > Mean1
Indicates a strong and sustained downward trend. SMA lines are colored purple and styled as dashed lines.
Normal Bearish:
Mean1 < Mean2 and not in the extreme bearish condition
Indicates a standard downward trend. SMA lines are colored red and styled as solid lines.
3. Visualization and Interpretation by Condition:
1. Extreme Bullish Condition:
SMAs Alignment: Mean1 > Mean2 > Mean3
Visualization: All three SMAs are displayed as gold dashed lines.
Median Lines: Not displayed to maintain chart clarity.
Interpretation: Indicates a strong and sustained upward trend. Traders may consider entering long positions, confident in the trend's strength without the distraction of additional lines.
2. Normal Bullish Condition:
SMAs Alignment: Mean1 > Mean2 (not in extreme condition)
Visualization: Mean1 and Mean2 are green solid lines; Mean3 is gray.
Median Lines: A thin blue dotted median line is plotted between Mean1 and Mean2, with two additional thin blue dashed lines as secondary medians.
Interpretation: Confirms an upward trend while providing deeper insights into price distribution. Traders can use the median and secondary median lines to identify optimal entry points and manage risk more effectively.
3. Extreme Bearish Condition:
SMAs Alignment: Mean3 > Mean2 > Mean1
Visualization: All three SMAs are displayed as purple dashed lines.
Median Lines: Not displayed to maintain chart clarity.
Interpretation: Indicates a strong and sustained downward trend. Traders may consider entering short positions, confident in the trend's strength without the distraction of additional lines.
4. Normal Bearish Condition:
SMAs Alignment: Mean1 < Mean2 (not in extreme condition)
Visualization: Mean1 and Mean2 are red solid lines; Mean3 is gray.
Median Lines: A thin blue dotted median line is plotted between Mean1 and Mean2, with two additional thin blue dashed lines as secondary medians.
Interpretation: Confirms a downward trend while providing deeper insights into price distribution. Traders can use the median and secondary median lines to identify optimal entry points and manage risk more effectively.
Customization and Flexibility
The Price Imbalance as Consecutive Levels of Averages indicator is highly adaptable, allowing traders to tailor it to their specific trading styles and market conditions through adjustable parameters:
SMA Window Lengths: Modify the lengths of Window 1, Window 2, and Window 3 to capture different historical trend segments, whether focusing on short-term fluctuations or long-term movements.
Line Extension: Adjust the horizontal extension of SMA and median lines to align with different trading horizons and chart preferences.
Color and Style Preferences: While default colors and styles are optimized for clarity, traders can customize these elements to match their personal chart aesthetics and enhance visual differentiation.
This flexibility ensures that the indicator remains versatile and applicable across various markets, asset classes, and trading strategies, providing valuable insights tailored to individual trading needs.
Conclusion
The Price Imbalance as Consecutive Levels of Averages indicator offers a comprehensive and innovative approach to analyzing price trends and imbalances within financial markets. By utilizing three consecutive, non-overlapping SMAs and incorporating median lines during normal trend conditions, the indicator provides clear and actionable insights into trend strength and price distribution. Its unique design leverages comparative historical data, distinguishing it from traditional moving averages and enhancing its utility in identifying genuine market movements while minimizing false signals. This dynamic and customizable tool empowers traders to refine their technical analysis, optimize their trading strategies, and navigate the markets with greater confidence and precision.
Physics Candles
Physics Candles embed volume and motion physics directly onto price candles or market internals according to the cyclic pattern of financial securities. The indicator works on both real-time “ticks” and historical data using statistical modeling to highlight when these values, like volume or momentum, are unusual or relatively high for some periodic window in time. Each candle is made out of one or more sub-candles that each contain their own information of motion, which converts to the color and transparency, or brightness, of that particular candle segment. The segments extend throughout the entire candle, both body and wicks, and Thick Wicks can be implemented to see the color coding better. This candle segmentation allows you to see if all the volume or energy is evenly distributed throughout the candle or highly contained in one small portion of it, and how intense these values are compared to similar time periods without going to lower time frames. Candle segmentation can also change a trader’s perspective on how valuable the information is. A “low” volume candle, for instance, could signify high value short-term stopping volume if the volume is all concentrated in one segment.
The Candles are flexible. The physics information embedded on the candles need not be from the same price security or market internal as the chart when using the Physics Source option, and multiple Candles can be overlayed together. You could embed stock price Candles with market volume, market price Candles with stock momentum, market structure with internal acceleration, stock price with stock force, etc. My particular use case is scalping the SPX futures market (ES), whose price action is also dictated by the volume action in the associated cash market, or SPY, as well as a host of other securities. Physics allows you to embed the ES volume on the SPY price action, or the SPY volume on the ES price action, or you can combine them both by overlaying two Candle streams and increasing the Number of Overlays option to two. That option decreases the transparency levels of your coloring scheme so that overlaying multiple Candles converges toward the same visual color intensity as if you had one. The Candle and Physics Sources allows for both Symbols and Spreads to visualize Candle physics from a single ticker or some mathematical transformation of tickers.
Due to certain TradingView programming restrictions, each Candle can only be made out of a maximum of 8 candle segments, or an “8-bit” resolution. Since limits are just an opportunity to go beyond, the user has the option to stack multiple Candle indicators together to further increase the candle resolution. If you don’t want to see the Candles for some particular period of the day, you can hide them, or use the hiding feature to have multiple Candles calibrated to show multiple parts of the trading day. Securities tend to have low volume after hours with sharp spikes at the open or close. Multiple Candles can be used for multiple parts of the trading day to accommodate these different cycles in volume.
The Candles do not need be associated with the nominal security listed on the TV chart. The Candle Source allows the user to look at AAPL Candles, for instance, while on a TSLA or SPY chart, each with their respective volume actions integrated into the candles, for instance, to allow the user to see multiple security price and volume correlation on a single chart.
The physics information currently embeddable on Candles are volume or time, velocity, momentum, acceleration, force, and kinetic energy. In order to apply equations of motion containing a mass variable to financial securities, some analogous value for mass must be assumed. Traders often regard volume or time as inextricable variables to a securities price that can indicate the direction and strength of a move. Since mass is the inextricable variable to calculating the momentum, force, or kinetic energy of motion, the user has the option to assume either time or volume is analogous to mass. Volume may be a better option for mass as it is not strictly dependent on the speed of a security, whereas time is.
Data transformations and outlier statistics are used to color code the intensity of the physics for each candle segment relative to past periodic behavior. A million shares during pre-market or a million shares during noontime may be more intense signals than a typical million shares traded at the open, and should have more intense color signals. To account for a specific cyclic behavior in the market, the user can specify the Window and Cycle Time Frames. The Window Time Frame splits up a Cycle into windows, samples and aggregates the statistics for each window, then compares the current physics values against past values in the same window. Intraday traders may benefit from using a Daily Cycle with a 30-minute Window Time Frame and 1-minute Sample Time Frame. These settings sample and compare the physics of 1-minute candles within the current 30-minute window to the same 30-minute window statistics for all past trading days, up until the data limit imposed by TradingView, or until the Data Collection Start Date specified in the settings. Longer-term traders may benefit from using a Monthly Cycle with a Weekly Time Frame, or a Yearly Cycle with a Quarterly Time Frame.
Multiple statistics and data transformation methods are available to convey relative intensity in different ways for different trading signals. Physics Candles allows for both Normal and Log-Normal assumptions in the physics distribution. The data can then be transformed by Linear, Logarithmic, Z-Score, or Power-Law scoring, where scoring simply assigns an intensity to the relative physics value of each candle segment based on some mathematical transformation. Z-scoring often renders adequate detection by scoring the segment value, such as volume or momentum, according to the mean and standard deviation of the data set in each window of the cycle. Logarithmic or power-law transformation with a gamma below 1 decreases the disparity between intensities so more less-important signals will show up, whereas the power-law transformation with gamma values above 1 increases the disparity between intensities, so less more-important signals will show up. These scores are then converted to color and transparency between the Min Score and the Max Score Cutoffs. The Auto-Normalization feature can automatically pick these cutoffs specific to each window based on the mean and standard deviation of the data set, or the user can manually set them. Physics was developed with novices in mind so that most users could calibrate their own settings by plotting the candle segment distributions directly on the chart and fiddling with the settings to see how different cutoffs capture different portions of the distribution and affect the relative color intensities differently. Security distributions are often skewed with fat-tails, known as kurtosis, where high-volume segments for example, have a higher-probabilities than expected for a normal distribution. These distribution are really log-normal, so that taking the logarithm leads to a standard bell-shaped distribution. Taking the Z-score of the Log-Normal distribution could make the most statistical sense, but color sensitivity is a discretionary preference.
Background Philosophy
This indicator was developed to study and trade the physics of motion in financial securities from a visually intuitive perspective. Newton’s laws of motion are loosely applied to financial motion:
“A body remains at rest, or in motion at a constant speed in a straight line, unless acted upon by a force”.
Financial securities remain at rest, or in motion at constant speed up or down, unless acted upon by the force of traders exchanging securities.
“When a body is acted upon by a force, the time rate of change of its momentum equals the force”.
Momentum is the product of mass and velocity, and force is the product of mass and acceleration. Traders render force on the security through the mass of their trading activity and the acceleration of price movement.
“If two bodies exert forces on each other, these forces have the same magnitude but opposite directions.”
Force arises from the interaction of traders, buyers and sellers. One body of motion, traders’ capitalization, exerts an equal and opposite force on another body of motion, the financial security. A securities movement arises at the expense of a buyer or seller’s capitalization.
Volume
The premise of this indicator assumes that volume, v, is an analogous means of measuring physical mass, m. This premise allows the application of the equations of motion to the movement of financial securities. We know from E=mc^2 that mass has energy. Energy can be used to create motion as kinetic energy. Taking a simple hypothetical example, the interaction of one short seller looking to cover lower and one buyer looking to sell higher exchange shares in a security at an agreed upon price to create volume or mass, and therefore, potential energy. Eventually the short seller will actively cover and buy the security from the previous buyer, moving the security higher, or the buyer will actively sell to the short seller, moving the security lower. The potential energy inherent in the initial consolidation or trading activity between buy and seller is now converted to kinetic energy on the subsequent trading activity that moves the securities price. The more potential energy that is created in the consolidation, the more kinetic energy there is to move price. This is why point and figure traders are said to give price targets based on the level of volatility or size of a consolidation range, or why Gann traders square price and time, as time is roughly proportional to mass and trading activity. The build-up of potential energy between short sellers and buyers in GME or TSLA led to their explosive moves beyond their standard fundamental valuations.
Position
Position, p, is simply the price or value of a financial security or market internal.
Time
Time, t, is another means of measuring mass to discover price behavior beyond the time snapshots that simple candle charts provide. We know from E=mc^2 that time is related to rest mass and energy given the speed of light, c, where time ≈ distance * sqrt(mass/E). This relation can also be derived from F=ma. The more mass there is, the longer it takes to compute the physics of a system. The more energy there is, the shorter it takes to compute the physics of a system. Similarly, more time is required to build a “resting” low-volatility trading consolidation with more mass. More energy added to that trading consolidation by competing buyers and sellers decreases the time it takes to build that same mass. Time is also related to price through velocity.
Velocity = (p(t1) – p(t0)) / p(t0)
Velocity, v, is the relative percent change of a securities price, p, over a period of time, t0 to t1. The period of time is between subsequent candles, and since time is constant between candles within the same timeframe, it is not used to calculate velocity or acceleration. Price moves faster with higher velocity, and slower with slower velocity, over the same fixed period of time. The product of velocity and mass gives momentum.
Momentum = mv
This indicator uses the physics definition of momentum, not finance’s. In finance, momentum is defined as the amount of change in a securities price, either relative or absolute. This definition is unfortunate, pun intended, since a one dollar move in a security from a thousand shares traded between a few traders has the exact same “momentum” as a one dollar move from millions of shares traded between hundreds of traders with everything else equal. If momentum is related to the energy of the move, momentum should consider both the level of activity in a price move, and the amount of that price move. If we equate mass to volume to account for the level of trading activity and use the physics definition of momentum as the product of mass and velocity, this revised definition now gives a thousand-times more momentum to a one-dollar price move that has a thousand-times more volume behind it. If you want to use finance’s volume-less definition of momentum, use velocity in this indicator.
Acceleration = v(t1) – v(t0)
Acceleration, a, is the difference between velocities over some period of time, t0 to t1. Positive acceleration is necessary to increase a securities speed in the positive direction, while negative acceleration is necessary to decrease it. Acceleration is related to force by mass.
Force = ma
Force is required to change the speed of a securities valuation. Price movements with considerable force have considerably more impact on future direction. A change in direction requires force.
Kinetic Energy = 0.5mv^2
Kinetic energy is the energy that a financial security gains from the change in its velocity by force. The built-up of potential energy in trading consolidations can be converted to kinetic energy on a breakout from the consolidation.
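Putting these definitions together, a bar-by-bar sketch in Pine looks like the following. It uses volume as mass and the open-to-close percent change as velocity, per the Notes section, and omits the indicator's windowed normalization and scoring:
//@version=5
indicator("Physics quantities (sketch)")

// Volume stands in for mass (time can be substituted as described above).
mass     = volume
// Velocity as the open-to-close percent change of the bar, per the Notes section.
velocity = (close - open) / open
momentum = mass * velocity              // physics momentum, not finance's price-only version
accel    = velocity - velocity[1]
force    = mass * accel
kinetic  = 0.5 * mass * velocity * velocity

plot(momentum, "Momentum")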
Cycle Theory and Relativity
Just as the physics of motion is relative to a point of reference, so too should the physics of financial securities be relative to a point of reference. An object moving at a 100 mph towards another object moving in the same direction at 100 mph will not appear to be moving relative to each other, nor will they collide, but from an outsider observer, the objects are going 100 mph and will collide with significant impact if they run into a stationary object relative to the observer. Similarly, trading with a hundred thousand shares at the open when the average volume is a couple million may have a much smaller impact on the price compared to trading a hundred thousand shares pre-market when the average volume is ten thousand shares. The point of reference used in this indicator is the average statistics collected for a given Window Time Frame for every Cycle Time Frame. The physics values are normalized relative to these statistics.
Examples
The main chart of this publication shows the Force Candles for the SPY. An intense force candle is observed pre-market that implicates the directional overtone of the day. The assumption that direction should follow force arises from physical observation. If a large object is accelerating intensely in a particular direction, it may be fair to assume that the object continues its direction for the time being unless acted upon by another force.
The second example shows a similar Force Candle for the SPY that counters the assumption made in the first example and emphasizes the importance of both motion and context. While it’s fair to assume that a heavy highly accelerating object should continue its course, if that object runs into an obstacle, say a brick wall, it’s course may deviate. This example shows SPY running into the 50% retracement wall from the low of Mar 2020, a significant support level noted in literature. The example also conveys Gann’s idea of “lost motion”, where the SPY penetrated the 50% price but did not break through it. A brick wall is not one atom thick and price support is not one tick thick. An object can penetrate only one layer of a wall and not go through it.
The third example shows how Volume Candles can be used to identify scalping opportunities on the SPY and conveys why price behavior is as important as motion and context. It doesn’t take a brick wall to impede direction if you know that the person driving the car tends to forget to feed the cats before they leave. In the chart below, the SPY breaks down to a confluence of the 5-day SMA, 20-day SMA, and an important daily trendline (not shown) after the bullish bounce from the 50% retracement days earlier. High volume candles on the SMA signify stopping volume that reverses price direction. The character of the day changes. Bulls become more aggressive than bears with higher volume on upswings and resistance, while bears take on a defensive position with lower volume on downswings and support. High volume stopping candles are seen after rallies, and can tell you when to take profit, get out of a position, or go short. The character change can indicate that it’s relatively safe to re-enter bullish positions on many major supports, especially given the overarching bullish theme from the large reaction off the 50% retracement level.
The last example emphasizes the importance of relativity. The Volume Candles in the chart below are brightest pre-market even though the open has much higher volume since the pre-market activity is much higher compared to past pre-markets than the open is compared to past opens. Pre-market behavior is a good indicator for the character of the day. These bullish Volume Candles are some of the brightest seen since the bounce off the 50% retracement and indicates that bulls are making a relatively greater attempt to bring the SPY higher at the start of the day.
Infrequently Asked Questions
Where do I start?
The default settings are what I use to scalp the SPY throughout most of the extended trading day, on a one-minute chart using SPY volume. I also overlay another Candle set containing ES future volume on the SPY price structure by setting the Physics Source to ES1! and the Number of Overlays setting to 2 for each Candle stream in order to account for pre- and post-market trading activity better. Since the closing volume is exponential-like up until the end of the regular trading day, adding additional Candle streams with a tighter Window Time Frame (e.g., 2-5 minute) in the last 15 minutes of trading can be beneficial. The Hide feature can allow you to set certain intraday timeframes to hide one Candle set in order to show another Candle set during that time.
How crazy can you get with this indicator?
I hope you can answer this question better. One interesting use case is embedding the velocity of market volume onto an internal market structure. The PCTABOVEVWAP.US is a market statistic that indicates the percent of securities above their VWAP among US stocks and is helpful for determining short term trends in the US market. When securities are rising above their VWAP, the average long is up on the day and a rising PCTABOVEVWAP.US can be viewed as more bullish. When securities are falling below their VWAP, the average short is up on the day and a falling PCTABOVEVWAP.US can be viewed as more bearish. (UPVOL.US - DNVOL.US) / TVOL.US is a “spread” symbol, in TV parlance, that indicates the decimal percent difference between advancing volume and declining volume in the US market, showing the relative flow of volume between stocks that are up on the day, and stocks that are down on the day. Setting PCTABOVEVWAP.US in the Candle Source, (UPVOL.US - DNVOL.US) / TVOL.US in the Physics Source, and selecting the Physics to Velocity will embed the relative velocity of the spread symbol onto the PCTABOVEVWAP.US candles. This can be helpful in seeing short term trends in the US market that have an increasing amount of volume behind them compared to other trends. The chart below shows Volume Candles (top) and these Spread Candles (bottom). The first top at 9:30 and second top at 10:30, the high of the day, break down when the spread candles light up, showing a high velocity volume transfer from up stocks to down stocks.
How do I plot the indicator distribution and why should I even care?
The distribution is visually helpful in seeing how different normalization settings effect the distribution of candle segments. It is also helpful in seeing what physics intensities you want to ignore or show by segmenting part of the distribution within the Min and Max Cutoff values. The intensity of color is proportional to the physics value between the Min and Max Cutoff values, which correspond to the Min and Max Colors in your color scheme. Any physics value outside these Min and Max Cutoffs will be the same as the Min and Max Colors.
Select the Print Windows feature to show the window numbers according to the Cycle Time Frame and Window Time Frame settings. The window numbers are labeled at the start of each window and are candle width in size, so you may need to zoom into to see them. Selecting the Plot Window feature and input the window number of interest to shows the distribution of physics values for that particular window along with some statistics.
A log-normal volume distribution of segmented z-scores is shown below for 30-minute opening of the SPY. The Min and Max Cutoff at the top of the graph contain the part of the distribution whose intensities will be linearly color-coded between the Min and Max Colors of the color scheme. The part of the distribution below the Min Cutoff will be treated as lowest quality signals and set to the Min Color, while the few segments above the Max Cutoff will be treated as the highest quality signals and set to the Max Color.
What do I do if I don’t see anything?
Troubleshooting issues with this indicator can involve checking for error messages shown near the indicator name on the chart or using the Data Validation section to evaluate the statistics and normalization cutoffs. For example, if the Plot Window number is set to a window number that doesn’t exist, an error message will tell you and you won’t see any candles. You can use the Print Windows option to show windows that do exist for your current settings. The auto-normalization cutoff values may be inappropriate for your particular use case and literally cut the candles out of the chart. Try changing the chart time frame to see if they are appropriate for your cycle, sample and window time frames. If you get a “Timeframe passed to the request.security_lower_tf() function must be lower than the timeframe of the main chart” error, this means that the chart timeframe should be increased above the sample time frame. If you get a “Symbol resolve error”, ensure that you have a correct symbol or spread in the Candle or Physics Source.
How do I see a relative physics values without cycles?
Set the Window Time Frame to be equal to the Cycle Time Frame. This will aggregate all the statistics into one bucket and show the physics values, such as volume, relative to all the past volumes that TV will allow.
How do I see candles without segmentation?
Segmentation can be very helpful in one context or annoying in another. Segmentation can be removed by setting the candle resolution value to 1.
Notes
I have yet to find a trading platform that consistently provides accurate real-time volume and pricing information, lacking adequate end-user data validation or quality control. I can provide plenty of examples of real-time volume counts or prices provided by TradingView and other platforms that were significantly off from what they should have been when comparing against the exchanges own data, and later retroactively corrected or not corrected at all. Since no indicator can work accurately with inaccurate data, please use at your own discretion.
The first version is a beta version. Debugging and validating code in Pine script is difficult without proper unit testing. Please report any bugs with enough information to reproduce them and indicate why they are important. I also encourage you to export the data from TradingView and verify the calculations for your particular use case.
The indicator works on real-time updates that occur at a higher frequency than the candle time frame, which TV incorrectly refers to as ticks. They use this terminology inaccurately as updates are really aggregated tick data that can take place at different prices and may not accurately reflect the real tick price action. Consequently, this inaccuracy also impacts the real-time segmentation accuracy to some degree. TV does not provide a means of retaining “tick” information, so the higher granularity of information seen real-time will be lost on a disconnect.
TV does not provide time and sales information. The volume and price information collected using the Sample Time Frame is intraday, which provides only part of the picture. Intraday volume is generally 50 to 80% of the end of day volume. Consequently, the daily+ OHLC prices are intraday, and may differ significantly from exchanged settled OHLC prices.
The Cycle and Window Time Frames refer to calendar days and time, not trading days or time. For example, the first window week of a monthly cycle is the first seven days of the month, not the first Monday through Friday of trading for the month.
Chart Time Frames that are higher than the Window Time Frames average the normalized physics for price action that occurred within a given Candle segment. It does not average price action that did not occur.
One of the main performance bottlenecks in TradingView’s Pine Script is client-side drawing and plotting. The performance of this indicator can be increased by lowering the resolution (the number of sub-candles this indicator plots), getting a faster computer, or increasing the performance of your computer, for example by plugging your laptop in and eliminating unnecessary processes.
The statistical integrity of this indicator relies on the number of samples collected per sample window in a given cycle. Higher sample counts can be obtained by increasing the chart time frame or upgrading the TradingView plan for a higher bar count. While increasing the chart time frame doesn’t increase the visual number of bars plotted on the chart, it does increase the number of bars that can be pulled at a lower time frame, up to 100,000.
Due to a limitation in Pine Scripts request_lower_tf() function, using a spread symbol will only work for regular trading hours, not extended trading hours.
Ideally, velocity or momentum should be calculated between candle closes. To eliminate the need to deal with price gaps that would lead to incorrect statistical distributions, momentum is calculated between candle open and close as a percent change of the price or value, which should not be an issue for most liquid securities.
Stock Watch
Overview
Watch list are very common in trading, but most of them simply provide the means of tracking a list of symbols and their current price. Then, you click through the list and perform some additional analysis individually from a chart setup. What this indicator is designed to do is provide a watch list that employs a high/low price range analysis in a table view across multiple time ranges for a much faster analysis of the symbols you are watching.
Discussion
The concept of this Stock Watch indicator is best understood when you think in terms of a 52 Week Range indication on many financial web sites. Take a given symbol, find the high and the low over a 52 week range, and then determine where current price is within that range from a percentage perspective between 0% and 100%.
With this concept in mind, let's see how this Stock Watch indicator is meant to benefit.
There are four different H/L ranges relative to the chart's setting and a Scope property. Let's use a three month (3M) chart as our example and set the indicator's Scope = 4. A 3M chart provides three months of data in a single candle; when we set the Scope = 4 we are stating that 1X is going to look over four candles for the high/low range.
The Scope property determines how many candles to scan for the high/low range of the corresponding 1X, 3X, 5X and 10X periods. This is how different time ranges are put into perspective. Using a 3M chart with Scope = 4 would represent the following time windows:
- 1X = 3M * 4 is a 12 Months or 1 Year High/Low Range
- 3X = 3M * 4 * 3 is a 36 Months or 3 Years High/Low Range
- 5X = 3M * 4 * 5 is a 60 Months or 5 Years High/Low Range
- 10X = 3M * 4 * 10 is a 120 Months or 10 Years High/Low Range.
With these calculations, the indicator then determines where current price is within each of these High/Low ranges from a percentage perspective between 0% and 100%.
Once the 0% to 100% value is calculated, it then will shade the value according to a color gradient from red to green (or any other two colors you set the indictor to). This color shading really helps to interpret current price quickly.
The greater power to this range and color shading comes when you are able to see where price is according to price history across the multiple time windows. In this example, there is quick analysis across 1 Year, 3 Year, 5 Year and 10 Year windows.
Now let's extend this quick analysis to as many as 15 different stocks, which the indicator allows you to watch at any one time.
For value traders this is huge, because we're always looking for the bargains and we wait for price to be in the value range. Using this indicator helps to instantly see if price has entered a value range before we decide to do further analysis with other charting and fundamental tools.
The Code
The heart of all this is really very simple as you can see in the following code snippet. We're simply looking for the highest high and lowest low across the different scopes and calculating the percentage of the range where current price is for each symbol being watched.
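// 1X Lookback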
scope = baseScope
watch1X = math.round(((watchClose - ta.lowest(watchLow, scope)) / (ta.highest(watchHigh, scope) - ta.lowest(watchLow, scope))) * 100, 0)
table.cell(tblWatch, columnId, 2, str.format("{0, number, #}%", watch1X), text_size = size.small, text_color = colorText, bgcolor = getBackColor(watch1X))
//3X Lookback
scope := baseScope * 3
watch3X = math.round(((watchClose - ta.lowest(watchLow, scope)) / (ta.highest(watchHigh, scope) - ta.lowest(watchLow, scope))) * 100, 0)
table.cell(tblWatch, columnId, 3, str.format("{0, number, #}%", watch3X), text_size = size.small, text_color = colorText, bgcolor = getBackColor(watch3X))
Conclusion
The example I've laid out here are for large time windows, because I'm a long term investor. However, keep in mind that this can work on any chart setting, you just need to remember that your chart's time period and scope work together to determine what 1X, 3X, 5X and 10X represent.
Let me try and give you one last scenario on this. Consider your chart is set for a 60 minute chart, meaning each candle represents 60 minutes of time and you set the Stock Watch indicator to a scope = 4. These settings would now represent the following and you would be watching up to 15 different stocks across these windows at one time.
1X = 60 minutes * 4 is 240 minutes or 4 hours of time.
3X = 60 minutes * 4 * 3 = 720 minutes or 12 hours of time.
5X = 60 minutes * 4 * 5 = 1200 minutes or 20 hours of time.
10X = 60 minutes * 4 * 10 = 2400 minutes or 40 hours of time.
I hope you find value in my contribution to the cause of trading, and if you have any comments or critiques, I would love to hear from you in the comments.
Machine Learning : Cosine Similarity & Euclidean Distance
Introduction:
This script implements a comprehensive trading strategy that adheres to the established rules and guidelines of housing trading. It leverages advanced machine learning techniques and incorporates customised moving averages, including the Conceptive Price Moving Average (CPMA), to provide accurate signals for informed trading decisions in the housing market. Additionally, signal processing techniques such as Lorentzian, Euclidean distance, Cosine similarity, Know sure thing, Rational Quadratic, and sigmoid transformation are utilised to enhance the signal quality and improve trading accuracy.
Features:
Market Analysis: The script utilizes advanced machine learning methods such as Lorentzian, Euclidean distance, and Cosine similarity to analyse market conditions. These techniques measure the similarity and distance between data points, enabling more precise signal identification and enhancing trading decisions.
Cosine similarity:
Cosine similarity is a measure used to determine the similarity between two vectors, typically in a high-dimensional space. It calculates the cosine of the angle between the vectors, indicating the degree of similarity or dissimilarity.
In the context of trading or signal processing, cosine similarity can be employed to compare the similarity between different data points or signals. The vectors in this case represent the numerical representations of the data points or signals.
Cosine similarity ranges from -1 to 1, with 1 indicating perfect similarity, 0 indicating no similarity, and -1 indicating perfect dissimilarity. A higher cosine similarity value suggests a closer match between the vectors, implying that the signals or data points share similar characteristics.
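As a concrete illustration, cosine similarity between two equally sized windows of bar-to-bar changes can be computed in Pine like this (a generic sketch of the measure, not this script's implementation):
//@version=5
indicator("Cosine similarity (sketch)", max_bars_back = 500)

// Cosine similarity of two equally sized float arrays: dot(a, b) / (|a| * |b|), in [-1, 1].
cosSim(a, b) =>
    float dotAB = 0.0
    float normA = 0.0
    float normB = 0.0
    for i = 0 to array.size(a) - 1
        av = array.get(a, i)
        bv = array.get(b, i)
        dotAB += av * bv
        normA += av * av
        normB += bv * bv
    dotAB / (math.sqrt(normA) * math.sqrt(normB))

len = input.int(20, "Window length")

// Compare the most recent window of bar-to-bar changes with the window just before it.
recent = array.new_float()
prior  = array.new_float()
for i = 0 to len - 1
    array.push(recent, close[i] - close[i + 1])
    array.push(prior,  close[i + len] - close[i + len + 1])

plot(cosSim(recent, prior), "Cosine similarity (recent vs prior window)")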
Lorentzian Classification:
Lorentzian classification is a machine learning algorithm used for classification tasks. It is based on the Lorentzian distance metric, which measures the similarity or dissimilarity between two data points. The Lorentzian distance takes into account the shape of the data distribution and can handle outliers better than other distance metrics.
Euclidean Distance:
Euclidean distance is a distance metric widely used in mathematics and machine learning. It calculates the straight-line distance between two points in Euclidean space. In two-dimensional space, the Euclidean distance between two points (x1, y1) and (x2, y2) is calculated using the formula sqrt((x2 - x1)^2 + (y2 - y1)^2).
Dynamic Time Windows: The script incorporates a dynamic time window function that allows users to define specific time ranges for trading. It checks if the current time falls within the specified window to execute the relevant trading signals.
Custom Moving Averages: The script includes the CPMA, a powerful moving average calculation. Unlike traditional moving averages, the CPMA provides improved support and resistance levels by considering multiple price types and employing a combination of Exponential Moving Averages (EMAs) and Simple Moving Averages (SMAs). Its adaptive nature ensures responsiveness to changes in price trends.
Signal Processing Techniques: The script applies signal processing techniques such as Know sure thing, Rational Quadratic, and sigmoid transformation to enhance the quality of the generated signals. These techniques improve the accuracy and reliability of the trading signals, aiding in making well-informed trading decisions.
Trade Statistics and Metrics: The script provides comprehensive trade statistics and metrics, including total wins, losses, win rate, win-loss ratio, and early signal flips. These metrics offer valuable insights into the performance and effectiveness of the trading strategy.
Usage:
Configuring Time Windows: Users can customize the time windows by specifying the start and finish time ranges according to their trading preferences and local market conditions.
Signal Interpretation: The script generates long and short signals based on the analysis, custom moving averages, and signal processing techniques. Users should pay attention to these signals and take appropriate action, such as entering or exiting trades, depending on their trading strategies.
Trade Statistics: The script continuously tracks and updates trade statistics, providing users with a clear overview of their trading performance. These statistics help users assess the effectiveness of the strategy and make informed decisions.
Conclusion:
With its adherence to housing trading rules, advanced machine learning methods, customized moving averages like the CPMA, and signal processing techniques such as Lorentzian, Euclidean distance, Cosine similarity, Know sure thing, Rational Quadratic, and sigmoid transformation, this script offers users a powerful tool for housing market analysis and trading. By leveraging the provided signals, time windows, and trade statistics, users can enhance their trading strategies and improve their overall trading performance.
Disclaimer:
Please note that while this script incorporates established tradingview housing rules, advanced machine learning techniques, customized moving averages, and signal processing techniques, it should be used for informational purposes only. Users are advised to conduct their own analysis and exercise caution when making trading decisions. The script's performance may vary based on market conditions, user settings, and the accuracy of the machine learning methods and signal processing techniques. The trading platform and developers are not responsible for any financial losses incurred while using this script.
By publishing this script on the platform, traders can benefit from its professional presentation, clear instructions, and the utilisation of advanced machine learning techniques, customised moving averages, and signal processing techniques for enhanced trading signals and accuracy.
I extend my gratitude to TradingView, LUX ALGO, and JDEHORTY for their invaluable contributions to the trading community. Their innovative scripts, meticulous coding patterns, and insightful ideas have profoundly enriched traders' strategies, including my own.
STD-Filtered, Variety FIR Digital Filters w/ ATR Bands [Loxx]
STD-Filtered, Variety FIR Digital Filters w/ ATR Bands is a FIR Digital Filter indicator with ATR bands. This indicator contains 12 different digital filters. Some of these have already been covered by indicators that I've recently posted. The difference here is that this indicator has ATR bands, allows for frequency filtering, adds a frequency multiplier, and attempts to show causality by lagging the price input by 1/2 the period input during the final application of weights. Period is restricted to even numbers.
The 3 most important parameters are the frequency cutoff, the filter window type and the "causal" parameter.
Included filter types:
- Hamming
- Hanning
- Blackman
- Blackman Harris
- Blackman Nutall
- Nutall
- Bartlet Zero End Points
- Bartlet Hann
- Hann
- Sine
- Lanczos
- Flat Top
Frequency cutoff can vary between 0 and 0.5. General rule is that the greater the cutoff is the "faster" the filter is, and the smaller the cutoff is the smoother the filter is.
You can read more about discrete-time signal processing and some of the windowing functions in this indicator here:
Window function
Window Functions and Their Applications in Signal Processing
What are FIR Filters?
In discrete-time signal processing, windowing is a preliminary signal shaping technique, usually applied to improve the appearance and usefulness of a subsequent Discrete Fourier Transform. Several window functions can be defined, based on a constant (rectangular window), B-splines, other polynomials, sinusoids, cosine-sums, adjustable, hybrid, and other types. The windowing operation consists of multiplying the given sampled signal by the window function. For trading purposes, these FIR filters act as advanced weighted moving averages.
A finite impulse response (FIR) filter is a filter whose impulse response (or response to any finite length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely (usually decaying).
The impulse response (that is, the output in response to a Kronecker delta input) of an Nth-order discrete-time FIR filter lasts exactly N+1 samples (from first nonzero element through last nonzero element) before it then settles to zero.
FIR filters can be discrete-time or continuous-time, and digital or analog.
A FIR filter is (similar to, or) just a weighted moving average filter, where (unlike a typical equally weighted moving average filter) the weights of each delay tap are not constrained to be identical or even of the same sign. By changing various values in the array of weights (the impulse response, or time shifted and sampled version of the same), the frequency response of a FIR filter can be completely changed.
An FIR filter simply CONVOLVES the input time series (price data) with its IMPULSE RESPONSE. The impulse response is just a set of weights (or "coefficients") that multiply each data point. Then you just add up all the products and divide by the sum of the weights and that is it; e.g., for a 10-bar SMA you just add up 10 bars of price data (each multiplied by 1) and divide by 10. For a weighted-MA you add up the product of the price data with triangular-number weights and divide by the total weight.
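As an illustration of the convolution idea, here is a minimal Pine sketch of a Hamming-window FIR filter written as a weighted moving average. It uses only the generic Hamming weights and omits this indicator's frequency cutoff, half-period lag, and the other window types:
//@version=5
indicator("Hamming-window FIR (sketch)", overlay = true, max_bars_back = 500)

per = input.int(20, "Period", minval = 4)

// Hamming window weights: w(k) = 0.54 - 0.46 * cos(2 * pi * k / (N - 1)).
// The filter output is the weight-normalized dot product of these weights
// with the last N prices, i.e. a weighted moving average.
firHamming(src, n) =>
    float num = 0.0
    float den = 0.0
    for k = 0 to n - 1
        w = 0.54 - 0.46 * math.cos(2.0 * math.pi * k / (n - 1))
        num += w * src[k]
        den += w
    num / den

plot(firHamming(close, per), "Hamming FIR", color = color.orange)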
What is a Standard Deviation Filter?
If price or output or both don't move more than the (standard deviation) * multiplier then the trend stays the previous bar trend. This will appear on the chart as "stepping" of the moving average line. This works similar to Super Trend or Parabolic SAR but is a more naive technique of filtering.
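A simplified sketch of that filtering step in Pine, applied here to an SMA stand-in rather than the FIR output:
//@version=5
indicator("STD step filter (sketch)", overlay = true)

len  = input.int(20, "Filter period")
mult = input.float(1.0, "Std-dev multiplier")

base = ta.sma(close, len)            // stand-in for any FIR filter output
dev  = ta.stdev(close, len) * mult

// Hold the previous filtered value unless the new value has moved
// by more than the standard-deviation step; this creates the "stepping".
var float filt = na
filt := na(filt) ? base : math.abs(base - filt) < dev ? filt : base

plot(filt, "Filtered", color = color.teal, linewidth = 2)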
Included
Bar coloring
Loxx's Expanded Source Types
Signals
Alerts
Related indicators
STD/C-Filtered, N-Order Power-of-Cosine FIR Filter
STD/C-Filtered, Power-of-Cosine FIR Filter
STD/C-Filtered, Truncated Taylor Family FIR Filter
STD/Clutter-Filtered, Variety FIR Filters
STD/Clutter-Filtered, Kaiser Window FIR Digital Filter
Sunil BB Blast Heikin Ashi Strategy
The Sunil BB Blast Heikin Ashi Strategy is a trend-following trading strategy that combines Bollinger Bands with Heikin-Ashi candles for precise market entries and exits. It aims to capitalize on price volatility while ensuring controlled risk through dynamic stop-loss and take-profit levels based on a user-defined Risk-to-Reward Ratio (RRR).
Key Features:
Trading Window:
The strategy operates within a user-defined time window (e.g., from 09:20 to 15:00) to align with market hours or other preferred trading sessions.
Trade Direction:
Users can select between Long Only, Short Only, or Long/Short trade directions, allowing flexibility depending on market conditions.
Bollinger Bands:
Bollinger Bands are used to identify potential breakout or breakdown zones. The strategy enters trades when price breaks through the upper or lower Bollinger Band, indicating a possible trend continuation.
Heikin-Ashi Candles:
Heikin-Ashi candles help smooth price action and filter out market noise. The strategy uses these candles to confirm trend direction and improve entry accuracy.
Risk Management (Risk-to-Reward Ratio):
The strategy automatically adjusts the take-profit (TP) level and stop-loss (SL) based on the selected Risk-to-Reward Ratio (RRR). This ensures that trades are risk-managed effectively.
Automated Alerts and Webhooks:
The strategy includes automated alerts for trade entries and exits. Users can set up JSON webhooks for external execution or trading automation.
Active Position Tracking:
The strategy tracks whether there is an active position (long or short) and only exits when price hits the pre-defined SL or TP levels.
Exit Conditions:
The strategy exits positions when either the take-profit (TP) or stop-loss (SL) levels are hit, ensuring risk management is adhered to.
Default Settings:
Trading Window:
09:20-15:00
This setting confines the strategy to the specified hours, ensuring trading only occurs during active market hours.
Strategy Direction:
Default: Long/Short
This allows for both long and short trades depending on market conditions. You can select "Long Only" or "Short Only" if you prefer to trade in one direction.
Bollinger Band Length (bbLength):
Default: 19
Length of the moving average used to calculate the Bollinger Bands.
Bollinger Band Multiplier (bbMultiplier):
Default: 2.0
Multiplier used to calculate the upper and lower bands. A higher multiplier increases the width of the bands, leading to fewer but more significant trades.
Take Profit Multiplier (tpMultiplier):
Default: 2.0
Multiplier used to determine the take-profit level based on the calculated stop-loss. This ensures that the profit target aligns with the selected Risk-to-Reward Ratio.
Risk-to-Reward Ratio (RRR):
Default: 1.0
The ratio used to calculate the take-profit relative to the stop-loss. A higher RRR means larger profit targets.
Trade Automation (JSON Webhooks):
Allows for integration with external systems for automated execution:
Long Entry JSON: Customizable entry condition for long positions.
Long Exit JSON: Customizable exit condition for long positions.
Short Entry JSON: Customizable entry condition for short positions.
Short Exit JSON: Customizable exit condition for short positions.
Entry Logic:
Long Entry:
The strategy enters a long position when:
The Heikin-Ashi candle shows a bullish trend (green close > open).
The price is above the upper Bollinger Band, signaling a breakout.
The previous candle also closed higher than it opened.
Short Entry:
The strategy enters a short position when:
The Heikin-Ashi candle shows a bearish trend (red close < open).
The price is below the lower Bollinger Band, signaling a breakdown.
The previous candle also closed lower than it opened.
Exit Logic:
Take-Profit (TP):
The take-profit level is calculated as a multiple of the distance between the entry price and the stop-loss level, determined by the selected Risk-to-Reward Ratio (RRR).
Stop-Loss (SL):
The stop-loss is placed at the opposite Bollinger Band level (lower for long positions, upper for short positions).
Exit Trigger:
The strategy exits a trade when either the take-profit or stop-loss level is hit.
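A simplified Pine sketch of the entry and exit rules described above (an illustration of the logic as written, not the author's source code; the input names, the Heikin-Ashi request, and the RRR math are assumptions):

//@version=5
strategy("BB Blast HA sketch", overlay = true)

bbLength     = input.int(19, "Bollinger Band Length")
bbMultiplier = input.float(2.0, "Bollinger Band Multiplier")
rrr          = input.float(1.0, "Risk-to-Reward Ratio")

// Heikin-Ashi open/close for trend confirmation.
haO = request.security(ticker.heikinashi(syminfo.tickerid), timeframe.period, open)
haC = request.security(ticker.heikinashi(syminfo.tickerid), timeframe.period, close)

[basis, upper, lower] = ta.bb(close, bbLength, bbMultiplier)

bool haBull = haC > haO and haC[1] > haO[1]     // current and previous HA candles bullish
bool haBear = haC < haO and haC[1] < haO[1]     // current and previous HA candles bearish

bool longEntry  = haBull and close > upper      // breakout above the upper band
bool shortEntry = haBear and close < lower      // breakdown below the lower band

if longEntry and strategy.position_size == 0
    float sl = lower                            // stop at the opposite band
    float tp = close + (close - sl) * rrr       // target scaled by the RRR
    strategy.entry("Long", strategy.long)
    strategy.exit("Long exit", "Long", stop = sl, limit = tp)

if shortEntry and strategy.position_size == 0
    float sl = upper
    float tp = close - (sl - close) * rrr
    strategy.entry("Short", strategy.short)
    strategy.exit("Short exit", "Short", stop = sl, limit = tp)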
Plotting and Visuals:
The Heikin-Ashi candles are displayed on the chart, with green candles for uptrends and red candles for downtrends.
Bollinger Bands (upper, lower, and basis) are plotted for visual reference.
Entry points for long and short trades are marked with green and red labels below and above bars, respectively.
Strategy Alerts:
Alerts are triggered when:
A long entry condition is met.
A short entry condition is met.
A trade exits (either via take-profit or stop-loss).
These alerts can be used to trigger notifications or webhook events for automated trading systems.
Notes:
The strategy is designed for use on intraday charts but can be applied to any timeframe.
It is highly customizable, allowing for tailored risk management and trading windows.
The Sunil BB Blast Heikin Ashi Strategy combines two powerful technical analysis tools (Bollinger Bands and Heikin-Ashi candles) with strong risk management, making it suitable for both beginners and experienced traders.
Feedback from users is welcome.
Heartbeat Momentum Strategy Beta
Overview
The Heartbeat Momentum Strategy is an innovative approach to market analysis that draws inspiration from the rhythmic patterns of a heartbeat. This strategy aims to identify significant momentum shifts in the market by comparing short-term and long-term moving averages, analogous to detecting irregularities in a heartbeat.
Key Concepts
Market Heartbeat: The difference between short-term and long-term moving averages, representing the market's current 'pulse'.
Heartbeat Volatility: Measured by the standard deviation of the market heartbeat.
Momentum Signals: Generated when the heartbeat deviates significantly from its normal range.
How It Works
Calculates a short-term moving average (default 5 periods) and a long-term moving average (default 20 periods) of the closing price.
Computes the 'heartbeat' by subtracting the long-term MA from the short-term MA.
Measures the volatility of the heartbeat using its standard deviation over the long-term period.
Generates buy signals when the heartbeat exceeds 2 standard deviations above its mean.
Generates sell signals when the heartbeat falls 2 standard deviations below its mean.
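Those five steps translate almost directly into Pine; a minimal sketch (variable names are assumptions) is the following:

//@version=5
indicator("Heartbeat momentum sketch", overlay = false)

shortLen  = input.int(5, "Short MA Window")
longLen   = input.int(20, "Long MA Window")
threshold = input.float(2.0, "Standard Deviation Threshold")

float shortMa   = ta.sma(close, shortLen)
float longMa    = ta.sma(close, longLen)
float heartbeat = shortMa - longMa                 // the market's "pulse"

float hbMean = ta.sma(heartbeat, longLen)
float hbStd  = ta.stdev(heartbeat, longLen)

bool buySignal  = heartbeat > hbMean + threshold * hbStd
bool sellSignal = heartbeat < hbMean - threshold * hbStd

plot(heartbeat, "Heartbeat", color = color.teal)
plot(hbMean + threshold * hbStd, "Upper band", color = color.green)
plot(hbMean - threshold * hbStd, "Lower band", color = color.red)
plotshape(buySignal,  style = shape.triangleup,   location = location.bottom, color = color.green)
plotshape(sellSignal, style = shape.triangledown, location = location.top,    color = color.red)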
Indicator Components
Blue Line: Short-term moving average
Red Line: Long-term moving average
Green Triangles: Buy signals
Red Triangles: Sell signals
Background Color: Light green during buy signals, light red during sell signals
Strategy Parameters
Short MA Window: The period for the short-term moving average (default: 5)
Long MA Window: The period for the long-term moving average (default: 20)
Standard Deviation Threshold: The number of standard deviations to trigger a signal (default: 2.0)
Interpretation
Buy Signal: Indicates a potential strong upward momentum shift. Consider opening long positions or closing short positions.
Sell Signal: Suggests a potential strong downward momentum shift. Consider opening short positions or closing long positions.
No Signal: The market is moving within its normal rhythm. Maintain current positions or look for other entry opportunities.
Customization
Users can adjust the strategy parameters to suit different assets, timeframes, or trading styles:
Decrease the MA windows for more frequent signals (more suitable for shorter timeframes).
Increase the MA windows for fewer, potentially more significant signals (better for longer timeframes).
Adjust the Standard Deviation Threshold to fine-tune sensitivity (lower for more signals, higher for fewer but potentially stronger signals).
Risk Management
While this strategy can provide valuable insights into market momentum, it should not be used in isolation:
Always use stop-loss orders to manage potential losses.
Consider the overall market context and other technical/fundamental factors.
Be aware of potential false signals, especially in ranging or highly volatile markets.
Backtest and forward-test the strategy with different parameters before live trading.
Conclusion
The Heartbeat Momentum Strategy offers a unique perspective on market movements by treating price action like a heartbeat. By identifying significant deviations from the normal market rhythm, it aims to capture strong momentum shifts while filtering out market noise. As with any trading strategy, use it as part of a comprehensive trading plan and always practice sound risk management.
Index Options Expirations and Calendar Effects
Features
- Highlights monthly equity options expiration (opex) dates.
- Marks VIX options expiration dates based on standard 30-day offset.
- Shows configurable vanna/charm pre-expiration window (green shading).
- Shows configurable post-opex weakness window (red shading).
- Adjustable colors, start/end offsets, and on/off toggles for each element.
What this does
This overlay highlights option-driven calendar windows around monthly equity options expiration (opex) and VIX options expiration. It draws:
- Solid blue lines on the third Friday of each month (typical monthly opex).
- Dashed orange lines on the Wednesday ~30 days before next month’s opex (typical VIX expiration schedule).
- Green shading during a pre-expiration window when vanna/charm effects are often strongest.
- Red shading during the post-expiration "window of non-strength" often observed into the Tuesday after opex.
How it works
1. Monthly opex is detected when a Friday falls between the 15th and the 21st of the month (the third Friday always lands in this range).
2. VIX expiration is calculated by finding next month’s opex date, then subtracting 30 calendar days and marking that Wednesday.
3. Vanna/charm window (green) : starts on the Monday of the week before opex and ends on Tuesday of opex week.
4. Post-opex weakness window (red) : starts Wednesday of opex week and ends Tuesday after opex.
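For reference, the calendar rules in steps 1 and 2 can be expressed in Pine roughly as follows (a sketch under the stated standard-calendar assumption, ignoring exchange holidays):

//@version=5
indicator("Opex calendar sketch", overlay = true)

// Monthly opex: a Friday that falls on the 15th–21st is the third Friday of its month.
bool isOpexFriday = dayofweek == dayofweek.friday and dayofmonth >= 15 and dayofmonth <= 21

// Next month's opex: scan the 15th–21st of next month for its Friday.
nextMonthOpex() =>
    int y = month == 12 ? year + 1 : year
    int m = month == 12 ? 1 : month + 1
    int result = na
    for d = 15 to 21
        int t = timestamp(y, m, d, 0, 0)
        if na(result) and dayofweek(t) == dayofweek.friday
            result := t
    result

// VIX expiration approximation: 30 calendar days before next month's opex
// (which always lands on a Wednesday).
int vixExpiry = nextMonthOpex() - 30 * 24 * 60 * 60 * 1000
bool isVixWednesday = year(vixExpiry) == year and month(vixExpiry) == month and dayofmonth(vixExpiry) == dayofmonth

bgcolor(isOpexFriday ? color.new(color.blue, 80) : na, title = "Monthly opex")
bgcolor(isVixWednesday ? color.new(color.orange, 80) : na, title = "VIX expiration")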
How to use
- Add to any chart/timeframe.
- Adjust inputs to toggle VIX/opex lines, choose colors, and fine-tune the start/end offsets for shaded windows.
- This is an educational visualization of typical timing and not a trading signal.
Limitations
- Exchange holidays and contract-specific exceptions can shift expirations; this script uses standard calendar rules.
- No forward-looking data is used; all dates are derived from historical and current bar time.
- Past patterns do not guarantee future behavior.
Originality
Provides a single, adjustable visualization combining opex, VIX expiration, and configurable vanna/charm/weakness windows into one tool. Fully explained so non-coders can use it without reading the source code.
Market Flow Volatility Oscillator (AiBitcoinTrend)
The Market Flow Volatility Oscillator (AiBitcoinTrend) is a cutting-edge technical analysis tool designed to evaluate and classify market volatility regimes. By leveraging Gaussian filtering and clustering techniques, this indicator provides traders with clear insights into periods of high and low volatility, helping them adapt their strategies to evolving market conditions. Built for precision and clarity, it combines advanced mathematical models with intuitive visual feedback to identify trends and volatility shifts effectively.
👽 How the Indicator Works
👾 Volatility Classification with Gaussian Filtering
The indicator detects volatility levels by applying Gaussian filters to the price series. Gaussian filters smooth out noise while preserving significant price movements. Traders can adjust the smoothing levels using sigma parameters, enabling greater flexibility:
Low Sigma: Emphasizes short-term volatility.
High Sigma: Captures broader trends with reduced sensitivity to small fluctuations.
👾 Clustering Algorithm for Regime Detection
The core of this indicator is its clustering model, which classifies market conditions into two distinct regimes:
Low Volatility Regime: Calm periods with reduced market activity.
High Volatility Regime: Intense periods with heightened price movements.
The clustering process works as follows:
A rolling window of data is analyzed to calculate the standard deviation of price returns.
Two cluster centers are initialized using the 25th and 75th percentiles of the data distribution.
Each price volatility value is assigned to the nearest cluster based on its distance to the centers.
The cluster centers are refined iteratively, providing an accurate and adaptive classification.
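A compact sketch of that assignment step (my reading of the description, shown without the iterative center refinement):

//@version=5
indicator("Volatility regime sketch", overlay = false)

window = input.int(100, "Window Size")

// Volatility proxy: standard deviation of log returns over the rolling window.
float ret = math.log(close / close[1])
float vol = ta.stdev(ret, window)

// Initialize the cluster centers at the 25th and 75th percentiles of recent volatility.
float lowCenter  = ta.percentile_linear_interpolation(vol, window, 25)
float highCenter = ta.percentile_linear_interpolation(vol, window, 75)

// Assign the current volatility to the nearest center; the full algorithm would then
// recompute each center from its assigned members and repeat until stable.
bool highRegime = math.abs(vol - highCenter) < math.abs(vol - lowCenter)

plot(vol, "Volatility", color = highRegime ? color.red : color.green)
plot(lowCenter,  "Low-vol center",  color = color.new(color.green, 50))
plot(highCenter, "High-vol center", color = color.new(color.red, 50))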
👾 Oscillator Generation with Slope R-Values
The indicator computes Gaussian filter slopes to generate oscillators that visualize trends:
Oscillator Low: Captures low-frequency market behavior.
Oscillator High: Tracks high-frequency, faster-changing trends.
The slope is measured using the R-value of the linear regression fit, scaled and adjusted for easier interpretation.
👽 Applications
👾 Trend Trading
When the oscillator rises above 0.5, it signals potential bullish momentum, while dips below 0.5 suggest bearish sentiment.
👾 Pullback Detection
Oscillator peaks, especially in overbought or oversold zones, provide early warnings of potential reversals.
👽 Indicator Settings
👾 Oscillator Settings
Sigma Low/High: Controls the smoothness of the oscillators.
Smaller Values: React faster to price changes but introduce more noise.
Larger Values: Provide smoother signals with longer-term insights.
👾 Window Size and Refit Interval
Window Size: Defines the rolling period for cluster and volatility calculations.
Shorter windows: adapt faster to market changes.
Longer windows: produce stable, reliable classifications.
Disclaimer: This information is for entertainment purposes only and does not constitute financial advice. Please consult with a qualified financial advisor before making any investment decisions.
Poisson Projection of Price Levels
### **Poisson Projection of Price Levels**
**Overview:**
The *Poisson Projection of Price Levels* is a cutting-edge technical indicator designed to identify and visualize potential support and resistance levels based on historical price interactions. By leveraging the Poisson distribution, this tool dynamically adjusts the significance of each price level's past "touches" to project future interactions with varying degrees of probability. This probabilistic approach offers traders a nuanced view of where price levels may hold or react in upcoming bars, enhancing both analysis and trading strategies.
---
**🔍 **Math & Methodology**
1. **Strata Levels:**
- **Definition:** Strata are horizontal lines spaced evenly around the current closing price.
- **Calculation:** \(\text{Level}_i = \text{Close} \pm i \times \text{Strata Increment}\), where \(i\) ranges from 0 to \(\text{Strata Count} - 1\).
2. **Forecast Iterations:**
- **Structure:** The indicator projects five forecast iterations into the future, each spaced by a Fibonacci sequence of bars: 2, 3, 5, 8, and 13 bars ahead. This spacing is inspired by the Fibonacci sequence, which is prevalent in financial market analysis for identifying key levels.
- **Purpose:** Each iteration represents a distinct forecast point where the price may interact with the strata, allowing for a multi-step projection of potential price levels.
3. **Touch Counting:**
- **Definition:** A "touch" occurs when the closing price of a bar is within half the increment of a stratum level.
- **Process:** For each stratum and each forecast iteration, the indicator counts the number of touches within a specified lookback window (e.g., 80 bars), offset by the forecasted position. This ensures that each iteration's touch count is independent and contextually relevant to its forecast horizon.
- **Adjustment:** Each forecast iteration analyzes a unique segment of the lookback window, offset by its forecasted position to ensure independent probability calculations.
4. **Poisson Probability Calculation:**
- **Formula:** with \(\lambda\) equal to the number of touches counted in the lookback window, \(p(k=1) = \lambda \cdot e^{-\lambda}\).
- **Interpretation:** \(p(k=1)\) represents the probability of exactly one touch occurring within the lookback window for each stratum and iteration.
- **Application:** This probability is used to determine the transparency of each stratum line, where higher probabilities result in more opaque (less transparent) lines, indicating stronger historical significance.
5. **Transparency Mapping:**
- **Calculation:** the Poisson probability \(p(k=1)\) is scaled against the user-defined threshold and converted into a line transparency value.
- **Purpose:** Maps the Poisson probability to a visual transparency level, enhancing the readability of significant strata levels.
- **Outcome:** Strata with higher probabilities (more historical touches) appear more opaque, while those with lower probabilities appear fainter.
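Putting the pieces together, the per-stratum probability and its transparency mapping could be sketched like this (an illustration consistent with the formulas above; the touch counting and exact scaling in the published script may differ):

//@version=5
indicator("Poisson level probability sketch", overlay = true)

increment = input.float(0.1, "Strata Increment")
lookback  = input.int(80, "Lookback Window")
threshold = input.float(25.0, "Probability Transparency Threshold (%)") / 100.0

// Count touches of a stratum level: closes within half an increment of the level.
countTouches(level) =>
    int touches = 0
    for i = 1 to lookback
        if math.abs(close[i] - level) <= increment / 2
            touches += 1
    touches

// Poisson probability of exactly one touch: p(k = 1) = lambda * e^(-lambda).
poissonOne(lambda) =>
    lambda * math.exp(-lambda)

// Example: a single stratum at the rounded close.
float level = math.round(close / increment) * increment
float p     = poissonOne(countTouches(level))

// Higher probability -> lower transparency (more opaque line).
int transp = int(math.max(0.0, 100.0 - math.min(1.0, p / threshold) * 100.0))
plot(level, "Stratum", color = color.new(color.blue, transp), style = plot.style_linebr)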
---
**📊 **Comparability to Standard Techniques**
1. **Support and Resistance Levels:**
- **Traditional Approach:** Traders identify support and resistance based on historical price reversals, pivot points, or psychological price levels.
- **Poisson Projection:** Automates and quantifies this process by statistically analyzing the frequency of price interactions with specific levels, providing a probabilistic measure of significance.
2. **Statistical Modeling:**
- **Standard Models:** Techniques like Moving Averages, Bollinger Bands, or Fibonacci Retracements offer dynamic and rule-based levels but lack direct probabilistic interpretation.
- **Poisson Projection:** Introduces a discrete event probability framework, offering a unique blend of statistical rigor and visual clarity that complements traditional indicators.
3. **Event-Based Analysis:**
- **Financial Industry Practices:** Event studies and high-frequency trading models often use Poisson processes to model order arrivals or price jumps.
- **Indicator Application:** While not identical, the use of Poisson probabilities in this indicator draws inspiration from event-based modeling, applying it to the context of price level interactions.
---
**💡 **Strengths & Advantages**
1. **Innovative Visualization:**
- Combines statistical probability with traditional support/resistance visualization, offering a fresh perspective on price level significance.
2. **Dynamic Adaptability:**
- Parameters like strata increment, lookback window, and probability threshold are user-defined, allowing customization across different markets and timeframes.
3. **Independent Probability Calculations:**
- Each forecast iteration calculates its own Poisson probability, ensuring that projections are contextually relevant and independent of other iterations.
4. **Clear Visual Cues:**
- Transparency-based coloring intuitively highlights significant price levels, making it easier for traders to identify key areas of interest at a glance.
---
**⚠️ **Limitations & Considerations**
1. **Poisson Assumptions:**
- Assumes that touches occur independently and at a constant average rate (\(\lambda\)), which may not always align with market realities characterized by trends and volatility clustering.
2. **Computational Intensity:**
- Managing multiple iterations and strata can be resource-intensive, potentially affecting performance on lower-powered devices or with very high lookback windows.
3. **Interpretation Complexity:**
- While transparency offers visual clarity, understanding the underlying probability calculations requires a basic grasp of Poisson statistics, which may be a barrier for some traders.
---
**📢 **How to Use It**
1. **Add to TradingView:**
- Open TradingView and navigate to the Pine Script Editor.
- Paste the script above and click **Add to Chart**.
2. **Configure Inputs:**
- **Strata Increment:** Set the desired price step between strata (e.g., `0.1` for 10 cents).
- **Lookback Window:** Define how many past bars to consider for calculating Poisson probabilities (e.g., `80`).
- **Probability Transparency Threshold (%):** Set the threshold percentage to map probabilities to line transparency (e.g., `25%`).
3. **Understand the Forecast Iterations:**
- The indicator projects five forecast points into the future at bar spacings of 2, 3, 5, 8, and 13 bars ahead.
- Each iteration independently calculates its Poisson probability based on the touch counts within its specific lookback window offset by its forecasted position.
4. **Interpret the Visualization:**
- **Opaque Lines:** Indicate higher Poisson probabilities, suggesting historically significant price levels that are more likely to interact again.
- **Fainter Lines:** Represent lower probabilities, indicating less historically significant levels that may be less likely to interact.
- **Forecast Spacing:** The spacing of 2, 3, 5, 8, and 13 bars ahead aligns with Fibonacci principles, offering a natural progression in forecast horizons.
5. **Apply to Trading Strategies:**
- **Support/Resistance Identification:** Use the opaque lines as potential support and resistance levels for placing trades.
- **Entry and Exit Points:** Anticipate price interactions at forecasted levels to plan strategic entries and exits.
- **Risk Management:** Utilize the transparency mapping to determine where to place stop-loss and take-profit orders based on the probability of price interactions.
6. **Customize as Needed:**
- Adjust the **Strata Increment** to fit different price ranges or volatility levels.
- Modify the **Lookback Window** to capture more or fewer historical touches, adapting to different timeframes or market conditions.
- Tweak the **Probability Transparency Threshold** to control the sensitivity of transparency mapping to Poisson probabilities.
**📈 **Practical Applications**
1. **Identifying Key Levels:**
- Quickly visualize which price levels have historically had significant interactions, aiding in the identification of potential support and resistance zones.
2. **Forecasting Price Reactions:**
- Use the forecast iterations to anticipate where price may interact in the near future, assisting in planning entry and exit points.
3. **Risk Management:**
- Determine areas of high probability for price reversals or consolidations, enabling better placement of stop-loss and take-profit orders.
4. **Market Analysis:**
- Assess the strength of market levels over different forecast horizons, providing a multi-layered understanding of market structure.
---
**🔗 **Conclusion**
The *Poisson Projection of Price Levels* bridges the gap between statistical modeling and traditional technical analysis, offering traders a sophisticated tool to quantify and visualize the significance of price levels. By integrating Poisson probabilities with dynamic transparency mapping, this indicator provides a unique and insightful perspective on potential support and resistance zones, enhancing both analysis and trading strategies.
---
**📞 **Contact:**
For support or inquiries, please contact me on TradingView!
---
**📢 **Join the Conversation!**
Have questions, feedback, or suggestions for further enhancements? Feel free to comment below or reach out directly. Your input helps refine and evolve this tool to better serve the trading community.
---
**Happy Trading!** 🚀
Quick scan for signal
🙏🏻 Hey TV, this is QSFS, following:
^^ Quick scan for drift (QSFD)
^^ Quick scan for cycles (QSFC)
As mentioned before, ML trading is all about spotting any kind of non-randomness, and this metric (along with 2 previously posted) gonna help ya'll do it fast. This one will show you whether your time series possibly exhibits mean-reverting / consistent / noisy behavior, that can be later confirmed or denied by more sophisticated tools. This metric is O(n) in windowed mode and O(1) if calculated incrementally on each data update, so you can scan Ks of datasets w/o worrying about melting da ice.
^^ windowed mode
Now the post will be divided into several sections, and a couple of things I guess you’ve never seen or thought about in your life:
1) About Efficiency Ratios posted there on TV;
Some of you might say this is the Efficiency Ratio you’ve seen in Perry's book. Firstly, I can assure you that neither me nor Perry, just as X amount of quants all over the world and who knows who else, would say smth like, "I invented it," lol. This is just a thing you R&D when you need it. Secondly, I invite you (and mods & admin as well) to take a lil glimpse at the following screenshot:
^^ not cool...
So basically, all the Efficiency Ratios that were copypasted to our platform suffer the same bug: dudes don’t know how indexing works in Pine Script. I mean, it’s ok, I been doing the same mistakes as well, but loxx, cmon bro, you... If you guys ever read it, the lines 20 and 22 in da code are dedicated to you xD
2) About the metric;
This supports both moving window mode when Length > 0 and all-data expanding window mode when Length < 1, calculating incrementally from the very first data point in the series: O(n) on history, O(1) on live updates.
Now, why do I SQRT transform the result? This is a natural action since the metric (being a ratio in essence) is bounded between 0 and 1, so it can be modeled with a beta distribution. When you SQRT transform it, it still stays beta (think what happens when you apply a square root to 0.01 or 0.99), but it becomes symmetric around its typical value and starts to follow a bell-shaped curve. This can be easily checked with a normality test or by applying a set of percentiles and seeing the distances between them are almost equal.
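For reference, a bare-bones windowed version of such a ratio with the square-root transform looks roughly like this (a generic sketch, not the incremental O(1) implementation described here):

//@version=5
indicator("Sqrt-transformed efficiency ratio sketch", overlay = false)

length = input.int(50, "Length")

// Net movement over the window divided by the sum of absolute bar-to-bar movements.
float netMove   = math.abs(close - close[length])
float totalMove = math.sum(math.abs(ta.change(close)), length)
float er        = totalMove == 0 ? 0.0 : netMove / totalMove

// Square-root transform: stays in [0, 1] but becomes symmetric and bell-shaped.
plot(math.sqrt(er), "sqrt(ER)", color = color.teal)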
Then I noticed that on different moving window sizes, the typical value of the metric seems to slide: higher window sizes lead to lower typical values across the moving windows. Turned out this can be modeled the same way confidence intervals are made. Lines 34 and 35 explain it all, I guess. You can see smth alike on an autocorrelogram. These two match the mean & mean + 1 stdev applied to the metric. This way, we’ve just magically received data to estimate alpha and beta parameters of the beta distribution using the method of moments. Having alpha and beta, we can now estimate everything further. Btw, there’s an alternative parameterization for beta distributions based on data length.
Now what you’ll see next is... u guys actually have no idea how deep and unrealistically minimalistic the underlying math principles are here.
I’m sure I’m not the only one in the universe who figured it out, but the thing is, it’s nowhere online or offline. By calculating higher-order moments & combining them, you can find natural adaptive thresholds that can later be used for anomaly detection/control applications for any data. No hardcoded thresholds, purely data-driven. Imma come back to this in one of the next drops, but the truest ones can already see it in this code. This way we get dem thresholds.
Your main thresholds are: basis, upper, and lower deviations. You can follow the common logic I’ve described in my previous scripts on how to use them. You just register an event when the metric goes higher/lower than a certain threshold based on what you’re looking for. Then you take the time series and confirm a certain behavior you were looking for by using an appropriate stat test. Or just run a certain strategy.
To avoid numerous triggers when the metric jitters around a threshold, you can follow this logic: forget about one threshold if touched, until another threshold is touched.
In general, when the metric gets higher than certain thresholds, like upper deviation, it means the signal is stronger than noise. You confirm it with a more sophisticated tool & run momentum strategies if drift is in place, or volatility strategies if there’s no drift in place. Otherwise, you confirm & run ~ mean-reverting strategies, regardless of whether there’s drift or not. Just don’t operate against the trend—hedge otherwise.
3) Flex;
Extension and limit thresholds based on distribution moments gonna be discussed properly later, but now you can see this:
^^ magic
Look at the thresholds—adaptive and dynamic. Do you see any optimizations? No ML, no DL, closed-form solution, but how? Just a formula based on a couple of variables? Maybe it’s just how the Universe works, but how can you know if you don’t understand how fundamentally numbers 3 and 15 are related to the normal distribution? Hm, why do they always say 3 sigmas but can’t say why? Maybe you can be different and say why?
This is the primordial power of statistical modeling.
4) Thanks;
I really wanna dedicate this to Charlotte de Witte & Marion Di Napoli, and their new track "Sanctum." It really gets you connected to the Source—I had it in my soul when I was doing all this ∞
BTC/USD Confluence Breakout Pro – IST Edition
BTC/USD Confluence Breakout Pro – IST Edition is a multi-factor breakout trading system designed for intraday and swing traders.
It combines trend, momentum, price action, volume, and candlestick analysis with time-based volatility windows to deliver high-probability Buy/Sell signals.
Key Features:
Trend Filters: EMA 9/21 crossover + optional EMA 200 bias filter.
Price Action Breakouts: Detects closes above/below the last N bars’ range.
Candlestick Patterns: Bullish/Bearish engulfing, hammer, and shooting star.
Momentum Indicators: RSI (14) with configurable thresholds, MACD (12/26/9).
Volume Confirmation: Volume spike vs 20-period SMA.
IST Breakout Windows: Highlights Early London, London–US Overlap, and US Open momentum periods (Hyderabad/IST time). Optionally restricts signals to these windows.
Risk Management: ATR-based stop-loss + auto-plotted 1R, 2R, and 3R take-profit levels.
Visual Aids: EMA plots, bar coloring, shaded volatility windows, and clear entry/exit labels.
Alerts: Configurable alerts for both Buy and Sell signals.
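A stripped-down sketch of how such filters can be combined into a single long condition (illustrative only; the thresholds, names, and the example session window are assumptions, and the published script layers in more conditions plus the short side):

//@version=5
indicator("Confluence breakout sketch", overlay = true)

emaFast = ta.ema(close, 9)
emaSlow = ta.ema(close, 21)
emaBias = ta.ema(close, 200)
rsiVal  = ta.rsi(close, 14)
[macdLine, signalLine, histLine] = ta.macd(close, 12, 26, 9)

bool volSpike   = volume > ta.sma(volume, 20) * 1.5
bool breakoutUp = close > ta.highest(high, 20)[1]
// Example IST window (placeholder session string; adjust to the desired window).
bool inWindow   = not na(time(timeframe.period, "1900-2130", "Asia/Kolkata"))

bool momentumOk = rsiVal > 50 and macdLine > signalLine
bool longSignal = ta.crossover(emaFast, emaSlow) and close > emaBias and momentumOk and volSpike and breakoutUp and inWindow

plotshape(longSignal, style = shape.triangleup, location = location.belowbar, color = color.green, text = "Long")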
Best Use:
Apply on 1m–15m charts for intraday trading or 1H–4H for swings.
Works best during high-volatility IST windows (London–US overlap & US open).
Ideal for BTC/USD but adaptable to other crypto or forex pairs.
40 Ticker Cross-Sectional Z-Scores [BackQuant]
BackQuant’s 40 Ticker Cross-Sectional Z-Scores is a powerful portfolio management strategy that analyzes the relative performance of up to 40 different assets, comparing them on a cross-sectional basis to identify the top and bottom performers. This indicator computes Z-scores for each asset based on their log returns and evaluates them relative to the mean and standard deviation over a rolling window. The Z-scores represent how far an asset's return deviates from the average, and these values are used to rank the assets, allowing for dynamic asset allocation based on performance.
By focusing on the strongest-performing assets and avoiding the weakest, this strategy aims to enhance returns while managing risk. Additionally, by adjusting for standard deviations, the system offers a risk-adjusted method of ranking assets, making it suitable for traders who want to dynamically allocate capital based on performance metrics rather than just price movements.
Key Features
1. Cross-Sectional Z-Score Calculation:
The system calculates Z-scores for 40 different assets, evaluating their log returns against the mean and standard deviation over a rolling window. This enables users to assess the relative performance of each asset dynamically, highlighting which assets are performing better or worse compared to their historical norms. The Z-score is a useful statistical tool for identifying outliers in asset performance.
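In Pine terms, the per-asset score described here boils down to something like the following (a single-symbol sketch; the full system repeats this across all 40 requested tickers and then ranks the results):

//@version=5
indicator("Cross-sectional z-score sketch", overlay = false)

sym    = input.symbol("BINANCE:BTCUSDT", "Ticker")
window = input.int(30, "Rolling Calculation Window")

// Log return of the requested symbol, z-scored against its rolling mean and stdev.
float px  = request.security(sym, timeframe.period, close)
float ret = math.log(px / px[1])
float z   = (ret - ta.sma(ret, window)) / ta.stdev(ret, window)

plot(z, "Z-score", color = z > 0 ? color.green : color.red)
hline(0)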
2. Asset Ranking and Allocation:
The system ranks assets based on their Z-scores and allocates capital to the top performers. It identifies the top and bottom assets, and traders can allocate capital to the top-performing assets, ensuring that their portfolio is aligned with the best performers. Conversely, the bottom assets are removed from the portfolio, reducing exposure to underperforming assets.
3. Rolling Window for Mean and Standard Deviation Calculations:
The Z-scores are calculated based on rolling means and standard deviations, making the system adaptive to changing market conditions. This rolling calculation window allows the strategy to adjust to recent performance trends and minimize the impact of outdated data.
4. Mean and Standard Deviation Visualization:
The script provides real-time visualizations of the mean (x̄) and standard deviation (σ) of asset returns, helping traders quickly identify trends and volatility in their portfolio. These visual indicators are useful for understanding the current market environment and making more informed allocation decisions.
5. Top & Bottom Performer Tables:
The system generates tables that display the top and bottom performers, ranked by their Z-scores. Traders can quickly see which assets are outperforming and underperforming. These tables provide clear and actionable insights, helping traders make informed decisions about which assets to include in their portfolio.
6. Customizable Parameters:
The strategy allows traders to customize several key parameters, including:
Rolling Calculation Window: Set the window size for the rolling mean and standard deviation calculations.
Top & Bottom Tickers: Choose how many of the top and bottom assets to display and allocate capital to.
Table Orientation: Select between vertical or horizontal table formats to suit the user’s preference.
7. Forward Test & Out-of-Sample Testing:
The system includes out-of-sample forward tests, ensuring that the strategy is evaluated based on real-time performance, not just historical data. This forward testing approach helps validate the robustness of the strategy in dynamic market conditions.
8. Visual Feedback and Alerts:
The system provides visual feedback on the current asset rankings and allocations, with dynamic labels and plots on the chart. Additionally, users receive alerts when allocations change, keeping them informed of important adjustments.
9. Risk Management via Z-Scores and Std Dev:
The system’s approach to asset selection is based on Z-scores, which normalize performance relative to the historical mean. By incorporating standard deviation, it accounts for the volatility and risk associated with each asset. This allows for more precise risk management and portfolio construction.
10. Note on Mean Reversion Strategy:
If you take the inverse of the signals provided by this indicator, the strategy can be used for mean-reversion rather than trend-following. This would involve buying the underperforming assets and selling the outperforming ones. However, it's important to note that this approach does not work well with highly correlated assets, as the relationship between the assets could result in the same directional movement, undermining the effectiveness of the mean-reversion strategy.
References
www.uts.edu.au
onlinelibrary.wiley.com
www.cmegroup.com
Final Thoughts
The 40 Ticker Cross-Sectional Z-Scores strategy offers a data-driven approach to portfolio management, dynamically allocating capital based on the relative performance of assets. By using Z-scores and standard deviations, this strategy ensures that capital is directed to the strongest performers while avoiding weaker assets, ultimately improving the risk-adjusted returns of the portfolio. Whether you’re focused on trend-following or looking to explore mean-reversion strategies, this flexible system can be tailored to suit your investment goals.
Gold Opening 15-Min ORB INDICATOR by Adé
This indicator is designed for trading Gold (XAUUSD) during the first 15 minutes of major market openings: Asian, European, and US sessions. It highlights these key time windows, plots the high and low ranges of each session, and generates breakout-based buy/sell signals. Ideal for traders focusing on volatility at market opens.
Features:
Session Windows:
Asian: 1:00–1:15 AM Barcelona time (23:00–23:15 UTC, CEST-adjusted).
European: 9:00–9:15 AM Barcelona time (07:00–07:15 UTC).
US: 3:30–3:45 PM Barcelona time (13:30–13:45 UTC).
Marked with yellow (Asian), green (Europe), and blue (US) triangles below bars.
High/Low Ranges:
Plots horizontal lines showing the highest high and lowest low of each session’s first 15 minutes. Lines appear after each session ends and persist until the next day, color-coded to match the sessions.
Breakout Signals:
Buy (Long): Triggers when the closing price breaks above the highest high of the previous 5 bars during a session window (lime triangle above bar).
Sell (Short): Triggers when the closing price breaks below the lowest low of the previous 5 bars during a session window (red triangle below bar).
Signals are restricted to the 15-minute session periods for focused trading.
Usage:
Timeframe: Optimized for 1-minute XAUUSD charts.
Timezone: Set your chart to UTC for accurate session timing (script uses UTC internally, based on Barcelona CEST, UTC+2 in April).
Strategy: Use buy/sell signals for breakout trades during volatile market opens, with session ranges as support/resistance levels.
Customization: Adjust the lookback variable (default: 5) to tweak signal sensitivity.
Notes:
Tested for April 2025 (CEST, UTC+2). Adjust timestamp values if using outside daylight saving time (CET, UTC+1) or for different broker timezones.
Best for scalping or short-term trades during high-volatility periods. Combine with other indicators for confirmation if desired.
How to Use:
Apply to a 1-minute XAUUSD chart.
Watch for session markers (triangles) and breakout signals during the 15-minute windows.
Use the high/low lines to gauge potential breakout targets or reversals.
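The breakout rule itself, restricted to a session window, can be sketched as follows (a minimal reconstruction of the described logic for the US window only; the published script handles all three sessions and plots the range lines as well):

//@version=5
indicator("Opening 15-min breakout sketch", overlay = true)

lookback = input.int(5, "Breakout lookback")

// First 15 minutes of the US open in UTC; the other sessions work the same way.
bool inUsWindow = not na(time(timeframe.period, "1330-1345", "UTC"))

bool buySignal  = inUsWindow and close > ta.highest(high, lookback)[1]
bool sellSignal = inUsWindow and close < ta.lowest(low, lookback)[1]

plotshape(buySignal,  style = shape.triangleup,   location = location.abovebar, color = color.lime)
plotshape(sellSignal, style = shape.triangledown, location = location.belowbar, color = color.red)
bgcolor(inUsWindow ? color.new(color.blue, 85) : na, title = "US open window")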