TOP 20 ALTCOIN INDEX
Indicator Description
The "ALT20 INDEX" is a financial analysis tool designed to track the aggregate value of the top 20 cryptocurrencies by market capitalization and closing prices over specific periods. This indicator reflects changes in the combined value of these 20 ALTCOINs, providing an overview of trends in the cryptocurrency market.
=================================
Purpose and Practical Applications
1. Tracking Top Cryptocurrencies:
- The indicator allows monitoring the value of the top 20 ALTCOINs, reflecting the general volatility of the cryptocurrency market.
- Helps investors focus on high-capitalization assets.
2. Performance Comparison:
- Serves as a tool to compare the performance of the ALT20 group against other assets like Bitcoin, Ethereum, or traditional financial indices.
3. Assessing Market Health:
- Enables evaluation of market trends, identifying growth or decline periods.
4. Practical Applications:
- Suitable for fund managers, long-term investors, or trend traders to make decisions based on the overall ALTCOIN market performance.
-------------------------------------------
How the Indicator Works
1. Selection of Top 20 ALTCOINs:
- Cryptocurrencies are selected based on their market capitalization at each rebalancing period.
2. Weight Allocation and Calculation:
- Weight: Determined by the market capitalization of each ALTCOIN relative to the total market capitalization of the top 20.
- Token Quantity: Calculated based on weight, total allocation points (e.g., 100 points for T1, 722.63 points for T2, etc.), and each ALTCOIN's closing price.
Formula: Token Quantity = (Weight × Total Allocation Points) / Closing Price
3. Periodic Rebalancing:
- Rebalancing frequency: Once a year.
- At each rebalancing period, the weights and token quantities are adjusted based on new market capitalization and prices.
4. Portfolio Value Calculation:
- The value of each ALTCOIN is calculated as:
Token Value = Closing Price × Token Quantity
- Index Total: ALT20 Index = Σ Token Value_i, summed over the 20 constituents (i = 1 to 20)
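For illustration, here is a minimal Pine Script sketch of the per-constituent arithmetic; the symbol, weight, and point total are placeholder values, and the full index repeats this for all 20 ALTCOINs, fixing quantities at each rebalance:
//@version=5
indicator("ALT20 constituent sketch")
float points_T1 = 100.0  // total allocation points for period T1
float weight = 0.15      // placeholder: constituent market cap / total top-20 market cap
float px = request.security("BINANCE:ETHUSDT", "D", close)
// Token Quantity = (Weight × Total Allocation Points) / Closing Price, fixed at the rebalance bar
var float qty = na
if na(qty)
    qty := weight * points_T1 / px
plot(px * qty, "Token Value")  // the index sums this value across all 20 constituents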
------------------------------------------
Rebalancing Periods
T1 (2020-2021): Initial period, token quantities calculated based on weights and a total of 100 points.
T2 (2021-2022): Rebalanced with a total allocation of 722.63 points.
T3 (2022-2023): Total allocation of 252.26 points, reflecting portfolio adjustments based on new prices and market caps.
T4 (2023-2024): Total allocation of 261.43 points.
T5 (2024-Present): Total allocation of 437.42 points, updated to reflect the current market.
-----------------------------------------
Indicator Features
- Displays Index Value Over Time:
+ index_value_T1 to index_value_T5 represent the portfolio value during specific timeframes.
+ Values are calculated based on the daily closing prices of ALTCOINs.
- Visualization:
+ The index for each period is plotted on the chart, enabling easy observation of market trends over time.
---------------------------------------
Practical Applications
- Portfolio Management:
+ The indicator helps track the performance of asset groups within the ALTCOIN portfolio.
- Integration into Trading Systems:
+ Used as a reference for automated or manual trading strategies.
- Market Analysis:
+ Assists analysts in evaluating cryptocurrency market movements based on the top 20 ALTCOINs.
Let me know if further optimization or additional information is needed. Thanks!
M2 Money Shift for Bitcoin [SAKANE]
M2 Money Shift for Bitcoin was developed to visualize the impact of M2 Money, a macroeconomic indicator, on the Bitcoin market and to support trade analysis.
Bitcoin price fluctuations have a certain correlation with cycles in the M2 money supply. In particular, it has been noted that changes in M2 supply can affect the Bitcoin price 70 days in advance. Very high correlations have been observed in recent years in particular, making it useful as a supplemental analytical tool for trading.
Support for M2 data from multiple countries
M2 supply data from the U.S., Europe, China, Japan, the U.K., Canada, Australia, and India are integrated and all are displayed in U.S. dollar equivalents.
Slide function
Using the "Slide Days Forward" setting, M2 data can be slid up to 500 days, allowing for flexible analysis that takes into account the time difference from the bitcoin price.
Plotting Total Liquidity
Plots total liquidity (in trillions of dollars) by summing the M2 supply of multiple countries.
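As a rough sketch of the slide mechanic (the ECONOMICS:USM2 ticker and the trillion-dollar scaling are my assumptions; the full script aggregates several countries and converts each to U.S. dollars):
//@version=5
indicator("M2 slide sketch")
int slideDays = input.int(70, "Slide Days Forward", minval = 0, maxval = 500)
float usM2 = request.security("ECONOMICS:USM2", "D", close)
// Shift the M2 plot forward to visualize its lead over the Bitcoin price
plot(usM2 / 1e12, "US M2 (trillions USD)", offset = slideDays)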
How to use
1. After applying the indicator to the chart, activate the M2 data for the required countries from the settings screen.
2. Adjust "Slide Days Forward" to analyze the relationship between changes in M2 supply and the Bitcoin price.
3. Refer to the Total Liquidity plot to build a trading strategy that takes into account macroeconomic influences.
Notes
This indicator is an auxiliary tool for trade analysis and does not guarantee future price trends.
The relationship between M2 supply and bitcoin price depends on many factors and should be used in conjunction with other analysis methods.
N-Degree Moment-Based Adaptive Detection
🙏🏻 N-Degree Moment-Based Adaptive Detection (NDMBAD) method is a generalization of MBAD, since the horizontal line fit passing through the data's mean can be simply treated as zero-degree polynomial regression. We can extend the MBAD logic to higher-degree polynomial regression.
I don't think I need to talk a lot about the thing there; the logic is really the same as in MBAD, just hit the link above and read if you want. The only difference is now we can gather cumulants not only from the horizontal mean fit (degree = 0) but also from higher-order polynomial regression fit, including linear regression (degree = 1).
Why?
Simply because residuals from the 0-degree model don't contain trend information, and while in some cases that's exactly what you need, in other cases, you want to model your trend explicitly. Imagine your underlying process trends in a steady manner, and you want to control the extreme deviations from the process's core. If you're going to use 0-degree, you'll be treating this beautiful steady trend as a residual itself, which "constantly deviates from the process mean." It doesn't make much sense.
How?
First, if you set the length to 0, you will end up with the function incrementally applied to all your data starting from bar_index 0. This can be called the expanding window mode. That's the functionality I include in all my scripts lately (where it makes sense). As I said in the MBAD description, choosing length is a matter of doing business & applied use of my work, but I think I'm open to talk about it.
I don't see much sense in using degree > 1 though (still in research on it). If you have dem curves, you can use Fourier transform -> spectral filtering / harmonic regression (regression with Fourier terms). The job of a degree > 0 is to model the direction in data, and degree 1 gets it done. In mean reversion strategies, it means that you don't wanna put 0-degree polynomial regression (i.e., the mean) on non-stationary trending data in moving window mode because, this way, your residuals will be contaminated with the trend component.
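To make the degree-0 vs. degree-1 distinction concrete, here is a minimal moving-window sketch (the length is an arbitrary placeholder):
//@version=5
indicator("Degree 0 vs degree 1 residuals")
int len = input.int(100, minval = 2)
float fit0 = ta.sma(close, len)        // degree 0: horizontal mean fit
float fit1 = ta.linreg(close, len, 0)  // degree 1: linear regression fit
plot(close - fit0, "residual, degree 0", color.red)    // still contaminated by the trend
plot(close - fit1, "residual, degree 1", color.green)  // trend modelled out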
By the way, you can send thanks to @aaron294c , he said like mane MBAD is dope, and it's gonna really complement his work, so I decided to drop NDMBAD now, gonna be more useful since it covers more types of data.
I wanted to call it N-Order Moment Adaptive Detection because it abbreviates to NOMAD, which sounds cool and suits me well, because when I perform as a fire dancer, nomad style is one of my outfits. Burning Man stuff vibe, you know. But the problem is degree and order really mean two different things in the polynomial context, so gotta stay right & precise—that's the priority.
∞
AadTrend [InvestorUnknown]
The AadTrend indicator is an experimental trading tool that combines a user-selected moving average with the Average Absolute Deviation (AAD) from this moving average. This combination works similarly to the Supertrend indicator but offers additional flexibility and insights. In addition to generating Long and Short signals, the AadTrend indicator identifies RISK-ON and RISK-OFF states for each trade direction, highlighting areas where taking on more risk may be considered.
Core Concepts and Features
Moving Average (User-Selected Type)
The indicator allows users to select from various types of moving averages to suit different trading styles and market conditions:
Simple Moving Average (SMA)
Exponential Moving Average (EMA)
Hull Moving Average (HMA)
Double Exponential Moving Average (DEMA)
Triple Exponential Moving Average (TEMA)
Relative Moving Average (RMA)
Fractal Adaptive Moving Average (FRAMA)
Average Absolute Deviation (AAD)
The Average Absolute Deviation measures the average distance between each data point and the mean, providing a robust estimation of volatility.
aad(series float src, simple int length, simple string avg_type) =>
    // Moving average as selected by the user; maSelect() is the avg_type switch
    // factored out of the AadTrend() snippet below
    avg = maSelect(src, length, avg_type)
    abs_deviations = math.abs(src - avg)
    ta.sma(abs_deviations, length)
This provides a volatility measure that adapts to recent market conditions.
Combining Moving Average and AAD
The indicator creates upper and lower bands around the moving average using the AAD, similar to how the Supertrend indicator uses Average True Range (ATR) for its bands.
// maSelect() factors the user's avg_type choice into a reusable helper. Note that
// ta.dema, ta.tema and ta.frama are not Pine built-ins; in the full script they are
// presumably custom implementations.
maSelect(series float src, simple int length, simple string avg_type) =>
    switch avg_type
        "SMA" => ta.sma(src, length)
        "EMA" => ta.ema(src, length)
        "HMA" => ta.hma(src, length)
        "DEMA" => ta.dema(src, length)
        "TEMA" => ta.tema(src, length)
        "RMA" => ta.rma(src, length)
        "FRAMA" => ta.frama(src, length)

AadTrend(series float src, simple int length, simple float aad_mult, simple string avg_type) =>
    // Calculate AAD (volatility measure)
    aad_value = aad(src, length, avg_type)
    // Moving average as selected by the user
    avg = maSelect(src, length, avg_type)
    // Upper and lower AAD bands around the moving average
    avg_p = avg + (aad_value * aad_mult)
    avg_m = avg - (aad_value * aad_mult)
    var direction = 0
    if ta.crossover(src, avg_p)
        direction := 1
    else if ta.crossunder(src, avg_m)
        direction := -1
    [avg_p, avg_m, direction]  // bands and current trend direction
A chart displaying the moving average with upper and lower AAD bands enveloping the price action.
Signals and Trade States
1. Long and Short Signals
Long Signal: Generated when the price crosses above the upper AAD band.
Short Signal: Generated when the price crosses below the lower AAD band.
2. RISK-ON and RISK-OFF States
These states provide additional insight into the strength of the current trend and potential opportunities for taking on more risk.
RISK-ON Long: When the price moves significantly above the upper AAD band after a Long signal.
RISK-OFF Long: When the price moves back below the upper AAD band, suggesting caution.
RISK-ON Short: When the price moves significantly below the lower AAD band after a Short signal.
RISK-OFF Short: When the price moves back above the lower AAD band.
Highlighted areas on the chart representing RISK-ON and RISK-OFF zones for both Long and Short positions.
A chart showing the filled areas corresponding to trend directions and RISK-ON zones.
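My reading of how these states could be derived from the bands, continuing the AadTrend() snippet above (a hedged sketch, not the published logic; parameters are placeholders):
// Hypothetical state logic built on the AadTrend() function above
[avg_p, avg_m, direction] = AadTrend(close, 14, 1.0, "EMA")
bool risk_on_long   = direction == 1 and close > avg_p   // price holds beyond the upper band
bool risk_off_long  = direction == 1 and close < avg_p   // price slips back inside the bands
bool risk_on_short  = direction == -1 and close < avg_m
bool risk_off_short = direction == -1 and close > avg_m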
Backtesting and Performance Metrics
While the AadTrend indicator focuses on generating signals and highlighting risk areas, it can be integrated with backtesting frameworks to evaluate performance over historical data.
Integration with Backtest Library:
import InvestorUnknown/BacktestLibrary/1 as backtestlib
Customization and Calibration
1. Importance of Calibration
Default Settings Are Experimental: The default parameters are not optimized for any specific market condition or asset.
User Calibration: Traders should adjust the length, aad_mult, and avg_type parameters to align the indicator with their trading strategy and the characteristics of the asset being analyzed.
2. Factors to Consider
Market Volatility: Higher volatility may require adjustments to the aad_mult to avoid false signals.
Trading Style: Short-term traders might prefer faster-moving averages like EMA or HMA, while long-term traders might opt for SMA or FRAMA.
Alerts and Notifications
The AadTrend indicator includes built-in alert conditions to notify traders of significant market events:
Long and Short Alerts:
alertcondition(long_alert, "LONG (AadTrend)", "AadTrend flipped ⬆LONG⬆")
alertcondition(short_alert, "SHORT (AadTrend)", "AadTrend flipped ⬇Short⬇")
RISK-ON and RISK-OFF Alerts:
alertcondition(risk_on_long, "RISK-ON LONG (AadTrend)", "RISK-ON LONG (AadTrend)")
alertcondition(risk_off_long, "RISK-OFF LONG (AadTrend)", "RISK-OFF LONG (AadTrend)")
alertcondition(risk_on_short, "RISK-ON SHORT (AadTrend)", "RISK-ON SHORT (AadTrend)")
alertcondition(risk_off_short, "RISK-OFF SHORT (AadTrend)", "RISK-OFF SHORT (AadTrend)")
Important Notes and Disclaimer
Experimental Nature: The AadTrend indicator is experimental and should be used with caution.
No Guaranteed Performance: Past performance is not indicative of future results. Backtesting results may not reflect real trading conditions.
User Responsibility: Traders and investors should thoroughly test and calibrate the indicator settings before applying it to live trading.
Risk Management: Always use proper risk management techniques, including stop-loss orders and position sizing.
COT Report Indicator with Selectable Data Type
Overview
The COT Report Indicator with Selectable Data Types is a powerful tool for traders who want to gain deeper insights into market sentiment using the Commitment of Traders (COT) data. This indicator allows you to visualize the net positions of different participant categories—Commercial, Noncommercial, and Nonreportable—directly on your chart.
The indicator is fully customizable, allowing you to select the type of data to display, sync with your chart's timeframe, or choose a custom timeframe. Whether you're analyzing gold, crude oil, indices, or forex pairs, this indicator adapts seamlessly to your trading needs.
Features
Dynamic Data Selection:
Choose between Commercial, Noncommercial, or Nonreportable data types.
Analyze the net positions of market participants for more informed decision-making.
Flexible Timeframes:
Sync with the chart's timeframe for quick analysis.
Select a custom timeframe to view COT data at your preferred granularity.
Wide Asset Coverage:
Supports various assets, including gold, silver, crude oil, indices, and forex pairs.
Automatically adjusts to the ticker you're analyzing.
Clear Visual Representation:
Displays Net Long, Net Short, and Net Difference (Long - Short) positions with distinct colors for easy interpretation.
Error Handling:
Alerts you if the symbol is unsupported, ensuring you know when COT data isn't available for a specific asset.
How to Use
Add the Indicator:
Click "Indicators" in TradingView and search for "COT Report Indicator with Selectable Data Types."
Add it to your chart.
Customize the Settings:
Data Type: Choose between Commercial, Noncommercial, or Nonreportable positions.
Data Source: Select "Futures Only" or "Futures and Options."
Timeframe: Sync with the chart's timeframe or specify a custom one (e.g., weekly, monthly).
Interpret the Data:
Green Line: Net Long Positions.
Red Line: Net Short Positions.
Black Line: Net Difference (Long - Short) (see the data-pull sketch after the symbol list below).
Supported Symbols:
Gold, Silver, Crude Oil, Natural Gas, Forex Pairs, S&P 500, US30, NAS100, and more.
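For those curious how such data can be pulled in Pine, a hedged sketch follows; the COT ticker strings are placeholders from memory and may not match the script's internal symbol mapping:
//@version=5
indicator("COT net positions sketch")
// Placeholder tickers (gold, CFTC code 088691); the real script derives them from the chart symbol
float netLong  = request.security("COT:088691_F_L_ALL", "1W", close, ignore_invalid_symbol = true)
float netShort = request.security("COT:088691_F_S_ALL", "1W", close, ignore_invalid_symbol = true)
plot(netLong, "Net Long", color.green)
plot(netShort, "Net Short", color.red)
plot(netLong - netShort, "Net Difference", color.black)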
Who Can Benefit
Trend Followers: Identify the buying/selling trends of Commercial and Noncommercial participants.
Sentiment Analysts: Understand shifts in sentiment among major market players.
Long-Term Traders: Use COT data to confirm or contradict your fundamental analysis.
Example Use Case
For example, if you're trading gold (XAUUSD) and select Noncommercial Positions, you’ll see the long and short positions of speculators. An increase in net long positions may signal bullish sentiment, while an increase in net short positions may indicate bearish sentiment.
If you switch to Commercial Positions, you'll get insights into how hedgers and institutions are positioning themselves, helping you confirm or counterbalance your current trading strategy.
Limitations
The indicator only works with supported symbols (COT data availability is limited to specific assets).
The COT data is updated weekly, so it is not suitable for short-term intraday trading.
Weekly H/L DOTW
The Weekly High/Low Day Breakdown indicator provides a detailed statistical analysis of the days of the week (Monday to Sunday) on which weekly highs and lows occur for a given timeframe. It helps traders identify recurring patterns, correlations, and tendencies in price behavior across different days of the week. This can assist in planning trading strategies by leveraging day-specific patterns.
The indicator visually displays the statistical distribution of weekly highs and lows in an easy-to-read tabular format on your chart. Users can customize how the data is displayed, including whether the table is horizontal or vertical, the size of the text, and the position of the table on the chart.
Key Features:
Weekly Highs and Lows Identification:
Tracks the highest and lowest price of each trading week.
Records the day of the week on which these events occur.
Customizable Table Layout:
Option to display the table horizontally or vertically.
Text size can be adjusted (Small, Normal, or Large).
Table position is customizable (top-right, top-left, bottom-right, or bottom-left of the chart).
Flexible Value Representation:
Allows the display of values as percentages or as occurrences.
Default setting is occurrences, but users can toggle to percentages as needed.
Day-Specific Display:
Option to hide Saturday or Sunday if these days are not relevant to your trading strategy.
Visible Date Range:
Users can define a start and end date for the analysis, focusing the results on a specific period of interest.
User-Friendly Interface:
The table dynamically updates based on the selected timeframe and visibility of the chart, ensuring the displayed data is always relevant to the current context.
Adaptable to Custom Needs:
Includes all-day names from Monday to Sunday, but allows for specific days to be excluded based on the user’s preferences.
Indicator Logic:
Data Collection:
The indicator collects daily high, low, day of the week, and time data from the selected ticker using the request.security() function with a daily timeframe ('D').
Weekly Tracking:
Tracks the start and end times of each week.
- During each week, it monitors the highest and lowest prices and the days on which they occurred (a minimal sketch of this step follows the list below).
Weekly Closure:
When a week ends (detected by Sunday’s daily candle), the indicator:
Updates the statistics for the respective days of the week where the weekly high and low occurred.
Resets tracking variables for the next week.
Visible Range Filter:
Only processes data for weeks that fall within the visible range of the chart, ensuring the table reflects only the visible portion of the chart.
Statistical Calculations:
Counts the number of weekly highs and lows for each day.
Calculates percentages relative to the total number of weeks in the visible range.
Dynamic Table Display:
Depending on user preferences, displays the data either horizontally or vertically.
Formats the table with proper alignment, colors, and text sizes for easy readability.
Custom Value Representation:
If set to "percentages," displays the percentage of weeks a high/low occurred on each day.
If set to "occurrences," displays the raw count of weekly highs/lows for each day.
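A minimal chart-timeframe sketch of the weekly tracking step described above (the published script pulls daily data via request.security(); the table and statistics are omitted here):
//@version=5
indicator("Weekly high/low day sketch")
var float weekHigh = na
var float weekLow = na
var int highDow = na
var int lowDow = na
if ta.change(time("W")) != 0  // a new trading week begins
    weekHigh := high
    weekLow := low
    highDow := dayofweek
    lowDow := dayofweek
else
    if high > weekHigh
        weekHigh := high
        highDow := dayofweek
    if low < weekLow
        weekLow := low
        lowDow := dayofweek
plot(highDow, "day of weekly high")
plot(lowDow, "day of weekly low")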
Input Parameters:
High Text Color:
Color for the text in the "Weekly High" row or column.
Low Text Color:
Color for the text in the "Weekly Low" row or column.
High Background Color:
Background color for the "Weekly High" row or column.
Low Background Color:
Background color for the "Weekly Low" row or column.
Table Background Color:
General background color for the table.
Hide Saturday:
Option to exclude Saturday from the analysis and table.
Hide Sunday:
Option to exclude Sunday from the analysis and table.
Values Format:
Dropdown menu to select "percentages" or "occurrences."
Default value: "occurrences."
Table Position:
Dropdown menu to select the table position on the chart: "top_right," "top_left," "bottom_right," "bottom_left."
Default value: "top_right."
Text Size:
Dropdown menu to select text size: "Small," "Normal," "Large."
Default value: "Normal."
Vertical Table Format:
Checkbox to toggle the table layout:
Checked: Table displays days vertically, with Monday at the top.
Unchecked: Table displays days horizontally.
Start Date:
Allows users to specify the starting date for the analysis.
End Date:
Allows users to specify the ending date for the analysis.
Use Cases:
Day-Specific Pattern Recognition:
Identify if specific days, such as Monday or Friday, are more likely to form weekly highs or lows.
Seasonal Analysis:
Use the start and end date filters to analyze patterns during specific trading seasons.
Strategy Development:
Plan day-based entry and exit strategies by identifying recurring patterns in weekly highs/lows.
Historical Review:
Study historical data to understand how market behavior has changed over time.
TradingView TOS Compliance Notes:
Originality:
This script is uniquely designed to provide day-based statistics for weekly highs and lows, which is not a common feature in other publicly available indicators.
Usefulness:
Offers practical insights for traders interested in understanding day-specific price behavior.
Detailed Description:
Fully explains the purpose, features, logic, input settings, and use cases of the indicator.
Includes clear and concise details on how each input works.
Clear Input Descriptions:
All input parameters are clearly named and explained in the script and this description.
No Redundant Functionality:
Focused specifically on tracking weekly highs and lows, ensuring the indicator serves a distinct purpose without unnecessary features.
Simple Decision Matrix Classification Algorithm [SS]
Hello everyone,
It has been a while since I posted an indicator, so thought I would share this project I did for fun.
This indicator is an attempt to develop a pseudo Random Forest classification decision matrix model for Pinescript.
This is not a full, robust Random Forest model by any stretch of the imagination, but it is a good way to showcase how decision matrices can be applied to trading and within Pinescript.
As to not market this as something it is not, I am simply calling it the "Simple Decision Matrix Classification Algorithm". However, I have stolen most of the aspects of this machine learning algo from concepts of Random Forest modelling.
How it works:
With models like Support Vector Machines (SVM), Random Forest (RF) and Gradient Boosted Machine Learning (GBM), which are commonly used in Machine Learning Classification Tasks (MLCTs), this model operates similarly to the basic concepts shared amongst those modelling types. While it is not very similar to SVM, it is very similar to RF and GBM, in that it uses a "voting" system.
What do I mean by voting system?
How most classification MLAs work is by feeding an input dataset to an algorithm. The algorithm sorts this data, categorizes it, then introduces something called a confusion matrix (essentially sorting the data in no apparent order so as to prevent over-fitting and introduce "confusion" to the algorithm, ensuring that it is not just following a trend).
From there, the data is called upon based on current data inputs (so say we are using RSI and Z-Score, the current RSI and Z-Score is compared against other RSI's and Z-Scores that the model has saved). The model will process this information and each "tree" or "node" will vote. Then a cumulative overall vote is casted.
How does this MLA work?
This model accepts 2 independent variables. In order to keep things simple, this model was kept as a three node model. This means that there are 3 separate votes that go in to get the result. A vote is casted for each of the two independent variables and then a cumulative vote is casted for the overall verdict (the result of the model's prediction).
The model actually displays this system diagrammatically and it will likely be easier to understand if we look at the diagram to ground the example:
In the diagram, at the very top we have the classification variable that we are trying to predict. In this case, we are trying to predict whether there will be a breakout/breakdown outside of the normal ATR range (this is a yes-or-no question, hence a classification task).
So the question forms the basis of the input. The model will track at which points the ATR range is exceeded to the upside or downside, as well as the other variables that we wish to use to predict these exceedences. The ATR range forms the basis of all the data flowing into the model.
Then, at the second level, you will see we are using Z-Score and RSI to predict these breaks. The circle will change colour according to "feature importance". Feature importance basically just means that the indicator has a strong impact on the outcome. The stronger the importance, the more green it will be, the weaker, the more red it will be.
We can see both RSI and Z-Score are green and thus we can say they are strong options for predicting a breakout/breakdown.
So then we move down to the actual voting mechanisms. You will see the 2 pink boxes. These are the first lines of voting. What is happening here is the model is identifying the instances that are most similar and whether the classification task we have assigned (remember our ATR exceedance classifier) was either true or false based on RSI and Z-Score.
These are our 2 nodes. They both cast an individual vote. You will see in this case, both cast a vote of 1. The options are either 1 or 0. A vote of 1 means "Yes" or "Breakout likely".
However, this is not the only voting the model does. The model does one final vote based on the 2 votes. This is shown in the purple box. We can see the final vote and result at the end with the orange circle. It is 1 which means a range exceedance is anticipated and the most likely outcome.
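To ground the mechanics, here is a toy Pine sketch of two node votes plus the cumulative vote; the placeholder decision rules stand in for the model's nearest-case lookup, which is far more involved:
//@version=5
indicator("Two-node vote sketch")
float r = ta.rsi(close, 14)
float z = (close - ta.sma(close, 20)) / ta.stdev(close, 20)
int voteRsi = r > 50 ? 1 : 0  // node 1 vote (placeholder rule)
int voteZ = z > 0 ? 1 : 0     // node 2 vote (placeholder rule)
int finalVote = voteRsi + voteZ == 2 ? 1 : 0  // cumulative vote: 1 = breakout likely
plot(finalVote)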
The Data Table Component
The model has many moving parts. I have tried to represent the pivotal functions diagrammatically, but some other important aspects and background information must be obtained from the companion data table.
If we bring back our diagram from above:
We can see the data table to the left.
The data table contains 2 sections, one for each independent variable. In this case, our independent variables are RSI and Z-Score.
The data table will provide you with specifics about the independent variables, as well as about the model accuracy and outcome.
If we take a look at the first row, it simply indicates which independent variable it is looking at. If we go down to the next row where it reads "Weighted Impact", we can see a corresponding percent. The "weighted impact" is the amount of representation each independent variable has within the voting scheme. So in this case, we can see it's pretty equal, 45% and 55%. This tells us that there is a slightly higher representation of Z-Score than RSI, but nothing to worry about.
If there were a major over-representation of greater than 30 or 40%, then the model would risk being skewed and voting too heavily in favour of 1 variable over the other.
If we move down from there we will see the next row reads "independent accuracy". The voting of each independent variable's accuracy is considered separately. This is one way we can determine feature importance, by seeing how well one feature augments the accuracy. In this case, we can see that RSI has the greatest importance, with an accuracy of around 87% at predicting breakouts. That makes sense as RSI is a momentum based oscillator.
Then if we move down one more, we will see what each independent feature (node) has voted for. In this case, both RSI and Z-Score voted for 1 (Breakout in our case).
You can weigh these in collaboration, but it's always important to look at the final verdict of the model, which if we move down, we can see the "Model prediction", which is "Bullish".
If you are using the ATR breakout, the model cannot distinguish between "Bullish" or "Bearish", just that a "Breakout" is likely, either bearish or bullish. However, for the other classification tasks this model can do, the results are either Bullish or Bearish.
Using the Function:
Okay so now that all that technical stuff is out of the way, let's get into using the function. First of all this function innately provides you with 3 possible classification tasks. These include:
1. Predicting Red or Green Candle
2. Predicting Bullish / Bearish ATR
3. Predicting a Breakout from the ATR range
The possible independent variables include:
1. Stochastics,
2. MFI,
3. RSI,
4. Z-Score,
5. EMAs,
6. SMAs,
7. Volume
The model can only accept 2 independent variables, to operate within the computation time limits for pine execution.
Let's quickly go over what the numbers in the diagram mean:
The numbers being pointed at with the yellow arrows represent the cases the model is sorting and voting on. These are the most identical cases and are serving as the voting foundation for the model.
The numbers being pointed at with the pink boxes are the voting results.
Extrapolating the Functions (For Pine Developers):
So this is more of a feature application, so feel free to customize it to your liking and add additional inputs. But here are some key important considerations if you wish to apply this within your own code:
1. This is a BINARY classification task. The prediction must either be 0 or 1.
2. The function consists of 3 separate functions, the 2 first functions serve to build the confusion matrix and then the final "random_forest" function serves to perform the computations. You will need all 3 functions for implementation.
3. The model can only accept 2 independent variables.
I believe that is the function. Hopefully this wasn't too confusing; it is very statsy, but it's a fun function for me! I use Random Forest excessively in R and always like to try to convert R things to Pinescript.
Hope you enjoy!
Safe trades everyone!
Moment-Based Adaptive Detection
MBAD (Moment-Based Adaptive Detection): a method applicable to a wide range of purposes, like outlier or novelty detection, that requires building a sensible interval/set of thresholds. Unlike other methods that are static and rely on optimizations that inevitably lead to underfitting/overfitting, it dynamically adapts to your data distribution without any optimizations, MLE, or stuff, and provides a set of data-driven adaptive thresholds, based on closed-form solution with O(n) algo complexity.
1.5 years ago, when I was still living in Versailles at my friend's house not knowing what was gonna happen in my life tomorrow, I made a damn right decision not to give up on one idea and to actually R&D it and see what’s up. It allowed me to create this one.
The Method Explained
I’ve been wondering about z-values, why exactly 6 sigmas, why 95%? Who decided that? Why would you supersede your opinion on data? Based on what? Your ego?
Then I consciously noticed a couple of things:
1) In control theory & anomaly detection, the popular threshold is 3 sigmas (yet nobody can firmly say why xD). If your data is Laplace, 3 sigmas is not enough; you’re gonna catch too many values, so it needs a higher sigma.
2) Yet strangely, the normal distribution has kurtosis of 3, and 6 for Laplace.
3) Kurtosis is a standardized moment, a moment scaled by stdev, so it means "X amount of something measured in stdevs."
4) You generate synthetic data, you check on real data (market data in my case, I am a quant after all), and you see on both that:
lower extension = mean - standard deviation * kurtosis ≈ data minimum
upper extension = mean + standard deviation * kurtosis ≈ data maximum
Why not simply use max/min?
- Lower info gain: We're not using all info available in all data points to estimate max/min; we just pick the current higher and lower values. Lol, it’s the same as dropping exponential smoothing with alpha = 0 on stationary data & calling it a day.
You can’t update the estimates of min and max when new data arrives containing info about the matter. All you can do is just extend min and max horizontally, so you're not using new info arriving inside new data.
- Mixing order and non-order statistics is a bad idea; we're losing integrity and coherence. That's why I don't like the Hurst exponent btw (and yes, I came up with better metrics of my own).
- Max & min are not even true order statistics, unlike a percentile (finding which requires sorting, which requires multiple passes over your data). To find min or max, you just need to do one traversal over your data. Then with or without any weighting, 100th percentile will equal max. So unlike a weighted percentile, you can’t do weighted max. Then while you can always check max and min of a geometric shape, now try to calculate the 56th percentile of a pentagram hehe.
TL;DR max & min are rather topological characteristics of data, just as the difference between starting and ending points. Not much to do with statistics.
Now the second part of the ballet is to work with data asymmetry:
1) Skewness is also scaled by stdev -> so it must represent a shift from the data midrange measured in stdevs -> given asymmetric data, we can include this info in our models. Unlike kurtosis, skewness has a sign, so we add it to both thresholds:
lower extension = mean - standard deviation * kurtosis + standard deviation * skewness
upper extension = mean + standard deviation * kurtosis + standard deviation * skewness
2) Now our method will work with skewed data as well, omg, ain’t it cool?
3) Hold up, but what about 5th and 6th moments (hyperskewness & hyperkurtosis)? They should represent something meaningful as well.
4) Perhaps if extensions represent current estimated extremums, what goes beyond? Limits, beyond which we expect data not to be able to pass given the current underlying process generating the data?
When you extend this logic to higher-order moments, i.e., hyperskewness & hyperkurtosis (5th and 6th moments), they measure asymmetry and shape of distribution tails, not its core as previous moments -> makes no sense to mix 4th and 3rd moments (skewness and kurtosis) with 5th & 6th, so we get:
lower limit = mean - standard deviation * hyperkurtosis + standard deviation * hyperskewness
upper limit = mean + standard deviation * hyperkurtosis + standard deviation * hyperskewness
While extensions model your data’s natural extremums based on current info residing in the data without relying on order statistics, limits model your data's maximum possible and minimum possible values based on current info residing in your data. If a new data point trespasses limits, it means that a significant change in the data-generating process has happened, for sure, not probably—a confirmed structural break.
And finally we use time and volume weighting to include order & process intensity information in our model.
I can't stress it enough: despite the popularity of these non-weighted methods applied in mainstream open-access time series modeling, it doesn’t make ANY sense to use non-weighted calculations on time series data. Time = sequence, it matters. If you reverse your time series horizontally, your means, percentiles, whatever, will stay the same. Basically, your calculations will give the same results on different data. When you do it, you disregard the order of data that does have order naturally. Does it make any sense to you? It also concerns regressions applied on time series as well, because even despite the slope being opposite on your reversed data, the centroid (through which your regression line always comes through) will be the same. It also might concern Fourier (yes, you can do weighted Fourier) and even MA and AR models—might, because I ain’t researched it extensively yet.
I still can’t believe it’s nowhere online in open access. No chance I’m the first one who got it. It’s literally in front of everyone’s eyes for centuries—why no one tells about it?
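A rough, unweighted Pine approximation of the four thresholds (the published version uses time and volume weighting and more careful moment estimation; this only shows the construction):
//@version=5
indicator("MBAD thresholds sketch", overlay = true)
int len = input.int(100, minval = 2)
float m = ta.sma(close, len)
float sd = ta.stdev(close, len)
float zs = (close - m) / sd
float skew = ta.sma(math.pow(zs, 3), len)   // 3rd standardized moment
float kurt = ta.sma(math.pow(zs, 4), len)   // 4th standardized moment
float hskew = ta.sma(math.pow(zs, 5), len)  // 5th: hyperskewness
float hkurt = ta.sma(math.pow(zs, 6), len)  // 6th: hyperkurtosis
plot(m + sd * kurt + sd * skew, "upper extension")
plot(m - sd * kurt + sd * skew, "lower extension")
plot(m + sd * hkurt + sd * hskew, "upper limit")
plot(m - sd * hkurt + sd * hskew, "lower limit")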
How to use
That’s easy: can be applied to any, even non-stationary and/or heteroscedastic time series to automatically detect novelties, outliers, anomalies, structural breaks, etc. In terms of quant trading, you can try using extensions for mean reversion trades and limits for emergency exits, for example. The market-making application is kinda obvious as well.
The only parameter the model has is length, and it should NOT be optimized but picked consciously based on the process/system you’re applying it to and based on the task. However, this part is not about sharing info & an open-access instrument with the world. This is about using dem instruments to do actual business, and we can’t talk about it.
∞
Z-Scores
TLDR
Z-Scores ask "How many standard deviations is the current price, away from the moving average?"
Or put another way, it tells you how an asset is performing relative to its own moving average, centered about the zero-point.
INTRODUCTION
Z-Scores are a fundamental statistical concept which take any dataset (in this case price), and present the data in terms of standard deviation. In the case of price, we're using *moving* standard deviations, much like we use a moving average.
A useful aspect of z-scores is that data oscillates around a zero point. The chart then presents the "number_of_standard_deviations" that price is away from the moving average. So for example, if we're looking at the 100-day z-score and it reads 0, that means that the current price is right at the 100-day moving average. If it reads 1, that means that the price is 1 standard deviation above the 100-day moving average.
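In Pine terms, one ribbon line is just the following (a minimal sketch; the published script layers up to five of these):
//@version=5
indicator("Z-score sketch")
int len = input.int(100, "Lookback")
bool useLog = input.bool(false, "Use Log Transform")
float src = useLog ? math.log(close) : close
float z = (src - ta.sma(src, len)) / ta.stdev(src, len)
plot(z, "z-score")
hline(0)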
HOW DO I USE THIS?
This particular script offers a ribbon of z-scores (much like how you might have a ribbon of moving averages). You can enter up to 5 z-scores in the options. Enter "0" to remove a ribbon from the chart.
If you're a math nerd, you can also select "Use Log Transform," which effectively uses the geometric mean for z-score calculation. For charts that are rangebound, you don't really need this option, but for charts that are highly exponential, you might want to select this.
I've noticed that z-scores tend to behave similarly to RSI. I prefer z-scores because they're a non-arbitrary, fundamental statistical concept, whereas the RSI calculation is somewhat arbitrary. I offer the z-score as a ribbon because it removes the arbitrary nature of selecting one particular moving average.
PRACTICAL APPLICATION
Z-Score prices are akin to a momentum and/or trend indicator. They're useful for identifying a trending market, or trend reversals, before the actual price begins to reverse in earnest. The RSI concept of "divergence" can be applied. It can also be used to see how far out of trend a particularly violent movement up/down is, historically.
Another particularly useful aspect of z-scores, is comparing the performance of two different assets with very different price points. For example, maybe one chart is measured in cents, and another chart is measured in billions. Z-Scores normalize prices to a zero point, and normalize differing volatility by presenting it in terms of standard deviations. I use this for comparing things like in-asset-class performance, and also comparing various asset classes to others.
Remember, the z-score tells you how an asset is performing relative to its own moving average, centered about the zero-point.
CLOSING
I'm adding this to the public repository because there isn't a good z-score implementation here on TradingView, and especially not one that offers an adjustable ribbon like this. For the sharp eye, there is useful signal in z-scores, whether applied to a single asset, or to compare the performance of two assets against each other.
IU Price Density (Market Noise)
This Price Density indicator will help you understand what market noise is and how it is calculated and treated.
Market noise = when the market is moving up and down without any clear direction
The Price Density Indicator is a technical analysis tool used to measure the concentration or "density" of price movements within a specific range. It helps traders differentiate between noisy, choppy markets and trending ones.
I’ve developed a custom Pine Script indicator, "IU Price Density," designed to help traders distinguish between noisy, indecisive markets and clear trading opportunities. It can be applied across multiple markets.
How this work:
Formula = (Σ (High𝑖 - Low𝑖)) / (Max(High) - Min(Low))
Where,
High𝑖 = the high price at the 𝑖 data point.
Low𝑖 = the low price at the 𝑖 data point.
Max(High) = the highest price over the data set.
Min(Low) = the lowest price over the data set.
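A direct Pine translation of the core ratio (the 0-10 display of the published indicator implies additional scaling and smoothing beyond this sketch):
//@version=5
indicator("Price density sketch")
int len = input.int(20, "Length")
float sumRanges = math.sum(high - low, len)                     // sum of (High - Low) per bar
float totalRange = ta.highest(high, len) - ta.lowest(low, len)  // Max(High) - Min(Low)
plot(sumRanges / totalRange, "price density")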
How to use it :
This indicator ranges from 0 to 10
Green (0-3) = Trending market
Orange (3-6) = Normal market
Red (6-10) = Noisy market
💡 Key Features:
Dynamic Visuals: The indicator uses color-coded signals—green for trending markets and red for noisy, volatile conditions—making it easy to identify optimal trading periods at a glance.
Background Shading: With background colors highlighting significant market conditions, traders can quickly assess when to engage or avoid certain trades.
Customizable Parameters: The length and smoothing factors allow for flexibility in adapting the indicator to various assets and timeframes.
Whether you're a swing trader or an intraday strategist, this tool provides valuable insights to improve your market analysis. I’m excited to bring this indicator to the community!
Relative Vigor Index Z-Score
The Relative Vigor Index with Z-Score (RVIZ) is a combined technical analysis tool that helps traders assess the strength and volatility of price movements relative to a market's recent price behavior. This indicator incorporates two distinct concepts:
Relative Vigor Index (RVI):
The RVI is a momentum indicator that measures the strength of a trend by comparing the current price range (high vs low) with the opening and closing prices. The RVI is primarily used to determine if the current price movement is "vigorous" (strong) or weak, and it is plotted as a line that oscillates around a zero baseline.
Z-Score:
The Z-score is a statistical measurement that shows how many standard deviations the RVI is from its historical mean over a given period. This helps identify if the current RVI value is unusually high or low relative to past values, providing a normalized view of the indicator's extremity.
The combination of the RVI and Z-Score offers a more nuanced view of market momentum and volatility, allowing traders to assess both the strength of the current trend and its statistical significance.
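A compact sketch of the two-step construction, assuming the classic SWMA-smoothed RVI formulation (the script's exact lengths and smoothing may differ):
//@version=5
indicator("RVIZ sketch")
int rviLen = input.int(10, "RVI Length")
int zLen = input.int(50, "Z-Score Length")
float rvi = math.sum(ta.swma(close - open), rviLen) / math.sum(ta.swma(high - low), rviLen)
float z = (rvi - ta.sma(rvi, zLen)) / ta.stdev(rvi, zLen)  // how unusual is the current RVI?
plot(z, "RVI z-score")
hline(0)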
Inputs:
Length (RVI Length):
The number of bars used to calculate the Relative Vigor Index. A larger length (e.g., 20 or more) results in a smoother RVI, while a shorter length makes the RVI more sensitive to short-term price changes.
Z-Score Length:
The number of bars used to compute the mean and standard deviation of the RVI for the Z-score calculation. This length determines how "historically" the Z-score will assess the RVI's behavior.
Offset:
Allows users to shift the indicator plot to the left or right for visual adjustment, particularly useful in certain charting setups.
Position Size Using Manual Stop Loss [odnac]
This indicator calculates the risk per position based on user-defined settings.
Two Calculation Methods
1. Manual Stop Loss (%) & Manual Leverage
2. Manual Stop Loss (%) & Optimized Leverage
Settings
1. init_capital
Enter your current total capital.
2. Maximum Risk (%) per Position of Total Capital
Specify the percentage of your total funds to be risked for a single position.
3. manual_SL(%)
Enter the stop-loss percentage.
Range: 0.01 ~ 100
4. manual_leverage
Enter the leverage you wish to use.
Range: 1 ~ 100
Used in the first method (Manual Stop Loss (%) & Manual Leverage).
5. Safety Margin
Specify the safety margin for optimized leverage.
Range: 0.01 ~ 1
Used in the second method (Manual Stop Loss (%) & Optimized Leverage). Details are explained below.
Indicator Colors
Black: Indicates which method is being used.
White: Leverage.
First Green: Funds to be invested.
Second Green: Funds to be invested * Leverage.
First Red: Stop-loss (%).
Second Red: Stop-loss (%) * Leverage.
Details for Each Method:
1. Manual Stop Loss (%) & Manual Leverage
This method calculates the size of the funds based on user-defined stop-loss (%) and leverage settings.
White: manual_leverage.
First Green: Investment = Maximum Risk / (manual_SL / 100) / manual_leverage
Second Green: Maximum Risk / (manual_SL / 100)
First Red: manual_SL.
Second Red: manual_SL * manual_leverage
Ensure that the product of manual_SL and manual_leverage does not exceed 100.
If it does, there is a risk of liquidation.
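In code form, method 1 reduces to this arithmetic (a sketch with placeholder defaults):
//@version=5
indicator("Position size sketch, method 1")
float capital = input.float(10000, "init_capital")
float riskPct = input.float(1.0, "Maximum Risk (%) per Position")
float slPct = input.float(2.0, "manual_SL(%)", minval = 0.01, maxval = 100)
float lev = input.float(5.0, "manual_leverage", minval = 1, maxval = 100)
float maxRisk = capital * riskPct / 100
float invest = maxRisk / (slPct / 100) / lev  // First Green: funds to be invested
plot(invest, "Investment")
plot(invest * lev, "Investment x Leverage")   // Second Green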
2. Manual Stop Loss (%) & Optimized Leverage
This method calculates optimized leverage based on the user-defined stop-loss (%) and determines the size of the funds.
optimization_LEVER = auto_leverage * safety_margin
auto_leverage = 100 / stop-loss (%), rounded down to the nearest whole number.
(Exception: If the stop-loss (%) is in the range of 0 ~ 1%, auto_leverage is always 100.)
Example:
If the stop-loss is 4%, auto_leverage = 25 (100 / 4 = 25).
However, 4% × 25 leverage equals 100%, meaning liquidation occurs even with a stop-loss.
To reduce this risk, the safety_margin value is applied.
White: auto_leverage * safety_margin
First Green: Investment = Maximum Risk / (manual_SL / 100) / optimization_LEVER
Second Green: Maximum Risk / (manual_SL / 100)
First Red: manual_SL.
Second Red: manual_SL * optimization_LEVER
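And the optimized-leverage variant, per the rules above (a sketch):
//@version=5
indicator("Optimized leverage sketch, method 2")
float slPct = input.float(4.0, "manual_SL(%)", minval = 0.01, maxval = 100)
float margin = input.float(0.8, "Safety Margin", minval = 0.01, maxval = 1)
// auto_leverage = floor(100 / SL%); stop-losses in the 0-1% range are capped at 100
float autoLev = slPct <= 1 ? 100 : math.floor(100 / slPct)
float optLev = autoLev * margin  // optimization_LEVER
plot(optLev, "Optimized leverage")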
Candle Thermals
This indicator colors candles based on their percentage price change, relative to the average, maximum, and minimum changes over the last 100 candles.
-It calculates the percentage change of all candles
-Calculates the minimum, maximum, and average percentage change over the last 100 bars
-Changes color of the candle based on the range between the current percent and min/max value
-The brightest candle provides the highest compound effect to your account if you act on it at the open.
-Candles with a percentage change close to the average are barely visible = lowest compound effect to your account
This indicator functions like a "heatmap" for candles, highlighting the relative volatility of price movements in both directions. Strong bullish candles are brighter green, and strong bearish candles are brighter red. It's particularly useful for traders wanting quick visual feedback on price volatility and strength trends within the last 100 bars.
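A rough sketch of the coloring logic as described (the published brightness mapping may differ):
//@version=5
indicator("Candle thermals sketch", overlay = true)
int len = 100
float chg = (close - open) / open * 100  // percentage change per candle
float hottest = math.max(math.abs(ta.highest(chg, len)), math.abs(ta.lowest(chg, len)))
float intensity = math.abs(chg) / hottest  // 0..1, relative to the 100-bar extremes
float transp = 90 - intensity * 90         // brighter = closer to the min/max
barcolor(chg >= 0 ? color.new(color.green, transp) : color.new(color.red, transp))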
Volatility-Adjusted Trend Deviation Statistics (C-Ratios)
The Pine Script logic provided generates and displays a table with key information derived from VWMA, EMA, and ATR-based "C Ratios," alongside stochastic oscillators, correlation coefficients, Z-scores, and bias indicators. Here’s an explanation of the logic and what the output in the table informs:
Key Calculations and Their Purpose
VWMA and EMA (Smoothing Lengths):
Multiple EMAs are calculated using VWMA as the source, with lengths spanning short-term (13) to long-term (233).
These EMAs provide a hierarchy of smoothed price levels to assess trends over various time horizons.
ATR-Based "C Ratios":
The C Ratios measure deviations of smoothed prices (a_1 to a_7) from the source price relative to ATR at corresponding lengths.
These values normalize deviations, giving insight into the price's relative movement strength and direction over various periods.
Stochastic Oscillator for C Ratios:
Calculates normalized stochastic values for each C Ratio to assess overbought/oversold conditions dynamically over a rolling window.
Helps identify short-term momentum trends within the broader context of C Ratios.
Displays the average stochastic value derived from all C Ratios.
Text: Shows overbought/oversold conditions (Overbought, Oversold, or ---).
Color: Green for strong upward momentum, red for downward, and white for neutral.
Weighted and Mean C Ratio:
The script computes both an arithmetic mean (c_mean) and a weighted mean (c_mean_w) for all C Ratios.
Weighted mean emphasizes short-term values using predefined weights.
Trend Bias and Reversal Detection:
The script calculates Z-scores for c_mean to identify statistically significant deviations.
It combines Z-scores and weighted C Ratio values to determine:
Bias (Bullish/Bearish based on Z-score thresholds and mean values).
Reversals (based on the relative positioning of the weighted and un-weighted c_mean and how they move).
Correlation Coefficient:
Correlation of mean C Ratios (c_mean) with bar indices over the short-term length (sl) assesses the strength and direction of trend consistency.
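A minimal sketch of a single C Ratio per the definition above (the length and sign convention are assumptions):
//@version=5
indicator("C Ratio sketch")
int len = input.int(21, "Length")
float a = ta.ema(ta.vwma(close, len), len)  // EMA calculated with VWMA as its source
float cRatio = (close - a) / ta.atr(len)    // deviation from the smoothed price, in ATR units
plot(cRatio, "C Ratio")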
Table Output and Its Meaning
Stochastic Strength:
Shows the average stochastic value across all C Ratios, with the overbought/oversold text and directional coloring described above.
Long-term Correlation:
Computed from the list of lengths as follows:
List of Lengths: Define the list of lengths for EMA and ATR explicitly (e.g., ).
Calculate Mean C Ratios: For each length in the list, calculate the mean C Ratio
Average these values over the entire dataset.
Store Lengths and Mean C Ratios: Maintain arrays for lengths and their corresponding mean C Ratios.
Correlation: compute the Pearson correlation between the list of lengths and the mean C Ratios.
Text: Indicates Uptrend, Downtrend, or neutral (---).
Color: Green for positive (uptrend), red for negative (downtrend), and white for neutral.
Z-Score Bias:
Assesses the statistical deviation of C Ratios from their historical mean.
Text: Bullish Bias, Bearish Bias, or --- (neutral).
Color: Green or red based on the direction and significance of the Z-score.
C-Ratio Mean:
Displays the weighted average C Ratio (c_mean_w) or a reversal condition.
Text: If no reversal is detected, shows c_mean_w; otherwise, a reversal condition (Bullish Reversal, Bearish Reversal).
Color: Indicates the strength and direction of the bias or reversal.
Practical Insights
Trend Identification: Correlation coefficients, Z-scores, and stochastic values collectively highlight whether the market is trending and the trend's direction.
Momentum and Volatility: Stochastic and ATR-normalized C Ratios provide insights into the momentum and price movement consistency across different timeframes.
Bias and Reversal Detection: The script highlights potential shifts in market sentiment or direction (bias or reversal) using statistical measures.
Customization: Users can toggle plots and analyze specific EMA lengths or focus on combined metrics like the weighted C Ratio.
Portfolio [Afnan]
🚀 Portfolio - Advanced Portfolio Management Indicator 📊
A game-changing portfolio management tool designed to help traders stay on top of their positions and manage risk efficiently. This indicator combines detailed tracking, real-time analytics, and visual clarity to ensure traders are well-equipped for the dynamic world of financial markets.
📈 Key Features 💡
Track up to 14 positions with ease
Real-time Profit & Loss (P&L) updates and risk metrics
Visual representation of entry, stop-loss (SL), and target levels
Alerts for stop-loss breaches and target achievements
Comprehensive portfolio summaries for quick analysis
Customizable options to suit individual trading styles
🔍 Main Components ⚙️
📊 1. Position Tracking
Detailed position data: entry, stop-loss, target levels, and more
Real-time risk-reward ratios
Insights into position size and exposure percentages
Continuous updates on P&L in real-time
📉 2. Visual Indicators
Clear visual markers for entry, SL, and target prices
Price labels with detailed percentage changes
Indicators that show the current position's market status
💼 3. Portfolio Summary
Aggregate account values and exposure
Summarized P&L metrics across all positions
Risk management insights for better decision-making
Daily performance tracking to evaluate strategies
⚠️ 4. Alert System
Instant notifications for stop-loss breaches
Alerts when target prices are hit
Alerts operate for the current chart symbol
⚡ Customization Options 🎨
Show or hide specific data columns
Adjust the table's position and size for better visibility
Personalize color schemes and text styles
Switch between full portfolio view and single symbol focus
📱 How to Use 📝
Input your positions in the indicator's settings
Enable or disable specific positions dynamically
Customize display preferences to your liking
Set up alerts for proactive risk management
Monitor all your trading activities in one comprehensive dashboard
📌 Important Notes ℹ️
Compatible with any trading symbol
Updates seamlessly during market hours
Alerts are specific to the currently active chart symbol
Maximum capacity: 14 simultaneous positions
Created by: @AfnanTAjuddin
⚠️ Disclaimer ⚠️
This indicator is a tool for informational purposes only. Ensure all calculations are verified and consult a financial professional before making investment decisions.
🎯 "Stay disciplined, trade smart, and let data guide your decisions." 📊
Quick scan for signal
🙏🏻 Hey TV, this is QSFS, following:
^^ Quick scan for drift (QSFD)
^^ Quick scan for cycles (QSFC)
As mentioned before, ML trading is all about spotting any kind of non-randomness, and this metric (along with 2 previously posted) gonna help ya'll do it fast. This one will show you whether your time series possibly exhibits mean-reverting / consistent / noisy behavior, that can be later confirmed or denied by more sophisticated tools. This metric is O(n) in windowed mode and O(1) if calculated incrementally on each data update, so you can scan Ks of datasets w/o worrying about melting da ice.
^^ windowed mode
Now the post will be divided into several sections, and a couple of things I guess you’ve never seen or thought about in your life:
1) About Efficiency Ratios posted there on TV;
Some of you might say this is the Efficiency Ratio you’ve seen in Perry's book. Firstly, I can assure you that neither me nor Perry, just as X amount of quants all over the world and who knows who else, would say smth like, "I invented it," lol. This is just a thing you R&D when you need it. Secondly, I invite you (and mods & admin as well) to take a lil glimpse at the following screenshot:
^^ not cool...
So basically, all the Efficiency Ratios that were copypasted to our platform suffer the same bug: dudes don’t know how indexing works in Pine Script. I mean, it’s ok, I been doing the same mistakes as well, but loxx, cmon bro, you... If you guys ever read it, the lines 20 and 22 in da code are dedicated to you xD
2) About the metric;
This supports both moving window mode when Length > 0 and all-data expanding window mode when Length < 1, calculating incrementally from the very first data point in the series: O(n) on history, O(1) on live updates.
Now, why do I SQRT transform the result? This is a natural action since the metric (being a ratio in essence) is bounded between 0 and 1, so it can be modeled with a beta distribution. When you SQRT transform it, it still stays beta (think what happens when you apply a square root to 0.01 or 0.99), but it becomes symmetric around its typical value and starts to follow a bell-shaped curve. This can be easily checked with a normality test or by applying a set of percentiles and seeing the distances between them are almost equal.
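For reference, the classic efficiency ratio with the SQRT transform described above looks like this in Pine (my reconstruction of the windowed mode, not the published code):
//@version=5
indicator("SQRT efficiency ratio sketch")
int len = input.int(100, minval = 1)
float net = math.abs(close - close[len])                // net displacement over the window
float path = math.sum(math.abs(ta.change(close)), len)  // total path length over the window
float er = math.sqrt(net / path)                        // SQRT transform symmetrizes the ratio
plot(er)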
Then I noticed that on different moving window sizes, the typical value of the metric seems to slide: higher window sizes lead to lower typical values across the moving windows. Turned out this can be modeled the same way confidence intervals are made. Lines 34 and 35 explain it all, I guess. You can see smth alike on an autocorrelogram. These two match the mean & mean + 1 stdev applied to the metric. This way, we’ve just magically received data to estimate alpha and beta parameters of the beta distribution using the method of moments. Having alpha and beta, we can now estimate everything further. Btw, there’s an alternative parameterization for beta distributions based on data length.
Now what you’ll see next is... u guys actually have no idea how deep and unrealistically minimalistic the underlying math principles are here.
I’m sure I’m not the only one in the universe who figured it out, but the thing is, it’s nowhere online or offline. By calculating higher-order moments & combining them, you can find natural adaptive thresholds that can later be used for anomaly detection/control applications for any data. No hardcoded thresholds, purely data-driven. Imma come back to this in one of the next drops, but the truest ones can already see it in this code. This way we get dem thresholds.
Your main thresholds are: basis, upper, and lower deviations. You can follow the common logic I’ve described in my previous scripts on how to use them. You just register an event when the metric goes higher/lower than a certain threshold based on what you’re looking for. Then you take the time series and confirm a certain behavior you were looking for by using an appropriate stat test. Or just run a certain strategy.
To avoid numerous triggers when the metric jitters around a threshold, you can follow this logic: forget about one threshold if touched, until another threshold is touched.
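In code, that debouncing idea could look like this (a sketch; metric, upper, and lower are assumed names):

var int regime = 0                        // 1 = upper latched, -1 = lower latched
if ta.crossover(metric, upper)
    regime := 1                           // further jitter around `upper` is now ignored
else if ta.crossunder(metric, lower)
    regime := -1                          // flips only once the opposite threshold is touched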
In general, when the metric gets higher than certain thresholds, like the upper deviation, it means the signal is stronger than noise. You confirm it with a more sophisticated tool & run momentum strategies if drift is in place, or volatility strategies if there’s no drift. Otherwise, you confirm & run mean-reverting strategies, regardless of whether there’s drift or not. Just don’t operate against the trend; hedge otherwise.
3) Flex;
Extension and limit thresholds based on distribution moments are gonna be discussed properly later, but for now you can see this:
^^ magic
Look at the thresholds—adaptive and dynamic. Do you see any optimizations? No ML, no DL, closed-form solution, but how? Just a formula based on a couple of variables? Maybe it’s just how the Universe works, but how can you know if you don’t understand how fundamentally numbers 3 and 15 are related to the normal distribution? Hm, why do they always say 3 sigmas but can’t say why? Maybe you can be different and say why?
This is the primordial power of statistical modeling.
4) Thanks;
I really wanna dedicate this to Charlotte de Witte & Marion Di Napoli, and their new track "Sanctum." It really gets you connected to the Source—I had it in my soul when I was doing all this ∞
MTF ADX TableThis indicator displays a table on the chart with the Average Directional Index (ADX) values for two different timeframes. It calculates the ADX using a custom formula and shows the ADX values along with their corresponding timeframes. The table's position, font size, and background color can be customized, and the timeframes are labeled with "ADX" appended to their unit (e.g., "5m ADX", "1D ADX"). The table updates dynamically with the latest ADX values for each timeframe. The indicator also provides a rating based on the threshold settings.
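For illustration, a minimal sketch of the mechanics, with the built-in ta.dmi standing in for the script's custom ADX formula (the timeframes, lengths, and table layout are assumptions):

//@version=5
indicator("MTF ADX Table sketch", overlay = true)
diLen  = input.int(14, "DI Length")
adxLen = input.int(14, "ADX Smoothing")
tf1    = input.timeframe("5", "Timeframe 1")
tf2    = input.timeframe("D", "Timeframe 2")
// pull ADX from each timeframe
[p1, n1, a1] = request.security(syminfo.tickerid, tf1, ta.dmi(diLen, adxLen))
[p2, n2, a2] = request.security(syminfo.tickerid, tf2, ta.dmi(diLen, adxLen))
// draw a 2x2 table with the latest values
var table t = table.new(position.top_right, 2, 2, bgcolor = color.new(color.gray, 80))
if barstate.islast
    table.cell(t, 0, 0, tf1 + " ADX")
    table.cell(t, 1, 0, str.tostring(a1, "#.##"))
    table.cell(t, 0, 1, tf2 + " ADX")
    table.cell(t, 1, 1, str.tostring(a2, "#.##"))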
BTC vs Altcoin CorrelationThis Pine Script indicator calculates and visualizes the rolling correlation between Bitcoin (BTC) and a selected altcoin, while providing insights into the percentage of time the correlation remains above a user-defined threshold. Users can independently configure the correlation calculation period and the lookback period for measuring the percentage of time above the threshold. The correlation is displayed as a color-coded line: green when above the threshold and red otherwise, with a dashed horizontal line marking the threshold level. A dynamic table displays the current correlation value and the percentage of time spent above the threshold within the specified period, enabling quick evaluation of correlation dynamics between BTC and the chosen altcoin.
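A sketch of those mechanics (the BTC symbol, lengths, and threshold defaults are assumptions, not the script's exact code):

//@version=5
indicator("BTC correlation sketch", overlay = false)
btcSym  = input.symbol("BINANCE:BTCUSDT", "BTC symbol")
corrLen = input.int(30, "Correlation length")
lookLen = input.int(200, "Lookback for % of time above threshold")
thr     = input.float(0.5, "Threshold")
btcClose = request.security(btcSym, timeframe.period, close)
corr     = ta.correlation(close, btcClose, corrLen)            // rolling correlation vs BTC
pctAbove = math.sum(corr > thr ? 1 : 0, lookLen) / lookLen * 100
plot(corr, "Correlation", color = corr > thr ? color.green : color.red)
hline(thr, "Threshold", color = color.gray, linestyle = hline.style_dashed)
plot(pctAbove, "% of time above", display = display.data_window)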
Consecutive Lower Highs/Higher Lows v1 [tradinggeniusberlin]This indicator counts lower-high and higher-low streaks. If a streak exceeds a certain threshold, a buy or exit arrow is shown.
Idea:
The probability of a reversal rises the more consecutive lower highs the asset has already printed, because of the mean-reversion tendencies of asset prices. This holds especially in uptrends above the 20 MA and/or 50 MA.
How to use it:
In uptrends, after a lower-high streak of 3 or more, enter at the first new high (see the sketch below).
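A sketch of the counting logic described above (the threshold default and arrow style are assumptions):

//@version=5
indicator("LH/HL streak sketch", overlay = true)
thr = input.int(3, "Streak threshold")
var int lhStreak = 0                       // consecutive lower highs
var int hlStreak = 0                       // consecutive higher lows
lhStreak := high < high[1] ? lhStreak + 1 : 0
hlStreak := low > low[1] ? hlStreak + 1 : 0
// after a qualifying lower-high streak, the first new high marks the entry
buySignal = lhStreak[1] >= thr and high > high[1]
plotshape(buySignal, "Buy", shape.triangleup, location.belowbar, color.green)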
Daily PlayDaily Play Indicator
The Daily Play Indicator is a clean and versatile tool designed to help traders organize and execute their daily trading plan directly on their charts. This indicator simplifies your workflow by visually displaying key inputs like market trend, directional bias, and key levels, making it easier to focus on your trading strategy.
Features
Dropdown Selection for Trend and Bias:
• Set the overall market trend (Bullish, Bearish, or Neutral) and your directional bias (Long, Short, or Neutral) using intuitive dropdown menus. No more manual typing or guesswork!
Key Levels:
Quickly input and display the Previous Day High and Previous Day Low. These levels are essential for many trading strategies, such as breakouts.
Real-Time News Notes:
Add a quick note about impactful news or market events (e.g., “Fed meeting today” or “Earnings season”) to keep contextual awareness while trading.
Simple On-Chart Display:
The indicator creates a “table-like” structure on the chart, aligning your inputs in an easy-to-read format. The data is positioned dynamically so it doesn’t obstruct the price action.
Customisable Visual Style:
Simple labels with clear text to ensure that your chart remains neat and tidy.
----
Use Case
The Daily Play Indicator is ideal for:
• Day traders and scalpers who rely on precise planning and real-time execution.
• Swing traders looking to mark critical levels and develop a trade plan before the session begins.
• Anyone who needs a structured way to stay focused and disciplined during volatile market conditions.
By integrating this tool into your workflow, you can easily align your daily preparation with live market action.
----
How to Use
Open the indicator settings to configure your inputs:
• Trend: Use the dropdown to choose between Bullish, Bearish, or Neutral.
• Bias: Select Long, Short, or Neutral to align your personal bias with the market.
• Previous Day Levels: Enter the High and Low of the previous trading session for key reference points.
• News: Add a short description of any relevant market-moving events.
Contraction & Expansion Multi-Screener █ Overview:
The Contraction & Expansion Multi-Screener analyzes market volatility across many symbols. It provides insights into whether a market is contracting or expanding in volatility. Using a range of statistical models for realized volatility, the script calculates, ranks, and monitors the degree of contraction or expansion in market volatility. The objective is to provide actionable insights into the current market phase by using historical data to model current volatility conditions.
This indicator accomplishes this by aggregating a variety of volatility measures, computing ranks, and applying threshold-based methods to identify transitions in market behavior. Volatility itself helps you understand whether the market is moving a lot. High volatility, or volatility that is increasing over time, means that the price is moving a lot. Volatility also mean-reverts, so if it's extremely low you can eventually expect it to return to its expected value, meaning there will be bigger price moves, and vice versa.
█ Features of the Indicator
This indicator allows the user to select up to 14 different symbols and retrieve their price data. There are five different volatility models that you can choose from in the settings to control how the screener measures volatility.
Volatility Settings:
Standard Deviation
Relative Standard Deviation
Mean Absolute Deviation
Exponentially Weighted Moving Average (EWMA)
Average True Range (ATR)
Standard Deviation, Mean Absolute Deviation, and EWMA use returns to model volatility, while Relative Standard Deviation uses price instead, due to its geometric properties, and Average True Range captures the absolute movement in price. In this indicator the volatility is ranked: if the rank is at or near 0, volatility is contracting and low; if it is at or near 100, volatility is at its maximum. A sketch of these estimators follows below.
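For reference, here is one way each estimator type could be written (a sketch; the screener's exact formulas, the EWMA lambda, and the ranking window may differ):

//@version=5
indicator("Volatility models sketch", overlay = false)
len = input.int(20, "Length")
ret = nz(close / close[1] - 1)
sd   = ta.stdev(ret, len)                               // Standard Deviation of returns
rsd  = ta.stdev(close, len) / ta.sma(close, len)        // Relative Standard Deviation (price-based)
madv = ta.sma(math.abs(ret - ta.sma(ret, len)), len)    // Mean Absolute Deviation of returns
var float ewmaVar = 0.0
ewmaVar := 0.94 * ewmaVar + 0.06 * ret * ret            // EWMA variance (lambda = 0.94 assumed)
ewmaVol = math.sqrt(ewmaVar)
atrVol  = ta.atr(len)                                   // Average True Range
rank = ta.percentrank(sd, 252)                          // ~0 = contraction, ~100 = expansion
plot(rank, "Volatility rank")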
For traders who use the Forex Master Pattern Indicator 2 and want to pair this screener with it, it is recommended to set the volatility type to Relative Standard Deviation.
Users can also place the screener at the top left, top right, bottom left, or bottom right. You can also disable sections of the screener and show a smaller list if you want to.
The Contraction & Expansion Screener shows you the following information:
Confirmation of whether or not there is a contraction or expansion
Percentage Rank of the volatility
Volatility MA direction: the screener uses moving averages on the volatility to determine whether it's increasing or decreasing over time.
MadTrend [InvestorUnknown]The MadTrend indicator is an experimental tool that combines the Median and Median Absolute Deviation (MAD) to generate signals, much like the popular Supertrend indicator. In addition to identifying Long and Short positions, MadTrend introduces RISK-ON and RISK-OFF states for each trade direction, providing traders with nuanced insights into market conditions.
Core Concepts
Median and Median Absolute Deviation (MAD)
Median: The middle value in a sorted list of numbers, offering a robust measure of central tendency less affected by outliers.
Median Absolute Deviation (MAD): Measures the average distance between each data point and the median, providing a robust estimation of volatility.
Supertrend-like Functionality
MadTrend utilizes the median and MAD in a manner similar to how Supertrend uses averages and volatility measures to determine trend direction and potential reversal points.
RISK-ON and RISK-OFF States
RISK-ON: Indicates favorable conditions for entering or holding a position in the current trend direction.
RISK-OFF: Suggests caution, signaling the end of the RISK-ON state and potential trend weakening or reversal.
Calculating MAD
The mad function calculates the median of the absolute deviations from the median, providing a robust measure of volatility.
// Function to calculate the Median Absolute Deviation (MAD)
mad(series float src, simple int length) =>
    med = ta.median(src, length)                 // Calculate the median
    abs_deviations = math.abs(src - med)         // Absolute deviations from the median
    ta.median(abs_deviations, length)            // Return the median of the absolute deviations
MADTrend Function
The MADTrend function calculates the median and MAD-based upper (med_p) and lower (med_m) bands. It determines the trend direction based on price crossing these bands.
MADTrend(series float src, simple int length, simple float mad_mult) =>
    // Calculate MAD (volatility measure) on the selected source
    mad_value = mad(src, length)
    // Calculate the median and the MAD-based upper/lower bands
    median = ta.median(src, length)
    med_p  = median + (mad_value * mad_mult)
    med_m  = median - (mad_value * mad_mult)
    // Persist the trend direction between bars
    var int direction = 0
    if ta.crossover(src, med_p)
        direction := 1
    else if ta.crossunder(src, med_m)
        direction := -1
    // Return bands and the current direction for plotting and signals
    [median, med_p, med_m, direction]
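A hypothetical usage sketch for the function above (the input names and plot styling are mine, not necessarily the script's):

input_src = input.source(close, "Source")
length    = input.int(30, "Median Length")
mad_mult  = input.float(3.0, "MAD Multiplier")
[med, med_up, med_dn, trend_dir] = MADTrend(input_src, length, mad_mult)
plot(med,    "Median",     color = color.gray)
plot(med_up, "Upper band", color = color.teal)
plot(med_dn, "Lower band", color = color.maroon)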
Trend Direction and Signals
Long Position (direction = 1): When the price crosses above the upper MAD band (med_p).
Short Position (direction = -1): When the price crosses below the lower MAD band (med_m).
RISK-ON: When the price moves further in the direction of the trend (beyond median ± MAD) after the initial signal.
RISK-OFF: When the price retraces towards the median, signaling potential weakening of the trend.
RISK-ON and RISK-OFF States
RISK-ON LONG: Price moves above the upper band after a Long signal, indicating strengthening bullish momentum.
RISK-OFF LONG: Price falls back below the upper band, suggesting potential weakness in the bullish trend.
RISK-ON SHORT: Price moves below the lower band after a Short signal, indicating strengthening bearish momentum.
RISK-OFF SHORT: Price rises back above the lower band, suggesting potential weakness in the bearish trend.
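Reading those states off the bands could look like this (a sketch built on the usage snippet above; the script's exact conditions may differ):

risk_on_long   = trend_dir == 1 and input_src > med_up    // long trend, price beyond the upper band
risk_off_long  = trend_dir == 1 and input_src < med_up    // long trend, price back inside the bands
risk_on_short  = trend_dir == -1 and input_src < med_dn   // short trend, price beyond the lower band
risk_off_short = trend_dir == -1 and input_src > med_dn   // short trend, price back inside the bands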
The picture below shows example RISK-ON periods, which can be identified by the "cloud".
Note: Highlighted areas on the chart indicating RISK-ON and RISK-OFF periods for both Long and Short positions.
Implementation Details
Inputs and Parameters:
Source (input_src): The price data used for calculations (e.g., close, open, high, low).
Median Length (length): The number of periods over which the median and MAD are calculated.
MAD Multiplier (mad_mult): Determines the distance of the upper and lower bands from the median.
Calculations:
Median and MAD are recalculated each period based on the specified length.
Upper (med_p) and Lower (med_m) Bands are computed by adding and subtracting the scaled MAD from the median.
Visual representation of the indicator on a price chart:
Backtesting and Performance Metrics
The MadTrend indicator includes a Backtesting Mode with a performance metrics table to evaluate its effectiveness compared to a simple buy-and-hold strategy.
Equity Calculation:
Calculates the equity curve based on the signals generated by the indicator.
Performance Metrics:
Metrics such as Mean Returns, Standard Deviation, Sharpe Ratio, Sortino Ratio, and Omega Ratio are computed.
The metrics are displayed in a table for both the strategy and the buy-and-hold approach.
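A minimal sketch of how such metrics can be computed bar by bar, using trend_dir from the earlier usage sketch (my formulas and the 252-bar window are assumptions; the script's exact definitions may differ):

r_strat = nz(trend_dir[1] * (close - close[1]) / close[1])   // follow the prior bar's signal
lenM    = 252
mu      = ta.sma(r_strat, lenM)                              // mean return
sigma   = ta.stdev(r_strat, lenM)                            // standard deviation
downDev = math.sqrt(ta.sma(math.pow(math.min(r_strat, 0.0), 2), lenM))
sharpe  = sigma > 0 ? mu / sigma * math.sqrt(252) : na       // annualized Sharpe
sortino = downDev > 0 ? mu / downDev * math.sqrt(252) : na   // annualized Sortino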
Note: Due to the use of labels and plot shapes, automatic chart scaling may not function ideally in Backtest Mode.
Alerts and Notifications
MadTrend provides alert conditions to notify traders of significant events:
Trend Change Alerts
RISK-ON and RISK-OFF Alerts - Provides real-time notifications about the RISK-ON and RISK-OFF states for proactive trade management.
Customization and Calibration
Default Settings: The provided default settings are experimental and not optimized. They serve as a starting point for users.
Parameter Adjustment: Traders are encouraged to calibrate the indicator's parameters (e.g., length, mad_mult) to suit their specific trading style and the characteristics of the asset being analyzed.
Source Input: The indicator allows for different price inputs (open, high, low, close, etc.), offering flexibility in how the median and MAD are calculated.
Important Notes
Market Conditions: The effectiveness of the MadTrend indicator can vary across different market conditions. Regular calibration is recommended.
Backtest Limitations: Backtesting results are historical and do not guarantee future performance.
Risk Management: Always apply sound risk management practices when using any trading indicator.
Mean Price
^^ Plotting switched to Line.
This method of financial time series (aka bars) downsampling is literally, naturally, and thankfully the best you can do in terms of maximizing info gain. You can finally chill and feed it to your studies & eyes, and probably use nothing else anymore.
(HL2 and occ3 also have use cases, but other aggregation methods? Not really, even if they do, the use cases are ‘very’ specific). Tho in order to understand why, you gotta read the following wall, or just believe me telling you, ‘I put it on my momma’.
The true story about trading volumes and why this is all a big misdirection
Actually, you don’t need to be a quant to get there. All you gotta do is stop blindly following other people’s contextual (at best) solutions, eg OC2 aggregation xD, and start using your own brain to figure things out.
Every individual trade (basically an imprint on 1D price space that emerges when market orders hit the order book) has several features like: price, time, volume, AND direction (Up if a market buy order hits the asks, Down if a market sell order hits the bids). Now, the last two features—volume and direction—can be effectively combined into one (by multiplying volume by 1 or -1), and this is probably how every order matching engine should output data. If we’re not considering size/direction, we’re leaving data behind. Moreover, trades aren’t just one-price dots all the time. One trade can consume liquidity on several levels of the order book, so a single trade can be several ticks big on the price axis.
You may think now that there are no zero-volume ticks. Well, yes and no. It depends on how you design an exchange and whether you allow intra-spread trades/mid-spread trades (now try to Google it). Intra-spread trades could happen if implemented when a matching engine receives both buy and sell orders at the same microsecond period. This way, you can match the orders with each other at a better price for both parties without even hitting the book and consuming liquidity. Also, if orders have different sizes, the remaining part of the bigger order can be sent to the order book. Basically, this type of trade can be treated as an OTC trade, having zero volume because we never actually hit the book—there’s no imprint. Another reason why it makes sense is when we think about volume as an impact or imbalance act, and how the medium (order book in our case) responds to it, providing information. OTC and mid-spread trades are not aggressive sells or buys; they’re neutral ticks, so to say. However huge they are, sometimes many blocks on NYSE, they don’t move the price because there’s no impact on the medium (again, which is the order book)—they’re not providing information.
... Now, we need to aggregate these trades into, let’s say, 1-hour bars (remember that a trade can have either positive or negative volume). We either don’t want to do it, or we don’t have this kind of information. What we can do is take already aggregated OHLC bars and extract all the info from them. Given the market is fractal, bars & trades gotta have the same set of features:
- Highest & lowest ticks (high & low) <- by price;
- First & last ticks (open & close) <- by time;
- Biggest and smallest ticks <- by volume.*
*e.g., in an array of signed trade volumes like [2323, -1212, …]:
2323: biggest trade,
-1212: smallest trade.
Now, in our world, somehow nobody started to care about the biggest and smallest trades and their inclusion in OHLC data, while this is actually natural. It’s the same way as it’s done with high & low and open & close: we choose the minimum and maximum value of a given feature/axis within the aggregation period.
So, we don’t have these 2 values: biggest and smallest ticks. The best we can do is infer them, and given the fact the biggest and smallest ticks can be located with the same probability everywhere, all we can do is predict them in the middle of the bar, both in time and price axes. That’s why you can see two HL2’s in each of the 3 formulas in the code.
So, summed up absolute volumes that you see in almost every trading platform are actually just a derivative metric, something that I call Type 2 time series in my own (proprietary ‘for now’) methods. It doesn’t have much to do with market orders hitting the non-uniform medium (aka order book); it’s more like a statistic. Still wanna use VWAP? Ok, but you gotta understand you’re weighting Type 1 (natural) time series by Type 2 (synthetic) ones.
How to combine all the data in the right way (khmm khhm ‘order’)
Now, since we have 6 values for each bar, let’s see what information we have about them, what we don’t have, and what we can do about it:
- Open and close: we got both when and where (time (order) and price);
- High and low: we got where, but we don’t know when;
- Biggest & smallest trades: we know shit, we infer it the way it was described before.
By using the location of the close & open prices relative to the high & low prices, we can make educated guesses about whether high or low was made first in a given bar. It’s not perfect, but it’s ultimately all we can do—this is the very last bit of info we can extract from the data we have.
There are 2 methods for inferring volume delta (which I call simply volume) that are presented everywhere, even here on TradingView. Funny thing is, these are actually 2 parts of the same method. I wonder how many folks see through it xD. The same method can be used both for inferring volume delta AND for making educated guesses whether high or low was made first.
Imagine and/or find the cases on your charts to understand faster:
* Close > open means we have an up bar and probably the volume is positive, and probably high was made later than low.
* Close < open means we have a down bar and probably the volume is negative, and probably low was made later than high.
Now that’s the point where you see that these 2 mentioned methods are actually parts of the same method:
If close = open, we still have another clue: distance from open/close pair to high (HC), and distance from open/close pair to low (LC):
* HC < LC, probably high was made later.
* HC > LC, probably low was made later.
And only if close = open and HC = LC do we have no clue whether high or low was made earlier within a bar. We simply don’t have any more information to even guess. This bar is called a neutral bar. The whole chain is sketched below.
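Written out in code, the inference is just a few lines (a sketch; +1 means high was probably made later, -1 means low was probably made later, 0 is a neutral bar):

hc = high - math.max(open, close)          // distance from the open/close pair to high
lc = math.min(open, close) - low           // distance from the open/close pair to low
order = close > open ? 1 : close < open ? -1 : hc < lc ? 1 : hc > lc ? -1 : 0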
At this point, we have both time (order) and price info for each of our 6 values. Now, we have to solve another weighted average problem, and that’s it. We’ll weight prices according to the order we’ve guessed. In the neutral bar case, open has a weight of 1, close has a weight of 3, and both high and low have weights of 2 since we can’t infer which one was made first. In all cases, biggest and smallest ticks are modeled with HL2 and weighted like they’re located in the middle of the bar in a time sense.
P.S.: I’ve also included a "robust" method where all the bars are treated like neutral ones (sketched below). I’ve used it before; obviously, it has less info gain -> works a bit worse.
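For the robust variant the weights are fully specified above, so here’s my arithmetic as a sketch (an assumption about the exact form used in the script):

// every bar treated as neutral: open = 1, high = low = 2, close = 3, two inferred HL2 ticks at 2 each
mean_price = (open + 2 * high + 2 * low + 3 * close + 2 * hl2 + 2 * hl2) / 12
// algebraically this collapses to (open + 4 * high + 4 * low + 3 * close) / 12
plot(mean_price, "Mean Price (robust)")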