US Bond Yield Magma CurveBond Market is the most important to monitor while investing and making macroeconomic desicions.
Inverted bond yeld curve is when short term bond yelds are greater than long term ones and this is a signal of fear for an imminent recession and disaster like a vulcano's eruption
The indicatior is applied to US Bond market but you can try to other countries like EU, China and so on. The code is free.
No further word is more powerful than a clear image as following:
Let's enjoy in making yourself decisions
"curve" için komut dosyalarını ara
US Treasury All Yield Curve FEDFUNDS WeightedRather than picking a benchmark pair of treasuries to express a yield curve, this indicator weighs all (excluding the new 20 Year) durations, each against the next, and weights that against the FEDFUNDS rate.
Coppock Curve StrategyCoppock Curve Strategy
Description:
The Coppock Indicator is a long-term price momentum oscillator which is used primarily to pinpoint major bottoms in the stock market. Crosses above the zero line indicate buying pressure, crosses below the center (zero) line indicate selling pressure.
This script generates a long entry signal when the Coppock value crosses over a signal line, and/or generates a short entry signal when the value crosses under a signal line.
Also it generates a long exit signal when the Coppock value crosses under a signal line, and/or generates a short exit signal when the value crosses over a signal line.
Signals can be filtered out by volatility, volume or both. To minimize repainting use higher percentage values of Time Threshold parameter.
Straightened Price CurveThis is another among zillions of attempts at a moving average of a security. More precisely, two attempts at one go). The zzoid function generates a zigzag-like MA that can adopt different forms. The stepline function creates, sure enough, a stepline.
US Treasury Yield CurveThis indicator plots the US treasury yield curve as maturity (x-axis/time) vs yield (y-axis/price)
Coppock CurveThis indicator was originally developed by Edwin "Sedge" Coppock (Barron's Magazine, October 1962).
Specially for @AlexMayorov :
1) Buy when indicator crosses the zero line upside
2) Sell when indicator crosses the zero line downside
Mongoose Yield Spread Dashboard v5 – Labeled, Alerted, ReadableCurveGuard: Mongoose Edition
Track the macro tide before it turns.
This tool visualizes the three most-watched U.S. Treasury yield curve spreads:
2s10s (10Y - 2Y)
5s30s (30Y - 5Y)
3M10Y (10Y - 3M)
Each spread is plotted with dynamic color logic, inversion alerts, and floating labels. Background shading highlights historical inversion zones to help spot macro regime shifts in real time.
✅ Alert-ready
✅ Dark mode optimized
✅ Floating labels
✅ Clean layout for fast macro insight
📌 For educational and informational purposes only.
This script does not provide financial advice or trade recommendations.
Grid Bot Parabolic [xxattaxx]🟩 The Grid Bot Parabolic, a continuation of the Grid Bot Simulator Series , enhances traditional gridbot theory by employing a dynamic parabolic curve to visualize potential support and resistance levels. This adaptability is particularly useful in volatile or trending markets, enabling traders to explore grid-based strategies and gain deeper market insights. The grids are divided into customizable trade zones that trigger signals as prices move into new zones, empowering traders to gain deeper insights into market dynamics and potential turning points.
While traditional grid bots excel in ranging markets, the Grid Bot Parabolic’s introduction of acceleration and curvature adds new dimensions, enabling its use in trending markets as well. It can function as a traditional grid bot with horizontal lines, a tilted grid bot with linear slopes, or a fully parabolic grid with curves. This dynamic nature allows the indicator to adapt to various market conditions, providing traders with a versatile tool for visualizing dynamic support and resistance levels.
🔑 KEY FEATURES 🔑
Adaptable Grid Structures (Horizontal, Linear, Curved)
Buy and Sell Signals with Multiple Trigger/Confirmation Conditions
Secondary Buy and Secondary Sell Signals
Projected Grid Lines
Customizable Grid Spacing and Zones
Acceleration and Curvature Control
Sensitivity Adjustments
📐 GRID STRUCTURES 📐
Beyond its core parabolic functionality, the Parabolic Grid Bot offers a range of grid configurations to suit different market conditions and trading preferences. By adjusting the "Acceleration" and "Curvature" parameters, you can transform the grid's structure:
Parabolic Grids
Setting both acceleration and curvature to non-zero values results in a parabolic grid.This configuration can be particularly useful for visualizing potential turning points and trend reversals. Example: Accel = 10, Curve = -10)
Linear Grids
With a non-zero acceleration and zero curvature, the grid tilts to represent a linear trend, aiding in identifying potential support and resistance levels during trending phases. Example: Accel =1.75, Curve = 0
Horizontal Grids
When both acceleration and curvature are set to zero, the indicator reverts to a traditional grid bot with horizontal lines, suitable for ranging markets. Example: Accel=0, Curve=0
⚙️ INITIAL SETUP ⚙️
1.Adding the Indicator to Your Chart
Locate a Starting Point: To begin, visually identify a price point on your chart where you want the grid to start.This point will anchor your grid.
2. Setting Up the Grid
Add the Grid Bot Parabolic Indicator to your chart. A “Start Time/Price” dialog will appear
CLICK on the chart at your chosen start point. This will anchor the start point and open a "Confirm Inputs" dialog box.
3. Configure Settings. In the dialog box, you can set the following:
Acceleration: Adjust how quickly the grid reacts to price changes.
Curve: Define the shape of the parabola.
Intervals: Determine the distance between grid levels.
If you choose to keep the default settings, with acceleration set to 0 and curve set to 0, the grid will display as traditional horizontal lines. The grid will align with your selected price point, and you can adjust the settings at any time through the indicator’s settings panel.
⚙️ CONFIGURATION AND SETTINGS ⚙️
Grid Settings
Accel (Acceleration): Controls how quickly the price reacts to changes over time.
Curve (Curvature): Defines the overall shape of the parabola.
Intervals (Grid Spacing): Determines the vertical spacing between the grid lines.
Sensitivity: Fine tunes the magnitude of Acceleration and Curve.
Buy Zones & Sell Zones: Define the number of grid levels used for potential buy and sell signals.
* Each zone is represented on the chart with different colors:
* Green: Buy Zones
* Red: Sell Zones
* Yellow: Overlap (Buy and Sell Zones intersect)
* Gray: Neutral areas
Trigger: Chooses which part of the candlestick is used to trigger a signal.
* `Wick`: Uses the high or low of the candlestick
* `Close`: Uses the closing price of the candlestick
* `Midpoint`: Uses the middle point between the high and low of the candlestick
* `SWMA`: Uses the Symmetrical Weighted Moving Average
Confirm: Specifies how a signal is confirmed.
* `Reverse`: The signal is confirmed if the price moves in the opposite direction of the initial trigger
* `Touch`: The signal is confirmed when the price touches the specified level or zone
Sentiment: Determines the market sentiment, which can influence signal generation.
* `Slope`: Sentiment is based on the direction of the curve, reflecting the current trend
* `Long`: Sentiment is bullish, favoring buy signals
* `Short`: Sentiment is bearish, favoring sell signals
* `Neutral`: Sentiment is neutral. No secondary signals will be generated
Show Signals: Toggles the display of buy and sell signals on the chart
Chart Settings
Grid Colors: These colors define the visual appearance of the grid lines
Projected: These colors define the visual appearance of the projected lines
Parabola/SWMA: Adjust colors as needed. These are disabled by default.
Time/Price
Start Time & Start Price: These set the starting point for the parabolic curve.
* These fields are automatically populated when you add the indicator to the chart and click on an initial location
* These can be adjusted manually in the settings panel, but he easiest way to change these is by directly interacting with the start point on the chart
Please note: Time and Price must be adjusted for each chart when switching assets. For example, a Start Price on BTCUSD of $60,000 will not work on an ETHUSD chart.
🤖 ALGORITHM AND CALCULATION 🤖
The Parabolic Function
At the core of the Parabolic Grid Bot lies the parabolic function, which calculates a dynamic curve that adapts to price action over time. This curve serves as the foundation for visualizing potential support and resistance levels.
The shape and behavior of the parabola are influenced by three key user-defined parameters:
Acceleration: This parameter controls the rate of change of the curve's slope, influencing its tilt or steepness. A higher acceleration value results in a more pronounced tilt, while a lower value leads to a gentler slope. This applies to both curved and linear grid configurations.
Curvature: This parameter introduces and controls the curvature or bend of the grid. A higher curvature value results in a more pronounced parabolic shape, while a lower value leads to a flatter curve or even a straight line (when set to zero).
Sensitivity: This setting fine-tunes the overall responsiveness of the grid, influencing how strongly the Acceleration and Curvature parameters affect its shape. Increasing sensitivity amplifies the impact of these parameters, making the grid more adaptable to price changes but potentially leading to more frequent adjustments. Decreasing sensitivity reduces their impact, resulting in a more stable grid structure with fewer adjustments. It may be necessary to adjust Sensitivity when switching between different assets or timeframes to ensure optimal scaling and responsiveness.
The parabolic function combines these parameters to generate a curve that visually represents the potential path of price movement. By understanding how these inputs influence the parabola's shape and behavior, traders can gain valuable insights into potential support and resistance areas, aiding in their decision-making process.
Sentiment
The Parabolic Grid Bot incorporates sentiment to enhance signal generation. The "Sentiment" input allows you to either:
Manually specify the market sentiment: Choose between 'Long' (bullish), 'Short' (bearish), or 'Neutral'.
Let the script determine sentiment based on the slope of the parabolic curve: If 'Slope' is selected, the sentiment will be considered 'Long' when the curve is sloping upwards, 'Short' when it's sloping downwards, and 'Neutral' when it's flat.
Buy and Sell Signals
The Parabolic Grid Bot generates buy and sell signals based on the interaction between the price and the grid levels.
Trigger: The "Trigger" input determines which part of the candlestick is used to trigger a signal (wick, close, midpoint, or SWMA).
Confirmation: The "Confirm" input specifies how a signal is confirmed ('Reverse' or 'Touch').
Zones: The number of "Buy Zones" and "Sell Zones" determines the areas on the grid where buy and sell signals can be generated.
When the trigger condition is met within a buy zone and the confirmation criteria are satisfied, a buy signal is generated. Similarly, a sell signal is generated when the trigger and confirmation occur within a sell zone.
Secondary Signals
Secondary signals are generated when a regular buy or sell signal contradicts the prevailing sentiment. For example:
A buy signal in a bearish market (Sentiment = 'Short') would be considered a "secondary buy" signal.
A sell signal in a bullish market (Sentiment = 'Long') would be considered a "secondary sell" signal.
These secondary signals are visually represented on the chart using hollow triangles, differentiating them from regular signals (filled triangles).
While they can be interpreted as potential contrarian trade opportunities, secondary signals can also serve other purposes within a grid trading strategy:
Exit Signals: A secondary signal can suggest a potential shift in market sentiment or a weakening trend. This could be a cue to consider exiting an existing position, even if it's currently profitable, to lock in gains before a potential reversal
Risk Management: In a strong trend, secondary signals might offer opportunities for cautious counter-trend trades with controlled risk. These trades could utilize smaller position sizes or tighter stop-losses to manage potential downside if the main trend continues
Dollar-Cost Averaging (DCA): During a prolonged trend, the parabolic curve might generate multiple secondary signals in the opposite direction. These signals could be used to implement a DCA strategy, gradually accumulating a position at potentially favorable prices as the market retraces or consolidates within the larger trend
Secondary signals should be interpreted with caution and considered in conjunction with other technical indicators and market context. They provide additional insights into potential market reversals or consolidation phases within a broader trend, aiding in adapting your grid trading strategy to the evolving market dynamics.
Examples
Trigger=Wick, Confirm=Touch. Signals are generated when the wick touches the next gridline.
Trigger=Close, Confirm=Touch. Signals require the close to touch the next gridline.
Trigger=SWMA, Confirm=Reverse. Signals are triggered when the Symmetrically Weighted Moving Average reverse crosses the next gridline.
🧠THEORY AND RATIONALE 🧠
The innovative approach of the Parabolic Grid Bot can be better understood by first examining the limitations of traditional grid trading strategies and exploring how this indicator addresses them by incorporating principles of market cycles and dynamic price behavior
Traditional Grid Bots: One-Dimensional and Static
Traditional grid bots operate on a simple premise: they divide the price chart into a series of equally spaced horizontal lines, creating a grid of trading zones. These bots excel in ranging markets where prices oscillate within a defined range. Buy and sell orders are placed at these grid levels, aiming to profit from mean reversion as prices bounce between the support and resistance zones.
However, traditional grid bots face challenges in trending markets. As the market moves in one direction, the bot continues to place orders in that direction, leading to a stacking of positions. If the market eventually reverses, these stacked trades can be profitable, amplifying gains. But the risk lies in the potential for the market to continue trending, leaving the trader with a series of losing trades on the wrong side of the market
The Parabolic Grid Bot: Adding Dimensions
The Parabolic Grid Bot addresses the limitations of traditional grid bots by introducing two additional dimensions:
Acceleration (Second Dimension): This parameter introduces a second dimension to the grid, allowing it to tilt upwards or downwards to align with the prevailing market trend. A positive acceleration creates an upward-sloping grid, suitable for uptrends, while a negative acceleration results in a downward-sloping grid, ideal for downtrends. The magnitude of acceleration controls the steepness of the tilt, enabling you to fine-tune the grid's responsiveness to the trend's strength
Curvature (Third Dimension): This parameter adds a third dimension to the grid by introducing a parabolic curve. The curve's shape, ranging from gentle bends to sharp turns, is controlled by the curvature value. This flexibility allows the grid to closely mirror the market's evolving structure, potentially identifying turning points and trend reversals.
Mean Reversion in Trending Markets
Even in trending markets, the Parabolic Grid Bot can help identify opportunities for mean reversion strategies. While the grid may be tilted to reflect the trend, the buy and sell zones can capture short-term price oscillations or consolidations within the broader trend. This allows traders to potentially pinpoint entry and exit points based on temporary pullbacks or reversals.
Visualize and Adapt
The Parabolic Grid Bot acts as a visual aid, enhancing your understanding of market dynamics. It allows you to "see the curve" by adapting the grid to the market's patterns. If the market shows a parabolic shape, like an upward curve followed by a peak and a downward turn (similar to a head and shoulders pattern), adjust the Accel and Curve to match. This highlights potential areas of interest for further analysis.
Beyond Straight Lines: Visualizing Market Cycle
Traditional technical analysis often employs straight lines, such as trend lines and support/resistance levels, to interpret market movements. However, many analysts, including Brian Millard, contend that these lines can be misleading. They propose that what might appear as a straight line could represent just a small part of a larger curve or cycle that's not fully visible on the chart.
Markets are inherently cyclical, marked by phases of expansion, contraction, and reversal. The Parabolic Grid Bot acknowledges this cyclical behavior by offering a dynamic, curved grid that adapts to these shifts. This approach helps traders move beyond the limitations of straight lines and visualize potential support and resistance levels in a way that better reflects the market's true nature
By capturing these cyclical patterns, whether subtle or pronounced, the Parabolic Grid Bot offers a nuanced understanding of market dynamics, potentially leading to more accurate interpretations of price action and informed trading decisions.
⚠️ DISCLAIMER⚠️
This indicator utilizes a parabolic curve fitting approach to visualize potential support and resistance levels. The mathematical formulas employed have been designed with adaptability and scalability in mind, aiming to accommodate various assets and price ranges. While the resulting curves may visually resemble parabolas, it's important to note that they might not strictly adhere to the precise mathematical definition of a parabola.
The indicator's calculations have been tested and generally produce reliable results. However, no guarantees are made regarding their absolute mathematical accuracy. Traders are encouraged to use this tool as part of their broader analysis and decision-making process, combining it with other technical indicators and market context.
Please remember that trading involves inherent risks, and past performance is not indicative of future results. It is always advisable to conduct your own research and exercise prudent risk management before making any trading decisions.
🧠 BEYOND THE CODE 🧠
The Parabolic Grid Bot, like the other grid bots in this series, is designed with education and community collaboration in mind. Its open-source nature encourages exploration, experimentation, and the development of new grid trading strategies. We hope this indicator serves as a framework and a starting point for future innovations in the field of grid trading.
Your comments, suggestions, and discussions are invaluable in shaping the future of this project. We welcome your feedback and look forward to seeing how you utilize and enhance the Parabolic Grid Bot.
MIDAS VWAP Jayy his is just a bash together of two MIDAS VWAP scripts particularly AkifTokuz and drshoe.
I added the ability to show more MIDAS curves from the same script.
The algorithm primarily uses the "n" number but the date can be used for the 8th VWAP
I have not converted the script to version 3.
To find bar number go into "Chart Properties" select " "background" then select Indicator Titles and "Indicator values". When you place your cursor over a bar the first number you see adjacent to the script title is the bar number. Put that in the dialogue box midline is MIDAS VWAP . The resistance is a MIDAS VWAP using bar highs. The resistance is MIDAS VWAP using bar lows.
In most case using N will suffice. However, if you are flipping around charts inputting a specific date can be handy. In this way, you can compare the same point in time across multiple instruments eg first trading day of the year or an election date.
Adding dates into the dialogue box is a bit cumbersome so in this version, it is enabled for only one curve. I have called it VWAP and it follows the typical VWAP algorithm. (Does that make a difference? Read below re my opinion on the Difference between MIDAS VWAP and VWAP ).
I have added the ability to start from the bottom or top of the initiating bar.
In theory in a probable uptrend pick a low of a bar for a low pivot and start the MIDAS VWAP there using the support.
For a downtrend use the high pivot bar and select resistance. The way to see is to play with these values.
Difference between MIDAS VWAP and the regular VWAP
MIDAS itself as described by Levine uses a time anchored On-Balance Volume (OBV) plotted on a graph where the horizontal (abscissa) arm of the graph is cumulative volume not time. He called his VWAP curves Support/Resistance VWAP or S/R curves. These S/R curves are often referred to as "MIDAS curves".
These are the main components of the MIDAS chart. A third algorithm called the Top-Bottom Finder was also described. (Separate script).
Additional tools have been described in "MIDAS_Technical_Analysis"
Midas Technical Analysis: A VWAP Approach to Trading and Investing in Today’s Markets by Andrew Coles, David G. Hawkins
Copyright © 2011 by Andrew Coles and David G. Hawkins.
Denoting the different way in which Levine approached the calculation.
The difference between "MIDAS" VWAP and VWAP is, in my opinion, much ado about nothing. The algorithms generate identical curves albeit the MIDAS algorithm launches the curve one bar later than the VWAP algorithm which can be a pain in the neck. All of the algorithms that I looked at on Tradingview step back one bar in time to initiate the MIDAS curve. As such the plotted curves are identical to traditional VWAP assuming the initiation is from the candle/bar midpoint.
How did Levine intend the curves to be drawn?
On a reversal, he suggested the initiation of the Support and Resistance VVWAP (S/R curve) to be started after a reversal.
It is clear in his examples this happens occasionally but in many cases he initiates the so-called MIDAS S/R VWAP right at the reversal point. In any case, the algorithm is problematic if you wish to start a curve on the first bar of an IPO .
You will get nothing. That is a pain. Also in Levine's writings, he describes simply clicking on the point where a
S/R VWAP is to be drawn from. As such, the generally accepted method of initiating the curve at N-1 is a practical and sensible method. The only issue is that you cannot draw the curve from the first bar on any security, as mentioned without resorting to the typical VWAP algorithm. There is another difference. VWAP is launched from the middle of the bar (as per AlphaTrends), You can also launch from the top of the bar or the bottom (or anywhere for that matter). The calculation proceeds using the top or bottom for each new bar.
The potential applications are discussed in the MIDAS Technical Analysis book.
TZtraderTZtrader
This is a TrendZones version with features to set stoploss and targets in short and long positions meant for use in intraday charts. It aims to provide signals for opening and closing long and short positions. In the comments under the TrendZones publication several people expressed a need for features for a short position similar to those for a long position as implemented in TrendZones, some want to use it for scalping, some asked for alerts. When I proposed to create a version for day trading with target lines based on ATR, several people liked the idea.
Full disclosure: I don’t do day trading, because, after I lost a lot of money, I had to promise my wife to stay away from it. I restrict myself to long term investing in stocks which are in uptrend. However I understand what a day trader needs. I gather from my experience that day trading or scalping is an attempt to earn something by opening a position in the morning and close, reopen and close it again during the day with a profit. It is usually done with leveraged instruments like CFD’s, futures, options, and what have you. Opening and closing positions is done within minutes, so the trader needs a quick and efficient way to set proper stoploss and target. TZtrader supports this by showing only three or four numbers on the price bar: The price of the instrument, The logical stop level (gray or green or maroon dots), and the target level (navy). All other numbers are suppressed to prevent mistakes. Also a clear feedback for current settings at the top-center of the pane and an alert feedback at bottom that flashes alerts during the development of the current bar and gives suppression status.
The script
First I made a bare bones version of TrendZones to which I added code for long and short trading setups and a bare setup for no position. The code for the logical stops in long setup had to be reviewed, after which this became the basis for stops in short setup.
Then I added code for 10 alert messages, which was a hassle, because this is the first time I coded alerts and the first time I used an array as a stack to avoid a complicated if-then construction. During testing the array caused a runtime error which I solved by adding ‘array.clear’ to the code, also I discovered that in TradingView separate alerts have to be created for all three setups - short, long and bare. Flipping setups is done in the inputs with a dropdown menu because Pine Script has no function for a clickable button.
One visual with three setups.
The visual has the TrendZones structure: Three near parallel very smooth curves, which border the moderate uptrend (green) and downtrend (orange) zone over and under the curve in the middle, the COG (Center Of Gravity). Where the price breaks out of these curves, strong trend zones show up over and under the curves, respectively strong uptrend (blue) and strong downtrend (red).
Three setups were made clearly different to avoid confusion and to provide oversight in case of multiple trades going on simultaneously which I imagine are monitored in one screen. You have to see which one is long, which short and which have no position. The long setup should not trigger short signals, nor should the short trigger long signals nor the bare setup exclusive long or short signals.
The Long setup is default, shown on the example chart. In this setup the Stoploss suggestions (green, gray and maroon dots) are under the price bars and the target line (navy) at a set distance above the High Border. A zone with a width of 1 ATR is drawn under the Low Border. In this setup 5 specific alerts are provided
The Short setup has the Stoploss suggestions over the price bars, the target line at a set distance under the Low Border. A zone with a width of 1 ATR is drawn above the High Border. This setup also has 5 specific alerts.
The Bare setup has no Stoploss suggestions, no target line and supports 4 alerts, 2 in common with the Long setup and 2 with Short.
The table below gives a summary of scripted alerts:
Setup = Where = When = Purpose
Long, Bare = Green Zone = Bars come from lower zones = Uptrend starts
Long, Bare = Green Zone = Sideways ends in uptrend = Uptrend resumes
Long = COG = First crossing = Uptrend might end warning
Long = Orange Zone = Bars come from higher zones = Uptrend ended take care
Long = Red Zone = Bars come from higher zones = Strong downtrend->close Long
Short, Bare = Orange Zone = Bars come from higher zones = Downtrend starts
Short, Bare = Orange Zone = Sideways ends in downtrend = Downtrend resumes
Short = COG = First crossing = Downtrend might end warning
Short = Green Zone = Bars come from lower zones = Downtrend ended take care
Short = Blue Zone = Bars come from lower zones = Strong uptrend -> close short
You can use script alerts in TradingView by clicking the clock in the sidebar, then ‘create alert’ or plus, as condition you choose ‘Tztrader’ in the dialog box, then the “Any alert() function call” option (the first item in the list). The script lets the valid alert trigger by TradingView after the bar is completed, this can differ from the flashed messages during its formation.
When you create alerts in Tradingview, I advice to do that for each setup, then to make only the alert active which matches the current setup, pause the other ones.
Suppressing false and annoying signals
The script has two ways to suppress such signals, which have to do with the numbers in the alert feedback. The numbers left and right of the message with a colored background, depict the zones in which the previous (left) and current (right) bar move. 1 is the strong downtrend zone (red), 2 the moderate downtrend zone (orange), 3 the sideways zones (gray), 4 the COG (gray), 5 the moderate uptrend zone (green), 6 the strong uptrend zone (blue), 7 something went wrong with assigning a zone (black). In extensive testing the number 7 never occurs, because I catch that error in the code. The idea is that an alert is only triggered if the previous bar was in a different zone. When the bars are in the same zone, no alert is possible. This way all annoying signals are suppressed and long, short and bare get the appropriate alerts.
The third number is a counter. It counts how often the COG is crossed without touching the outer curves. The counter will reset to zero when the upper or lower curve is touched. When the count is 1 you have zone situation 4 and appropriate alerts are flashed. When the count is 2 or higher, a sideways situation (3) is called and while the recrossings are going on, no alerts can be flashed. This suppresses false signals. The ATR zone and curves are brownish-gray where sideways happens(ed). When the channel is narrowed down to just the three curves, some false signals still might occur.
Inputs
“Setup”, default is long, drop down menu provides long, short and bare.
“Target ATR”, default is 2, sets the amount of ATR for the target line. In 1 minute charts 4 seems an appropriate setting, you have to learn by experience which setting works.
“show feedback …” default is on, This creates two feedback labels, a Setup feedback on top of the pane, which shows charted instrument, Setup type, Trend and timeframe of the chart. Background color of Trend feedback is green when it matches the setup, red when mismatches and gray when no match. The alert feedback at the bottom of the pane shows a number, a message and two numbers. The numbers will be explained in the chapter about false and annoying signals below. During formation of the bar, valid alerts are flashed with a blue background, otherwise the message ‘alerts for current bar suppressed’.
Logical Stops
The curves are the logical place to put stops, because, as these are averages of the high and low border of a Donchian channel, they signify the ‘natural’ current highest, lowest and main level in the lookback period that fit the monitored trend setup. A downtrend turns into an uptrend when a breakout of the upper curve occurs. If you are short, that is where you want to close position, so the logical place for the stoploss is the upper curve. Vice versa, when you are long, the logical stop is on the lower curve. The stops show up as green or gray dots on the curves, the green dots signify a nice entry level, the gray stops are there to suggest levels where unrealized profits might be secured, the maroon dots indicate that the trend mismatches the setup.
COG versus other lines
Any line used to identify a trend, be it some MA or some other line, is interpreted the same way: When the bars move above the line there is an uptrend and when below, a downtrend. COG is not different in that sense. If you put such a line in the same chart as TZtrader, you can see situations in which the other line shows uptrend or downtrend earlier than COG, also some other lines, e.g. Hull MA, are very good at showing tops and bottoms, while COG ignores these. On the other hand the other lines are usually a little nervous and let you shake out of position too soon. Just like the other lines, COG gives false signals when it is near horizontal. The advantage of the placement COG is the tolerance for pull backs. This way TZtrader keeps you longer in the trend. Such pull backs are often ‘flags’ which are interpreted in TA as confirming the trend. Tztrader aims to get you in position reasonably soon when a trend begins and out of position as soon as the trend turns against you. The placement of COG is done with a fundamentally different algorithm than other lines as it is not an average of prices, but the middle of two averages of borders of a Donchian channel. This gives the two zones between the curves the same quality as the two zones above and below the middle line of a standard Donchian Channel.
A multi timeframe application.
In this scenario you put a 5 minutes and 1 minute chart with Tztrader side by side. If the 5 minutes shows uptrend, set the 1 minute on long trading and open positions when the trend matches uptrend en close when it mismatches. Don’t open short positions. Once the 5 minute changes to downtrend, set Tztrader in the 1 minute to short trading and open positions when the trend matches downtrend and close when it mismatches.
The idea is that in a long ‘context’, provided by the 5 minutes, the uptrends in the 1 minute will last longer and go further, vice versa for the short ‘context’. This way you do swing trading in the 5 minute in a smart way, maximizing profits.
You can do this with any timeframe pairs with a proportion of around 5:1, 4:1, 6:1, like e.g. 60 minutes and 15 minutes or weeks and days (5 trading days in a week).
Dear day-traders, may this tool be helpful and may your days be blessed.
Take care
Polynomial Regression Bands + Channel [DW]This is an experimental study designed to calculate polynomial regression for any order polynomial that TV is able to support.
This study aims to educate users on polynomial curve fitting, and the derivation process of Least Squares Moving Averages (LSMAs).
I also designed this study with the intent of showcasing some of the capabilities and potential applications of TV's fantastic new array functions.
Polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modeled as a polynomial of nth degree (order).
For clarification, linear regression can also be described as a first order polynomial regression. The process of deriving linear, quadratic, cubic, and higher order polynomial relationships is all the same.
In addition, although deriving a polynomial regression equation results in a nonlinear output, the process of solving for polynomials by least squares is actually a special case of multiple linear regression.
So, just like in multiple linear regression, polynomial regression can be solved in essentially the same way through a system of linear equations.
In this study, you are first given the option to smooth the input data using the 2 pole Super Smoother Filter from John Ehlers.
I chose this specific filter because I find it provides superior smoothing with low lag and fairly clean cutoff. You can, of course, implement your own filter functions to see how they compare if you feel like experimenting.
Filtering noise prior to regression calculation can be useful for providing a more stable estimation since least squares regression can be rather sensitive to noise.
This is especially true on lower sampling lengths and higher degree polynomials since the regression output becomes more "overfit" to the sample data.
Next, data arrays are populated for the x-axis and y-axis values. These are the main datasets utilized in the rest of the calculations.
To keep the calculations more numerically stable for higher periods and orders, the x array is filled with integers 1 through the sampling period rather than using current bar numbers.
This process can be thought of as shifting the origin of the x-axis as new data emerges.
This keeps the axis values significantly lower than the 10k+ bar values, thus maintaining more numerical stability at higher orders and sample lengths.
The data arrays are then used to create a pseudo 2D matrix of x power sums, and a vector of x power*y sums.
These matrices are a representation the system of equations that need to be solved in order to find the regression coefficients.
Below, you'll see some examples of the pattern of equations used to solve for our coefficients represented in augmented matrix form.
For example, the augmented matrix for the system equations required to solve a second order (quadratic) polynomial regression by least squares is formed like this:
(∑x^0 ∑x^1 ∑x^2 | ∑(x^0)y)
(∑x^1 ∑x^2 ∑x^3 | ∑(x^1)y)
(∑x^2 ∑x^3 ∑x^4 | ∑(x^2)y)
The augmented matrix for the third order (cubic) system is formed like this:
(∑x^0 ∑x^1 ∑x^2 ∑x^3 | ∑(x^0)y)
(∑x^1 ∑x^2 ∑x^3 ∑x^4 | ∑(x^1)y)
(∑x^2 ∑x^3 ∑x^4 ∑x^5 | ∑(x^2)y)
(∑x^3 ∑x^4 ∑x^5 ∑x^6 | ∑(x^3)y)
This pattern continues for any n ordered polynomial regression, in which the coefficient matrix is a n + 1 wide square matrix with the last term being ∑x^2n, and the last term of the result vector being ∑(x^n)y.
Thanks to this pattern, it's rather convenient to solve the for our regression coefficients of any nth degree polynomial by a number of different methods.
In this script, I utilize a process known as LU Decomposition to solve for the regression coefficients.
Lower-upper (LU) Decomposition is a neat form of matrix manipulation that expresses a 2D matrix as the product of lower and upper triangular matrices.
This decomposition method is incredibly handy for solving systems of equations, calculating determinants, and inverting matrices.
For a linear system Ax=b, where A is our coefficient matrix, x is our vector of unknowns, and b is our vector of results, LU Decomposition turns our system into LUx=b.
We can then factor this into two separate matrix equations and solve the system using these two simple steps:
1. Solve Ly=b for y, where y is a new vector of unknowns that satisfies the equation, using forward substitution.
2. Solve Ux=y for x using backward substitution. This gives us the values of our original unknowns - in this case, the coefficients for our regression equation.
After solving for the regression coefficients, the values are then plugged into our regression equation:
Y = a0 + a1*x + a1*x^2 + ... + an*x^n, where a() is the ()th coefficient in ascending order and n is the polynomial degree.
From here, an array of curve values for the period based on the current equation is populated, and standard deviation is added to and subtracted from the equation to calculate the channel high and low levels.
The calculated curve values can also be shifted to the left or right using the "Regression Offset" input
Changing the offset parameter will move the curve left for negative values, and right for positive values.
This offset parameter shifts the curve points within our window while using the same equation, allowing you to use offset datapoints on the regression curve to calculate the LSMA and bands.
The curve and channel's appearance is optionally approximated using Pine's v4 line tools to draw segments.
Since there is a limitation on how many lines can be displayed per script, each curve consists of 10 segments with lengths determined by a user defined step size. In total, there are 30 lines displayed at once when active.
By default, the step size is 10, meaning each segment is 10 bars long. This is because the default sampling period is 100, so this step size will show the approximate curve for the entire period.
When adjusting your sampling period, be sure to adjust your step size accordingly when curve drawing is active if you want to see the full approximate curve for the period.
Note that when you have a larger step size, you will see more seemingly "sharp" turning points on the polynomial curve, especially on higher degree polynomials.
The polynomial functions that are calculated are continuous and differentiable across all points. The perceived sharpness is simply due to our limitation on available lines to draw them.
The approximate channel drawings also come equipped with style inputs, so you can control the type, color, and width of the regression, channel high, and channel low curves.
I also included an input to determine if the curves are updated continuously, or only upon the closing of a bar for reduced runtime demands. More about why this is important in the notes below.
For additional reference, I also included the option to display the current regression equation.
This allows you to easily track the polynomial function you're using, and to confirm that the polynomial is properly supported within Pine.
There are some cases that aren't supported properly due to Pine's limitations. More about this in the notes on the bottom.
In addition, I included a line of text beneath the equation to indicate how many bars left or right the calculated curve data is currently shifted.
The display label comes equipped with style editing inputs, so you can control the size, background color, and text color of the equation display.
The Polynomial LSMA, high band, and low band in this script are generated by tracking the current endpoints of the regression, channel high, and channel low curves respectively.
The output of these bands is similar in nature to Bollinger Bands, but with an obviously different derivation process.
By displaying the LSMA and bands in tandem with the polynomial channel, it's easy to visualize how LSMAs are derived, and how the process that goes into them is drastically different from a typical moving average.
The main difference between LSMA and other MAs is that LSMA is showing the value of the regression curve on the current bar, which is the result of a modelled relationship between x and the expected value of y.
With other MA / filter types, they are typically just averaging or frequency filtering the samples. This is an important distinction in interpretation. However, both can be applied similarly when trading.
An important distinction with the LSMA in this script is that since we can model higher degree polynomial relationships, the LSMA here is not limited to only linear as it is in TV's built in LSMA.
Bar colors are also included in this script. The color scheme is based on disparity between source and the LSMA.
This script is a great study for educating yourself on the process that goes into polynomial regression, as well as one of the many processes computers utilize to solve systems of equations.
Also, the Polynomial LSMA and bands are great components to try implementing into your own analysis setup.
I hope you all enjoy it!
--------------------------------------------------------
NOTES:
- Even though the algorithm used in this script can be implemented to find any order polynomial relationship, TV has a limit on the significant figures for its floating point outputs.
This means that as you increase your sampling period and / or polynomial order, some higher order coefficients will be output as 0 due to floating point round-off.
There is currently no viable workaround for this issue since there isn't a way to calculate more significant figures than the limit.
However, in my humble opinion, fitting a polynomial higher than cubic to most time series data is "overkill" due to bias-variance tradeoff.
Although, this tradeoff is also dependent on the sampling period. Keep that in mind. A good rule of thumb is to aim for a nice "middle ground" between bias and variance.
If TV ever chooses to expand its significant figure limits, then it will be possible to accurately calculate even higher order polynomials and periods if you feel the desire to do so.
To test if your polynomial is properly supported within Pine's constraints, check the equation label.
If you see a coefficient value of 0 in front of any of the x values, reduce your period and / or polynomial order.
- Although this algorithm has less computational complexity than most other linear system solving methods, this script itself can still be rather demanding on runtime resources - especially when drawing the curves.
In the event you find your current configuration is throwing back an error saying that the calculation takes too long, there are a few things you can try:
-> Refresh your chart or hide and unhide the indicator.
The runtime environment on TV is very dynamic and the allocation of available memory varies with collective server usage.
By refreshing, you can often get it to process since you're basically just waiting for your allotment to increase. This method works well in a lot of cases.
-> Change the curve update frequency to "Close Only".
If you've tried refreshing multiple times and still have the error, your configuration may simply be too demanding of resources.
v4 drawing objects, most notably lines, can be highly taxing on the servers. That's why Pine has a limit on how many can be displayed in the first place.
By limiting the curve updates to only bar closes, this will significantly reduce the runtime needs of the lines since they will only be calculated once per bar.
Note that doing this will only limit the visual output of the curve segments. It has no impact on regression calculation, equation display, or LSMA and band displays.
-> Uncheck the display boxes for the drawing objects.
If you still have troubles after trying the above options, then simply stop displaying the curve - unless it's important to you.
As I mentioned, v4 drawing objects can be rather resource intensive. So a simple fix that often works when other things fail is to just stop them from being displayed.
-> Reduce sampling period, polynomial order, or curve drawing step size.
If you're having runtime errors and don't want to sacrifice the curve drawings, then you'll need to reduce the calculation complexity.
If you're using a large sampling period, or high order polynomial, the operational complexity becomes significantly higher than lower periods and orders.
When you have larger step sizes, more historical referencing is used for x-axis locations, which does have an impact as well.
By reducing these parameters, the runtime issue will often be solved.
Another important detail to note with this is that you may have configurations that work just fine in real time, but struggle to load properly in replay mode.
This is because the replay framework also requires its own allotment of runtime, so that must be taken into consideration as well.
- Please note that the line and label objects are reprinted as new data emerges. That's simply the nature of drawing objects vs standard plots.
I do not recommend or endorse basing your trading decisions based on the drawn curve. That component is merely to serve as a visual reference of the current polynomial relationship.
No repainting occurs with the Polynomial LSMA and bands though. Once the bar is closed, that bar's calculated values are set.
So when using the LSMA and bands for trading purposes, you can rest easy knowing that history won't change on you when you come back to view them.
- For those who intend on utilizing or modifying the functions and calculations in this script for their own scripts, I included debug dialogues in the script for all of the arrays to make the process easier.
To use the debugs, see the "Debugs" section at the bottom. All dialogues are commented out by default.
The debugs are displayed using label objects. By default, I have them all located to the right of current price.
If you wish to display multiple debugs at once, it will be up to you to decide on display locations at your leisure.
When using the debugs, I recommend commenting out the other drawing objects (or even all plots) in the script to prevent runtime issues and overlapping displays.
TrendZonesTrendZones
This is an indicator which I use, have tested, tweaked and added features to for use in my trend following investing system. I got the idea for it when for some reason I was looking for a dynamic reference to measure the height of a channel or something. In search of this I made MA’s of the high and low borders of a Donchian channel which turned out to be two near parallel and stunningly smooth curves. This visual was so appealing that I immediately tried to turn it into a replacement for the KeltCOG which I previously used in my system. First I created a curve in the middle of the upper and lower curves, which I called COG (Center Of Gravity). Then I decided to enter only one lookback and let the script create a Donchian channel with half the lookback and use this to create the curves with an MA of whole lookback. For this reason the minimum lookback is set to 14, enough room for the Donchian Channel of 7 periods. This Donchian ChanneI has a special way of calculating the borders, involving a 5 period Median value. Thanks to this these borders are really a resistance and support level, which won’t change at a whim, e.g. when a ‘dead cat bounce’ occurs. I prevented the Donchian channel to show itself between the curves and only pop out from behind these. These pop outs now function as “strong trend zones”. I gave it colors (blue:-strong up, green: moderate up, orange: moderate down, red: strong down, near COG: gray, curves horizontal: gray) and it looked very appealing. I tested it in different time frames. In some weekend, when I was bored, I observed for a few hours the minute chart of bitcoin. It turned out that you can reliably tell that an uptrend ends when the candles go under the COG beginning a downtrend. Uptrend starts again once the candles go above COG. As Trends on minute charts only last around half an hour, this entertainment made the potential of this indicator very clear to me in just one afternoon.
Risk Management, Safe Level and Logical Stops.
In the inputs are settings for “Risk Tolerance”, and to activate “Show Logical Stop Level” (activated in example chart) and “Show Safe Level”. As a rule of thump a trade should not expose the invested capital to a risk of losing more than 2 percent. I divided my investment capital in ten equal parts which are allocated to ten different stocks or other instruments or kept liquid. This means that when a position is closed by triggering a Stop with a loss of 20 percent, the invested capital suffers only 2 percent (20% x 10% = 2%). This is why the value for “Risk Tolerance” has a default of 20. Because I put my Stops on the lower curve, a “Safe Level” can be calculated such that when you buy for a price below or at this level, the stop will protect the position sufficiently. Because I only buy when the instrument is in uptrend, the buying price should be between COG and Safe Level. Although I never do that, putting the stop at other curves is feasible and when you want to widen the stop (I never lower my stops btw) in a downtrend situation, even 1 ATR below the “Low Border”. I call these “Logical Stop Levels”, marked with dark green circles on the lower curve when safe buying by placing the Stoploss on this curve is possible, gray circles on the other curves, on the Upper Curve navy when price enters very profitable level. In a downtrend situation maroon circles appear.
Target lines
When I open a position I always set a Stoploss and a Target, for this purpose two types of Target values can be set and corresponding Target lines activated. These lines are drawn above the “High Border” at the set distance. If one expects some price to be used, differences will occur.
Other Features
Support Zone, this is 1 ATR below the “Low Border”, the maroon circles of the “Logal Stops” are placed on this “Support level”.
Stop distance and Channel Width. (activated in example chart) These are reported in a two cell table in the right lower corner of the main panel. I created this because I want to be able to check the volatility, whether the channel shows a situation in which safe buying in most levels of the channel is possible or what risk you take when you buy now and set the Stop at the nearest logical level (which is not always the “Lower curve”). This feature comes in handy for creating a setup I propose in the “Day Trading Fantasy” below.
Some General and User Settings. I never activate this, perhaps you will.
Use Of TrendZones In My System.
Create a list of stocks in uptrend. I define ‘stock in uptrend’ as in uptrend zone in all three monthly, weekly and daily charts, all three should at the same time be in uptrend. The advantage of TrendZones is that you can immediately see in which zone the candle moves.
Opening a position in a stock from the above list. I do this only when in both the daily and weekly the green dot on the lower curve indicates a buying opportunity. This is usually not the case in most of the items of the list, this feature thus provides a good timing for opening a position. Sometimes you need to wait a few weeks for this to happen.
Setting a target over a position. For this I use the Target percent line of the weekly chart with the default value of 10.
Updating the Stoploss and Target values. Every week or two weeks I set these to the new values of the “Lower Curve” and the Target line of the weekly. Attention: never shift down Stops, only up or let them stay the same when the curve moves down. I never use Stop levels on other curves.
I Check the charts whenever I like to do this. Close the position when the uptrend obviously shifts down. Otherwise I let the profits run until the Target triggers which closes the position with some profit.
For selecting stocks an checking charts for volume events, I also use a subpanel indicator called “TZanalyser”, which borrows the visual of my “Fibonacci Zone Oscillator”, is based on TrendZones and includes code from my REVE indicators. I intend to publish that as well.
Day Trading Fantasy.
Day trading is an attempt to earn a dime by opening a position in the morning and close it during the day again with a profit (or a loss). Before the market closes, you close all day trading positions.
In my fantasy the “Logical Stop Level” is repurposed for use as entry point and the ATR-based Target line is used to provide a target setting in an intraday chart, like e.g. 15 minute. To do this the “Safe Level” should be limited to between Channel width and COG. This can be done by showing “Safe Level” and “Channel Width” and then set “Risk Tolerance” to around the shown Channel Width. In this setting you can then wait for the green circle to show up for entering your trade and protect it with the stop.
I don’t know if this works fine or if it’s better than other day trade systems, because I don’t do day trading.
Take care and have fun.
Linear Regression AngleThere are several Linear Regression indicator in the Public Library, but I don't think there is one that converts the Linear Regression (LR) curve into angle in degrees, relative to a set reference frame. Due to the large price range between tickers, creating this indicator isn't as straight forward as I originally thought. For example, given the same time period, a stock that fluctuate in the 10's will have a true linear regression angle dramatically different from a penny stock. Even changing the scale on your chart will affect the "apparent" angle you see on the chart. Hence, this indicator DOES NOT provide the true linear regression angle, but only a relative one based on a defined number of historical bars.
Originality and usefulness
This indicator provides Linear Regression (LR) Angle in degree that may be more easily interpreted by some traders as we are more accustomed to line angles in degree and know how to visualize them.
This script also provides the option to overlay up to four LR curves of different periods, as well as an average curve of the enabled curves. This allows traders to analysis short to long term trends.
Furthermore, slope (rate of change) of each LR curves can be toggled. The slope plot can help traders visualize accelerations and decelerations of the LR curves which may help in spotting trend reversals.
Data table provides real time data for each curve.
Example of using slope plot with a 30 bars Linear Regression Angle:
Recession Warning Traffic LightThis is an indicator that uses 6 different metrics to determine the combined probability of a recession and compares the high probability warning periods against actual historical periods of recession.
GREEN tells us that the referenced recession indicators are not exhibiting any warning. Observe the long stretches of “all-green” in between recessionary periods in the chart above.
RED will show a full-on warning level for that particular recession indicator, signaling that monitoring of this sector is clearly showing a problem – which has in the past, reliably exhibited itself as a forewarning of recessions.
Adding green and red together can help determine a combined probability of recession.
IMPORTANT: Your chart should be on 1d and set to SPX , DJI ,or NDQ indices
Precious metals: This indicator calculates the relative prices of Gold & rhodium. Gold is a flight-to-quality asset. Rhodium is the rarest of precious industrial metals and prices spike when the economy is heating up. In front of a recession, the upper relative movement of rhodium precedes gold.
Stock markets: This indicator compares closing prices to growth rate curves of the SPX. This indication is the noisiest but tells us very well when the recession has ended. Stock market indices, which respond to “smart money” moving out of markets when the other indicators begin to warn of recession, or when markets become overheated and rise to historically unsustainable levels.
Yield curve: This indicator compares the 3m & 10y treasuries and detects yield curve inversions. Interest rates are controlled by the Federal Reserve and by the purchasers in the Federal Treasury auction markets, which together create the treasury yield curve. This inversion is the most reliable recession indicator. These happen during a flight to quality.
Federal Reserve: This indicator measures GDP and detects contraction which is technically a recession. This is usually one of the last indicators to enter a Warning state, and it could be 6 months delayed simply confirming what may have already been projected.
Money Supply. This indicator measures the M2 money supply, which typically grows about 1% per calendar quarter. When this shrinks, it's tapping the brakes on the economy. This can also lead to yield curve inversion. This is also a measure of inflation and its effects on the aggregate money supply (liquid capital) available for short-term economic activity, or which can be directed into the purchase of long-term, less liquid assets.
Leading Economic factors: There is a whole basket of leading economic indicators that, as collections, reflect overall growth or contraction of economic activity. These indicators include measures of level and growth in productivity, employment, housing, consumer confidence, industrial purchasing confidence, and much more. These indicators may or may not be detached from the broader economy, and often provide up to 6 months of foresight. For more information please visit www.conference-board.org
Actual Recession: Central Bank indicators are published by the Federal Reserve and reflect their own analysis of national and regional economic health, as well as their calculations of the likelihood of a recession. The Federal Reserve has a recession ticker which is used to plot periods of actual recessions on this indicator for comparison.
Buy And Hold Performance Screener - [JTCAPITAL]Buy And Hold Performance Screener – is a script designed to track and display multi-asset “buy and hold” performance curves and performance statistics over defined timeframes for selected symbols. It doesn’t attempt to time entries or exits; rather, it shows what would happen if one simply bought the asset at the defined start date and held it.
The indicator works by calculating in the following steps:
Start Date Definition
The script begins by reading an input for the start date. This defines the bar from which the equity curves begin.
Symbol Definitions & Close Price Retrieval
The script allows the user to specify up to ten tickers. For each ticker it uses request.security() on the “1D” timeframe to retrieve the daily close price of that symbol.
Plot Enable Inputs
For each ticker there is an input boolean controlling whether the equity curve for that ticker should be plotted.
Asset Name Cleaning
The helper function clean_name(string asset) => … takes the asset string (e.g., “CRYPTO:SOLUSD”) and manipulates it (via string splitting and replacements) to derive a cleaned short name (e.g., “SOL”). This name is used for visuals (labels, table headers).
Equity Curve Calculation (“HODL”)
The helper function f_HODL(closez) defines a variable equity that assumes a starting equity of 1 unit at the start date and then multiplies by the ratio of each bar’s close to the prior bar’s close: i.e. daily compounding of returns.
Performance Metrics Calculation
The helper function f_performance(closez) calculates, for each symbol’s close series, the percentage change of the current close relative to its close 30 days ago, 90 days ago, 180 days ago, 1 year ago (365 days), 2 years ago (730 days) and 3 years ago (1095 days).
Equity Curve Plots
For each ticker, if the corresponding plot input is true, the script assigns a plotted variable equal to the equity curve value. Its then drawing each selected equity curve on the chart, each in a distinct color.
Table Construction
If the plottable input is true, the script constructs a table and populates it with rows and column corresponding to the assigned tickers and the set 6 timeframes used for display.
Buy and Sell Conditions:
Since this is strictly a “buy-and-hold” performance screener, there are no explicit buy or sell signals generated or plotted. The script assumes: buy at the defined start_date, hold continuously to present. There are no filters, no exit logic, no take-profit or stop-loss. The benefit of this approach is to provide a clean benchmark of how selected assets would have performed if one simply adopted a passive “buy & hold” approach from a given start date.
Features and Parameters:
start_date (input.time) : Defines the date from which performance and equity curves begin.
ticker1 … ticker10 (input.symbol) : User-selectable asset symbols to include in the screener.
plot1 … plot10 (input.bool) : Boolean flags to enable/disable plotting of each asset’s equity curve.
plottable (input.bool) : Flag to enable/disable drawing the performance table.
Colored plotting + Labels for identifying each asset curve on the chart.
Specifications:
Here is a detailed breakdown of every calculation/variable/function used in the script and what each part means:
start_date
This is defined via input.time(timestamp("1 Jan 2025"), title = "Start Date"). It allows the user to pick a specific calendar date from which the equity curves and performance calculations will start.
ticker1 … ticker10
These inputs allow the user to select up to ten different assets (symbols) to monitor. The script uses each of these to fetch daily close prices.
plot1 … plot10
Boolean inputs controlling which of the ten asset equity curves are plotted. If plotX is true, the equity curve for ticker X will be visible; otherwise it will be not plotted. This gives the user flexibility to include or exclude specific assets on the chart.
Returns the cleaned asset short name.
This provides friendly text labels like “BTC”, “ETH”, “SOL”, etc., instead of full symbol codes.
The choice of distinct colours for each asset helps differentiate curves visually when multiple assets are overlaid.
Colour definitions
Variables color1…color10 are explicitly defined via color.rgb(r,g,b) to give each asset a unique colour (e.g., red, orange, yellow, green, cyan, blue, purple, pink, etc.).
What are the benefits of combining these calculations?
By computing equity curves for multiple assets from the same start date and overlaying them, you can visualise comparative performance of different assets under a uniform “buy & hold” assumption.
The performance table adds multi-horizon returns (30 D, 90 D, 180 D, 1 Y, 2 Y, 3 Y) which helps the user see both short-term and longer-term performance without having to manually compute returns.
The use of daily close data via request.security(..., "1D") removes dependency on the chart’s timeframe, thereby standardising the comparison across assets.
The equity curve and table together provide both visual (curve) and numerical (table) summaries of performance, making it easier to spot trends, divergences, and cross-asset comparisons at a glance.
Because it uses compounding (equity := equity * (closez / closez )), the curves reflect the real growth of a 1-unit investment held over time, rather than only simple returns.
The labelling of curves and the color-coding make the multi-asset overlay easier to interpret.
Using a clean start date ensures that all curves begin at the same point (1 unit at start_date), making relative performance intuitive.
Because of this, the script is useful as a benchmarking tool: rather than trying to pick entries or exit points, you can simply compare “what if I had held these assets since Jan 1 2025” (or your chosen date), and see which assets out-/under-performed in that period. It helps an investor or trader evaluate the long-term benefits of passive vs. active management, or of allocation decisions.
Please note:
The script assumes continuous daily data and does not account for dividends, fees, slippage, or tax implications.
It does not attempt to optimise timing or provide trading signals.
Returns prior to the start date are ignored (equity only begins once time >= start_date).
For newly listed assets with fewer than 365 or 730 or 1095 days of history, the longer-horizon returns may return na or misleading values.
Because it uses request.security() without specifying lookahead, and on “1D” timeframe, it complies with standard usage but you should verify there is no look-ahead bias in your particular setup.
ENJOY!
BTC Transaction Indicator Name: "Bitcoin On-Chain Volume & Dynamic Parabolic Curve Signals"
Purpose:
This indicator is designed for Bitcoin traders and long-term holders. It combines the analysis of Bitcoin's on-chain transaction volume with price action to generate "Whale" and "Bear" signals. Additionally, it features a unique dynamic parabolic curve that acts as a visual support line, adapting its visibility based on price interaction with a key Exponential Moving Average (EMA).
Key Components:
On-Chain Volume Analysis:
Utilizes Estimated Transaction Volume (ETRAV) data from the Bitcoin blockchain.
Calculates fast and slow Simple Moving Averages (SMAs) of this volume.
Identifies volume trends (up/down) and significant volume increases/decreases.
Employs fixed thresholds (2,500,000 for low volume and 25,000,000 for high volume) to define key activity levels, similar to how historical on-chain analysis defined accumulation and distribution zones.
Price Action Analysis:
Calculates fast and slow SMAs of the price.
Detects price trends (up/down), recoveries, and declines based on these price SMAs.
"Whale" and "Bear" Signals:
Whale Signals (Buy-side): Generated when there's an upward volume trend, significant volume increase, and a downward price trend followed by price recovery. These indicate potential accumulation phases.
Bear Signals (Sell-side): Generated when there's a downward volume trend, significant volume decrease, and an upward price trend followed by price decline. These indicate potential distribution phases.
Visuals: Both types of signals are plotted as small, colored circles directly on the price chart, with corresponding text labels ("Whale," "Buy," "Bear," "Sell," "Price Recovering," "Price Declining").
Dynamic Parabolic Curve:
Concept: A green parabolic (exponential) curve that serves as a dynamic visual support line.
Activation: The curve starts drawing automatically only when the price crosses over the EMA 500 (Exponential Moving Average of 500 periods). The curve's starting point is set at a user-defined percentage below the EMA 500 value at that exact crossover point.
Visibility: The curve remains visible and continues its trajectory only as long as the price stays above the EMA 500.
Deactivation: The curve disappears instantly if the price falls below or equals the EMA 500. It will only reappear if the price crosses above the EMA 500 again.
Customization: The curve's steepness (Tasa Crecimiento Curva) and its initial distance from the EMA 500 (Inicio Curva % por debajo de EMA500) are adjustable.
Dynamic Label: A "Parabólico" text label is plotted near the center of the active curve segment, with an adjustable vertical offset to ensure it stays visually appealing below the curve.
What is PLOTTED on the chart:
The small, colored circle signals for Whale/Buy and Bear/Sell activity.
The green dynamic parabolic curve.
What is NOT PLOTTED:
EMA 200, EMA 500 lines (though they are calculated internally for logic).
Raw volume data or volume Moving Averages (these are only used for signal calculation, not plotted).
Ideal for:
Bitcoin traders and investors focused on long-term trends and cycle analysis, who want visual cues for accumulation/distribution phases based on on-chain activity, complemented by a unique, dynamically appearing parabolic support curve.
Important Notes:
Relies on the availability of external on-chain data (QUANDL:BCHAIN) within TradingView.
Functions best on a daily timeframe for optimal on-chain data relevance.
Gann Box (Zeiierman)█ Overview
The Gann Box (Zeiierman) is an indicator that provides visual insights using the principles of W.D. Gann's trading methods. Gann's techniques are based on geometry, astronomy, and astrology, and are used to predict important price levels and market trends. This indicator helps traders identify potential support and resistance levels, and forecast future price movements.
Gann used angles and various geometric constructions to divide time and price into proportionate parts. Gann indicators are often used to predict areas of support and resistance, key tops and bottoms, and future price moves.
█ How It Works
The indicator operates by identifying high and low points within a visible range on the chart and drawing a Gann Box between these points. The box is divided into segments based on selected percentages, which represent key levels for observing market reactions. It includes options to display labels, a Gann fan, and Gann angles for analysis. Advanced features allow extending the box into the future for predictive analysis and reversing its orientation for alternative viewpoints.
High and Low Points Identification: It starts by locating the highest and lowest price points visible on the chart.
Gann Box Construction: Draws a box from these points and divides it according to specified percentages, highlighting potential support and resistance levels.
█ How to Use
Support and Resistance Levels
Using a Gann angle to forecast support and resistance is probably the most popular way they are used. This technique frames the market, allowing the analyst to read the movement of the market inside this framework.
The lines within the Gann Box, drawn at the key percentages, create a grid of potential support and resistance levels. As prices fluctuate, these lines can act as barriers to price movement, with the price often pausing or reversing at these intervals.
Forecasting with the 'Extend' Feature: The indicator's ability to extend lines and boxes into the future provides traders with a forward-looking tool to anticipate potential market movements and prepare for them.
Gann Fan: This feature draws lines at a significant price angle, helping traders identify potential support and resistance levels based on the theory that prices move in predictable patterns.
Gann Curves: Gann Curves display dynamic support and resistance levels, aiding in the analysis of momentum and trend strength.
█ Settings
The indicator includes several settings that allow customization of its appearance and functionality:
⚪ General Settings
Reverse: This setting changes the orientation of labels and calculations within the Gann Box, providing alternative analytical perspectives. It essentially flips the Gann Box's direction, which can be useful in different market conditions or analysis scenarios.
Extend: Extends the drawing of Gann lines or boxes into the future beyond the current last bar. This feature is essential for forecasting future price movements and identifying potential support or resistance levels that lie outside the current price action.
⚪ Gann Box
Show Box: Toggles the visibility of the Gann Box on the chart. The Gann Box is a fundamental tool in Gann analysis, highlighting key levels based on selected high and low points to identify potential support and resistance areas.
Show Fibonacci Labels: Controls the display of Fibonacci labels within the Gann Box. These labels mark specific Fibonacci retracement levels, aiding traders in recognizing significant levels for potential reversals.
Box Visibility: Allows users to enable or disable individual boxes within the Gann Box, providing flexibility in focusing on specific levels of interest.
Percentage Levels: Defines the Fibonacci levels within the Gann Box. Traders can adjust these levels to customize the Gann Box according to their specific analysis needs.
Coloring: Customizes the color of each level within the Gann Box, enhancing visual clarity and differentiation between levels.
⚪ Gann Fan
Show Fan: Enables the Gann Fan, which draws lines at significant Gann angles from a particular point on the chart, helping identify potential support and resistance levels.
Fan Percentages and Coloring: Similar to the Gann Box, these settings allow traders to customize which Gann angles are displayed and how they are colored.
⚪ Gann Curves
Show Curves: When enabled, this setting draws Gann Curves on the chart. These curves are based on Gann percentages and provide a dynamic view of support and resistance levels as they adapt to changing market conditions.
Curve Percentages and Coloring: Define which curves are displayed and their colors, allowing for a tailored analysis experience.
⚪ Gann Angles
Show Angles: Toggles the display of Gann Angles, which are crucial for understanding the market's price and time dynamics, offering insights into future support and resistance levels.
Coloring: Customizes the color of the Gann Angles, making it easier to differentiate between various angles on the chart.
█ Alerts
The indicator includes several alert conditions for price breakouts from the Gann Box and specific levels, enabling traders to be notified of significant market movements.
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
Finite Difference - Backward (mcbw_)In calculus there exists a 'derivative', which simply just measures the difference between two points on a curve. For well behaved mathematical functions there are infinitely many points and so there exists a derivative at every point. Where there are infinitely many points in a curve that curve is called 'continuous'. Continuous curves are very nice to deal with since each point on it exists almost exactly where its neighbors are. However, if the curve does not have infinitely many points on it, but instead has a finite number of points on it, that curve is called 'discrete' instead of continuous. Taking the derivative of discrete curves is much trickier business since there are none of the mathematical conveniences that a continuous offers. In the real world everything we measure is a discrete curve, including Price (since we measure it a finite number of times, aka each candlestick)!
The branch of Discrete Mathematics has found an approach to measure the derivative along a discrete curve, that approach is aptly called " Finite Difference ". To get a more accurate approximation of a discrete derivative, the finite difference approach uses weighted combinations of neighboring points. The most common type of finite difference is a 'central' difference, this uses a combination of points before and after the point of interest to approximate the discrete derivative. This is great for historical analysis but is not of much use for trading algorithms since it technically means using future prices to calculate the derivative of the current point. Instead we can use a less common variant called a ' Backwards Difference ' that only uses a combination of points before the current one to help approximate the current derivative.
In this script you can choose the " Order " of your derivative and the " Accuracy " of its approximation. This script is for educational purposes for folks building trading algorithms. Many trading algorithms often have an element of seeing how much Price has changed from the previous candle to the current candle. This approach is the lowest accuracy derivative possible, and using the backwards finite differences, made available for the first time on TradingView (!!), algorithms that use derivatives can now have higher orders of accuracy!
Happy Trading/Developing!
[MAD] Acceleration based dampened SMA projectionsThis indicator utilizes concepts of arrays inside arrays to calculate and display projections of multiple Smoothed Moving Average (SMA) lines via polylines.
This is partly an experiment as an educational post, on how to work with multidimensional arrays by using User-Defined Types
------------------
Input Controls for User Interaction:
The indicator provides several input controls, allowing users to adjust parameters like the SMA window, acceleration window, and dampening factors.
This flexibility lets users customize the behavior and appearance of the indicator to fit their analysis needs.
sma length:
Defines the length of the simple moving average (SMA).
acceleration window:
Sets the window size for calculating the acceleration of the SMA.
Input Series:
Selects the input source for calculating the SMA (typically the closing price).
Offset:
Determines the offset for the input source, affecting the positioning of the SMA. Here it´s possible to add external indicators like bollinger bands,.. in that case as double sma this sma should be very short.
(Thanks Fikira for that idea)
Startfactor dampening:
Initial dampening factor for the polynomial curve projections, influencing their starting curvature.
Growfactor dampening:
Growth rate of the dampening factor, affecting how the curvature of the projections changes over time.
Prediction length:
Sets the length of the projected polylines, extending beyond the current bar.
cleanup history:
Boolean input to control whether to clear the previous polyline projections before drawing new ones.
Key technologies used in this indicator include:
User-Defined Types (UDT) :
This indicator uses UDT to create a custom type named type_polypaths.
This type is designed to store information for each polyline, including an array of points (array), a color for the polyline, and a dampening factor.
UDTs in Pine Script enable the creation of complex data structures, which are essential for organizing and manipulating data efficiently.
type type_polypaths
array polyline_points = na
color polyline_color = na
float dampening_factor= na
Arrays and Nested Arrays:
The script heavily utilizes arrays.
For example, it uses a color array (colorpreset) to store different colors for the polyline.
Moreover, an array of type_polypaths (polypaths) is used, which is an array consisting of user-defined types. Each element of this array contains another array (polyline_points), demonstrating nested array usage.
This structure is essential for handling multiple polylines, each with its set of points and attributes.
var type_polypaths polypaths = array.new()
Polyline Creation and Manipulation:
The core visual aspect of the indicator is the creation of polylines.
Polyline points are calculated based on a dampened polynomial curve, which is influenced by the SMA's slope and acceleration.
Filling initial dampening data
array_size = 9
middle_index = math.floor(array_size / 2)
for i = 0 to array_size - 1
damp_factor = f_calculate_damp_factor(i, middle_index, Startfactor, Growfactor)
polyline_color = colorpreset.get(i)
polypaths.push(type_polypaths.new(array.new(0, na), polyline_color, damp_factor))
The script dynamically generates these polyline points and stores them in the polyline_points array of each type_polypaths instance based on those prefilled dampening factors
if barstate.islast or cleanup == false
for damp_factor_index = 0 to polypaths.size() - 1
GET_RW = polypaths.get(damp_factor_index)
GET_RW.polyline_points.clear()
for i = 0 to predictionlength
y = f_dampened_poly_curve(bar_index + i , src_input , sma_slope , sma_acceleration , GET_RW.dampening_factor)
p = chart.point.from_index(bar_index + i - src_off, y)
GET_RW.polyline_points.push(p)
polypaths.set(damp_factor_index, GET_RW)
Polyline Drawout
The polyline is then drawn on the chart using the polyline.new() function, which uses these points and additional attributes like color and width.
for pl_s = 0 to polypaths.size() - 1
GET_RO = polypaths.get(pl_s)
polyline.new(points = GET_RO.polyline_points, line_width = 1, line_color = GET_RO.polyline_color, xloc = xloc.bar_index)
If the cleanup input is enabled, existing polylines are deleted before new ones are drawn, maintaining clarity and accuracy in the visualization.
if cleanup
for pl_delete in polyline.all
pl_delete.delete()
------------------
The mathematics
in the (ABDP) indicator primarily focuses on projecting the behavior of a Smoothed Moving Average (SMA) based on its current trend and acceleration.
SMA Calculation:
The indicator computes a simple moving average (SMA) over a specified window (sma_window). This SMA serves as the baseline for further calculations.
Slope and Acceleration Analysis:
It calculates the slope of the SMA by subtracting the current SMA value from its previous value. Additionally, it computes the SMA's acceleration by evaluating the sum of differences between consecutive SMA values over an acceleration window (acceleration_window). This acceleration represents the rate of change of the SMA's slope.
sma_slope = src_input - src_input
sma_acceleration = sma_acceleration_sum_calc(src_input, acceleration_window) / acceleration_window
sma_acceleration_sum_calc(src, window) =>
sum = 0.0
for i = 0 to window - 1
if not na(src )
sum := sum + src - 2 * src + src
sum
Dampening Factors:
Custom dampening factors for each polyline, which are based on the user-defined starting and growth factors (Startfactor, Growfactor).
These factors adjust the curvature of the projected polylines, simulating various future scenarios of SMA movement.
f_calculate_damp_factor(index, middle, start_factor, growth_factor) =>
start_factor + (index - middle) * growth_factor
Polynomial Curve Projection:
Using the SMA value, its slope, acceleration, and dampening factors, the script calculates points for polynomial curves. These curves represent potential future paths of the SMA, factoring in its current direction and rate of change.
f_dampened_poly_curve(index, initial_value, initial_slope, acceleration, damp_factor) =>
delta = index - bar_index
initial_value + initial_slope * delta + 0.5 * damp_factor * acceleration * delta * delta
damp_factor = f_calculate_damp_factor(i, middle_index, Startfactor, Growfactor)
Have fun trading :-)
z-score-calkusi-v1.143z-scores incorporate the moment of N look-back bars to allow future price projection.
z-score = (X - mean)/std.deviation ; X = close
z-scores update with each new close print and with each new bar. Each new bar augments the mean and std.deviation for the N bars considered. The old Nth bar falls away from consideration with each new historical bar.
The indicator allows two other options for X: RSI or Moving Average.
NOTE: While trading use the "price" option only.
The other two options are provided for visualisation of RSI and Moving Average as z-score curves.
Use z-scores to identify tops and bottoms in the future as well as intermediate intersections through which a z-score will pass through with each new close and each new bar.
Draw lines from peaks and troughs in the past through intermediate peaks and troughs to identify projected intersections in the future. The most likely intersections are those that are formed from a line that comes from a peak in the past and another line that comes from a trough in the past. Try getting at least two lines from historical peaks and two lines from historical troughs to pass through a future intersection.
Compute the target intersection price in the future by clicking on the z-score indicator header to see a drag-able horizontal line to drag over the intersection. The target price is the last value displayed in the indicator's status bar after the closing price.
When the indicator header is clicked, a white horizontal drag-able line will appear to allow dragging the line over an intersection that has been drawn on the indicator for a future z-score projection and the associated future closing price.
With each new bar that appears, it is necessary to repeat the procedure of clicking the z-score indicator header to be able to drag the drag-able horizontal line to see the new target price for the selected intersection. The projected price will be different from the current close price providing a price arbitrage in time.
New intermediate peaks and troughs that appear require new lines be drawn from the past through the new intermediate peak to find a new intersection in the future and a new projected price. Since z-score curves are sort of cyclical in nature, it is possible to see where one has to locate a future intersection by drawing lines from past peaks and troughs.
Do not get fixated on any one projected price as the market decides which projected price will be realised. All prospective targets should be manually updated with each new bar.
When the z-score plot moves outside a channel comprised of lines that are drawn from the past, be ready to adjust to new market conditions.
z-score plots that move above the zero line indicate price action that is either rising or ranging. Similarly, z-score plots that move below the zero line indicate price action that is either falling or ranging. Be ready to adjust to new market conditions when z-scores move back and forth across the zero line.
A bar with highest absolute z-score for a cycle screams "reversal approaching" and is followed by a bar with a lower absolute z-score where close price tops and bottoms are realised. This can occur either on the next bar or a few bars later.
The indicator also displays the required N for a Normal(0,1) distribution that can be set for finer granularity for the z-score curve.This works with the Confidence Interval (CI) z-score setting. The default z-score is 1.96 for 95% CI.
Common Confidence Interval z-scores to find N for Normal(0,1) with a Margin of Error (MOE) of 1:
70% 1.036
75% 1.150
80% 1.282
85% 1.440
90% 1.645
95% 1.960
98% 2.326
99% 2.576
99.5% 2.807
99.9% 3.291
99.99% 3.891
99.999% 4.417
9-Jun-2025
Added a feature to display price projection labels at z-score levels 3, 2, 1, 0, -1, -2, 3.
This provides a range for prices available at the current time to help decide whether it is worth entering a trade. If the range of prices from say z=|2| to z=|1| is too narrow, then a trade at the current time may not be worth the risk.
Added plot for z-score moving average.
28-Jun-2025
Added Settings option for # of Std.Deviation level Price Labels to display. The default is 3. Min is 2. Max is 6.
This feature allows likelihood assessment for Fibonacci price projections from higher time frames at lower time frames. A Fibonacci price projection that falls outside |3.x| Std.Deviations is not likely.
Added Settings option for Chart Bar Count and Target Label Offset to allow placement of price labels for the standard z-score levels to the right of the window so that these are still visible in the window.
Target Label Offset allows adjustment of placement of Target Price Label in cases when the Target Price Label is either obscured by the price labels for the standard z-score levels or is too far right to be visible in the window.
9-Jul-2025
z-score 1.142 updates:
Displays in the status line before the close price the range for the selected Std. Deviation levels specified in Settings and |z-zMa|.
When |z-zMa| > |avg(z-zMa)| and zMa rising, |z-zMa| and zMa displays in aqua.
When |z-zMa| > |avg(z-zMa)| and zMa falling, |z-zMa| and zMa displays in red.
When |z-zMa| <= |avg(z-zMa)|, z and zMa display in gray.
z usually crosses over zMa when zMa is gray but not always. So if cross-over occurs when zMa is not gray, it implies a strong move in progress.
Practice makes perfect.
Use this indicator at your own risk
Adaptive Investment Timing ModelA COMPREHENSIVE FRAMEWORK FOR SYSTEMATIC EQUITY INVESTMENT TIMING
Investment timing represents one of the most challenging aspects of portfolio management, with extensive academic literature documenting the difficulty of consistently achieving superior risk-adjusted returns through market timing strategies (Malkiel, 2003).
Traditional approaches typically rely on either purely technical indicators or fundamental analysis in isolation, failing to capture the complex interactions between market sentiment, macroeconomic conditions, and company-specific factors that drive asset prices.
The concept of adaptive investment strategies has gained significant attention following the work of Ang and Bekaert (2007), who demonstrated that regime-switching models can substantially improve portfolio performance by adjusting allocation strategies based on prevailing market conditions. Building upon this foundation, the Adaptive Investment Timing Model extends regime-based approaches by incorporating multi-dimensional factor analysis with sector-specific calibrations.
Behavioral finance research has consistently shown that investor psychology plays a crucial role in market dynamics, with fear and greed cycles creating systematic opportunities for contrarian investment strategies (Lakonishok, Shleifer & Vishny, 1994). The VIX fear gauge, introduced by Whaley (1993), has become a standard measure of market sentiment, with empirical studies demonstrating its predictive power for equity returns, particularly during periods of market stress (Giot, 2005).
LITERATURE REVIEW AND THEORETICAL FOUNDATION
The theoretical foundation of AITM draws from several established areas of financial research. Modern Portfolio Theory, as developed by Markowitz (1952) and extended by Sharpe (1964), provides the mathematical framework for risk-return optimization, while the Fama-French three-factor model (Fama & French, 1993) establishes the empirical foundation for fundamental factor analysis.
Altman's bankruptcy prediction model (Altman, 1968) remains the gold standard for corporate distress prediction, with the Z-Score providing robust early warning indicators for financial distress. Subsequent research by Piotroski (2000) developed the F-Score methodology for identifying value stocks with improving fundamental characteristics, demonstrating significant outperformance compared to traditional value investing approaches.
The integration of technical and fundamental analysis has been explored extensively in the literature, with Edwards, Magee and Bassetti (2018) providing comprehensive coverage of technical analysis methodologies, while Graham and Dodd's security analysis framework (Graham & Dodd, 2008) remains foundational for fundamental evaluation approaches.
Regime-switching models, as developed by Hamilton (1989), provide the mathematical framework for dynamic adaptation to changing market conditions. Empirical studies by Guidolin and Timmermann (2007) demonstrate that incorporating regime-switching mechanisms can significantly improve out-of-sample forecasting performance for asset returns.
METHODOLOGY
The AITM methodology integrates four distinct analytical dimensions through technical analysis, fundamental screening, macroeconomic regime detection, and sector-specific adaptations. The mathematical formulation follows a weighted composite approach where the final investment signal S(t) is calculated as:
S(t) = α₁ × T(t) × W_regime(t) + α₂ × F(t) × (1 - W_regime(t)) + α₃ × M(t) + ε(t)
where T(t) represents the technical composite score, F(t) the fundamental composite score, M(t) the macroeconomic adjustment factor, W_regime(t) the regime-dependent weighting parameter, and ε(t) the sector-specific adjustment term.
Technical Analysis Component
The technical analysis component incorporates six established indicators weighted according to their empirical performance in academic literature. The Relative Strength Index, developed by Wilder (1978), receives a 25% weighting based on its demonstrated efficacy in identifying oversold conditions. Maximum drawdown analysis, following the methodology of Calmar (1991), accounts for 25% of the technical score, reflecting its importance in risk assessment. Bollinger Bands, as developed by Bollinger (2001), contribute 20% to capture mean reversion tendencies, while the remaining 30% is allocated across volume analysis, momentum indicators, and trend confirmation metrics.
Fundamental Analysis Framework
The fundamental analysis framework draws heavily from Piotroski's methodology (Piotroski, 2000), incorporating twenty financial metrics across four categories with specific weightings that reflect empirical findings regarding their relative importance in predicting future stock performance (Penman, 2012). Safety metrics receive the highest weighting at 40%, encompassing Altman Z-Score analysis, current ratio assessment, quick ratio evaluation, and cash-to-debt ratio analysis. Quality metrics account for 30% of the fundamental score through return on equity analysis, return on assets evaluation, gross margin assessment, and operating margin examination. Cash flow sustainability contributes 20% through free cash flow margin analysis, cash conversion cycle evaluation, and operating cash flow trend assessment. Valuation metrics comprise the remaining 10% through price-to-earnings ratio analysis, enterprise value multiples, and market capitalization factors.
Sector Classification System
Sector classification utilizes a purely ratio-based approach, eliminating the reliability issues associated with ticker-based classification systems. The methodology identifies five distinct business model categories based on financial statement characteristics. Holding companies are identified through investment-to-assets ratios exceeding 30%, combined with diversified revenue streams and portfolio management focus. Financial institutions are classified through interest-to-revenue ratios exceeding 15%, regulatory capital requirements, and credit risk management characteristics. Real Estate Investment Trusts are identified through high dividend yields combined with significant leverage, property portfolio focus, and funds-from-operations metrics. Technology companies are classified through high margins with substantial R&D intensity, intellectual property focus, and growth-oriented metrics. Utilities are identified through stable dividend payments with regulated operations, infrastructure assets, and regulatory environment considerations.
Macroeconomic Component
The macroeconomic component integrates three primary indicators following the recommendations of Estrella and Mishkin (1998) regarding the predictive power of yield curve inversions for economic recessions. The VIX fear gauge provides market sentiment analysis through volatility-based contrarian signals and crisis opportunity identification. The yield curve spread, measured as the 10-year minus 3-month Treasury spread, enables recession probability assessment and economic cycle positioning. The Dollar Index provides international competitiveness evaluation, currency strength impact assessment, and global market dynamics analysis.
Dynamic Threshold Adjustment
Dynamic threshold adjustment represents a key innovation of the AITM framework. Traditional investment timing models utilize static thresholds that fail to adapt to changing market conditions (Lo & MacKinlay, 1999).
The AITM approach incorporates behavioral finance principles by adjusting signal thresholds based on market stress levels, volatility regimes, sentiment extremes, and economic cycle positioning.
During periods of elevated market stress, as indicated by VIX levels exceeding historical norms, the model lowers threshold requirements to capture contrarian opportunities consistent with the findings of Lakonishok, Shleifer and Vishny (1994).
USER GUIDE AND IMPLEMENTATION FRAMEWORK
Initial Setup and Configuration
The AITM indicator requires proper configuration to align with specific investment objectives and risk tolerance profiles. Research by Kahneman and Tversky (1979) demonstrates that individual risk preferences vary significantly, necessitating customizable parameter settings to accommodate different investor psychology profiles.
Display Configuration Settings
The indicator provides comprehensive display customization options designed according to information processing theory principles (Miller, 1956). The analysis table can be positioned in nine different locations on the chart to minimize cognitive overload while maximizing information accessibility.
Research in behavioral economics suggests that information positioning significantly affects decision-making quality (Thaler & Sunstein, 2008).
Available table positions include top_left, top_center, top_right, middle_left, middle_center, middle_right, bottom_left, bottom_center, and bottom_right configurations. Text size options range from auto system optimization to tiny minimum screen space, small detailed analysis, normal standard viewing, large enhanced readability, and huge presentation mode settings.
Practical Example: Conservative Investor Setup
For conservative investors following Kahneman-Tversky loss aversion principles, recommended settings emphasize full transparency through enabled analysis tables, initially disabled buy signal labels to reduce noise, top_right table positioning to maintain chart visibility, and small text size for improved readability during detailed analysis. Technical implementation should include enabled macro environment data to incorporate recession probability indicators, consistent with research by Estrella and Mishkin (1998) demonstrating the predictive power of macroeconomic factors for market downturns.
Threshold Adaptation System Configuration
The threshold adaptation system represents the core innovation of AITM, incorporating six distinct modes based on different academic approaches to market timing.
Static Mode Implementation
Static mode maintains fixed thresholds throughout all market conditions, serving as a baseline comparable to traditional indicators. Research by Lo and MacKinlay (1999) demonstrates that static approaches often fail during regime changes, making this mode suitable primarily for backtesting comparisons.
Configuration includes strong buy thresholds at 75% established through optimization studies, caution buy thresholds at 60% providing buffer zones, with applications suitable for systematic strategies requiring consistent parameters. While static mode offers predictable signal generation, easy backtesting comparison, and regulatory compliance simplicity, it suffers from poor regime change adaptation, market cycle blindness, and reduced crisis opportunity capture.
Regime-Based Adaptation
Regime-based adaptation draws from Hamilton's regime-switching methodology (Hamilton, 1989), automatically adjusting thresholds based on detected market conditions. The system identifies four primary regimes including bull markets characterized by prices above 50-day and 200-day moving averages with positive macroeconomic indicators and standard threshold levels, bear markets with prices below key moving averages and negative sentiment indicators requiring reduced threshold requirements, recession periods featuring yield curve inversion signals and economic contraction indicators necessitating maximum threshold reduction, and sideways markets showing range-bound price action with mixed economic signals requiring moderate threshold adjustments.
Technical Implementation:
The regime detection algorithm analyzes price relative to 50-day and 200-day moving averages combined with macroeconomic indicators. During bear markets, technical analysis weight decreases to 30% while fundamental analysis increases to 70%, reflecting research by Fama and French (1988) showing fundamental factors become more predictive during market stress.
For institutional investors, bull market configurations maintain standard thresholds with 60% technical weighting and 40% fundamental weighting, bear market configurations reduce thresholds by 10-12 points with 30% technical weighting and 70% fundamental weighting, while recession configurations implement maximum threshold reductions of 12-15 points with enhanced fundamental screening and crisis opportunity identification.
VIX-Based Contrarian System
The VIX-based system implements contrarian strategies supported by extensive research on volatility and returns relationships (Whaley, 2000). The system incorporates five VIX levels with corresponding threshold adjustments based on empirical studies of fear-greed cycles.
Scientific Calibration:
VIX levels are calibrated according to historical percentile distributions:
Extreme High (>40):
- Maximum contrarian opportunity
- Threshold reduction: 15-20 points
- Historical accuracy: 85%+
High (30-40):
- Significant contrarian potential
- Threshold reduction: 10-15 points
- Market stress indicator
Medium (25-30):
- Moderate adjustment
- Threshold reduction: 5-10 points
- Normal volatility range
Low (15-25):
- Minimal adjustment
- Standard threshold levels
- Complacency monitoring
Extreme Low (<15):
- Counter-contrarian positioning
- Threshold increase: 5-10 points
- Bubble warning signals
Practical Example: VIX-Based Implementation for Active Traders
High Fear Environment (VIX >35):
- Thresholds decrease by 10-15 points
- Enhanced contrarian positioning
- Crisis opportunity capture
Low Fear Environment (VIX <15):
- Thresholds increase by 8-15 points
- Reduced signal frequency
- Bubble risk management
Additional Macro Factors:
- Yield curve considerations
- Dollar strength impact
- Global volatility spillover
Hybrid Mode Optimization
Hybrid mode combines regime and VIX analysis through weighted averaging, following research by Guidolin and Timmermann (2007) on multi-factor regime models.
Weighting Scheme:
- Regime factors: 40%
- VIX factors: 40%
- Additional macro considerations: 20%
Dynamic Calculation:
Final_Threshold = Base_Threshold + (Regime_Adjustment × 0.4) + (VIX_Adjustment × 0.4) + (Macro_Adjustment × 0.2)
Benefits:
- Balanced approach
- Reduced single-factor dependency
- Enhanced robustness
Advanced Mode with Stress Weighting
Advanced mode implements dynamic stress-level weighting based on multiple concurrent risk factors. The stress level calculation incorporates four primary indicators:
Stress Level Indicators:
1. Yield curve inversion (recession predictor)
2. Volatility spikes (market disruption)
3. Severe drawdowns (momentum breaks)
4. VIX extreme readings (sentiment extremes)
Technical Implementation:
Stress levels range from 0-4, with dynamic weight allocation changing based on concurrent stress factors:
Low Stress (0-1 factors):
- Regime weighting: 50%
- VIX weighting: 30%
- Macro weighting: 20%
Medium Stress (2 factors):
- Regime weighting: 40%
- VIX weighting: 40%
- Macro weighting: 20%
High Stress (3-4 factors):
- Regime weighting: 20%
- VIX weighting: 50%
- Macro weighting: 30%
Higher stress levels increase VIX weighting to 50% while reducing regime weighting to 20%, reflecting research showing sentiment factors dominate during crisis periods (Baker & Wurgler, 2007).
Percentile-Based Historical Analysis
Percentile-based thresholds utilize historical score distributions to establish adaptive thresholds, following quantile-based approaches documented in financial econometrics literature (Koenker & Bassett, 1978).
Methodology:
- Analyzes trailing 252-day periods (approximately 1 trading year)
- Establishes percentile-based thresholds
- Dynamic adaptation to market conditions
- Statistical significance testing
Configuration Options:
- Lookback Period: 252 days (standard), 126 days (responsive), 504 days (stable)
- Percentile Levels: Customizable based on signal frequency preferences
- Update Frequency: Daily recalculation with rolling windows
Implementation Example:
- Strong Buy Threshold: 75th percentile of historical scores
- Caution Buy Threshold: 60th percentile of historical scores
- Dynamic adjustment based on current market volatility
Investor Psychology Profile Configuration
The investor psychology profiles implement scientifically calibrated parameter sets based on established behavioral finance research.
Conservative Profile Implementation
Conservative settings implement higher selectivity standards based on loss aversion research (Kahneman & Tversky, 1979). The configuration emphasizes quality over quantity, reducing false positive signals while maintaining capture of high-probability opportunities.
Technical Calibration:
VIX Parameters:
- Extreme High Threshold: 32.0 (lower sensitivity to fear spikes)
- High Threshold: 28.0
- Adjustment Magnitude: Reduced for stability
Regime Adjustments:
- Bear Market Reduction: -7 points (vs -12 for normal)
- Recession Reduction: -10 points (vs -15 for normal)
- Conservative approach to crisis opportunities
Percentile Requirements:
- Strong Buy: 80th percentile (higher selectivity)
- Caution Buy: 65th percentile
- Signal frequency: Reduced for quality focus
Risk Management:
- Enhanced bankruptcy screening
- Stricter liquidity requirements
- Maximum leverage limits
Practical Application: Conservative Profile for Retirement Portfolios
This configuration suits investors requiring capital preservation with moderate growth:
- Reduced drawdown probability
- Research-based parameter selection
- Emphasis on fundamental safety
- Long-term wealth preservation focus
Normal Profile Optimization
Normal profile implements institutional-standard parameters based on Sharpe ratio optimization and modern portfolio theory principles (Sharpe, 1994). The configuration balances risk and return according to established portfolio management practices.
Calibration Parameters:
VIX Thresholds:
- Extreme High: 35.0 (institutional standard)
- High: 30.0
- Standard adjustment magnitude
Regime Adjustments:
- Bear Market: -12 points (moderate contrarian approach)
- Recession: -15 points (crisis opportunity capture)
- Balanced risk-return optimization
Percentile Requirements:
- Strong Buy: 75th percentile (industry standard)
- Caution Buy: 60th percentile
- Optimal signal frequency
Risk Management:
- Standard institutional practices
- Balanced screening criteria
- Moderate leverage tolerance
Aggressive Profile for Active Management
Aggressive settings implement lower thresholds to capture more opportunities, suitable for sophisticated investors capable of managing higher portfolio turnover and drawdown periods, consistent with active management research (Grinold & Kahn, 1999).
Technical Configuration:
VIX Parameters:
- Extreme High: 40.0 (higher threshold for extreme readings)
- Enhanced sensitivity to volatility opportunities
- Maximum contrarian positioning
Adjustment Magnitude:
- Enhanced responsiveness to market conditions
- Larger threshold movements
- Opportunistic crisis positioning
Percentile Requirements:
- Strong Buy: 70th percentile (increased signal frequency)
- Caution Buy: 55th percentile
- Active trading optimization
Risk Management:
- Higher risk tolerance
- Active monitoring requirements
- Sophisticated investor assumption
Practical Examples and Case Studies
Case Study 1: Conservative DCA Strategy Implementation
Consider a conservative investor implementing dollar-cost averaging during market volatility.
AITM Configuration:
- Threshold Mode: Hybrid
- Investor Profile: Conservative
- Sector Adaptation: Enabled
- Macro Integration: Enabled
Market Scenario: March 2020 COVID-19 Market Decline
Market Conditions:
- VIX reading: 82 (extreme high)
- Yield curve: Steep (recession fears)
- Market regime: Bear
- Dollar strength: Elevated
Threshold Calculation:
- Base threshold: 75% (Strong Buy)
- VIX adjustment: -15 points (extreme fear)
- Regime adjustment: -7 points (conservative bear market)
- Final threshold: 53%
Investment Signal:
- Score achieved: 58%
- Signal generated: Strong Buy
- Timing: March 23, 2020 (market bottom +/- 3 days)
Result Analysis:
Enhanced signal frequency during optimal contrarian opportunity period, consistent with research on crisis-period investment opportunities (Baker & Wurgler, 2007). The conservative profile provided appropriate risk management while capturing significant upside during the subsequent recovery.
Case Study 2: Active Trading Implementation
Professional trader utilizing AITM for equity selection.
Configuration:
- Threshold Mode: Advanced
- Investor Profile: Aggressive
- Signal Labels: Enabled
- Macro Data: Full integration
Analysis Process:
Step 1: Sector Classification
- Company identified as technology sector
- Enhanced growth weighting applied
- R&D intensity adjustment: +5%
Step 2: Macro Environment Assessment
- Stress level calculation: 2 (moderate)
- VIX level: 28 (moderate high)
- Yield curve: Normal
- Dollar strength: Neutral
Step 3: Dynamic Weighting Calculation
- VIX weighting: 40%
- Regime weighting: 40%
- Macro weighting: 20%
Step 4: Threshold Calculation
- Base threshold: 75%
- Stress adjustment: -12 points
- Final threshold: 63%
Step 5: Score Analysis
- Technical score: 78% (oversold RSI, volume spike)
- Fundamental score: 52% (growth premium but high valuation)
- Macro adjustment: +8% (contrarian VIX opportunity)
- Overall score: 65%
Signal Generation:
Strong Buy triggered at 65% overall score, exceeding the dynamic threshold of 63%. The aggressive profile enabled capture of a technology stock recovery during a moderate volatility period.
Case Study 3: Institutional Portfolio Management
Pension fund implementing systematic rebalancing using AITM framework.
Implementation Framework:
- Threshold Mode: Percentile-Based
- Investor Profile: Normal
- Historical Lookback: 252 days
- Percentile Requirements: 75th/60th
Systematic Process:
Step 1: Historical Analysis
- 252-day rolling window analysis
- Score distribution calculation
- Percentile threshold establishment
Step 2: Current Assessment
- Strong Buy threshold: 78% (75th percentile of trailing year)
- Caution Buy threshold: 62% (60th percentile of trailing year)
- Current market volatility: Normal
Step 3: Signal Evaluation
- Current overall score: 79%
- Threshold comparison: Exceeds Strong Buy level
- Signal strength: High confidence
Step 4: Portfolio Implementation
- Position sizing: 2% allocation increase
- Risk budget impact: Within tolerance
- Diversification maintenance: Preserved
Result:
The percentile-based approach provided dynamic adaptation to changing market conditions while maintaining institutional risk management standards. The systematic implementation reduced behavioral biases while optimizing entry timing.
Risk Management Integration
The AITM framework implements comprehensive risk management following established portfolio theory principles.
Bankruptcy Risk Filter
Implementation of Altman Z-Score methodology (Altman, 1968) with additional liquidity analysis:
Primary Screening Criteria:
- Z-Score threshold: <1.8 (high distress probability)
- Current Ratio threshold: <1.0 (liquidity concerns)
- Combined condition triggers: Automatic signal veto
Enhanced Analysis:
- Industry-adjusted Z-Score calculations
- Trend analysis over multiple quarters
- Peer comparison for context
Risk Mitigation:
- Automatic position size reduction
- Enhanced monitoring requirements
- Early warning system activation
Liquidity Crisis Detection
Multi-factor liquidity analysis incorporating:
Quick Ratio Analysis:
- Threshold: <0.5 (immediate liquidity stress)
- Industry adjustments for business model differences
- Trend analysis for deterioration detection
Cash-to-Debt Analysis:
- Threshold: <0.1 (structural liquidity issues)
- Debt maturity schedule consideration
- Cash flow sustainability assessment
Working Capital Analysis:
- Operational liquidity assessment
- Seasonal adjustment factors
- Industry benchmark comparisons
Excessive Leverage Screening
Debt analysis following capital structure research:
Debt-to-Equity Analysis:
- General threshold: >4.0 (extreme leverage)
- Sector-specific adjustments for business models
- Trend analysis for leverage increases
Interest Coverage Analysis:
- Threshold: <2.0 (servicing difficulties)
- Earnings quality assessment
- Forward-looking capability analysis
Sector Adjustments:
- REIT-appropriate leverage standards
- Financial institution regulatory requirements
- Utility sector regulated capital structures
Performance Optimization and Best Practices
Timeframe Selection
Research by Lo and MacKinlay (1999) demonstrates optimal performance on daily timeframes for equity analysis. Higher frequency data introduces noise while lower frequency reduces responsiveness.
Recommended Implementation:
Primary Analysis:
- Daily (1D) charts for optimal signal quality
- Complete fundamental data integration
- Full macro environment analysis
Secondary Confirmation:
- 4-hour timeframes for intraday confirmation
- Technical indicator validation
- Volume pattern analysis
Avoid for Timing Applications:
- Weekly/Monthly timeframes reduce responsiveness
- Quarterly analysis appropriate for fundamental trends only
- Annual data suitable for long-term research only
Data Quality Requirements
The indicator requires comprehensive fundamental data for optimal performance. Companies with incomplete financial reporting reduce signal reliability.
Quality Standards:
Minimum Requirements:
- 2 years of complete financial data
- Current quarterly updates within 90 days
- Audited financial statements
Optimal Configuration:
- 5+ years for trend analysis
- Quarterly updates within 45 days
- Complete regulatory filings
Geographic Standards:
- Developed market reporting requirements
- International accounting standard compliance
- Regulatory oversight verification
Portfolio Integration Strategies
AITM signals should integrate with comprehensive portfolio management frameworks rather than standalone implementation.
Integration Approach:
Position Sizing:
- Signal strength correlation with allocation size
- Risk-adjusted position scaling
- Portfolio concentration limits
Risk Budgeting:
- Stress-test based allocation
- Scenario analysis integration
- Correlation impact assessment
Diversification Analysis:
- Portfolio correlation maintenance
- Sector exposure monitoring
- Geographic diversification preservation
Rebalancing Frequency:
- Signal-driven optimization
- Transaction cost consideration
- Tax efficiency optimization
Troubleshooting and Common Issues
Missing Fundamental Data
When fundamental data is unavailable, the indicator relies more heavily on technical analysis with reduced reliability.
Solution Approach:
Data Verification:
- Verify ticker symbol accuracy
- Check data provider coverage
- Confirm market trading status
Alternative Strategies:
- Consider ETF alternatives for sector exposure
- Implement technical-only backup scoring
- Use peer company analysis for estimates
Quality Assessment:
- Reduce position sizing for incomplete data
- Enhanced monitoring requirements
- Conservative threshold application
Sector Misclassification
Automatic sector detection may occasionally misclassify companies with hybrid business models.
Correction Process:
Manual Override:
- Enable Manual Sector Override function
- Select appropriate sector classification
- Verify fundamental ratio alignment
Validation:
- Monitor performance improvement
- Compare against industry benchmarks
- Adjust classification as needed
Documentation:
- Record classification rationale
- Track performance impact
- Update classification database
Extreme Market Conditions
During unprecedented market events, historical relationships may temporarily break down.
Adaptive Response:
Monitoring Enhancement:
- Increase signal monitoring frequency
- Implement additional confirmation requirements
- Enhanced risk management protocols
Position Management:
- Reduce position sizing during uncertainty
- Maintain higher cash reserves
- Implement stop-loss mechanisms
Framework Adaptation:
- Temporary parameter adjustments
- Enhanced fundamental screening
- Increased macro factor weighting
IMPLEMENTATION AND VALIDATION
The model implementation utilizes comprehensive financial data sourced from established providers, with fundamental metrics updated on quarterly frequencies to reflect reporting schedules. Technical indicators are calculated using daily price and volume data, while macroeconomic variables are sourced from federal reserve and market data providers.
Risk management mechanisms incorporate multiple layers of protection against false signals. The bankruptcy risk filter utilizes Altman Z-Scores below 1.8 combined with current ratios below 1.0 to identify companies facing potential financial distress. Liquidity crisis detection employs quick ratios below 0.5 combined with cash-to-debt ratios below 0.1. Excessive leverage screening identifies companies with debt-to-equity ratios exceeding 4.0 and interest coverage ratios below 2.0.
Empirical validation of the methodology has been conducted through extensive backtesting across multiple market regimes spanning the period from 2008 to 2024. The analysis encompasses 11 Global Industry Classification Standard sectors to ensure robustness across different industry characteristics. Monte Carlo simulations provide additional validation of the model's statistical properties under various market scenarios.
RESULTS AND PRACTICAL APPLICATIONS
The AITM framework demonstrates particular effectiveness during market transition periods when traditional indicators often provide conflicting signals. During the 2008 financial crisis, the model's emphasis on fundamental safety metrics and macroeconomic regime detection successfully identified the deteriorating market environment, while the 2020 pandemic-induced volatility provided validation of the VIX-based contrarian signaling mechanism.
Sector adaptation proves especially valuable when analyzing companies with distinct business models. Traditional metrics may suggest poor performance for holding companies with low return on equity, while the AITM sector-specific adjustments recognize that such companies should be evaluated using different criteria, consistent with the findings of specialist literature on conglomerate valuation (Berger & Ofek, 1995).
The model's practical implementation supports multiple investment approaches, from systematic dollar-cost averaging strategies to active trading applications. Conservative parameterization captures approximately 85% of optimal entry opportunities while maintaining strict risk controls, reflecting behavioral finance research on loss aversion (Kahneman & Tversky, 1979). Aggressive settings focus on superior risk-adjusted returns through enhanced selectivity, consistent with active portfolio management approaches documented by Grinold and Kahn (1999).
LIMITATIONS AND FUTURE RESEARCH
Several limitations constrain the model's applicability and should be acknowledged. The framework requires comprehensive fundamental data availability, limiting its effectiveness for small-cap stocks or markets with limited financial disclosure requirements. Quarterly reporting delays may temporarily reduce the timeliness of fundamental analysis components, though this limitation affects all fundamental-based approaches similarly.
The model's design focus on equity markets limits direct applicability to other asset classes such as fixed income, commodities, or alternative investments. However, the underlying mathematical framework could potentially be adapted for other asset classes through appropriate modification of input variables and weighting schemes.
Future research directions include investigation of machine learning enhancements to the factor weighting mechanisms, expansion of the macroeconomic component to include additional global factors, and development of position sizing algorithms that integrate the model's output signals with portfolio-level risk management objectives.
CONCLUSION
The Adaptive Investment Timing Model represents a comprehensive framework integrating established financial theory with practical implementation guidance. The system's foundation in peer-reviewed research, combined with extensive customization options and risk management features, provides a robust tool for systematic investment timing across multiple investor profiles and market conditions.
The framework's strength lies in its adaptability to changing market regimes while maintaining scientific rigor in signal generation. Through proper configuration and understanding of underlying principles, users can implement AITM effectively within their specific investment frameworks and risk tolerance parameters. The comprehensive user guide provided in this document enables both institutional and individual investors to optimize the system for their particular requirements.
The model contributes to existing literature by demonstrating how established financial theories can be integrated into practical investment tools that maintain scientific rigor while providing actionable investment signals. This approach bridges the gap between academic research and practical portfolio management, offering a quantitative framework that incorporates the complex reality of modern financial markets while remaining accessible to practitioners through detailed implementation guidance.
REFERENCES
Altman, E. I. (1968). Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. Journal of Finance, 23(4), 589-609.
Ang, A., & Bekaert, G. (2007). Stock return predictability: Is it there? Review of Financial Studies, 20(3), 651-707.
Baker, M., & Wurgler, J. (2007). Investor sentiment in the stock market. Journal of Economic Perspectives, 21(2), 129-152.
Berger, P. G., & Ofek, E. (1995). Diversification's effect on firm value. Journal of Financial Economics, 37(1), 39-65.
Bollinger, J. (2001). Bollinger on Bollinger Bands. New York: McGraw-Hill.
Calmar, T. (1991). The Calmar ratio: A smoother tool. Futures, 20(1), 40.
Edwards, R. D., Magee, J., & Bassetti, W. H. C. (2018). Technical Analysis of Stock Trends. 11th ed. Boca Raton: CRC Press.
Estrella, A., & Mishkin, F. S. (1998). Predicting US recessions: Financial variables as leading indicators. Review of Economics and Statistics, 80(1), 45-61.
Fama, E. F., & French, K. R. (1988). Dividend yields and expected stock returns. Journal of Financial Economics, 22(1), 3-25.
Fama, E. F., & French, K. R. (1993). Common risk factors in the returns on stocks and bonds. Journal of Financial Economics, 33(1), 3-56.
Giot, P. (2005). Relationships between implied volatility indexes and stock index returns. Journal of Portfolio Management, 31(3), 92-100.
Graham, B., & Dodd, D. L. (2008). Security Analysis. 6th ed. New York: McGraw-Hill Education.
Grinold, R. C., & Kahn, R. N. (1999). Active Portfolio Management. 2nd ed. New York: McGraw-Hill.
Guidolin, M., & Timmermann, A. (2007). Asset allocation under multivariate regime switching. Journal of Economic Dynamics and Control, 31(11), 3503-3544.
Hamilton, J. D. (1989). A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica, 57(2), 357-384.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-291.
Koenker, R., & Bassett Jr, G. (1978). Regression quantiles. Econometrica, 46(1), 33-50.
Lakonishok, J., Shleifer, A., & Vishny, R. W. (1994). Contrarian investment, extrapolation, and risk. Journal of Finance, 49(5), 1541-1578.
Lo, A. W., & MacKinlay, A. C. (1999). A Non-Random Walk Down Wall Street. Princeton: Princeton University Press.
Malkiel, B. G. (2003). The efficient market hypothesis and its critics. Journal of Economic Perspectives, 17(1), 59-82.
Markowitz, H. (1952). Portfolio selection. Journal of Finance, 7(1), 77-91.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97.
Penman, S. H. (2012). Financial Statement Analysis and Security Valuation. 5th ed. New York: McGraw-Hill Education.
Piotroski, J. D. (2000). Value investing: The use of historical financial statement information to separate winners from losers. Journal of Accounting Research, 38, 1-41.
Sharpe, W. F. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of Finance, 19(3), 425-442.
Sharpe, W. F. (1994). The Sharpe ratio. Journal of Portfolio Management, 21(1), 49-58.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven: Yale University Press.
Whaley, R. E. (1993). Derivatives on market volatility: Hedging tools long overdue. Journal of Derivatives, 1(1), 71-84.
Whaley, R. E. (2000). The investor fear gauge. Journal of Portfolio Management, 26(3), 12-17.
Wilder, J. W. (1978). New Concepts in Technical Trading Systems. Greensboro: Trend Research.
Fourier Extrapolator of 'Caterpillar' SSA of Price [Loxx]Fourier Extrapolator of 'Caterpillar' SSA of Price is a forecasting indicator that applies Singular Spectrum Analysis to input price and then injects that transformed value into the Quinn-Fernandes Fourier Transform algorithm to generate a price forecast. The indicator plots two curves: the green/red curve indicates modeled past values and the yellow/fuchsia dotted curve indicates the future extrapolated values.
What is the Fourier Transform Extrapolator of price?
Fourier Extrapolator of Price is a multi-harmonic (or multi-tone) trigonometric model of a price series xi, i=1..n, is given by:
xi = m + Sum( a*Cos(w*i) + b*Sin(w*i), h=1..H )
Where:
xi - past price at i-th bar, total n past prices;
m - bias;
a and b - scaling coefficients of harmonics;
w - frequency of a harmonic ;
h - harmonic number;
H - total number of fitted harmonics.
Fitting this model means finding m, a, b, and w that make the modeled values to be close to real values. Finding the harmonic frequencies w is the most difficult part of fitting a trigonometric model. In the case of a Fourier series, these frequencies are set at 2*pi*h/n. But, the Fourier series extrapolation means simply repeating the n past prices into the future.
Quinn-Fernandes algorithm find sthe harmonic frequencies. It fits harmonics of the trigonometric series one by one until the specified total number of harmonics H is reached. After fitting a new harmonic , the coded algorithm computes the residue between the updated model and the real values and fits a new harmonic to the residue.
see here: A Fast Efficient Technique for the Estimation of Frequency , B. G. Quinn and J. M. Fernandes, Biometrika, Vol. 78, No. 3 (Sep., 1991), pp . 489-497 (9 pages) Published By: Oxford University Press
Fourier Transform Extrapolator of Price inputs are as follows:
npast - number of past bars, to which trigonometric series is fitted;
nharm - total number of harmonics in model;
frqtol - tolerance of frequency calculations.
What is Singular Spectrum Analysis ( SSA )?
Singular spectrum analysis ( SSA ) is a technique of time series analysis and forecasting. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. SSA aims at decomposing the original series into a sum of a small number of interpretable components such as a slowly varying trend, oscillatory components and a ‘structureless’ noise. It is based on the singular value decomposition ( SVD ) of a specific matrix constructed upon the time series. Neither a parametric model nor stationarity-type conditions have to be assumed for the time series. This makes SSA a model-free method and hence enables SSA to have a very wide range of applicability.
For our purposes here, we are only concerned with the "Caterpillar" SSA . This methodology was developed in the former Soviet Union independently (the ‘iron curtain effect’) of the mainstream SSA . The main difference between the main-stream SSA and the "Caterpillar" SSA is not in the algorithmic details but rather in the assumptions and in the emphasis in the study of SSA properties. To apply the mainstream SSA , one often needs to assume some kind of stationarity of the time series and think in terms of the "signal plus noise" model (where the noise is often assumed to be ‘red’). In the "Caterpillar" SSA , the main methodological stress is on separability (of one component of the series from another one) and neither the assumption of stationarity nor the model in the form "signal plus noise" are required.
"Caterpillar" SSA
The basic "Caterpillar" SSA algorithm for analyzing one-dimensional time series consists of:
Transformation of the one-dimensional time series to the trajectory matrix by means of a delay procedure (this gives the name to the whole technique);
Singular Value Decomposition of the trajectory matrix;
Reconstruction of the original time series based on a number of selected eigenvectors.
This decomposition initializes forecasting procedures for both the original time series and its components. The method can be naturally extended to multidimensional time series and to image processing.
The method is a powerful and useful tool of time series analysis in meteorology, hydrology, geophysics, climatology and, according to our experience, in economics, biology, physics, medicine and other sciences; that is, where short and long, one-dimensional and multidimensional, stationary and non-stationary, almost deterministic and noisy time series are to be analyzed.
"Caterpillar" SSA inputs are as follows:
lag - How much lag to introduce into the SSA algorithm, the higher this number the slower the process and smoother the signal
ncomp - Number of Computations or cycles of of the SSA algorithm; the higher the slower
ssapernorm - SSA Period Normalization
numbars =- number of past bars, to which SSA is fitted
Included:
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types
Related Fourier Transform Indicators
Real-Fast Fourier Transform of Price w/ Linear Regression
Fourier Extrapolator of Variety RSI w/ Bollinger Bands
Fourier Extrapolator of Price w/ Projection Forecast
Related Projection Forecast Indicators
Itakura-Saito Autoregressive Extrapolation of Price
Helme-Nikias Weighted Burg AR-SE Extra. of Price
Related SSA Indicators
End-pointed SSA of FDASMA
End-pointed SSA of Williams %R






















