COT INDEX: THE HIDDEN INTELLIGENCE IN FUTURES MARKETS
What if you could see what the smartest players in the futures markets are doing before the crowd catches on? While retail traders chase momentum indicators and moving averages, obsess over Japanese candlestick patterns, and debate whether the RSI should be set to fourteen or twenty-one periods, institutional players leave footprints in the sand through their mandatory reporting to the Commodity Futures Trading Commission. These footprints, published weekly in the Commitments of Traders reports, have been hiding in plain sight for decades, available to anyone with an internet connection, yet remarkably few traders understand how to interpret them correctly. The COT Index indicator transforms this raw institutional positioning data into actionable trading signals, bringing Wall Street intelligence to your trading screen without requiring expensive Bloomberg terminals or insider connections.
The uncomfortable truth is this: Most retail traders operate in a binary world. Long or short. Buy or sell. They apply technical analysis to individual positions, constrained by limited capital that forces them to concentrate risk in single directional bets. Meanwhile, institutional traders operate in an entirely different dimension. They manage portfolios dynamically weighted across multiple markets, adjusting exposure based on evolving market conditions, correlation shifts, and risk assessments that retail traders never see. A hedge fund might be simultaneously long gold, short oil, neutral on copper, and overweight agricultural commodities, with position sizes calibrated to volatility and portfolio Greeks. When they increase gold exposure from five percent to eight percent of portfolio allocation, this rebalancing decision reflects sophisticated analysis of opportunity cost, risk parity, and cross-market dynamics that no individual chart pattern can capture.
This portfolio reweighting activity, multiplied across hundreds of institutional participants, manifests in the aggregate positioning data published weekly by the CFTC. The Commitments of Traders report does not show individual trades or strategies. It shows the collective footprint of how actual commercial hedgers and large speculators have allocated their capital across different markets. When mining companies collectively increase forward gold sales to hedge thirty percent more production than last quarter, they are not reacting to a moving average crossover. They are making strategic allocation decisions based on production forecasts, cost structures, and price expectations derived from operational realities invisible to outside observers. This is portfolio management in action, revealed through positioning data rather than price charts.
If you want to understand how institutional capital actually flows, how sophisticated traders genuinely position themselves across market cycles, the COT report provides a rare window into that hidden world. But understand what you are getting into. This is not a tool for scalpers seeking confirmation of the next five-minute move. This is not an oscillator that flashes oversold at market bottoms with convenient precision. COT analysis operates on a timescale measured in weeks and months, revealing positioning shifts that precede major market turns but offer no precision timing. The data arrives three days stale, published only once per week, capturing strategic positioning rather than tactical entries.
If you need instant gratification, if you trade intraday moves, if you demand mechanical signals with ninety percent accuracy, close this document now. COT analysis rewards patience, position sizing discipline, and tolerance for being early. It punishes impatience, overleveraging, and the expectation that any single indicator can substitute for market understanding.
The premise is deceptively simple. Every Tuesday, large traders in futures markets must report their positions to the CFTC. By Friday afternoon, this data becomes public. Academic research spanning three decades has consistently shown that not all market participants are created equal. Some traders consistently profit while others consistently lose. Some anticipate major turning points while others chase trends into exhaustion. Bessembinder and Chan (1992) demonstrated in their seminal study that commercial hedgers, those with actual exposure to the underlying commodity or financial instrument, possess superior forecasting ability compared to speculators. Their research, published in the Journal of Financial Economics, found statistically significant predictive power in commercial positioning, particularly at extreme levels. This finding challenged the efficient market hypothesis and opened the door to a new approach to market analysis based on positioning rather than price alone.
Think about what this means. Every week, the government publishes a report showing you exactly how the most informed market participants are positioned. Not their opinions. Not their predictions. Their actual money at risk. When agricultural producers collectively hold their largest short hedge in five years, they are not making idle speculation. They are locking in prices for crops they will harvest, informed by private knowledge of weather conditions, soil quality, inventory levels, and demand expectations invisible to outside observers. When energy companies aggressively hedge forward production at current prices, they reveal information about expected supply that no analyst report can capture. This is not technical analysis based on past prices. This is not fundamental analysis based on publicly available data. This is behavioral analysis based on how the smartest money is actually positioned, how institutions allocate capital across portfolios, and how those allocation decisions shift as market conditions evolve.
WHY SOME TRADERS KNOW MORE THAN OTHERS
Building on this foundation, Sanders, Boris and Manfredo (2004) conducted extensive research examining the behavior patterns of different trader categories. Their work, which analyzed over a decade of COT data across multiple commodity markets, revealed a fascinating dynamic that challenges much of what retail traders are taught. Commercial hedgers consistently positioned themselves against market extremes, buying when speculators were most bearish and selling when speculators reached peak bullishness. The contrarian positioning of commercials was not random noise but rather reflected their superior information about supply and demand fundamentals. Meanwhile, large speculators, primarily hedge funds and commodity trading advisors, exhibited strong trend-following behavior that often amplified market moves beyond fundamental values. Small traders, the retail participants, consistently entered positions late in trends, frequently near turning points, making them reliable contrary indicators.
Wang (2003) extended this research by demonstrating that the predictive power of commercial positioning varies significantly across different commodity sectors. His analysis of agricultural commodities showed particularly strong forecasting ability, with commercial net positions explaining up to fifteen percent of return variance in subsequent weeks. This finding suggests that the informational advantages of hedgers are most pronounced in markets where physical supply and demand fundamentals dominate, as opposed to purely financial markets where information asymmetries are smaller. When a corn farmer hedges six months of expected harvest, that decision incorporates private observations about rainfall patterns, crop health, pest pressure, and local storage capacity that no distant analyst can match. When an oil refinery hedges crude oil purchases and gasoline sales simultaneously, the spread relationships reveal expectations about refining margins that reflect operational realities invisible in public data.
The theoretical mechanism underlying these empirical patterns relates to information asymmetry and different participant motivations. Commercial hedgers engage in futures markets not for speculative profit but to manage business risks. An agricultural producer selling forward six months of expected harvest is not making a bet on price direction but rather locking in revenue to facilitate financial planning and ensure business viability. However, this hedging activity necessarily incorporates private information about expected supply, inventory levels, weather conditions, and demand trends that the hedger observes through their commercial operations (Irwin and Sanders, 2012). When aggregated across many participants, this private information manifests in collective positioning.
Consider a gold mining company deciding how much forward production to hedge. Management must estimate ore grades, recovery rates, production costs, equipment reliability, labor availability, and dozens of other operational variables that determine whether locking in prices at current levels makes business sense. If the industry collectively hedges more aggressively than usual, it suggests either exceptional production expectations or concern about sustaining current price levels, or a combination of both. Either way, this positioning reveals information unavailable to speculators analyzing price charts and economic data. The hedger sees the physical reality behind the financial abstraction.
Large speculators operate under entirely different incentives and constraints. Commodity Trading Advisors managing billions in assets typically employ systematic, trend-following strategies that respond to price momentum rather than fundamental supply and demand. When crude oil rallies from sixty dollars to seventy dollars per barrel, these systems generate buy signals. As the rally continues to eighty dollars, position sizes increase. The strategy works brilliantly during sustained trends but becomes a liability at reversals. By the time oil reaches ninety dollars, trend-following funds are maximally long, having accumulated positions progressively throughout the rally. At this point, they represent not smart money anticipating further gains but rather crowded money vulnerable to reversal. Sanders, Boris and Manfredo (2004) documented this pattern across multiple energy markets, showing that extreme speculator positioning typically marked late-stage trend exhaustion rather than early-stage trend development.
Small traders, the retail participants who fall below reporting thresholds, display the weakest forecasting ability. Wang (2003) found that small trader positioning exhibited negative correlation with subsequent returns, meaning their aggregate positioning served as a reliable contrary indicator. The explanation combines several factors. Retail traders often lack the capital reserves to weather normal market volatility, leading to premature exits from positions that would eventually prove profitable. They tend to receive information through slower channels, entering trends after mainstream media coverage when institutional participants are preparing to exit. Perhaps most importantly, they trade with emotion, buying into euphoria and selling into panic at precisely the wrong times.
At major turning points, the three groups often position opposite each other with commercials extremely bearish, large speculators extremely bullish, and small traders piling into longs at the last moment. These high-divergence environments frequently precede increased volatility and trend reversals. The insiders with business exposure quietly exit as the momentum traders hit maximum capacity and retail enthusiasm peaks. Within weeks, the reversal begins, and positions unwind in the opposite sequence.
FROM RAW DATA TO ACTIONABLE SIGNALS
The COT Index indicator operationalizes these academic findings into a practical trading tool accessible through TradingView. At its core, the indicator normalizes net positioning data onto a zero to one hundred scale, creating what we call the COT Index. This normalization is critical because absolute position sizes vary dramatically across different futures contracts and over time. A commercial trader holding fifty thousand contracts net long in crude oil might be extremely bullish by historical standards, or quite neutral, depending on total market size and historical ranges. Raw position numbers mean nothing without context. The COT Index solves this problem by calculating where current positioning stands relative to its range over a specified lookback period, typically two hundred fifty-two weeks or approximately five years of weekly data.
The mathematical transformation follows the methodology originally popularized by legendary trader Larry Williams, though the underlying concept appears in statistical normalization techniques across many fields. For any given trader category, we calculate the highest and lowest net position values over the lookback period, establishing the historical range for that specific market and trader group. Current positioning is then expressed as a percentage of this range, where zero represents the most bearish positioning ever seen in the lookback window and one hundred represents the most bullish extreme. A reading of fifty indicates positioning exactly in the middle of the historical range, suggesting neither extreme optimism nor pessimism relative to recent history (Williams and Noseworthy, 2009).
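For readers who want to verify the arithmetic, the transformation reduces to a few lines of code. The following Python sketch is illustrative rather than the indicator's actual implementation, and assumes weekly net positions are already available as a simple list of numbers:

```python
def cot_index(net_positions, lookback=252):
    """Express the latest net position as a percentage of its range
    over the lookback window: 0 marks the most bearish and 100 the
    most bullish positioning seen in that window."""
    window = net_positions[-lookback:]
    lowest, highest = min(window), max(window)
    if highest == lowest:
        return 50.0  # flat range: treat as neutral
    return 100.0 * (window[-1] - lowest) / (highest - lowest)

# Example: the latest reading sits at the top of its recent range.
weekly_net = [-12_000, -8_500, -2_000, 4_000, 9_500, 11_000]
print(cot_index(weekly_net))  # 100.0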
This index-based approach allows for meaningful comparison across different markets and time periods, overcoming the scaling problems inherent in analyzing raw position data. A commercial index reading of eighty-five in gold carries the same interpretive meaning as an eighty-five reading in wheat or crude oil, even though the absolute position sizes differ by orders of magnitude. This standardization enables systematic analysis across entire futures portfolios rather than requiring market-specific expertise for each contract.
The lookback period selection involves a fundamental tradeoff between responsiveness and stability. Shorter lookback periods, perhaps one hundred twenty-six weeks or approximately two and a half years, make the index more sensitive to recent positioning changes. However, they also increase noise and produce more false signals. Longer lookback periods, perhaps five hundred weeks or approximately ten years, create smoother readings that filter short-term noise but become slower to recognize regime changes. The indicator settings allow users to adjust this parameter based on their trading timeframe, risk tolerance, and market characteristics.
UNDERSTANDING CFTC DATA STRUCTURES
The indicator supports both Legacy and Disaggregated COT report formats, reflecting the evolution of CFTC reporting standards over decades of market development. Legacy reports categorize market participants into three broad groups: commercial traders (hedgers with underlying business exposure), non-commercial traders (large speculators seeking profit without commercial interest), and non-reportable traders (small speculators below reporting thresholds). Each category brings distinct motivations and information advantages to the market (CFTC, 2020).
The Disaggregated reports, introduced in September 2009 for physical commodity markets, provide finer granularity by splitting participants into five categories (CFTC, 2009). Producer and merchant positions capture those actually producing, processing, or merchandising the physical commodity. Swap dealers represent financial intermediaries facilitating derivative transactions for clients. Managed money includes commodity trading advisors and hedge funds executing systematic or discretionary strategies. Other reportables encompasses diverse participants not fitting the main categories. Small traders remain as the fifth group, representing retail participation.
This enhanced categorization reveals nuances invisible in Legacy reports, particularly distinguishing between different types of institutional capital and their distinct behavioral patterns. The indicator automatically detects which report type is appropriate for each futures contract and adjusts the display accordingly.
Importantly, Disaggregated reports exist only for physical commodity futures. Agricultural commodities like corn, wheat, and soybeans have Disaggregated reports because clear producer, merchant, and swap dealer categories exist. Energy commodities like crude oil and natural gas similarly have well-defined commercial hedger categories. Metals including gold, silver, and copper also receive Disaggregated treatment (CFTC, 2009). However, financial futures such as equity index futures, Treasury bond futures, and currency futures remain available only in Legacy format. The CFTC has indicated no plans to extend Disaggregated reporting to financial futures due to different market structures and participant categories in these instruments (CFTC, 2020).
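To illustrate how an implementation might route contracts between the two formats, consider the following sketch. The category names follow the CFTC definitions above, while the symbol roots, market subset, and function name are illustrative assumptions rather than the indicator's actual detection logic:

```python
LEGACY_CATEGORIES = ("Commercial", "Non-Commercial", "Non-Reportable")
DISAGGREGATED_CATEGORIES = (
    "Producer/Merchant/Processor/User",
    "Swap Dealers",
    "Managed Money",
    "Other Reportables",
    "Non-Reportable",
)

# Illustrative subset only: physical commodity roots receive
# Disaggregated reports, financial futures remain Legacy-only.
PHYSICAL_COMMODITY_ROOTS = {"CL", "NG", "GC", "SI", "HG", "ZC", "ZW", "ZS"}

def report_categories(symbol_root):
    """Return the trader categories available for a contract root."""
    if symbol_root in PHYSICAL_COMMODITY_ROOTS:
        return DISAGGREGATED_CATEGORIES
    return LEGACY_CATEGORIES

print(report_categories("GC"))  # gold: five Disaggregated categories
print(report_categories("ES"))  # equity index: three Legacy categories
```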
THE BEHAVIORAL FOUNDATION
Understanding which trader perspective to follow requires appreciation of their distinct trading styles, success rates, and psychological profiles. Commercial hedgers exhibit anticyclical behavior rooted in their fundamental knowledge and business imperatives. When agricultural producers hedge forward sales during harvest season, they are not speculating on price direction but rather locking in revenue for crops they will harvest. Their business requires converting volatile commodity exposure into predictable cash flows to facilitate planning and ensure survival through difficult periods. Yet their aggregate positioning reveals valuable information because these hedging decisions incorporate private information about supply conditions, inventory levels, weather observations, and demand expectations that hedgers observe through their commercial operations (Bessembinder and Chan, 1992).
Consider a practical example from energy markets. Major oil companies continuously hedge portions of forward production based on price levels, operational costs, and financial planning needs. When crude oil trades at ninety dollars per barrel, they might aggressively hedge the next twelve months of production, locking in prices that provide comfortable profit margins above their extraction costs. This hedging appears as short positioning in COT reports. If oil rallies further to one hundred dollars, they hedge even more aggressively, viewing these prices as exceptional opportunities to secure revenue. Their short positioning grows increasingly extreme. To an outside observer watching only price charts, the rally suggests bullishness. But the commercial positioning reveals that the actual producers of oil find these prices attractive enough to lock in years of sales, suggesting skepticism about sustaining even higher levels. When the eventual reversal occurs and oil declines back to eighty dollars, the commercials who hedged at ninety and one hundred dollars profit while speculators who chased the rally suffer losses.
Large speculators or managed money traders operate under entirely different incentives and constraints. Their systematic, momentum-driven strategies mean they amplify existing trends rather than anticipate reversals. Trend-following systems, the most common approach among large speculators, by definition require confirmation of trend through price momentum before entering positions (Sanders, Boris and Manfredo, 2004). When crude oil rallies from sixty dollars to eighty dollars per barrel over several months, trend-following algorithms generate buy signals based on moving average crossovers, breakouts, and other momentum indicators. As the rally continues, position sizes increase according to the systematic rules.
However, this approach becomes a liability at turning points. By the time oil reaches ninety dollars after a sustained rally, trend-following funds are maximally long, having accumulated positions progressively throughout the move. At this point, their positioning does not predict continued strength. Rather, it often marks late-stage trend exhaustion. The psychological and mechanical explanation is straightforward. Trend followers by definition chase price momentum, entering positions after trends establish rather than anticipating them. Eventually, they become fully invested just as the trend nears completion, leaving no incremental buying power to sustain the rally. When the first signs of reversal appear, systematic stops trigger, creating a cascade of selling that accelerates the downturn.
Small traders consistently display the weakest track record across academic studies. Wang (2003) found that small trader positioning exhibited negative correlation with subsequent returns in his analysis across multiple commodity markets. This result means that whatever small traders collectively do, the opposite typically proves profitable. The explanation for small trader underperformance combines several factors documented in behavioral finance literature. Retail traders often lack the capital reserves to weather normal market volatility, leading to premature exits from positions that would eventually prove profitable. They tend to receive information through slower channels, learning about commodity trends through mainstream media coverage that arrives after institutional participants have already positioned. Perhaps most importantly, retail traders are more susceptible to emotional decision-making, buying into euphoria and selling into panic at precisely the wrong times (Tharp, 2008).
SETTINGS, THRESHOLDS, AND SIGNAL GENERATION
The practical implementation of the COT Index requires understanding several key features and settings that users can adjust to match their trading style, timeframe, and risk tolerance. The lookback period determines the time window for calculating historical ranges. The default setting of two hundred fifty-two bars represents approximately one year on daily charts or five years on weekly charts, balancing responsiveness with stability. Conservative traders seeking only the most extreme, highest-probability signals might extend the lookback to five hundred bars or more. Aggressive traders seeking earlier entry and willing to accept more false positives might reduce it to one hundred twenty-six bars or even less for shorter-term applications.
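In terms of the normalization sketch shown earlier, this tradeoff amounts to a single parameter choice:

```python
# Reuses the illustrative cot_index function and weekly_net series above.
conservative = cot_index(weekly_net, lookback=500)  # smoother, fewer signals
aggressive = cot_index(weekly_net, lookback=126)    # more responsive, noisier
```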
The bullish and bearish thresholds define signal generation levels. Default settings of eighty and twenty respectively reflect academic research suggesting meaningful information content at these extremes. Readings above eighty indicate positioning in the top quintile of the historical range, representing genuine extremes rather than temporary fluctuations. Conversely, readings below twenty occupy the bottom quintile, indicating unusually bearish positioning (Briese, 2008).
However, traders must recognize that appropriate thresholds vary by market, trader category, and personal risk tolerance. Some futures markets exhibit wider positioning swings than others due to seasonal patterns, volatility characteristics, or participant behavior. Conservative traders seeking high-probability setups with fewer signals might raise thresholds to eighty-five and fifteen. Aggressive traders willing to accept more false positives for earlier entry could lower them to seventy-five and twenty-five.
The key is maintaining meaningful differentiation between bullish, neutral, and bearish zones. The default settings of eighty and twenty create a clear three-zone structure. Readings from zero to twenty represent bearish territory where the selected trader group holds unusually bearish positions. Readings from twenty to eighty represent neutral territory where positioning falls within normal historical ranges. Readings from eighty to one hundred represent bullish territory where the selected trader group holds unusually bullish positions.
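Expressed as code, the three-zone structure is a pair of comparisons. The following sketch uses the default thresholds; the function and zone labels are illustrative:

```python
def classify(index_value, bullish=80.0, bearish=20.0):
    """Map a 0-100 COT Index reading into the three-zone structure."""
    if index_value >= bullish:
        return "bullish extreme"
    if index_value <= bearish:
        return "bearish extreme"
    return "neutral"

for reading in (12.5, 50.0, 86.0):
    print(reading, classify(reading))
```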
The trading perspective selection determines which participant group the indicator follows, fundamentally shaping interpretation and signal meaning. For counter-trend traders seeking reversal opportunities, monitoring commercial positioning makes intuitive sense based on the academic research discussed earlier. When commercials reach extreme bearish readings below twenty, indicating unprecedented short positioning relative to recent history, they are effectively betting against the crowd. Given their informational advantages demonstrated by Bessembinder and Chan (1992), this contrarian stance often precedes major bottoms.
Trend followers might instead monitor large speculator positioning, but with inverted logic compared to commercials. When managed money reaches extreme bullish readings above eighty, the trend may be exhausting rather than accelerating. This seeming paradox reflects their late-cycle participation documented by Sanders, Boris and Manfredo (2004). Sophisticated traders thus use speculator extremes as fade signals, entering positions opposite to speculator consensus.
Small trader monitoring serves primarily as a contrary indicator for all trading styles. Extreme small trader bullishness above seventy-five or eighty typically warns of retail FOMO at market tops. Extreme small trader bearishness below twenty or twenty-five often marks capitulation bottoms where the last weak hands have sold.
VISUALIZATION AND USER INTERFACE
The visual design incorporates multiple elements working together to facilitate decision-making and maintain situational awareness during active trading. The primary COT Index line plots in bold with adjustable line width, defaulting to two pixels for clear visibility against busy price charts. An optional glow effect, controlled by a simple toggle, adds additional visual prominence through multiple plot layers with progressively increasing transparency and width.
A twenty-one period exponential moving average overlays the index line, providing trend context for positioning changes. When the index crosses above its moving average, it signals accelerating bullish sentiment among the selected trader group regardless of whether absolute positioning is extreme. Conversely, when the index crosses below its moving average, it signals deteriorating sentiment and potentially the beginning of a reversal in positioning trends.
The EMA provides a dynamic reference line for assessing positioning momentum. When the index trades far above its EMA, positioning is not only extreme in absolute terms but also building with momentum. When the index trades far below its EMA, positioning is contracting or reversing, which may indicate weakening conviction even if absolute levels remain elevated.
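A minimal sketch of the smoothing and cross detection, assuming a list of weekly index readings; the twenty-one period constant matches the overlay described above, and the function names are illustrative:

```python
def ema(values, period=21):
    """Exponential moving average of a sequence of index readings."""
    alpha = 2.0 / (period + 1)
    smoothed = [values[0]]
    for value in values[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

def crossed_above(index_values, period=21):
    """True when the index crossed above its EMA on the latest bar.
    Assumes at least two readings are available."""
    line = ema(index_values, period)
    return index_values[-2] <= line[-2] and index_values[-1] > line[-1]
```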
The data table positioned at the top right of the chart displays eleven metrics for each trader category, transforming the indicator from a simple index calculation into an analytical dashboard providing multidimensional market intelligence. Beyond the COT Index itself, users can monitor positioning extremity, which measures how unusual current levels are compared to historical norms using a standard deviation calculation. The extremity metric clarifies whether a reading represents the ninety-fifth or ninety-ninth percentile, with values above two standard deviations indicating genuinely exceptional positioning.
Market power quantifies each group's influence on total open interest. This metric expresses each trader category's net position as a percentage of total market open interest. A commercial entity holding forty percent of total open interest commands significantly more influence than one holding five percent, making their positioning signals more meaningful.
Momentum and rate of change metrics reveal whether positions are building or contracting, providing early warning of potential regime shifts. Position velocity measures how quickly those weekly position changes are themselves accelerating or decelerating, effectively a second derivative providing even earlier insight into inflection points.
Sentiment divergence highlights disagreements between commercial and speculative positioning. This metric calculates the absolute difference between normalized commercial and large speculator index values. Wang (2003) found that these high-divergence environments frequently preceded increased volatility and reversals.
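Each of these table metrics reduces to a short calculation. The following sketch illustrates four of them using Python's standard statistics module; the function names and input conventions are assumptions for illustration, not the indicator's internal code:

```python
from statistics import mean, pstdev

def extremity(net_positions):
    """Absolute z-score of the latest net position versus its history."""
    sigma = pstdev(net_positions)
    return abs(net_positions[-1] - mean(net_positions)) / sigma if sigma else 0.0

def market_power(net_position, open_interest):
    """Net position expressed as a percentage of total open interest."""
    return 100.0 * net_position / open_interest

def position_velocity(net_positions):
    """Second difference of positioning: the change in the weekly changes.
    Assumes at least three readings are available."""
    return (net_positions[-1] - net_positions[-2]) - (net_positions[-2] - net_positions[-3])

def sentiment_divergence(commercial_index, speculator_index):
    """Absolute gap between the two normalized 0-100 index values."""
    return abs(commercial_index - speculator_index)
```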
The table also displays concentration metrics when available, showing how positioning is distributed among the largest handful of traders in each category. High concentration indicates a few dominant players controlling most of the positioning, while low concentration suggests broad-based participation across many traders.
THE ALERT SYSTEM AND MONITORING
The alert system, comprising five distinct alert conditions, enables systematic monitoring of dozens of futures markets without constant screen watching. The bullish and bearish COT signal alerts trigger when the index crosses user-defined thresholds, indicating the selected trader group has reached extreme positioning worthy of attention. These alerts fire in real-time as new weekly COT data publishes, typically Friday afternoon following the Tuesday measurement date.
Extreme positioning alerts fire at ninety and ten index levels, representing the top and bottom ten percent of the historical range, warning of particularly stretched readings that historically precede reversals with high probability. When commercials reach a COT Index reading below ten, they are approaching their most bearish stance of the entire lookback period.
The data staleness alert notifies users when COT reports have not updated for more than ten days, preventing reliance on outdated information for trading decisions. Government shutdowns or federal holidays can interrupt the normal Friday publication schedule. Using stale signals while believing them current creates dangerous false confidence.
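The five alert conditions amount to simple boolean checks on the latest reading and the report timestamp. A minimal sketch, with days_since_report supplied by the caller and defaults mirroring the settings described above; the function and key names are illustrative:

```python
def alert_states(index_value, days_since_report,
                 bullish=80, bearish=20, extreme_hi=90, extreme_lo=10,
                 stale_after_days=10):
    """Evaluate the five alert conditions for the latest weekly reading."""
    return {
        "bullish_signal": index_value >= bullish,
        "bearish_signal": index_value <= bearish,
        "extreme_bullish": index_value >= extreme_hi,
        "extreme_bearish": index_value <= extreme_lo,
        "data_stale": days_since_report > stale_after_days,
    }

print(alert_states(index_value=8, days_since_report=4))
```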
The indicator's watermark information display positioned in the bottom right corner provides essential context at a glance. This persistent display shows the symbol and timeframe, the COT report date timestamp, days since last update, and the current signal state. A trader analyzing a potential short entry in crude oil can glance at the watermark to instantly confirm positioning context without interrupting analysis flow.
LIMITATIONS AND REALISTIC EXPECTATIONS
Practical application requires understanding both the indicator's considerable strengths and inherent limitations. COT data inherently lags price action by three days, as Tuesday positions are not published until Friday afternoon. This delay means the indicator cannot catch rapid intraday reversals or respond to surprise news events. Traders using the COT Index for timing entries must accept this latency and focus on swing trading and position trading timeframes where three-day lags matter less than in day trading or scalping.
The weekly publication schedule similarly makes the indicator unsuitable for short-term trading strategies requiring immediate feedback. The COT Index works best for traders operating on weekly or longer timeframes, where positioning shifts measured in weeks and months align with trading horizon.
Extreme COT readings can persist far longer than typical technical indicators suggest, testing the patience and capital reserves of traders attempting to fade them. When crude oil enters a sustained bull market driven by genuine supply disruptions, commercial hedgers may maintain bearish positioning for many months as prices grind higher. A commercial COT Index reading of fifteen indicating extreme bearishness might persist for three months while prices continue rallying before finally reversing. Traders without sufficient capital and risk tolerance to weather such drawdowns will exit prematurely, precisely when the signal is about to work (Irwin and Sanders, 2012).
Position sizing discipline becomes paramount when implementing COT-based strategies. Rather than risking large percentages of capital on individual signals, successful COT traders typically allocate modest position sizes across multiple signals, allowing some to take time to mature while others work more quickly.
The indicator also cannot overcome fundamental regime changes that alter the structural drivers of markets. If gold enters a true secular bull market driven by monetary debasement, commercial hedgers may remain persistently bearish as mining companies sell forward years of production at what they perceive as favorable prices. Their positioning indicates valuation concerns from a production cost perspective, but cannot stop prices from rising if investment demand overwhelms physical supply-demand balance.
Similarly, structural changes in market participation can alter the meaning of positioning extremes. The growth of commodity index investing in the two thousands brought massive passive long-only capital into futures markets, fundamentally changing typical positioning ranges. Traders relying on COT signals without recognizing this regime change would have generated numerous false bearish signals during the commodity supercycle from 2003 to 2008.
The research foundation supporting COT analysis derives primarily from commodity markets where the commercial hedger information advantage is most pronounced. Studies specifically examining financial futures like equity indices and bonds show weaker but still present effects. Traders should calibrate expectations accordingly, recognizing that COT analysis likely works better for crude oil, natural gas, corn, and wheat than for the S&P 500, Treasury bonds, or currency futures.
Another important limitation involves the reporting threshold structure. Not all market participants appear in COT data, only those holding positions above specified minimums. In markets dominated by a few large players, concentration metrics become critical for proper interpretation. A single large trader accounting for thirty percent of commercial positioning might skew the entire category if their individual circumstances are idiosyncratic rather than representative.
GOLD FUTURES DURING A HYPOTHETICAL MARKET CYCLE
Consider a practical example using gold futures during a hypothetical but realistic market scenario that illustrates how the COT Index indicator guides trading decisions through a complete market cycle. Suppose gold has rallied from fifteen hundred to nineteen hundred dollars per ounce over six months, driven by inflation concerns following aggressive monetary expansion, geopolitical uncertainty, and sustained buying by Asian central banks for reserve diversification.
Large speculators, operating primarily trend-following strategies, have accumulated increasingly bullish positions throughout this rally. Their COT Index has climbed progressively from forty-five to eighty-five. The table display shows that large speculators now hold net long positions representing thirty-two percent of total open interest, their highest in four years. Momentum indicators show positive readings, indicating positions are still building though at a decelerating rate. Position velocity has turned negative, suggesting the pace of position building is slowing.
Meanwhile, commercial hedgers have responded to the rally by aggressively selling forward production and inventory. Their COT Index has moved inversely to price, declining from fifty-five to twenty. This bearish commercial positioning represents mining companies locking in forward sales at prices they view as attractive relative to production costs. The table shows commercials now hold net short positions representing twenty-nine percent of total open interest, their most bearish stance in five years. Concentration metrics indicate this positioning is broadly distributed across many commercial entities, suggesting the bearish stance reflects collective industry view rather than idiosyncratic positioning by a single firm.
Small traders, attracted by mainstream financial media coverage of gold's impressive rally, have recently piled into long positions. Their COT Index has jumped from forty-five to seventy-eight as retail investors chase the trend. Television financial networks feature frequent segments on gold with bullish guests. Internet forums and social media show surging retail interest. This retail enthusiasm historically marks late-stage trend development rather than early opportunity.
The COT Index indicator, configured to monitor commercial positioning from a contrarian perspective, displays a clear bearish signal given the extreme commercial short positioning. The table displays multiple confirming metrics: positioning extremity shows commercials at the ninety-sixth percentile of bearishness, market power indicates they control twenty-nine percent of open interest, and sentiment divergence registers sixty-five, indicating massive disagreement between commercial hedgers and large speculators. This divergence, the highest in three years, places the market in the historically high-risk category for reversals.
The interpretation requires nuance and consideration of context beyond just COT data. Commercials are not necessarily predicting an imminent crash. Rather, they are hedging business operations at what they collectively view as favorable price levels. However, the data reveals they have sold unusually large quantities of forward production, suggesting either exceptional production expectations for the year ahead or concern about sustaining current price levels, or a combination of both. Combined with extreme speculator positioning indicating a crowded long trade, and small trader enthusiasm confirming retail FOMO, the confluence suggests elevated reversal risk even if the precise timing remains uncertain.
A prudent trader analyzing this situation might take several actions based on COT Index signals. Existing long positions could be tightened with closer stop losses. Profit-taking on a portion of long exposure could lock in gains while maintaining some participation. Some traders might initiate modest short positions as portfolio hedges, sizing them appropriately for the inherent uncertainty in timing reversals. Others might simply move to the sidelines, avoiding new long entries until positioning normalizes.
The key lesson from this case study is that COT signals provide probabilistic edges rather than deterministic predictions. They work over many observations by identifying higher-probability configurations, not by generating perfect calls on individual trades. A fifty-five percent win rate with proper risk management produces substantial profits over time, yet still means forty-five percent of signals will be premature or wrong. Traders must embrace this probabilistic reality rather than seeking the impossible goal of perfect accuracy.
INTEGRATION WITH TRADING SYSTEMS
Integration with existing trading systems represents a natural and powerful use case for COT analysis, adding a positioning dimension to price-based technical approaches or fundamental analytical frameworks. Few traders rely exclusively on a single indicator or methodology. Rather, they build systems that synthesize multiple information sources, with each component addressing different aspects of market behavior.
Trend followers might use COT extremes as regime filters, modifying position sizing or avoiding new trend entries when positioning reaches levels historically associated with reversals. Consider a classic trend-following system based on moving average crossovers and momentum breakouts. Integration of COT analysis adds nuance. When large speculator positioning exceeds ninety or commercial positioning falls below ten, the regime filter recognizes elevated reversal risk. The system might reduce position sizing by fifty percent for new signals during these high-risk periods (Kaufman, 2013).
Mean reversion traders might require COT signal confluence before fading extended moves. When crude oil becomes technically overbought and large speculators show extreme long positioning above eighty-five, both signals confirm. If only technical indicators show extremes while positioning remains neutral, the potential short signal is rejected, avoiding fades of trends with underlying institutional support (Kaufman, 2013).
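A sketch combining both integration styles, using the sizing reduction and thresholds from the examples above; the function and parameter names are illustrative assumptions:

```python
def regime_adjusted_size(base_size, spec_index, commercial_index, risk_cut=0.5):
    """Halve new trend-entry size when positioning flags reversal risk."""
    if spec_index >= 90 or commercial_index <= 10:
        return base_size * risk_cut
    return base_size

def confirmed_fade(technically_overbought, spec_index, threshold=85):
    """Allow a mean-reversion short only when technicals and positioning agree."""
    return technically_overbought and spec_index >= threshold

print(regime_adjusted_size(10, spec_index=92, commercial_index=25))  # 5.0
print(confirmed_fade(True, spec_index=70))  # False: positioning still neutral
```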
Discretionary traders can monitor the indicator as a continuous awareness tool, informing bias and position sizing without dictating mechanical entries and exits. A discretionary trader might notice commercial positioning shifting from neutral to progressively more bullish over several months. This trend informs growing positive bias even without triggering mechanical signals.
Multi-timeframe analysis represents another powerful integration approach. A trader might use daily charts for trade execution and timing while monitoring weekly COT positioning for strategic context. When both timeframes align, highest-probability opportunities emerge.
Portfolio construction for futures traders can incorporate COT signals as an additional selection criterion. Markets showing strong technical setups AND favorable COT positioning receive highest allocations. Markets with strong technicals but neutral or unfavorable positioning receive reduced allocations.
ADVANCED METRICS AND INTERPRETATION
The metrics table transforms simple positioning data into multidimensional market intelligence. Position extremity, calculated as the absolute deviation from the historical mean normalized by standard deviation, helps identify truly unusual readings versus routine fluctuations. A reading above two standard deviations indicates ninety-fifth percentile or higher extremity. Above three standard deviations indicates ninety-ninth percentile or higher, genuinely rare positioning that historically precedes major events with high probability.
Market power, expressed as a percentage of total open interest, reveals whose positioning matters most from a mechanical market impact perspective. Consider two scenarios in gold futures. In scenario one, commercials show a COT Index reading of fifteen while their market power metric shows they hold net shorts representing thirty-five percent of open interest. This is a high-confidence bearish signal. In scenario two, commercials also show a reading of fifteen, but market power shows only eight percent. While positioning is extreme relative to this category's normal range, their limited market share means less mechanical influence on price.
The rate of change and momentum metrics highlight whether positions are accelerating or decelerating, often providing earlier warnings than absolute levels alone. A COT Index reading of seventy-five with rapidly building momentum suggests continued movement toward extremes. Conversely, a reading of eighty-five with decelerating or negative momentum indicates the positioning trend is exhausting.
Position velocity measures how the weekly position changes are themselves accelerating or decelerating, effectively a second derivative. When velocity shifts from positive to negative, it indicates that while positioning may still be growing, the pace of growth is slowing. This deceleration often precedes an actual reversal in positioning direction by several weeks.
Sentiment divergence calculates the absolute difference between normalized commercial and large speculator index values. When commercials show extreme bearish positioning at twenty while large speculators show extreme bullish positioning at eighty, the divergence reaches sixty, representing near-maximum disagreement. Wang (2003) found that these high-divergence environments frequently preceded increased volatility and reversals. The mechanism is intuitive. Extreme divergence indicates the informed hedgers and momentum-following speculators have positioned opposite each other with conviction. One group will prove correct and profit while the other proves incorrect and suffers losses. The resolution of this disagreement through price movement often involves volatility.
The table also displays concentration metrics when available. High concentration indicates a few dominant players controlling most of the positioning within a category, while low concentration suggests broad-based participation. Broad-based positioning more reliably reflects collective market intelligence and industry consensus. If mining companies globally all independently decide to hedge aggressively at similar price levels, it suggests genuine industry-wide view about price valuations rather than circumstances specific to one firm.
DATA QUALITY AND RELIABILITY
The CFTC has maintained COT reporting in various forms since the nineteen twenties, providing nearly a century of positioning data across multiple market cycles. However, data quality and reporting standards have evolved substantially over this long period. Modern electronic reporting implemented in the late nineteen nineties and early two thousands significantly improved accuracy and timeliness compared to earlier paper-based systems.
Traders should understand that COT reports capture positions as of Tuesday's close each week. Markets remain open three additional days before publication on Friday afternoon, meaning the reported data is three days stale when received. During periods of rapid market movement or major news events, this lag can be significant. The indicator addresses this limitation by including timestamp information and staleness warnings.
The three-day lag creates particular challenges during extreme volatility episodes. Flash crashes, surprise central bank interventions, geopolitical shocks, and other high-impact events can completely transform market positioning within hours. Traders must exercise judgment about whether reported positioning remains relevant given intervening events.
Reporting thresholds also mean that not all market participants appear in the reportable categories of COT data. Traders holding positions below specified minimums aggregate into the non-reportable or small trader category. This aggregation affects different markets differently. In highly liquid contracts like crude oil with thousands of participants, reportable traders might represent seventy to eighty percent of open interest. In thinly traded contracts with only dozens of active participants, a few large reportable positions might represent ninety-five percent of open interest.
Another data quality consideration involves trader classification into categories. The CFTC assigns traders to commercial or non-commercial categories based on reported business purpose and activities. However, this process is not perfect. Some entities engage in both commercial and speculative activities, creating ambiguity about proper classification. The transition to Disaggregated reports attempted to address some of these ambiguities by creating more granular categories.
COMPARISON WITH ALTERNATIVE APPROACHES
Several alternative approaches to COT analysis exist in the trading community beyond the normalization methodology employed by this indicator. Some analysts focus on absolute position changes week-over-week rather than index-based normalization. This approach calculates the change in net positioning from one week to the next. The emphasis falls on momentum in positioning changes rather than absolute levels relative to history. This method potentially identifies regime shifts earlier but sacrifices cross-market comparability (Briese, 2008).
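For contrast with the index normalization shown earlier, the week-over-week approach reduces to a simple difference; a brief illustrative sketch:

```python
def weekly_position_change(net_positions):
    """Momentum of positioning: latest net position minus the prior week."""
    return net_positions[-1] - net_positions[-2]

print(weekly_position_change([41_000, 44_500, 43_200]))  # -1300: contracting
```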
Other practitioners employ more complex statistical transformations including percentile rankings, z-score standardization, and machine learning classification algorithms. Ruan and Zhang (2018) demonstrated that machine learning models applied to COT data could achieve modest improvements in forecasting accuracy compared to simple threshold-based approaches. However, these gains came at the cost of interpretability and implementation complexity.
The COT Index indicator intentionally employs a relatively straightforward normalization methodology for several important reasons. First, transparency enhances user understanding and trust. Traders can verify calculations manually and develop intuitive feel for what different readings mean. Second, academic research suggests that most of the predictive power in COT data comes from extreme positioning levels rather than subtle patterns requiring complex statistical methods to detect. Third, robust methods that work consistently across many markets and time periods tend to be simpler rather than more complex, reducing the risk of overfitting to historical data. Fourth, the complexity costs of implementation matter for retail traders without programming teams or computational infrastructure.
PSYCHOLOGICAL ASPECTS OF COT TRADING
Trading based on COT data requires psychological fortitude that differs from momentum-based approaches. Contrarian positioning signals inherently mean betting against prevailing market sentiment and recent price action. When commercials reach extreme bearish positioning, prices have typically been rising, sometimes for extended periods. The price chart looks bullish, momentum indicators confirm strength, moving averages align positively. The COT signal says bet against all of this. This psychological difficulty explains why COT analysis remains underutilized relative to trend-following methods.
Human psychology strongly predisposes us toward extrapolation and recency bias. When prices rally for months, our pattern-matching brains naturally expect continued rally. The recent price action dominates our perception, overwhelming rational analysis about positioning extremes and historical probabilities. The COT signal asking us to sell requires overriding these powerful psychological impulses.
The indicator design attempts to support the required psychological discipline through several features. Clear threshold markers and signal states reduce ambiguity about when signals trigger. When the commercial index crosses below twenty, the signal is explicit and unambiguous. The background shifts to red, the signal label displays bearish, and alerts fire. This explicitness helps traders act on signals rather than waiting for additional confirmation that may never arrive.
The metrics table provides analytical justification for contrarian positions, helping traders maintain conviction during inevitable periods of adverse price movement. When a trader enters short positions based on extreme commercial bearish positioning but prices continue rallying for several weeks, doubt naturally emerges. The table display provides reassurance. Commercial positioning remains extremely bearish. Divergence remains high. The positioning thesis remains intact even though price action has not yet confirmed.
Alert functionality ensures traders do not miss signals due to inattention while also not requiring constant monitoring that can lead to emotional decision-making. Setting alerts for COT extremes enables a healthier relationship with markets. When meaningful signals occur, alerts notify the trader, who can then calmly assess the situation and execute planned responses.
However, no indicator design can completely overcome the psychological difficulty of contrarian trading. Some traders simply cannot maintain short positions while prices rally. For these traders, COT analysis might be better employed as an exit signal for long positions rather than an entry signal for shorts.
Ultimately, successful COT trading requires developing comfort with probabilistic thinking rather than certainty-seeking. The signals work over many observations by identifying higher-probability configurations, not by generating perfect calls on individual trades. A fifty-five or sixty percent win rate with proper risk management produces substantial profits over years, yet still means forty to forty-five percent of signals will be premature or wrong. COT analysis provides genuine edge, but edge means probability advantage, not elimination of losing trades.
EDUCATIONAL RESOURCES AND CONTINUOUS LEARNING
The indicator provides extensive built-in educational resources through its documentation, detailed tooltips, and transparent calculations. However, mastering COT analysis requires study beyond any single tool or resource. Several excellent resources provide valuable extensions of the concepts covered in this guide.
Books and practitioner-focused monographs offer accessible entry points. Stephen Briese published The Commitments of Traders Bible in two thousand eight, offering detailed breakdowns of how different markets and trader categories behave (Briese, 2008). Briese's work stands out for its empirical focus and market-specific insights. Jack Schwager includes discussion of COT analysis within the broader context of market behavior in his book Market Sense and Nonsense (Schwager, 2012). Perry Kaufman's Trading Systems and Methods represents perhaps the most rigorous practitioner-focused text on systematic trading approaches including COT analysis (Kaufman, 2013).
Academic journal articles provide the rigorous statistical foundation underlying COT analysis. The Journal of Futures Markets regularly publishes research on positioning data and its predictive properties. Bessembinder's earlier work on systematic risk, hedging pressure, and risk premiums in futures markets provides a theoretical foundation (Bessembinder, 1992). Chang's examination of speculator returns provides historical context (Chang, 1985). Irwin and Sanders offer an essential skeptical perspective in their two thousand twelve article (Irwin and Sanders, 2012). Wang's two thousand three article provides one of the most thorough empirical analyses of COT data across multiple commodity markets (Wang, 2003).
Online resources extend beyond academic and book-length treatments. The CFTC website provides free access to current and historical COT reports in multiple formats. The explanatory materials section offers detailed documentation of report construction, category definitions, and historical methodology changes. Traders serious about COT analysis should read these official CFTC documents to understand exactly what they are analyzing.
Commercial COT data services such as Barchart provide enhanced visualization and analysis tools beyond raw CFTC data. TradingView's educational materials, published scripts library, and user community provide additional resources for exploring different approaches to COT analysis.
The key to mastering COT analysis lies not in finding a single definitive source but rather in building understanding through multiple perspectives and information sources. Academic research provides rigorous empirical foundation. Practitioner-focused books offer practical implementation insights. Direct engagement with data through systematic backtesting develops intuition about how positioning dynamics manifest across different market conditions.
SYNTHESIZING KNOWLEDGE INTO PRACTICE
The COT Index indicator represents the synthesis of academic research, trading experience, and software engineering into a practical tool accessible to retail traders equipped with nothing more than a TradingView account and willingness to learn. What once required expensive data subscriptions, custom programming capabilities, statistical software, and institutional resources now appears as a straightforward indicator requiring only basic parameter selection and modest study to understand. This democratization of institutional-grade analysis tools represents a broader trend in financial markets over recent decades.
Yet technology and data access alone provide no edge without understanding and discipline. Markets remain relentlessly efficient at eliminating edges that become too widely known and mechanically exploited. The COT Index indicator succeeds only when users invest time learning the underlying concepts, understand the limitations and probability distributions involved, and integrate signals thoughtfully into trading plans rather than applying them mechanically.
The academic research demonstrates conclusively that institutional positioning contains genuine information about future price movements, particularly at extremes where commercial hedgers are maximally bearish or bullish relative to historical norms. This informational content is neither perfect nor deterministic but rather probabilistic, providing edge over many observations through identification of higher-probability configurations. Bessembinder and Chan's finding that commercial positioning explained modest but significant variance in future returns illustrates this probabilistic nature perfectly (Bessembinder and Chan, 1992). The effect is real and statistically significant, yet it explains perhaps ten to fifteen percent of return variance rather than most variance. Much of price movement remains unpredictable even with positioning intelligence.
The practical implication is that COT analysis works best as one component of a trading system rather than a standalone oracle. It provides the positioning dimension, revealing where the smart money has positioned and where the crowd has followed, but price action analysis provides the timing dimension. Fundamental analysis provides the catalyst dimension. Risk management provides the survival dimension. These components work together synergistically.
The indicator's design philosophy prioritizes transparency and education over black-box complexity, empowering traders to understand exactly what they are analyzing and why. Every calculation is documented and user-adjustable. The threshold markers, background coloring, tables, and clear signal states provide multiple reinforcing channels for conveying the same information.
This educational approach reflects a conviction that sustainable trading success comes from genuine understanding rather than mechanical system-following. Traders who understand why commercial positioning matters, how different trader categories behave, what positioning extremes signify, and where signals fit within probability distributions can adapt when market conditions change. Traders mechanically following black-box signals without comprehension abandon systems after normal losing streaks.
The research foundation supporting COT analysis comes primarily from commodity markets where commercial hedger informational advantages are most pronounced. Agricultural producers hedging crops know more about supply conditions than distant speculators. Energy companies hedging production know more about operating costs than financial traders. Metals miners hedging output know more about ore grades than index funds. Financial futures markets show weaker but still present effects.
The journey from reading this documentation to profitable trading based on COT analysis involves several stages that cannot be rushed. The first is initial reading and basic understanding. The second is historical study: reviewing past market cycles to observe how positioning extremes preceded major turning points. The third is paper trading or small-size real trading to experience the psychological challenges firsthand. The fourth is refinement based on results and personal psychology.
Markets will continue evolving. New participant categories will emerge. Regulatory structures will change. Technology will advance. Yet the fundamental dynamics driving COT analysis, that different market participants have different information, different motivations, and different forecasting abilities that manifest in their positioning, will persist as long as futures markets exist. While specific thresholds or optimal parameters may shift over time, the core logic remains sound and adaptable.
The trader equipped with this indicator, understanding of the theory and evidence behind COT analysis, realistic expectations about probability rather than certainty, discipline to maintain positions through adverse volatility, and patience to allow signals time to develop possesses genuine edge in markets. The edge is not enormous (markets do not allow large persistent inefficiencies to survive arbitrage), but it is real, measurable, and exploitable by those willing to invest in learning and disciplined application.
REFERENCES
Bessembinder, H. (1992) Systematic risk, hedging pressure, and risk premiums in futures markets, Review of Financial Studies, 5(4), pp. 637-667.
Bessembinder, H. and Chan, K. (1995) The profitability of technical trading rules in the Asian stock markets, Pacific-Basin Finance Journal, 3(2-3), pp. 257-284.
Briese, S. (2008) The Commitments of Traders Bible: How to Profit from Insider Market Intelligence. Hoboken: John Wiley & Sons.
Chang, E.C. (1985) Returns to speculators and the theory of normal backwardation, Journal of Finance, 40(1), pp. 193-208.
Commodity Futures Trading Commission (CFTC) (2009) Explanatory Notes: Disaggregated Commitments of Traders Report. Available at: www.cftc.gov (Accessed: 15 January 2025).
Commodity Futures Trading Commission (CFTC) (2020) Commitments of Traders: About the Report. Available at: www.cftc.gov (Accessed: 15 January 2025).
Irwin, S.H. and Sanders, D.R. (2012) Testing the Masters Hypothesis in commodity futures markets, Energy Economics, 34(1), pp. 256-269.
Kaufman, P.J. (2013) Trading Systems and Methods. 5th edn. Hoboken: John Wiley & Sons.
Ruan, Y. and Zhang, Y. (2018) Forecasting commodity futures prices using machine learning: Evidence from the Chinese commodity futures market, Applied Economics Letters, 25(12), pp. 845-849.
Sanders, D.R., Boris, K. and Manfredo, M. (2004) Hedgers, funds, and small speculators in the energy futures markets: an analysis of the CFTC's Commitments of Traders reports, Energy Economics, 26(3), pp. 425-445.
Schwager, J.D. (2012) Market Sense and Nonsense: How the Markets Really Work and How They Don't. Hoboken: John Wiley & Sons.
Tharp, V.K. (2008) Super Trader: Make Consistent Profits in Good and Bad Markets. New York: McGraw-Hill.
Wang, C. (2003) The behavior and performance of major types of futures traders, Journal of Futures Markets, 23(1), pp. 1-31.
Williams, L.R. and Noseworthy, M. (2009) The Right Stock at the Right Time: Prospering in the Coming Good Years. Hoboken: John Wiley & Sons.
FURTHER READING
For traders seeking to deepen their understanding of COT analysis and futures market positioning beyond this documentation, the following resources provide valuable extensions:
Academic Journal Articles:
Fishe, R.P.H. and Smith, A. (2012) Do speculators drive commodity prices away from supply and demand fundamentals?, Journal of Commodity Markets, 1(1), pp. 1-16.
Haigh, M.S., Hranaiova, J. and Overdahl, J.A. (2007) Hedge funds, volatility, and liquidity provision in energy futures markets, Journal of Alternative Investments, 9(4), pp. 10-38.
Kocagil, A.E. (1997) Does futures speculation stabilize spot prices? Evidence from metals markets, Applied Financial Economics, 7(1), pp. 115-125.
Sanders, D.R. and Irwin, S.H. (2011) The impact of index funds in commodity futures markets: A systems approach, Journal of Alternative Investments, 14(1), pp. 40-49.
Books and Practitioner Resources:
Murphy, J.J. (1999) Technical Analysis of the Financial Markets: A Guide to Trading Methods and Applications. New York: New York Institute of Finance.
Pring, M.J. (2002) Technical Analysis Explained: The Investor's Guide to Spotting Investment Trends and Turning Points. 4th edn. New York: McGraw-Hill.
Federal Reserve and Research Institution Publications:
Federal Reserve Banks regularly publish working papers examining commodity markets, futures positioning, and price discovery mechanisms. The Federal Reserve Bank of San Francisco and Federal Reserve Bank of Kansas City maintain active research programs in this area.
Online Resources:
The CFTC website provides free access to current and historical COT reports, explanatory materials, and regulatory documentation.
Barchart offers enhanced COT data visualization and screening tools.
TradingView's community library contains numerous published scripts and educational materials exploring different approaches to positioning analysis.
"ha溢价率" için komut dosyalarını ara
Seasonality Heatmap [QuantAlgo]
🟢 Overview
The Seasonality Heatmap analyzes years of historical data to reveal which months and weekdays have consistently produced gains or losses, displaying results through color-coded tables with statistical metrics like consistency scores (1-10 rating) and positive occurrence rates. By calculating average returns for each calendar month and day-of-week combination, it identifies recognizable seasonal patterns (such as which months or weekdays tend to rally versus decline) and synthesizes this into actionable buy low/sell high timing possibilities for strategic entries and exits. This helps traders and investors spot high-probability seasonal windows where assets have historically shown strength or weakness, enabling them to align positions with recurring bull and bear market patterns.
🟢 How It Works
1. Monthly Heatmap
How % Return is Calculated:
The indicator fetches monthly closing prices (or Open/High/Low based on user selection) and calculates the percentage change from the previous month:
(Current Month Price - Previous Month Price) / Previous Month Price × 100
Each cell in the heatmap represents one month's return in a specific year, creating a multi-year historical view
Colors indicate performance intensity: greener/brighter shades for higher positive returns, redder/brighter shades for larger negative returns
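This close-to-close calculation can be expressed in a few lines of Pine. The following is a minimal sketch, not the published source: it covers only the Close case (the indicator also supports Open/High/Low sources) and all names are illustrative.
//@version=5
indicator("Monthly % return (sketch)")
// Close-to-close monthly change, requested from the 1M timeframe
monthlyRet = request.security(syminfo.tickerid, "1M",
     (close - close[1]) / close[1] * 100, lookahead = barmerge.lookahead_off)
plot(monthlyRet, "Monthly % return")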
What Averages Mean:
The "Avg %" row displays the arithmetic mean of all historical returns for each calendar month (e.g., averaging all Januaries together, all Februaries together, etc.)
This metric identifies historically recurring patterns by showing which months have tended to rise or fall on average
Positive averages indicate months that have typically trended upward; negative averages indicate historically weaker months
Example: If April shows +18.56% average, it means April has averaged an 18.56% gain across all years analyzed
What Months Up % Mean:
Shows the percentage of historical occurrences where that month had a positive return (closed higher than the previous month)
Calculated as:
(Number of Months with Positive Returns / Total Months) × 100
Values above 50% indicate the month has been positive more often than negative; below 50% indicates more frequent negative months
Example: If October shows "64%", then 64% of all historical Octobers had positive returns
What Consistency Score Means:
A 1-10 rating that measures how predictable and stable a month's returns have been
Calculated using the coefficient of variation (standard deviation / mean) - lower variation = higher consistency
High scores (8-10, green): The month has shown relatively stable behavior with similar outcomes year-to-year
Medium scores (5-7, gray): Moderate consistency with some variability
Low scores (1-4, red): High variability with unpredictable behavior across different years
Example: A consistency score of 8/10 indicates the month has exhibited recognizable patterns with relatively low deviation
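A minimal sketch of how such a score could be derived from the coefficient of variation is shown below. The exact mapping onto the 1-10 scale is not published, so the scale factor here is an assumption, and the returns array is presumed to be filled elsewhere.
//@version=5
indicator("Consistency score (sketch)")
score(float[] rets) =>
    m  = array.avg(rets)
    sd = array.stdev(rets)
    cv = math.abs(m) > 0 ? math.abs(sd / m) : 10.0        // degenerate mean counts as least consistent
    math.max(1, math.min(10, math.round(10 - cv * 2)))    // scale factor of 2 is a guess
var float[] monthlyRets = array.new_float()               // assumed filled with historical returns
plot(array.size(monthlyRets) > 1 ? score(monthlyRets) : na, "Consistency 1-10")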
What Best Means:
Shows the highest percentage return achieved for that specific month, along with the year it occurred
Reveals the maximum observed upside and identifies outlier years with exceptional performance
Useful for understanding the range of possible outcomes beyond the average
Example: "Best: 2016: +131.90%" means the strongest January in the dataset was in 2016 with an 131.90% gain
What Worst Means:
Shows the most negative percentage return for that specific month, along with the year it occurred
Reveals maximum observed downside and helps understand the range of historical outcomes
Important for risk assessment even in months with positive averages
Example: "Worst: 2022: -26.86%" means the weakest January in the dataset was in 2022 with a 26.86% loss
2. Day-of-Week Heatmap
How % Return is Calculated:
Calculates the percentage change from the previous day's close to the current day's price (based on user's price source selection)
Returns are aggregated by day of the week within each calendar month (e.g., all Mondays in January, all Tuesdays in January, etc.)
Each cell shows the average performance for that specific day-month combination across all historical data
Formula:
(Current Day Price - Previous Day Close) / Previous Day Close × 100
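A minimal sketch of aggregating these daily returns by weekday on a daily chart is shown below; the array names and the single-weekday output are assumptions for illustration only.
//@version=5
indicator("Weekday average % return (sketch)")
ret = (close - close[1]) / close[1] * 100
var float[] sums   = array.new_float(7, 0.0)
var int[]   counts = array.new_int(7, 0)
if not na(ret)
    d = dayofweek - 1                        // 0 = Sunday .. 6 = Saturday
    array.set(sums,   d, array.get(sums,   d) + ret)
    array.set(counts, d, array.get(counts, d) + 1)
monIdx = dayofweek.monday - 1
avgMon = array.get(counts, monIdx) > 0 ? array.get(sums, monIdx) / array.get(counts, monIdx) : na
plot(avgMon, "Avg Monday %")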
What Averages Mean:
The "Avg %" row at the bottom aggregates all months together to show the overall average return for each weekday
Identifies broad weekly patterns across the entire dataset
Calculated by summing all daily returns for that weekday across all months and dividing by total observations
Example: If Monday shows +0.04%, Mondays have averaged a 0.04% change across all months in the dataset
What Days Up % Mean:
Shows the percentage of historical occurrences where that weekday had a positive return
Calculated as:
(Number of Positive Days / Total Days Observed) × 100
Values above 50% indicate the day has been positive more often than negative; below 50% indicates more frequent negative days
Example: If Fridays show "54%", then 54% of all Fridays in the dataset had positive returns
What Consistency Score Means:
A 1-10 rating measuring how stable that weekday's performance has been across different months
Based on the coefficient of variation of daily returns for that weekday across all 12 months
High scores (8-10, green): The weekday has shown relatively consistent behavior month-to-month
Medium scores (5-7, gray): Moderate consistency with some month-to-month variation
Low scores (1-4, red): High variability across months, with behavior differing significantly by calendar month
Example: A consistency score of 7/10 for Wednesdays means they have performed with moderate consistency throughout the year
What Best Means:
Shows which calendar month had the strongest average performance for that specific weekday
Identifies favorable day-month combinations based on historical data
Format shows the month abbreviation and the average return achieved
Example: "Best: Oct: +0.20%" means Mondays averaged +0.20% during October months in the dataset
What Worst Means:
Shows which calendar month had the weakest average performance for that specific weekday
Identifies historically challenging day-month combinations
Useful for understanding which month-weekday pairings have shown weaker performance
Example: "Worst: Sep: -0.35%" means Tuesdays averaged -0.35% during September months in the dataset
3. Optimal Timing Table/Summary Table
→ Best Month to BUY: Identifies the month with the lowest average return (most negative or least positive historically), representing periods where prices have historically been relatively lower
Based on the observation that buying during historically weaker months may position for subsequent recovery
Shows the month name, its average return, and color-coded performance
Example: If May shows -0.86% as "Best Month to BUY", it means May has historically averaged -0.86% in the analyzed period
→ Best Month to SELL: Identifies the month with the highest average return (most positive historically), representing periods where prices have historically been relatively higher
Based on historical strength patterns in that month
Example: If July shows +1.42% as "Best Month to SELL", it means July has historically averaged +1.42% gains
→ 2nd Best Month to BUY: The second-lowest performing month based on average returns
Provides an alternative timing option based on historical patterns
Offers flexibility for staged entries or when the primary month doesn't align with strategy
Example: Identifies the next-most favorable historical buying period
→ 2nd Best Month to SELL: The second-highest performing month based on average returns
Provides an alternative exit timing based on historical data
Useful for staged profit-taking or multiple exit opportunities
Identifies the secondary historical strength period
Note: The same logic applies to "Best Day to BUY/SELL" and "2nd Best Day to BUY/SELL" rows, which identify weekdays based on average daily performance across all months. Days with lowest averages are marked as buying opportunities (historically weaker days), while days with highest averages are marked for selling (historically stronger days).
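The underlying selection is simply an argmin/argmax over the monthly averages. A minimal sketch, assuming the twelve averages have already been collected into an array (all names hypothetical):
//@version=5
indicator("Optimal month selection (sketch)")
var float[] avgs = array.new_float(12, 0.0)    // assumed filled with the 12 monthly averages
buyIdx  = 0
sellIdx = 0
for i = 1 to 11
    if array.get(avgs, i) < array.get(avgs, buyIdx)
        buyIdx := i                             // lowest average: Best Month to BUY
    if array.get(avgs, i) > array.get(avgs, sellIdx)
        sellIdx := i                            // highest average: Best Month to SELL
plot(buyIdx + 1, "Best month to BUY (1-12)")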
🟢 Examples
Example 1: NVIDIA NASDAQ:NVDA - Strong May Pattern with High Consistency
Analyzing NVIDIA from 2015 onwards, the Monthly Heatmap reveals May averaging +15.84% with 82% of months being positive and a consistency score of 8/10 (green). December shows -1.69% average with only 40% of months positive and a low 1/10 consistency score (red). The Optimal Timing table identifies December as "Best Month to BUY" and May as "Best Month to SELL." A trader recognizes this high-probability May strength pattern and considers entering positions in late December when prices have historically been weaker, then taking profits in May when the seasonal tailwind typically peaks. The high consistency score in May (8/10) provides additional confidence that this pattern has been relatively stable year-over-year.
Example 2: Crypto Market Cap CRYPTOCAP:TOTALES - October Rally Pattern
An investor examining total crypto market capitalization notices September averaging -2.42% with 45% of months positive and 5/10 consistency, while October shows a dramatic shift with +16.69% average, 90% of months positive, and an exceptional 9/10 consistency score (blue). The Day-of-Week heatmap reveals Mondays averaging +0.40% with 54% positive days and 9/10 consistency (blue), while Thursdays show only +0.08% with 1/10 consistency (yellow). The investor uses this multi-layered analysis to develop a strategy: enter crypto positions on Thursdays during late September (combining the historically weak month with the less consistent weekday), then hold through October's historically strong period, considering exits on Mondays when intraweek strength has been most consistent.
Example 3: Solana BINANCE:SOLUSDT - Extreme January Seasonality
A cryptocurrency trader analyzing Solana observes an extraordinary January pattern: +59.57% average return with 60% of months positive and 8/10 consistency (teal), while May shows -9.75% average with only 33% of months positive and 6/10 consistency. August also displays strength at +59.50% average with 7/10 consistency. The Optimal Timing table confirms May as "Best Month to BUY" and January as "Best Month to SELL." The Day-of-Week data shows Sundays averaging +0.77% with 8/10 consistency (teal). The trader develops a seasonal rotation strategy: accumulate SOL positions during May weakness, hold through the historically strong January period (which has shown this extreme pattern with reasonable consistency), and specifically target Sunday exits when the weekday data shows the most recognizable strength pattern.
Wickless Heikin Ashi B/S [CHE]
Wickless Heikin Ashi B/S
Purpose.
Wickless Heikin Ashi B/S is built to surface only the cleanest momentum turns: it prints a Buy (B) when a bullish Heikin-Ashi candle forms with virtually no lower wick, and a Sell (S) when a bearish Heikin-Ashi candle forms with no upper wick. Optional Lock mode turns these into one-shot signals that hold the regime (bull or bear) until the opposite side appears. The tool can also project dashed horizontal lines from each signal’s price level to help you manage entries, stops, and partial take-profits visually.
How it works.
The indicator computes standard Heikin-Ashi values from your chart’s OHLC. A bar qualifies as bullish if its HA close is at or above its HA open; bearish if below. Then the wick on the relevant side is compared to the bar’s HA range. If that wick is smaller than your selected percentage threshold (plus a tiny tick epsilon to avoid rounding noise), the raw condition is considered “wickless.” Only one side can fire; on the rare occasion both raw conditions would overlap, the bar is ignored to prevent false dual triggers. When Lock is enabled, the first valid signal sets the active regime (background shaded light green for bull, light red for bear) and suppresses further same-side triggers until the opposite side appears, which helps reduce overtrading in chop.
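A minimal sketch of that wickless test follows. The input name, the epsilon handling, and the plotting details are assumptions for illustration, not the published source.
//@version=5
indicator("Wickless HA signals (sketch)", overlay = true)
maxWickPct = input.float(1.0, "Max wick (% of HA range)")
haClose = (open + high + low + close) / 4
var float haOpen = na
haOpen := na(haOpen[1]) ? (open + close) / 2 : (haOpen[1] + haClose[1]) / 2
haHigh = math.max(high, math.max(haOpen, haClose))
haLow  = math.min(low,  math.min(haOpen, haClose))
rng = haHigh - haLow
eps = syminfo.mintick                            // tiny epsilon against rounding noise
lowerWick = math.min(haOpen, haClose) - haLow
upperWick = haHigh - math.max(haOpen, haClose)
bullRaw = haClose >= haOpen and lowerWick <= rng * maxWickPct / 100 + eps
bearRaw = haClose <  haOpen and upperWick <= rng * maxWickPct / 100 + eps
plotshape(bullRaw and not bearRaw, "B", shape.triangleup,   location.belowbar, color.lime)
plotshape(bearRaw and not bullRaw, "S", shape.triangledown, location.abovebar, color.red)
The "and not" guards mirror the rule that a bar is ignored whenever both raw conditions would overlap.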
Why wickless?
A missing wick on the “wrong” side of a Heikin-Ashi candle is a strong hint of persistent directional pressure. In practice, this filters out hesitation bars and many mid-bar flips. Traders who prefer entering only when momentum is decisive will find wickless bars useful for timing entries within an established bias.
Visuals you get.
When a valid buy appears, a small triangle “B” is plotted below the bar and a green dashed line can extend to the right from the signal’s HA open price. For sells, a triangle “S” above the bar and a red dashed line do the same. These lines act like immediate, price-anchored references for stop placement and profit scaling; you can shift the anchor left by a chosen number of bars if you prefer the line to start a little earlier for visual alignment.
How to trade it
Establish context first.
Pick a timeframe that matches your style: intraday index or crypto traders often use 5–60 minutes; swing traders might prefer 2–4 hours or daily. The tool is agnostic, but the cleanest results occur when the market is already trending or attempting a fresh breakout.
Entry.
When a B prints, the simplest rule is to enter long at or just after bar close. A conservative variation is to require price to take out the high of the signal bar in the next bar(s). For S, invert the logic: enter short on or after close, or only if price breaks the signal bar’s low.
Stop-loss.
Place the stop beyond the opposite extreme of the signal HA bar (for B: under the HA low; for S: above the HA high). If you prefer a static reference, use the dashed line level (signal HA open) or an ATR buffer (e.g., 1.0–1.5× ATR(14)). The goal is to give the trade enough room that normal noise does not immediately knock you out, while staying small enough to keep the risk contained.
Take-profit and management.
Two pragmatic approaches work well:
R-multiple scaling. Define your initial risk (distance from entry to stop). Scale out at 1R, 2R, and let a runner go toward 3R+ if structure holds.
Trailing logic. Trail behind a short moving average (e.g., EMA 20) or progressive swing points. Many traders also exit on the opposite signal when Lock flips, especially on faster timeframes.
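As a quick worked example of the R-multiple arithmetic (all prices hypothetical): a long entered at 100.00 with a stop at 97.00 risks 3.00 per unit, so the 1R scale-out sits at 103.00, the 2R scale-out at 106.00, and a 3R runner target at 109.00.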
Position sizing.
Keep risk per trade modest and consistent (e.g., 0.25–1% of account). The indicator improves timing; it does not replace risk control.
Settings guidance
Max lower wick for Bull (%) / Max upper wick for Bear (%).
These control how strict “wickless” must be. Tighter values (0.3–1.0%) yield fewer but cleaner signals and are great for strong trends or low-noise instruments. Looser values (1.5–3.0%) catch more setups in volatile markets but admit more noise. If you notice too many borderline bars triggering during high-volatility sessions, increase these thresholds slightly.
Lock (one-shot until opposite).
Keep Lock ON when you want one decisive signal per leg, reducing noise and signal clusters. Turn it OFF only if your plan intentionally scales into trends with multiple entries.
Extended lines & anchor offset.
Leave lines ON to maintain a visual memory of the last trigger levels. These often behave like near-term support/resistance. The offset simply lets you start that line one or more bars earlier if you prefer the look; it does not change the math.
Colors.
Use distinct bull/bear line colors you can read easily on your theme. The default lime/red scheme is chosen for clarity.
Practical examples
Momentum continuation (long).
Price is above your baseline (e.g., EMA 200). A B prints with a tight lower wick filter. Enter on close; stop under the signal HA low. Price pushes up in the next bars; you scale at 1R, trail the rest with EMA 20, and finally exit when a distant S appears or your trail is hit.
Breakout confirmation (short).
Following a range, price breaks down and prints an S with no upper wick. Enter short as the bar closes or on a subsequent break of the signal bar’s low. If the next bar immediately rejects and prints a bullish HA bar, your stop above the signal HA high limits damage. Otherwise, ride the move, harvesting partials as the red dashed line remains unviolated.
Alerts and automation
Set alerts to “Once Per Bar Close” for stability.
Bull ONE-SHOT fires when a valid buy prints (and Lock allows it).
Bear ONE-SHOT fires for sells analogously.
With Lock enabled, you avoid multiple pings in the same direction during a single leg—useful for webhooks or mobile notifications.
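In Pine, such one-shot alerts are typically exposed through alertcondition() calls along these lines. The condition names are assumptions, and the "Once Per Bar Close" frequency is selected in TradingView's alert dialog rather than in code.
alertcondition(bullOneShot, title = "Bull ONE-SHOT", message = "Wickless HA buy signal")
alertcondition(bearOneShot, title = "Bear ONE-SHOT", message = "Wickless HA sell signal")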
Reliability and limitations
The script calculates from completed bars and does not use higher-timeframe look-ahead or repainting tricks. Heikin-Ashi smoothing can lag turns slightly, which is expected and part of the design. In narrow ranges or whipsaw conditions, signals naturally thin out; if you must trade ranges, either tighten the wick filters and keep Lock ON, or add a trend/volatility filter (e.g., trade B only above EMA 200; S only below). Remember: this is an indicator, not a strategy. If you want exact statistics, port the triggers into a strategy and backtest with your chosen entry, stop, and exit rules.
Final notes
Wickless Heikin Ashi B/S is a precision timing tool: it waits for decisive, wickless HA bars, provides optional regime locking to reduce noise, and leaves clear price anchors on your chart for disciplined management. Use it with a simple framework (trend bias, fixed risk, and a straightforward exit plan) and it will keep your execution consistent without cluttering the screen or your decision-making.
Disclaimer: This indicator is for educational use and trade assistance only. It is not financial advice. You alone are responsible for your risk and results.
Enhance your trading precision and confidence with Wickless Heikin Ashi B/S! 🚀
Happy trading
Chervolino
Heikin-Ashi Mean Reversion Oscillator [Alpha Extract]
The Heikin-Ashi Mean Reversion Oscillator combines the smoothing characteristics of Heikin-Ashi candlesticks with mean reversion analysis to create a powerful momentum oscillator. This indicator applies Heikin-Ashi transformation twice - first to price data and then to the oscillator itself - resulting in smoother signals while maintaining sensitivity to trend changes and potential reversal points.
🔶 CALCULATION
Heikin-Ashi Transformation: Converts regular OHLC data to smoothed Heikin-Ashi values
Component Analysis: Calculates trend strength, body deviation, and price deviation from mean
Oscillator Construction: Combines components with weighted formula (40% trend strength, 30% body deviation, 30% price deviation)
Double Smoothing: Applies EMA smoothing and second Heikin-Ashi transformation to oscillator values
Signal Generation: Identifies trend changes and crossover points with overbought/oversold levels
Formula:
HA Close = (Open + High + Low + Close) / 4
HA Open = (Previous HA Open + Previous HA Close) / 2
Trend Strength = Normalized consecutive HA candle direction
Body Deviation = (HA Body - Mean Body) / Mean Body * 100
Price Deviation = ((HA Close - Price Mean) / Price Mean * 100) / Standard Deviation * 25
Raw Oscillator = (Trend Strength * 0.4) + (Body Deviation * 0.3) + (Price Deviation * 0.3)
Final Oscillator = 50 + (EMA(Raw Oscillator) / 2)
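A compact sketch of this pipeline is shown below. The trend-strength normalization and every variable name are assumptions made to match the published formula, not the indicator's source, and the second Heikin-Ashi pass over the oscillator itself is omitted for brevity.
//@version=5
indicator("HA mean reversion oscillator (sketch)")
len = input.int(14, "Oscillator period")
haC = (open + high + low + close) / 4
var float haO = na
haO := na(haO[1]) ? (open + close) / 2 : (haO[1] + haC[1]) / 2
dir = haC >= haO ? 1 : -1
var int run = 0
run := dir == nz(dir[1]) ? run + dir : dir                    // signed consecutive-candle count
trendStrength = math.max(-100.0, math.min(100.0, 100.0 * run / len))   // assumed normalization
body      = math.abs(haC - haO)
meanBody  = ta.sma(body, len)
bodyDev   = meanBody != 0 ? (body - meanBody) / meanBody * 100 : 0.0
priceMean = ta.sma(haC, len)
sd        = ta.stdev(haC, len)
priceDev  = sd != 0 ? ((haC - priceMean) / priceMean * 100) / sd * 25 : 0.0
raw = trendStrength * 0.4 + bodyDev * 0.3 + priceDev * 0.3
osc = 50 + ta.ema(raw, 2) / 2                                 // smoothing factor 2 as in defaults
plot(osc, "Oscillator")
hline(70)
hline(30)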
🔶 DETAILS
Visual Features:
Heikin-Ashi Candlesticks: Smoothed oscillator representation using HA transformation with vibrant teal/red coloring
Overbought/Oversold Zones: Horizontal lines at customizable levels (default 70/30) with background highlighting in extreme zones
Moving Averages: Optional fast and slow EMA overlays for additional trend confirmation
Signal Dashboard: Real-time table showing current oscillator status (Overbought/Oversold/Bullish/Bearish) and buy/sell signals
Reference Lines: Middle line at 50 (neutral), with 0 and 100 boundaries for range visualization
Interpretation:
Above 70: Overbought conditions, potential selling opportunity
Below 30: Oversold conditions, potential buying opportunity
Bullish HA Candles: Green/teal candles indicate upward momentum
Bearish HA Candles: Red candles indicate downward momentum
MA Crossovers: Fast EMA above slow EMA suggests bullish momentum, below suggests bearish momentum
Zone Exits: Price moving out of extreme zones (above 70 or below 30) often signals trend continuation
🔶 EXAMPLES
Mean Reversion Signals: When the oscillator reaches extreme levels (above 70 or below 30), it identifies potential reversal points where price may revert to the mean.
Example: Oscillator reaching 80+ levels during strong uptrends often precedes short-term pullbacks, providing profit-taking opportunities.
Trend Change Detection: The double Heikin-Ashi smoothing helps identify genuine trend changes while filtering out market noise.
Example: When oscillator HA candles change from red to teal after oversold readings, this confirms potential trend reversal from bearish to bullish.
Moving Average Confirmation: Fast and slow EMA crossovers on the oscillator provide additional confirmation of momentum shifts.
Example: Fast EMA crossing above slow EMA while oscillator is rising from oversold levels provides strong bullish confirmation signal.
Dashboard Signal Integration: The real-time dashboard combines oscillator status with directional signals for quick decision-making.
Example: Dashboard showing "Oversold" status with "BUY" signal when HA candles turn bullish provides clear entry timing.
🔶 SETTINGS
Customization Options:
Calculation: Oscillator period (default 14), smoothing factor (1-50, default 2)
Levels: Overbought threshold (50-100, default 70), oversold threshold (0-50, default 30)
Moving Averages: Toggle display, fast EMA length (default 9), slow EMA length (default 21)
Visual Enhancements: Show/hide signal dashboard, customizable table position
Alert Conditions: Oversold bounce, overbought reversal, bullish/bearish MA crossovers
The Heikin-Ashi Mean Reversion Oscillator provides traders with a sophisticated momentum tool that combines the smoothing benefits of Heikin-Ashi analysis with mean reversion principles. The double transformation process creates cleaner signals while the integrated dashboard and multiple confirmation methods help traders identify high-probability entry and exit points during both trending and ranging market conditions.
Non-Psychological Levels
🟩 Non-Psychological Levels is a structural analysis tool that segments price action into objective ranges, identifying Broken and Unbroken levels without relying on psychological or time-based assumptions. By emphasizing mechanically derived price behavior, it provides traders with a clear framework for analyzing support and resistance in a consistent and unbiased manner across various market conditions.
This indicator introduces a new approach to understanding market structure by focusing on price movement within defined segments, free from behavioral patterns, round numbers, or specific time intervals. While the indicator is time-agnostic in design, it works within the natural time progression of the chart, ensuring that segmentation aligns with the inherent structure of price movement. Broken levels, where price has breached a structural boundary, and Unbroken levels, which remain intact, are visualized with horizontal lines. These structural zones are complemented by dynamically boxed segments that contextualize both historical and ongoing price behavior.
By offering an objective perspective, the Non-Psychological Levels indicator complements psychology-based tools, helping traders explore market dynamics from multiple angles. When structural levels align with psychological zones, they reinforce critical price areas; when they differ, they provide opportunities to analyze price behavior from an alternative lens. This indicator is designed as both an educational framework and a practical tool, encouraging a deeper understanding of structural price behavior in technical analysis.
⭕ THEORY AND CONCEPT ⭕
The Non-Psychological Levels indicator is grounded in the principle of analyzing price behavior without reliance on psychological assumptions or time-based factors. Its primary purpose is to provide a structural framework for identifying support and resistance levels by focusing solely on price movement within mechanically defined segments. By removing external influences such as sentiment, time intervals, or market sessions, the indicator offers an unbiased lens through which traders can observe price dynamics.
Non-psychology, as defined here, refers to an approach that excludes behavioral and emotional patterns—like fear, greed, or herd mentality—from price analysis. Traditional tools often depend on these patterns to identify zones such as pivots or Fibonacci retracements, but these methods can be inconsistent in volatile markets. In contrast, the Non-Psychological Levels indicator focuses entirely on what price is doing, free from assumptions about trader behavior or external time constraints.
The indicator’s time-agnostic and mechanically driven design segments price action into consistent ranges, highlighting "Broken" levels (where price breaches structural boundaries) and "Unbroken" levels (where price holds). These structural zones remain unaffected by subjective or external influences, ensuring clarity and consistency across different markets and timeframes. By doing so, the indicator reveals a pure view of price structure, independent of psychological biases.
Importantly, the Non-Psychological Levels indicator is not intended to replace psychology-based tools but to complement them. When its structural levels align with psychological zones like round numbers or session highs/lows, the significance of these areas is reinforced. Conversely, when the levels differ, the contrast provides traders with alternative insights into market dynamics. This dual perspective—blending mechanical objectivity with behavioral analysis—enhances the depth and flexibility of market evaluation.
The following principles outline the theoretical foundation of the indicator and its unique contribution to structural price analysis:
Time-Agnostic Design : The indicator avoids reliance on time-based factors like daily opens, session intervals, or specific events. Instead, it segments price action using bar indexes, ensuring that structural levels are identified independently of external time variables. While the x-axis of a chart inherently represents time, this indicator abstracts away its influence, allowing traders to focus purely on price movement without the bias of temporal context.
Mechanical and Neutral Framework : Every calculation within the indicator is predetermined by a set of mechanical rules, ensuring no subjective input or interpretation affects the results. This objectivity guarantees that levels are derived solely from observed price behavior, providing a reliable framework that traders can trust to remain consistent across different assets, timeframes, and market conditions.
Broken and Unbroken Levels : Broken levels represent zones where price has breached a structural boundary, while Unbroken levels highlight areas where price has consistently respected its range. This distinction provides a clear and systematic method for identifying key support and resistance levels, offering insights into where future price interactions are most likely to occur.
Neutral Price Behavior : By dividing price action into equal segments, the indicator removes the influence of external factors like trader sentiment or psychological expectations. Each segment independently determines significant levels based purely on price action, enabling a structural view of the market that abstracts away behavioral or emotional biases.
Complement to Psychological Tools : While the indicator itself avoids behavioral assumptions, its levels can align with psychological zones like round numbers, pivots, or Fibonacci levels. When these structural and psychological levels overlap, it reinforces the importance of key areas, while divergences offer opportunities to examine price behavior from a new perspective.
Educational Value : The indicator encourages traders to explore the contrast between structural and psychological analysis. By introducing a framework that isolates price behavior from external influences, it challenges traditional methods of technical analysis, fostering deeper insights into market structure and behavior.
🔍 UNDERSTANDING STRUCTURAL LEVELS 🔍
The Non-Psychological Levels indicator offers a straightforward yet powerful way to understand market structure by segmenting price action into mechanically defined ranges. This segmentation highlights two key elements: "Broken" levels, where price has breached structural boundaries, and "Unbroken" levels, which remain intact and respected by price action. Together, these components create a framework for identifying potential areas of support and resistance.
Broken Levels : These are structural boundaries that price has surpassed, indicating areas where previous support or resistance failed. Broken levels often signal transitions in price behavior, such as shifts in momentum or the start of trending movements. They provide insight into zones where price has already tested and moved beyond.
Unbroken Levels : These levels remain intact within a given price segment, marking areas where price has consistently respected boundaries. Unbroken levels are particularly useful for identifying potential reversal points or zones of continued support or resistance. Their persistence across price action often makes them reliable indicators of market structure.
The visual segmentation of price action into distinct ranges allows traders to observe how price transitions between structural zones. For example:
- Clusters of Unbroken levels near the current price may suggest strong support or resistance, offering areas of interest for reversals or breakouts.
- Gaps between Unbroken levels highlight areas of price inefficiency or low interaction, which may become significant if revisited.
By focusing solely on structural price behavior, the Non-Psychological Levels indicator enables traders to analyze price independently of time or psychological factors. This makes it a valuable tool for understanding price dynamics objectively, whether used on its own or alongside other indicators.
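To make the Broken/Unbroken distinction concrete, here is a minimal sketch that records the high of the last completed fixed-size segment and marks it broken once a later close exceeds it. The segment size and all names are assumptions; the published indicator's segmentation and level handling are more elaborate.
//@version=5
indicator("Broken segment high (sketch)", overlay = true)
segSize = input.int(50, "Bars per segment")
var float runHigh = na
var float level   = na       // high of the last completed segment
var bool  broken  = false
runHigh := bar_index % segSize == 0 ? high : math.max(nz(runHigh, high), high)
if bar_index % segSize == segSize - 1
    level  := runHigh        // segment completed: its high becomes the tracked level
    broken := false
if not na(level) and close > level
    broken := true           // level breached: Unbroken becomes Broken
plot(level, "Last segment high", broken ? color.orange : color.green, 2)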
🛠️ SETTINGS 🛠️
The Non-Psychological Levels indicator offers various customizable settings to help users tailor its visualization to their specific trading style and market conditions. These settings allow adjustments to sensitivity, level projection, and the source of price calculations (e.g., wicks or closing prices). Below, we outline each setting and its impact on the chart, along with examples to illustrate their functionality.
Custom Settings
Sensitivity : This setting adjusts the balance between detailed and broader structural levels by controlling the number of segments. Higher values consolidate price action into fewer segments, highlighting major structural zones, while lower values produce more segments, revealing finer price levels.
Source : Allows the user to choose between 'Wick' or 'Close' for detecting levels. Selecting 'Wick' emphasizes the absolute highs and lows of price action, while 'Close' focuses on closing prices within each segment.
Level Labels : Configures the visual representation of price levels, allowing users to toggle between price values, symbols (▲ ▼), or disabling labels altogether. This setting ensures clarity in how Broken and Unbroken levels are displayed on the chart.
Unbroken Levels : - - - Users can customize the colors and label styles for Unbroken levels, which highlight areas where price has respected structural boundaries.
Broken Levels : -|- Similar to Unbroken levels, users can specify the visual appearance of Broken levels, including color customization for Broken highs and lows. These settings help distinguish areas where price has breached a structural boundary.
Projection Options : This setting allows users to control how broken and unbroken levels are visually extended on the chart. The Future option projects lines forward to the right of the current price, showing potential future relevance of levels. The All option extends lines both forward and backward, providing a comprehensive view of how levels align with historical and potential future price action. The None option disables projections, keeping the chart focused solely on current segment levels without any extensions.
Segments : Includes options for customizing the segment visualization:
- Live Segment : Toggles the display of a highlighted box representing the current developing segment, helping users focus on ongoing price action.
- Boxes : Allows users to display filled boxes around each segment for additional visual emphasis.
- Segment Colors : Users can define separate colors for support (lower) and resistance (upper) segments, making it easier to interpret directional trends.
- Boundaries : Enables or disables vertical lines to mark segment boundaries, providing a clearer view of structural divisions.
Repaint : This setting allows users to enable or disable triangle labels within the live segment. When enabled, the triangles dynamically update to reflect real-time price behavior during the live bar but will repaint until the bar is fully confirmed. Disabling this option prevents the triangles from appearing during the live bar, reducing potential confusion as they may otherwise flash on and off during price updates. This setting ensures users can choose their preferred visualization while maintaining clarity in real-time analysis.
Color Settings : Offers extensive customization for all visual elements, including Broken and Unbroken levels, segment boundaries, and live segments. These settings ensure the indicator can adapt to individual preferences for chart readability.
🖼️ CHART EXAMPLES 🖼️
The following chart examples illustrate different configurations and features of the Non-Psychological Levels indicator. These examples highlight how the indicator’s settings influence the visualization of structural price behavior, helping traders understand its functionality in various scenarios.
Broken and Unbroken Levels : Orange prices are Broken Highs. Blue prices are Broken Lows. Green and Red are Unbroken.
Boundaries : Enable Boundaries to visualize segments.
High Sensitivity Setting : A high sensitivity setting produces fewer segments and levels, emphasizing broader price ranges and major structural zones. This configuration is better suited for higher timeframes or identifying overarching trends.
Low Sensitivity Setting : A low sensitivity setting results in a greater number of segments and levels, offering a granular view of price structure. This configuration is ideal for analyzing detailed price movements on lower timeframes.
Live Segment with Triangles Enabled : This example shows the live segment box with triangle labels enabled. These triangles update dynamically during the live bar but may repaint until the bar is confirmed, helping traders observe real-time price behavior.
Broken and Unbroken Levels : This example highlights Broken levels (where price has breached structural boundaries and are drawn through subsequent price action) and Unbroken levels (where price has respected structural boundaries). These distinctions visually identify areas of potential support and resistance.
Broken and Unbroken Levels with Projection: All : This example demonstrates the "Project All" feature, where broken and unbroken levels are extended both forward and backward on the chart. This visualization highlights historical and potential future support and resistance zones, helping traders better understand how price interacts with these structural levels over time.
Segment Boxes with Boundaries : Filled boxes around individual segments visually distinguish each price interval, offering clarity in observing structural price transitions.
📊 SUMMARY 📊
The Non-Psychological Levels indicator provides a unique framework for analyzing structural price behavior through the identification of Broken and Unbroken levels. These levels act as a mechanical representation of support and resistance, independent of psychological biases or time-based factors. By focusing purely on price movement within defined segments, the indicator offers a neutral and consistent approach to understanding market dynamics.
This method complements traditional tools by providing an unbiased perspective. When structural levels align with psychological zones—such as round numbers or session-based highs and lows—they reinforce the significance of these areas as key price zones. When they diverge, the indicator introduces an alternative view, prompting further exploration of price behavior. This dual perspective enhances the depth of analysis by combining the mechanical and behavioral aspects of price action.
The Non-Psychological Levels indicator is not designed to generate trading signals or predict future price movements but serves as a visual and educational tool. Its adaptability across all markets and timeframes allows traders to integrate it into their broader strategies. By highlighting structural price dynamics, the indicator offers a fresh perspective on market analysis while remaining compatible with other technical tools.
⚙️ COMPATIBILITY AND LIMITATIONS ⚙️
Asset Compatibility :
The Non-Psychological Levels indicator is compatible with all asset classes, including cryptocurrencies, forex, stocks, and commodities. It can be applied to any chart or timeframe, making it a flexible tool for structural price analysis. Users should adjust the Sensitivity setting to ensure the segmentation aligns with the price behavior of the specific asset being analyzed. For instance, higher sensitivity values are more suitable for assets with large price ranges, while lower values work well for assets with tighter ranges.
Visual Range Dependency :
The indicator is optimized to perform calculations only within the visible range of the chart. This is a significant advantage, as it prevents unnecessary calculations and maintains efficient performance. However, because of this dependency, levels may appear to "recalculate" when the chart is zoomed in or out quickly or shifted abruptly. While this does not affect the integrity of the levels, it may cause a temporary lag as the indicator adjusts to the new visual range.
Persistence of Levels Beyond Visibility :
Even if levels are not visible on the chart due to zoom or scroll settings, they still exist in the background and are recalculated when revisited. This ensures that the structural price analysis remains consistent, regardless of the chart view.
Box Limitations in Pine Script :
The indicator is subject to Pine Script's inherent limitation of 500 boxes. This means that no more than 500 segments or level boxes can be drawn on the chart simultaneously. For most configurations, this limitation is mitigated by focusing on the visual range, but users employing very low sensitivity settings may exceed the limit. In such cases, only the most recent 500 boxes will be displayed, potentially omitting earlier segments.
Lag with Low Sensitivity Settings :
When sensitivity is set to a low value, the indicator creates many more segments, resulting in finer granularity and a higher number of boxes. While this provides detailed structural levels, it may increase the likelihood of exceeding Pine Script’s 500-box limit or cause a temporary lag when rendering a dense set of boxes over a wide visual range. Users should adjust sensitivity to balance detail with performance, especially on assets with high volatility or broad price ranges.
Live Segment Caution :
The live segment box updates in real time to reflect price movements as the segment is still developing. Since the segment high and segment low are not yet finalized, users should interpret this feature as a dynamic visualization of current price behavior rather than a definitive structural analysis. This ensures clarity during ongoing price action while maintaining the integrity of the indicator's framework.
Cross-Market Versatility :
The indicator’s time-agnostic and mechanical design ensures that it functions identically across all markets and timeframes. However, users should consider the unique characteristics of different markets when interpreting the results, as certain assets (e.g., highly volatile cryptocurrencies) may require sensitivity adjustments for optimal segmentation.
These considerations ensure that the Non-Psychological Levels indicator remains robust and versatile while highlighting some inherent limitations of Pine Script and real-time recalculations. Users can mitigate these constraints by carefully adjusting sensitivity and understanding how the visual range dependency affects performance.
⚠️ DISCLAIMER ⚠️
The Non-Psychological Levels indicator is a visual analysis tool and is not designed as a predictive or trading signal indicator. Its primary purpose is to highlight structural price levels, providing an objective framework for understanding support and resistance within mechanically segmented price action.
The indicator operates within the visible range of the chart to ensure efficiency and adaptiveness, but this recalculation should not be interpreted as a forecast of future price behavior. While the structural levels may align with significant price zones in hindsight, they are purely a reflection of observed price dynamics and should not be used as standalone trading signals.
This indicator is intended as an educational and visual aid to complement other analysis methods. Users are encouraged to integrate it into a broader trading strategy and make adjustments to the settings based on their individual needs and market conditions.
🧠 BEYOND THE CODE 🧠
The Non-Psychological Levels indicator, like other xxattaxx indicators , is designed with education and community collaboration in mind. Its open-source nature encourages exploration, experimentation, and the development of new approaches to price analysis. By focusing on structural price behavior rather than psychological or time-based factors, this indicator introduces a fresh perspective for users to study.
Beyond its visual utility, the indicator serves as an educational framework for understanding the concept of non-psychological analysis. It offers traders an opportunity to explore price dynamics in a purely mechanical way, challenging conventional methods and fostering deeper insights into structural behavior. This approach is especially valuable for those interested in exploring new concepts or seeking alternative perspectives on market analysis.
Your comments, suggestions, and discussions are invaluable in shaping the future of this project. We actively encourage your feedback and contributions, which will directly help us refine and improve the Non-Psychological Levels indicator. We look forward to seeing the creative ways in which you use and enhance this tool. MVS
Simple Neural Network Transformed RSI [QuantraSystems]
Simple Neural Network Transformed RSI
Introduction
The Simple Neural Network Transformed RSI (ɴɴᴛ ʀsɪ) stands out as a formidable tool for traders who specialize in lower timeframe trading.
It is an innovative enhancement of the traditional RSI readings with simple neural network smoothing techniques.
This unique blend results in fairly accurate signals, tailored for swift market movements. The ɴɴᴛ ʀsɪ is particularly resistant to the usual market noise found in lower timeframes, ensuring a clearer view of short-term trends.
Furthermore, its diverse range of visualization options adds versatility, making it a valuable tool for traders seeking to capitalize on short-duration market dynamics.
Legend
In the image you can see the BTCUSD 1D chart with the ɴɴᴛ ʀsɪ in Trend Following mode displaying the current trend, visualized through the bar coloring.
Its Overbought and Oversold zones start at 50% and end at 100% of the selected Standard Deviation (default σ = 2); readings inside these zones flag statistically rare conditions that can precede softening trend momentum or even a mean reversion.
Here you can also see the original Indicator line and the Heikin Ashi transformed Indicator bars - more on that now.
Notes
Quantra Standard Value Contents:
To draw out all the information from the indicator calculation we have added a Heikin-Ashi (HA) Candle Visualization.
This HA transformation smooths the indicator values and gives a more informative view of the Momentum and Trend of the indicator itself.
This allows early entries and exits by observing the HA transformed Indicator values.
To diversify, different visualization options are available, either a classic line, HA transformed or Hybrid, which contains both of the previous.
To make Quantra's Indicators as useful and versatile as possible we have created options
to change the barcoloring and thus the derived signal from the indicator based on different modes.
Option to choose different Modes:
Trend Following (Indicator above mid line counts as uptrend, below is downtrend)
Extremities (Everything going beyond the Deviation Bands in a Mean Reversion manner is highlighted)
Candles (Color of HA candles as barcolor)
Reversion (HA ONLY) (Reversion Signals via the triangles if HA candles change state outside of the Deviation Bands)
- Reversion Signals are indicated by the triangles in the Heikin-Ashi or Hybrid visualization when the HA Candles revert
from downwards to upwards or the other way around OUTSIDE of the SD Bands.
Depending on the Indicator they signal OB/OS areas and can either work as high probability entries and exits for Mean Reversion trades or
indicate Momentum slowdowns and potential ranges.
Please use another indicator to confirm this.
Case Study
To effectively utilize the NNT-RSI, traders should know their style and familiarize themselves with the available options.
As stated above, you have multiple modes available that you can combine as you need and see fit.
In the example shown, each mode was mostly used in isolation.
Trend Following:
Purely relied on State Change - Midline crossover
Could be combined with Momentum or Reversion analysis for better entries/exits.
Extremities:
The ideal entry/exit lies in the accordingly colored OB/OS area; the Reversion signal marks the latest possible entry/exit.
HA Candles:
Specifically applicable for strong trends. Powerful and fast tool.
Can whipsaw if used as the sole condition.
Reversions:
Shows the single entry and exit bars which have a positive expected value outcome.
Can also be used as confirmation or as last signal.
Please note that we always advise finding additional confluence through other indicators.
Traders are encouraged to test and determine the most suitable settings for their specific trading strategies and timeframes.
In the showcased trades the default settings were used.
Methodology
The Simple Neural Network Transformed RSI uses a simple neural network logic to process RSI values, smoothing them for more accurate trend analysis.
This is achieved through a linear combination of RSI values over a specified input length, weighted evenly to produce a neural network output.
// Simple neural network logic (linear combination with weighted aggregation)
var float[] inputs = array.new_float(nnLength, na)     // buffer of the last nnLength RSI values
for i = 0 to nnLength - 1
    array.set(inputs, i, rsi1[i])                      // [i] history index assumed
nnOutput = 0.0
for i = 0 to nnLength - 1
    nnOutput := nnOutput + array.get(inputs, i) * (1.0 / nnLength)   // equal weighting
nnOutput
This output is then compared against a standard or dynamic mean line to generate trend following signals.
Mean = ta.sma(nnOutput, sdLook)
cross = useMean ? 50 : Mean
The indicator also incorporates Heikin Ashi candlestick calculations to provide additional insights into market dynamics, such as trend strength and potential reversals.
// Calculate Heikin Ashi representation
ha = ha(
     na(nnOutput[1]) ? nnOutput : nnOutput[1],   // open: prior value ([1] index assumed)
     math.max(nnOutput, nnOutput[1]),            // high
     math.min(nnOutput, nnOutput[1]),            // low
     nnOutput)                                   // close
Standard deviation bands are used to create dynamic overbought and oversold zones, further enhancing the tool's analytical capabilities.
// Calculate Dynamic OB/OS Zones
stdv_bands(_src, _length, _mult) =>
    float basis = ta.sma(_src, _length)
    float dev = _mult * ta.stdev(_src, _length)
    [basis + dev, basis - dev]                              // returned tuple assumed
[innerUp, innerDn] = stdv_bands(nnOutput, sdLook, sdMult / 2)   // tuple names assumed
[outerUp, outerDn] = stdv_bands(nnOutput, sdLook, sdMult)
The Standard Deviation bands take user-defined parameters, ideally a sigma between 2 and 3, to help the indicator detect extremely improbable conditions and derive a contrarian signal from them for the user.
The parameter settings and also the visualizations allow for ample customizations by the trader.
For questions or recommendations, please feel free to seek contact in the comments.
Goertzel Cycle Composite Wave [Loxx]
As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Cycle Composite Wave indicator, a sophisticated technical analysis tool that detects cyclical patterns in financial data, helping traders make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
*** To decrease the load time of this indicator, only XX many bars back will render to the chart. You can control this value with the setting "Number of Bars to Render". This doesn't have anything to do with repainting or the indicator being endpointed***
█ Brief Overview of the Goertzel Cycle Composite Wave
The Goertzel Cycle Composite Wave is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The Goertzel Cycle Composite Wave is considered a non-repainting and endpointed indicator. This means that once a value has been calculated for a specific bar, that value will not change in subsequent bars, and the indicator is designed to have a clear start and end point. This is an important characteristic for indicators used in technical analysis, as it allows traders to make informed decisions based on historical data without the risk of hindsight bias or future changes in the indicator's values. This means traders can use this indicator for trading purposes.
The repainting version of this indicator with forecasting, cycle selection/elimination options, and data output table can be found here:
Goertzel Browser
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the cycles. The color of the lines indicates whether the wave is increasing or decreasing.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements, the indicator aims to assist traders in making more informed decisions.
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
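To make the mechanics concrete, here is a minimal Pine Script sketch of the Goertzel recurrence for a single cycle length, assuming a detrended sample array; this is an illustration, not the indicator's actual implementation:
// Squared amplitude of one cycle of length "per" in a detrended sample array.
// Classic Goertzel recurrence: s[n] = x[n] + 2*cos(w)*s[n-1] - s[n-2]
goertzel_power(float[] src, int per) =>
    float w = 2.0 * math.pi / per
    float coeff = 2.0 * math.cos(w)
    float s0 = 0.0
    float s1 = 0.0
    float s2 = 0.0
    for i = 0 to array.size(src) - 1
        s0 := array.get(src, i) + coeff * s1 - s2
        s2 := s1
        s1 := s0
    // power at the target frequency after one pass over the data
    s1 * s1 + s2 * s2 - coeff * s1 * s2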
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
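As a hedged sketch, such a mean-reversion rule might look like the following, where cycleWave stands in for a composite-cycle estimate and devLimit for a user-defined threshold (both hypothetical names):
// Fade stretches away from the cycle estimate; rejoin when price reverts
float deviation = close - cycleWave
bool stretchedAbove = deviation > devLimit   // candidate short zone
bool stretchedBelow = deviation < -devLimit  // candidate long zone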
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
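For example, a simple volatility guard could compare the dominant cycle's current amplitude against its recent average; amplitude here is a hypothetical series taken from the cycle analysis:
// Flag a potential volatility regime change when amplitude jumps 50% above its norm
bool ampSpike = amplitude > ta.sma(amplitude, 20) * 1.5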
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast: This input defines the window size for the composite wave.
FilterBartels: This boolean input determines whether Bartel's test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in Bartel's test.
BartSmoothPer: This input sets the period for the moving average used in Bartel's test.
BartSigLimit: This input sets the significance limit for Bartel's test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartel's test results.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining their purpose, methodology, and applications in finance. We will examine how each function contributes to the overall performance and effectiveness of the indicator and how they work together to create a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
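A common construction consistent with this description doubles the first LWMA pass and subtracts the second to cancel the lag. A minimal sketch, assuming this standard form (the script's own implementation may differ in detail):
// Two-pass LWMA "zero-lag" smoother
zlagma(float src, int len) =>
    float lwma1 = ta.wma(src, len)
    float lwma2 = ta.wma(lwma1, len)
    2.0 * lwma1 - lwma2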
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
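Schematically, and reusing the zlagma helper sketched above, the described method might be approximated as follows (an illustration, not the script's exact code):
// Detrend the log of price: smooth with a zero-lag MA, then remove a linear fit
detrend_logzlag(float src, int len) =>
    float smoothed = zlagma(math.log(src), len)
    smoothed - ta.linreg(smoothed, len, 0)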
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. The combination of zero-lag moving averages, Bartels probability, detrending methods, and the Hodrick-Prescott filter provides a comprehensive toolkit for analyzing and interpreting financial data, and their integration into a single indicator creates a powerful and versatile analytical tool that can provide valuable insights into financial markets.
█ In-Depth Analysis of the Goertzel Cycle Composite Wave Code
The Goertzel Cycle Composite Wave code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Cycle Composite Wave function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past sizes (WindowSizePast), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
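In sketch form, this endpoint-slope detrending could look like the following (illustrative, based only on the description above):
// Remove the straight line joining the first and last samples of the window
detrend_endpoints(float[] src) =>
    int n = array.size(src)
    float slope = (array.get(src, n - 1) - array.get(src, 0)) / (n - 1)
    float[] out = array.new_float(n, 0.0)
    for i = 0 to n - 1
        array.set(out, i, array.get(src, i) - array.get(src, 0) - slope * i)
    out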
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Cycle Composite Wave algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Cycle Composite Wave code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
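In code terms this amounts to something like the line below, where realPart and imagPart are the accumulated Goertzel components (names illustrative):
// Phase of the cycle from the real and imaginary Goertzel components
float phase = math.atan(imagPart / realPart)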
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Cycle Composite Wave code calculates the waveform of the significant cycles for the specified time window, which is defined by the WindowSizePast parameter. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveform for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
Storing waveforms in a matrix:
The calculated waveforms for each cycle are stored in the goeWorkPast matrix. This matrix holds the waveforms for the specified time window: each row represents a time-window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Cycle Composite Wave function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Cycle Composite Wave code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Cycle Composite Wave's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for specified time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast:
The WindowSizePast value is updated to ensure it is at least twice MaxPer (the maximum period).
Initializing matrices and arrays:
The matrix goeWorkPast is initialized to store the Goertzel results for specified time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for waveforms:
The goertzel array is initialized to store the endpoint Goertzel composite waveform values.
Calculating composite waveform (goertzel array):
The composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Drawing composite waveform (pvlines):
The composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
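A simplified way to express this coloring in Pine, treating goertzelWave as a stand-in for the computed composite value (the script itself draws line objects rather than a plot, so this is purely illustrative):
// Color the composite wave by its bar-to-bar direction
color waveColor = goertzelWave > goertzelWave[1] ? color.green : color.red
plot(goertzelWave, "Composite Wave", color = waveColor, linewidth = 2)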
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms and visualizes them on the chart using colored lines.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
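One hedged way to add such multi-timeframe awareness would be to request the composite value from a higher timeframe; cycleWave is again a stand-in name:
// Pull a higher-timeframe copy of the composite wave for confluence checks
float htfWave = request.security(syminfo.tickerid, "240", cycleWave)
bool cyclesAligned = math.sign(cycleWave - cycleWave[1]) == math.sign(htfWave - htfWave[1])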
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Cycle Composite Wave indicator is interpreted by analyzing the plotted composite wave line, which represents the combination of the dominant cycles extracted from the price data.
The composite wave line displays a solid line, with green indicating a bullish trend and red indicating a bearish trend.
Interpreting the Goertzel Cycle Composite Wave indicator involves identifying the trend of the composite wave lines and matching them with the corresponding bullish or bearish color.
█ Conclusion
The Goertzel Cycle Composite Wave indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Cycle Composite Wave indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Cycle Composite Wave indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the 1930s.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was introduced by economists Robert Hodrick and Edward Prescott, first circulated as a working paper in the early 1980s and formally published in 1997. It is a simple, two-parameter filter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: (Sum of Squared Deviations) + λ (Sum of Squared Second Differences)
Where:
1. The first term represents the deviation of the data from the trend.
2. The second term represents the smoothness of the trend.
3. λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ is chosen according to the frequency of the data: a common convention is 100 for annual data, 1600 for quarterly data, and 14,400 for monthly data. Higher values of λ lead to a smoother trend, while lower values lead to a more volatile trend.
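Written out, with y_t the data and τ_t the trend component, the standard objective is:

\min_{\tau} \; \sum_{t=1}^{T} (y_t - \tau_t)^2 \; + \; \lambda \sum_{t=2}^{T-1} \big[ (\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1}) \big]^2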
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Goertzel Browser [Loxx]
As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Browser indicator, a sophisticated technical analysis tool that detects cyclical patterns in financial data, helping traders make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
█ Brief Overview of the Goertzel Browser
The Goertzel Browser is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
3. Project the composite wave into the future, providing a potential roadmap for upcoming price movements.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the past and dotted lines for the future projections. The color of the lines indicates whether the wave is increasing or decreasing.
5. Displaying cycle information: The indicator provides a table that displays detailed information about the detected cycles, including their rank, period, Bartel's test results, amplitude, and phase.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements and their potential future trajectory, the indicator aims to assist traders in making more informed decisions.
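To illustrate the future projection described above, the composite wave can be drawn past the last bar with dotted, direction-colored line segments; futureWave and WindowSizeFuture are stand-ins for the script's internal buffer and input:
// Draw the projected wave ahead of the last bar as dotted segments
if barstate.islast
    for i = 0 to WindowSizeFuture - 2
        float y1 = array.get(futureWave, i)
        float y2 = array.get(futureWave, i + 1)
        line.new(bar_index + i, y1, bar_index + i + 1, y2,
             style = line.style_dotted, color = y2 > y1 ? color.green : color.red)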
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast and WindowSizeFuture: These inputs define the window size for past and future projections of the composite wave.
FilterBartels: This boolean input determines whether Bartel's test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in Bartel's test.
BartSmoothPer: This input sets the period for the moving average used in Bartel's test.
BartSigLimit: This input sets the significance limit for Bartel's test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartel's test results.
UseCycleList: This boolean input determines whether a user-defined list of cycles should be used for constructing the composite wave. If set to false, the top N cycles will be used.
Cycle1, Cycle2, Cycle3, Cycle4, and Cycle5: These inputs define the user-defined list of cycles when 'UseCycleList' is set to true. If using a user-defined list, each of these inputs represents the period of a specific cycle to include in the composite wave.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining their purpose, methodology, and applications in finance. We will examine how each function contributes to the overall performance and effectiveness of the indicator and how they work together to create a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. Together, the zero-lag moving average, Bartels probability, detrending methods, and Hodrick-Prescott filter form a comprehensive toolkit for analyzing and interpreting financial data, and their integration into a single indicator creates a powerful and versatile analytical tool.
█ In-Depth Analysis of the Goertzel Browser Code
The Goertzel Browser code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Browser function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past and future window sizes (WindowSizePast, WindowSizeFuture), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, goeWorkFuture, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4), each with a length of twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
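In sketch form, that endpoint-based detrend looks roughly like the following fragment (the function name and array handling are illustrative, not the script's own):
// Remove a straight line drawn between the first and last sample values.
f_detrendEndpoints(float[] data) =>
    int n = array.size(data)
    float slope = (array.get(data, n - 1) - array.get(data, 0)) / (n - 1)
    for i = 0 to n - 1
        array.set(data, i, array.get(data, i) - slope * i)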
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Browser algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Browser code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
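Condensing the last few steps, a single frequency bin can be sketched as follows; this is the textbook Goertzel recursion, while the script's actual implementation adds the cycle-strength options and work buffers described above (the function name is illustrative, and re would need guarding against zero in production code):
// Goertzel amplitude and phase for one trial period over n samples.
f_goertzelBin(float[] data, int period) =>
    int n = array.size(data)
    float w = 2.0 * math.pi / period
    float coeff = 2.0 * math.cos(w)
    float s1 = 0.0
    float s2 = 0.0
    for i = 0 to n - 1
        float s0 = array.get(data, i) + coeff * s1 - s2
        s2 := s1
        s1 := s0
    // Real and imaginary parts of the spectral content at this period.
    float re = s1 - s2 * math.cos(w)
    float im = s2 * math.sin(w)
    float amp = 2.0 / n * math.sqrt(re * re + im * im)
    float phase = math.atan(im / re)
    [amp, phase]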
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Browser code calculates the waveform of the significant cycles for both past and future time windows. The past and future windows are defined by the WindowSizePast and WindowSizeFuture parameters, respectively. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveforms for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
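As a rough sketch, each cycle's contribution to a composite value at bar offset t can be generated like this (cosine variant in addition mode; buffer names are illustrative):
// Sum the detected cycles into one composite value at bar offset t.
f_composite(float[] amps, float[] periods, float[] phases, int nCycles, int t) =>
    float wave = 0.0
    for k = 0 to nCycles - 1
        float w = 2.0 * math.pi / array.get(periods, k)
        wave += array.get(amps, k) * math.cos(w * t + array.get(phases, k))
    wave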
Storing waveforms in matrices:
The calculated waveforms for each cycle are stored in two matrices - goeWorkPast and goeWorkFuture. These matrices hold the waveforms for the past and future time windows, respectively. Each row in the matrices represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Browser function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Browser code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Browser's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for both past and future time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast and WindowSizeFuture:
The WindowSizePast and WindowSizeFuture are updated to ensure they are at least twice the MaxPer (maximum period).
Initializing matrices and arrays:
Two matrices, goeWorkPast and goeWorkFuture, are initialized to store the Goertzel results for past and future time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for past and future waveforms:
Three arrays, epgoertzel, goertzel, and goertzelFuture, are initialized to store the endpoint Goertzel, non-endpoint Goertzel, and future Goertzel projections, respectively.
Calculating composite waveform for past bars (goertzel array):
The past composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Calculating composite waveform for future bars (goertzelFuture array):
The future composite waveform is calculated in a similar way as the past composite waveform.
Drawing past composite waveform (pvlines):
The past composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
Drawing future composite waveform (fvlines):
The future composite waveform is drawn on the chart using dotted lines. The color of the lines is determined by the direction of the waveform (fuchsia for upward, yellow for downward).
Displaying cycle information in a table (table3):
A table is created to display the cycle information, including the rank, period, Bartel value, amplitude (or cycle strength), and phase of each detected cycle.
Filling the table with cycle information:
The indicator iterates through the detected cycles and retrieves the relevant information (period, amplitude, phase, and Bartel value) from the corresponding arrays. It then fills the table with this information, displaying the values up to six decimal places.
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms for both past and future time windows and visualizes them on the chart using colored lines. Additionally, it displays detailed cycle information in a table, including the rank, period, Bartel value, amplitude (or cycle strength), and phase of each detected cycle.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles and potential future impact. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
No guarantee of future performance:
While the script can provide insights into past cycles and potential future trends, it is important to remember that past performance does not guarantee future results. Market conditions can change, and relying solely on the script's predictions without considering other factors may lead to poor trading decisions.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Browser indicator can be interpreted by analyzing the plotted lines and the table presented alongside them. The indicator plots two lines: past and future composite waves. The past composite wave represents the composite wave of the past price data, and the future composite wave represents the projected composite wave for the next period.
The past composite wave line displays a solid line, with green indicating a bullish trend and red indicating a bearish trend. On the other hand, the future composite wave line is a dotted line with fuchsia indicating a bullish trend and yellow indicating a bearish trend.
The table presented alongside the indicator shows the top cycles with their corresponding rank, period, Bartels, amplitude or cycle strength, and phase. The amplitude is a measure of the strength of the cycle, while the phase is the position of the cycle within the data series.
Interpreting the Goertzel Browser indicator involves identifying the trend of the past and future composite wave lines and matching them with the corresponding bullish or bearish color. Additionally, traders can identify the top cycles with the highest amplitude or cycle strength and utilize them in conjunction with other technical indicators and fundamental analysis for trading decisions.
This indicator is considered a repainting indicator because the value of the indicator is calculated based on the past price data. As new price data becomes available, the indicator's value is recalculated, potentially causing the indicator's past values to change. This can create a false impression of the indicator's performance, as it may appear to have provided a profitable trading signal in the past when, in fact, that signal did not exist at the time.
The Goertzel indicator is also non-endpointed, meaning that it is not calculated up to the current bar or candle. Instead, it uses a fixed amount of historical data to calculate its values, which can make it difficult to use for real-time trading decisions. For example, if the indicator uses 100 bars of historical data to make its calculations, it cannot provide a signal until the current bar has closed and become part of the historical data. This can result in missed trading opportunities or delayed signals.
█ Conclusion
The Goertzel Browser indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Browser indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Browser indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the German geophysicist Julius Bartels, who developed it in the 1930s.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was introduced by economists Robert Hodrick and Edward Prescott in a working paper circulated in 1981 and formally published in 1997. It is a simple filter with a single smoothing parameter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize over the trend τ: Σ_t (y_t − τ_t)² + λ Σ_t ((τ_{t+1} − τ_t) − (τ_t − τ_{t−1}))²
Where:
y_t is the original series and τ_t is the trend; the first term measures the deviation of the data from the trend.
The second term measures the smoothness of the trend through its squared second differences.
λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ is chosen according to the frequency of the data; conventional values are 100 for annual data, 1,600 for quarterly data, and 14,400 for monthly data. Higher values of λ produce a smoother trend, while lower values allow a more volatile trend that tracks the data closely.
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Adaptive Candlestick Pattern Recognition System
█ INTRODUCTION
Nearly three years in the making, worked on intermittently in the few spare hours of weekends and time off, this is a passion project I undertook to flesh out my skills as a computer programmer. This script currently recognizes 85 different candlestick patterns ranging from one to five candles in length. It also performs statistical analysis on those patterns to determine their prior performance and changes their coloration based on that performance. In searching TradingView's script library for scripts similar to this one, I found a handful. However, among the ones which were open source, I did not see many that truly captured the power of Pine Script or leveraged the way it works to create efficient and reliable code; that was one of the main driving factors for releasing this 5,000+ line behemoth open source.
Please take the time to review this description and source code to utilize this script to its fullest potential.
█ CONCEPTS
This script covers the following topics: Candlestick Theory, Trend Direction, Higher Timeframes, Price Analysis, Statistic Analysis, and Code Design.
Candlestick Theory - This script focuses solely on the concept of Candlestick Theory: arrangements of candlesticks may form certain patterns that can potentially influence the future price action of assets which experience those patterns. A full list of patterns (grouped by pattern length) will be in its own section of this description. This script contains two modes of operation for identifying candlestick patterns, 'CLASSIC' and 'BREAKOUT'.
CLASSIC: In this mode, candlestick patterns will be identified whenever they appear. The user has a wide variety of inputs to manipulate that can change how certain patterns are identified and even enable alerts to notify themselves when these patterns appear. Each pattern selected to appear will have their Profit or Loss (P/L) calculated starting from the first candle open succeeding the pattern to a candle close specified some number of candles ahead. These P/L calculations are then collected for each pattern, and split among partitions of prior price action of the asset the script is currently applied to (more on that in Higher Timeframes ).
BREAKOUT: In this mode, P/L calculations are held off until a breakout direction has been confirmed. The user may specify the number of candles ahead of a pattern's appearance (from one to five) that a pattern has to confirm a breakout in either an upward or downward direction. A breakout is constituted when there is a candle following the appearance of the pattern that closes above/at the highest high of the pattern, or below/at its lowest low. Only then will percent return calculations be performed for the pattern that's been identified, and these percent returns are broken up not only by the partition they had appeared in but also by the breakout direction itself. Patterns which do not breakout in either direction will be ignored, along with having their labels deleted.
In both of these modes, patterns may be overridden. Overrides occur when a smaller pattern has been detected and ends up becoming one (or more) of the candles of a larger pattern. A key example of this would be the Bearish Engulfing and the Three Outside Down patterns. A Three Outside Down necessitates a Bearish Engulfing as its first two candles, while the third candle closes lower. When a pattern is overridden, the return for that pattern will no longer be tracked. Overrides will not occur if the tail end of a larger pattern lands at the beginning of a smaller pattern (Ex: if a Bullish Engulfing is formed by the third candle of a Three Outside Down together with the candle immediately following that pattern, the Three Outside Down will not be overridden).
Important Functionality Note: These patterns are only searched for at the most recently closed candle, not on the currently closing candle, which creates an offset of one for this script's execution. (SEE LIMITATIONS)
Trend Direction - Many of the patterns require a trend direction prior to their appearance. Noting TradingView's own publication of candlestick patterns, I utilize a similar method for determining trend direction. Moving Averages are used to determine which trend is currently taking place for candlestick patterns to be sought out. The user has access to two Moving Averages which they may individually modify the following for each: Moving Average type (list of 9), their length, width, source values, and all variables associated with two special Moving Averages (Least Squares and Arnaud Legoux).
There are 3 settings for these Moving Averages, the first two switch between the two Moving Averages, and the third uses both. When using individual Moving Averages, the user may select a 'price point' to compare against the Moving Average (default is close). This price point is compared to the Moving Average at the candles prior to the appearance of candle patterns. Meaning: The close compared to the Moving Average two candles behind determines the trend direction used for Candlestick Analysis of one candle patterns; three candles behind for two candle patterns and so on. If the selected price point is above the Moving Average, then the current trend is an 'uptrend', 'downtrend' otherwise.
The third setting using both Moving Averages will compare the lengths of each, and trend direction is determined by the shorter Moving Average compared to the longer one. If the shorter Moving Average is above the longer, then the current trend is an 'uptrend', 'downtrend' otherwise. If the lengths of the Moving Averages are the same, or both Moving Averages are Symmetrical, then MA1 will be used by default. (SEE LIMITATIONS)
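In simplified Pine v5 terms, the single-MA comparison for short patterns reduces to something like this (the MA type, length, and price point are placeholders for the script's inputs):
ma = ta.sma(close, 50)
// One-candle patterns consult the bar two candles back: an uptrend
// exists when the chosen price point was above the MA at that offset.
bool uptrendOneCandle = close[2] > ma[2]
// Two-candle patterns look three candles back, and so on.
bool uptrendTwoCandle = close[3] > ma[3]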
Higher Timeframes - This script employs the use of Higher Timeframes with a few request.security calls. The purpose of these calls is strictly for the partitioning of an asset's chart, splitting the returns of patterns into three separate groups. The four inputs in control of this partitioning split the chart based on: A given resolution to grab values from, the length of time in that resolution, and 'Upper' and 'Lower Limits' which split the trading range provided by that length of time in that resolution that forms three separate groups. The default values for these four inputs will partition the current chart by the yearly high-low range where: the 'Upper' partition is the top 20% of that trading range, the 'Middle' partition is 80% to 33% of the trading range, and the 'Lower' partition covers the trading range within 33% of the yearly low.
Patterns which are identified by this script will have their returns grouped together based on which partition they had appeared in. For example, a Bullish Engulfing which occurs within a third of the yearly low will have its return placed separately from a Bullish Engulfing that occurred within 20% of the yearly high. The idea is that certain patterns may perform better or worse depending on when they had occurred during an asset's trading range.
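A minimal sketch of that partitioning logic, assuming the default yearly resolution and the 20%/33% limits described above (variable names are illustrative):
// Fetch the higher-timeframe high/low and split the range in three.
[yrHigh, yrLow] = request.security(syminfo.tickerid, "12M", [high, low])
float yrRange = yrHigh - yrLow
float upperLine = yrHigh - yrRange * 0.20   // 'Upper': top 20% of the range
float lowerLine = yrLow + yrRange * 0.33    // 'Lower': bottom 33% of the range
int partition = close >= upperLine ? 0 : close <= lowerLine ? 2 : 1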
Price Analysis - Price Analysis is a major part of this script's functionality as it can fundamentally change how patterns are shown to the user. The settings related to Price Analysis include setting the number of candles ahead of a pattern's appearance to determine the return of that pattern. In 'BREAKOUT' mode, an additional setting allows the user to specify where the P/L calculation will begin for a pattern that had appeared and confirmed. (SEE LIMITATIONS)
The calculation for percent returns of patterns is illustrated with the following pseudo-code (CLASSIC mode, this is a simplified version of the actual code):
// Simplified CLASSIC-mode return collection (pseudo-code).
type patternObj
    int ID              // pattern identifier; row index into the returns matrix
    int partition       // chart partition the pattern appeared in (column index)
type returnsArray
    float[] returns     // percent returns collected for one pattern/partition
// f_FindPattern() returns na when no pattern is found.
patternObj TEST_VAL = f_FindPattern()
// Assumed one-bar offset, per the functionality note: patterns are
// identified on the most recently closed candle.
patternObj priorTestVal = TEST_VAL[1]
if not na(priorTestVal)
    int pnlMatrixRow = priorTestVal.ID
    int pnlMatrixCol = priorTestVal.partition
    returnsArray matrixReturn = matrix.get(PERCENT_RETURNS, pnlMatrixRow, pnlMatrixCol)
    float percentReturn = ((close - open) / open) * 100
    array.push(matrixReturn.returns, percentReturn)
Statistic Analysis - This script uses Pine's built-in array functions to conduct the Statistic Analysis for patterns. When a pattern is found and its P/L calculation is complete, its return is added to a 'Return Array' User-Defined-Type that contains numerous fields which retain information on a pattern's prior performance. The actual UDT is as follows:
type returnArray
    float[] returns  = na   // individual percent returns recorded for the pattern
    int     size     = 0    // number of returns collected
    float   avg      = 0    // average of the collected returns
    float   median   = 0    // median of the collected returns
    float   stdDev   = 0    // standard deviation of the collected returns
    int[]   polarities = na // counts of 'Positive', in-between, and 'Negative' returns
All values within this UDT will be updated when a return is added to it (some based on user input). The array.avg, array.median, and array.stdev functions are run and their results saved into the respective fields after a return is placed in the 'returns' array. The 'polarities' integer array is what changes based on user input. The user specifies two different percentages that define 'Positive' and 'Negative' returns for patterns. When a pattern returns above, below, or in between these two values, different indices of this array are incremented to reflect the kind of return that pattern had just experienced.
These values (plus the full name, partition the pattern occurred in, and a 95% confidence interval of expected returns) will be displayed to the user on the tooltip of the labels that identify patterns. Simply scroll over the pattern label to view each of these values.
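For reference, a 95% confidence interval under a normal approximation can be sketched as follows, where ra stands for a hypothetical returnArray instance (the script's exact formula may differ):
// Mean return plus/minus 1.96 standard errors.
float ciHalf = 1.96 * ra.stdDev / math.sqrt(ra.size)
float ciLow = ra.avg - ciHalf
float ciHigh = ra.avg + ciHalf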
Code Design - Overall this script is as much of an art piece as it is functional. Its design features numerous depictions of ASCII Art that illustrate what is being attempted by the functions that identify patterns, and an incalculable amount of time was spent rewriting portions of code to improve its efficiency. Admittedly, this final version is nearly 1,000 lines shorter than a previous version (one which took nearly 30 seconds after compilation to run, and didn't do nearly half of what this version does). The use of UDTs, especially the 'patternObj' one crafted and redesigned from the Hikkake Hunter 2.0 I published last month, played a significant role in making this script run efficiently. There is a slight rigidity in some of this code mainly around pattern IDs which are responsible for displaying the abbreviation for patterns (as well as the full names under the tooltips, and the matrix row position for holding returns), as each is hard-coded to correspond to that pattern.
However, one thing I would like to mention is the extensive use of global variables for pattern detection. Many scripts I had looked over for ideas on how to identify candlestick patterns had the same idea; break the pattern into a set of logical 'true/false' statements derived from historically referencing candle OHLC values. Some scripts which identified upwards of 20 to 30 patterns would reference Pine's built-in OHLC values for each pattern individually, potentially requesting information from TradingView's servers numerous times that could easily be saved into a variable for re-use and only requested once per candle (what this script does).
█ FEATURES
This script features a massive amount of switches, options, floating point values, detection settings, and methods for identifying/tailoring pattern appearances. All modifiable inputs for patterns are grouped together based on the number of candles they contain. Other inputs (like those for statistics settings and coloration) are grouped separately and presented in a way I believe makes the most sense.
Not mentioned above is the coloration settings. One of the aims of this script was to make patterns visually signify their behavior to the user when they are identified. Each pattern has its own collection of returns which are analyzed and compared to the inputs of the user. The user may choose the colors for bullish, neutral, and bearish patterns. They may also choose the minimum number of patterns needed to occur before assigning a color to that pattern based on its behavior; a color for patterns that have not met this minimum number of occurrences yet, and a color for patterns that are still processing in BREAKOUT mode.
There are also an additional three settings which alter the color scheme for patterns: Statistic Point-of-Reference, Adaptive coloring, and Hard Limiting. The Statistic Point-of-Reference decides which value (average or median) will be compared against the 'Negative' and 'Positive Return Tolerance'(s) to guide the coloration of the patterns (or for Adaptive Coloring, the generation of a color gradient).
Adaptive Coloring will have this script produce a gradient that patterns will be colored along. The more bullish or bearish a pattern is, the further along the gradient those patterns will be colored starting from the 'Neutral' color (hard lined at the value of 0%: values above this will be colored bullish, bearish otherwise). When Adaptive Coloring is enabled, this script will request the highest and lowest values (these being the Statistic Point-of-Reference) from the matrix containing all returns and rewrite global variables tied to the negative and positive return tolerances. This means that all patterns identified will be compared with each other to determine bullish/bearishness in Adaptive Coloring.
Hard Limiting will prevent these global variables from being rewritten, so patterns whose Statistic Point-of-Reference exceed the return tolerances will be fully colored the bullish or bearish colors instead of a generated gradient color. (SEE LIMITATIONS)
Apart from the Candle Detection Modes (CLASSIC and BREAKOUT), there's an additional two inputs which modify how this script behaves grouped under a "MASTER DETECTION SETTINGS" tab. These two "Pattern Detection Settings" are 'SWITCHBOARD' and 'TARGET MODE'.
SWITCHBOARD: Every single pattern has a switch that is associated with its detection. When a switch is enabled, the code which searches for that pattern will be run. With the Pattern Detection Setting set to this, all patterns that have their switches enabled will be sought out and shown.
TARGET MODE: There is an additional setting which operates on top of 'SWITCHBOARD' that singles out an individual pattern the user specifies through a drop down list. The names of every pattern recognized by this script will be present along with an identifier that shows the number of candles in that pattern (Ex: " (# candles)"). All patterns enabled in the switchboard will still have their returns measured, but only the pattern selected from the "Target Pattern" list will be shown. (SEE LIMITATIONS)
The vast majority of other features are held in the one, two, and three candle pattern sections.
For one-candle patterns, there are:
3 — Settings related to defining 'Tall' candles:
The number of candles to sample for previous candle-size averages.
The type of comparison done for 'Tall' Candles: Settings are 'RANGE' and 'BODY'.
The 'Tolerance' for tall candles, specifying what percent of the 'average' size candles must exceed to be considered 'Tall'.
When 'Tall Candle Setting' is set to RANGE, the high-low ranges are what the current candle range will be compared against to determine if a candle is 'Tall'. Otherwise the candle bodies (absolute value of the close - open) will be compared instead; a sketch of this check appears after this list. (SEE LIMITATIONS)
Hammer Tolerance - How large a 'discarded wick' may be before it disqualifies a candle from being a 'Hammer'.
Discarded wicks are compared to the size of the Hammer's candle body and are dependent upon the body's center position. Hammer bodies closer to the high of the candle will have the upper wick used as its 'discarded wick', otherwise the lower wick is used.
9 — Doji Settings, some pulled from an old Doji Hunter I made a while back:
Doji Tolerance - How large the body of a candle may be compared to the range to be considered a 'Doji'.
Ignore N/S Dojis - Turns off Trend Direction for non-special Dojis.
GS/DF Doji Settings - 2 Inputs that enable and specify how large wicks that typically disqualify Dojis from being 'Gravestone' or 'Dragonfly' Dojis may be.
4 Settings related to 'Long Wick Doji' candles detailed below.
A Tolerance for 'Rickshaw Man' Dojis specifying how close the center of the body must be to the range to be valid.
The 4 settings the user may modify for 'Long Legged' Dojis are: A Sample Base for determining the previous average of wicks, a Sample Length specifying how far back to look for these averages, a Behavior Setting to define how 'Long Legged' Dojis are recognized, and a tolerance to specify how large in comparison to the prior wicks a Doji's wicks must be to be considered 'Long Legged'.
The 'Sample Base' list has two settings:
RANGE: The wicks of prior candles are compared to their candle ranges and the 'wick averages' will be what the average percent of ranges were in the sample.
WICKS: The size of the wicks themselves are averaged and returned for comparing against the current wicks of a Doji.
The 'Behavior' list has three settings:
ONE: Only one wick length needs to exceed the average by the tolerance for a Doji to be considered 'Long Legged'.
BOTH: Both wick lengths need to exceed the average of the tolerance of their respective wicks (upper wicks are compared to upper wicks, lower wicks compared to lower) to be considered 'Long Legged'.
AVG: Both wicks and the averages of the previous wicks are added together, divided by two, and compared. If the 'average' of the current wicks exceeds this combined average of prior wicks by the tolerance, then this would constitute a valid 'Long Legged' Doji. (For Dojis in general - SEE LIMITATIONS)
The final input is one related to candle patterns which require a Marubozu candle in them. The two settings for this input are 'INCLUSIVE' and 'EXCLUSIVE'. If INCLUSIVE is selected, any opening/closing variant of Marubozu candles will be allowed in the patterns that require them.
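As referenced above, a hedged sketch of the 'Tall' candle test; the input names are placeholders for the script's actual settings:
string tallSetting = input.string("RANGE", "Tall Candle Setting", options = ["RANGE", "BODY"])
int sampleLen = input.int(14, "Sample Length")
float tallTolerance = input.float(25.0, "Tall Tolerance %")
// RANGE mode compares high-low spans; BODY mode compares candle bodies.
float candleSize = tallSetting == "RANGE" ? high - low : math.abs(close - open)
// Average of the prior candles' sizes, excluding the current bar.
float avgSize = ta.sma(candleSize, sampleLen)[1]
// 'Tall' when the candle exceeds the sample average by the tolerance percent.
bool isTall = candleSize > avgSize * (1.0 + tallTolerance / 100.0)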
For two-candle patterns, there are:
2 — Settings which define 'Engulfing' parameters:
Engulfing Setting - Two options, RANGE or BODY which sets up how one candle may 'engulf' the previous.
Inclusive Engulfing - Boolean which enables if 'engulfing' candles can be equal to the values needed to 'engulf' the prior candle.
For the 'Engulfing Setting':
RANGE: If the second candle's high-low range completely covers the high-low range of the prior candle, this is recognized as 'engulfing'.
BODY: If the second candle's open-close completely covers the open-close of the previous candle, this is recognized as 'engulfing'; a sketch of this check appears after the two-candle settings. (SEE LIMITATIONS)
4 — Booleans specifying different settings for a few patterns:
One which allows for 'opens within body' patterns to let the second candle's open/close values match the prior candles' open/close.
One which forces 'Kicking' patterns to have a gap if the Marubozu setting is set to 'INCLUSIVE'.
And Two which dictate if the individual candles in 'Stomach' patterns need to be 'Tall'.
8 — Floating point values which affect 11 different patterns:
One which determines the distance the close of the first candle in a 'Hammer Inverted' pattern must be to the low to be considered valid.
One which affects how close the opens/closes need to be for all 'Lines' patterns (Bull/Bear Meeting/Separating Lines).
One that allows some leeway with the 'Matching Low' pattern (gives a small range the second candle close may be within instead of needing to match the previous close).
Three tolerances for On Neck/In Neck patterns (2 and 1 respectively).
A tolerance for the Thrusting pattern which gives a range, between the midpoint and the close of the first candle, that the second candle's close may fall within to be considered 'valid'.
A tolerance for the two Tweezers patterns that specifies how close the highs and lows of the patterns need to be to each other to be 'valid'.
The first On Neck tolerance specifies how large the lower wick of the first candle may be (as a % of that candle's range) before the pattern is invalidated. The second tolerance specifies how far up the lower wick to the close the second candle's close may be for this pattern. The third tolerance for the In Neck pattern determines how far into the body of the first candle the second may close to be 'valid'.
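And, as referenced in the 'Engulfing Setting' item above, a hedged sketch of the two comparison modes (the inclusive flag mirrors the boolean input):
bool inclusive = input.bool(true, "Inclusive Engulfing")
// BODY mode: the current body must cover the previous body.
float bodyHi = math.max(open, close)
float bodyLo = math.min(open, close)
float prevBodyHi = math.max(open[1], close[1])
float prevBodyLo = math.min(open[1], close[1])
bool engulfBody = inclusive ? bodyHi >= prevBodyHi and bodyLo <= prevBodyLo : bodyHi > prevBodyHi and bodyLo < prevBodyLo
// RANGE mode: the current high-low range must cover the previous range.
bool engulfRange = inclusive ? high >= high[1] and low <= low[1] : high > high[1] and low < low[1]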
For the remaining patterns (3, 4, and 5 candles), there are:
3 — Settings for the Deliberation pattern:
A boolean which forces the open of the third candle to gap above the close of the second.
A tolerance which changes the proximity of the third candle's open to the second candle's close in this pattern.
A tolerance that sets the maximum size the third candle may be compared to the average of the first two candles.
One boolean value for the Two Crows patterns (standard and Upside Gapping) that forces the first two candles in the patterns to completely gap if disabled (candle 1's close < candle 2's low).
10 — Floating point values for the remaining patterns:
One tolerance for defining how much the size of each candle in the Identical Black Crows pattern may deviate from the average of themselves to be considered valid.
One tolerance for setting how close the opens/closes of certain three candle patterns may be to each other's opens/closes.*
Three floating point values that affect the Three Stars in the South pattern.
One tolerance for the Side-by-Side patterns - looks at the second and third candle closes.
One tolerance for the Stick Sandwich pattern - looks at the first and third candle closes.
A floating value that sizes the Concealing Baby Swallow pattern's 3rd candle wick.
Two values for the Ladder Bottom pattern which define a range that the third candle's wick size may be.
* This affects the Three Black Crows (non-identical) and Three White Soldiers patterns, each require the opens and closes of every candle to be near each other.
The first tolerance of the Three Stars in the South pattern affects the first candle body's center position, and defines where it must be above to be considered valid. The second tolerance specifies how close the second candle must be to this same position, as well as the deviation the ratio the candle body to its range may be in comparison to the first candle. The third restricts how large the second candle range may be in comparison to the first (prevents this pattern from being recognized if the second candle is similar to the first but larger).
The last two floating point values define upper and lower limits to the wick size of a Ladder Bottom's fourth candle to be considered valid.
█ HOW TO USE
While there are many moving parts to this script, I attempted to set the default values with what I believed may help identify the most patterns within reasonable definitions. When this script is applied to a chart, the Candle Detection Mode (along with the BREAKOUT settings) and all candle switches must be confirmed before patterns are displayed. All switches are on by default, so this gives the user an opportunity to pick which patterns to identify first before playing around in the settings.
All of the settings/inputs described above are meant for experimentation. I encourage the user to tweak these values at will to find which set ups work best for whichever charts they decide to apply these patterns to.
Refer to the patterns themselves during experimentation. The statistic information provided on the tooltips of the patterns are meant to help guide input decisions. The breadth of candlestick theory is deep, and this was an attempt at capturing what I could in its sea of information.
█ LIMITATIONS
DISCLAIMER: While it may seem a bit paradoxical that this script aims to use past performance to potentially measure future results, past performance is not indicative of future results. Markets are highly adaptive and often unpredictable. This script is meant as an informational tool to show how patterns may behave. There is no guarantee that confidence intervals (or any other metric measured with this script) are accurate to the performance of patterns; caution must be exercised with all patterns identified regardless of how much information regarding prior performance is available.
Candlestick Theory - In the name, Candlestick Theory is a theory, and all theories come with their own limits. Some patterns identified by this script may be completely useless/unprofitable/unpredictable regardless of whatever combination of settings is used to identify them. However, if I truly believed this theory had no merit, this script would not exist. It is important to understand that this is a tool meant to be utilized with an array of others to procure positive (or negative, looking at you, short sellers) results when navigating the complex world of finance.
To address the functionality note, however: this script has an offset of 1 by default. Patterns will not be identified on the currently closing candle, only on the candle which has most recently closed. Attempting to have this script do both (offset by one or identify on close) led to more trouble than it was worth. I personally just want users to be aware that patterns will not be identified immediately when they appear.
Trend Direction - Moving Averages - There is a small quirk with how MA settings will be adjusted if the user inputs two moving averages of the same length when the "MA Setting" is set to 'BOTH'. If the Moving Averages have the same length, this script will default to only using MA 1, regardless of whether the types of the Moving Averages are different. I will experiment in the future to alleviate/reduce this restriction.
Price Analysis - BREAKOUT mode - Because pattern identification waits for a look-ahead confirmation, the percent returns for patterns that break out in either direction will be calculated on the same candle regardless of whether P/L Offset is set to 'FROM CONFIRMATION' or 'FROM APPEARANCE'. This same issue is present in the Hikkake Hunter script mentioned earlier. This does not mean the P/L calculations are incorrect; the offset for the calculation is set by the number of candles required to confirm the pattern if 'FROM APPEARANCE' is selected. It just means that these two different P/L calculations will complete at the same time independent of the setting that's been selected.
Adaptive Coloring/Hard Limiting - Hard Limiting is only used with Adaptive Coloring and has no effect outside of it. If Hard Limiting is used, it is recommended to increase the 'Positive' and 'Negative' return tolerance values as a pattern's bullish/bearishness may be disproportionately represented with the gradient generated under a hard limit.
TARGET MODE - This mode will break rules regarding patterns that are overridden on purpose. If a pattern selected in TARGET mode would have otherwise been absorbed by a larger pattern, it will have that pattern's percent return calculated; potentially leading to duplicate returns being included in the matrix of all returns recognized by this script.
'Tall' Candle Setting - This is a wide-reaching setting, as approximately 30 different patterns or so rely on defining 'Tall' candles. Changing how 'Tall' candles are defined whether by the tolerance value those candles need to exceed or by the values of the candle used for the baseline comparison (RANGE/BODY) can wildly affect how this script functions under certain conditions. Refer to the tooltip of these settings for more information on which specific patterns are affected by this.
Doji Settings - There are roughly 10 or so two to three candle patterns which have Dojis as a part of them. If all Dojis are disabled, it will prevent some of these larger patterns from being recognized. This is a dependency issue that I may address in the future.
'Engulfing' Setting - Functionally, the two 'Engulfing' settings are quite different. Because of this, the 'RANGE' setting may prevent certain patterns that would otherwise be valid under textbook and online references/definitions from being recognized as such (like the Upside Gap Two Crows or Three Outside Down).
█ PATTERN LIST
This script recognizes 85 patterns upon initial release. I am open to adding additional patterns to it in the future and any comments/suggestions are appreciated. It recognizes:
15 — 1 Candle Patterns
4 Hammer type patterns: Regular Hammer, Takuri Line, Shooting Star, and Hanging Man
9 Doji Candles: Regular Dojis, Northern/Southern Dojis, Gravestone/Dragonfly Dojis, Gapping Up/Down Dojis, and Long-Legged/Rickshaw Man Dojis
White/Black Long Days
32 — 2 Candle Patterns
4 Engulfing type patterns: Bullish/Bearish Engulfing and Last Engulfing Top/Bottom
Dark Cloud Cover
Bullish/Bearish Doji Star patterns
Hammer Inverted
Bullish/Bearish Haramis + Cross variants
Homing Pigeon
Bullish/Bearish Kicking
4 Lines type patterns: Bullish/Bearish Meeting/Separating Lines
Matching Low
On/In Neck patterns
Piercing pattern
Shooting Star (2 Lines)
Above/Below Stomach patterns
Thrusting
Tweezers Top/Bottom patterns
Two Black Gapping
Rising/Falling Window patterns
29 — 3 Candle Patterns
Bullish/Bearish Abandoned Baby patterns
Advance Block
Collapsing Doji Star
Deliberation
Upside/Downside Gap Three Methods patterns
Three Inside/Outside Up/Down patterns (4 total)
Bullish/Bearish Side-by-Side patterns
Morning/Evening Star patterns + Doji variants
Stick Sandwich
Downside/Upside Tasuki Gap patterns
Three Black Crows + Identical variation
Three White Soldiers
Three Stars in the South
Bullish/Bearish Tri-Star patterns
Two Crows + Upside Gap variant
Unique Three River Bottom
3 — 4 Candle Patterns
Concealing Baby Swallow
Bullish/Bearish Three Line Strike patterns
6 — 5 Candle Patterns
Bullish/Bearish Breakaway patterns
Ladder Bottom
Mat Hold
Rising/Falling Three Methods patterns
█ WORKS CITED
Because of the amount of time needed to complete this script, I am unable to provide exact dates for when some of these references were used. I will also not provide every single reference, as citing a reference for each individual pattern and the place it was reviewed would lead to a bibliography larger than this script and its description combined. There were five major resources I used when building this script: one book, two websites (covering patterns, moving averages, and various other articles of information), numerous scripts from TradingView's public library (including TradingView's own source code for *all* candle patterns), and Pine Script's reference manual.
Bulkowski, Thomas N. Encyclopedia of Candlestick Patterns. Hoboken, New Jersey: John Wiley & Sons, Inc., 2008. E-book (Google Books).
Various authors. Numerous webpages. CandleScanner. 2023. Online. Accessed 2020-2023.
Various authors. Numerous webpages. Investopedia. 2023. Online. Accessed 2020-2023.
█ ACKNOWLEDGEMENTS
I want to take the time here to thank all of my friends and family, both online and in real life, for the support they've given me over the last few years in this endeavor; my pets, who tried their hardest to keep me from completing it; and my work, for the grit to keep pushing through until this script's completion.
This belongs to me just as much as it does to anyone else. Whether you are an institutional trader, a gold bug hedging against the dollar, a retail ape who got in on a squeeze, or a parent trying to grow a retirement account and save for the kids: this belongs to everyone.
A private beta where new features can be tested can be found here.
Vires In Numeris
STD-Adaptive T3 [Loxx]
STD-Adaptive T3 is a standard deviation adaptive T3 moving average filter. This indicator acts more like a trend overlay indicator with gradient coloring.
What is the T3 moving average?
"Better Moving Averages" by Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis. Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD, Momentum, Relative Strength Index) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as an SMA (simple moving average) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA(n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA.
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
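As a concrete reference, here is a minimal Python sketch of ILRS and EPMA as described above. Seeding the constant of integration with the SMA of the first n points follows the text; the NaN padding and function names are my own implementation choices:

import numpy as np

def linreg_slope_endpoint(y):
    """Least-squares slope and endpoint of a straight line fitted to y."""
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    return slope, intercept + slope * t[-1]

def ilrs_epma(prices, n):
    """ILRS integrates the rolling regression slope, seeded with the SMA of
    the first n points; EPMA is the rolling regression line's endpoint."""
    p = np.asarray(prices, dtype=float)
    ilrs = np.full(len(p), np.nan)
    epma = np.full(len(p), np.nan)
    acc = p[:n].mean()                      # constant of integration
    for i in range(n - 1, len(p)):
        slope, end = linreg_slope_endpoint(p[i - n + 1 : i + 1])
        if i > n - 1:
            acc += slope                    # integrate the slope
        ilrs[i] = acc
        epma[i] = end
    return ilrs, epma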
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average, or DEMA, popularized by Patrick Mulloy in Technical Analysis of Stocks & Commodities (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha = 2/(n+1) and a lag of (n-1)/2.
Thus EMA(3) has lag 1, and EMA(11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA(3) through itself 5 times than if I just take EMA(11) once.
This suggests that, since EPMA and DEMA have zero or low lag, why not run fast versions (e.g. DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when they are run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA(n) = EMA(n) + EMA(time series - EMA(n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA. The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA(n) + EMA(time series - EMA(n))*.7
This is algebraically the same as:
EMA(n)*1.7 - EMA(EMA(n))*.7
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized DEMA") is:
GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v,
where v ranges between 0 and 1. When v=0, GD is just an EMA, and when v=1, GD is DEMA. In between, GD is a cooler DEMA. By using a value for v less than 1 (I like .7), we cure the multiple-DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD(GD(GD(n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
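For readers who want to experiment, here is a minimal Python sketch of the GD and T3 formulas exactly as given above; seeding the recursive EMA with the first value is an implementation choice:

import numpy as np

def ema(series, n):
    """EMA with smoothing constant alpha = 2/(n+1), seeded with the first value."""
    alpha = 2.0 / (n + 1)
    out = np.empty(len(series), dtype=float)
    out[0] = series[0]
    for i in range(1, len(series)):
        out[i] = alpha * series[i] + (1 - alpha) * out[i - 1]
    return out

def gd(series, n, v=0.7):
    """Generalized DEMA: GD(n, v) = EMA(n)*(1+v) - EMA(EMA(n))*v."""
    e = ema(series, n)
    return (1 + v) * e - v * ema(e, n)

def t3(series, n, v=0.7):
    """T3(n) = GD(GD(GD(n))): three GD passes for extra smoothness without overshoot."""
    series = np.asarray(series, dtype=float)
    return gd(gd(gd(series, n, v), n, v), n, v)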
Included
Bar coloring
Loxx's Expanded Source Types
Softmax Normalized T3 Histogram [Loxx]
Softmax Normalized T3 Histogram is a T3 moving average morphed into a normalized oscillator ranging from -1 to 1.
What is the Softmax function?
The softmax function, also known as softargmax or the normalized exponential function, converts a vector of K real numbers into a probability distribution over K possible outcomes. It is a generalization of the logistic function to multiple dimensions and is used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network, normalizing the network's output to a probability distribution over predicted output classes, based on Luce's choice axiom.
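As a quick illustration, a numerically stable softmax looks like the sketch below. Note that vanilla softmax yields values in [0, 1] that sum to 1, so mapping a T3 series onto the -1 to 1 range presumably involves an additional rescaling step that the description does not spell out:

import numpy as np

def softmax(x):
    """Numerically stable softmax: shift by the max before exponentiating."""
    x = np.asarray(x, dtype=float)
    z = np.exp(x - x.max())
    return z / z.sum()

# softmax([1, 2, 3]) -> array([0.09003057, 0.24472847, 0.66524096])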
What is the T3 moving average?
See the full explanation under STD-Adaptive T3 above ("Better Moving Averages" by Tim Tillson, November 1, 1998).
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
T3 Velocity Candles [Loxx]
T3 Velocity Candles is a candle coloring overlay that calculates its gradient coloring using T3 velocity.
What is the T3 moving average?
See the full explanation under STD-Adaptive T3 above ("Better Moving Averages" by Tim Tillson, November 1, 1998).
R-squared Adaptive T3 Ribbon Filled Simple [Loxx]
R-squared Adaptive T3 Ribbon Filled Simple is a T3 ribbon indicator built on a special, R-squared-adaptive implementation of T3.
What is the T3 moving average?
See the full explanation under STD-Adaptive T3 above ("Better Moving Averages" by Tim Tillson, November 1, 1998).
What is R-squared Adaptive?
One tool available for forecasting the trendiness of a breakout is the coefficient of determination (R-squared), a statistical measurement.
The R-squared indicates the linear strength between the security's price (the Y axis) and time (the X axis). The R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If the R-squared were 0.99, the linear regression would eliminate 99% of the prediction error relative to predicting closing prices using a simple moving average.
Here, R-squared is used to derive a T3 factor that modifies price before it is passed through the six-pole non-linear Kalman filter.
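A rolling R-squared of price against bar index can be computed as the square of the Pearson correlation, as in this minimal sketch; the trailing-window handling is an assumption, and the script's exact adaptation of the T3 factor is not reproduced here:

import numpy as np

def rolling_r_squared(prices, window):
    """R^2 of a linear fit of price on bar index over a trailing window."""
    p = np.asarray(prices, dtype=float)
    t = np.arange(window, dtype=float)
    out = np.full(len(p), np.nan)
    for i in range(window - 1, len(p)):
        y = p[i - window + 1 : i + 1]
        if y.std() == 0:
            out[i] = 0.0          # flat window: no linear strength
            continue
        r = np.corrcoef(t, y)[0, 1]
        out[i] = r * r
    return out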
Included:
Alerts
Signals
Loxx's Expanded Source Types
T3 Slope Variation [Loxx]
T3 Slope Variation is an indicator that uses the T3 moving average to calculate a slope, which is then weighted to derive a signal.
The center line
The center line changes color depending on the values of the following:
Slope
Signal line
Threshold
If the slope is above the signal line (which is not visible on the chart) and exceeds the required threshold, the main trend turns up; the reverse applies for a downtrend.
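Read literally, that rule can be sketched as below; the function name, the zero-centered threshold, and the symmetric treatment of the downtrend are my assumptions, since the script's exact weighting is not described here:

def trend_state(slope, signal, threshold):
    """Hypothetical reading of the rule above: up when the slope is above
    its (hidden) signal line and clears the threshold, down in the mirror
    case, otherwise unchanged/neutral."""
    if slope > signal and slope > threshold:
        return 1    # uptrend
    if slope < signal and slope < -threshold:
        return -1   # downtrend
    return 0        # neutral / no change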
Colors and style of the histogram
A histogram bar is drawn when three conditions line up: the value is on the appropriate side of zero, the trend described above agrees with it (green above zero, red below zero), and the high is higher than the previous high or the low is lower than the previous low. The corresponding style of histogram bar is then drawn.
What is the T3 moving average?
See the full explanation under STD-Adaptive T3 above ("Better Moving Averages" by Tim Tillson, November 1, 1998).
Included
Alerts
Signals
Bar coloring
Loxx's Expanded Source Types
Multi T3 Slopes [Loxx]
Multi T3 Slopes is an indicator that checks the slopes of five T3 moving averages of different periods and sums them to show the overall trend. To use it, watch for color changes: a histogram that is entirely green (no red) indicates an uptrend, one that is entirely red (no green) indicates a downtrend, and a mix of red and green is a sign of chop.
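Under that stated design, a sketch might sum the bar-to-bar slope signs of five T3 lines; the periods below and the use of sign counting are assumptions, and t3() refers to the sketch given earlier under STD-Adaptive T3:

import numpy as np

def multi_t3_slope(series, periods=(10, 20, 30, 40, 50), v=0.7):
    """Sum of one-bar slope signs across several T3 averages. A reading of
    +5 or -5 means all five lines agree on direction; mixed signs imply chop."""
    total = np.zeros(len(series))
    for n in periods:
        line = t3(series, n, v)             # t3() from the earlier sketch
        total[1:] += np.sign(np.diff(line))
    return total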
What is the T3 moving average?
See the full explanation under STD-Adaptive T3 above ("Better Moving Averages" by Tim Tillson, November 1, 1998).
Included
Signals: long, short, continuation long, continuation short.
Alerts
Bar coloring
Loxx's expanded source types
T3 Volatility Quality Index (VQI) w/ DSL & Pips Filtering [Loxx]
T3 Volatility Quality Index (VQI) w/ DSL & Pips Filtering is a VQI indicator that uses T3 smoothing and discontinued signal lines to determine breakouts and breakdowns. It also allows filtering by pips.***
What is the Volatility Quality Index (VQI)?
The idea behind the volatility quality index is to point out the difference between bad and good volatility in order to identify better trade opportunities in the market. The indicator works using the True Range algorithm in combination with the open, close, high, and low prices.
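Published VQI formulas vary; one commonly circulated variant, shown purely to illustrate the idea (not necessarily the exact formula this script uses), blends the close-to-close move scaled by true range with the open-to-close move scaled by the bar's range, then accumulates the result:

import numpy as np

def vqi(open_, high, low, close):
    """One common VQI variant (an assumption, not this script's exact code)."""
    o, h, l, c = (np.asarray(a, dtype=float) for a in (open_, high, low, close))
    out = np.zeros(len(c))
    q = 0.0
    for i in range(1, len(c)):
        tr = max(h[i], c[i - 1]) - min(l[i], c[i - 1])    # true range
        rng = h[i] - l[i]
        if tr > 0 and rng > 0:
            q = 0.5 * ((c[i] - c[i - 1]) / tr + (c[i] - o[i]) / rng)
        out[i] = out[i - 1] + abs(q) * (c[i] - c[i - 1])  # accumulate good/bad volatility
    return out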
What are DSL (Discontinued Signal Lines)?
Many indicators use signal lines to make it easier to determine the trend (or some other desired state of the indicator). The idea of a signal line is simple: by comparing a value to its smoothed (slightly lagging) state, one gets a read on the current momentum or state.
A discontinued signal line inherits that simple signal line idea and extends it: instead of one signal line, there are multiple lines, depending on the current value of the indicator.
The "signal" line is calculated the following way:
When a certain level is crossed in the desired direction, the EMA of that value is calculated for the desired signal line.
When that level is crossed in the opposite direction, the previous "signal" line value is simply inherited and becomes a kind of level.
In this way it becomes a combination of signal lines and levels, aiming to capture the best of both methods.
In simple terms, DSL takes the concept of a signal line and improves it by inheriting the previous signal line's value and treating it as a level.
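In code, the inherit-or-update behavior might look like the sketch below; the zero level default and the EMA length handling are assumptions for illustration:

def dsl_lines(values, n, level=0.0):
    """Discontinued signal lines: while the indicator is above the level the
    upper line is updated with an EMA of the value and the lower line is
    frozen (inherited); below the level the roles reverse."""
    alpha = 2.0 / (n + 1)
    up = dn = values[0]
    up_line, dn_line = [], []
    for v in values:
        if v > level:
            up += alpha * (v - up)   # refresh the upper line
        else:
            dn += alpha * (v - dn)   # refresh the lower line
        up_line.append(up)
        dn_line.append(dn)
    return up_line, dn_line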
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis . Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as a SMA ( simple moving average ) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA (n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA .
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e., use (ILRS(n)+EPMA(n))/2. This gives us a moving average (call it IE/2) which runs in between the two and has a phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
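To make the two benchmarks concrete, here is a minimal numpy sketch of ILRS, EPMA, and the IE/2 compromise. The function names and the synthetic price series are my own, for illustration; this is a reading of the definitions above, not anyone's production code.

```python
# ILRS, EPMA, and IE/2 from the definitions above (illustrative sketch).
import numpy as np

def linreg_slope_endpoint(window: np.ndarray):
    """Least-squares slope and endpoint of a line fitted to one window."""
    n = len(window)
    x = np.arange(n)
    slope, intercept = np.polyfit(x, window, 1)
    return slope, intercept + slope * (n - 1)

def ilrs(prices: np.ndarray, n: int) -> np.ndarray:
    """Integral of Linear Regression Slope: cumulative sum of the rolling
    LR slope, with the SMA of the first n points as the starting level."""
    out = np.full(len(prices), np.nan)
    level = prices[:n].mean()              # constant of integration
    out[n - 1] = level
    for i in range(n, len(prices)):
        slope, _ = linreg_slope_endpoint(prices[i - n + 1 : i + 1])
        level += slope                     # integrate the slope one bar
        out[i] = level
    return out

def epma(prices: np.ndarray, n: int) -> np.ndarray:
    """End Point Moving Average: endpoint of the rolling LR line."""
    out = np.full(len(prices), np.nan)
    for i in range(n - 1, len(prices)):
        _, end = linreg_slope_endpoint(prices[i - n + 1 : i + 1])
        out[i] = end
    return out

prices = 100.0 + np.cumsum(np.random.randn(300))   # synthetic series
ie2 = (ilrs(prices, 20) + epma(prices, 20)) / 2    # the IE/2 compromise
```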
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
L' = 2L - L(L)
This is the Double Exponential Moving Average or DEMA, popularized by Patrick Mulloy in TASC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
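Twicing is a one-liner once a basic EMA helper exists. A minimal sketch; the ema() helper below is my own, not a library call:

```python
# Tukey's "twicing" applied to an EMA yields DEMA: L' = 2L - L(L).
import numpy as np

def ema(x: np.ndarray, n: int) -> np.ndarray:
    alpha = 2.0 / (n + 1)                  # standard EMA smoothing constant
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def dema(x: np.ndarray, n: int) -> np.ndarray:
    e = ema(x, n)
    return 2 * e - ema(e, n)               # algebraically 2L - L(L)
```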
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha = 2/(n+1) and a lag of (n-1)/2.
Thus EMA(3) has lag 1, and EMA(11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average by running EMA(3) through itself 5 times than by taking EMA(11) once.
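A quick numerical check of this claim, continuing the sketch above (reuses ema() and prices):

```python
# Both paths have roughly 5 bars of lag on a linear trend, but the
# five-fold EMA(3) comes out visibly smoother than one EMA(11).
smooth_5x = prices.copy()
for _ in range(5):
    smooth_5x = ema(smooth_5x, 3)          # five passes, lag 5 * (3-1)/2 = 5
single = ema(prices, 11)                   # one pass, lag (11-1)/2 = 5
# On most runs: np.nanstd(np.diff(smooth_5x)) < np.nanstd(np.diff(single))
```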
This suggests that if EPMA and DEMA have zero or low lag, why not run fast versions (e.g., DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain much greater than 1 at those frequencies when they are run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA(n) = EMA(n) + EMA(time series - EMA(n))
The second term is, in effect, adding a smooth version of the derivative to the EMA to achieve DEMA. The derivative term determines how hot the moving average's response to linear trends will be. We simply need to turn down the volume to arrive at our basic building block:
EMA(n) + EMA(time series - EMA(n))*0.7
This is algebraically the same as:
EMA(n)*1.7 - EMA(EMA(n))*0.7
I have chosen 0.7 as my volume factor, but the general formula (which I call "Generalized DEMA") is:
GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v,
where v ranges between 0 and 1. When v=0, GD is just an EMA, and when v=1, GD is DEMA. In between, GD is a cooler DEMA. By using a value of v less than 1 (I like 0.7), we cure the multiple-DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD(GD(GD(n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In technical analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
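Putting the pieces together, GD and T3 transcribe directly from the formulas above (reusing the ema() helper from the earlier sketch):

```python
# Generalized DEMA and T3, as defined above; v=0.7 is Tillson's choice.
def gd(x, n, v=0.7):
    e = ema(x, n)
    return e * (1 + v) - ema(e, n) * v      # EMA(n)*(1+v) - EMA(EMA(n))*v

def t3(x, n, v=0.7):
    return gd(gd(gd(x, n, v), n, v), n, v)  # three GD passes = six EMA poles
```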
Included
Signals
Alerts
Related indicators
Zero-line Volatility Quality Index (VQI)
Volatility Quality Index w/ Pips Filtering
Variety Moving Average Waddah Attar Explosion (WAE)
***This indicator is tuned to Forex. If you want to make it useful for other tickers, you must change the pip-filtering value to match the asset. For BTC, for example, you will likely need a pips-filter value of 10,000 or more.
T3 PPO [Loxx]
T3 PPO is a percentage price oscillator that uses the T3 moving average. It is used to spot reversals: dark red marks upward price exhaustion, dark green marks downward price exhaustion.
What is Percentage Price Oscillator (PPO)?
The percentage price oscillator (PPO) is a technical momentum indicator that shows the relationship between two moving averages in percentage terms. The moving averages are a 26-period and 12-period exponential moving average (EMA).
The PPO is used to compare asset performance and volatility, spot divergence that could lead to price reversals, generate trade signals, and help confirm trend direction.
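As a sketch of how the two fit together, here is a plain PPO with the T3 from the earlier sketch swapped in for the EMA. The exact lengths and smoothing in the [Loxx] script may differ; this only illustrates the structure:

```python
# Percentage price oscillator: percent spread between a fast and slow MA.
def ppo(x, fast=12, slow=26, ma=ema):
    f, s = ma(x, fast), ma(x, slow)
    return 100.0 * (f - s) / s

classic = ppo(prices)                      # standard EMA-based PPO
t3_based = ppo(prices, ma=t3)              # T3-based PPO, as in this indicator
```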
What is the T3 moving average?
See "Better Moving Averages" by Tim Tillson (November 1, 1998), reproduced in full earlier in this document.
VHF-Adaptive T3 iTrend [Loxx]
VHF-Adaptive T3 iTrend is an iTrend indicator with T3 smoothing and a Vertical Horizontal Filter adaptive-period input. iTrend is used to determine where a trend starts and ends. You'll notice that the noise filtering on this one is aggressive, so adjust the period inputs to suit your taste and your backtest requirements. It is also useful for scalping lower timeframes. Enjoy!
What is VHF Adaptive Period?
Vertical Horizontal Filter (VHF) was created by Adam White to identify trending and ranging markets. VHF measures the level of trend activity, similar to ADX DI. Vertical Horizontal Filter does not, itself, generate trading signals, but determines whether signals are taken from trend or momentum indicators. Using this trend information, one is then able to derive an average cycle length.
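A sketch of the usual VHF formulation, assuming closing prices (some variants use highs and lows); how the script maps VHF to an adaptive period is not spelled out here, so that mapping is omitted:

```python
# Vertical Horizontal Filter: vertical range over horizontal travel.
import numpy as np

def vhf(close: np.ndarray, n: int) -> np.ndarray:
    out = np.full(len(close), np.nan)
    change = np.abs(np.diff(close, prepend=close[0]))
    for i in range(n - 1, len(close)):
        w = close[i - n + 1 : i + 1]
        denom = change[i - n + 1 : i + 1].sum()   # total bar-to-bar travel
        out[i] = (w.max() - w.min()) / denom if denom else 0.0
    return out
# Higher VHF = more trending; the adaptive version shortens or lengthens
# the T3 period from this reading.
```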
What is the T3 moving average?
See "Better Moving Averages" by Tim Tillson (November 1, 1998), reproduced in full earlier in this document.
Included
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types
STD-Adaptive T3 Channel w/ Ehlers Swiss Army Knife Mod. [Loxx]
STD-Adaptive T3 Channel w/ Ehlers Swiss Army Knife Mod. is an adaptive T3 indicator that uses standard-deviation adaptivity and the Ehlers Swiss Army Knife indicator to adjust the alpha value of the T3 calculation. This helps identify trends and reduce noise. In addition, a Keltner Channel is included to show reversal/exhaustion zones.
What is the Swiss Army Knife Indicator?
John Ehlers explains the calculation here: www.mesasoftware.com
What is the T3 moving average?
See "Better Moving Averages" by Tim Tillson (November 1, 1998), reproduced in full earlier in this document.
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
T3 Velocity [Loxx]
T3 Velocity is a simple velocity indicator built on the T3 moving average, using gradient colors to better identify trends.
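The description gives no formula, so as an assumption I read "velocity" as the bar-to-bar change of the T3 line. A sketch under that reading, reusing t3() from the earlier sketch:

```python
import numpy as np

def t3_velocity(x, n, v=0.7):
    line = t3(x, n, v)                     # t3() from the earlier sketch
    return np.diff(line, prepend=line[0])  # positive = rising T3, negative = falling
```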
What is the T3 moving average?
See "Better Moving Averages" by Tim Tillson (November 1, 1998), reproduced in full earlier in this document.
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
R-squared Adaptive T3 w/ DSL [Loxx]
R-squared Adaptive T3 w/ DSL is the R-squared Adaptive T3 indicator (described below) with Discontinued Signal Lines added to reduce noise and thereby increase signal accuracy. This adaptation makes the indicator friendlier for lower-timeframe scalping.
What is R-squared Adaptive?
One tool available in forecasting the trendiness of a breakout is the coefficient of determination (R-squared), a statistical measurement.
The R-squared indicates the linear strength between the security's price (the Y-axis) and time (the X-axis). The R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If the R-squared were 0.99, then the linear regression would eliminate 99% of the error for prediction versus predicting closing prices using a simple moving average.
R-squared is used here to derive a T3 factor used to modify price before passing it through a six-pole non-linear Kalman filter.
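The exact mapping from R-squared to the T3 factor is mladen's and is not reproduced here. One plausible reading, sketched with assumed bounds, lets a rolling R-squared scale the T3 volume factor v so the filter trusts trends more when the linear fit is good (reuses prices from the earlier sketch):

```python
# Rolling R-squared of price against time, used to adapt the T3 factor.
import numpy as np

def rolling_r2(x: np.ndarray, n: int) -> np.ndarray:
    out = np.full(len(x), np.nan)
    t = np.arange(n)
    for i in range(n - 1, len(x)):
        r = np.corrcoef(t, x[i - n + 1 : i + 1])[0, 1]
        out[i] = r * r
    return out

r2 = rolling_r2(prices, 20)
v_adaptive = 0.3 + 0.7 * np.nan_to_num(r2)   # v in [0.3, 1.0], assumed bounds
# Each bar's v would then be fed into the GD/T3 recursion for that bar.
```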
What is the T3 moving average?
See "Better Moving Averages" by Tim Tillson (November 1, 1998), reproduced in full earlier in this document.
Included:
Bar coloring
Signals
Alerts
EMA and FEMA Signal/DSL smoothing
Loxx's Expanded Source Types
Pips-Stepped, R-squared Adaptive T3 [Loxx]
Pips-Stepped, R-squared Adaptive T3 is a T3 moving average with optional adaptivity, trend following, and pip-stepping. It also offers optional flat coloring to flag choppy zones. The adaptivity is R-squared based, and the indicator is experimental.
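The description does not define pip-stepping precisely; I read it as quantizing the T3 output to whole pip increments, so the plotted line only steps once price has moved at least a pip. A sketch under that assumption (the pip size is an assumption; reuses t3() and prices from the earlier sketches):

```python
# Quantize the T3 line to pip increments to suppress sub-pip wiggle.
import numpy as np

def pip_step(line: np.ndarray, pip: float = 0.0001) -> np.ndarray:
    # pip=0.0001 suits a 4-decimal FX pair; adjust per asset.
    return np.round(line / pip) * pip

stepped = pip_step(t3(prices, 20))
```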
What is the T3 moving average?
See "Better Moving Averages" by Tim Tillson (November 1, 1998), reproduced in full earlier in this document.
What is R-squared Adaptive?
See the R-squared explanation under "R-squared Adaptive T3 w/ DSL" above.
Included:
Bar coloring
Signals
Alerts
Flat coloring
R-squared Adaptive T3 [Loxx]
R-squared Adaptive T3 is an R-squared adaptive version of Tillson's T3 moving average. The adaptivity was originally proposed by mladen on various forex forums. It is considered experimental but shows how to apply R-squared adaptive methods to moving averages. In filter terms, the T3 is a six-pole non-linear Kalman filter.
What is the T3 moving average?
Better Moving Averages, by Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis. Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD, Momentum, Relative Strength Index) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
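As a small illustration of the differentiator view, n-period Momentum is just a discrete first derivative of the price series; this is a minimal Python sketch, and the function name is mine:

import numpy as np

def momentum(prices, n):
    # Discrete first derivative over n periods: price minus the price
    # n bars ago. This passes fast changes and suppresses slow ones,
    # i.e., it acts as a differentiator rather than a lowpass filter.
    prices = np.asarray(prices, dtype=float)
    return prices[n:] - prices[:-n]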
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as an SMA (simple moving average) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA(n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA.
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e., use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2; a code sketch of all three follows below.
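To make the benchmarks concrete, here is a minimal Python sketch of ILRS, EPMA, and IE/2, a straightforward reading of the definitions above rather than Tillson's original code:

import numpy as np

def ilrs_epma_ie2(prices, n):
    # Fit y = a + b*x to each length-n window; integrate the slopes
    # (seeded with the SMA of the first n points) for ILRS, and take
    # the regression line's endpoint for EPMA. IE/2 is their average.
    prices = np.asarray(prices, dtype=float)
    x = np.arange(n)
    ilrs = [np.mean(prices[:n])]
    epma = []
    for i in range(n, len(prices) + 1):
        window = prices[i - n:i]
        b, a = np.polyfit(x, window, 1)      # slope, intercept
        ilrs.append(ilrs[-1] + b)            # integral of the slope
        epma.append(a + b * (n - 1))         # endpoint of the line
    ilrs = np.array(ilrs[1:])                # drop the seed to align
    epma = np.array(epma)
    return ilrs, epma, (ilrs + epma) / 2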
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA, popularized by Patrick Mulloy in Technical Analysis of Stocks & Commodities (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
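Here is a minimal Python sketch of twicing; the helper names are mine, and the ema function below is reused in the later sketches:

import numpy as np

def ema(series, n):
    # Standard n-day EMA with smoothing constant alpha = 2/(n+1).
    alpha = 2.0 / (n + 1)
    out = np.empty(len(series), dtype=float)
    out[0] = series[0]
    for i in range(1, len(series)):
        out[i] = alpha * series[i] + (1 - alpha) * out[i - 1]
    return out

def dema(series, n):
    # Twicing: L' = L(x) + L(x - L(x)), algebraically 2*EMA - EMA(EMA).
    e = ema(series, n)
    return 2 * e - ema(e, n)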
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA(3) has lag 1, and EMA(11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA(3) through itself 5 times than if I just take EMA(11) once.
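Spelling out the arithmetic: EMA(3) has lag (3-1)/2 = 1, so five passes accumulate roughly 5 days of lag, matching a single EMA(11) pass with lag (11-1)/2 = 5. A quick comparison, reusing the ema helper from the previous sketch:

def ema_repeated(series, n, passes):
    # Run an EMA through itself `passes` times; lag adds up per pass,
    # but smoothness improves faster than with one longer EMA.
    out = series
    for _ in range(passes):
        out = ema(out, n)
    return out

# Given `prices` as a 1-D array of closes:
# ema_repeated(prices, 3, 5) carries ~5 days of lag but is smoother
# than ema(prices, 11), which has exactly 5 days of lag.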
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g., DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA(n) = EMA(n) + EMA(time series - EMA(n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA. The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA(n) + EMA(time series - EMA(n))*.7;
This is algebraically the same as:
EMA(n)*1.7-EMA(EMA(n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized DEMA") is:
GD(n,v) = EMA(n)*(1+v)-EMA(EMA(n))*v,
where v ranges between 0 and 1. When v=0, GD is just an EMA, and when v=1, GD is DEMA. In between, GD is a cooler DEMA. By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD(GD(GD(n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
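Putting the pieces together, here is a minimal Python sketch of GD and T3 exactly as defined above, reusing the ema helper from the earlier sketch and defaulting v to Tillson's preferred .7:

def gd(series, n, v=0.7):
    # Generalized DEMA: EMA(n)*(1+v) - EMA(EMA(n))*v.
    e = ema(series, n)
    return (1 + v) * e - v * ema(e, n)

def t3(series, n, v=0.7):
    # T3: GD run through itself three times (six EMA poles in total).
    return gd(gd(gd(series, n, v), n, v), n, v)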
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types