Monetary Policy Regimes in Macroeconomic Data:

an Application of Fractal Analysis

 

 

Robert F. Mulligan, Ph.D.

Department of Business Computer Information Systems & Economics

Western Carolina University

College of Business

Cullowhee, North Carolina 28723

Phone: 828-227-3329

Fax: 828-227-7414

Email: mulligan@wcu.edu

 

 

Roger Koppl

Fairleigh Dickinson University

 

 

Acknowledgements

Robert F. Mulligan is associate professor of economics in the Department of Business Computer Information Systems and Economics at Western Carolina University College of Business and a research associate of the State University of New York at Binghamton.  Roger Koppl is professor of economics at Fairleigh Dickinson University.  Financial support in the form of a Visiting Research Fellowship from the American Institute for Economic Research is gratefully acknowledged.  The authors remain responsible for any errors or omissions.

 

Abstract

 

This paper examines macromonetary data for behavioral stability over Alan Greenspan's tenure as chairman of the Federal Reserve System.  Five self-affine fractal analysis techniques for estimating the Hurst exponent, Mandelbrot-Lévy characteristic exponent, and fractal dimension are employed to explore the data's fractal properties: rescaled-range analysis, power-spectral density analysis, roughness-length analysis, the variogram or structure-function method, and wavelet analysis.  Formal hypothesis tests provide evidence of a change in monetary policy behavior between the 1989-1996 and 1997-2003 periods.  This change is manifested both in the behavior and distribution of the month-to-month changes in the monetary aggregates, ratios, and multipliers, and in the behavior and distribution of macroeconomic data.  Strong evidence is presented that U.S. monetary policy became actively interventionist after December 1996, and that the effectiveness of the Federal Reserve System was lower than in the earlier period.

 

Introduction

This paper examines the distribution of changes in a vector of macromonetary data.  Statistical tests based on five alternative methods for estimating the Hurst (1951) exponent, fractal dimension, and Mandelbrot-Lévy characteristic exponent (Lévy 1925) are used.  Analysis of the findings reveals a sharp change in U.S. monetary policy starting in December 1996, announced by Chairman Alan Greenspan's now-famous "irrational exuberance" speech.

The paper is organized as follows.  A literature review is provided in the second section.  The data are documented in the third section.  Methodology and results are presented in the fourth and fifth sections.  Conclusions are provided in the sixth section.  A glossary and appendix are provided to assist the reader with the specialized statistical terminology used in this paper.

Mandelbrot's (1972a, 1975, 1977) and Mandelbrot and Wallis's (1969) R/S or rescaled range analysis characterizes time series as one of four types: 1.) dependent or autocorrelated series, 2.) persistent, trend-reinforcing series, also called biased random walks, random walks with drift, or fractional Brownian motion, 3.) random walks, or 4.) anti-persistent, ergodic, or mean-reverting series.  Mandelbrot-Lévy distributions are a general class of probability distributions derived from the generalized central limit theorem, and include the normal or Gaussian and Cauchy as limiting cases (Lévy 1925; Gnedenko and Kolmogorov 1954).  They are also referred to as stable, Lévy-stable, L-stable, stable-Paretian, and Pareto-Lévy.  Samuelson (1982) popularized the term Mandelbrot-Lévy, but Mandelbrot avoids this expression, perhaps out of modesty, and the other terms remain current.  The reciprocal of the Mandelbrot-Lévy characteristic exponent alpha is the Hurst exponent H, and estimates of H indicate the probability distribution underlying a time series.  H = 1/alpha = 1/2 for normally-distributed or Gaussian processes.  H = 1 for Cauchy-distributed processes.  H = 2 for the Lévy distribution governing tosses of a fair coin.  H is also related to the fractal dimension D by the relationship D = 2 - H.  Series with different fractal statistics exhibit different properties as described in Table 1.  Fractal analysis applies only to stationary time series, so non-stationary series must be differenced or rendered stationary by some other means.
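The relationships just stated among H, alpha, and D, and the classification of stationary series by H, can be summarized in a minimal sketch (the function name and category labels are illustrative, not from the source):

```python
def classify(H):
    """Classify a stationary series by its Hurst exponent H, using the
    relationships stated in the text: alpha = 1/H and D = 2 - H.
    The range labels apply to 0 < H < 1."""
    alpha = 1.0 / H        # Mandelbrot-Levy characteristic exponent
    D = 2.0 - H            # fractal dimension
    if H < 0.5:
        kind = "antipersistent (ergodic, mean-reverting)"
    elif H == 0.5:
        kind = "random walk (Gaussian)"
    else:
        kind = "persistent (trend-reinforcing, black noise)"
    return alpha, D, kind
```

For example, classify(0.5) returns alpha = 2 and D = 1.5, the Gaussian case described above.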

<<Table 1 about here>>

 

Literature review

The search for long memory in capital markets has been a fixture in the literature applying fractal geometry and chaos theory to economics since Mandelbrot (1963b) shifted his attention from income distribution to speculative prices.  Fractal analysis has been applied extensively to equities (Greene and Fielitz 1977; Lo 1991; Barkoulas and Baum 1996; Peters 1994, 1996; Koppl et al 1997; Kraemer and Runde 1997; Barkoulas and Travlos 1998; Koppl and Nardone 2001; Mulligan 2004; and Mulligan and Lombardo 2003), interest rates (Duan and Jacobs 1996; and Barkoulas and Baum 1997a, 1997b), commodities (Barkoulas, Baum, and Oguz 1998), exchange rates (Cheung 1993; Byers and Peel 1996; Koppl and Yeager 1996; Barkoulas and Baum 1997c; Chou and Shih 1997; Andersen and Bollerslev 1997; Koppl and Broussard 1999; and Mulligan 2000a), and derivatives (Fang, Lai, and Lai 1994; Barkoulas, Labys, and Onochie 1997; and Corazza, Malliaris, and Nardelli 1997).  Fractal analysis has also been applied to income distribution (Mandelbrot 1963a) and macroeconomic data (Peters 1994, 1996).

Gilanshah and Koppl (2001) advance the thesis that postwar money demand and monetary policy behavior were mostly stable from 1945-1970, but that instability emerged during the seventies as the Federal Reserve System adopted more activist policies and procedures.  The present study contrasts the earlier and later years of Alan Greenspan’s tenure as Chairman for evidence of a switch from non-discretionary, non-activist monetary policy to more discretionary, more activist behavior.  If the Federal Reserve System switched from being a passive to an active market player after December 1996, the influence of this one “big player” would be to reduce the stability of money demand, as the many smaller players attempt to react to, as well as anticipate, big player moves.  The smaller players’ behavior should exhibit herding if it is difficult to anticipate or observe big player behavior, or if that behavior changes abruptly at the big player’s discretion, and if it is relatively easy to observe the behavior of other small players.  If the Federal Reserve System is a big player exercising discretion rather than following rules, the many little players would not appear to follow any coherent behavior, even if they developed and followed consistent and rational strategic responses: because the big player acts unpredictably, the little players’ behavior seems incoherent even when it follows set rules.  If this reading is correct, the instability in money demand is not a statistical artifact of specification error, and cannot be removed by adding variables to conventional money demand models.

Big players induce herding in money demand.  Gilanshah and Koppl (2001) found that Federal Reserve System policy grew more discretionary after 1970, and that the increase in big player influence reduced the stability of money demand.  As the Federal Reserve System began to adopt more activist policy measures during the 1970s, estimates generated by standard money demand specifications began to show sizable prediction errors.  If activist monetary policy does indeed impose instability, this implies that the Federal Reserve System should abandon discretion and pursue money supply targets according to fixed rules.  This implication runs counter to a prevailing inference presented in the literature on money demand instability.  Mishkin’s (1995:572) view is representative: “because the money demand function has become unstable, velocity is now harder to predict, and setting rigid money supply targets in order to control aggregate spending in the economy may not be an effective way to conduct monetary policy.”  But as Gilanshah and Koppl (2001) argue, since the money demand instability results from Federal Reserve activism, the situation calls for less discretion, not more.  In their view, one mechanism introducing herding or bandwagon effects in money demand is cash managers’ attempts to enhance their reputations, which enhances their job security and earning potential.  Cash managers seek to enhance their reputations in a manner similar to, and for the same reasons as, portfolio managers (Scharfstein and Stein 1990).  Cash managers achieve and maintain reputation through conformity with industry practice, a global criterion, and through conduct appropriate to the unique circumstances of their business enterprise, a local criterion.  Pursuit of the global criterion imposes herding behavior or bandwagon effects.  If cash managers act as others do and things go well, their reputation is assured.  If they act as others do and things go badly, the blame is shared throughout the profession.  If cash managers defy prevailing practice in their profession and things go badly, their reputation is ruined.  Scharfstein and Stein (1990:466) call this incentive to imitate standard practices the “sharing-the-blame effect.”

If, however, cash managers defy prevailing practice and things go well, their reputation is strongly enhanced and they enjoy improved income prospects and job security.  This is a powerful counter-incentive to herding.  Not all cash managers are constitutionally capable of acting independently of their peers, and some may require the security of the herd.  Some cash managers will herd; others will not.  Big player conduct affects the fraction that herds.  Activist monetary policy impairs the value of local information which could be exploited by the more independent cash managers.  Thus discretionary conduct by the monetary authorities promotes herding and introduces more volatility into macromonetary data.

 

<<Table 2 about here>>

Data

The data are monthly-observed monetary aggregates, ratios, and multipliers over the 1987-2003 range.  Macroeconomic data, specifically output measures and interest rates, are also examined over the same period to determine if their behavior appears significantly driven by the monetary data.

GMB is the logarithmic first difference of the monetary base. 

GIIP is the first difference of the index of industrial production. 

GC is the logarithmic first difference of real consumable output, which in turn is 100 times personal consumption expenditures divided by its deflator. 

GP is the first difference of the personal consumption expenditures deflator.

GMM3 is the logarithmic first difference of the M3 monetary multiplier. 

GERR is the first difference of the effective reserve requirement. 

GCDD is the first difference of the currency-to-demand-deposit ratio. 

GTDD is the first difference of the time-deposit-to-demand-deposit ratio. 

GEDD is the first difference of the excess-reserve-to-demand-deposit ratio. 

GI10Y is the first difference of the ten-year constant maturity government security interest rate. 

GI3MO is the first difference of the three-month secondary market treasury bill interest rate. 

GR is the first difference of the term spread, the ten-year constant maturity rate minus the three-month secondary market rate.

Time series which were already represented as interest rates, percentages, or ratios, were simply first differenced without taking logarithms.  Table 2 presents descriptive statistics for the differenced series.
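The two transformations just described can be sketched in a few lines (numpy; function names are illustrative):

```python
import numpy as np

def log_first_diff(x):
    """Logarithmic first difference, used for level series such as the
    monetary base (GMB) or the M3 multiplier (GMM3); one observation
    is lost."""
    x = np.asarray(x, dtype=float)
    return np.diff(np.log(x))

def first_diff(x):
    """Plain first difference, used for series already expressed as
    interest rates, percentages, or ratios (e.g., GI10Y, GERR)."""
    return np.diff(np.asarray(x, dtype=float))
```

Applied to a monthly level series of length n, either function returns a differenced series of length n - 1.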

 

Methodology

Long memory series exhibit non-periodic long cycles, or persistent dependence between observations far apart in time; i.e., observable patterns which tend to repeat.   Long memory or persistent series tend to reverse themselves less often than a purely random series.  Thus, they display a trend, and are also called black noise, in contrast to purely random white noise.  Persistent series have long memory in that events are correlated over long time periods.  In contrast, short-term dependent time series include standard autoregressive moving average and Markov processes, and have the property that observations far apart exhibit little or no statistical dependence.  R/S or rescaled range analysis distinguishes random from non-random or deterministic series.  The rescaled range is the range divided (rescaled) by the standard deviation.  Seemingly random time series may be deterministic chaos, fractional Brownian motion (FBM), or a mixture of random and non-random components. 

Conventional statistical techniques lack power to distinguish unpredictable random components from highly predictable deterministic components.  R/S analysis evolved to address this difficulty.  R/S analysis exploits the structure of dependence in time series irrespective of their marginal distributions, statistically identifying non-periodic cyclic long-run dependence as distinguished from short-term dependence or Markov character and periodic variation (Mandelbrot 1972a: 259-260).  The difference between long-memory processes, also called non-periodic long cycles, and short-term dependence is that each observation in a long memory process has a persistent effect, on average, on all subsequent observations, up to some horizon after which memory is lost.  In contrast, short-term dependent processes display little or no memory of the past, and what short-term dependence can be observed often diminishes with the square of the time elapsed.  For equity prices, long memory can be observed when a stock follows a trend or repeats a cyclical movement, even though the cycles can have time-varying frequencies.  Short-term dependence is indicated when there are no observable trends or patterns beyond a very short time span, and the impact of any outliers or extreme values diminishes rapidly over time.

Mandelbrot (1963a, 1963b) demonstrated that all stationary series can be categorized in accordance with their Hurst exponent H.  The Hurst exponent was introduced in the hydrological study of the Nile valley and is the reciprocal of the characteristic exponent alpha (Hurst 1951).  Some series are persistent or black noise processes with (0.50 < H < 1.00).  These less noisy series exhibit clearer trends and more persistence the closer H is to one.  However, Hs very close to one indicate high risk of large, abrupt changes, e.g., H = 1.00 for the Cauchy distribution, the basis for the characteristic exponent test.  This study estimates the Hurst exponent for each series over the whole sample period by five alternative techniques, then tests for Gaussian character, and finally tests for stability of the Hurst exponent over two subsamples by R/S to determine whether the behavior of the data processes changed during the period studied.
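A minimal numpy sketch of classical R/S estimation follows; the study applies R/S to AR1 residuals, and the window sizes here are illustrative.  The rescaled range of a window is the range of cumulative deviations from the window mean, divided by the window standard deviation, and H is the slope of log mean R/S on log window size:

```python
import numpy as np

def rescaled_range(window):
    """R/S of one window: range of cumulative deviations from the
    window mean, rescaled (divided) by the window standard deviation."""
    w = np.asarray(window, dtype=float)
    dev = np.cumsum(w - w.mean())
    return (dev.max() - dev.min()) / w.std()

def hurst_rs(x, window_sizes):
    """Estimate H as the slope of log mean(R/S) on log window size."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = [rescaled_range(x[i:i + n])
              for i in range(0, len(x) - n + 1, n)]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope
```

For Gaussian white noise the estimate should fall near 0.50, subject to a well-known small-sample upward bias.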

 

Results

Many macromonetary series are anti-persistent or ergodic, mean-reverting, or pink noise processes with (0.00 < H < 0.50), indicating they are more volatile than a random walk.  Pink noise processes are used to model dynamic turbulence.  Ergodic or antipersistent processes reverse themselves more often than purely random series.  Ergodicity, that is, H significantly below 0.50, indicates that policy makers persistently over-react to new information, imposing more macroeconomic volatility than would obtain in the absence of policy, and never learn not to over-react.  This observed phenomenon is directly analogous to Mussa's (1984) disequilibrium overshooting, in which the market process of adjustment toward final equilibrium is unstable, and never quiets down.  Hs significantly different from 0.50 demonstrate these macroeconomic data series are not random walks.

<<Table 3 about here>>

This section discusses and interprets the results of the five alternative fractal analysis methods for measuring the Hurst exponent H, presented in Table 3.  All data are converted to first differences, losing one observation.  Standard errors are given in parentheses.  H is estimated first over the 1989-2003 sample (whole range); the sample is then split between 1996 and 1997, giving separate estimates of H for the 1989-1996 (early range) and 1997-2003 (late range) periods.  Because of the number of observations in the two subsamples, estimates of H could only be obtained by the R/S and wavelet methods, and since the wavelet method does not provide a standard error, only the R/S measures of H could be used for hypothesis tests of structural stability.  Mandelbrot, Fisher, and Calvet (1997) refer to H as the self-affinity index or scaling exponent.

Five techniques for estimating the Hurst exponent are reported in this paper:  1.) Mandelbrot's (1972a) AR1 rescaled-range or R/S analysis; 2.) power spectral-density analysis; 3.) roughness-length relationship analysis; 4.) variogram analysis; and 5.) wavelet analysis: 

1.)  Rescaled-range or R/S analysis:  R/S analysis is the traditional technique introduced by Mandelbrot (1972a).  Hs estimated by this method are generally far from 0.50, suggesting non-Gaussian processes.  The difference between estimated Hs and 0.50 is statistically significant over the whole sample range and both subsamples for each series examined.  Hs are always below 0.50, indicating ergodicity or antipersistence, i.e., negative serial correlation, meaning the data processes persistently overcorrect.  This measurable antipersistence or ergodicity demonstrates policy makers habitually overreact to new information, and never learn not to.

Hs different from 0.50 demonstrate the data series have not been random walks; nevertheless, this finding may be due to short-term dependence still present after taking AR1 residuals, or to systematic bias due to information asymmetries, or both.
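The AR1 pre-whitening step mentioned above can be sketched as follows (an assumed implementation: ordinary least squares of each observation on its lag, keeping the residuals):

```python
import numpy as np

def ar1_residuals(x):
    """Remove first-order autocorrelation before R/S analysis:
    regress x[t] on x[t-1] and keep the residuals (one observation
    is lost)."""
    x = np.asarray(x, dtype=float)
    slope, intercept = np.polyfit(x[:-1], x[1:], 1)
    return x[1:] - (intercept + slope * x[:-1])
```

An exact AR1 path with no innovations yields residuals that are numerically zero, which is a quick sanity check on the regression.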

2.)  Power spectral density analysis:  Power spectral density analysis could only obtain estimates of H for the whole 1989-2003 sample period.  Hs estimated by this technique also fall in the antipersistent range (H < 0.50), except for the consumption price deflator (GP) and the excess-reserve-to-demand-deposit ratio (GEDD).  Note these results often flatly contradict those provided by other techniques.  Spectral density analysis often provides very large standard errors for H, and thus formal hypothesis tests are generally biased against rejecting the null.  However, the standard errors of the Hs of GP and GEDD are quite low, supporting the conclusion that they are normally-distributed, white-noise processes.
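The paper does not specify its spectral estimator; one common sketch, under the assumed convention that a stationary fractional-noise series has spectrum S(f) ~ f**(-beta) with H = (beta + 1)/2, regresses the log periodogram on log frequency:

```python
import numpy as np

def hurst_psd(x):
    """Estimate H from the log-log slope of the raw periodogram,
    assuming S(f) ~ f**(-beta) and the convention H = (beta + 1)/2."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x))[1:]        # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2    # raw periodogram
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return (-slope + 1) / 2
```

For white noise the spectrum is flat (beta near 0), so the estimate should fall near 0.50; the wide dispersion of log-periodogram ordinates is one source of the large standard errors noted above.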

3.)  Roughness-length relationship method:  Formal hypothesis tests reject the Gaussian null for all series, and all Hs are significantly less than 0.50, indicating antipersistence.

4.)  Variogram analysis:  Variogram analysis supports antipersistence for all series.
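The variogram or structure-function estimator can be sketched as follows; applying it to the cumulated profile of a differenced series is an implementation assumption the text leaves open.  For fractional Brownian motion the semivariance at lag h scales as h**(2H), so H is half the log-log slope:

```python
import numpy as np

def hurst_variogram(profile, lags):
    """Estimate H from the structure function: semivariance at lag h
    scales as h**(2H) for fractional Brownian motion, so H is half the
    slope of log semivariance on log lag."""
    p = np.asarray(profile, dtype=float)
    gamma = [0.5 * np.mean((p[h:] - p[:-h]) ** 2) for h in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    return slope / 2
```

Ordinary Brownian motion (the cumulated profile of white noise) has semivariance proportional to the lag itself, so the estimate should fall near 0.50.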

5.)  Wavelet analysis:  This method was developed by Daubechies (1990), Beylkin (1992), and Coifman et al (1992).  Wavelet H estimates indicate antipersistence or ergodicity (H < 0.50) for the index of industrial production (GIIP) (whole and early samples), the M3 money multiplier (GMM3) (both subsamples, but not over the whole sample range), the effective required reserve ratio (GERR) (late sample only), the currency-to-demand-deposit ratio (GCDD) (early sample only), the excess-reserve-to-demand-deposit ratio (GEDD) (all ranges), the ten-year government security rate (GI10Y) (whole and early samples), the three-month treasury bill secondary market rate (GI3MO) (all ranges), and the term spread (GR) (all ranges); elsewhere the estimates indicate persistence (H > 0.50) or, in some cases, normality.  Because wavelet analysis does not provide a standard error for H, formal hypothesis tests cannot be constructed.
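The text does not detail the wavelet estimator; one simple sketch (an assumed convention, not necessarily the authors' implementation) uses Haar detail variances of the cumulated profile, whose level-j variance for fractional Brownian motion grows like 2**(j*(2H + 1)):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: (approx, detail)."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def hurst_wavelet(x, levels=5):
    """Estimate H from the log2 slope of Haar detail variances of the
    cumulated profile: slope ~ 2H + 1 for fractional Brownian motion."""
    approx = np.cumsum(np.asarray(x, dtype=float))
    js, log_var = [], []
    for j in range(1, levels + 1):
        approx, detail = haar_step(approx)
        js.append(j)
        log_var.append(np.log2(np.mean(detail ** 2)))
    slope, _ = np.polyfit(js, log_var, 1)
    return (slope - 1) / 2
```

At the smallest scales the discrete transform is biased somewhat below the asymptotic slope, so this sketch is indicative rather than exact.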

<<Table 4 about here>>

Hypothesis tests are constructed to test for 1.) the Gaussian character or normality of the underlying time series, 2.) Cauchy character, and 3.) changes in the behavior of the distribution between 1987-1996 and 1997-2003: 

1.)  Tests of Gaussian character or normality:  Table 4 presents t-statistics for tests of the null hypothesis H = 0.50, along with two-tail probability levels.  The t-statistics are computed as 0.50 minus the Hurst exponent, divided by the standard error of the Hurst exponent, and are based on the R/S estimate of H.  Degrees of freedom are the number of observations or average R/S samples in the regression used to estimate H, rather than the number of observations of the underlying series.  The finding is that none of the macromonetary series are Gaussian processes, for the whole sample or either subsample.  Jarque-Bera (1980) test statistics for normality are provided for comparison.  These tests do not always agree.  The Jarque-Bera test indicates normality for the monetary base (1987-1996), the index of industrial production (all ranges), real consumable output (1987-1996), the M3 money multiplier (1987-1996), the currency-to-demand-deposit ratio (1997-2003), the time-deposit-to-demand-deposit ratio (all ranges), the ten-year government security rate (all ranges), and the three-month treasury bill rate (1987-1996), and non-normality for all other variables and ranges.  Interestingly, the Jarque-Bera test is much more strongly suggestive of a change in behavior, and of an increase in volatility after the break point, than the tests of normality based on the R/S.
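The two statistics described above can be sketched as follows (the t-statistic follows the text's definition; the Jarque-Bera form is the standard one):

```python
import numpy as np

def gaussian_t(H, se_H):
    """t-statistic for the null H = 0.50, computed as in the text:
    (0.50 - H) divided by the standard error of H."""
    return (0.50 - H) / se_H

def jarque_bera(x):
    """Jarque-Bera (1980) statistic: n/6 * (S**2 + (K - 3)**2 / 4),
    with S the sample skewness and K the sample kurtosis."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    S = np.mean(z ** 3)
    K = np.mean(z ** 4)
    return len(x) / 6 * (S ** 2 + (K - 3) ** 2 / 4)
```

Under normality the Jarque-Bera statistic is asymptotically chi-squared with two degrees of freedom, so large values reject the Gaussian null.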

2.)  Tests of Cauchy character: the Mandelbrot-Lévy characteristic exponent test:  Various statistics are available to test the null hypothesis of normality, but not the Cauchy distribution, the other extreme.  The Mandelbrot-Lévy characteristic exponent alpha is computed as the reciprocal of the Hurst exponent.  Mulligan (2000b) provides tables of percentiles of alpha generated by Monte Carlo experiments with 1,000 iterations for different sample sizes.  These critical values can be used to evaluate estimated alphas against the Cauchy null; the null should be rejected if the estimated characteristic exponent lies outside the critical bounds.  Dispersion of alpha around the theoretical value of 1 varies greatly with the sample size; for sufficiently large sample sizes (e.g., n = 10,000), alpha is highly concentrated around 1.00 for a Cauchy-distributed random variable.  Critical values interpolated from Mulligan (2000b: 491) for a sample size of 188 are: 1%, 0.487; 5%, 0.635; 10%, 0.700; 90%, 1.198; 95%, 1.272; 99%, 1.452.  Critical Hs are the reciprocals.  None of the R/S Hs are even close to being as low as the 99% critical value of 0.689, computed as 1/1.452; thus, the Cauchy null is always rejected by R/S.  In contrast, the wavelet Hs are large enough to fail to reject the Cauchy null for real consumable output at all conventional significance levels, and for the time-deposit-to-demand-deposit ratio at the 1% significance level (one tail).  Evidence of Cauchy character in macroeconomic data is extremely surprising, as it suggests periodic, very large, and abrupt changes in the data unpredictably move the economy from one regime to another, independent of changes in other observable economic fundamentals.

For the 1987-1996 subsample, the sample size is 113, and interpolated critical alphas are: 1%, 0.420; 5%, 0.568; 10%, 0.641; 90%, 1.247; 95%, 1.342; 99%, 1.607.  Only wavelet estimated alphas approach the critical range.  Over this earlier subsample, the index of industrial production and the time-deposit-to-demand-deposit ratio fail to reject the Cauchy null at the 1% significance level (one tail).  The effective reserve requirement fails to reject the Cauchy null at all conventional significance levels.

For the 1997-2003 subsample, the sample size is 75, and interpolated critical alphas are: 1%, 0.332; 5%, 0.510; 10%, 0.590; 90%, 1.293; 95%, 1.438; 99%, 1.800.  Again, only wavelet estimated alphas approach the critical range.  Over the later subsample, real consumable output, the currency-to-demand-deposit ratio, and the time-deposit-to-demand-deposit ratio fail to reject the null hypothesis at all conventional significance levels.
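The characteristic-exponent test reduces to a simple bounds check.  The default bounds below are the 5% and 95% critical values interpolated from Mulligan (2000b) for n = 188, giving a two-tail 10% test; the subsample bounds quoted above can be substituted:

```python
def cauchy_test(H, lower=0.635, upper=1.272):
    """Test the Cauchy null (alpha = 1): compute alpha = 1/H and
    reject when alpha falls outside the critical bounds."""
    alpha = 1.0 / H
    return alpha, ("reject" if alpha < lower or alpha > upper
                   else "fail to reject")
```

An R/S H of 0.35, say, gives alpha of roughly 2.86, well above the upper bound, so the Cauchy null is rejected; a wavelet H near 1.0 gives alpha near 1.0 and fails to reject.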

<<Table 5 about here>>

3.)  Tests of structural change:  Table 5 presents t-statistics testing for significant differences among Hs estimated over the whole sample range and the two subsamples, referred to as ranges A, B, and C.  The first null hypothesis tested for each series is that the Hs for the two subsamples are equal (B=C), with degrees of freedom equal to the sum of sample average R/Ss in the two regressions estimating H for each subsample (4 + 3 = 7).  The second and third null hypotheses are that the H for each subsample is equal to the H estimated over the whole sample (A=B and A=C), with degrees of freedom equal to the number of R/Ss in the whole-sample regression (9), because the standard error of the whole-sample H is treated as the pooled standard error. 
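The subsample comparison can be sketched as follows, interpreting the description above as a difference of Hurst exponents over a pooled standard error:

```python
def break_t(H_b, H_c, se_pooled):
    """t-statistic for the null that H is the same in two subsamples
    (B = C in Table 5), using a pooled standard error."""
    return (H_b - H_c) / se_pooled
```

A large absolute t rejects stability of H across the subsamples; the A=B and A=C tests have the same form with the whole-sample H and its standard error substituted.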

Although not every test indicates a break or change in structural behavior, the number that do is overwhelming: 32 of 36 tests reject the hypothesis of stable Hs across subsamples at the 1% significance level.  It is difficult to avoid the conclusion of a fairly sharp break, indicating a drastic change in the statistical behavior and distributions of the data processes examined.

With time series that may be fractional Gaussian noise, that is, apparently random combinations of otherwise statistically well-behaved processes scrambled together with periodically-changing parameters and characteristics, it is not strictly correct to infer structural change in the conventional sense.  For example, a random scrambling of several different finite-variance processes can result in an infinite-variance process over a larger sample range.  The finding that H is not constant over two subsamples and the whole sample range is wholly consistent with a stable fractal process, but more importantly, it points to some difference in fundamentals, or at least in behavior of the variable studied, from one period to the other.

 

Conclusion

The logarithmic differences of macroeconomic data for a stable and growing economy should have Hurst exponents approximately equal to 0.50, indicating these series change in a purely random, normally-distributed manner.  Series with long-term trends and non-periodic cycles should display time persistence with H > 0.50, unless economic efficiency imposes randomness and normality anyway.  All the macroeconomic data in this study yield strong evidence of antipersistence, ergodicity, or negative serial correlation.  The conclusion suggested is that policy makers are incapable of correctly evaluating economic data, persistently overreact to the arrival of new information, and never learn not to overreact. 

A possible scenario that renders this finding more intuitive is that information relevant to a nation's macroeconomic performance arrives frequently and seemingly at random.  Policy makers habitually ignore the vast majority of this information, because the vast majority is unimportant or irrelevant, until it accumulates a critical mass they must finally recognize.  Then, perceiving they have ignored a body of relevant information which they have allowed to accumulate, they attempt to compensate for their history of informational sloth by overreacting.  The expression "informational sloth" can just as validly be characterized as "filtering out noise." 


References

Andersen, Torben G.; Bollerslev, Tim. 1997. "Heterogeneous Information Arrivals and Return Volatility Dynamics: Uncovering the Long-run in High Frequency Returns," Journal of Finance, 52(3): 975-1005.

 

Barkoulas, John T.; Baum, Christopher F. 1996. "Long-term Dependence in Stock Returns," Economics Letters, 53: 253-259.

 

_____; _____. 1997a. "Fractional Differencing Modeling and Forecasting of Eurocurrency Deposit Rates," Journal of Financial Research, 20(3): 355-372.

 

_____; _____. 1997b. "Long Memory and Forecasting in Euroyen Deposit Rates," Financial Engineering and the Japanese Markets, 4(3): 189-201.

 

_____; _____. 1997c."A Re-examination of the Fragility of Evidence from Cointegration-based Tests of Foreign Exchange Market Efficiency," Applied Financial Economics, 7: 635-643.

 

Barkoulas, John T.; Baum, Christopher F.; Oguz, Gurkan S. 1998. "Stochastic Long Memory in Traded Goods Prices." Applied Economics Letters, 5: 135-138.

 

Barkoulas, John T.; Labys, Walter C.; Onochie, Joseph. 1997. "Fractional Dynamics in International Commodity Prices," Journal of Futures Markets, 17(2): 161-189.

 

Barkoulas, John T.; Travlos, Nickolaos. 1998. "Chaos in an Emerging Capital Market? The Case of the Athens Stock Exchange," Applied Financial Economics, 8: 231-243.

 

Beylkin, Gregory. 1992. "On the Representation of Operators in Bases of Compactly Supported Wavelets," SIAM Journal on Numerical Analysis, 29(6): 1716-1740.

 

Black, Fisher; Scholes, Myron.  1972. "The Valuation of Option Contracts and a Test of Market Efficiency," Journal of Finance, 27: 399-418.

 

_____; _____. 1973. "The Pricing of Options and Corporate Liabilities," Journal of Political Economy, 81: 637-654.

 

Byers, J.D.; Peel, D.A. 1996. "Long-memory Risk Premia in Exchange Rates," Manchester School of Economic and Social Studies, 64(4): 421-438.

 

Calvet, Laurent; Fisher, Adlai; Mandelbrot, Benoit B. 1997. "Large Deviations and the Distribution of Price Changes," Cowles Foundation Discussion Paper no. 1165, Yale University.

 

Cheung, Yin-Wong. 1993. "Tests for Fractional Integration: a Monte Carlo Investigation," Journal of Time Series Analysis, 14: 331-345.

 

Cheung, Yin-Wong; Lai, Kon S. 1993. "Do Gold Market Returns Have Long Memory?" The Financial Review, 28(3): 181-202.

 

Chou, W.L.; Shih, Y.C. 1997. "Long-run Purchasing Power Parity and Long-term Memory: Evidence from Asian Newly-industrialized Countries," Applied Economics Letters, 4: 575-578.

 

Coifman, Ronald; Ruskai, Mary Beth; Beylkin, Gregory; Daubechies, Ingrid; Mallat, Stephane; Meyer, Yves; Raphael, Louise, eds. 1992. Wavelets and Their Applications. Sudbury, Massachusetts: Jones and Bartlett Publishers.

 

Corazza, Marco; Malliaris, A.G.; Nardelli, Carla. 1997. "Searching for Fractal Structure in Agricultural Futures Markets," Journal of Futures Markets, 17(4): 433-473.

 

Daubechies, Ingrid. 1990. "The Wavelet Transform, Time-frequency Localization and Signal Analysis," IEEE Transactions on Information Theory, 36: 961-1005.

 

Diebold, Francis X.; Inoue, A.  2000. "Long Memory and Regime Switching," National Bureau of Economic Research technical working paper no. 264.

 

Duan, Jin-Chuan; Jacobs, Kris. 1996. "A Simple Long-memory Equilibrium Interest Rate Model," Economics Letters, 53: 317-321.

 

Fama, Eugene; Fisher, L.; Jensen, M.; Roll, R. 1969. "The Adjustment of Stock Prices to New Information," International Economic Review, 10: 1-21.

 

Fang, H.; Lai, Kon S.; Lai, M. 1994. "Fractal Structure in Currency Futures Prices," Journal of Futures Markets, 14: 169-181.

 

Fisher, Adlai; Calvet, Laurent; Mandelbrot, Benoit B. 1997. "Multifractality of Deutschemark/US Dollar Exchange Rates," Cowles Foundation Discussion Paper no. 1166, Yale University.

 

Gilanshah and Koppl (2001)

 

Gnedenko, Boris Vladimirovich; Kolmogorov, Andrei Nikolaevich. 1954. Limit Distributions for Sums of Random Variables, Reading MA: Addison-Wesley.

 

Granger, C.W.J.  1989.  Forecasting in Business and Economics, 2nd ed., Boston: Academic Press.

 

Granger, C.W.J.; Hyung, N. 1999. "Occasional Structural Breaks and Long Memory," discussion paper 99-14, University of California at San Diego.

 

Greene, M.T.; Fielitz, B.D. 1977. "Long-term Dependence in Common Stock Returns," Journal of Financial Economics, 5: 339-349.

 

Heiner, Ronald A. 1983. "The Origin of Predictable Behavior," American Economic Review, 73(4): 560-595.

 

Hurst, H. Edwin. 1951. "Long-term Storage Capacity of Reservoirs," Transactions of the American Society of Civil Engineers, 116: 770-799.

 

Jarque, C.M.; Bera, A.K. 1980. "Efficient Tests for Normality, Homoskedasticity, and Serial Independence of Regression Residuals," Economics Letters, 6: 255-259.

 

Kaen, Fred R.; Rosenman, Robert E. 1986. "Predictable Behavior in Financial Markets: Some Evidence in Support of Heiner's Hypothesis," American Economic Review, 76(1): 212-220.

 

Koppl, Roger; Ahmed, Ehsan; Rosser, J. Barkley; White, Mark V. 1997. "Complex Bubble Persistence in Closed-End Country Funds," Journal of Economic Behavior and Organization, 32(1): 19-37.

 

Koppl, Roger; Broussard, John. 1999. "Big Players and the Russian Ruble: Explaining Volatility Dynamics," Managerial Finance, 25(1): 49-63.

 

Koppl, Roger; Nardone, Carlo. 2001. "The Angular Distribution of Asset Returns in Delay Space," Discrete Dynamics in Nature and Society, 6: 101-120.

 

Koppl, Roger; Yeager, Leland. 1996. "Big Players and Herding in Asset Markets: The Case of the Russian Ruble," Explorations in Economic History, 33(3): 367-383.

 

Kraemer, Walter; Runde, Ralf. 1997. "Chaos and the Compass Rose," Economics Letters, 54: 113-118.

 

Lévy, Paul. 1925. Calcul des Probabilités, Paris: Gauthier-Villars.

 

Lo, Andrew W. 1991. "Long-term Memory in Stock Market Prices," Econometrica, 59(5): 1279-1313.

 

Malkiel, Burton G. 1987. "Efficient Market Hypothesis," New Palgrave Dictionary of Economics, (New York: Stockton Press), 2: 120-122.

 

Mandelbrot, Benoit B. 1963a. "New Methods in Statistical Economics," Journal of Political Economy, 71(5): 421-440.

 

_____. 1963b. "The Variation of Certain Speculative Prices," Journal of Business, 36(3): 394-419.

 

_____. 1972a. "Statistical Methodology for Non-periodic Cycles: From the Covariance to R/S Analysis," Annals of Economic and Social Measurement, 1(3): 255-290.

 

_____. 1972b. "Possible Refinements of the Lognormal Hypothesis Concerning the Distribution of Energy Dissipation in Intermittent Turbulence," in M. Rosenblatt and C. Van Atta, eds.,  Statistical Models and Turbulence, New York: Springer Verlag.

 

_____. 1974. "Intermittent Turbulence in Self-Similar Cascades: Divergence of High Moments and Dimension of the Carrier," Journal of Fluid Mechanics, 62: 331-358.

 

_____.  1975. "Limit Theorems on the Self-normalized Range for Weakly and Strongly Dependent Processes," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 31: 271-285.

 

_____. 1977. The Fractal Geometry of Nature, New York: Freeman.

 

Mandelbrot, Benoit B.; Fisher, Adlai; Calvet, Laurent. 1997. "A Multifractal Model of Asset Returns," Cowles Foundation Discussion Paper no. 1164, Yale University.

 

Mandelbrot, Benoit B.; van Ness, J.W. 1968. "Fractional Brownian Motions, Fractional Noises and Applications," SIAM Review, 10: 422-437.

 

Mandelbrot, Benoit B.; Wallis, James R. 1969. "Robustness of the Rescaled Range R/S in the Measurement of Noncyclic Long-run Statistical Dependence," Water Resources Research, 5(4): 976-988.

 

Mulligan, Robert F. 2000a. "A Fractal Analysis of Foreign Exchange Markets," International Advances in Economic Research, 6(1): 33-49.

 

_____. 2000b. "A Characteristic Exponent Test for the Cauchy Distribution," Atlantic Economic Journal, 28(4): 491.

 

_____. 2003. "Maritime Businesses: Volatile Prices and Market Valuation Inefficiencies," Quarterly Review of Economics and Finance, forthcoming.

 

_____. 2004. "Fractal Analysis of Highly Volatile Markets: an Application to Technology Equities," Quarterly Review of Economics and Finance, 44(1).

 

Mussa, Michael. 1984. "The Theory of Exchange Rate Determination," in John F.O. Bilson and Richard C. Marston, eds., Exchange Rate Theory and Practice, Chicago: University of Chicago Press, 13-78.

 

Neely, Christopher; Weller, Paul; Dittmar, Robert. 1997. "Is Technical Analysis in the Foreign Exchange Market Profitable? A Genetic Programming Approach," Journal of Financial and Quantitative Analysis, 32(4): 405-426.

 

Osborne, M. 1959. "Brownian Motion in the Stock Market," Operations Research, 7: 145-173.

 

Peters, Edgar E. 1994. Fractal Market Analysis, New York: Wiley.

 

_____. 1996. Chaos and Order in the Capital Markets: a New View of Cycles, Prices, and Market Volatility, second edition, New York: Wiley.

 

_____.  1999. Complexity, Risk, and Financial Markets, New York: Wiley.

 

Samuelson, Paul A. 1982. "Paul Cootner's Reconciliation of Economic Law with Chance," in William F. Sharpe and Cathryn M. Cootner, eds., Financial Economics: Essays in Honor of Paul Cootner, Englewood Cliffs NJ: Scott, Foresman & Co., 101-117.

 


 

Glossary of Fractal Analysis Terms

Antipersistence – a series that reverses itself more often than a purely random series, also called pink noise, ergodicity, 1/f noise, or negative serial correlation. (Peters 1994: 306).

Black noise – a series that reverses itself less often than a purely random series, displaying trends or repetitive patterns over time, also called persistence, positive serial correlation, or autocorrelation. (Peters 1994: 183-187).

Brown noise – the cumulative sum of a normally-distributed random variable, also called Brownian motion. (Peters 1994: 183-185; Osborne 1959).

Efficient Market Hypothesis – the proposition that market prices fully and correctly reflect all relevant information. A market is described as efficient with respect to an information set if prices do not change when the information is revealed to all market participants. There are three levels of market efficiency: weak, semi-strong, and strong. (Fama et al 1969; Malkiel 1987).

Long memory – the property that any value a series takes on generally has a long and persistent effect, e.g., extreme values that repeat at fairly regular intervals.  (Peters 1994: 274).

Multifractal Model of Asset Returns (MMAR) – a very general model of asset pricing behavior allowing for long-memory and fat-tailed distributions.  Instead of infinite-variance distributions such as the Mandelbrot-Lévy and Cauchy distributions, the MMAR relies on fractional combinations of random variables with non-constant mean and variance, providing many of the properties of infinite-variance distributions.  (Mandelbrot, Fisher, and Calvet 1997).

Non-periodic long cycles – a characteristic of long-memory processes, i.e., of statistical processes in which each value has a long and persistent impact on the values that follow, such that identifiable patterns tend to repeat over similar, though irregular, intervals (non-periodic cycles).  Also called the Joseph effect.  (Peters 1994: 266).

Non-stationarity – the property that a series has a systematically varying mean and variance.  Any series with a trend, e.g., U.S. GDP, has a growing mean and therefore is non-stationary.  Brown-noise processes are non-stationary, but white-noise processes are stationary. (Granger 1989: 58).

Persistence or persistent dependence – a series that reverses itself less often than a purely random series, and thus tends to display a trend, also called black noise.  Persistent series have long memory in that events are correlated over long time periods, and thus display non-periodic long cycles. (Peters 1994: 310).

Semi-strong-form Market Efficiency – the intermediate form of the efficient market hypothesis, asserting that market prices incorporate all publicly available information, including both historical data on the prices in question and any other relevant, publicly available data.  Absent inside information, it is therefore impossible for any market participant to gain an advantage and earn excess profits. (Peters 1994: 308).

Short-term dependence - the property that any value a series takes on generally has a transient effect, e.g., extreme values bias the series for a certain number of observations that follow.  Eventually, however, all memory of the extreme event is lost, in contrast to long-memory or the Joseph effect.  Special cases include Markov processes and serial correlation.  (Peters 1994: 274).

Spectral density or Power-spectral Density Analysis – a fractal analysis based on the power spectra calculated through the Fourier transform of a series. (Peters 1994: 170-171).

Stationarity – the property that a series has a constant mean and variance.  White, pink, and black-noise processes are all stationary.  Because it is the cumulative sum of a white-noise process, a brown-noise process is non-stationary.  (Granger 1989: 58).

Strong-form Market Efficiency – the most restrictive version of the efficient market hypothesis, asserting that all information known to any one market participant is fully reflected in the price, and thus insider information provides no speculative advantage and cannot offer above average returns. (Malkiel 1987: 120).

Weak-form Market Efficiency – the least restrictive version of the efficient market hypothesis, asserting that current prices fully reflect the historical sequence of past prices.  One implication is that investors cannot obtain above-average returns by analyzing patterns in historical data, i.e., through technical analysis.  Also referred to as the Random Walk Hypothesis.  One common way of testing for weak-form efficiency is to test price series for normality; however, normality is a sufficient rather than a necessary condition.  (Malkiel 1987: 120).

White noise – a perfectly random process exhibiting no serial dependence.  Normal processes meet this requirement, and normality is often conflated with white noise.  Normality is a sufficient condition rather than a necessary condition for white noise.  (Peters 1994: 312).

 


 

Table 1

Fractal Taxonomy of Time Series

 

Term | 'Color' | Hurst exponent | Fractal dimension | Characteristic exponent
Antipersistent, Ergodic, Mean-reverting, Negative serial correlation, 1/f noise | Pink noise | 0 ≤ H < ½ | 1.50 < D ≤ 2.00 | 2.00 < alpha ≤ ∞
Gaussian process, Normal distribution | White noise | H = ½ | D = 1.50 | alpha = 2.00
Brownian motion, Wiener process | Brown noise | H = ½ | D = 1.50 | alpha = 2.00
Persistent, Trend-reinforcing, Hurst process | Black noise | ½ < H < 1 | 1.00 < D < 1.50 | 1.00 < alpha < 2.00
Cauchy process, Cauchy distribution | Cauchy noise | H = 1 | D = 1.00 | alpha = 1.00

Note:  Brown noise or Brownian motion is the cumulative sum of a normally-distributed white-noise process.  The changes in, or returns on, a Brownian motion, are white noise.  The fractal statistics are the same for Brown and white noise because the brown-noise process should be differenced as part of the estimation process, yielding white noise.


 


 Table 2

Macromonetary Data Series

Descriptive Statistics

Underlying variable | Rubric | Mean | Median | Standard Deviation | Sample Variance | Kurtosis | Skewness
ln Monetary Base | GMB | 0.000968 | 0.000987 | 0.001388 | 1.93E-06 | 7.824068 | -0.18583
Index of Industrial Production | GIIP | 0.198729 | 0.26 | 0.477671 | 0.22817 | 0.160655 | -0.08616
ln Real Consumable Output | GC | 0.000302 | 0.000271 | 0.000506 | 2.56E-07 | 0.891277 | 0.379863
Consumption Price Index | GP | 35.00091 | 0.2 | 477.1887 | 227709.1 | 188 | 13.71131
M3 Money Multiplier | GMM3 | -0.00042 | -0.00054 | 0.003371 | 1.14E-05 | 6.794751 | 1.14446
Effective Reserve Requirement | GERR | -6.9E-05 | -3.2E-05 | 0.000429 | 1.84E-07 | 8.645952 | -1.7519
Currency-to-Demand-Deposit Ratio | GCDD | 0.006258 | 0.002839 | 0.019836 | 0.000393 | 0.902251 | 0.347443
Time-Deposit-to-Demand-Deposit Ratio | GTDD | 0.003564 | 0.001313 | 0.022226 | 0.000494 | -0.12213 | 0.136709
Excess-Reserve-to-Demand-Deposit Ratio | GEDD | 0.082139 | 0.015298 | 1.005744 | 1.011522 | 176.6499 | 13.08789
10-year T-bond | GI10Y | -0.00354 | -0.00821 | 0.037183 | 0.001383 | -0.14438 | 0.235176
3-month T-bill | GI3MO | -0.00728 | -0.00399 | 0.049516 | 0.002452 | 3.772713 | -1.1582
Term spread | GR | 0.126294 | -0.0047 | 1.593287 | 2.538563 | 159.8713 | 12.08197

Note:

All raw time series are converted to logarithmic returns or simple first differences, thus rendering them stationary.

 


 


Table 3

Fractal Analyses of Macromonetary Data Processes

Estimated Hurst Exponent H, Various Methods

(Standard Errors in Parentheses)

 

Series | Range | R/S | Power Spectrum | Roughness-length | Variogram | Wavelet
GMB | 1987:08-2003:03 | -0.091 (0.0154) | -0.0495 (5.372) | -0.043 (0.0018) | -0.001 (0.1645) | 0.651
 | 1987:08-1996:12 | -0.097 (0.0051) | | | | 0.611
 | 1997:01-2003:03 | 0.234 (0.0001) | | | | 0.541
GIIP | 1987:08-2003:03 | 0.095 (0.0136) | -0.203 (3.511) | 0.032 (0.0016) | 0.066 (0.0219) | 0.179
 | 1987:08-1996:12 | -0.030 (0.0025) | | | | 0.647
 | 1997:01-2003:03 | 0.000 (0.0041) | | | | 0.166
GC | 1987:08-2003:03 | 0.038 (0.0028) | -0.204 (9.2004) | -0.096 (0.0001) | -0.028 (0.0218) | 0.875
 | 1987:08-1996:12 | -0.042 (0.0033) | | | | 0.600
 | 1997:01-2003:03 | -0.026 (0.0004) | | | | 0.890
GP | 1987:08-2003:03 | 0.117 (0.0094) | 0.500 (0.0000009) | -0.185 (0.0002) | 0.017 (0.0004) | 0.431
 | 1987:08-1996:12 | -0.020 (0.0003) | | | | 0.584
 | 1997:01-2003:03 | -0.071 (0.0008) | | | | 0.431
GMM3 | 1987:08-2003:03 | 0.011 (0.0106) | 0.292 (5.3671) | -0.074 (0.0015) | 0.031 (0.0946) | 0.658
 | 1987:08-1996:12 | -0.008 (0.0030) | | | | 0.349
 | 1997:01-2003:03 | 0.078 (0.0015) | | | | 0.421
GERR | 1987:08-2003:03 | 0.069 (0.00934) | -0.442 (3.3994) | 0.031 (0.0001) | -0.006 (0.0684) | 0.833
 | 1987:08-1996:12 | 0.077 (0.0077) | | | | 0.861
 | 1997:01-2003:03 | -0.188 (0.0005) | | | | 0.388
GCDD | 1987:08-2003:03 | -0.073 (0.0067) | -0.475 (12.4782) | -0.099 (0.0005) | -0.041 (0.3649) | 0.857
 | 1987:08-1996:12 | -0.101 (0.0031) | | | | 0.364
 | 1997:01-2003:03 | -0.218 (0.0002) | | | | 0.945
GTDD | 1987:08-2003:03 | -0.076 (0.0104) | -0.442 (21.1959) | -0.094 (0.0004) | -0.038 (0.3892) | 0.778
 | 1987:08-1996:12 | -0.136 (0.0042) | | | | 0.732
 | 1997:01-2003:03 | -0.206 (0.0015) | | | | 0.802
GEDD | 1987:08-2003:03 | 0.116 (0.0072) | 0.456 (0.0702) | -1.320 (0.0459) | 0.010 (0.0025) | 0.135
 | 1987:08-1996:12 | 0.237 (0.0015) | | | | 0.306
 | 1997:01-2003:03 | 0.001 (0.0051) | | | | 0.332
GI10Y | 1987:08-2003:03 | 0.120 (0.0064) | 0.426 (9.1405) | 0.051 (0.0001) | 0.037 (0.0416) | 0.048
 | 1987:08-1996:12 | -0.064 (0.0029) | | | | 0.126
 | 1997:01-2003:03 | 0.020 (0.0006) | | | | 0.546
GI3MO | 1987:08-2003:03 | 0.179 (0.0091) | -0.224 (4.2599) | 0.066 (0.0001) | 0.083 (0.0217) | 0.327
 | 1987:08-1996:12 | 0.117 (0.0007) | | | | 0.258
 | 1997:01-2003:03 | 0.214 (0.0001) | | | | 0.325
GR | 1987:08-2003:03 | 0.253 (0.0047) | -0.476 (0.6939) | 0.205 (0.0010) | 0.026 (0.0058) | 0.327
 | 1987:08-1996:12 | 0.277 (0.0083) | | | | 0.185
 | 1997:01-2003:03 | 0.214 (0.0001) | | | | 0.329

Note:

The Mandelbrot-Lévy characteristic exponent alpha is the reciprocal of the Hurst exponent H, thus alpha = 1/H.  The fractal dimension D = 2 – H.
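The conversions in the note can be expressed directly; a minimal sketch (the function name is illustrative, not from the paper):

```python
def fractal_statistics(h):
    """Convert an estimated Hurst exponent H into the Mandelbrot-Levy
    characteristic exponent alpha = 1/H and the fractal dimension
    D = 2 - H, per the note above."""
    if not 0.0 < h <= 1.0:
        raise ValueError("H must lie in (0, 1]")
    return 1.0 / h, 2.0 - h

# A Brownian motion (H = 0.5) implies alpha = 2.0 and D = 1.5.
alpha, dimension = fractal_statistics(0.5)
```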

 


 

Table 4

Hypothesis Tests for Normality or Gaussian Character (H = 0.50)

 

Process | Range | d.f. | R/S | SE(R/S) | t(R/S) | prob(t) | JB | prob(JB)
GMB | 1987-2003 | 9 | -0.091 | 0.0154 | 38.377 | 0.00000 | 451.827 | 0.00000
 | 1987-1996 | 4 | -0.097 | 0.0051 | 117.059 | 0.00000 | 1.727 | 0.42158
 | 1997-2003 | 3 | 0.234 | 0.0001 | 2660.000 | 0.00000 | 282.706 | 0.00000
GIIP | 1987-2003 | 9 | 0.095 | 0.0136 | 29.779 | 0.00000 | 0.351 | 0.83919
 | 1987-1996 | 4 | -0.030 | 0.0025 | 212.000 | 0.00000 | 0.878 | 0.64472
 | 1997-2003 | 3 | 0.000 | 0.0041 | 121.951 | 0.00000 | 0.240 | 0.88711
GC | 1987-2003 | 9 | 0.038 | 0.0028 | 165.000 | 0.00000 | 9.924 | 0.00700
 | 1987-1996 | 4 | -0.042 | 0.0033 | 164.242 | 0.00000 | 1.020 | 0.60053
 | 1997-2003 | 3 | -0.026 | 0.0004 | 1315.000 | 0.00000 | 23.095 | 0.00001
GP | 1987-2003 | 9 | 0.117 | 0.0094 | 40.745 | 0.00000 | 268,142.6 | 0.00000
 | 1987-1996 | 4 | -0.020 | 0.0003 | 1733.333 | 0.00000 | 8.947 | 0.01141
 | 1997-2003 | 3 | -0.071 | 0.0008 | 713.750 | 0.00000 | 16,218.58 | 0.00000
GMM3 | 1987-2003 | 9 | 0.011 | 0.0106 | 46.132 | 0.00000 | 379.916 | 0.00000
 | 1987-1996 | 4 | -0.008 | 0.0030 | 169.333 | 0.00000 | 2.731 | 0.25519
 | 1997-2003 | 3 | 0.078 | 0.0015 | 281.333 | 0.00000 | 131.877 | 0.00000
GERR | 1987-2003 | 9 | 0.069 | 0.0093 | 46.344 | 0.00000 | 645.510 | 0.00000
 | 1987-1996 | 4 | 0.077 | 0.0077 | 54.935 | 0.00000 | 460.666 | 0.00000
 | 1997-2003 | 3 | -0.188 | 0.0005 | 1376.000 | 0.00000 | 9.768 | 0.00757
GCDD | 1987-2003 | 9 | -0.073 | 0.0067 | 85.522 | 0.00000 | 9.338 | 0.00938
 | 1987-1996 | 4 | -0.101 | 0.0031 | 193.871 | 0.00000 | 13.159 | 0.00139
 | 1997-2003 | 3 | -0.218 | 0.0002 | 3590.000 | 0.00000 | 2.699 | 0.25937
GTDD | 1987-2003 | 9 | -0.076 | 0.0104 | 55.385 | 0.00000 | 0.754 | 0.68590
 | 1987-1996 | 4 | -0.136 | 0.0042 | 151.429 | 0.00000 | 4.094 | 0.12913
 | 1997-2003 | 3 | -0.206 | 0.0015 | 470.667 | 0.00000 | 2.193 | 0.33408
GEDD | 1987-2003 | 9 | 0.116 | 0.0072 | 53.333 | 0.00000 | 236,901.3 | 0.00000
 | 1987-1996 | 4 | 0.237 | 0.0015 | 175.333 | 0.00000 | 29.563 | 0.00000
 | 1997-2003 | 3 | 0.001 | 0.0051 | 97.843 | 0.00000 | 15,364.48 | 0.00000
GI10Y | 1987-2003 | 9 | 0.120 | 0.0064 | 59.375 | 0.00000 | 1.938 | 0.37947
 | 1987-1996 | 4 | -0.064 | 0.0029 | 194.483 | 0.00000 | 1.799 | 0.40673
 | 1997-2003 | 3 | 0.020 | 0.0006 | 800.000 | 0.00000 | 1.007 | 0.60434
GI3MO | 1987-2003 | 9 | 0.179 | 0.0091 | 35.275 | 0.00000 | 145.229 | 0.00000
 | 1987-1996 | 4 | 0.117 | 0.0007 | 547.143 | 0.00000 | 2.116 | 0.34710
 | 1997-2003 | 3 | 0.214 | 0.0001 | 2860.000 | 0.00000 | 55.135 | 0.00000
GR | 1987-2003 | 9 | 0.253 | 0.0047 | 52.553 | 0.00000 | 194,203.4 | 0.00000
 | 1987-1996 | 4 | 0.277 | 0.0083 | 26.867 | 0.00001 | 1,496.471 | 0.00000
 | 1997-2003 | 3 | 0.214 | 0.0001 | 2860.000 | 0.00000 | 12,463.68 | 0.00000

Note:

Hs computed by R/S analysis are used for conventional hypothesis tests in which the null hypothesis is H = 0.500 (equivalently, alpha = 2, D = 1.500, or normality of the returns).  Three independent hypothesis tests are performed for each time series.  The Hurst exponent is estimated for three sample ranges A: 1987-2003, B: 1987-1996, and C: 1997-2003.  Inconsistent outcomes across the three nulls for the same series are suggestive of a structural break, i.e., a shift in H, between 1996 and 1997.  Rejection at the 10%, 5%, and 1% two-tail significance levels is indicated by *, **, and ***.  'd.f.' indicates degrees of freedom.
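The t statistics in Table 4 follow the usual regression t test against the null H = 0.500; a minimal sketch (the function name is illustrative):

```python
def hurst_t(h, se, h0=0.5):
    """t statistic for the null H = h0, given an R/S regression estimate h
    and its standard error se, as tabled above."""
    return abs(h - h0) / se

# First row of Table 4: H = -0.091 with SE 0.0154 gives t of about 38.4.
t_gmb = hurst_t(-0.091, 0.0154)
```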


 

 

Table 5

Hypothesis Tests for Structural Stability

Series | Null | d.f. | t(R/S) | prob(t)
GMB | B=C | 7 | 21.494 | *** 0.00000
 | A=B | 9 | 0.390 | 0.70588
 | A=C | 9 | 21.104 | *** 0.00000
GIIP | B=C | 7 | 2.206 | * 0.06318
 | A=B | 9 | 9.191 | *** 0.00001
 | A=C | 9 | 6.985 | *** 0.00006
GC | B=C | 7 | 5.714 | *** 0.00072
 | A=B | 9 | 28.571 | *** 0.00000
 | A=C | 9 | 22.857 | *** 0.00000
GP | B=C | 7 | 5.426 | *** 0.00098
 | A=B | 9 | 14.574 | *** 0.00000
 | A=C | 9 | 20.000 | *** 0.00000
GMM3 | B=C | 7 | 8.113 | *** 0.00008
 | A=B | 9 | 1.792 | 0.10666
 | A=C | 9 | 6.321 | *** 0.00014
GERR | B=C | 7 | 28.495 | *** 0.00000
 | A=B | 9 | 0.860 | 0.41200
 | A=C | 9 | 27.634 | *** 0.00000
GCDD | B=C | 7 | 17.463 | *** 0.00000
 | A=B | 9 | 4.179 | *** 0.00238
 | A=C | 9 | 21.642 | *** 0.00000
GTDD | B=C | 7 | 6.731 | *** 0.00027
 | A=B | 9 | 5.769 | *** 0.00027
 | A=C | 9 | 12.500 | *** 0.00000
GEDD | B=C | 7 | 32.778 | *** 0.00000
 | A=B | 9 | 16.806 | *** 0.00000
 | A=C | 9 | 15.972 | *** 0.00000
GI10Y | B=C | 7 | 13.125 | *** 0.00000
 | A=B | 9 | 28.750 | *** 0.00000
 | A=C | 9 | 15.625 | *** 0.00000
GI3MO | B=C | 7 | 10.659 | *** 0.00001
 | A=B | 9 | 6.813 | *** 0.00008
 | A=C | 9 | 3.846 | *** 0.00393
GR | B=C | 7 | 13.404 | *** 0.00000
 | A=B | 9 | 5.106 | *** 0.00064
 | A=C | 9 | 8.298 | *** 0.00002

Note:

Three independent hypothesis tests are performed for each time series.  The Hurst exponent is estimated for three sample ranges A: 1987-2003, B: 1987-1996, and C: 1997-2003, as in Tables 3 and 4.  The first hypothesis tested is whether the Hs estimated for ranges B and C, the split samples, are equal.  The second is whether the H estimated for range B differs significantly from that estimated for the whole sample A.  The third is whether the H estimated for range C differs significantly from that for the whole sample A.  Rejection of any of the three nulls indicates a structural break, i.e., a shift in H, between 1996 and 1997.  Rejection at the 10%, 5%, and 1% two-tail significance levels is indicated by *, **, and ***.  'd.f.' indicates degrees of freedom.
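The tabled t statistics are consistent with dividing the difference between the two Hurst estimates under each null by the full-sample R/S standard error; a sketch of that calculation (the formula is inferred from the reported numbers, not stated in the text):

```python
def break_t(h1, h2, se):
    """t statistic for the null h1 = h2, scaling the difference in Hurst
    estimates by a common standard error (here, the full-sample SE)."""
    return abs(h1 - h2) / se

# GMB, null B=C: H_B = -0.097, H_C = 0.234, full-sample SE(A) = 0.0154,
# reproducing the tabled t of about 21.49.
t_bc = break_t(-0.097, 0.234, 0.0154)
```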

 


Appendix

Statistical methodology

 

Rescaled-range or R/S analysis:  R/S analysis is the conventional method introduced by Mandelbrot (1972a).  Time series are classified according to the estimated value of the Hurst exponent H, which is defined from the relationship

R/S = a·n^H

where R is the average range of all subsamples of size n, S is the average standard deviation for all samples of size n, a is a scaling variable, and n is the size of the subsamples, which is allowed to range from an arbitrarily small value to the largest subsample the data will allow.   Putting this expression in logarithms yields

log(R/S) = log(a) + H log(n)

which is used to estimate H as a regression slope.  Standard errors are given in parentheses.  H equals exactly 0.50 for a random walk, ranges between 0.50 and 1.00 for a persistent or positively autocorrelated series, and ranges between zero and 0.50 for an antipersistent series.  Mandelbrot, Fisher, and Calvet (1997) refer to H as the self-affinity index or scaling exponent.
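The R/S procedure above can be sketched in standard-library Python; a minimal illustration using dyadic subsample sizes, not a reproduction of the Benoit implementation:

```python
import math
import random
import statistics

def hurst_rs(x, min_n=8):
    """Estimate H by rescaled-range analysis: for each subsample size n,
    average R/S over all blocks of length n, then take the slope of
    log(R/S) on log(n)."""
    sizes, rs_means = [], []
    n = min_n
    while n <= len(x) // 2:
        ratios = []
        for b in range(len(x) // n):
            seg = x[b * n:(b + 1) * n]
            mean = statistics.fmean(seg)
            cum, dev = 0.0, []
            for v in seg:                      # cumulative mean-adjusted sums
                cum += v - mean
                dev.append(cum)
            r, s = max(dev) - min(dev), statistics.pstdev(seg)
            if s > 0:
                ratios.append(r / s)
        sizes.append(n)
        rs_means.append(statistics.fmean(ratios))
        n *= 2
    lx = [math.log(v) for v in sizes]
    ly = [math.log(v) for v in rs_means]
    mx, my = statistics.fmean(lx), statistics.fmean(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

random.seed(0)
h = hurst_rs([random.gauss(0.0, 1.0) for _ in range(4096)])
# For white noise h is near 0.5, though small-sample bias pushes it upward.
```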

 

Power spectral density analysis:  This method uses the properties of power spectra of self-affine traces, calculating the power spectrum P(k), where k = 2π/λ is the wavenumber and λ is the wavelength, and plotting the logarithm of P(k) against log(k), after applying a symmetric taper function which transforms the data smoothly to zero at both ends.  If the series is self-affine, this plot follows a straight line with negative slope -beta, which is estimated by regression and reported along with its standard error.  This coefficient is related to the fractal dimension by D = (5 - beta)/2, with H = 2 - D and alpha = 1/H.  Power spectral density is the most common technique used to obtain the fractal dimension in the literature, although it is also highly problematic due to spectral leakage.
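A self-contained sketch of the spectral estimate follows; the Hann taper and the naive O(n²) discrete Fourier transform are choices made here for brevity, not features of the Benoit implementation:

```python
import cmath
import math
import random
import statistics

def hurst_psd(x):
    """Estimate H from the power spectrum: regress log P(k) on log k to get
    the slope -beta, then apply D = (5 - beta)/2 and H = 2 - D."""
    n = len(x)
    # Hann taper: transforms the data smoothly to zero at both ends.
    tapered = [v * 0.5 * (1.0 - math.cos(2.0 * math.pi * i / (n - 1)))
               for i, v in enumerate(x)]
    lk, lp = [], []
    for k in range(1, n // 2):                 # skip the zero frequency
        f = sum(v * cmath.exp(-2j * math.pi * k * i / n)
                for i, v in enumerate(tapered))
        lk.append(math.log(k))
        lp.append(math.log(max(abs(f) ** 2, 1e-300)))
    mx, my = statistics.fmean(lk), statistics.fmean(lp)
    beta = -(sum((a - mx) * (b - my) for a, b in zip(lk, lp))
             / sum((a - mx) ** 2 for a in lk))
    return 2.0 - (5.0 - beta) / 2.0

random.seed(1)
h_psd = hurst_psd([random.gauss(0.0, 1.0) for _ in range(256)])
```

Applied to white noise (beta near zero), the stated relations give an H near -0.5, which may help explain why the power-spectrum column of Table 3 can sit far from the R/S column.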

 

Roughness-length relationship method:  This method is similar to R/S, substituting the root-mean-square (RMS) roughness s(w) and window size w for the standard deviation and range.  H is then computed by regression from a logarithmic form of the relationship s(w) = w^H.  As noted above, the roughness-length method provides standard errors so low that the null hypothesis of H = 0.500 is nearly always rejected, no matter how nearly normal the returns.
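A bare-bones version of the roughness-length estimate, using the plain within-window standard deviation as the roughness measure (implementations such as Benoit detrend each window first; that refinement is omitted here):

```python
import math
import random
import statistics

def hurst_roughness(trace, min_w=8):
    """Estimate H from s(w) = w**H: average the within-window standard
    deviation over windows of size w, then take the slope of log s(w)
    on log w."""
    sizes, rough = [], []
    w = min_w
    while w <= len(trace) // 2:
        stds = [statistics.pstdev(trace[i * w:(i + 1) * w])
                for i in range(len(trace) // w)]
        sizes.append(w)
        rough.append(statistics.fmean(stds))
        w *= 2
    lx = [math.log(v) for v in sizes]
    ly = [math.log(v) for v in rough]
    mx, my = statistics.fmean(lx), statistics.fmean(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

# Brown noise (a cumulated white-noise series) should give H near 0.5.
random.seed(2)
bm, level = [], 0.0
for _ in range(4096):
    level += random.gauss(0.0, 1.0)
    bm.append(level)
h_rl = hurst_roughness(bm)
```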

 

Variogram analysis:  The variogram, also known as the variance of the increments or the structure function, is defined as the expected value of the squared difference between two y values in a series separated by a distance w.  In other words, the sample variogram V(w) of a series y(x) is measured as V(w) = E{[y(x) - y(x+w)]^2}, the average value of the squared difference between pairs of points at distance w.  The distance of separation w is also referred to as the lag.  The Hurst exponent is estimated by regression from the relationship V(w) = w^(2H).
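The variogram estimate is the shortest of the four to sketch; since V(w) scales as w^(2H), the regression slope is halved:

```python
import math
import random
import statistics

def hurst_variogram(trace, max_lag=64):
    """Estimate H from V(w) = E{[y(x) - y(x+w)]**2}: regress log V(w) on
    log w and halve the slope."""
    lx, ly = [], []
    for w in range(1, max_lag + 1):
        v = statistics.fmean((trace[i + w] - trace[i]) ** 2
                             for i in range(len(trace) - w))
        lx.append(math.log(w))
        ly.append(math.log(v))
    mx, my = statistics.fmean(lx), statistics.fmean(ly)
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    return slope / 2.0

# For Brown noise V(w) grows linearly in w, so H comes out near 0.5.
random.seed(3)
bm, level = [], 0.0
for _ in range(4096):
    level += random.gauss(0.0, 1.0)
    bm.append(level)
h_v = hurst_variogram(bm)
```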

 

Wavelet analysis:  Wavelet analysis exploits localized variations in power by decomposing a series into time frequency space to determine both the dominant modes of variability and how those modes vary in time.  This method is appropriate for analysis of non-stationary traces such as asset prices, i.e. where the variance does not remain constant with increasing length of the data set.  Fractal properties are present where the wavelet power spectrum is a power law function of frequency.  The wavelet method is based on the property that wavelet transforms of the self-affine traces also have self-affine properties.     

 

Consider n wavelet transforms, each with a different scaling coefficient a_i, and let S1, S2, ..., Sn be the standard deviations from zero of these scaling coefficients.  Define the ratios of successive standard deviations G1, G2, ..., Gn-1 as G1 = S1/S2, G2 = S2/S3, ..., Gn-1 = Sn-1/Sn, and their average as Gavg = (G1 + G2 + ... + Gn-1)/(n - 1).  The estimated Hurst exponent H is then computed as a heuristic function of Gavg.  The Benoit software computes H from the first three ratios, i.e., from n = 4 wavelet transforms with scaling coefficients a_i for i = 0, 1, 2, 3.
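The ratio construction can be illustrated with a Haar scaling filter; the Haar basis is an assumption made here for concreteness, and the Benoit software's final heuristic mapping from Gavg to H is proprietary and not reproduced:

```python
import math
import random

def wavelet_scale_ratios(x, levels=4):
    """Compute G_avg, the average ratio of successive standard deviations
    (about zero) of Haar scaling coefficients at dyadic scales."""
    rms, data = [], list(x)
    for _ in range(levels):
        rms.append(math.sqrt(sum(v * v for v in data) / len(data)))
        data = [0.5 * (data[i] + data[i + 1])       # Haar scaling step
                for i in range(0, len(data) - 1, 2)]
    ratios = [rms[i] / rms[i + 1] for i in range(levels - 1)]
    return sum(ratios) / len(ratios)

# For white noise each averaging halves the variance of independent pairs,
# so the ratios cluster near sqrt(2).
random.seed(4)
g_avg = wavelet_scale_ratios([random.gauss(0.0, 1.0) for _ in range(1024)])
```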

 

Mandelbrot-Lévy characteristic exponent test: The Mandelbrot-Lévy distributions are a family of infinite-variance distributions without explicit analytical expressions, except for special cases.  Limiting cases include the normal, with finite variance, and the Cauchy, with the most extreme leptokurtosis or fat tails.  Paul Lévy (1925) developed the theory of these distributions.  The Hurst exponent H, introduced in the hydrological study of the Nile valley (Hurst 1951), is the reciprocal of the characteristic exponent alpha.  The characteristic function of a Mandelbrot-Lévy random variable is:

log f(t) = i·delta·t - gamma·|t|^alpha·[1 + i·beta·sign(t)·tan(alpha·pi/2)],

where delta is a location parameter, equal to the mean when alpha > 1; gamma is a scale parameter; beta is a skewness parameter; alpha is the characteristic exponent; and i is the square root of -1.  Gnedenko and Kolmogorov (1954) showed that the log characteristic function of the sum of n independent and identically distributed Mandelbrot-Lévy variables is:

n log f(t) = i·n·delta·t - n·gamma·|t|^alpha·[1 + i·beta·sign(t)·tan(alpha·pi/2)],

and thus the distributions exhibit stability under addition.  Many applications of the central limit theorem demonstrate only Mandelbrot-Lévy character; the stronger result of normality generally depends on an unjustified assumption of finite variance.  Mandelbrot (1972a) introduced a technique for estimating alpha by regression, further refined by Lo (1991).  Mulligan (2000b) estimates the distribution of alpha for Cauchy-distributed random variables; this distribution is used to test estimated alphas against the Cauchy null.
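The characteristic function and the stability property can be checked numerically; a sketch for alpha ≠ 1 (function and variable names are illustrative):

```python
import math

def _sign(t):
    return (t > 0) - (t < 0)

def levy_log_cf(t, alpha, beta=0.0, gamma=1.0, delta=0.0):
    """Log characteristic function of a Mandelbrot-Levy (stable) variable
    in the parameterization given above (valid for alpha != 1)."""
    return (1j * delta * t
            - gamma * abs(t) ** alpha
              * (1.0 + 1j * beta * _sign(t) * math.tan(alpha * math.pi / 2.0)))

# Stability under addition: the sum of n i.i.d. variables has log
# characteristic function n*log f(t), i.e., delta -> n*delta, gamma -> n*gamma.
t = 0.75
three_summands = 3 * levy_log_cf(t, 1.5, beta=0.5, gamma=1.0, delta=0.2)
scaled_params = levy_log_cf(t, 1.5, beta=0.5, gamma=3.0, delta=0.6)
# With alpha = 2 and beta = 0 the formula reduces to the Gaussian log
# characteristic function, -gamma * t**2.
```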