Monetary Policy Regimes in Macroeconomic Data: an Application of Fractal Analysis
Robert F. Mulligan, Ph.D.
Department of Business Computer Information Systems & Economics
Phone: 828-227-3329
Fax: 828-227-7414
Email: mulligan@wcu.edu
Roger Koppl
Acknowledgements
Robert F. Mulligan is associate professor of economics in the Department of Business Computer Information Systems and Economics at Western Carolina University College of Business and a research associate of the State University of New York at
Abstract
This paper examines macromonetary data for behavioral stability over Alan Greenspan's tenure as chairman of the Federal Reserve System. Five self-affine fractal analysis techniques for estimating the Hurst exponent are employed.
Introduction
This paper examines the distribution of changes in a vector of macromonetary data. Statistical tests focusing on five alternative methods for estimating the Hurst exponent are applied.
The paper is organized as follows. A literature review is provided in the second section. The data are documented in the third section. Methodology and results are presented in the fourth and fifth sections. Conclusions are provided in the sixth section. A glossary and appendix explain the specialized statistical terminology used in this paper.
Mandelbrot's (1972a, 1975, 1977) and Mandelbrot and Wallis's (1969) R/S or rescaled range analysis characterizes time series as one of four types: 1.) dependent or autocorrelated series, 2.) persistent, trend-reinforcing series, also called biased random walks, random walks with drift, or fractional Brownian motion, 3.) random walks, or 4.) antipersistent, ergodic, or mean-reverting series. Mandelbrot-Lévy distributions are a general class of probability distributions derived from the generalized central limit theorem, and include the normal or Gaussian and Cauchy as limiting cases (Lévy 1925; Gnedenko and Kolmogorov 1954). They are also referred to as stable, Lévy-stable, L-stable, stable-Paretian, and Pareto-Lévy. Samuelson (1982) popularized the term Mandelbrot-Lévy, but Mandelbrot avoids this expression, perhaps out of modesty, and the other terms remain current. The reciprocal of the Mandelbrot-Lévy characteristic exponent alpha is the Hurst exponent H.
<<Table 1 about here>>
Literature review
The search for long memory in capital markets has been a fixture in the literature applying fractal geometry and chaos theory to economics since Mandelbrot (1963b) shifted his attention from income distribution to speculative prices. Fractal analysis has been applied extensively to equities (Greene and Fielitz 1977; Lo 1991; Barkoulas and Baum 1996; Peters 1994, 1996; Koppl et al 1997; Kraemer and Runde 1997; Barkoulas and Travlos 1998; Koppl and Nardone 2001; Mulligan 2004; Mulligan and Lombardo 2003), interest rates (Duan and Jacobs 1996; Barkoulas and Baum 1997a, 1997b), commodities (Barkoulas, Baum, and Oguz 1998), exchange rates (Cheung 1993; Byers and Peel 1996; Koppl and Yeager 1996; Barkoulas and Baum 1997c; Chou and Shih 1997; Andersen and Bollerslev 1997; Koppl and Broussard 1999; Mulligan 2000a), and derivatives (Fang, Lai, and Lai 1994; Barkoulas, Labys, and Onochie 1997; Corazza, Malliaris, and Nardelli 1997). Fractal analysis has also been applied to income distribution (Mandelbrot 1963a) and macroeconomic data (Peters 1994, 1996).
Gilanshah and Koppl (2001) advance the thesis that postwar money demand and monetary policy behavior were mostly stable from 1945-1970, but that instability emerged during the seventies as the Federal Reserve System adopted more activist policies and procedures. The present study contrasts the earlier and later years of Alan Greenspan's tenure as Chairman for evidence of a switch from nondiscretionary, nonactivist monetary policy to more discretionary, more activist behavior. If the Federal Reserve System switched from being a passive to an active market player after December 1996, the influence of this one "big player" would be to reduce the stability of money demand, as the many smaller players attempt to react to, as well as anticipate, big player moves. The smaller players' behavior should exhibit herding if it is difficult to anticipate or observe big player behavior, or if that behavior changes abruptly at the big player's discretion, and if it is relatively easy to observe the behavior of other small players. If the Federal Reserve System is a big player acting with discretion rather than according to rules, the many little players would not appear to be following any coherent behavior, even if they developed and followed consistent and rational strategic responses. Even if the little players respond according to set rules, because the big player acts unpredictably through discretion, the little players' behavior seems incoherent. If this reading is correct, the instability in money demand is not a statistical artifact of specification error, and cannot be removed by adding variables to conventional money demand models.
Big players induce herding in money demand. Gilanshah and Koppl (2001) found that Federal Reserve System policy grew more discretionary after 1970, and that the increase in big player influence reduced the stability of money demand. As the Federal Reserve System began to adopt more activist policy measures during the 1970s, estimates generated by standard money demand specifications began to show sizable prediction errors. If activist monetary policy does indeed impose instability, this implies that the Federal Reserve System should abandon discretion and pursue money supply targets according to fixed rules. This implication runs counter to a prevailing inference presented in the literature on money demand instability. Mishkin’s (1995:572) view is representative: “because the money demand function has become unstable, velocity is now harder to predict, and setting rigid money supply targets in order to control aggregate spending in the economy may not be an effective way to conduct monetary policy.” But as Gilanshah and Koppl (2001) argue, since the money demand instability results from Federal Reserve activism, the situation calls for less discretion, not more. In their view, one mechanism introducing herding or bandwagon effects in money demand is cash managers’ attempts to enhance their reputations, which enhances their job security and earning potential. Cash managers seek to enhance their reputations in a manner similar to, and for the same reasons as, portfolio managers (Scharfstein and Stein 1990). Cash managers achieve and maintain reputation through conformity with industry practice, a global criterion, and through conduct appropriate to the unique circumstances of their business enterprise, a local criterion. Pursuit of the global criterion imposes herding behavior or bandwagon effects. If cash managers act as others do and things go well, their reputation is assured. 
If they act as others do and things go badly, the blame is shared throughout the profession. If cash managers defy prevailing practice in their profession and things go badly, their reputation is ruined. Scharfstein and Stein (1990: 466) call this incentive to imitate standard practices the "sharing-the-blame effect."
If, however, cash managers defy prevailing practice and things go well, their reputation is strongly enhanced and they enjoy improved income prospects and job security. This is a powerful counterincentive to herding. Not all cash managers are constitutionally capable of acting independently of their peers, and some may require the security of the herd. Some cash managers will herd; others will not. Big player conduct affects the fraction that herds. Activist monetary policy impairs the value of local information which could be exploited by the more independent cash managers. Thus discretionary conduct by the monetary authorities promotes herding and introduces more volatility into macromonetary data.
<<Table 2 about here>>
Data
The data are monthly-observed monetary aggregates, ratios, and multipliers over the 1987-2003 range. Macroeconomic data, specifically output measures and interest rates, are also examined over the same period to determine whether their behavior appears significantly driven by the monetary data.
GMB is the logarithmic first difference of the monetary base.
GIIP is the first difference of the index of industrial production.
GC is the logarithmic first difference of real consumable output, which in turn is 100 times personal consumption expenditures divided by its deflator.
GP is the first difference of the personal consumption expenditures deflator.
GMM3 is the logarithmic first difference of the M3 monetary multiplier.
GERR is the first difference of the effective reserve requirement.
GCDD is the first difference of the currency-to-demand-deposit ratio.
GTDD is the first difference of the time-deposit-to-demand-deposit ratio.
GEDD is the first difference of the excess-reserve-to-demand-deposit ratio.
GI10Y is the first difference of the ten-year constant maturity government security interest rate.
GI3MO is the first difference of the three-month secondary market treasury bill interest rate.
GR is the first difference of the term spread, the ten-year constant maturity rate minus the three-month secondary market rate.
Time series already represented as interest rates, percentages, or ratios were simply first differenced without taking logarithms. Table 2 presents descriptive statistics for the differenced series.
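The transformations described in this section can be sketched as follows; the function names and the level values are hypothetical illustrations, not the paper's data:

```python
import numpy as np

def log_first_difference(levels):
    """Logarithmic first difference: ln(x_t) - ln(x_{t-1})."""
    x = np.asarray(levels, dtype=float)
    return np.diff(np.log(x))

def first_difference(levels):
    """Simple first difference, used for series already expressed as
    interest rates, percentages, or ratios."""
    x = np.asarray(levels, dtype=float)
    return np.diff(x)

# Hypothetical monetary base levels (illustrative numbers only)
mb = [2500.0, 2502.4, 2505.1, 2503.9]
gmb = log_first_difference(mb)   # constructed in the same way as GMB
```

Each differenced series is one observation shorter than the underlying level series.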
Methodology
Long memory series exhibit nonperiodic long cycles, or persistent dependence between observations far apart in time, i.e., observable patterns which tend to repeat. Long memory or persistent series tend to reverse themselves less often than a purely random series. Thus they display a trend, and are also called black noise, in contrast to purely random white noise. Persistent series have long memory in that events are correlated over long time periods. In contrast, short-term dependent time series include standard autoregressive moving average and Markov processes, and have the property that observations far apart exhibit little or no statistical dependence. R/S or rescaled range analysis distinguishes random from nonrandom or deterministic series. The rescaled range is the range divided (rescaled) by the standard deviation. Seemingly random time series may be deterministic chaos, fractional Brownian motion (FBM), or a mixture of random and nonrandom components.
Conventional statistical techniques lack power to distinguish unpredictable random components from highly predictable deterministic components. R/S analysis evolved to address this difficulty. R/S analysis exploits the structure of dependence in time series irrespective of their marginal distributions, statistically identifying nonperiodic cyclic long-run dependence as distinguished from short-term dependence or Markov character and periodic variation (Mandelbrot 1972a: 259-260). The difference between long-memory processes, also called nonperiodic long cycles, and short-term dependence is that each observation in a long-memory process has a persistent effect, on average, on all subsequent observations, up to some horizon after which memory is lost; in contrast, short-term dependent processes display little or no memory of the past, and what short-term dependence can be observed often diminishes with the square of the time elapsed. For equity prices, long memory can be observed when a stock follows a trend or repeats a cyclical movement, even though the cycles can have time-varying frequencies. Short-term dependence is indicated when there are no observable trends or patterns beyond a very short time span, and the impact of any outliers or extreme values diminishes rapidly over time.
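A minimal sketch of the R/S computation described above (illustrative only; the paper's estimator works on AR1 residuals and uses its own window scheme and bias corrections, which this sketch omits):

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic for one window: the range of cumulative deviations
    from the window mean, rescaled by the window standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std(ddof=0)

def hurst_rs(x, window_sizes=(8, 16, 32, 64)):
    """Estimate H as the slope of log(mean R/S) on log(n) across
    non-overlapping windows of several sizes n."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = [rescaled_range(x[i:i + n])
                   for i in range(0, len(x) - n + 1, n)]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope
```

For a purely random (white-noise) series the estimated slope should be near 0.50; persistent series yield slopes above 0.50 and antipersistent series slopes below it.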
Mandelbrot (1963a, 1963b) demonstrated that all stationary series can be categorized in accordance with their characteristic exponents.
Results
Many macromonetary series are antipersistent or ergodic, mean-reverting, or pink noise processes with 0.00 < H < 0.50, indicating they are more volatile than a random walk. Pink noise processes are used to model dynamic turbulence. Ergodic or antipersistent processes reverse themselves more often than purely random series. Ergodicity, that is, H significantly below 0.50, indicates policy makers persistently overreact to new information, imposing more macroeconomic volatility than would prevail in the absence of policy, and never learn not to overreact. This observed phenomenon is directly analogous to Mussa's (1984) disequilibrium overshooting, in which the market process of adjustment toward final equilibrium is unstable and never quiets down. Hs significantly different from 0.50 demonstrate these macroeconomic data series are not random walks.
<<Table 3 about here>>
This section discusses and interprets the results of five alternative fractal analysis methods for measuring the Hurst exponent H. The five techniques for estimating H are:
1.) Rescaled-range or R/S analysis: R/S analysis is the traditional technique introduced by Mandelbrot (1972a). Hs estimated by this method are generally far from 0.50, suggesting non-Gaussian processes. The difference between estimated Hs and 0.50 is statistically significant over the whole sample range and both subsamples for each series examined. Hs are always below 0.50, indicating ergodicity or antipersistence, i.e., negative serial correlation, meaning the data processes persistently overcorrect. This measurable antipersistence or ergodicity demonstrates policy makers habitually overreact to new information, and never learn not to.
Hs different from 0.50 demonstrate the data series have not been random walks; nevertheless, this finding may be due to short-term dependence still present after taking AR1 residuals, to systematic bias due to information asymmetries, or both.
2.) Power spectral density analysis: Power spectral density analysis could only obtain estimates of H for the whole 1989-2003 sample period. Hs estimated by this technique also fall in the antipersistent range (H < 0.50), except for the consumption price deflator (GP) and the excess-reserve-to-demand-deposit ratio (GEDD). Note these results often flatly contradict those provided by other techniques. Spectral density often provides very large standard errors for H, and thus formal hypothesis tests are generally biased against rejecting the null. However, the standard errors of the Hs of GP and GEDD are quite low, supporting the conclusion that they are normally-distributed, white-noise processes.
3.) Roughness-length relationship method: Formal hypothesis tests reject the Gaussian null for all series, and all Hs are significantly less than 0.50, indicating antipersistence.
4.) Variogram analysis: Variogram analysis supports antipersistence for all series.
5.) Wavelet analysis: This method was developed by Daubechies (1990), Beylkin (1992), and Coifman et al (1992). Wavelet H estimates indicate antipersistence or ergodicity (H < 0.50) for the index of industrial production (GIIP) (whole and early samples), the M3 money multiplier (GMM3) (both subsamples, but not over the whole sample range), the effective required reserve ratio (GERR) (late sample only), the currency-to-demand-deposit ratio (GCDD) (early sample only), the excess-reserve-to-demand-deposit ratio (GEDD) (all ranges), the ten-year government security rate (GI10Y) (whole and early samples), the three-month treasury bill secondary market rate (GI3MO) (all ranges), and the term spread (GR) (all ranges); elsewhere the estimates indicate persistence (H > 0.50) or, in some cases, normality. Because wavelet analysis does not provide a standard error for H, formal hypothesis tests cannot be constructed.
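Item 2's power-spectral-density approach can be sketched as follows. This is a minimal version assuming the fractional-Gaussian-noise relation beta = 2H - 1 between the spectral slope and H (implementations differ in this mapping), and it omits the frequency trimming and standard errors a production estimator would need:

```python
import numpy as np

def hurst_spectral(x):
    """Sketch of a power-spectral-density H estimate: fit a power law
    S(f) ~ f**(-beta) by OLS in log-log space, then map the slope to H
    via beta = 2H - 1 (an assumed fractional-Gaussian-noise relation)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))
    mask = freqs > 0                      # drop the zero frequency
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)
    beta = -slope
    return (beta + 1.0) / 2.0
```

A flat spectrum (beta near 0) maps to H near 0.50, consistent with white noise; a spectrum rising at low frequencies maps to H above 0.50.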
<<Table 4 about here>>
Hypothesis tests are constructed to test for 1.) the Gaussian character or normality of the underlying time series, 2.) Cauchy character, and 3.) changes in the behavior of the distribution between 1987-1996 and 1997-2003:
1.) Tests of Gaussian character or normality: Table 4 presents t-statistics for tests of the null hypothesis H = 0.50, along with two-tail probability levels. T-statistics are computed as 0.50 minus the estimated H, divided by the standard error of the estimate.
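The t-statistic just described can be computed as follows; the example numbers are illustrative, not values from Table 4:

```python
def t_stat_gaussian_null(h_estimate, std_error):
    """t-statistic for the null H = 0.50: the gap between 0.50 and the
    estimated H, divided by the standard error of the estimate."""
    return (0.50 - h_estimate) / std_error

# Illustrative values only
t = t_stat_gaussian_null(0.42, 0.02)   # (0.50 - 0.42) / 0.02 = 4.0
```

A large positive t-statistic rejects normality in favor of antipersistence (H below 0.50); a large negative value would indicate persistence.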
2.) Tests of Cauchy character: the Mandelbrot-Lévy characteristic exponent test: Various statistics are available to test the null hypothesis of normality, but not for the Cauchy distribution, the other extreme. The Mandelbrot-Lévy characteristic exponent alpha is computed as the reciprocal of the estimated Hurst exponent H.
For the 1987-1996 subsample, the sample size is 113, and interpolated critical alphas are: 1%, 0.420; 5%, 0.568; 10%, 0.641; 90%, 1.247; 95%, 1.342; 99%, 1.607. Only wavelet-estimated alphas approach the critical range. Over this earlier subsample, the index of industrial production and the time-deposit-to-demand-deposit ratio fail to reject the Cauchy null at the 1% significance level (one tail). The effective reserve requirement fails to reject the Cauchy null at all conventional significance levels.
For the 1997-2003 subsample, the sample size is 75, and interpolated critical alphas are: 1%, 0.332; 5%, 0.510; 10%, 0.590; 90%, 1.293; 95%, 1.438; 99%, 1.800. Again, only wavelet-estimated alphas approach the critical range. Over the later subsample, real consumable output, the currency-to-demand-deposit ratio, and the time-deposit-to-demand-deposit ratio fail to reject the null hypothesis at all conventional significance levels.
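The characteristic-exponent test reduces to computing alpha = 1/H and comparing it with the interpolated critical values quoted above. A sketch, using a hypothetical H and the later subsample's 5% and 95% critical alphas:

```python
def characteristic_exponent(h_estimate):
    """Mandelbrot-Lévy characteristic exponent alpha, the reciprocal of H."""
    return 1.0 / h_estimate

def rejects_cauchy(alpha, lower, upper):
    """Reject the Cauchy null (alpha = 1) when the estimated alpha falls
    outside the critical range for the chosen significance level."""
    return not (lower <= alpha <= upper)

# Hypothetical H of 0.40 for the 1997-2003 subsample; 5% and 95%
# critical alphas (0.510 and 1.438) are taken from the text above
alpha = characteristic_exponent(0.40)         # alpha = 2.5
outside = rejects_cauchy(alpha, 0.510, 1.438)  # True: Cauchy null rejected
```

An estimated alpha near 1 would fall inside the critical range and fail to reject the Cauchy null; the antipersistent estimates reported here imply alphas well above 2.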
<<Table 5 about here>>
3.) Tests of structural change: Table 5 presents t-statistics testing for significant differences among Hs estimated over the whole sample range and the two subsamples, referred to as ranges A, B, and C. The first null hypothesis tested for each series is that the Hs for the two subsamples are equal (B=C), with degrees of freedom equal to the sum of sample average R/Ss in the two regressions estimating H for each subsample (4 + 3 = 7). The second and third null hypotheses are that the H for each subsample is equal to the H estimated over the whole sample (A=B and A=C), with degrees of freedom equal to the number of R/Ss in the whole-sample regression (9), because the standard error of the whole-sample H is treated as the pooled standard error.
Although not every test indicates a break or change in structural behavior, the number that do is overwhelming: 32 of 36 tests reject the hypothesis of stable Hs across subsamples at the 1% significance level. It is difficult to avoid the conclusion of a fairly sharp break, indicating a drastic change in the statistical behavior and distributions of the data processes examined.
With time series that may be fractional Gaussian noise, that is, apparently random combinations of otherwise statistically well-behaved processes scrambled together with periodically changing parameters and characteristics, it is not strictly correct to infer structural change in the conventional sense. For example, a random scrambling of several different finite-variance processes can result in an infinite-variance process over a larger sample range. The finding that H is not constant over two subsamples and the whole sample range is wholly consistent with a stable fractal process, but more importantly, it points to some difference in fundamentals, or at least in the behavior of the variable studied, from one period to the other.
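The subsample comparisons above amount to simple t-ratios for the difference between two Hurst estimates. The sketch below combines the two standard errors in quadrature, which is an assumption; the paper instead treats the whole-sample standard error as the pooled standard error for the A=B and A=C comparisons:

```python
import math

def difference_t(h_1, h_2, se_1, se_2):
    """t-statistic for the null that two Hurst estimates are equal,
    combining the standard errors in quadrature (an assumption; the
    paper's A=B and A=C tests use the whole-sample s.e. instead)."""
    return (h_1 - h_2) / math.sqrt(se_1 ** 2 + se_2 ** 2)

# Illustrative values only, not entries from Table 5
t = difference_t(0.45, 0.35, 0.03, 0.04)   # 0.10 / 0.05 = 2.0
```

The resulting t-statistic is referred to the appropriate degrees of freedom, as described above.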
Conclusion
The logarithmic differences of macroeconomic data for a stable and growing economy should have Hurst exponents of approximately 0.50, the value characteristic of white noise; the estimates reported above are instead generally significantly below 0.50, indicating antipersistence.
A possible scenario that renders this finding more intuitive is that information relevant to a nation's macroeconomic performance arrives frequently and seemingly at random. Policy makers habitually ignore the vast majority of this information, because the vast majority is unimportant or irrelevant, until it accumulates a critical mass they must finally recognize. Then, perceiving they have ignored a body of relevant information which they have allowed to accumulate, they attempt to compensate for their history of informational sloth by overreacting. The expression "informational sloth" can just as validly be characterized as "filtering out noise."
References
Andersen, Torben G.; Bollerslev, Tim. 1997. "Heterogeneous Information Arrivals and Return Volatility Dynamics: Uncovering the Long-run in High Frequency Returns," Journal of Finance, 52(3): 975-1005.
Barkoulas, John T.; Baum, Christopher F. 1996. "Long-term Dependence in Stock Returns," Economics Letters, 53: 253-259.
_____; _____. 1997a. "Fractional Differencing Modeling and Forecasting of Eurocurrency Deposit Rates," Journal of Financial Research, 20(3): 355-372.
_____; _____. 1997b. "Long Memory and Forecasting in Euroyen Deposit Rates," Financial Engineering and the Japanese Markets, 4(3): 189-201.
_____; _____. 1997c. "A Reexamination of the Fragility of Evidence from Cointegration-based Tests of Foreign Exchange Market Efficiency," Applied Financial Economics, 7: 635-643.
Barkoulas, John T.; Baum, Christopher F.; Oguz, Gurkan S. 1998. "Stochastic Long Memory in Traded Goods Prices," Applied Economics Letters, 5: 135-138.
Barkoulas, John T.; Labys, Walter C.; Onochie, Joseph. 1997. "Fractional Dynamics in International Commodity Prices," Journal of Futures Markets, 17(2): 161-189.
Barkoulas, John T.; Travlos, Nickolaos. 1998. "Chaos in an Emerging Capital Market? The Case of the
Beylkin, Gregory. 1992. "On the Representation of Operators in Bases of Compactly Supported Wavelets,"
Black, Fisher; Scholes, Myron. 1972. "The Valuation of Option Contracts and a Test of Market Efficiency," Journal of Finance, 27: 399-418.
_____; _____. 1973. "The Pricing of Options and Corporate Liabilities," Journal of Political Economy, 81: 637-654.
Byers, J.D.; Peel, D.A. 1996. "Long-memory Risk Premia in Exchange Rates," Manchester School of Economic and Social Studies, 64(4): 421-438.
Calvet, Laurent; Fisher, Adlai; Mandelbrot, Benoit B. 1997. "Large Deviations and the Distribution of Price Changes," Cowles Foundation Discussion Paper no. 1165,
Cheung, Yin-Wong. 1993. "Tests for Fractional Integration: a
Cheung, Yin-Wong; Lai, Kon S. 1993. "Do Gold Market Returns Have Long Memory?" The Financial Review, 28(3): 181-202.
Chou, W.L.; Shih, Y.C. 1997. "Long-run Purchasing Power Parity and Long-term Memory: Evidence from Asian Newly-industrialized Countries," Applied Economics Letters, 4: 575-578.
Coifman, Ronald; Ruskai, Mary Beth; Beylkin, Gregory; Daubechies, Ingrid; Mallat, Stephane; Meyer, Yves; Raphael, Louise, eds. 1992. Wavelets and Their Applications.
Corazza, Marco; Malliaris, A.G.; Nardelli, Carla. 1997. "Searching for Fractal Structure in Agricultural Futures Markets," Journal of Futures Markets, 71(4): 433-473.
Daubechies, Ingrid. 1990. "The Wavelet Transform, Time-frequency Localization and Signal Analysis," IEEE Transactions on Information Theory, 36: 961-1005.
Diebold, Francis X.; Inoue, A. 2000. "Long Memory and Regime Switching," National Bureau of Economic Research technical working paper no. 264.
Duan, Jin-Chuan; Jacobs, Kris. 1996. "A Simple Long-memory Equilibrium Interest Rate Model," Economics Letters, 53: 317-321.
Fama, Eugene; Fisher, L.; Jensen, M.; Roll, R. 1969. "The Adjustment of Stock Prices to New Information," International Economic Review, 10: 1-21.
Fang, H.; Lai, Kon S.; Lai, M. 1994. "Fractal Structure in Currency Futures Prices," Journal of Futures Markets, 14: 169-181.
Fisher, Adlai; Calvet, Laurent; Mandelbrot, Benoit B. 1997. "Multifractality of Deutschemark/US Dollar Exchange Rates," Cowles Foundation Discussion Paper no. 1166,
Gilanshah and Koppl (2001)
Gnedenko, Boris Vladimirovich; Kolmogorov, Andrei Nikolaevich. 1954. Limit Distributions for Sums of Random Variables,
Granger, C.W.J. 1989. Forecasting in Business and Economics, 2nd ed.,
Granger, C.W.J.; Hyung, N. 1999. "Occasional Structural Breaks and Long Memory," discussion paper 99-14,
Greene, M.T.; Fielitz, B.D. 1977. "Long-term Dependence in Common Stock Returns," Journal of Financial Economics, 5: 339-349.
Heiner, Ronald A. 1983. "The Origin of Predictable Behavior," American Economic Review, 73(3): 560-595.
Hurst, H. Edwin. 1951. "Long-term Storage Capacity of Reservoirs," Transactions of the American Society of Civil Engineers, 116: 770-799.
Jarque, C.M.; Bera, A.K. 1980. "Efficient Tests for Normality, Homoskedasticity, and Serial Independence of Regression Residuals," Economics Letters, 6: 255-259.
Kaen, Fred R.; Rosenman, Robert E. 1986. "Predictable Behavior in Financial Markets: Some Evidence in Support of Heiner's Hypothesis," American Economic Review, 76(1): 212-220.
Kraemer, Walter; Runde, Ralf. 1997. "Chaos and the Compass Rose," Economics Letters, 54: 113-118.
Koppl, Roger; Ahmed, Ehsan; Rosser, J. Barkley; White, Mark V. 1997. "Complex Bubble Persistence in Closed-End Country Funds," Journal of Economic Behavior and Organization, 32(1): 19-37.
Koppl, Roger; Broussard, John. 1999. "Big Players and the Russian Ruble: Explaining Volatility Dynamics," Managerial Finance, 25(1): 49-63.
Koppl, Roger; Nardone, Carlo. 2001. "The Angular Distribution of Asset Returns in Delay Space," Discrete Dynamics in Nature and Society, 6: 101-120.
Koppl, Roger; Yeager, Leland. 1996. "Big Players and Herding in Asset Markets: The Case of the Russian Ruble," Explorations in Economic History, 33(3): 367-383.
Lévy, Paul. 1925. Calcul des Probabilités,
Lo, Andrew W. 1991. "Long-term Memory in Stock Market Prices," Econometrica, 59(3): 1279-1313.
Malkiel, Burton G. 1987.
Mandelbrot, Benoit B. 1963a. "New Methods in Statistical Economics," Journal of Political Economy, 71(5): 421-440.
_____. 1963b. "The Variation of Certain Speculative Prices," Journal of Business, 36(3): 394-419.
_____. 1972a. "Statistical Methodology for Nonperiodic Cycles: From the Covariance to R/S Analysis," Annals of Economic and Social Measurement, 1(3): 255-290.
_____. 1972b. "Possible Refinements of the Lognormal Hypothesis Concerning the Distribution of Energy Dissipation in Intermittent Turbulence," in M. Rosenblatt and C. Van Atta, eds., Statistical Models and Turbulence,
_____. 1974. "Intermittent Turbulence in Self Similar Cascades: Divergence of High Moments and Dimension of the Carrier," Journal of Fluid Mechanics, 62: 331-358.
_____. 1975. "Limit Theorems on the Self-normalized Range for Weakly and Strongly Dependent Processes," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 31: 271-285.
_____. 1977. The Fractal Geometry of Nature,
Mandelbrot, Benoit B.; Fisher, Adlai; Calvet, Laurent. 1997. "A Multifractal Model of Asset Returns," Cowles Foundation Discussion Paper no. 1164,
Mandelbrot, Benoit B.; van
Mandelbrot, Benoit B.; Wallis, James R. 1969. "Robustness of the Rescaled Range R/S in the Measurement of Noncyclic Long-run Statistical Dependence," Water Resources Research, 5(4): 976-988.
Mulligan, Robert F. 2000a. "A Fractal Analysis of Foreign Exchange Markets," International Advances in Economic Research, 6(1): 33-49.
_____. 2000b. "A Characteristic Exponent Test for the Cauchy Distribution," Atlantic Economic Journal, 28(4): 491.
_____. 2004. "Fractal Analysis of Highly Volatile Markets: an Application to Technology Equities," Quarterly Review of Economics and Finance, 44(1).
_____. 2003. "Maritime Businesses: Volatile Prices and Market Valuation Inefficiencies," Quarterly Review of Economics and Finance, forthcoming.
Mussa, Michael. 1984. "The Theory of Exchange Rate Determination," in John F.O. Bilson and Richard C. Marston, eds., Exchange Rate Theory and Practice,
Neely, Christopher; Weller, Paul; Dittmar, Robert. 1997. "Is Technical Analysis in the Foreign Exchange Market Profitable? A Genetic Programming Approach," Journal of Financial and Quantitative Analysis, 32(4): 405-426.
Osborne, M. 1959. "Brownian Motions in the Stock Market," Operations Research, 7: 145-173.
Peters, Edgar E. 1994. Fractal Market Analysis,
_____. 1996. Chaos and Order in the Capital Markets: a New View of Cycles, Prices, and Market Volatility, second edition,
_____. 1999. Complexity, Risk, and Financial Markets,
Samuelson, Paul A. 1982. "Paul Cootner's Reconciliation of Economic Law with Chance," in William F. Sharpe and Cathryn M. Cootner, eds., Financial Economics: Essays in Honor of Paul Cootner,
Glossary of Fractal Analysis Terms
Antipersistence – a series that reverses itself more often than a purely random series, also called pink noise, ergodicity, 1/f noise, or negative serial correlation. (Peters 1994: 306).
Black noise – a series that reverses itself less often than a purely random series, displaying trends or repetitive patterns over time, also called persistence, positive serial correlation, or autocorrelation. (Peters 1994: 183-187).
Brown noise – the cumulative sum of a normally-distributed random variable, also called Brownian motion. (Peters 1994: 183-185; Osborne 1959).
Efficient Market Hypothesis – the proposition that market prices fully and correctly reflect all relevant information. A market is described as efficient with respect to an information set if prices do not change when the information is revealed to all market participants. There are three levels of market efficiency: weak, semi-strong, and strong. (Fama et al 1969; Malkiel 1987).
Long memory – the property that any value a series takes on generally has a long and persistent effect, e.g., extreme values that repeat at fairly regular intervals. (Peters 1994: 274). 
Multifractal Model of Asset Returns (MMAR) – a very general model of asset pricing behavior allowing for long-memory and fat-tailed distributions. Instead of infinite-variance distributions such as the Mandelbrot-Lévy and Cauchy distributions, the MMAR relies on fractional combinations of random variables with nonconstant mean and variance, providing many of the properties of infinite-variance distributions. (Mandelbrot, Fisher, and Calvet 1997).
Nonperiodic long cycles – a characteristic of long-memory processes, i.e., of statistical processes where each value has a long and persistent impact on values that follow it, that identifiable patterns tend to repeat over similar, though irregular, cycles (nonperiodic cycles). Also called the Joseph effect. (Peters 1994: 266).
Nonstationarity – the property that a series has a systematically varying mean and variance. Any series with a trend, e.g., U.S. GDP, has a growing mean and therefore is nonstationary. Brown-noise processes are nonstationary, but white-noise processes are stationary. (Granger 1989: 58).
Persistence or persistent dependence – a series that reverses itself less often than a purely random series, and thus tends to display a trend, also called black noise. Persistent series have long memory in that events are correlated over long time periods, and thus display nonperiodic long cycles. (Peters 1994: 310).
Semi-strong-form Market Efficiency – the intermediate form of the efficient market hypothesis, asserting that market prices incorporate all publicly available information, including both historical data on the prices in question and any other relevant, publicly-available data, and thus it is impossible for any market participant to gain advantage and earn excess profits in the absence of inside information. (Peters 1994: 308).
Short-term dependence – the property that any value a series takes on generally has a transient effect, e.g., extreme values bias the series for a certain number of observations that follow. Eventually, however, all memory of the extreme event is lost, in contrast to long memory or the Joseph effect. Special cases include Markov processes and serial correlation. (Peters 1994: 274).
Spectral density or Powerspectral Density Analysis – a fractal analysis based on the power spectra calculated through the Fourier transform of a series. (Peters 1994: 170171). 
Stationarity – the property that a series has a constant mean and variance. White, pink, and blacknoise processes are all stationary. Because it is the cumulative sum of a whitenoise process, a brownnoise process is nonstationary. (Granger 1989: 58). 
Strongform Market Efficiency – the most restrictive version of the efficient market hypothesis, asserting that all information known to any one market participant is fully reflected in the price, and thus insider information provides no speculative advantage and cannot offer above average returns. (Malkiel 1987: 120). 
Weak-form Market Efficiency – the least restrictive version of the efficient market hypothesis, asserting that current prices fully reflect the historical sequence of past prices. One implication is that investors cannot obtain above-average returns by analyzing patterns in historical data, i.e., through technical analysis. Also referred to as the Random Walk Hypothesis. One common way of testing for weak-form efficiency is to test price series for normality; however, normality is a sufficient rather than a necessary condition. (Malkiel 1987: 120). 
White noise – a perfectly random process exhibiting no serial dependence. Normal processes meet this requirement, and normality is often conflated with white noise. Normality is a sufficient condition rather than a necessary condition for white noise. (Peters 1994: 312). 
Table 1 Fractal Taxonomy of Time Series

| Term | 'Color' | Hurst exponent H | Fractal dimension D | Characteristic exponent alpha |
|---|---|---|---|---|
| Antipersistent, ergodic, mean-reverting, negative serial correlation, 1/f noise | Pink noise | 0 ≤ H < ½ | 1.50 < D ≤ 2.00 | 2.00 < alpha ≤ ∞ |
| Gaussian process, normal distribution | White noise | H = ½ | D = 1.50 | alpha = 2.00 |
| Brownian motion, Wiener process | Brown noise | H = ½ | D = 1.50 | alpha = 2.00 |
| Persistent, trend-reinforcing, Hurst process | Black noise | ½ < H < 1 | 1.00 < D < 1.50 | 1.00 < alpha < 2.00 |
| Cauchy process, Cauchy distribution | Cauchy noise | H = 1 | D = 1 | alpha = 1 |
Note: Brown noise or Brownian motion is the cumulative sum of a normally-distributed white-noise process. The changes in, or returns on, a Brownian motion are white noise. The fractal statistics are the same for brown and white noise because the brown-noise process should be differenced as part of the estimation process, yielding white noise. 
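The relationship in the note can be verified directly: cumulating white noise produces brown noise, and first-differencing the brown-noise series recovers the white noise exactly. A minimal sketch (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
white = rng.normal(size=1000)   # stationary white noise
brown = np.cumsum(white)        # its cumulative sum: nonstationary brown noise

# First-differencing the brown-noise process recovers the white noise
# exactly, which is why the two share the same fractal statistics once
# the brown-noise series is differenced before estimation.
recovered = np.diff(brown, prepend=0.0)
assert np.allclose(recovered, white)
```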
Table 2 Macromonetary Data Series Descriptive Statistics

| Underlying variable | Rubric | Mean | Median | Standard Deviation | Sample Variance | Kurtosis | Skewness |
|---|---|---|---|---|---|---|---|
| ln Monetary Base | GMB | 0.000968 | 0.000987 | 0.001388 | 1.93E-06 | 7.824068 | 0.18583 |
| Index of Industrial Production | GIIP | 0.198729 | 0.26 | 0.477671 | 0.22817 | 0.160655 | 0.08616 |
| ln Real Consumable Output | GC | 0.000302 | 0.000271 | 0.000506 | 2.56E-07 | 0.891277 | 0.379863 |
| Consumption Price Index | GP | 35.00091 | 0.2 | 477.1887 | 227709.1 | 188 | 13.71131 |
| M3 Money Multiplier | GMM3 | 0.00042 | 0.00054 | 0.003371 | 1.14E-05 | 6.794751 | 1.14446 |
| Effective Reserve Requirement | GERR | 6.9E-05 | 3.2E-05 | 0.000429 | 1.84E-07 | 8.645952 | 1.7519 |
| Currency-to-Demand-Deposit Ratio | GCDD | 0.006258 | 0.002839 | 0.019836 | 0.000393 | 0.902251 | 0.347443 |
| Time-Deposit-to-Demand-Deposit Ratio | GTDD | 0.003564 | 0.001313 | 0.022226 | 0.000494 | 0.12213 | 0.136709 |
| Excess-Reserve-to-Demand-Deposit Ratio | GEDD | 0.082139 | 0.015298 | 1.005744 | 1.011522 | 176.6499 | 13.08789 |
| 10-year T-bond | GI10Y | 0.00354 | 0.00821 | 0.037183 | 0.001383 | 0.14438 | 0.235176 |
| 3-month T-bill | GI3MO | 0.00728 | 0.00399 | 0.049516 | 0.002452 | 3.772713 | 1.1582 |
| Term spread | GR | 0.126294 | 0.0047 | 1.593287 | 2.538563 | 159.8713 | 12.08197 |
Note: All raw time series are converted to logarithmic returns or simple first differences, thus rendering them stationary. 
Table 3 Fractal Analyses of Macromonetary Data Processes: Estimated Hurst Exponent H, Various Methods (Standard Errors in Parentheses)

| Series | Range | R/S | Power Spectrum | Roughness-length | Variogram | Wavelet |
|---|---|---|---|---|---|---|
| GMB | 1987:08–2003:03 | 0.091 (0.0154) | 0.0495 (5.372) | 0.043 (0.0018) | 0.001 (0.1645) | 0.651 |
| | 1987:08–1996:12 | 0.097 (0.0051) | | | | 0.611 |
| | 1997:01–2003:03 | 0.234 (0.0001) | | | | 0.541 |
| GIIP | 1987:08–2003:03 | 0.095 (0.0136) | 0.203 (3.511) | 0.032 (0.0016) | 0.066 (0.0219) | 0.179 |
| | 1987:08–1996:12 | 0.030 (0.0025) | | | | 0.647 |
| | 1997:01–2003:03 | 0.000 (0.0041) | | | | 0.166 |
| GC | 1987:08–2003:03 | 0.038 (0.0028) | 0.204 (9.2004) | 0.096 (0.0001) | 0.028 (0.0218) | 0.875 |
| | 1987:08–1996:12 | 0.042 (0.0033) | | | | 0.600 |
| | 1997:01–2003:03 | 0.026 (0.0004) | | | | 0.890 |
| GP | 1987:08–2003:03 | 0.117 (0.0094) | 0.500 (0.0000009) | 0.185 (0.0002) | 0.017 (0.0004) | 0.431 |
| | 1987:08–1996:12 | 0.020 (0.0003) | | | | 0.584 |
| | 1997:01–2003:03 | 0.071 (0.0008) | | | | 0.431 |
| GMM3 | 1987:08–2003:03 | 0.011 (0.0106) | 0.292 (5.3671) | 0.074 (0.0015) | 0.031 (0.0946) | 0.658 |
| | 1987:08–1996:12 | 0.008 (0.0030) | | | | 0.349 |
| | 1997:01–2003:03 | 0.078 (0.0015) | | | | 0.421 |
| GERR | 1987:08–2003:03 | 0.069 (0.00934) | 0.442 (3.3994) | 0.031 (0.0001) | 0.006 (0.0684) | 0.833 |
| | 1987:08–1996:12 | 0.077 (0.0077) | | | | 0.861 |
| | 1997:01–2003:03 | 0.188 (0.0005) | | | | 0.388 |
| GCDD | 1987:08–2003:03 | 0.073 (0.0067) | 0.475 (12.4782) | 0.099 (0.0005) | 0.041 (0.3649) | 0.857 |
| | 1987:08–1996:12 | 0.101 (0.0031) | | | | 0.364 |
| | 1997:01–2003:03 | 0.218 (0.0002) | | | | 0.945 |
| GTDD | 1987:08–2003:03 | 0.076 (0.0104) | 0.442 (21.1959) | 0.094 (0.0004) | 0.038 (0.3892) | 0.778 |
| | 1987:08–1996:12 | 0.136 (0.0042) | | | | 0.732 |
| | 1997:01–2003:03 | 0.206 (0.0015) | | | | 0.802 |
| GEDD | 1987:08–2003:03 | 0.116 (0.0072) | 0.456 (0.0702) | 1.320 (0.0459) | 0.010 (0.0025) | 0.135 |
| | 1987:08–1996:12 | 0.237 (0.0015) | | | | 0.306 |
| | 1997:01–2003:03 | 0.001 (0.0051) | | | | 0.332 |
| GI10Y | 1987:08–2003:03 | 0.120 (0.0064) | 0.426 (9.1405) | 0.051 (0.0001) | 0.037 (0.0416) | 0.048 |
| | 1987:08–1996:12 | 0.064 (0.0029) | | | | 0.126 |
| | 1997:01–2003:03 | 0.020 (0.0006) | | | | 0.546 |
| GI3MO | 1987:08–2003:03 | 0.179 (0.0091) | 0.224 (4.2599) | 0.066 (0.0001) | 0.083 (0.0217) | 0.327 |
| | 1987:08–1996:12 | 0.117 (0.0007) | | | | 0.258 |
| | 1997:01–2003:03 | 0.214 (0.0001) | | | | 0.325 |
| GR | 1987:08–2003:03 | 0.253 (0.0047) | 0.476 (0.6939) | 0.205 (0.0010) | 0.026 (0.0058) | 0.327 |
| | 1987:08–1996:12 | 0.277 (0.0083) | | | | 0.185 |
| | 1997:01–2003:03 | 0.214 (0.0001) | | | | 0.329 |
Note: The Mandelbrot-Lévy characteristic exponent alpha is the reciprocal of the Hurst exponent H. 
Table 4 Hypothesis Tests for Normality or Gaussian Character (H = 0.50)

(Columns R/S, SE(R/S), t(R/S), and prob(t) report the Mandelbrot-Lévy R/S test; columns JB and prob(JB) report the Jarque-Bera test.)

| Process | Range | d.f. | R/S | SE(R/S) | t(R/S) | prob(t) | JB | prob(JB) |
|---|---|---|---|---|---|---|---|---|
| GMB | 1987–2003 | 9 | 0.091 | 0.0154 | 38.377 | 0.00000 | 451.827 | 0.00000 |
| | 1987–1996 | 4 | 0.097 | 0.0051 | 117.059 | 0.00000 | 1.727 | 0.42158 |
| | 1997–2003 | 3 | 0.234 | 0.0001 | 2660.000 | 0.00000 | 282.706 | 0.00000 |
| GIIP | 1987–2003 | 9 | 0.095 | 0.0136 | 29.779 | 0.00000 | 0.351 | 0.83919 |
| | 1987–1996 | 4 | 0.030 | 0.0025 | 212.000 | 0.00000 | 0.878 | 0.64472 |
| | 1997–2003 | 3 | 0.000 | 0.0041 | 121.951 | 0.00000 | 0.240 | 0.88711 |
| GC | 1987–2003 | 9 | 0.038 | 0.0028 | 165.000 | 0.00000 | 9.924 | 0.00700 |
| | 1987–1996 | 4 | 0.042 | 0.0033 | 164.242 | 0.00000 | 1.020 | 0.60053 |
| | 1997–2003 | 3 | 0.026 | 0.0004 | 1315.000 | 0.00000 | 23.095 | 0.00001 |
| GP | 1987–2003 | 9 | 0.117 | 0.0094 | 40.745 | 0.00000 | 268,142.6 | 0.00000 |
| | 1987–1996 | 4 | 0.020 | 0.0003 | 1733.333 | 0.00000 | 8.947 | 0.01141 |
| | 1997–2003 | 3 | 0.071 | 0.0008 | 713.750 | 0.00000 | 16,218.58 | 0.00000 |
| GMM3 | 1987–2003 | 9 | 0.011 | 0.0106 | 46.132 | 0.00000 | 379.916 | 0.00000 |
| | 1987–1996 | 4 | 0.008 | 0.0030 | 169.333 | 0.00000 | 2.731 | 0.25519 |
| | 1997–2003 | 3 | 0.078 | 0.0015 | 281.333 | 0.00000 | 131.877 | 0.00000 |
| GERR | 1987–2003 | 9 | 0.069 | 0.0093 | 46.344 | 0.00000 | 645.510 | 0.00000 |
| | 1987–1996 | 4 | 0.077 | 0.0077 | 54.935 | 0.00000 | 460.666 | 0.00000 |
| | 1997–2003 | 3 | 0.188 | 0.0005 | 1376.000 | 0.00000 | 9.768 | 0.00757 |
| GCDD | 1987–2003 | 9 | 0.073 | 0.0067 | 85.522 | 0.00000 | 9.338 | 0.00938 |
| | 1987–1996 | 4 | 0.101 | 0.0031 | 193.871 | 0.00000 | 13.159 | 0.00139 |
| | 1997–2003 | 3 | 0.218 | 0.0002 | 3590.000 | 0.00000 | 2.699 | 0.25937 |
| GTDD | 1987–2003 | 9 | 0.076 | 0.0104 | 55.385 | 0.00000 | 0.754 | 0.68590 |
| | 1987–1996 | 4 | 0.136 | 0.0042 | 151.429 | 0.00000 | 4.094 | 0.12913 |
| | 1997–2003 | 3 | 0.206 | 0.0015 | 470.667 | 0.00000 | 2.193 | 0.33408 |
| GEDD | 1987–2003 | 9 | 0.116 | 0.0072 | 53.333 | 0.00000 | 236,901.3 | 0.00000 |
| | 1987–1996 | 4 | 0.237 | 0.0015 | 175.333 | 0.00000 | 29.563 | 0.00000 |
| | 1997–2003 | 3 | 0.001 | 0.0051 | 97.843 | 0.00000 | 15,364.48 | 0.00000 |
| GI10Y | 1987–2003 | 9 | 0.120 | 0.0064 | 59.375 | 0.00000 | 1.938 | 0.37947 |
| | 1987–1996 | 4 | 0.064 | 0.0029 | 194.483 | 0.00000 | 1.799 | 0.40673 |
| | 1997–2003 | 3 | 0.020 | 0.0006 | 800.000 | 0.00000 | 1.007 | 0.60434 |
| GI3MO | 1987–2003 | 9 | 0.179 | 0.0091 | 35.275 | 0.00000 | 145.229 | 0.00000 |
| | 1987–1996 | 4 | 0.117 | 0.0007 | 547.143 | 0.00000 | 2.116 | 0.34710 |
| | 1997–2003 | 3 | 0.214 | 0.0001 | 2860.000 | 0.00000 | 55.135 | 0.00000 |
| GR | 1987–2003 | 9 | 0.253 | 0.0047 | 52.553 | 0.00000 | 194,203.4 | 0.00000 |
| | 1987–1996 | 4 | 0.277 | 0.0083 | 26.867 | 0.00001 | 1,496.471 | 0.00000 |
| | 1997–2003 | 3 | 0.214 | 0.0001 | 2860.000 | 0.00000 | 12,463.68 | 0.00000 |

Note: Hs computed by R/S are used for conventional hypothesis tests where the null hypothesis is H = 0.500 (equivalently, alpha = 2, D = 1.500, or normality of the returns). Three independent hypothesis tests are performed for each time series. 
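The Jarque-Bera statistic used above combines sample skewness and kurtosis into a single test of the Gaussian null. A minimal sketch of its computation, using the conventional biased moment estimators (function name illustrative):

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera statistic JB = (n/6) * (S^2 + (K - 3)^2 / 4), where S and
    K are the sample skewness and (Pearson) kurtosis. Under normality JB is
    asymptotically chi-squared with 2 degrees of freedom."""
    x = np.asarray(x, dtype=float)
    n = x.size
    d = x - x.mean()
    m2 = np.mean(d ** 2)
    skew = np.mean(d ** 3) / m2 ** 1.5
    kurt = np.mean(d ** 4) / m2 ** 2   # equals 3 in expectation for a normal sample
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

Large JB values reject the Gaussian null, which is how the prob(JB) column is read.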

Table 5 Hypothesis Tests for Structural Stability

| Series | Null | d.f. | t(R/S) | prob(t) |
|---|---|---|---|---|
| GMB | B=C | 7 | 21.494 | *** 0.00000 |
| | A=B | 9 | 0.390 | 0.70588 |
| | A=C | 9 | 21.104 | *** 0.00000 |
| GIIP | B=C | 7 | 2.206 | * 0.06318 |
| | A=B | 9 | 9.191 | *** 0.00001 |
| | A=C | 9 | 6.985 | *** 0.00006 |
| GC | B=C | 7 | 5.714 | *** 0.00072 |
| | A=B | 9 | 28.571 | *** 0.00000 |
| | A=C | 9 | 22.857 | *** 0.00000 |
| GP | B=C | 7 | 5.426 | *** 0.00098 |
| | A=B | 9 | 14.574 | *** 0.00000 |
| | A=C | 9 | 20.000 | *** 0.00000 |
| GMM3 | B=C | 7 | 8.113 | *** 0.00008 |
| | A=B | 9 | 1.792 | 0.10666 |
| | A=C | 9 | 6.321 | *** 0.00014 |
| GERR | B=C | 7 | 28.495 | *** 0.00000 |
| | A=B | 9 | 0.860 | 0.41200 |
| | A=C | 9 | 27.634 | *** 0.00000 |
| GCDD | B=C | 7 | 17.463 | *** 0.00000 |
| | A=B | 9 | 4.179 | *** 0.00238 |
| | A=C | 9 | 21.642 | *** 0.00000 |
| GTDD | B=C | 7 | 6.731 | *** 0.00027 |
| | A=B | 9 | 5.769 | *** 0.00027 |
| | A=C | 9 | 12.500 | *** 0.00000 |
| GEDD | B=C | 7 | 32.778 | *** 0.00000 |
| | A=B | 9 | 16.806 | *** 0.00000 |
| | A=C | 9 | 15.972 | *** 0.00000 |
| GI10Y | B=C | 7 | 13.125 | *** 0.00000 |
| | A=B | 9 | 28.750 | *** 0.00000 |
| | A=C | 9 | 15.625 | *** 0.00000 |
| GI3MO | B=C | 7 | 10.659 | *** 0.00001 |
| | A=B | 9 | 6.813 | *** 0.00008 |
| | A=C | 9 | 3.846 | *** 0.00393 |
| GR | B=C | 7 | 13.404 | *** 0.00000 |
| | A=B | 9 | 5.106 | *** 0.00064 |
| | A=C | 9 | 8.298 | *** 0.00002 |

Note: Three independent hypothesis tests are performed for each time series. A denotes the full sample (1987:08–2003:03), B the first subperiod (1987:08–1996:12), and C the second subperiod (1997:01–2003:03). 
Appendix
Statistical methodology
Rescaled-range or R/S analysis: R/S analysis is the conventional method introduced by Mandelbrot (1972a). Time series are classified according to the estimated value of the Hurst exponent H, which is defined from the relationship

R/S = a n^H,

where R is the average range of all subsamples of size n, S is the average standard deviation for all subsamples of size n, a is a scaling constant, and n is the size of the subsamples, which is allowed to range from an arbitrarily small value to the largest subsample the data will allow. Putting this expression in logarithms yields

log(R/S) = log(a) + H log(n),

which is used to estimate H as a regression slope. Standard errors are given in parentheses. H ranges from 0.50 to 1.00 for persistent series, equals exactly 0.50 for random walks, ranges from zero to 0.50 for antipersistent series, and is greater than one for a dependent or autocorrelated series. Mandelbrot, Fisher, and Calvet (1997) refer to H as the self-affinity index or scaling exponent.
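The procedure just described can be sketched as follows; a minimal illustration rather than the paper's production code (the function name and subsample grid are illustrative):

```python
import numpy as np

def hurst_rs(x, min_n=8, n_sizes=10):
    """Estimate H from log(R/S) = log(a) + H*log(n): R is the average range
    of cumulative deviations from the subsample mean, and S the average
    standard deviation, over contiguous subsamples of size n."""
    x = np.asarray(x, dtype=float)
    N = x.size
    sizes = np.unique(np.geomspace(min_n, N // 2, n_sizes).astype(int))
    log_n, log_rs = [], []
    for n in sizes:
        blocks = x[: (N // n) * n].reshape(-1, n)
        dev = np.cumsum(blocks - blocks.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)   # range of cumulative deviations
        S = blocks.std(axis=1)                  # subsample standard deviation
        ok = S > 0
        log_n.append(np.log(n))
        log_rs.append(np.log((R[ok] / S[ok]).mean()))
    return np.polyfit(log_n, log_rs, 1)[0]      # regression slope is H
```

For Gaussian white noise the estimate should lie near 0.50, with a well-known small-sample upward bias; applied to the integrated (brown-noise) series it rises toward one.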
Power spectral density analysis: This method uses the properties of the power spectra of self-affine traces, calculating the power spectrum P(k), where k = 2π/λ is the wavenumber and λ the wavelength, and plotting the logarithm of P(k) against log(k), after applying a symmetric taper function which transforms the data smoothly to zero at both ends. If the series is self-affine, this plot follows a straight line with a negative slope, −beta, which is estimated by regression and reported along with its standard error. This coefficient is related to the fractal dimension by D = (5 − beta)/2, and H and alpha are computed as H = 2 − D and alpha = 1/H. Power spectral density is the most common technique used to obtain the fractal dimension in the literature, although it is also highly problematic due to spectral leakage.
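The spectral regression can be sketched as below; a hedged illustration assuming a Hann window as the symmetric taper (the paper does not specify which taper it uses), with beta, D, and H computed exactly as in the formulas above:

```python
import numpy as np

def hurst_psd(trace):
    """Estimate H from the power spectrum of a self-affine trace: regress
    log P(k) on log k, take beta as the negative of the slope, then
    D = (5 - beta)/2 and H = 2 - D."""
    trace = np.asarray(trace, dtype=float)
    tapered = (trace - trace.mean()) * np.hanning(trace.size)  # symmetric taper
    power = np.abs(np.fft.rfft(tapered)) ** 2
    k = np.arange(1, power.size)               # drop the zero wavenumber
    beta = -np.polyfit(np.log(k), np.log(power[1:]), 1)[0]
    D = (5.0 - beta) / 2.0
    return 2.0 - D
```

For an ordinary Brownian motion the spectrum falls off as 1/k², so beta is near 2 and H near 0.5, though the log-periodogram regression is noisy, consistent with the leakage problems noted above.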
Roughness-length relationship method: This method is similar to R/S, substituting the root-mean-square (RMS) roughness s(w) and window size w for the standard deviation and range. H is then computed by regression from the logarithmic form of the relationship s(w) = w^H. As noted above, the roughness-length method provides standard errors so low that the null hypothesis of H = 0.500 is nearly always rejected, no matter how nearly normal the returns.
Variogram analysis: The variogram, also known as the variance of the increments or the structure function, is defined as the expected value of the squared difference between two y values in a series separated by a distance w. In other words, the sample variogram V(w) of a series y(x) is measured as the average of [y(x) − y(x+w)]^2 over all pairs of points at distance w. The distance of separation w is also referred to as the lag. For a self-affine trace V(w) scales as w^(2H), so H is estimated by regression as one-half the slope of log V(w) on log(w).
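Assuming the standard scaling V(w) ∝ w^(2H) for a self-affine trace, the variogram estimator reduces to a few lines (lag grid illustrative):

```python
import numpy as np

def hurst_variogram(trace, lags=(1, 2, 4, 8, 16, 32)):
    """Variogram estimate: V(w), the mean squared difference between points
    a lag w apart, scales as w^(2H) for a self-affine trace, so H is
    one-half the slope of log V(w) against log w."""
    trace = np.asarray(trace, dtype=float)
    log_w = [np.log(w) for w in lags]
    log_v = [np.log(np.mean((trace[w:] - trace[:-w]) ** 2)) for w in lags]
    return np.polyfit(log_w, log_v, 1)[0] / 2.0
```

For ordinary Brownian motion V(w) grows linearly in w, so the slope is near one and H near 0.5.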
Wavelet analysis: Wavelet analysis exploits localized variations in power by decomposing a series into time-frequency space to determine both the dominant modes of variability and how those modes vary in time. This method is appropriate for the analysis of nonstationary traces such as asset prices, i.e., where the variance does not remain constant with increasing length of the data set. Fractal properties are present where the wavelet power spectrum is a power-law function of frequency. The wavelet method is based on the property that wavelet transforms of self-affine traces also have self-affine properties.
Consider n wavelet transforms, each with a different scaling coefficient a_i, where S_1, S_2, ..., S_n are the standard deviations from zero of the scaling coefficients a_i. Define the ratios of successive standard deviations G_1, G_2, ..., G_{n−1} as G_1 = S_1/S_2, G_2 = S_2/S_3, ..., G_{n−1} = S_{n−1}/S_n. The average value of G_i is then estimated as G_avg = (ΣG_i)/(n − 1), and the estimated Hurst exponent H is computed as a function of G_avg.
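A closely related and widely used wavelet estimator regresses the log-variance of the detail coefficients on the dyadic level; the sketch below uses a hand-rolled Haar transform and is offered as a common variant, not necessarily the exact ratio-of-standard-deviations procedure described above:

```python
import numpy as np

def hurst_wavelet(trace, levels=6):
    """Wavelet-variance sketch: for a self-affine trace such as fractional
    Brownian motion, the variance of the Haar detail coefficients grows
    roughly as 2^(j*(2H+1)) across dyadic levels j, so H is recovered from
    the slope of log2(variance) against level (with some small-sample bias
    at the finest levels)."""
    approx = np.asarray(trace, dtype=float).copy()
    js, log2_var = [], []
    for j in range(1, levels + 1):
        n = approx.size - approx.size % 2
        pairs = approx[:n].reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)  # Haar detail
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)  # Haar smooth
        js.append(j)
        log2_var.append(np.log2(np.var(detail)))
    slope = np.polyfit(js, log2_var, 1)[0]   # slope estimates 2H + 1
    return (slope - 1.0) / 2.0
```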
Mandelbrot-Lévy characteristic exponent test: The Mandelbrot-Lévy distributions are a family of infinite-variance distributions without explicit analytical expressions, except for special cases. Limiting distributions include the normal, with finite variance, and the Cauchy, with the most extreme leptokurtosis or fat tails. Paul Lévy (1925) developed the theory of these distributions.
The logarithm of the characteristic function f(t) of a Mandelbrot-Lévy distribution is

log f(t) = i(delta)t − (gamma)|t|^alpha [1 + i(beta)sign(t)tan((alpha)(pi)/2)],

where delta is the expectation or mean of t if alpha > 1; gamma is a scale parameter; beta is a skewness parameter; alpha is the characteristic exponent; and i is the square root of −1. Gnedenko and Kolmogorov (1954) showed the sum of n independent and identically distributed Mandelbrot-Lévy variables is:
n log f(t) = i n(delta)t − n(gamma)|t|^alpha [1 + i(beta)sign(t)tan((alpha)(pi)/2)],
and thus the distributions exhibit stability under addition. Many applications of the central limit theorem demonstrate only Mandelbrot-Lévy character; the result of normality generally depends on an unjustified assumption of finite variance. Mandelbrot (1972a) introduced a technique for estimating alpha by regression, further refined by Lo (1991). Mulligan (2000b) estimates the distribution of alpha for Cauchy-distributed random variables. This distribution is used to test estimated alphas for technology equities against the Cauchy null.
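For the standard symmetric Cauchy case (alpha = 1, beta = 0, delta = 0, gamma = 1) the characteristic function reduces to f(t) = exp(−|t|), and stability under addition can be checked by Monte Carlo; a small illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_cauchy(200_000)

# Standard Cauchy characteristic function: E[exp(itX)] = exp(-|t|).
# By symmetry the imaginary part vanishes, so compare the empirical
# mean of cos(t*x) with exp(-t).
for t in (0.5, 1.0, 2.0):
    assert abs(np.mean(np.cos(t * x)) - np.exp(-t)) < 0.01

# Stability under addition: the average of n iid standard Cauchy draws is
# itself standard Cauchy, because the sum's scale grows like n rather than
# sqrt(n); its characteristic function at t = 1 is again exp(-1).
means = rng.standard_cauchy((50_000, 10)).mean(axis=1)
assert abs(np.mean(np.cos(means)) - np.exp(-1.0)) < 0.02
```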