
the mean. Positive skewness indicates a distribution with an asymmetric

tail extending more toward positive values. Negative skewness indicates a

distribution with an asymmetric tail extending more toward negative val-

ues. Next, we observe the kurtosis of the distribution. Kurtosis charac-

terizes the relative peakedness or flatness of a distribution, compared to

the normal distribution. A positive kurtosis indicates a relatively peaked

distribution. A negative kurtosis indicates a relatively flat distribution.

When we look at the distribution of financial data, we see some inter-

esting things. First, financial data have a distribution that is leptokur-

totic: large moves occur more than they should for a normal distribution.
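The two moments just described are simple to compute; a minimal Python sketch (plain sample moments, no bias correction — the sample data here are made up for illustration):

```python
def moments(data):
    """Sample skewness and excess kurtosis (a normal distribution gives 0).
    Positive skew: a longer tail toward positive values.
    Positive kurtosis: more peaked (and fatter-tailed) than the normal."""
    n = len(data)
    mean = sum(data) / n
    sd = (sum((x - mean) ** 2 for x in data) / n) ** 0.5
    skew = sum(((x - mean) / sd) ** 3 for x in data) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in data) / n - 3
    return skew, kurt

# One large positive "move" among small ones drags both moments positive.
skew, kurt = moments([-1.0, -0.5, 0.0, 0.5, 1.0, 4.0])
print(skew > 0, kurt > 0)  # True True
```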

FIGURE 6.2 The distribution of 5-day returns for the D-Mark.

This property of being leptokurtotic is very important. The fact that large

Statistically Based Market Prediction
A Trader's Guide to Statistical Analysis

moves occur more often than a normal distribution predicts means that financial markets have distributions with both a higher peak around the mean and fat tails, and they are characterized by a tendency to trend as well as to be cyclical. They also have discontinuous changes, and they can be adjusted for skewness. They are different from simple leptokurtotic gaussian distributions in that they have an infinite or undefined variance. These types of distributions are now called "fractal distributions." Because financial markets do not follow a gaussian distribution, using standard statistics can give us only an imperfect estimate, but that estimate is good enough to give us an edge.

THE CONCEPT OF VARIANCE AND STANDARD DEVIATION

The variance and standard deviation of a data set are very important. The variance is the average deviation of the data set from its mean. The variance is calculated as follows:

V = (1/N) × Σ (Di − M)²

where N is the number of elements, M is the sample mean, and Di is the current value. The standard deviation is simply the square root of the variance. The standard deviation has some very interesting trading applications and is used in many different indicators. A Bollinger band, for example, is simply a price band drawn two standard deviations away from the mean.

HOW GAUSSIAN DISTRIBUTION, MEAN, AND STANDARD DEVIATION INTERRELATE

The interaction between the mean and the standard deviation has many trading applications. First, there is a basic relationship between the mean of a data set and the standard deviation. This relationship states that, for a normal or standard distribution, 68 percent of the data is contained within one standard deviation and 95 percent of the data is within two standard deviations. Almost all of the data is contained within three standard deviations; for a normal distribution, this number is 99.5 percent.

STATISTICAL TESTS' VALUE TO TRADING SYSTEM DEVELOPERS

Many statistical testing methods are valuable for analyzing a market or for developing and testing a trading system, and you should learn how to use some of these methods. Most of the statistical methods used in market analysis will tell (1) whether the distributions of two data sets are different or (2) whether the means of the populations are different. An understanding of what each method is trying to prove is necessary because, when using statistical methods, you need to formulate both a hypothesis and its inverse, called the null hypothesis.

Hypothesis testing works as follows. We formulate a statistical method so that the null hypothesis is used in the formulation. For example, if we had a trading system with an average trade of $200.00 and wanted to prove statistically that our average trade was greater than 0, we would formulate our statistical measure to assume a zero average trade. We would then calculate our statistical measure and use that value to decide whether to accept or reject the null hypothesis. If we reject the null hypothesis, we show the original hypothesis to be true. This decision is based on a gaussian distribution of the statistical values for a given test. The relevant charts and values are available in every textbook on statistics. If we are two standard deviations from the mean value of a statistic based on a standard distribution, then we can reject the null hypothesis at the 95 percent confidence level.

To fully understand how to formulate a statistical test, let's take a closer look at some of the tests we could use. We will start with something called the Z test, which is calculated as follows:

Z = (M − D) / (√V / √N)

where:
M is the mean of the sample,
D is the value based on the null hypothesis,
V is the variance (its square root is the standard deviation of the sample), and
N is the number of cases.
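The Z statistic is easy to compute directly; a minimal Python sketch, using the average-trade numbers from the worked example that appears later in this chapter:

```python
import math

def z_test(sample_mean, null_value, variance, n):
    """Z statistic: how many standard errors the sample mean lies
    above the value assumed by the null hypothesis."""
    std_error = math.sqrt(variance) / math.sqrt(n)
    return (sample_mean - null_value) / std_error

# Average trade $275, null value $250, standard deviation $30
# (variance 900), 100 trades.
z = z_test(275.0, 250.0, 30.0 ** 2, 100)
print(round(z, 2))  # 8.33
```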

Let's now see how we would use this in a trading application. Say we want to know whether an S&P500 trading system had greater than a $250.00 average trade. When we collect the system's results, we find that the average trade was $275.00 with a standard deviation of $30.00, and there were 100 trades. We use the Z test to see whether this is true. Our first step is to formulate our hypothesis. Because our average trade is $275.00, our null hypothesis can be that our average trade is $250.00, because $275.00 is greater than $250.00. We would then state as our hypothesis: The average trade is greater than $250.00.

Our Z calculation is as follows:

Z = (275 − 250) / (30/√100)
Z = 25/3
Z = 8.33

Based on a normal distribution, 1 percent of the scores will have a Z value greater than 2.33. Because 8.33 is greater than 2.33, we can conclude that our average trade was greater than $250.00.

Another valuable statistical measure is called the "Chi-square test." This test is used to judge whether the hypothesis results are valid for a small sample and is often used to test hypotheses based on discrete outcomes. The formula for Chi-square is as follows:

Chi-square = Σ (Oi − Ei)² / Ei

where Oi is the observed frequency in the ith class and Ei is the expected frequency in the ith class.

Let's look at an example of how we would use this. Suppose we have a pattern, in that the market rises 134 times out of 200 cases. We would like to know whether this is statistically significant. Because 134 cases out of 200 is greater than 50 percent or 100 cases, which is our expected frequency based on a random population, Chi-square will tell us whether our pattern is predictive of rises in the market. In this case, we would calculate Chi-square as follows:

Chi-square = (134 − 100)²/100 = 1,156/100 = 11.56

In our simple case, the Chi-square value is 11.56. For a two-tail Chi-square test, the 99.9 percent confidence level is above 10.83. We can now conclude that, for our sample, our results are predictive.

Another test, called student t, is often used. This test can indicate whether two distributions have the same mean or variances. The formula is as follows:

t = (M1 − M2) / sD

where

sD = √( [Σ(x − M1)² + Σ(x − M2)²] / (N1 + N2 − 2) × (1/N1 + 1/N2) )

sD is the standard error of the difference of the means. Each sum is over the points in one sample, the first or second. Likewise, each mean refers to one sample or the other, and N1 and N2 are the numbers of points in the first and second samples, respectively. N1 + N2 − 2 would be our degrees of freedom.

Once we calculate t, we need to get the critical values for t based on the degrees of freedom and a standard distribution.

CORRELATION ANALYSIS

One of the most powerful tools for developing and testing trading indicators and systems is a statistical measure called Pearson's correlation, a measure of the correlation between two data series. A 1 is a perfect positive relationship and a −1 would be a perfect negative relationship. The formula for Pearson's correlation r is as follows:

r = Σ(Xi − X̄)(Yi − Ȳ) / √( Σ(Xi − X̄)² × Σ(Yi − Ȳ)² )

where (Xi, Yi), i = 1, ..., N, are the pairs of quantities whose correlation we want to estimate, and X̄ and Ȳ are the means of the Xi's and Yi's, respectively.
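Pearson's correlation is equally simple to compute; a minimal Python sketch of the formula above:

```python
def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0, perfect positive
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 6))  # -1.0, perfect negative
```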

Pearson's correlation is useful in many trading applications, for example, in evaluating the current strength of different intermarket relationships. Pearson's correlation can also be used to evaluate inputs for neural networks or other machine learning methods. These examples are only a few of the many trading-based applications for simple correlation analysis. We will be using correlation analysis many times in later chapters of the book.

This chapter has given an overview of some simple statistical methods that are valuable to traders. Other valuable methods are in use, but these were chosen for discussion because we will use them in later chapters.

7

Cycle-Based Trading

Cycles are recurring patterns in a given market. The area of cycles has

been intensely researched for almost 100 years. In fact, there are orga-

nizations dedicated to the study of cycles. Cycle-based trading has be-

come a hot topic because of the software now available for analyzing

financial data and for developing trading systems. Because of this soft-

ware, we can now see how a market is really composed of a series of dif-

ferent cycles that together form its general trading patterns. We can also

see that the markets are not stationary. This means that the cycles in a

market change over time. Change occurs because the movements of

a market are not composed solely of a series of cycles. Fundamental

forces, as well as noise, combine to produce the price chart.

Cycle-based trading uses only the cycle and the noise part of the sig-

nal. There are many tools that can be used for cycle analysis. The best

known are the mechanical cycle tools that are laid over a chart. An ex-

ample is the Stan Ehrlich cycle finder, a mechanical tool that is overlaid

on a chart to detect the current dominant cycle.

Among the several numerical methods for finding cycles, the most well

known is Fourier analysis. Fourier analysis is not a good tool for finding

cycles in financial data because it requires a long, stationary series of

data; that is, the cycle content of the data does not change. The best numerical method for finding cycles in financial data is the maximum entropy method (MEM), an autoregressive method that fits an equation

by minimizing error. The original MEM method for extracting cycles from data was discovered by J. P. Burg in 1967. Burg wrote a thesis on MEM, which was applied to oil exploration in the 1960s. The method was used to analyze the returning spectra from sound waves sent into rock to detect oil. There are several products that use the MEM algorithm. The first cycle-based product was MESA, by John Ehlers. It is now available as a stand-alone Windows™ product as well as an add-in for TradeStation. Another powerful product is TradeCycles™, codeveloped by Ruggiero Associates and Scientific Consultant Services. There are other products, such as Cycle Finder by Walter Bresser, but they do not offer the ability to backtest results. If you cannot use a tool to backtest your results, then, in my opinion, you should be very careful trying to trade it.

Using MEM to develop trading applications requires some understanding of how MEM works. The MEM algorithm was not originally designed for financial data, so the first thing that must be done to make MEM work on financial data is detrend it. There are many ways to detrend data. We used the difference between two Butterworth filters, one with a period of 6 and the other with a period of 20. (A Butterworth filter is a fancy type of moving average.) Once the data has been detrended and normalized, the MEM algorithm can be applied. The MEM algorithm will develop a polynomial equation based on a series of linear predictors. MEM can be used to forecast future values by recursively using the identified prediction coefficients. Because we need to preprocess our data, we are really predicting our detrended values and not the real price. MEM also gives us the power of the spectra at each frequency. Using this information, we can develop cycle-based forecasts and use MEM for trading. MEM requires us to select (1) how much data are used in developing our polynomial and (2) the number of coefficients used. The amount of data will be referred to as window size, and the number of coefficients, as poles. These numbers are very important. The larger the window size, the better the sharpness of the spectra, but the spectra will also then contain false peaks at different frequencies because of noise in the data. The number of poles also affects the sharpness of the spectra. The spectra are less defined and smoother when fewer poles are used. TradeCycles allows adjustment of both of these parameters, as well as others, because different applications of MEM require different optimal parameters.

THE NATURE OF CYCLES

Let's start our discussion by probing the nature of cycles. A cycle has three major components: (1) frequency, (2) phase, and (3) amplitude. Frequency is a measure of the angular rate of change in a cycle. For example, a 10-day cycle has a frequency of 0.10 cycle per day. The formula for frequency is:

Frequency = 1/Cycle length

The phase is an angular measure of where you are in a cycle. If you had a 20-day cycle and were 5 days into the cycle, you would be at 90 degrees: a complete cycle is 360 degrees, and you are 25 percent into the cycle.

The last major characteristic of a primitive cycle is amplitude. Amplitude is the power of a cycle and is independent of frequency and phase. All three of these features make up a simple cycle. Let's now use them to plot a simple sine wave in Omega TradeStation, using the following formula:

Value1 = (Sine((360 × CurrentBar)/Period) × Amplitude) + Offset;

Using a period of 30, an amplitude of 20, and an offset of 600, these parameters produce the curve shown in Figure 7.1, which looks a little like the S&P500 and shows how phase interacts with a simple sine wave. Note in Figure 7.1 that, by using the TradeCycles phase indicator, tops occur at 180 degrees and bottoms occur at 0 degrees.

A simple sine wave is not very interesting, but by adding and subtracting harmonics we can produce a pattern that looks a little like an Elliott Wave. Our formula is as follows:

Elliott Wave = Sine(Period × 360) × Amplitude
             − .5 × Sine(2 × Period × 360) × Amplitude
             + .25 × Sine(3 × Period × 360) × Amplitude

This simple curve resembles an Elliott Wave pattern using a period of 30 and an amplitude of 20 (see Figure 7.2). The figure is an example of how chart patterns are actually made up of combinations of cycles. Let's use our sine wave to test the relationship


between cycles and a simple moving average. We can start with a simple

half-cycle moving average. The lag in a half-cycle moving average is 90

degrees. Figure 7.3 shows a sine wave curve, a half-period moving aver-

age, and a full-period moving average. The full-period moving average is

always zero because it has as many values below zero as above zero.

If we buy when the half-period moving average crosses below the full-

period moving average, and sell when it crosses above it, we have the per-

fect system. We will buy every bottom and sell every top. These rules are

the opposite of the classic moving-average system.
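The sine-wave constructions and their interaction with moving averages can be sketched in Python. These functions stand in for the TradeStation formulas above; the 0.25 weight on the third harmonic of the fake Elliott Wave is an assumption, not a figure confirmed by the text:

```python
import math

def sine_wave(bar, period=30, amplitude=20, offset=600):
    """One full cycle every `period` bars, oscillating around `offset`."""
    return math.sin(2 * math.pi * bar / period) * amplitude + offset

def fake_elliott(bar, period=30, amplitude=20):
    """Fundamental minus/plus scaled harmonics; the 0.25 weight on the
    third harmonic is an assumed value."""
    theta = 2 * math.pi * bar / period
    return amplitude * (math.sin(theta)
                        - 0.5 * math.sin(2 * theta)
                        + 0.25 * math.sin(3 * theta))

def sma(series, length):
    """Simple moving average over the trailing `length` values."""
    return [sum(series[i - length + 1:i + 1]) / length
            for i in range(length - 1, len(series))]

prices = [sine_wave(b) for b in range(120)]
full = sma(prices, 30)   # full-period MA: averages one whole cycle per bar
half = sma(prices, 15)   # half-period MA: lags the sine by about 90 degrees

# The full-period moving average of a pure sine is flat at the offset,
# so half-period crossings of it mark the tops and bottoms.
print(max(abs(v - 600) for v in full) < 1e-9)  # True
```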

Letâ€™s now build a simulated Elliott Wave with a period of 30 and an

amplitude of 20. If we trade the classic moving-average crossover system,

we see that a 2-period and a 15-period moving average produce the best

results (about 16.90 points per cycle). This system bought just as our fake

wave three took out our fake wave one to the upside, and it sold about

one-third of the way down on the short side. On the other hand, if we use

the reverse rules and a 15-period and 30-period moving average, we then

make over 53.00 points per cycle. This system would buy at the bottom

and sell about one-third before the top, or about at the point where wave five begins. These results show that, in a pure cycle-based simulated market, the classic moving-average system would lose money. In real life, moving-average systems only make money when a market moves into a trend mode or because of shifting in cycles that causes changes in the phase of the moving averages to price. We cannot predict these shifts in phase. Unless a market trends, a system based on moving-average optimization cannot be expected to make money into the future.

FIGURE 7.1 A 30-day cycle versus its phase angle.
FIGURE 7.2 An example of a fake Elliott Wave composed of sine waves.
FIGURE 7.3 The interaction between a 30-day period sine wave and both a full-period and half-period moving average.

Let's now look at another classic indicator, the RSI, which expresses the relative strength of the momentum in a given market. Once again, we use our simulated Elliott Wave, based on a 30-day dominant cycle. We have found that combining a 16-day RSI with the classic 30 and 70 levels produces good results, showing that RSI is really a cycle-based indicator. Using a simulated market over 5 complete cycles, we produced over 59.00 points per cycle, and the signals produced bought 1 day after the bottom and sold 3 days after the top.

Another classic system that actually works well in cycle mode and will continue to work in trend mode is a consecutive close system. Channel breakout also would work, but you would want to use a 2-day high or low to maximize the profit based on our simulated Elliott Wave. If you use a 1-day high, your trades would be whipsawed in waves two and four, as they would be in a real market, because the channel length is too short.

CYCLE-BASED TRADING IN THE REAL WORLD

Let's now talk about using the MEM algorithm to trade cycles in the real world. The MEM algorithm requires tuning of many technical parameters. MESA 1996 adjusts these parameters for you. TradeCycles adjusts many of these parameters too, but it also lets you set the window and the number of poles. Real financial data are noisy, so it is important to know the power of the dominant cycle. We call the power, relative to the power of the rest of the spectra, the signal-to-noise ratio. The higher the signal-to-noise ratio, the more reliable the cycle. In TradeCycles, we calculate this ratio by determining the scaled amplitude of the dominant cycle and dividing it by the average strength of all frequencies. The level of the signal-to-noise ratio tells us a lot about the markets. Many times, the signal-to-noise ratio is lower when a market is in breakout mode. (We discussed this mode in Chapter 4.) When using MEM for cycle-based trading applications, the higher the signal-to-noise ratio, the more reliable the trading results. If we are using MEM for trend-based trading, then the signal-to-noise ratio relationship is less clear. This is currently one of my main areas of research.

USING CYCLES TO DETECT WHEN A MARKET IS TRENDING

Let's start our discussion of cycle-based trading by investigating how to tell when a market is in a trend. We begin by studying the dominant cycle and signal-to-noise ratio for the Yen during the period from 3/20/96 to 7/26/96. Figure 7.4 shows how the dominant cycle increases at the start of a major trend.

FIGURE 7.4 The dominant cycle, signal-to-noise ratio, and price for the Yen, for April to July 1996.

We will start by studying the spectral intensity of a market over different time frames, beginning with an uptrend followed by a major top. Figure 7.5 shows the spectra for the D-Mark on October 23, 1995, the

date of the most recent top in the D-Mark. At this point, the current dominant cycle was 17.25 days. Looking at the spectra, we can see a narrow peak at 17.25 days. A narrow peak is common in a trading range market.

Let's now look at the D-Mark during the start of the major downtrend; specifically, let's look at the spectra for January 4, 1996. The dominant cycle increased to 23.25, and the width of the area of the spectra around the dominant cycle increased. (See Figure 7.6.) Widening of the spectra and a move to longer periods are common in a trending market.

Knowing now how the spectra changes when a market is trending, let's attempt to develop indicators that signal when a market has started to trend. Recall the discussion of how a moving-average period relates to a simple sine wave. For a perfect simple sine wave, the curve will stay above or below a full-period moving average for half of a cycle. If, in real market conditions, it stays above the full-cycle moving average for longer than a half-cycle, we can say the market is trending. This simple concept gets into many trends very late and only catches big trends. If we could anticipate whether prices will remain above or below the moving average for more than a half-cycle, we would have a good trend indicator. Let's now build such an indicator.

A trend occurs when (1) a market stays above or below a full-cycle moving average for more than a quarter-cycle, and (2) a 2-day difference of the oscillator created by subtracting price from the moving average has the same sign as 2 days before. These rules work well for identifying a trend early enough to trade, when a market starts to trend. It should not retrace against the trend more than the lowest low of the last 50 percent of the dominant cycle bars for uptrends, nor the highest high of the last 50 percent of the dominant cycle bars for downtrends. Using TradeCycles and TradeStation's EasyLanguage, we can code this simple trend ShowMe. A ShowMe puts a dot above or below a bar which meets a given criteria. The coding is reproduced in Table 7.1.


TABLE 7.1 CYCLE BASED TREND INDICATOR.

Vars: DCycle(0),Osc(0),TrFlag(0);
DCycle=RSMemCycle1.2(C,6,50,30,12,0);
Osc=Close-Average(Close,DCycle);
TrFlag=0;
If MRO(sign(Osc)<>sign(Osc[1]),.25*Value1,1)=-1 and sign(Osc-Osc[2])=sign(Osc[2]-Osc[4]) then TrFlag=1;
If Osc<0 and High=Highest(High,.5*DCycle) then TrFlag=0;
If Osc>0 and Low=Lowest(Low,.5*DCycle) then TrFlag=0;
If TrFlag=1 then Plot1(High,"CycleTrend");
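Table 7.1 depends on TradeCycles' proprietary RSMemCycle function. As a rough illustration only, here is a Python sketch of the same trend-flag idea, with a fixed cycle length standing in for the MEM dominant cycle (an assumption), the quarter-cycle sign-persistence entry, and the half-cycle retracement exit:

```python
def trend_flags(closes, highs, lows, cycle=30):
    """Trend flag turns on when the close-minus-average oscillator has
    kept one sign for a quarter-cycle and its 2-bar slope persists;
    it turns off on a retracement to the extreme of the last half-cycle.
    `cycle` is a fixed stand-in for the MEM dominant cycle."""
    q, h = cycle // 4, cycle // 2
    flags, osc = [], []
    for t in range(len(closes)):
        if t + 1 < cycle:
            osc.append(0.0)
            flags.append(False)
            continue
        ma = sum(closes[t - cycle + 1:t + 1]) / cycle
        osc.append(closes[t] - ma)
        flag = flags[-1]
        recent = osc[-q:]
        same_sign = all(v > 0 for v in recent) or all(v < 0 for v in recent)
        persists = (osc[-1] - osc[-3] > 0) == (osc[-3] - osc[-5] > 0)
        if same_sign and persists:
            flag = True
        if osc[-1] < 0 and highs[t] == max(highs[t - h + 1:t + 1]):
            flag = False  # downtrend retraced to a half-cycle high
        if osc[-1] > 0 and lows[t] == min(lows[t - h + 1:t + 1]):
            flag = False  # uptrend retraced to a half-cycle low
        flags.append(flag)
    return flags

# A steadily rising series keeps the close above its full-cycle average,
# so the trend flag ends up on.
closes = [float(i) for i in range(120)]
print(trend_flags(closes, closes, closes)[-1])  # True
```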

over the period from 12/1/94 to 5/1/95. Note in Figure 7.7 that the market has a major uptrend that is detected near the bottom in December 1994, as shown by the dots at the top of the bars.

FIGURE 7.7 How the dominant cycle-based trend indicator shows the major uptrend in the D-Mark that started in December 1994.

How can the phase between the price data and the current dominant

cycle be used to detect when a market is trending or is in cycle mode? If

we assume that we have a perfect 30-day cycle, the phase should change

360/30, or 12 degrees per day. The more this rate of change of phase dif-

fers from the ideal, the less the market is acting as if it is in cycle mode.

Using this theory, letâ€™s develop a cycle and trend mode indicator based on

phase. We will compare the ideal rate of change with the actual rate of

change calculated using our price data. When the rate of change of the

phase is the same when using real data as when using theoretical data the

market is in a perfect cycle mode. When the rate of change is less than 1

by some threshold, then the market is trending. If it is greater than 1 by

some threshold, the market is consolidating. When we are within ± our

threshold of 1, then the market is in cycle mode. Because real data are

noisy, we smooth this ratio by using a quarter-cycle moving average. Letâ€™s

take a look at how this indicator worked for the D-Mark during the pe-

riod from 1/1/96 to 7/1/96 (see Figure 7.8).

During this period, the D-Mark started to trend several times. We can tell it was trending because the RSPhaseMode indicator described above was significantly below 1. When this indicator stays at about 1, the market is moving in cycle mode. If it is significantly above 1, the market is consolidating. When we use this indicator, we normally use below 0.67 for trending and above 1.33 for the consolidation mode. If this indicator is between these levels, we can say the market is in cycle mode.

FIGURE 7.8 The phase mode indicator points to a trending market when it is much below 1, a cycle market when it is near 1, and a consolidation mode market when it is much above 1.
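One plausible Python sketch of such a phase-mode ratio follows. TradeCycles' RSPhaseMode is proprietary, so the phase here is measured with a windowed Fourier coefficient, which is an assumption; the ideal rate of change, quarter-cycle smoothing, and 0.67/1.33 thresholds follow the text:

```python
import cmath
import math

def phase_deg(window, period):
    """Phase (degrees) of the `period`-bar component over a window."""
    coeff = sum(x * cmath.exp(-2j * math.pi * i / period)
                for i, x in enumerate(window))
    return math.degrees(cmath.phase(coeff))

def phase_mode(prices, period=30):
    """Ratio of the measured phase change per bar to the ideal
    360/period, smoothed by a quarter-cycle average. Near 1 = cycle
    mode; below ~0.67 = trending; above ~1.33 = consolidating."""
    ideal = 360.0 / period
    ratios = []
    for t in range(period, len(prices)):
        d = (phase_deg(prices[t - period + 1:t + 1], period)
             - phase_deg(prices[t - period:t], period))
        d = (d + 180) % 360 - 180          # wrap into (-180, 180]
        ratios.append(abs(d) / ideal)
    q = period // 4
    return [sum(ratios[i - q + 1:i + 1]) / q
            for i in range(q - 1, len(ratios))]

# A pure 30-bar sine advances exactly 12 degrees of phase per bar,
# so the ratio sits at 1: cycle mode.
cycle = [math.sin(2 * math.pi * b / 30) for b in range(150)]
modes = phase_mode(cycle)
print(all(0.67 < m < 1.33 for m in modes))  # True
```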


ADAPTIVE CHANNEL BREAKOUT

Let's now look at another method for using cycle analysis to confirm not only whether the market is trending but also its direction. This method, called adaptive channel breakout, defines an uptrend as occurring when the market makes the highest high of the past dominant cycle bars. The trend is then up, and you should buy on a stop. When the market makes the lowest low of the past dominant cycle bars, you should sell on a stop. This definition of both trend and direction is so good that it can be traded as a stand-alone system. My research in the area of developing trading systems using MEM has shown that the size of the window used to calculate MEM and the number of poles used in the calculations have an effect on how MEM performs for different applications. We have optimized the number of poles using a fixed window size of 30 and have found that when using MEM in a trend-following application, fewer poles produce more reliable performance, because the output from both the dominant cycle and the MEM predictions is smoother. For example, using a window size of 30, we found that having 6 poles produces the best results on daily data for currencies for our adaptive channel breakout system. The results for the D-Mark, Yen, and Swiss Franc for the period from 1/1/80 to 6/22/96 are shown in Table 7.2. (A deduction of $50.00 was made for slippage and commissions.)

These results across the three most popular currencies show the power of this method. It can be used to trade a basket of commodities by optimizing window size and poles over the complete basket and selecting the most robust pair of parameters. We will learn later that this concept can be made even more adaptive by optimizing the window size and poles in line, based on the current performance of each set of parameters over recent history. This concept is called optimal adaptive channel breakout.

USING PREDICTIONS FROM MEM FOR TRADING

Predictions from MEM can be used to develop trading strategies. These predictions are good for predicting turning points but not for predicting magnitude. The MEM prediction works only when the market is in a cycle mode. Let's now look at an example of how to use the prediction from MEM. We will learn later in this book, when we develop systems based on any autoregressive method or even on neural networks, that it is easier to develop models on weekly data than on daily data. Once you can get a prediction method to work well on weekly data, you can then move to daily data and then intraday data. Our basic model is shown in Table 7.3.

We tested this model on weekly D-Mark data from 1/1/80 to 6/22/96 and deducted $50.00 for slippage and commissions. Based on our testing, we decided that, in normal conditions, our MEM prediction should look ahead only 4 bars. Our optimal window size was 24 bars, and we used 12 poles for both the MEM cycle calculation and the predictions. Table 7.4 shows how our simple model, using these parameters, performed over our analysis period.

The results show that this simple model is predictive. We can improve the results if we trade the model only when the market is in a cycle mode. For example, from May 22, 1992, to September 18, 1992, the system traded six times and lost about $2,000.00 overall. The D-Mark was in a major uptrend, rising almost 8.00 full points in this short period of time. During this type of period, the method will perform badly. This is a good example for your own use of MEM to develop trading systems and filters for when a market is in a cycle mode.

The predictions from MEM can also be used in intermarket analysis. We can use the classic intermarket relationships discussed in Chapter 1,

and then filter our trades by requiring that the prediction from MEM

TABLE 7.2 RESULTS OF ADAPTIVE CHANNEL BREAKOUT.

                 D-Mark        Yen           Swiss Franc
Net profit       $131,325.00   $99,237.50    $161,937.50
Trades           105           94            95
Win%             44            47            49
Average trade    $1,250.71     $1,055.72     $1,704.61
Drawdown         -$12,412.00   -$11,312.50   -$8,775.00

TABLE 7.3 SYSTEM USING MEM PREDICTIONS.

If MEMPred>MEMPred[2] and MEMPred[4]>MEMPred[2] then buy at open;
If BarsSinceEntry>DominantCycle*.25 then exitlong at open;
If MEMPred<MEMPred[2] and MEMPred[4]<MEMPred[2] then sell at open;
If BarsSinceEntry>DominantCycle*.25 then exitshort at open;
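The entry logic of Table 7.3 can be sketched in Python; any list of predicted values stands in for the MEMPred series here:

```python
def turning_point_signal(pred):
    """Table 7.3's entry rule on a list of predictions, newest last:
    buy when the prediction 2 bars back is a local trough
    (current > 2-back < 4-back); sell on the mirror-image peak.
    Returns 'buy', 'sell', or None for the most recent bar."""
    if len(pred) < 5:
        return None
    p0, p2, p4 = pred[-1], pred[-3], pred[-5]
    if p0 > p2 and p4 > p2:
        return "buy"
    if p0 < p2 and p4 < p2:
        return "sell"
    return None

print(turning_point_signal([5, 3, 1, 2, 4]))  # buy: trough two bars back
print(turning_point_signal([1, 3, 5, 4, 2]))  # sell: peak two bars back
```

The time-based exit would then close the position after a quarter of the dominant cycle, as in the table.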

TABLE 7.4 MEM PREDICTION RESULTS, WEEKLY D-MARK.

Net profit       $75,562.50
Trades           255
Win%             49
Average trade    $296.32
Drawdown         -$10,787.50

must confirm the system. Let's look at an example of this concept. Suppose we are trading the S&P500 using T-Bonds. If bonds are in an uptrend and the S&P500 is in a downtrend, we would then buy the S&P500. If T-Bonds are in a downtrend and the S&P500 is in an uptrend, we would then sell the S&P500. We showed in Chapter 1 that this type of model works very well. We could filter this model by requiring the MEM prediction to be positive for both markets to buy the S&P500, and negative for both markets to sell the S&P500. This type of filter can help solve the problem of an intermarket change of direction just as the signal is generated. Table 7.5 shows this concept, coded in TradeStation's EasyLanguage and using TradeCycles. Note that the S&P500 is in Data1 and T-Bond futures are in Data2.

We tested this system from April 21, 1982 to June 28, 1996. We optimized the moving-average lengths for both TrLen and InterLen, using the range of 10 to 30 in steps of 2. We used a window size of 30, 6 poles, and a lookahead of 6. On average, we realized a higher winning percentage and fewer trades than when using divergence without the MEM prediction filter. We did not optimize the other parameters. They were selected based on other research that recommended a window size of 30 and 6 poles for many trading applications. We found that using 20 for the TrLen and 30 for InterLen produced reliable and stable results. Our results over our test period are shown in Table 7.6. (A deduction of $50.00 was made for slippage and commissions.)

Table 7.6 shows that this system produced better overall results than even the best intermarket divergence system for the S&P500. The concept of confirming intermarket relationships using MEM predictions does have value and is an important area of future research.

One fact is little discussed: Predictions from MEM change when different parameters are used. One method for improving performance when using MEM would be to combine different predictions made with different parameters. A consensus or even an average method is a possible choice. A lot of computer power would be required, but the performance would improve.

This chapter has shown how cycles can be used to create a trading system, or can become part of a trading system, or can even be inputted into a neural network or genetic algorithm. These issues are addressed further in Chapter 18, but I will give you one example now. If you look at the rate of change of a dominant cycle, it will give you an idea of whether the predicted turning points would be too early, or late, or on time. When the

TABLE 7.6 RESULTS OF INTERMARKET CYCLE

BASED SYSTEM S&P500 USING T-BONDS

AS THE INTERMARKET.

TABLE 7.5 INTERMARKET BASED CYCLE ANALYSIS SYSTEM.

Net profit $326,775.00

Inputs: LK1(6),LK2(6),TrLen(20),lnterLen(30),Win(30˜,Poles˜6˜; Profit long $269.100.00

Vars: TrOsc(O),InterOsc˜0),TrPred˜0˜,lnterPred˜0˜; Profit short $57,675.00

Trades

TrPred=RSMemPred(Close of Datal,Win,Poles,LKl); 54

Win% 80

InterPred=RSMemPred(Close oi DataZ,Win,Poles,LK2);

Win% long 93

TrOsc=Close of Datal-Average(Close of Data1 ,TrLen);

Win% short 67

InterOsc=Close of Data-Average(Close of Data2,lnterLen); Average trade $6,051.39

If InterPred>O and TrPred>O and TrOsc<O and InterOsoO then buy at open; Drawdown -$27,600.00

Profit factor

If InterRed<O and TrPred<O and TrOsoO and InterOsc<O then sell at open; 8.02
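The buy and sell conditions of Table 7.5 can be restated outside EasyLanguage. The sketch below is a hypothetical Python translation: RSMemPred belongs to the TradeCycles add-on and is not reproduced here, so the MEM predictions are assumed to arrive as precomputed values.

```python
# Hypothetical restatement of the Table 7.5 logic. The MEM predictions
# (RSMemPred, part of TradeCycles) are taken as precomputed inputs.

def signal(tr_pred, inter_pred, tr_close, inter_close, tr_ma, inter_ma):
    """One bar of the S&P500 (traded market) / T-Bond (intermarket) system."""
    tr_osc = tr_close - tr_ma           # TrOsc: traded market vs. its average
    inter_osc = inter_close - inter_ma  # InterOsc: intermarket vs. its average
    # Buy the divergence: both MEM predictions up, the traded market below
    # its moving average while the intermarket is above its own.
    if inter_pred > 0 and tr_pred > 0 and tr_osc < 0 and inter_osc > 0:
        return "buy"
    # Sell: the mirror-image condition.
    if inter_pred < 0 and tr_pred < 0 and tr_osc > 0 and inter_osc < 0:
        return "sell"
    return None
```

The helper returns a signal only when the oscillators diverge and both MEM predictions agree, which is exactly the filter the text describes.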


When the dominant cycles are getting longer, the turning points you predict would be too early. When the cycles are getting shorter, the turning points would be too late.
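That timing rule is easy to mechanize. A minimal sketch, assuming the dominant-cycle length series comes from your own spectral-analysis tool:

```python
# Classify predicted turning points from the rate of change of the
# dominant cycle length (most recent reading last).

def turning_point_bias(cycle_len, lookback=3):
    """Lengthening cycles imply turns arrive too early; shortening, too late."""
    change = cycle_len[-1] - cycle_len[-1 - lookback]
    if change > 0:
        return "predicted turns likely too early"
    if change < 0:
        return "predicted turns likely too late"
    return "predicted turns roughly on time"
```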

This chapter is a starting point for future work on using spectral analysis to develop trading systems. This area of research should be one of the most important technologies in developing the next generation of trading systems.

8

Combining Statistics and
Intermarket Analysis

In Chapter 1, we discussed many different intermarket relationships that

are valuable for developing trading systems. If you actually have pro-

grammed some of the examples in Chapter 1, you have learned that these

systems work very well during some periods, but do have long periods of

drawdown.

Our research over the past few years has shown that analysis of

intermarket relationships, based on current correlations between the in-

termarket and the market you are trading, is a very valuable tool in de-

veloping trading strategies.

USING CORRELATION TO FILTER INTERMARKET PATTERNS

Letâ€™s now show how Pearsonâ€™s correlation can be used to improve clas-

sic intermarket relationships. In Chapter 1, we showed that you can trade

crude oil using the Dollar index (see Table 1.3). You go long when the

dollar is below its 40-day moving average, and you go short when it is

above that average. This relationship has been steady over the years, but

it did have problems during 1991 and 1992. During those years, this

model lost $3,920.00 and recorded its maximum drawdown.
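Before applying it as a filter, it helps to pin down the calculation itself. A minimal rolling Pearson correlation in plain Python, with the Dollar/crude series names used only as placeholders:

```python
# Rolling Pearson correlation used as a trade filter. The series names
# (dollar, crude) are placeholders; any two aligned price lists work.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def filter_ok(dollar, crude, window=40, threshold=-0.5):
    """True when the last `window` bars are negatively correlated enough
    (below the threshold) to trust the inverse Dollar/crude relationship."""
    if len(dollar) < window:
        return False
    return pearson(dollar[-window:], crude[-window:]) < threshold
```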

By using Pearson's correlation as a filter for this simple intermarket relationship, we are able to improve our model's performance. We will still

enter and exit our trades using a 40-day moving average of the Dollar, but we now also require a 40-day correlation between the Dollar and crude oil to be less than -.5.

Using this new filter, we more than doubled our average trade and cut our drawdown in half. Our first trade was on March 17, 1986, and our results, after deducting $50.00 for slippage and commissions, are as shown in Table 8.1.

This model did not make as much as the original one, but the average trade, drawdown, and profit factor all improved. In addition, the model, using the correlation filter, made $1,800.00 during 1991 and 1992, when the original model suffered its largest drawdown. Use of the filter reduced the number of trades from 167 to 55, and increased the profit factor from 2.07 to 3.07.

Let's look at another example of using correlation analysis to develop intermarket trading models. We will use the relationship between T-Bonds and UTY, which was discussed in Chapter 1. Our model was based on prices relative to a moving average. We used an 8-day period for T-Bonds and a 24-day period for UTY. We took trades when T-Bonds and UTY diverged. If UTY was rising and T-Bonds were falling, we bought T-Bonds. If UTY was falling and T-Bonds were rising, we sold T-Bonds. For the period from 6/1/87 to 6/18/96, this model produced a little over $98,000.00, with 64 percent winning trades and about a -$9,500.00 drawdown. Can filtering our trades, using the correlation between UTY and T-Bonds, help even a system that performed this well? If we filter our signals, we require a 12-day correlation between UTY and T-Bonds to be greater than .50. Our results for the period from 6/1/87 to 7/26/96 are shown in Table 8.2.

TABLE 8.2 RESULTS USING UTY TO TRADE T-BONDS WITH CORRELATION AS A FILTER.
Net profit $108,037.50
Trades 68
Win% 75
Average trade $1,588.79
Maximum drawdown -$6,593.75
Profit factor 5.28

The use of correlation as a filter improved almost every part of this system's performance. Net profit increased and drawdown dropped by about 30 percent. We also won 75 percent of our trades.

Let's now apply a simple intermarket pattern filter to the use of day-of-week analysis. We will buy the S&P500 on Monday's open when T-Bonds are above their 26-day moving average, and we will exit this position on the close. This pattern has performed well from 1982 to date. Table 8.3 shows the results from 4/21/82 to 7/26/96, with $50.00 deducted for slippage and commissions.

The results of our buy-on-Monday pattern are good, but we can improve them by using correlation. We first use a simple correlation between T-Bonds and the S&P500. Because we base this pattern on the relationship between T-Bonds and the S&P500, we want to filter out trades when the link between the S&P500 and T-Bonds is weaker than normal. We therefore take trades only when the 20-day correlation between T-Bonds and the S&P500 is greater than .40. This filter improves the performance of our original pattern. Table 8.4 shows our results

TABLE 8.1 RESULTS OF TRADING CRUDE OIL USING THE DOLLAR INDEX AND CORRELATION.
Net profit $39,499.00
Profit long $34,319.00
Profit short $5,180.00
Win% 49
Average trade $718.16
Drawdown -$5,930.00

TABLE 8.3 RESULTS OF BUY MONDAY WHEN T-BONDS ARE IN AN UPTREND.
Net profit $89,100.00
Trades 417
Average trade $213.67
Win% 55
Profit factor 1.57
Drawdown -$18,975.00


for the period from 4/21/82 to 7/26/96, allowing $50.00 for slippage and commission.

TABLE 8.4 RESULTS OF BUY MONDAY WHEN T-BONDS ARE IN AN UPTREND AND S&P500 AND T-BONDS ARE STRONGLY LINKED.
Net profit $88,200.00
Trades 268
Average trade $329.10
Win% 58
Profit factor 2.01
Drawdown -$7,775.00

Filtering the trades by using correlation not only improves our average trade by 54 percent, but also improves our percentage of wins, drawdown, profit factor, and win/loss ratio. We were able to filter out about 160 trades that averaged about $6.00 each. We could have used higher thresholds for our trigger, but that tactic would have filtered out too many trades. For example, using an 8-day correlation and a .90 trigger yielded over $500.00 per trade but produced only 36 trades in 13 years.

Let's start again with our buy-on-Monday strategy when T-Bonds are above their 26-day moving-average pattern and we have an additional filter. We now buy at the open on a Monday only when T-Bonds are above their 26-day moving average and when T-Bonds closed higher than they opened on Friday.

The new requirement, that T-Bonds had to close higher than they opened on Friday, improved the performance of our original pattern. The results are shown in Table 8.5.

PREDICTIVE CORRELATION

A correlation between two markets does not always mean that the current movement in one market can be used to predict the movement in the other. To address this issue, I have developed a concept called predictive correlation. The idea behind predictive correlation requires taking a correlation between an indicator N periods ago and a change in a given market over the last N periods. For example, on daily data, we can take a correlation between T-Bonds[5]-T-Bonds[10] and the S&P500-S&P500[5]. This correlation will tell us how predictive a simple momentum of T-Bonds has been over the length of the correlation. The predictive correlation curve is much different from the curve generated by standard correlation, but it does seem that they both trend in the same direction. The power of predictive correlation is that we can correlate an indicator or intermarket relationship to future price movements in the market we are trying to trade. This allows us to use relationships and indicators in rules, and to trade these rules only when these indicators are currently predictive. Let's now add predictive correlation to our modified S&P500 pattern.

We use Close[1] of T-Bonds-Open[1] of T-Bonds as our independent variable, and Close-Open of the S&P500 as our dependent variable. We go long on Mondays only when a 35-day predictive correlation is above 0. The amazing results, from 4/21/82 to 7/26/96, are shown in Table 8.6. This system produced over $600.00 per trade, after deducting $50.00 for slippage and commissions. We won 66 percent of our trades and had a profit factor of 3.75. These numbers are much better than any of the variations that did not use predictive correlation, and they should prove the power of predictive correlation.
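Predictive correlation, as defined above, is an ordinary Pearson correlation with the indicator lagged N bars against the market's change over the same N bars. A minimal sketch, assuming the series are lists ordered oldest to newest:

```python
# Predictive correlation: correlate indicator[t-n] with market[t]-market[t-n]
# over the most recent `length` bars. Plain-Python Pearson, no libraries.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

def predictive_correlation(indicator, market, n=5, length=35):
    """How well the indicator n bars back predicted the last n-bar change."""
    start = len(market) - length
    ind = [indicator[t - n] for t in range(start, len(market))]
    dep = [market[t] - market[t - n] for t in range(start, len(market))]
    return pearson(ind, dep)
```

A reading near +1 means the lagged indicator has been moving with the market's subsequent change, which is the condition the filtered Monday pattern trades on.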

TABLE 8.5 RESULTS OF BUY MONDAY WITH T-BONDS IN AN UPTREND AND UP ON FRIDAY.
Net profit $75,825.00
Trades 244
Average trade $310.76
Win% 57
Profit factor 1.86
Drawdown -$13,800.00

TABLE 8.6 RESULTS OF ADDING PREDICTIVE CORRELATION.
Net profit $55,050.00
Trades 88
Average trade $625.57
Win% 66
Profit factor 3.75
Drawdown -$4,400.00


USING THE CRB AND PREDICTIVE CORRELATION TO PREDICT GOLD

In Chapter 1, we discussed many of the classic methods for expressing intermarket relationships. One of the most powerful methods is a ratio between the intermarket and the commodity being traded. I will now show you how to combine the ratio between the CRB and gold with predictive correlation to develop a very profitable and reliable system for trading gold.

The Commodity Research Bureau index (the CRB) is a basket of 21 commodities.* This index has been traded as a futures contract since mid-1986. It has had an inverse correlation to T-Bond prices and it has been positively correlated to gold during its history.

On the basis of this relationship, I decided to use the ratio between the CRB and gold to develop a trading system for gold. When this ratio is moving up, the CRB is outperforming gold, and gold should catch up. Another fact about gold was revealed in my research. Often, when the CRB starts moving, gold will first move in the opposite direction and test support, before moving in the direction of the CRB.

On the basis of my understanding of the gold market, I am proposing a system that (1) uses a moving-average crossover of the ratio between the CRB and gold to generate its signals and (2) enters the market on a limit order set at the level of an N-day exponential moving average. This concept was tested on backadjusted contracts over the period from 11/18/86 to 7/26/96. The system is based on a sound premise. If inflation increases, so will the price of gold. Still, it performed badly and made only $4,000.00 in profit over the test period. The reason the system did so badly is that it had large drawdown during periods when the CRB and gold decoupled. We can filter these periods out by using correlation analysis. Let's now add the predictive correlation between the ratio of the CRB/gold 5 days ago and the past 5-day change in gold. This simple gold model, coded in TradeStation's EasyLanguage with parameters selected based on my research, is shown in Table 8.7.

TABLE 8.7 GOLD/CRB RATIO SYSTEM.

Vars: IntRatio(0),IntOsc(0),Correl(0);
Vars: Ind(0),Dep(0);

IntRatio=Close of Data2/Close;
Ind=IntRatio[5];
Dep=Close-Close[5];
Correl=RACorrel(Ind,Dep,24);
IntOsc=Average(IntRatio,12)-Average(IntRatio,30);
If IntOsc>0 and Correl>.6 then buy at XAverage(Close,80) Limit;
If IntOsc<0 and Correl>.6 then sell at XAverage(Close,80) Limit;

RACorrel is a user function developed by Ruggiero Associates. It calculates the standard Pearson's correlation found in any statistics textbook.

This simple model has performed very well in the gold market over the past decade. We tested the model using continuous backadjusted contracts for the period from 11/18/86 to 7/26/96, and deducted $50.00 for slippage and commissions. The results are shown in Table 8.8.

TABLE 8.8 GOLD/CRB RATIO SYSTEM RESULTS.
Net profit $48,794.70
Trades 35
Wins 27
Losses 8
Win% 77
Average trade $1,394.13
Drawdown -$11,250.00
Win/loss ratio 1.56
Profit factor 5.26

The model made over $48,000.00 during this period, and the system was profitable on both the long and short sides. Another important point: The entry method (buy on a limit set at the 80-day exponential moving average of the close) increased the average trade by over $500.00 when compared to the method of entering at the next open when the signal first occurs.

The system does have some problems, however. For example, the average winning trade lasted 45 days but the average losing trade lasted 144 days. We can help solve this problem by developing better exits for the model. Even with this problem, the model is fundamentally sound and could be the core for a system for trading gold futures or gold mutual funds.

* The CRB was reformulated in December 1995.
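The signal half of Table 8.7 can also be sketched in Python. This is an illustrative translation only: the 24-bar predictive correlation is assumed to be computed elsewhere (RACorrel is a Ruggiero Associates function), and the 80-day EMA limit entry is omitted.

```python
# Sketch of the gold/CRB ratio signal. `correl` stands in for the 24-bar
# predictive correlation (RACorrel in the original); the limit entry at
# the 80-day exponential average is left out for brevity.

def sma(series, n):
    """Simple average of the last n values."""
    return sum(series[-n:]) / n

def ratio_signal(crb, gold, correl):
    """crb, gold: price lists, oldest first; correl: current correlation."""
    ratio = [c / g for c, g in zip(crb, gold)]       # IntRatio = CRB / gold
    int_osc = sma(ratio, 12) - sma(ratio, 30)        # 12- minus 30-day average
    if int_osc > 0 and correl > 0.6:
        return "buy"
    if int_osc < 0 and correl > 0.6:
        return "sell"
    return None  # stand aside when the two markets are decoupled
```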


INTERMARKET ANALYSIS AND PREDICTING THE EXISTENCE OF A TREND

Intermarket analysis is another powerful tool for predicting when a market will trend. My research has shown that many markets will trend when well-known intermarket linkages are strong; for example, the link between the S&P500 and T-Bonds. I have found that the S&P500 trends when the 50-day correlation between the S&P500 and T-Bonds is high.

Let's look at some examples of how intermarket linkage relates to a market's trending. Figure 8.1 shows that the 50-day correlation between the S&P500 and T-Bonds started to fall in mid-July of 1995, at just about the time when the S&P500 moved into a trading range. The correlation bottomed in early September and then rose rapidly. During this rapid rise and a stabilization at higher levels, the S&P500 rose 57.55 points in about 70 trading days without recording more than two consecutive days on which the market closed lower than it opened.

FIGURE 8.1 The correlation between the S&P500 and T-Bonds can predict when a market will trend. In 1995, as correlation rose, the S&P500 exploded and rose 57.55 points in only about 70 trading days.

This link between correlation and trend also occurs during major downtrends. An example, the last important stock market correction, in February 1994, is shown in Figure 8.2.

FIGURE 8.2 Another example of how the correlation between the S&P500 and T-Bonds predicts trends. The last major stock market correction occurred in February 1994.

One of the few downtrends in the S&P500 occurred during the period from 1993 to 1995. During this short time, the S&P500 dropped almost 40.00 points during just a 45-day trading period. When trading the S&P500, correlation analysis can tell you when the trend is really your friend.

This relationship between trend and intermarket linkage does not exist solely for the S&P500 using T-Bonds. Gold almost never trends without a strong link to the CRB index. Using a 50-day correlation between gold and the CRB, I have found that almost every major up or down move in gold started when the correlation between the CRB and gold was above .40 and rising, or was stable above .6. The only major trend in gold that this relationship missed was the rally in gold that started in June 1992.

Let's review an example of how the CRB can be used to predict trends in gold. One of the last explosive rallies in the gold market was in early


November of 1989. The correlation between the CRB and gold started to increase in mid-October and rose steadily until November 27, 1989. During this time, the gold market rallied over $50.00 per ounce. The correlation broke above .50 on November 7, 1989, and did not drop below .50 until December 21, 1989. During this time, gold rallied $26.70 per ounce in only 31 trading days. (See Figure 8.3.)

FIGURE 8.3 We can use the correlation between gold and the CRB to predict when gold will trend. The correlation broke above .50 on November 7, 1989, and gold rallied $26.70 per ounce in only 31 days.

On the basis of my research, the gold market will almost never have a major move until the 50-day correlation between gold and the CRB rises above .50. This means that the great bull market that many believe will happen soon in gold will most likely not start until the 50-day correlation between the CRB and gold rises above .50 while the CRB is in a major uptrend. Without the intermarket link between the CRB and gold, most breakouts to the upside will fail within several trading days.

In early 1996, gold had a breakout to $420.00 per ounce, and many experts predicted that gold would go to $450.00 per ounce within a few months. Using correlation analysis, we could see that this breakout would fail and that the correction from $420.00 down to $400.00 would be a very strong trending move. Figure 8.4 shows that the breakout during early 1996 occurred as the gold decoupled from the CRB. The move was not a long-term move that would produce a rally of $50.00 or more. When gold collapsed in early February 1996, it had a 50-day correlation with the CRB of greater than .50. Once gold started its collapse, it dropped 24.00 points in one month. After that, the correlation between gold and the CRB dropped, and gold once again moved into a trading range.

FIGURE 8.4 During the upside breakout in gold in early 1996, it decoupled from the CRB and then failed.

The correlation between intermarkets is a valuable tool for developing trading systems and can even be a tool for discretionary traders. We will learn later that it is even useful as an input for a neural network. Correlation analysis and predictive correlation, combined with intermarket analysis, can be used to develop trading systems as well as to improve existing ones. Later in the book, we will use this type of analysis to develop systems based on advanced technologies such as neural networks.
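The regime test described in this section reduces to a small helper. A sketch, assuming a list of 50-day correlation readings, newest last; the .40-and-rising and stable-above-.6 thresholds follow the text:

```python
# Trend-regime flag for gold vs. the CRB: major moves tended to start when
# the 50-day correlation was above .40 and rising, or holding above .6.

def trend_regime(corr):
    """corr: recent 50-day correlation readings, most recent last."""
    current, prior = corr[-1], corr[-2]
    if current > 0.40 and current > prior:
        return True   # above .40 and rising
    if min(corr[-5:]) > 0.6:
        return True   # stable above .6
    return False
```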


9

Using Statistical Analysis
to Develop Intelligent Exits

When most traders develop mechanical trading systems, they spend 90 percent of their time developing the entry signals. The exit signals are usually less complex and are tested only in combination with the entries. Unfortunately, this process does not develop optimal exits for a given system and market. This chapter discusses how to develop properly designed exit signals, using various statistical methods.

THE DIFFERENCE BETWEEN DEVELOPING ENTRIES AND EXITS

The underlying logic between developing entry and exit signals is different. When developing entry signals, we are trying to find a set of conditions that statistically produces a good risk-reward ratio when taking a position in a given direction. To judge how predictive an entry rule is, I use the set of primitive exits shown in Table 9.1.

TABLE 9.1 ENTRY RULE TESTS.
1. Exit after holding a position for N bars.
2. Exit on an N bar low for longs or an N bar high for shorts.
3. Exit after N consecutive bars in which the trade moves against you.
4. Exit at a target profit of N.

Test your entry rules using different values of N for each of the primitive exits defined above. This will allow you to evaluate how well a given entry rule works in predicting future market direction. It is easy to understand the logic needed to develop and test entry rules. The problem with developing exit rules is that the logic is not as easy to define, because there are many reasons to exit a trade. The exit development process requires mixing money management and technical information about a system for a given market. For example, you can exit a losing trade using a $500.00 stop, or you can exit a long position when the market reaches a 5-bar low. You should exit a trade when the assumptions that caused you to enter the trade are proven wrong. Let's suppose you entered a long trade because the price bounced off a support line. You should then exit the trade if the price closes below that support level. You might also exit a trade if you have no opinion about the future direction of the market you are currently trading.

Now that you have a better understanding of the logic used in developing exits, I can show you how to develop your own intelligent exits.

DEVELOPING DOLLAR-BASED STOPS

One of the most frequently used methods for exiting a trade is triggered when the market moves some given level against investors. These types of exits, called "stops," are used in two ways:

1. If a trade is losing money, exit at a given loss level to protect the trading capital.
2. If a trade is winning, use money-level stops. Exit the trade after reaching a minimum profit level and retracing a given percentage of the maximum profit during that trade. This is called a "trailing stop."

Scatter charts can be used to develop intelligent Dollar-based stops for a given system. In Chapter 4, we showed a simple channel breakout


system that used a 20-day high or low to set the breakout. The system was profitable over many different commodities. Let's look at how this system performed on the D-Mark and how we can use intelligent exits to improve the system.

The original system on the D-Mark, shown in Table 4.8, made a little over $56,000.00 and had a drawdown of over -$22,000.00 from 1/1/80 to 5/17/96. Let's develop an intelligent money management stop for this system.

We start by collecting the maximum adverse movement and final profit for each trade. The code (in TradeStation's EasyLanguage) for collecting this information in an ASCII file is shown in Table 9.2.

TABLE 9.2 CODE TO GENERATE MAXIMUM ADVERSE MOVEMENT SPREADSHEET.

Input: Length(10);
Vars: Direct(0);

Buy Highest(High,Length)+1 point Stop;
Sell Lowest(Low,Length)-1 point Stop;
Direct=MarketPosition;
If CurrentBar=1 then
Print(file("d:\book\chap9\dmadver.txt"),"EntryDate",",","MarketPosition",",","MaxPositionLoss",",","PositionProfit");
If Direct<>Direct[1] then begin
Print(file("d:\book\chap9\dmadver.txt"),EntryDate(1),",",MarketPosition(1),",",MaxPositionLoss(1),",",PositionProfit(1));
end;

This code first saves the current market position for each bar in the variable "Direct." When the market position changes from one bar to the next, we have just closed out a position. We then output the following to an ASCII file: entry date, market position, and maximum position loss, which we refer to as maximum adverse movement.

USING SCATTER CHARTS OF ADVERSE MOVEMENT TO DEVELOP STOPS

Figure 9.1 shows a scatter plot of maximum adverse movement on the X axis and final trade profit on the Y axis. Only three trades that had a maximum adverse movement of more than -$1,800.00 also made more than $2,000.00 when the trade was closed. During the trade history, 29 trades finished with a profit of more than $1,500.00. There were 25 trades with a maximum adverse movement of more than -$1,800.00.

FIGURE 9.1 Adverse movement versus final trading profit.

On the basis of our analysis using the scatter chart, we can set a protective stop of -$1,800.00. This stop level is based on the fact that only three winning trades had adverse movement of -$1,800.00 or more. We used this -$1,800.00 level and then added $50.00 for commissions. The protective stop produced the results shown in Table 9.3.

Drawdown was reduced by over 50 percent. Net profit increased slightly and the winning percentage decreased by only 1 percent!

TABLE 9.3 SYSTEM RESULTS USING NEW STOPS.
Net profit $72,462.50
Percent profitable 49%
Average winner $2,964.42
Average loser -$1,512.61
Maximum drawdown -$10,087.50
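The scatter-chart reading can be checked mechanically: count the winners a candidate stop would sacrifice. A sketch over a hypothetical trade list (the numbers below are illustrative, not the D-Mark results):

```python
# Maximum-adverse-excursion (MAE) stop analysis: for a candidate stop,
# count the eventual winners whose MAE breached it (they would become
# losses once the stop is in place). Trade data below is hypothetical.

def winners_lost_to_stop(trades, stop):
    """trades: list of (mae, final_profit); mae is a negative dollar figure.
    Returns how many winning trades the stop would have cut off."""
    return sum(1 for mae, profit in trades if profit > 0 and mae <= stop)

trades = [(-500.0, 1200.0), (-2100.0, 2500.0),
          (-1900.0, -800.0), (-300.0, 400.0)]
```

Sweeping `stop` over a range of levels and plotting the count against total profit retained reproduces, in code, the judgment made visually from the scatter chart.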


You might ask: Why not just use the optimizer in TradeStation to set the stops? Analyzing scatter charts will produce better results, for several reasons. First, we want to improve overall system performance, not just net profit or drawdown. The TradeStation optimizer can optimize on only one factor. This can cause a major problem because protective stops always lower the winning percentage of a system. The optimal stop based on profit or drawdown could reduce the winning percentage by 30 to 70 percent. For many trend-following systems, such as our D-Mark example, the winning percentage could drop from 40 or 50 percent down to 25 percent or less. Second, and more important, when stops are developed based on the distribution of trades, they are more robust and should continue to work into the future. When stops are developed based on optimization, it is possible that a few isolated events could have produced the improvements. Because these improvements are not based on the distribution of trades, they might not be as robust.

We also can analyze current trading profits versus final trading profits on a bar-by-bar basis. TradeStation's EasyLanguage code for creating these statistics in an ASCII file, for any bar in a trade, is shown in Table 9.4.

TABLE 9.4 CODE TO OUTPUT ADVERSE MOVEMENT OF BAR N.

Input: Length(10),BarNo(5);
Vars: MarkPos(0),TradeLen(0),OpProf(0);

Buy Highest(High,Length)+1 point Stop;
Sell Lowest(Low,Length)-1 point Stop;
MarkPos=MarketPosition;
TradeLen=BarsSinceEntry;
if BarsSinceEntry=BarNo then OpProf=OpenPositionProfit;
if CurrentBar=1 then
Print(file("d:\book\chap9\chap9B.txt"),"EntryDate",",","MarketPosition",",","CurrentProfit",",","PositionProfit");
if MarkPos<>MarkPos[1] and TradeLen[1]>=BarNo then begin
Print(file("d:\book\chap9\chap9B.txt"),EntryDate(1),",",MarketPosition(1),",",OpProf,",",PositionProfit(1));
end;

This code generates the trades and saves the current market position and the bars since entry. We save the open position profit for bar N. We then check to see whether the position in the market has changed and whether the trade has lasted N or more bars. If so, we output the entry date, position, P/L at bar N, and final profit. We can then chart these results using a scatter chart by plotting profit on bar N on the X axis and final trade profit on the Y axis.

Let's analyze our simple system on the D-Mark for bar 5 of our trade. Figure 9.2 shows a scatter chart of adverse movement on day 5 versus final trade profit.

FIGURE 9.2 Adverse movement on day 5 versus final trading profit and loss.

Only 3 trades that were losing more than $500.00 during day 5 are profitable. Only 5 trades closed profitably that were not profitable by day 5. Based on this analysis, we can use either a $500.00 stop on day 5 or a breakeven stop on day 5. The results for both possibilities are given in Table 9.5.

Table 9.5 shows that using a breakeven stop on day 5 cuts the drawdown by over 50 percent from the original system and cuts the average losing trade from -$1,899.50 to -$961.72. Because we can use a breakeven or a $500.00 stop on day 5, we have greatly reduced the risk


ADAPTIVE STOPS

TABLE 9.5 RESULT BASED ON ADVERSE MOVEMENT

ON DAY 5 OF THE TRADE.

To develop a simple adaptive example, we start with an S&P500 pattern

Breakeven

$500.00 stop

we have discussed in Chapter 1 and Chapter 8. This pattern buys on Mon-

$70,562.50

$72,475.50

Net profit days when T-Bonds futures are above their 26-day moving average. This

38%

45%

Percent profitable pattern made a little over $89,000.00 from April 21, 1982, to July 27,

53,072.92

$3,042.00

Average winner

1996, with a drawdown of about -$19,000.00. We will now develop an

-8961.72

-$1,284.27

Average loser

$551.27 adaptive stop for this pattern. On every bar, we will find and keep track

$647.10

Average trade

-$9,637.50

-$11,312.50

Maximum drawdown of the adverse movement on winning trades only. We will then calculate

both the average and the standard deviation. We will place our stop at

one standard deviation from the mean below the open.

To develop an adaptive intelligent exit, we simulate taking trades with-

out stops and we collect the adverse movement information. If we use

attached to using the $1,800.00 stop we discussedearlier. Using this type

TradeStation to manage the trades once we begin to use the stops, we then

of analysis, we can set an ideal stop or even a trailing stop for the com-

have a different statistical profile. For this reason, we must simulate the

plete life of the average trade. The analysis can be repeated for at least the

trades without stops and then apply the information to rules that actually

number of bars in the average winning trade or average losing trade,

take these trades in TradeStation. We also must make sure that we have

whichever is greater. Our D-Mark trader would run this analysis from

bar 1 to bar 60, because an average winning trade lasted 60 days and an average losing trade lasted only 19 days. With the filter, trades that would have been stopped out on previous bars can be excluded from analysis. This method offers the possibility of developing both protective stops and trailing stops, based on the statistics of the system and market we are trading. Using the bar-by-bar analysis method, it is possible to exit trades using much smaller stops, or even to exit at a profit trades that would normally have been exited at a loss. Suppose we have a system in which trades that had an open position profit of more than $200.00 on bar 20 average a final profit of $1,000.00, and trades that had an open position profit of less than $200.00 produce an average loss of $200.00 per trade. We could use a profit floor of $200.00 for bar 20 and improve our system's performance. This is only the beginning of the uses of bar-by-bar analysis to develop stops for a mechanical trading system.

Using a scatter chart to develop intelligent exits is a valuable method for creating stops, but it does have some drawbacks. Primarily, it is very time-consuming and must be run for each system and commodity being traded. Using TradeStation, let's try to develop self-adjusting exits based on this technology.

enough information before applying it to our system. In our example, we waited and collected 30 winning trades. Until then, we used a 2-point ($1,000.00) stop. The TradeStation code for this system and stop is shown in Table 9.6. The results for this system for the period from 4/21/82 to 7/26/96, with $50.00 deducted for slippage and commissions, are shown in Table 9.7. We have improved our net profit and still cut our drawdown in half. We have also won about the same percentage of our trades (55%), and cut our largest losing trade from -$9,425.00 to only -$1,925.00.

This simple example shows the power of developing a system using adaptive intelligent exits. This example made it easy to show how to collect the adverse movement information and simulate the trades. You can apply this type of technology to any system, as long as you can write code to simulate the trades and collect the statistics.

This chapter has introduced the concepts of developing statistically based intelligent exits. This technology can also be used to develop exits based on the potential profit and risk for your trade. Other technologies, such as neural networks and machine induction, which are discussed later in this book, can also be used to develop intelligent exit methods.
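The bar-20 profit-floor rule described above can be sketched outside TradeStation. The following Python fragment is my own illustration, not the book's code, and the trade records in it are invented: each trade is reduced to its open profit at bar 20 and its final profit.

```python
def apply_profit_floor(trades, floor=200.0):
    """For each trade, exit at the floor bar if open profit is below `floor`;
    otherwise let the trade run to its final profit.
    Each trade is a (open_profit_at_floor_bar, final_profit) pair."""
    results = []
    for open_profit_at_bar, final_profit in trades:
        if open_profit_at_bar < floor:
            results.append(open_profit_at_bar)  # cut the trade short at bar 20
        else:
            results.append(final_profit)        # let the winner run
    return results

# Hypothetical trades: (open profit at bar 20, final profit).
trades = [(350.0, 1000.0), (50.0, -200.0), (500.0, 1000.0), (-100.0, -200.0)]
filtered = apply_profit_floor(trades)
```

Trades that never reach the $200.00 floor by bar 20 are exited immediately instead of riding to an average $200.00 loss, which is exactly the improvement the text describes.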

Using Statistical Analysis to Develop Intelligent Exits

TABLE 9.6 S&P500 MONDAY SYSTEM WITH ADAPTIVE STOPS.

Vars: WinNo(0),AveWin(0),StopLev(2);
Vars: stdwin(0),Count(0);
{ If we had a signal to buy Monday and it was a winning trade, store the adverse movement }
if (DayOfWeek(Date)=5 and Close of Data2>Average(Close of Data2,26))[1] and Close>Open then begin
   AdvWin[WinNo]=Open-Low;
   WinNo=WinNo+1;
end;
{ Calculate the average adverse movement }
if WinNo>0 then begin
   For Count=0 to WinNo begin
      AveWin=AveWin+AdvWin[Count];
   end;
   AveWin=AveWin/(WinNo+1);
   { Calculate the standard deviation }
   for Count=0 to WinNo begin
      stdwin=(AdvWin[Count]-AveWin)*(AdvWin[Count]-AveWin)+stdwin;
   end;
   stdwin=SquareRoot(stdwin/(WinNo+1));
end;
if DayOfWeek(Date)=5 and Close of Data2>Average(Close of Data2,26) then
   buy at open;
exitlong at close;
{ Use the adaptive exit after 30 trades and the 2-point exit before }
{ Using one standard deviation from the mean will only stop out 5% of the
trades, based on a normal distribution }
if WinNo>30 then exitlong ("Adaptive") at NextOpen-AveWin-stdwin stop
else exitlong ("Adaptive2") at NextOpen-2.00 stop;

TABLE 9.7 RESULTS OF BUY MONDAY WITH ADAPTIVE STOPS.

Net profit        $92,950.00
Win%              55
Average trade     $224.52
Drawdown          -$9,175.00
Profit factor     1.62

We also learned how to develop a very simple trade simulator and collect the statistics based on these signals. The next chapter addresses the development of these simulators in more detail. What we have learned here will be used later to develop system examples using advanced technologies such as neural networks and genetic algorithms.
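The arithmetic behind the adaptive stop, the mean plus one standard deviation of past winners' adverse movement, with a fixed 2-point stop until 30 trades have been collected, can be stated compactly in Python. This is my own sketch, not the book's code, and the function and argument names are mine:

```python
from math import sqrt

def adaptive_stop_level(adverse_moves, next_open, min_trades=30, fixed_stop=2.0):
    """Place the stop one standard deviation beyond the average adverse
    movement of past winning trades, once `min_trades` have been collected.
    Before that, fall back to a fixed-size stop below the next open."""
    n = len(adverse_moves)
    if n < min_trades:
        return next_open - fixed_stop          # the 2-point default stop
    mean = sum(adverse_moves) / n
    stdev = sqrt(sum((x - mean) ** 2 for x in adverse_moves) / n)
    return next_open - (mean + stdev)          # adaptive stop level
```

The list `adverse_moves` plays the role of the AdvWin array: one entry per winning trade, measuring how far the trade went against us before closing as a winner.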

10

Using System Feedback to Improve Trading System Performance

A mechanical trading system is really a predictive model of a given market. Many predictive models suffer from the effect of noise or an inadequate understanding of the problem being modeled. In most fields outside of trading, the errors of the last few forecasts are analyzed in order to improve future forecasts. This concept is called feedback. Most predictive models output numerical values. A simple trading system has four discrete values: (1) long, (2) exit long, (3) short, and (4) cover.

HOW FEEDBACK CAN HELP MECHANICAL TRADING SYSTEMS

In a mechanical trading system, feedback is valuable in efforts to identify which signals from the system have the highest probability of being profitable. Feedback also helps in selecting which system to trade when multiple systems are trading in the same market. Later in this book, we will discuss how to implement this application using advanced technologies such as machine learning.

HOW TO MEASURE SYSTEM PERFORMANCE FOR USE AS FEEDBACK

We measure the performance of a system based on its trade history. We can look at the trading results from two different perspectives: (1) on the basis of only closed-out positions, or (2) on a bar-by-bar basis, by recording the opening equity of each position. In this book, we will study only the equity of closed-out positions.

We also can consider how technical and intermarket analysis can be combined with equity analysis to improve system performance. When we analyze which component of a trading system model generated a signal and combine this knowledge with trading performance feedback, we are able to see relationships that are important in modeling the markets but would not otherwise be observed.

METHODS OF VIEWING TRADING PERFORMANCE FOR USE AS FEEDBACK

The first method of viewing performance is to build an equity curve, which is a moving cumulative value of all closed-out trades over the evaluation period for a particular system. A modified version of this method is to view not all trades, but only closed-out trades, over a period of N days, usually the past 100 days.

Another method is to view a trade history, which can be generated in a program like SuperCharts. The trade history would have one trade on each row and would show the type of signal, entry date and price, exit date and price, trade profit and loss, and current equity. SuperCharts allows exporting this type of history to a spreadsheet.

We can also view closed-trade equity versus technical or intermarket relationships. For example, a 100-day change in equity can be plotted on the Y axis, and volatility on the X axis. Figure 10.1 is an example of this type of chart. Alternatively, current volatility can be plotted versus the next N-day change in equity.

FIGURE 10.1 A 100-day change in equity versus N-day volatility.

The interrelationships between the trading systems and the market price actions are very complex and are well hidden from simple analysis. When a system is complex and uses many indicators, it becomes impossible to judge how it will perform in any given market condition. Feedback analysis is a valuable tool for making this impossible job manageable. All mechanical systems have a given trading pattern or "footprint." When we test most trend-following methods over a long enough time period, a common trading pattern emerges. The classic channel breakout system will win 35 to 50 percent of its trades, and the average winner will be much larger than the average loser. The winners will also have more bars in winning trades than in losing trades. Another often overlooked fact is that most trading systems are directly or indirectly based on cycles. System performance will change as the different dominant cycles in the market appear, disappear, and shift. We will also see changes in the trading results as changes occur in the composite of multiple cycles present in the market.

WALK FORWARD EQUITY FEEDBACK

In Chapter 9, we showed how we can simulate a trading system and calculate its adverse excursion. We can then use this information to develop stops for the actual trading system in a walk forward, adaptive manner. The same idea can be used to adapt system parameters and rules based on a moving window of equity.

Let's look at an example of how this approach can improve system performance. We will use a 20-bar channel breakout, as in Chapter 4, and will apply this analysis to the D-Mark.

Channel breakout systems can be helped by equity feedback because their trades have dependencies: for example, if the last trade on the buy side was a winner, the chances that the next trade on the buy side will win are increased. Let's now develop code for TradeStation which can track a simulated equity curve for both the long and the short sides. We need to simulate these trades because once we change the trades based on our analysis, we have, in effect, changed the system. If we want to develop statistics on our original system and use them to modify a new system, we need to simulate the trades. The code for simulating both the long and short sides for a channel breakout system is shown in Table 10.1.

These two simple user functions, TrendChanLong and TrendChanShort, keep track of the equity on the long side and the short side, respectively. They work well, but in order to simplify this example, we avoid handling several special cases. The first case is that, if you are using backadjusted contracts, we do not handle cases in which prices of a commodity go negative. Another unaddressed issue is when both a long and a short signal are generated on the same day. These issues are not a problem with commodities like the currencies, the S&P500, and T-Bonds, but they are a problem with crude oil because this market often moves in a tight range. The code for handling these issues exists but is beyond the scope of this book.

Let's now use these routines in the channel breakout trading system. We ran this system on the D-Mark, using data from 1/1/80 to 5/17/96. Using equity analysis, we found a pattern that worked well on the D-Mark: take only short trades when the system has been profitable on the long side but has been losing money on the short side. Using the same logic, we take only long trades when the system is making money on the short side and losing on the long side. We also wait 10 trades before applying equity analysis. The code for our feedback channel breakout system is shown in Table 10.2.

We tested this code on the D-Mark data from the period 1/1/80 to 5/17/96. Our results are shown in Table 10.3, along with the results for the original system.
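The simulation idea behind the TrendChanLong and TrendChanShort functions can be cross-checked outside TradeStation. The Python sketch below is my own simplified reconstruction under stated assumptions (closing prices only, entries at the breakout close, stop-and-reverse, no backadjusted-contract handling), not the book's code:

```python
def channel_breakout_equity(closes, chan_len=20):
    """Simulate a stop-and-reverse channel breakout on closing prices and
    return separate cumulative equity series for long and short trades."""
    position, entry = 0, 0.0
    long_total = short_total = 0.0
    long_eq, short_eq = [], []
    for i, price in enumerate(closes):
        if i >= chan_len:
            hi = max(closes[i - chan_len:i])   # channel top from prior bars
            lo = min(closes[i - chan_len:i])   # channel bottom from prior bars
            if price > hi and position != 1:
                if position == -1:             # flipping long closes the short
                    short_total += entry - price
                position, entry = 1, price
            elif price < lo and position != -1:
                if position == 1:              # flipping short closes the long
                    long_total += price - entry
                position, entry = -1, price
        long_eq.append(long_total)             # one equity point per bar
        short_eq.append(short_total)
    return long_eq, short_eq
```

With the two curves in hand, the feedback filter is just a comparison of each curve against its own value SLen or LLen bars ago, as Table 10.2 does in EasyLanguage.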


TABLE 10.1 CODE TO SIMULATE EQUITY ON LONG TRADES ONLY.

Inputs: ChanLen(Numeric);
Vars: BuyLev(0),SellLev(0),BuyEntry(0),SellEntry(0),Position(0);
Vars: Returns(0);
if CurrentBar=1 then Returns=0;
BuyLev=Highest(High,ChanLen)[1];
SellLev=Lowest(Low,ChanLen)[1];
If High>BuyLev and Position<>1 then begin
   BuyEntry=MaxList(BuyLev,Open);
   Position=1;
end;
If Low<SellLev and Position<>-1 then begin
   SellEntry=MinList(SellLev,Open);
   Position=-1;
end;
if Position=-1 and Position[1]=1 then Returns=Returns+(SellEntry-BuyEntry);
TrendChanLong=Returns;

{ Code to simulate equity on short trades only }
Inputs: ChanLen(Numeric);
Vars: BuyLev(0),SellLev(0),BuyEntry(0),SellEntry(0),Position(0);
Vars: Returns(0);
if CurrentBar=1 then Returns=0;
BuyLev=Highest(High,ChanLen)[1];
SellLev=Lowest(Low,ChanLen)[1];
If High>BuyLev and Position<>1 then begin
   BuyEntry=MaxList(BuyLev,Open);
   Position=1;
end;
If Low<SellLev and Position<>-1 then begin
   SellEntry=MinList(SellLev,Open);
   Position=-1;
end;
if Position=1 and Position[1]=-1 then Returns=Returns+(SellEntry-BuyEntry);
TrendChanShort=Returns;

TABLE 10.2 CHANNEL BREAKOUT WITH SIMULATED EQUITY CODE.

Input: SLen(180),LLen(120);
Vars: LongEqu(0),ShortEqu(0),TBars(0);
ShortEqu=TrendChanShort(20,0);
LongEqu=TrendChanLong(20,0);
If TotalTrades<10 then begin
   Buy Highest(High,20)+1 point Stop;
   Sell Lowest(Low,20)-1 point Stop;
end;
If ShortEqu-ShortEqu[SLen]<0 and LongEqu-LongEqu[LLen]>0 then Sell
   Lowest(Low,20)-1 point Stop;
ExitShort at Highest(High,20)+1 point stop;
If LongEqu-LongEqu[LLen]<0 and ShortEqu-ShortEqu[SLen]>0 then Buy
   Highest(High,20)+1 point Stop;
ExitLong at Lowest(Low,20)-1 point stop;

TABLE 10.3 CHANNEL BREAKOUT ON D-MARK WITH AND WITHOUT EQUITY ANALYSIS.

                  Original System     Equity Feedback Demo 1
Net profit        $56,663.75          $50,945.00
Trades            104                 50
Win%              50                  54
Average trade     $544.84             $1,018.90
Drawdown          -$22,075.00         -$12,245.00
Profit factor     1.57                2.53

By using this method of equity feedback to filter our trades, we reduced both the number of trades and the drawdown by almost half. We made 11 percent less money but increased our average trade by 87 percent.

Our second example, a 20-day channel breakout system applied to the D-Mark, used a moving-average crossover of equity as a filter for both long and short trades. We take only long trades when a faster period moving average of long equity is greater than a slower period one. We take only short trades when a faster period moving average of equity on the


short side is above a slower period one. The code for this system is shown in Table 10.4.

TABLE 10.4 CODE FOR MOVING AVERAGE OF EQUITY FILTER WITH CHANNEL BREAKOUT SYSTEM.

Input: SLen(130),LLen(150);
Vars: LongEqu(0),ShortEqu(0),TBars(0);
ShortEqu=TrendChanShort(20,0);
LongEqu=TrendChanLong(20,0);
If TotalTrades<10 then begin
   Buy Highest(High,20)+1 point Stop;
   Sell Lowest(Low,20)-1 point Stop;
end;
If Average(ShortEqu,SLen)>Average(ShortEqu,LLen) then Sell Lowest(Low,20)-1
   point Stop;
ExitShort at Highest(High,20)+1 point stop;
If Average(LongEqu,SLen)>Average(LongEqu,LLen) then Buy Highest(High,20)
   +1 point Stop;
ExitLong at Lowest(Low,20)-1 point stop;

On the basis of our analysis, we found that using a 130-day average of equity minus a 150-day average produced good results. It might be surprising that these two moving averages produce good results when they are so close together. We were surprised, but almost all of the top sets of parameters had moving-average lengths very close together. Using lengths this close reduced our analysis to a mechanical way of saying that the last trade was profitable and we did not have a quick 1- or 2-day whipsaw. The results based on these parameters, over the same period used earlier, are shown in Table 10.5.

TABLE 10.5 RESULTS OF CHANNEL BREAKOUT WITH EQUITY FILTER.

Net profit        $57,886.25
Trades            69
Win%
Average trade     $838.93
Drawdown          -$10,335.00
Profit factor     2.11

Using this moving average filter, we cut the drawdown by more than half and filtered out a little more than one-third of the trades. We did this while slightly increasing net profit.

HOW TO USE FEEDBACK TO DEVELOP ADAPTIVE SYSTEMS OR SWITCH BETWEEN SYSTEMS

Equity curve analysis is a powerful tool for improving the performance of trading systems. The examples presented are only a sample of what can be done using this technology. In another application, this technology can be used to switch between different trading systems. Suppose we had a channel breakout system, a system based on intermarket analysis, and a countertrending type of system. We could simulate the equity curves for each of these systems and then, based on our analysis, select the one that currently has the highest probability of producing the best trade. When developing these types of models, we can combine technical or fundamental factors with our equity curve analysis to further improve performance.

Another application of this technology is to adjust parameters for a given system in a walk forward manner. This can be done by simulating the equity curve of many different sets of parameters for the same system and then, based on our analysis, selecting the best one to currently trade.

WHY DO THESE METHODS WORK?

For a channel breakout system to work well, two conditions must be true:

1. The market must be trending.
2. The performance of a channel breakout system is linked to a relationship between the period used in breakout and the dominant cycle.

If the cycle length becomes longer, then the channel breakout system will get whipsawed during Elliott Waves two and four. ADX can be used to see whether the market is trending. The problem is that the interaction between the dominant cycle and how much the market is trending is complex and has a major effect on the performance of a channel breakout system. Modeling this interaction would require developing complex rules that will make the system less robust. Equity curve analysis can be used to solve this problem because the equity curve contains information about this interaction that can be used to improve the system without a lot of complex analysis.

The concept of system feedback is a powerful trading tool that can improve profit, drawdown, and the winning percentage for most trading strategies. Trading a system without at least being aware of the equity curve is like driving a car at night without lights: possible, but dangerous.

11

An Overview of Advanced Technologies

Advanced technologies are methods based on machine learning or on

analysis of data and development of models or formulas. I have used many

different advanced technologies in developing market timing systems.

These technologies include neural networks, machine induction methods,

and genetic algorithms. This chapter gives a technical overview of each

of these methods and introduces chaos theory, statistical pattern recog-

nition, and "fuzzy logic."

THE BASICS OF NEURAL NETWORKS

Neural networks are loosely based on the human brain but are more sim-

ilar to standard regression analysis than to neurons and synapses. Neural

networks are much more powerful than regression analysis and can be

programmed for many complex relationships and patterns that standard

statistical methods cannot. Their effectiveness in pattern recognition

makes them ideal for developing trading systems.

Neural networks "learn" by using examples. They are given a set of

input data and the correct answers for each case. Using these examples,

a neural network will learn to develop a formula or model for solving a

given problem.
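Learning from labeled examples can be shown concretely. Below is my own minimal Python illustration (not code from this book) of a two-input threshold node trained with the classic perceptron learning rule on the OR problem:

```python
def train_perceptron(samples, epochs=20, rate=0.5):
    """Learn weights and a bias for a single threshold node from
    labeled examples: supervised learning in its simplest form."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Weighted sum of inputs, then a threshold decision function.
            out = 1 if x1 * w[0] + x2 * w[1] + bias > 0 else 0
            err = target - out          # compare to the correct answer
            w[0] += rate * err * x1     # adjust weights toward the answer
            w[1] += rate * err * x2
            bias += rate * err
    return w, bias

# OR is linearly separable, so this simple perceptron can learn it.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if x1 * w[0] + x2 * w[1] + b > 0 else 0
```

Each pass repeats the cycle the text describes: produce an answer with the current weights, compare it to the correct answer, and nudge the weights.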



Let us now discuss how a simple neural network works. Artificial neural networks, like the human brain, are composed of neurons and synapses. Neurons are the processing elements of the brain, and synapses connect them. In our computer-simulated neural networks, neurons, also called nodes, are simply elements that add together input values multiplied by the coefficients assigned to them. We call these coefficients weights. After we have added together these values, we take this total and apply a decision function. A decision function translates the total produced by the node into a value used to solve the problem. For example, a decision function could decide to buy when the sum is greater than 5 and sell when it is less than or equal to 5. Figure 11.1 shows a simple example of a neural network.

FIGURE 11.1 A simple two-layer perceptron. Raw Output = Input1*Weight1 + Input2*Weight2 + Input3*Weight3; then we apply the decision function.

The rows of one or more nodes are called layers. The first row of nodes is the input layer, and the last row is the output layer. When only these simple neurons are connected, we call it a two-layer perceptron. During the early 1960s, Bernard Widrow used two-layer perceptrons to solve many real-world problems, for example, short-range weather forecasts. Widrow even developed a weather forecasting neural network that was able to perform as well as the National Weather Service.

How do we get the value of the weights used to solve a problem? Before a neural network starts learning how to solve a problem, each weight is set to a random value. We call this process "initializing the weights." Once we have initialized the weights, we can adjust them during the learning process.

A perceptron neural network learns by repeatedly producing an answer for each case, using the current value of its weights, and comparing that value to the correct answer. It then adjusts the weights to try to better learn the complete set of data. We call this process "supervised learning."

With simple two-layer perceptrons, the method or algorithm used to adjust these weights could not solve a very important type of problem. In 1969, Minsky and Papert, in a book entitled Perceptrons, proved that a simple perceptron with two layers could not solve "non-linearly separable problems" such as "Exclusive OR." An example of an Exclusive OR problem is: You can go to the store or to see a movie, but you cannot do both.

This flaw in two-layer perceptron neural networks killed funding for neural network research until the mid-1980s. Many researchers still continued working on neural networks, but, without funding, progress was slow.

In 1974, Dr. Paul Werbos developed a method for using a three-layer neural network to solve nonlinearly separable problems such as Exclusive OR. Rumelhart popularized a similar method and started the neural explosion in the mid-1980s. This method, called "backpropagation," is the most widely used neural network algorithm today.

Let's see how this method differs from two-layer perceptrons. Figure 11.2 shows a simple backpropagation neural network. The second row of nodes is called the hidden layer. The first and third layers are called the input layer (inputs) and the output layer (outputs), respectively. A backpropagation neural network will have one or more hidden layers. There are two major differences between a backpropagation neural network and a simple two-layer perceptron. The first difference is that the decision functions must now be more complex and nonlinear. The second difference is in how they learn. In general, a backpropagation neural network learns in the same way as the two-layer perceptron. The main difference is that, because of the hidden layer(s), we must use advanced mathematics to calculate the weight adjustments during learning.

The classic backpropagation algorithm learns slowly and could take thousands of passes through the data to learn a given problem. This is
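The Exclusive-OR limitation can be made concrete. No single threshold node can reproduce XOR, but a three-layer network with a two-node hidden layer can. In this sketch of mine (not from the book), the weights are chosen by hand rather than learned by backpropagation:

```python
def step(total, threshold):
    """Simple threshold decision function."""
    return 1 if total > threshold else 0

def xor_net(x1, x2):
    """Hidden node h1 fires on (x1 OR x2); hidden node h2 fires on
    (x1 AND x2). The output fires on h1 minus h2, which is exclusive OR."""
    h1 = step(1.0 * x1 + 1.0 * x2, 0.5)    # OR node
    h2 = step(1.0 * x1 + 1.0 * x2, 1.5)    # AND node
    return step(1.0 * h1 - 1.0 * h2, 0.5)  # output node
```

The hidden layer is what buys the extra power: it re-maps the inputs into a space where a single threshold can separate the classes.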


FIGURE 11.2 The basics. This diagram shows a simple neural network's processing. Inputs: S&P500, Bonds, CRB; output: S&P500 % change +5 days.

network can be used just like any other indicator to build trading systems. Neural networks' predictions don't need to have high correlation with future price action; a correlation of .2 or .3 can produce huge returns.

MACHINE INDUCTION METHODS

Machine induction methods are ways to generate rules from data. There are many different machine induction methods, and each one has its own strengths and weaknesses. The two that I use are called C4.5, which is a descendant of an algorithm called ID3, and rough sets.
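As a flavor of how the ID3 family (the ancestors of C4.5) decides which rule to split on, here is a minimal information-gain computation in Python. This is my own sketch, and the toy market dataset in it is invented:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction obtained by splitting `rows` on attribute `attr`."""
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr], []).append(label)
    weighted = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
    return entropy(labels) - weighted

# Toy data: does the market close up, given trend and volatility regime?
rows = [{"trend": "up", "vol": "low"}, {"trend": "up", "vol": "high"},
        {"trend": "down", "vol": "low"}, {"trend": "down", "vol": "high"}]
labels = ["up", "up", "down", "down"]
```

Splitting on "trend" separates the classes perfectly (gain of 1 bit), while splitting on "vol" tells us nothing (gain of 0 bits); ID3-style algorithms grow their rule trees by repeatedly taking the highest-gain split.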

ńňđ. 3 |