INSIGHTS AND CHALLENGES ASSOCIATED WITH DETERMINING SEISMIC DESIGN FORCES IN A LOADING CODE

This paper presents and discusses a number of important topics which affect the determination of seismic design forces in a loading code. These range broadly from seismic hazard through to design philosophy and include the following aspects: influence of uncertainty in determining seismic hazard, seismic hazard parameters, site effects, probability level of design ground motions, role of deformations in seismic design, performance expectations and level of protection. The discussion makes frequent reference to the seismic provisions of both the National Building Code of Canada (1995) and the New Zealand Loading Standard (1992). Also, comparisons are made of seismic hazard and seismic design forces for several Canadian and New Zealand cities.


INTRODUCTION
The author was a Visiting Erskine Fellow at the University of Canterbury from January through May 1995. Early in that period he was invited by the New Zealand National Society for Earthquake Engineering to present a lecture based on his experience in developing seismic provisions for the National Building Code of Canada (NBCC).
The Canadian National Committee on Earthquake Engineering (CANCEE) has the responsibility for preparing and recommending the NBCC seismic provisions, which have since 1980 been revised on a five-year cycle. The author has been a member of CANCEE since 1968 and was Chairman from 1975 to 1981. This code development experience, including a substantial amount of related research as well as numerous discussions with earthquake engineers in various countries, has led to a number of perspectives concerning the determination of seismic design forces; these perspectives form the basis for this paper.
The topics covered in this paper range broadly from seismic hazard through to design philosophy. The discussion of the various topics contains frequent references both to the seismic provisions of the 1995 edition of the NBCC [Associate Committee on the National Building Code] and to the earthquake provisions of the 1992 New Zealand Loading Standard (NZS) [Standards New Zealand]. The approach taken in this paper is to highlight major issues rather than to develop a comprehensive approach to the determination of seismic design forces.


SEISMIC HAZARD DETERMINATION

Historical Perspectives
The determination of seismic hazard for application in specifying seismic design forces has changed very significantly during the past four decades, in parallel with the increasing sophistication of building code seismic provisions.
The historical development of hazard mapping in Canada [Basham, 1995] for use in the seismic provisions of the NBCC is illustrative:
• The first map was introduced in the 1953 edition of NBCC, which, on the basis of a qualitative assessment of historical earthquake activity, divided Canada into four zones according to the damage to be expected from future earthquakes. The highest zone (designated 3) was deemed to have an earthquake history comparable to that of California while the lowest zone (designated 0) had little known earthquake history.
• The next map, introduced in the 1970 edition of NBCC, was based on applying the Gumbel extreme-value method to peak horizontal ground accelerations estimated on the basis of historical earthquakes in Canada (and adjacent regions) from 1900 to 1963. This map also contained four zones whose boundaries were based on contours of peak acceleration with a 0.01 annual probability of exceedance.
• The next major change occurred in the 1985 edition of NBCC; the maps introduced at that time remain in use in the current (1995) edition. These maps of peak horizontal acceleration and velocity were based on "seismotectonic probabilism", applying the "Cornell-McGuire" method to 32 earthquake source zones in Canada and adjacent regions [Basham et al., 1985]. The map of peak horizontal velocity was added to depict strong ground motion at a period of approximately 1 s because peak horizontal acceleration depicts ground motion at only low periods, i.e. approximately 0.2 s.
There was also a change of probability level to 10% exceedance in 50 years (0.0021 per annum) to come closer to the perceived safety levels inherent in the NBCC design provisions.
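The correspondence between an exceedance probability over a design life and the quoted annual probabilities (and return periods) can be sketched as follows; this is a minimal illustration assuming independent annual exceedances, with the function name chosen for this example:

```python
def annual_prob(p_exceed, years):
    """Annual exceedance probability equivalent to a probability
    p_exceed of at least one exceedance in the given number of
    years, assuming exceedances in different years are independent."""
    return 1.0 - (1.0 - p_exceed) ** (1.0 / years)

# 10% in 50 years (1985/1995 NBCC hazard maps)
p = annual_prob(0.10, 50)
print(round(p, 4), round(1 / p))  # ~0.0021 per annum, ~475-year return period

# 450-year return period (NZS 1992)
print(round(1 / 450, 4))          # ~0.0022 per annum
```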
The earthquake provisions of NZS 1992 are based on seismic hazard determined using the "Cornell-McGuire" method [Matuschka et al., 1985]. In this case, 5% damped uniform hazard spectra (UHS) at a return period of 450 years (0.0022 per annum) were determined. The spectrum recommended for use in the loading standard is a smoothed upper bound spectrum modified in the long period region to provide a constant velocity spectrum for periods greater than 1 s. This spectrum is normalized at a period of 0.2 s and a contour map of the 0.2 s spectral acceleration ordinate is used to scale the spectrum for use in various parts of the country.
The determination of design forces based on a UHS, as introduced in NZS 1992, is clearly an improvement on the method of applying amplification factors to peak horizontal ground motions, i.e. accelerations and velocities. Spectral ordinates are a direct representation of the seismic forces being imposed on a structure whereas additional approximations are introduced by applying amplification factors to peak horizontal ground motions.
The hazard analysis conducted by Matuschka et al. [1985] showed significant differences in spectral shape throughout New Zealand; nevertheless, their recommendation was to use a constant spectral shape scaled by the values at a period of 0.2 s. This is equivalent to scaling only to peak horizontal ground acceleration, since the spectral ordinate at 0.2 s correlates strongly with peak horizontal ground acceleration. This form of scaling has significant implications for medium to long period structures, which respond primarily to peak horizontal ground velocity. The choice of appropriate seismic hazard parameter(s) is discussed in more detail later in this paper.

Influence of Uncertainty
The determination of seismic hazard is associated with a high degree of uncertainty. It is important to distinguish between two kinds of uncertainty. Aleatory or random uncertainty arises from physical variability that is inherent in the unpredictable nature of future events. In the context of seismic hazard, the major source of aleatory uncertainty is the randomness of earthquakes and their ground motions, which causes, for an earthquake of a specific magnitude at a specific distance, a scatter of ground motion amplitudes about the median values. Consequently, even if the median value could be predicted with perfect accuracy, this random uncertainty would still exist.
The "Cornell-McGuire" method, as used in determining both the current Canadian and New Zealand seismic hazard, includes aleatory uncertainty in the ground motion relations, usually by assuming ground motion to be distributed in a log-normal manner and specifying the standard deviation of that distribution. Figure 1 illustrates aleatory uncertainty in the ground motion relations for peak horizontal acceleration developed for use in Japan [Fukushima and Tanaka, 1990]. This uncertainty is typically characterized by the standard deviation (σ) of a log-normal distribution. In this figure, σ is 0.21, but values can be as high as 0.30 to 0.40. A value of σ = 0.30 means that the ±σ values are approximately a factor of 2 from the median ground motion relationship; this would mean that 68% of the data is within a factor of 2 of the median.
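As a numerical check of the factor-of-2 statement above, a short sketch (assuming, as is common for ground motion relations, that σ is the standard deviation of log10 of the ground motion amplitude):

```python
import math

sigma = 0.30
factor = 10 ** sigma                         # ratio of the +1 sigma bound to the median
within_1sigma = math.erf(1 / math.sqrt(2))   # fraction of a normal population within +/- 1 sigma

print(round(factor, 2), round(within_1sigma, 2))  # ~2.0, ~0.68
```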

The second type of uncertainty is known as epistemic or modelling uncertainty and arises from incomplete knowledge associated with modelling assumptions, unknown or only partially known parameters and extrapolation beyond the observed range of data. While there are many sources of epistemic uncertainty in seismic hazard determination (e.g. specification of source zones, specification of earthquake depths and determination of rates of seismicity), the largest source of this kind of uncertainty is in the specification of ground motion attenuation relations. These relations, which are usually determined by regression analyses of strong motion data from previous earthquakes, are frequently in gross error in estimating the median attenuation of ground motion of future earthquakes.
A dramatic example is given by the attenuation of measured peak horizontal ground accelerations during the 1988 Saguenay earthquake. Figure 2 shows a comparison between the mean measured attenuation and several ground motion relations [Atkinson and Boore, 1990; Hasegawa et al., 1981; Joyner and Boore, 1981]. The Hasegawa et al. relations, which were used in determining the 1985 Canadian seismic hazard maps, cross over the measured values at an epicentral distance of about 250 km, but the attenuation shape is very poor. The Atkinson and Boore relations, which were determined for eastern North America, have quite a reasonable attenuation shape but underestimate the measured values by more than a factor of 3. Because of the scientific judgements and modelling assumptions which go into developing such ground motion relations, different investigators will arrive at significantly different relations (as illustrated in Figure 2), each of which represents their best estimate of the median value of a specific ground motion parameter for a given magnitude and distance combination.
The Cornell-McGuire methodology does not explicitly include epistemic uncertainty, although it has traditionally been handled informally, either by making conservative assumptions or by using a sensitivity analysis. The treatment of this kind of uncertainty has been formalized in recent years through use of the logic tree approach [Coppersmith and Youngs, 1986]. In this approach, a discrete distribution of alternatives is chosen for each variable in the analysis; the elements in these discrete distributions are weighted to represent the subjective assessment of the probability that the particular element is correct. For example, a best estimate value might be weighted at 0.5 whereas feasible upper and lower values might be weighted at 0.25 each, with a total weighting of 1.0.
For any given probability, the value of the hazard associated with each combination of all of the variables is itself computed using the Cornell-McGuire methodology. The distribution of these results is then analysed to determine the various fractiles, e.g. median and 84th percentile. The median is the value for which there is a 50% confidence level that the hazard will not be exceeded for the specified probability. Similarly, the 84th percentile value is one for which there is 84% confidence of not being exceeded. The weighted sum of the contributions of all of the combinations represents the mean or expected value. The relationship between the mean, the median and the other fractiles depends upon the distribution of hazard, which is usually skewed significantly. If the single 'best' input for each parameter is used in lieu of a logic tree analysis, then the resulting hazard would be very near to the 50% confidence value.
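The weighting and fractile computation described above can be sketched as follows; the branch hazard values and weights are purely hypothetical, chosen only to show that the weighted mean exceeds the median when the distribution is skewed toward high values:

```python
# Hypothetical logic-tree branch results: (hazard value in g, weight).
branches = [(0.15, 0.25),   # feasible lower estimate
            (0.25, 0.50),   # best estimate
            (0.55, 0.25)]   # feasible upper estimate

mean = sum(v * w for v, w in branches)   # weighted sum = mean (expected) hazard

def fractile(branches, q):
    """Smallest branch value whose cumulative weight reaches q."""
    cum = 0.0
    for v, w in sorted(branches):
        cum += w
        if cum >= q:
            return v

median = fractile(branches, 0.50)
p84 = fractile(branches, 0.84)
print(mean, median, p84)  # mean exceeds median for this skewed distribution
```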
Consequently, the relationship between the 84th percentile values and median values is a direct measure of the extent of epistemic uncertainty.
The logic tree approach combined with the Cornell-McGuire approach has been used in the determination of new seismic hazard results in Canada, which are intended to be the basis for new seismic zoning maps for use in the seismic provisions of future editions of the NBCC [Adams et al., 1995]. Three ground motion relations were used, comprising a "best estimate" relation as well as upper and lower relations.
For western Canadian locations, the upper and lower limits are ± one standard deviation from the best estimate. For eastern Canadian locations, the upper and lower limits were determined by examining a range of "expert opinion" relations, as described by Atkinson [1995].
Figure 3 illustrates the range of professional opinion for spectral acceleration at 1 Hz for an earthquake in eastern Canada of magnitude (mbLg) 5.5 at closest fault distances of 5, 20, 70 and 200 km. Each symbol represents the median value proposed by one expert, with error bars for ± one standard deviation. The solid filled circles are the best estimate relations used in the hazard analysis and the upper and lower relations are shown by the large horizontal bars. It can be seen that the epistemic uncertainty represented by the range of expert opinion is quite substantial.
The effects of epistemic uncertainty (in ground motion relations, seismicity rates, maximum magnitudes and hypocentral depths) on seismic hazard in Canada are illustrated in Figure 4, which shows the 84% and 50% confidence level uniform hazard spectra (for a 10% probability of exceedance in 50 years, or a return period of 475 years) in Vancouver and Montreal [Heidebrecht and Naumoski, 1995]. This figure shows that epistemic uncertainty is of great significance. The ratio of the 84th percentile to median values for Vancouver is approximately 2 throughout the spectrum and, for Montreal, ranges from 1.4 (short period) to 3.3 (long period). These ratios are fairly typical of the western and eastern Canadian locations for which preliminary hazard computations have been done.
The above results indicate several implications for the computation of seismic design forces. First, the extent of epistemic uncertainty is such that it should be taken into account in determining the level of seismic hazard which is to be used for the calculation of seismic design forces. Hazard estimates made without including epistemic uncertainty may be unrealistically low; it should be emphasized that there is only a 50% confidence that the hazard values will not be exceeded if median ground motion relations are used as the single "best estimate" relations. In fact, the published ground motion relations typically utilised in calculating hazard using the Cornell-McGuire approach are usually median curves.
Second, ground motion hazard estimates are highly uncertain, especially if the normal Cornell-McGuire methodology is applied using "best-estimate" values of the input variables. For the 50% confidence level, there is a significant likelihood that the actual motion, even for the same probability of exceedance, will be substantially higher than that which has been estimated. This means that engineers must not place the same level of reliance on the results of a seismic analysis as they would on the results of analyses for dead and live loads. There must be substantial capacity in excess of the design level in order to have a high degree of confidence that the designed structure can survive an earthquake. This aspect is discussed again later in this paper.

Historical and Geological Bases for Seismicity
Traditionally, seismicity in any region has been determined primarily on the basis of historical earthquake activity in the region. In many regions, because of the relatively short period of historical data, seismicity determined in this manner is often concentrated in small areas around significant historical events, with sizeable other areas having little or no seismicity due to the absence of medium to large earthquakes during the period of historical record. One significant example of this phenomenon is in eastern Canada. There is considerable seismic activity centred in the Charlevoix region, approximately 120 km northeast of Quebec City. The historically-dominated seismic source zone models which were used in determining the 1985 zoning maps resulted in very high values of peak ground motions with a very rapid reduction in those values as one moves away from that area. This can be seen clearly in Figure 5. Such so-called "hot spots" with rapid gradients of seismic hazard around a small area of historical earthquake activity are associated with relatively short historical records and are not likely to be representative of future hazard. One of the effects of the apparent high seismicity in such a region is to diminish the apparent hazard in other nearby regions which have not had a great deal of earthquake activity during the relatively short period of historical record. In the Canadian context, based on a reactivated rift hypothesis [Basham, 1995], earthquake activity can be expected to occur in a number of other regions of eastern Canada (near to the Charlevoix region) which have not been highly seismic during the short historical period. It is therefore desirable to develop alternative source zone models which are based on geological features that suggest potential future earthquake activity.
The geological basis for seismicity in eastern Canada has been incorporated into a source zone model (R model) with a small number of zones [Basham, 1995]. Both this model and a model based primarily on historical seismicity (H model) have been used in the determination of new Canadian seismic hazard results, as described by Adams et al. [1995]. It would be possible to use the logic tree approach to combine the effects of the two alternative models (e.g. by weighting each at 0.50), but any weight given to the R model would of course reduce the calculated hazard (and consequently the level of protection) in regions of high historical seismicity. Instead, it has been decided to use a quasi-probabilistic alternative which involves computing the probabilistic hazard for each of the H and R models separately and then choosing the higher value of hazard for each spectral ordinate at each location. This so-called "robust" approach does not produce a truly uniform hazard map but is probabilistic in the sense that each spectral ordinate represents the probabilistic hazard associated with an identifiable model.
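The "robust" combination described above amounts to an ordinate-by-ordinate envelope of the two models. A minimal sketch, with purely illustrative spectral values standing in for the H-model and R-model hazard results:

```python
# Hypothetical hazard spectra from the two source zone models at one site.
periods = [0.1, 0.2, 0.5, 1.0, 2.0]    # period (s)
sa_H = [0.60, 0.55, 0.30, 0.12, 0.05]  # H-model spectral acceleration (g)
sa_R = [0.40, 0.45, 0.33, 0.16, 0.08]  # R-model spectral acceleration (g)

# "Robust" spectrum: the larger of the two model values at each period.
sa_robust = [max(h, r) for h, r in zip(sa_H, sa_R)]
print(sa_robust)  # envelope of the two models, ordinate by ordinate
```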
Figure 6, which shows firm ground uniform hazard spectra for Montreal and Quebec City, demonstrates the practical utility of the "robust" approach described above: the use of the maximum value provides a reasonable representation of hazard at each location, without having one or the other of the models diminish the hazard at either location.

Peak Ground Acceleration
While the limitations of peak ground acceleration (PGA) as a measure of the damage potential of strong seismic ground motions have been recognized for some time, it continues to be given an inappropriate level of importance in earthquake engineering. For every new earthquake which occurs, including the recent Hyogo-ken Nanbu earthquake (January 17, 1995) which devastated Kobe, the most common description of the severity of ground motions is the level of recorded PGA. There are two primary reasons why the PGA reference continues to predominate: i) it is the parameter which is the easiest to determine, since it can usually be read directly from the trace of a strong motion accelerogram, and ii) engineers feel comfortable with acceleration as a parameter, since inertial force equals mass times acceleration, so that the lateral force coefficient (of weight) for seismic loading is often thought of as an acceleration-type term expressed as a decimal fraction of the gravitational acceleration "g".

The primary problem with using PGA as a measure of damage potential is that it represents that potential only for very low period structures, usually those with periods of about 0.2 s or shorter. Since most engineered structures have periods of 0.5 s or higher, it is inappropriate to use PGA as the single measure of damage potential. Two examples serve to illustrate this point.
In January 1982 a magnitude (mb) 5.7 earthquake occurred in the Miramichi region of New Brunswick, Canada. While no strong motion instruments were present at the time, several instruments were installed immediately afterwards in order to record expected aftershock motions. The subsequent aftershock records included several from a magnitude 5.0 event on March 31, 1982. Several of these records had fairly high PGA levels, including one at approximately 0.4g. This information was the basis for some alarm because a nearby nuclear power plant (approx. 150 km from the epicentre) had been designed for a PGA level well below 0.4g. However, while the earthquake occurred in a remote, relatively uninhabited area of New Brunswick, the crockery in a hunter's cabin very near the epicentre was undisturbed and the earthquake caused no discernible damage anywhere.
The explanation for this apparent inconsistency is provided in Figure 7 [Heidebrecht and Naumoski, 1986], which shows the 2% damped response spectra of the three components of strong motion recorded at one site; the transverse component is the one with the 0.4g PGA. The horizontal component spectra have peaks at very short periods (below 0.04 s) so that, while the PGA is high, the values of the response spectra at periods above about 0.25 s are below 0.1g, which would correspond to an effective peak ground acceleration of well below 0.05g.
The second example is less dramatic but is of greater engineering significance. Accelerograms recorded during the 1994 Northridge earthquake showed very high values of horizontal PGA, with several exceeding 1g. However, a number of these records with high PGA values also had exceedingly high values of horizontal peak ground velocity (PGV). Based on data reported by Iwan (1994), Table 1 shows both peak horizontal PGA and PGV values for the stations with the largest PGV values.
Records with typical spectral shapes have PGV values in m/s which are approximately equal, in numerical value, to the PGA values in g; this would mean PGV/PGA ratios of about 1.
However, it can be seen that the records in Table 1 have PGV/PGA ratios which range from nearly 1.6 to over 2. This means that the capability of these motions to excite medium to long period structures is 60% to 100% higher than would be expected strictly on the basis of PGA alone. Consequently, knowing only PGA would result in a significant underestimate of the damage potential of these records.
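The arithmetic behind the 60% to 100% figures can be sketched as follows; the PGA and PGV values below are hypothetical stand-ins, not the actual Table 1 entries:

```python
# Hypothetical Northridge-like record (PGA in g, PGV in m/s).
pga = 0.80
pgv = 1.28

# A "typical" spectral shape has PGV/PGA ~ 1, so the excess of the
# ratio over 1 measures how much stronger the medium/long period
# excitation is than PGA alone would suggest.
ratio = pgv / pga
excess_percent = (ratio - 1.0) * 100
print(round(ratio, 2), f"{excess_percent:.0f}% above a typical-shape record")
```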

A/V Ratio as a Measure of Frequency Content
It is clear from the above discussion that one single parameter is insufficient to characterize the damage potential of strong earthquake ground motions. Neither PGA nor PGV can provide a measure of the potential of strong seismic ground motion to damage structures throughout the full period range. The varying frequency content of ground motion was recognized in considering how the maps of PGA and PGV would be used in the 1985 seismic provisions of the NBCC [Heidebrecht et al., 1983]. The ratio of PGA to PGV, commonly referred to as the A/V ratio, was found to be a simple but good measure of frequency content.
If PGA is expressed as a decimal fraction of "g" and PGV in m/s, then the A/V ratio for "typical" strong motion records is in the neighbourhood of 1, with a normal range from about 0.3 to 3. Design spectra used in many building codes, which are scaled by PGA, are often associated with an implicit A/V of about 1. Values as low as 0.3 to 0.5 can be observed for records which are rich in low frequencies (i.e. long periods); motions recorded at the surface of the soft Mexico City "lake zone" during the 1985 Michoacan earthquake are in this range.
Intraplate earthquakes are rich in high frequencies, with A/V ratios typically well above 1, normally in the range of 2 to 3 but often much higher. For example, the Miramichi records whose spectra are shown in Figure 7 have A/V ratios of 9 to 12.
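The A/V ranges quoted above suggest a rough classification of records. The sketch below uses boundary values chosen for illustration (loosely based on the ranges in the text), not a formal standard:

```python
def av_category(pga_g, pgv_ms):
    """Classify a record by its A/V ratio (PGA in g, PGV in m/s).
    The boundary values are illustrative choices, loosely based on
    the ranges quoted in the text, not a formal standard."""
    av = pga_g / pgv_ms
    if av < 0.8:
        label = "low A/V (long-period rich, e.g. soft-site motions)"
    elif av <= 1.5:
        label = "intermediate A/V (typical spectral shape)"
    else:
        label = "high A/V (high-frequency rich, e.g. intraplate motions)"
    return av, label

print(av_category(0.4, 0.04))   # Miramichi-like record: A/V = 10
print(av_category(0.17, 0.40))  # soft-site-like record: A/V ~ 0.4
```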
In order to examine the influence of the A/V ratio on structural response, several ensembles of actual strong motion records having different A/V ratios were selected from a database of strong motion records [Naumoski et al., 1993]. The mean 5% damped response spectra for those ensembles, with the records scaled to a PGA of 1g, are shown in Figure 8a. The ranges and mean values of the A/V ratio for each of the ensembles shown in this figure are given in Table 2.
Figure 8a shows clearly the distinctions in response spectra between the various ensembles. While the amplitude of the spectral peaks is relatively uniform, the locations of those peaks move from about 0.2 s (high A/V) to about 1 s (low A/V).
Another and perhaps more dramatic observation concerns the impact of the A/V ratio on the response of medium to long-period structures for a constant PGA. For systems with a period of 1 s, the mean spectral ordinates for the two high A/V ensembles (NH and VH) are less than 20% of the peak value; even at a period of 0.5 s, the spectral values are less than one-third of the peak value.
An important evaluation of the influence of the A/V ratio can be made by examining the response spectra for the different ensembles when the motions are scaled to a PGV of 1 m/s, as shown in Figure 8b. All of the ensembles have similar response spectra for periods longer than about 0.5 s; this indicates that, for a wide range of A/V ratios, medium to long period response is a direct function of PGV. In the short period region, the response spectra separate into different branches with the amplitude of the branches increasing as the A/V ratio increases. Very approximately, the peak spectral value of each branch is proportional to the A/V ratio.
Other investigators have also noted the significance of the A/V ratio. Sawada et al. [1992] reported that the A/V ratio is an excellent parameter to represent both the spectral characteristics and the duration of earthquake ground motion. They also identified the dependence of the A/V ratio on earthquake magnitude, epicentral distance and predominant site period. Of particular interest is the strong correlation between A/V and strong motion duration; duration decreases with increasing A/V. Research on the effect of A/V on structural damage and on inelastic ductility demand has been reported by Zhu et al. [1988a; 1988b].

PGV as the Basis for Code Seismic Design Forces
When the seismic design force format was developed to utilize both PGA and PGV in the seismic provisions of NBCC 1985, it was decided to use PGV as the primary independent variable to characterize seismic hazard.
In the period range T > 0.5 s, the seismic response factor is proportional to 1/T, which is the functional relationship that had been in place prior to 1985. In the low period region, there are three branches which reflect the different A/V ratios. The PGA and PGV maps are given as zonal rather than contour maps, with Za being the acceleration-related zone number and Zv being the velocity-related zone number. While these are non-dimensional zone numbers, the system of defining zones corresponds to the units indicated earlier when defining the A/V ratio: Za > Zv represents high A/V ratios and Za < Zv represents low A/V ratios.
Za = Zv corresponds to A/V being in the neighbourhood of 1. The low period plateau for Za > Zv corresponds to an A/V ratio of approximately 1.5 while the corresponding plateau for Za < Zv corresponds to an A/V ratio of approximately 0.7. This form of seismic response factor allows PGV to be the primary variable determining seismic design forces, using the three low period branches to recognize a range of A/V ratios. While the branches of the seismic response factor do not cover the full range of possible A/V ratios (as shown in Figure 8b), they do give explicit recognition to the influence of differing frequency content on seismic design forces.

Uniform Hazard Spectra for Determination of Seismic Design Forces
It is important to note the distinction between UHS and acceleration response spectra (ARS). UHS are lines connecting spectral ordinates at given periods, each of which has been determined by a distinct process involving the ground motion relations at those periods and using several seismic source zone models. The ordinates at different periods are often dominated by earthquakes of different magnitudes and at different distances from the particular location. Consequently, a UHS at a particular location is not the same as an ARS, which is defined as the set of responses of SDOF systems with different periods subjected to one specific earthquake motion. However, while a UHS determined from a seismic hazard analysis may be the basis for a seismic design force specification, the UHS is often treated as an ARS in seismic analysis, e.g. in conducting a dynamic analysis using modal superposition. Consequently, while the A/V ratio originated as a characterization of individual earthquake records, it is also a reasonable characterization of the properties of a UHS.
As indicated previously, the earthquake provisions of NZS 1992 are based on the 5% damped uniform hazard spectrum recommended by Matuschka et al. [1985]. This is the direction in which other codes are also proceeding; the plan is for the seismic provisions of the next edition of the NBCC to be based on uniform hazard spectra. It is therefore instructive to relate spectral shapes to A/V ratios. This relationship was studied by Heidebrecht et al. [1994] for the previously mentioned ensembles of strong ground motion records [Naumoski et al., 1993]. Based on a regression analysis, this study found that the ratio of peak spectral acceleration (SAm) to peak spectral velocity (SVm) equals 1.20 A/V. The fact that the slope of this regression relationship is approximately 1.20 reflects the fact that the spectral amplification factor for acceleration is about 20% larger than that for velocity. The ratio SAm/SVm may actually be preferable to A/V as a description of frequency content in that the peak spectral acceleration is not significantly affected by isolated high ground acceleration spikes, which would be included in the A/V ratio. Matuschka et al. [1985] show upper bound, lower bound and mean 450 year uniform hazard spectra based on their seismic hazard analysis of all of New Zealand. Analysing these spectra and utilizing the above relationship results in equivalent A/V ratios of 1.05 (upper bound), 1.5 (mean) and 1.9 (lower bound). These results indicate that there is quite a range of spectral frequency content throughout New Zealand. Nevertheless, a single shape was recommended for use in determining design loads, corresponding closely to the upper bound spectrum, with an equivalent A/V ratio of just over 1. The implication of this choice is that high frequency motions are not given any recognition in the design spectral shape. However, since the design spectrum is scaled by the value at the period of 0.2 s, which is equivalent to scaling by PGA, this does not necessarily mean that low period hazard is being discounted.
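Under the regression relationship quoted above, an equivalent A/V ratio can be back-calculated from the peaks of a smoothed spectrum. A brief sketch, with hypothetical spectral values (not taken from Matuschka et al.):

```python
def equivalent_av(sa_max_g, sv_max_ms):
    """Invert the regression SAm/SVm ~ 1.20 * (A/V) reported by
    Heidebrecht et al. [1994] to obtain an equivalent A/V ratio
    from a smoothed spectrum's peak spectral acceleration (g)
    and peak spectral velocity (m/s)."""
    return (sa_max_g / sv_max_ms) / 1.20

# Hypothetical spectral peaks, chosen for illustration only:
print(round(equivalent_av(0.90, 0.71), 2))  # ~1.06, near the NZS upper-bound shape
```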

Geological and Local Effects
Geological site effects are those related to the properties of surface geological formations, particularly with regard to the transmission of shear waves. For example, soft rock is more flexible than hard rock and therefore produces larger amplitudes. These effects are illustrated by Table 3, taken from Reiter (1990), which shows relative Modified Mercalli Intensity (MMI) units for various types of rocks in California compared with granitic and metamorphic rock.
This table indicates that the range of expected MMI levels, arising from the same earthquake, is substantial. According to Newmark and Rosenblueth [1971], peak ground velocity doubles with each unit increase in MMI. Consequently, one can expect peak ground velocities up to 4 times those on granite for various types of sedimentary rock (excluding the youngest saturated alluvial deposits).
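The factor-of-4 conclusion follows directly from the doubling rule, i.e. a factor of 2**dMMI:

```python
def pgv_factor(delta_mmi):
    """Factor on peak ground velocity implied by Newmark and
    Rosenblueth's rule that PGV doubles per unit increase in MMI."""
    return 2.0 ** delta_mmi

print(pgv_factor(2))  # a site two MMI units above granite sees 4x the PGV
```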
It should be noted that, while the range of relative intensities associated with geological effects can be quite large, these are not associated with the site amplification phenomenon, which is a local effect described below. The variations in intensity of surface motion due to geological effects are now generally incorporated directly into the ground motion relations used in seismic hazard analysis; for example, the relations of Boore, Joyner and Fumal [1993] are specified for site categories defined by shear wave velocity. Local effects, which are commonly called site effects, arise from the presence of deposits of softer rock or soil overlying hard rock, giving rise to a site amplification which is related to the stiffness, mass and damping properties of the layers of material.
The resulting surface motion will show a pseudo-resonance effect at the fundamental period of the soil-layer system; if the material damping is very low, this effect can be quite pronounced with a significant amplification of the surface response spectrum at the site period.
While it is interesting to distinguish between geological and local effects, it should be noted that the actual separation of the two effects is usually not possible since measured earthquake motions contain both effects and do not discriminate between them. It is important to recognize that both effects need to be incorporated into the determination of seismic design forces. This is generally done in a two-step process, which roughly corresponds to the geological and local effects. The first step is to ensure that the seismic hazard, e.g. in the form of a zoning map, is specified for a particular reference ground condition. It is preferable that this ground condition be one which is commonly encountered and that it be specified clearly. The second step is to distinguish between different local site conditions, usually by applying site amplification and deamplification factors. Sometimes, as in NZS 1992, complete spectra are specified for the different site conditions.
With reference to the first step, NBCC 1995 is an example of a code in which the reference ground condition is not stated clearly.
The commentary which describes the seismic provisions nowhere mentions the ground conditions which are associated with the zoning maps for peak ground velocity and peak ground acceleration. While there is an implicit ground condition associated with the application of a foundation factor of 1, this is not particularly satisfactory because of the wide range of conditions included in the description of this category, i.e.
"rock, dense and very dense coarse-grained soils, very stiff and hard fine-grained soils; compact coarse-grained soils and firm and stiff fine-grained soils from 0 to 15 m deep".
Even the "rock" part of this description is ambiguous because the characteristics of rock vary significantly in various parts of the country. Western Canadian rock is typically in the Boore, Joyner and Fumal (BJF) category B (shear wave velocity between 360 and 750 m/s) whereas the hard rock which typifies much of eastern Canada has shear wave velocities in excess of 2500 m/s.
By contrast, the NZS 1992 commentary specifies clearly that the reference ground condition is the intermediate subsoil category (b), which corresponds to the Katayama Type III ground conditions. These were the basis for the ground motion relations [Katayama, 1982] used in determining the uniform hazard spectra used in that loading standard.

Site Effects in Determining Seismic Design Forces
Given the foregoing discussion, site effect distinctions within building code design force specifications are intended to provide amplification and/or deamplification from the reference ground condition based on the properties of the local soil system. Amplification due to quasi-resonance is normally intended to be included in those specifications.
As shown in Eq. 2, NBCC 1995 incorporates site effects within the foundation factor F, whose value ranges from 1 (for rock and very stiff soils, as described above) to 2 (for very soft and soft fine-grained soils with depth greater than 15 m). However, the effective foundation factor in the short period range is, in most instances, equal to 1 because the code places a limit on the product FS. The product FS has a maximum value of 4.2 for Za > Zv and a maximum value of 3.0 for Za ≤ Zv. Referring to Figure 9, this means that site amplification due to soft soils is applicable to the medium to long period region, but that there is no short period amplification except for Za < Zv (i.e. ground motions having low A/V ratios), for which the maximum amplification is 1.4. The corresponding NZS 1992 site factors show that the largest long period amplification of 2.25 is comparable to the highest foundation factor (2.0) in NBCC 1995. Also, amplifications ranging from 1.2 to 1.5 are applicable in the short period region; the maximum of 1.5 is comparable to the NBCC 1995 maximum of 1.4 for Za < Zv.
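The interplay between the foundation factor F and the cap on the product FS can be made concrete with a short sketch. The cap values (4.2 for Za > Zv, 3.0 for Za ≤ Zv) are those quoted above; the short-period seismic response factor S = 2.1 used in the example is an assumption, chosen only to be consistent with the maximum amplification of 1.4 noted in the text:

```python
def effective_foundation_factor(F, S, Za, Zv):
    """Usable NBCC 1995 foundation factor once the cap on the product F*S
    is applied: the lesser of F itself and cap/S (cap values as quoted in
    the text)."""
    cap = 4.2 if Za > Zv else 3.0
    return min(F, cap / S)

# Soft soil (F = 2.0) in the short period range with Za < Zv, assuming a
# short-period seismic response factor S of 2.1:
f_eff = effective_foundation_factor(2.0, 2.1, Za=2, Zv=4)  # about 1.43
```

Soft-soil amplification thus survives the cap only partially at short periods, consistent with the 1.4 maximum; at medium to long periods S is smaller and the full F = 2.0 can apply.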
There has been considerable recent research on site effects. Borcherdt and Glassmoyer [1993] conducted a detailed study of site amplifications in the 1989 Loma Prieta earthquake and also made recommendations for relative amplification factors between various site classes. While their investigation included non-linearity with respect to intensity of input ground motion, their recommended amplifications are for input ground motion levels near 0.1g; larger input motions would generally have slightly lower amplifications. The site class definitions and proposed amplifications [Borcherdt, 1994] are given in the top section of Table 5. The amplification factor Fa is applicable in the short period region and the factor Fv in the medium to long period region.
This table also includes the range of amplifications in each period region. That range is just over 2 in the short period region and over 4 in the medium to long period region, both of which are considerably larger than the ranges in NBCC 1995 and NZS 1992 (the reason being that neither code includes de-amplification for very hard rock sites). Martin and Dobry [1994] report on proposed site amplification coefficients developed for the 1994 NEHRP Provisions, based on work by Borcherdt and others. These coefficients were based on draft proposals presented at a workshop and subsequently merged in a consensus proposal. Definitions are similar to those given by Borcherdt and Glassmoyer, with some differences in terminology, values of shear wave velocity and amplification factors. The lower portion of Table 5 shows the definitions and proposed amplifications as well as the ranges of amplifications in both period ranges, which are similar to those given by Borcherdt.
The factors shown in Table 5 represent amplifications for low intensity ground motions, i.e. peak rock acceleration of about 0.1g. Amplifications are somewhat lower for higher intensity rock motion; for example, the maximum value of Fv is 2.8 when rock motion is 0.3g. In addition, the Fa values in the table generally represent mean values, whereas the Fv values generally represent values at about the mean plus one standard deviation level, because the actual value of Fv is highly variable depending on the specific period being considered, site conditions and input motion.
In the period range where quasi-resonant site amplification can be expected, the Fv values are well below the mean, while they are much higher than the mean in the period range away from quasi-resonance. Consequently, the Fv values in the table are not unrealistically large, given the need for seismic design forces to provide some protection against the higher amplifications associated with quasi-resonance.
In order to conduct a detailed comparison between the amplifications shown in Table 5, which represent the state of the art, and the factors in the Canadian and New Zealand codes, it is necessary to determine equivalences between the various category definitions. Adams et al. [1995] … The Katayama Type III ground condition is defined as: "ground of the tertiary era or older (defined as bedrock), and diluvial layer with depth less than 10 m above bedrock" [Katayama, 1982]. A comparison of this definition with the categories defined in Table 5 … In those circumstances it would be reasonable to permit a reduction of design forces, perhaps by 25%, when a designer can demonstrate that a building foundation is supported directly by rock with a shear wave velocity in excess of 2000 m/s.

NBCC and NZS Historical Perspectives
It is of interest to review briefly the manner in which seismic hazard information has been used in the seismic provisions of the NBCC, paralleling the history of hazard mapping outlined earlier in this paper. This history is summarized in Table 7. In relation to the role of ground motions in determining seismic design forces, several significant observations can be drawn from this table:
1. There has been a movement from general hazard zones which are not at all associated with ground motions to zones which are directly based on peak ground motion values.
2. After the introduction of ground motion parameters, there has been a change in the hazard methodology used to determine those parameters.
3. There has also been a change in the probability level at which the ground motion parameters have been determined.

Table 7 (excerpt): manner in which hazard information is used to determine seismic design forces, for successive editions:
- base shear coefficients prescribed for design of buildings in zone 1; these are doubled for zone 2 and multiplied by 4 for zone 3
- base shear coefficient includes a non-dimensional multiplier (0 for zone 0, 1 for zone 1, 2 for zone 2 and 4 for zone 3)
- base shear coefficient includes factor "A" which is numerically equal to the zonal peak acceleration (0 for zone 0, 0.02 for zone 1, 0.04 for zone 2 and 0.08 for zone 3); value of seismic response factor adjusted so that base shear is approximately 20% below that in NBCC 1970
- base shear coefficient includes zonal velocity "v" which is numerically equal to peak ground velocity in m/s (values are 0, 0.05, 0.10, 0.15, 0.20, 0.30 and 0.40); value of seismic response factor adjusted by a calibration process so that seismic forces are equivalent, in an average way across the country, to those in NBCC 1980 (see Heidebrecht et al. 1983)
- elastic force coefficient includes zonal velocity "v" (as above), with total seismic force V calculated as elastic force divided by the force reduction factor, then multiplied by a calibration factor of 0.6 (see Eq. 1); seismic response factor modified to maintain the same design force for highly ductile systems as in NBCC 1985
While the historical trend has been to move towards a more explicit and more rational use of ground motion parameters in determining seismic design forces, the actual levels of those design forces have remained more or less constant during a period of about 40 years, independent of changes in parameter (PGA to PGV), changes in methodology and changes in probability level. The 20% reduction from NBCC 1970 to 1975 was a deliberate adjustment, which reflected a sense that design forces could be reduced slightly without compromising the level of protection. Actually, that change was also accompanied by a significant increase in the overturning moment reduction factor for buildings with periods longer than about 0.5 s; the effective level of protection for buildings sensitive to overturning was therefore about the same as in the previous code.
When the force expressions were modified to include explicit peak ground motions, other factors were adjusted to maintain the same design force levels. This also occurred when the annual probability of exceedance was reduced from 0.01 to 0.0021. When the 1990 edition moved to a rational expression for elastic base shear (i.e. one which corresponds to the elastic dynamics of systems responding to earthquake time-histories with a specified peak ground motion) and the use of a rational force reduction factor (i.e. one which corresponds to the realistic ductility factor capacities of building structures), it was then necessary to introduce a calibration factor of 0.6 to maintain the same design force levels.
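The NBCC 1990 force expression described above reduces to a simple relation (Eq. 1 in the paper; the sketch below uses illustrative variable names, not the code's notation):

```python
def design_base_shear(Ve, R, U=0.6):
    """NBCC 1990-style total seismic force: elastic base shear Ve divided
    by the force reduction factor R, then multiplied by the calibration
    factor U = 0.6 introduced to maintain historical design force levels."""
    return Ve * U / R

# A highly ductile system (say R = 4) designs for 0.6/4 = 15% of the
# elastic force:
V = design_base_shear(1000.0, R=4.0)
```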
While the author has not had the opportunity to evaluate the historical development of the NZS earthquake requirements, it does appear that the force specifications in NZS 1992 exhibit some of the same features noted in the most recent portion of the NBCC history. The design force specifications in NZS, as mentioned previously, incorporate directly the 450 year return period UHS, modified as recommended by Matuschka et al. [1985]. Period-dependent reduction factors are also incorporated directly by specifying design spectra for each of the structural ductility factors which can be used, depending upon the type of structural system.
However, this rationalization was also accompanied by the introduction of a structural performance factor Sp (equal to 0.67). The NZS 1992 Commentary presents two primary arguments for including this performance factor, paraphrased as: i) a single peak response of short duration (as given by a spectral ordinate) will not necessarily result in damage; rather, the level of sustained shaking likely to damage structures will be somewhat lower than the peak response level, and ii) experience in past earthquakes indicates that, on average, buildings sustain less damage than would be predicted from calculations.
However, while the above arguments may certainly be valid in some or most instances, they do not necessarily support the inclusion of a factor which reduces all design forces by one-third. King [1994] provides an explanation for the performance factor Sp which seems more realistic, even if not as rational as that given in the NZS 1992 Commentary. In discussing the debate on the inclusion of this factor, he describes the situation as follows: "The problem was that of balancing current design practices (for which there is no evidence of inadequate performance - although very few have been subjected to design level earthquakes) with the internationally accepted probability of recurrence".
The "internationally accepted probability of recurrence" is likely the 10% in 50 year level, which corresponds to a return period of 475 years. This is very similar to the 450 year return period for which the NZS 1992 spectrum was developed. Matuschka et al. [1985] also computed spectra at other return periods and presented maps of 5% damped spectral accelerations (at a period of 0.2 sec) for return periods of 50, 150, 450 and 1000 years. An examination of the 150 and 450 year maps shows that the ratio of the 150 year to 450 year spectral accelerations is in the neighbourhood of 0.6 to 0.7. Therefore the use of an Sp factor of 0.67 has the effect of changing the spectral accelerations to a 150 year recurrence interval; based on discussions with several persons involved in preparing this code, the author understands this to be the approximate return period associated with seismic hazard in the previous edition of NZS (1984). Consequently, one can interpret King's comments to mean that the performance factor is really intended to maintain the seismic design forces at more or less the same level. King shows comparisons of NZS 1984 and 1992 design coefficients which indicate this to be the case for fully ductile structures of medium period (0.5 to 1.5 sec) on intermediate soil sites located in high seismicity zones.
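The equivalence between Sp = 0.67 and a shift to roughly a 150 year return period can be checked with a simple power-law hazard model, Sa proportional to TR**k. The model and the numbers below are illustrative assumptions, not the hazard model used by Matuschka et al.:

```python
import math

def implied_exponent(ratio, tr_low=150.0, tr_high=450.0):
    """Exponent k of an assumed power-law hazard Sa ~ TR**k that reproduces
    a given ratio Sa(tr_low)/Sa(tr_high)."""
    return math.log(ratio) / math.log(tr_low / tr_high)

def equivalent_return_period(sp, tr=450.0, k=0.36):
    """Return period whose power-law spectral acceleration equals sp times
    the tr-year value: sp = (TR/tr)**k  =>  TR = tr * sp**(1/k)."""
    return tr * sp ** (1.0 / k)

k = implied_exponent(0.67)                   # about 0.36 for the mid-range ratio
tr_eq = equivalent_return_period(0.67, k=k)  # 150 years, by construction
```

With the quoted ratio range of 0.6 to 0.7, k lies between roughly 0.32 and 0.47, and a factor of 0.67 applied to the 450 year spectrum corresponds to a return period in the neighbourhood of 150 years, supporting the interpretation above.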

Basis for Choice of Level of Ground Motion
From the foregoing historical discussion, it is clear that design force levels have been developed and maintained primarily on the basis of engineering judgement concerning adequate performance and bear relatively little relationship to anticipated levels of ground motion, as determined from a seismic hazard analysis. This being the case, one can ask why there should be any concern about the actual level of ground motions which are referenced in design force specifications. Indeed, it seems to be the propensity of engineering practitioners and researchers to seek a rational basis for design forces which has driven the process of making more explicit use of ground motion information, rather than any real inherent need for that information.
However, in defence of the trends which have been developed, there are several significant arguments for using ground motion at an appropriate probability in the formulation of seismic design forces. First, while the absolute level of those motions may be subject to adjustment, it is important that the geographical distribution of hazard within any country be realistic in order to provide consistent levels of protection throughout the country. Also, the shape of the design spectrum with respect to period needs to be realistic so as to provide consistency among structures of different periods. Both geographical distribution and spectral shape are a function of the probability level at which the hazard is computed, since different kinds of earthquakes (i.e. different combinations of magnitudes and distances) govern hazard at different probability levels.
Second, most codes now permit seismic design forces to be determined by means other than the equivalent static method (e.g. modal dynamic analysis or numerical integration time-history analysis) involving base shear coefficients which are based on hazard spectra or values of peak ground motion. Indeed, for some conditions, such alternate methods have become mandatory. For example, NZS 1992 requires the use of one of these alternate methods depending upon the height, period range and regularity of the structure. Consequently, since these alternate methods make direct use of ground motion information in a rational manner, it is important that the levels of those motions be somewhat realistic.
Of less significance is the argument that code design levels, expressed in terms of ground motions, are frequently used by other jurisdictions (e.g. manufacturers of equipment which is located within buildings) as the basis for a direct rational determination of seismic forces for purposes of design. Such alternative application of code ground motion information is quite likely if there is a general belief that the code design forces represent those which can be expected at the design levels of ground motion. If the code ground motion levels are inappropriate, particularly if they are too low, then this can cause serious problems. With reference to Table 7, this is one of the reasons why the annual probability of exceedance was decreased from 0.01 to 0.0021 when new hazard maps were computed for the 1985 edition of NBCC.
With reference to the selection of the actual level of ground motion to be used in determining seismic design forces, there is no strong argument for any particular probability level. The annual probability of exceedance should not be so low that the hazard is dominated largely by infrequent large magnitude events, since this would distort the geographical distribution of hazard significantly. The use of a probability level which is too high will result in low ground motions; this would have implications for the credibility of earthquake design provisions, since many engineers still believe in a direct linkage between the levels of motion and design forces. As indicated previously, the 10% in 50 year probability of exceedance is becoming an internationally accepted reference level which is now being used by a number of countries [McGuire 1993]. The choice of a level which is in common use internationally is far more than a convenience, since it enables comparisons of design practice and performance during earthquakes to be made with some common understanding of the equivalence of seismic hazard.

ROLE OF DEFORMATIONS IN SEISMIC DESIGN
While loading codes specify design forces, the demand imposed by an earthquake on a structure is really a displacement or deformation demand, which is affected by the structure's strength and force-displacement characteristics. Engineers have found forces, stresses and stress resultants to be a convenient system with which to characterize the ability of a structure to withstand load, but this approach often obscures important behavioural features, particularly when inelastic behaviour is involved. Seismic design is normally based on a force/stress resultant assessment, but the ability of a structure to survive strong seismic ground motions is governed by the deformations which it is forced to sustain. There is a need for a fresh design approach which is based more heavily on deformation-based performance than on evaluating load-carrying capacity.
The above statements should not be interpreted to mean that displacements play no role in current design practice. Many codes require the determination of lateral deflections at the serviceability and ultimate limit states and place a restriction on the maximum acceptable values of those deflections. However, when the equivalent static method is used, these deflections are normally determined by multiplying the elastic deflections by the applicable structural ductility factor. In general, the actual capability of the structure to sustain those deflections is not evaluated. As somewhat of an exception, it is noted that under certain conditions NZS 1992 requires a rational analysis for P-delta effects which takes into account the ductility demand required in the structure. The main point being made here is that deflections are usually considered as a requirement to be checked rather than being at the core of the design process. The fact that this continues to be the case is likely because codes continue to be primarily prescriptive-oriented rather than performance-oriented.
The so-called "push-over" analysis has been developed recently as a means of evaluating the lateral force resisting capacity of a building.
A push-over analysis involves applying a monotonically increasing static lateral load which has the same distribution as the seismic forces used in design. The resulting force-displacement curve, which includes inelastic deformation, is a simplified representation of the capacity of the structure to sustain high levels of deformation. It provides information as to both the actual strength and the amount of lateral deflection which can be sustained. This information can be compared with the design displacement and strength requirements to evaluate the acceptability of the building. It can also be used to evaluate the performance of a building relative to the demand imposed by a particular earthquake, e.g. using the capacity spectrum method [Mahaney et al., 1993].
It would appear that the push-over analysis approach has the potential to be developed for use as an alternate seismic design method which places the emphasis on the capacity of the structure to sustain earthquake-induced deformations. Such an approach should be as simple as possible, so that it is particularly suitable for seismic design in regions of low to moderate seismicity.
While the author has not had the time to develop these ideas to any great depth, the following process could well form the basis for such an alternate design approach:
1. Proportion members on the basis of elastic forces corresponding to ground motions at a fairly high annual probability of exceedance, e.g. 0.01. Determination of elastic deflections at this force level would ensure that the structure would remain serviceable.
2. Conduct a push-over analysis, including P-delta effects, by progressively increasing lateral deflection up to a total structural drift of approximately 2%. If the structure can sustain that degree of lateral deflection without significant loss of strength, then it would be deemed satisfactory in terms of life-safety protection against strong earthquake ground motions with low annual probabilities of exceedance. Ductile structures designed with a specific energy dissipating mechanism should have little or no difficulty in meeting this requirement. Structures of limited ductility might well have to be re-proportioned to provide higher strengths in order to comply.
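The two steps above can be sketched for an idealized shear building pushed under a fixed lateral load pattern. Everything in this sketch is an illustrative assumption: elastic-perfectly-plastic storeys, an inverted-triangle load pattern, and the omission of P-delta effects and strength degradation (which a real check under step 2 would need to include):

```python
def pushover(storey_k, storey_vy, pattern, roof_target):
    """Displacement-controlled push-over of an elastic-perfectly-plastic
    shear building.  storey_k: storey stiffnesses (kN/m, bottom first);
    storey_vy: storey yield shears (kN); pattern: lateral load pattern,
    one entry per floor (roof last).  Returns the bilinear capacity curve
    as (roof displacements in m, base shears in kN)."""
    n = len(storey_k)
    # shear carried by storey i under a unit load factor
    cum = [sum(pattern[i:]) for i in range(n)]
    # the load factor at which the weakest storey yields caps the base shear
    lam = min(vy / c for vy, c in zip(storey_vy, cum))
    roof_yield = sum(lam * c / k for c, k in zip(cum, storey_k))
    cap = lam * cum[0]
    disp, shear = [0.0, roof_yield], [0.0, cap]
    if roof_target > roof_yield:  # plastic plateau (no degradation modelled)
        disp.append(roof_target)
        shear.append(cap)
    return disp, shear

# Three 3.5 m storeys pushed to 2% of the 10.5 m height:
d, v = pushover([8e4, 8e4, 8e4], [400.0, 300.0, 150.0],
                [1.0, 2.0, 3.0], roof_target=0.02 * 10.5)
# For this non-degrading model the strength at 2% total drift equals the
# capacity, so the step-2 life-safety criterion would be satisfied.
```

A realistic implementation would track member hysteresis and the P-delta reduction in lateral resistance; the point here is only the shape of the procedure.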
While an approach based on the above would be intended primarily for structures designed in low to moderate seismicity regions, it could also be modified to be acceptable as an alternate method in high seismicity regions. A requirement that structures be ductile and be detailed in accordance with capacity design requirements (e.g. as defined in section 2.5.4.6 of NZS 1992) would likely be sufficient to ensure adequate life-safety protection in high seismicity regions.
A major advantage of a displacement-oriented approach as outlined above is that it focuses on the performance of a structure, namely its lateral deflection during an earthquake. As such, the approach would identify structures which are sensitive to seismic motions (e.g. soft storey structures) and those which have limited ability to survive large ground motions (e.g. structures with very limited ductility capacity). Such ill-conditioned structures would then not be built because they would not meet performance requirements, rather than because of any particular prohibition.

PERFORMANCE EXPECTATIONS OF STRUCTURES DESIGNED ACCORDING TO CURRENT CODES
One of the questions which is faced frequently by those involved in developing seismic loading and design requirements concerns the performance which can be expected from buildings designed in accordance with those requirements. Statements in code commentaries are typically very general and not particularly enlightening. For example, the commentary associated with the 1990 edition of NBCC contains the following statement: "The earthquake-resistant design requirements of the National Building Code of Canada 1990 provide an acceptable level of public safety, which is achieved by designing to prevent major failure and loss of life. Structures designed in conformance with these provisions should be able to resist moderate earthquakes without significant damage and major earthquakes without collapse." In connection with the above statement, collapse is defined as the state at which exit of the occupants from the building becomes impossible because of failure of the primary structure. However, no guidance is given concerning the meaning of moderate or major earthquakes in relation to code design ground motion levels.
While it is not feasible to be very definitive in terms of performance expectations, both designers and the general public should have a better sense of what can be expected than is given by the kinds of statement illustrated above. The discussion and perspectives presented in this section of the paper are intended to shed some light on this matter. The author would like to acknowledge particularly the insights of Tom Paulay [1995], which are especially valuable because of his lengthy experience in earthquake analysis and design.
First, it is important to identify and discuss a number of features which have an impact on the seismic performance of structures:
1. There is an important distinction between ductile structures whose elements have the capacity to sustain a number of cycles of large inelastic deformation and those whose ductility capacity is very limited, including brittle structures (e.g. unreinforced masonry) which can sustain only elastic deformations.
2. Another important distinction is between well-conditioned structures, which have been designed so that energy-dissipation is distributed throughout the structure, and ill-conditioned or sensitive structures, which will develop a collapse mechanism with very limited energy-dissipation. A structure with a soft storey is an example of such an ill-conditioned structure, even though its elements may be highly ductile.
3. It is also helpful to distinguish between structures whose design is earthquake-dominated and those whose design is dominated by gravity-induced live and dead loads. The latter is more likely to occur in regions of low seismicity, whereas designs in regions of high seismicity are likely to be governed by seismic considerations. The significance of P-delta effects arising during large lateral deflections is directly related to whether the design is gravity or earthquake-dominated.
4. While code provisions require that design be based on the "dependable" strength of a structure, the actual performance will be determined by its "probable" strength. The ratio of probable to dependable strength can be termed "overstrength". For the example of a ductile multi-storey frame, when the recommended code values of strength reduction factors and the probable strength of materials are taken into account, the minimum overstrength is 1.5 [Paulay, 1995]. It may well be considerably larger, depending upon the type of structure and the design approach.
5. Ductile member and joint details, when designed and constructed in accordance with current material standards, normally have a generous reserve curvature or rotation capability beyond that required to develop the structural ductility factors used in design.
6. Structures responding inelastically during an earthquake will soften as ductility demand increases; this softening increases the natural period and, in general, results in a decrease in demand because spectral acceleration ordinates decrease with increasing period. One important exception would be structures located on soft soils, for which the period increase due to softening may result in larger spectral accelerations.
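Item 6 can be illustrated with a secant-stiffness estimate of the softened period, T_eff = T * sqrt(mu) for an elastic-perfectly-plastic system, together with a schematic firm-ground spectrum that is flat to a corner period and decays as 1/T beyond it. Both the period rule and the spectral shape are illustrative assumptions:

```python
import math

def effective_period(T, mu):
    """Secant-stiffness period of an elastic-perfectly-plastic system at
    displacement ductility mu: the secant stiffness is k/mu, so the
    effective period is T * sqrt(mu)."""
    return T * math.sqrt(mu)

def sa_firm(T, plateau=1.0, corner=0.5):
    """Schematic firm-ground design spectrum: constant plateau up to the
    corner period, then decaying as 1/T."""
    return plateau if T <= corner else plateau * corner / T

sa_initial = sa_firm(1.0)                          # 0.5 at T = 1.0 s
sa_soft = sa_firm(effective_period(1.0, mu=4.0))   # 0.25 at T = 2.0 s
# On firm ground the softened demand drops (0.25 vs 0.5 here); on a soft
# soil site whose spectrum peaks near the lengthened period, the demand
# can instead increase, as noted in the text.
```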
Given the above considerations, it is assumed that a well-designed structure is a well-conditioned ductile structure designed and detailed in accordance with current material standards. Such structures should sustain little or no structural damage at ground motion levels which are one-half to two-thirds of the nominal design level. Even as design level ground motions are approached, damage should be minimal and repairable, both because of the inherent overstrength and because of the distribution of energy-dissipation throughout the structure.
Given the large degree of uncertainty associated with estimates of design ground motion, it is important to consider the performance of such a structure at ground motions which are well above the design level, perhaps two to three times that level.
Motion levels experienced in the 1994 Northridge earthquake and, more recently, in the 1995 Hyogo-ken Nanbu earthquake were certainly of that order, and sometimes higher.
The first observation is that a ground motion of the order of three times the design motion will result in a substantial lateral displacement, with a top deflection of perhaps 2% of the height of the structure. The concern is whether or not the structure can sustain such deformations without collapsing. Consider first the case in which the structural design is earthquake-dominated. This is important because the P-delta effect will be less significant in reducing the lateral-load carrying capacity during such large lateral displacements. Consequently, even though the total ductility demand may be somewhat beyond the structural ductility factor assumed in design, the distributed energy-dissipation combined with system softening and reserve deformation capacity in the joints and members should prevent collapse even at such high levels of ground motion. Nevertheless, such structures would likely be damaged beyond repair and be candidates for demolition immediately after an earthquake which produced such large motions.
However, if the quality of construction were not as high as expected, or if errors had been made in some aspect of the design (e.g. improper connection detailing), then such features would have a significant effect on its capacity and it would likely collapse at lower ground motion levels. A review of the collapses in recent earthquakes indicates that the large majority of these were due to such errors rather than to the ground motion demand being substantially higher than the design level.
If the well-designed structure is dominated by gravity loads, then the situation is somewhat different. The P-delta effect is proportionately much larger when member strengths are controlled by vertical loads. Such structures are extremely sensitive to excessive lateral displacements. They would have very little chance of surviving ground motions two to three times the design level. Collapse would occur because the large imposed lateral displacements would dramatically reduce the lateral force resistance due to P-delta effects.
Somewhat of a caveat needs to be applied to the above arguments for relatively stiff short period structures, for the following reasons. The highest spectral accelerations occur in the short period region; such structures will attract the seismic load quickly, and their behaviour will be quite dependent upon the ability of the structure to redistribute this high demand quickly so that there is time for the demand-reducing softening to take place. Wall structures are particularly sensitive since they are inherently less well-conditioned than frame structures in terms of being able to distribute energy-dissipation. Another related aspect is that most codes continue to use significant inelastic force-reduction in the design of short-period structures. On the other hand, the nature of many short-period structures is such that they often have unavoidable inherent overstrength. For example, walls whose primary role is to carry vertical loads also have substantial lateral stiffness and strength. However, the presence of such overstrength may well reduce the need to design for a high ductility capacity, which would decrease the performance capability of such a structure.
In general, it would be expected that short-period structures would remain serviceable at relatively high levels of ground motion, perhaps up to the nominal design level. Such stiff structures are not likely to have problems of excessive displacement provided that the strength is adequate. However, the performance of such structures at very high levels of ground motion is extremely sensitive to short high peaks in the earthquake accelerogram as well as to the available energy-dissipating mechanism.
It is also important to address the performance expectations of structures with very limited ductility, i.e. a design structural ductility factor of about 2. Assuming that the design of such structures is otherwise satisfactory, the primary difference between these structures and more ductile structures is that they have substantially less reserve to accommodate the uncertainties in ground motion. Even if such structures are well-conditioned, the detailing which is associated with limited ductility means that the members and joints have very little reserve deformation capability prior to substantial loss of strength. Consequently, while such structures may well remain serviceable at higher ground motion levels than ductile structures, they are much more likely to collapse when ground motions are two to three times the design level.
Given that NZS 1992 includes the category of elastically responding structures (with design structural ductility factors of 1 or 1.25), it is also useful to address the performance potential of those structures. These structures are in fact the extreme case of the structures of limited ductility described above. They will remain serviceable up to and slightly beyond design ground motions but are susceptible to collapse as soon as motions cause any inelastic deformation. Given the uncertainty associated with prediction of ground motions and the significant likelihood that actual motions will be above those predicted by seismic hazard studies, the use of elastically responding structures in regions of moderate to high seismicity is not recommended.
The role of capacity design is extremely significant in enabling structures to perform well at unexpectedly high levels of ground motion. Capacity design ensures that the structure has a defined failure mechanism and that the designer has proportioned and detailed the structure in order to force it to deform in accordance with that mechanism. In other words, the behaviour of the structure is predictable. Consequently, the accidental overloading of key elements will not occur and the structure will be in a position to sustain deformations well beyond the design level. NZS 1992 requires capacity design for all ductile structures; for structures of limited ductility, it is required only if specified by the appropriate material standard.

COMPARISON OF HAZARD AND DESIGN FORCES, NZS 1992 AND NBCC 1995
Since this paper has already made a number of references to seismic hazard and seismic design in both New Zealand and Canada, it is useful to conclude by making some specific comparisons of both seismic hazard and seismic design forces. The respective seismic hazard methodologies are, for New Zealand, that given by Matuschka et al. [1985] and, for Canada, the new preliminary seismic hazard results given by Adams et al. [1995] and Heidebrecht and Naumoski [1995]. The comparison of design forces is between those prescribed by NZS 1992 and by NBCC 1995. It should be noted that the NBCC 1995 design forces are still based on the 1985 seismic hazard methodology, so they are not compatible with the new seismic hazard results. Hazard and design force determinations in New Zealand are for Wellington and Christchurch; in Canada they are for Victoria and Vancouver. In each country these locations represent high and intermediate seismic hazard respectively.
With reference to seismic hazard, Figure 10 shows the uniform hazard spectra for these four cities. The New Zealand spectra are for site category (b), intermediate soil sites, which is equivalent to Borcherdt category C (Table 5). The New Zealand spectra comprise the recommended normalized spectrum proposed by Matuschka et al., multiplied by the zone factors given in NZS 1992 for Wellington and Christchurch (1.2 and 0.8 respectively), to provide a return period of 450 years.
In accordance with the description of the new Canadian seismic hazard methodology given earlier in this paper, the Vancouver and Victoria spectra comprise the maximum spectral ordinates from the H and R models, both determined at the 50% confidence level for a 10% in 50 year probability of exceedance (equivalent to a return period of 475 years). These ordinates were originally determined for BJF category B, which corresponds to Borcherdt category B. For this comparison they were converted to Borcherdt category C by using the relative (B to C) amplification factors in Table 5. Fa was used for periods of 0.2 s and shorter, while Fv was used for periods of 0.5 s and longer; interpolation was used to determine factors for intermediate periods.
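As a rough sketch of that conversion step, the period-dependent B-to-C adjustment can be coded as follows. Note that the Fa and Fv values used here are placeholders, not the actual Table 5 factors:

```python
# Sketch of converting spectral ordinates from Borcherdt category B to C.
# FA_BC and FV_BC are HYPOTHETICAL relative (B to C) amplification factors;
# the actual values would come from Table 5 of the paper.
FA_BC = 1.3  # placeholder short-period (Fa) factor
FV_BC = 1.5  # placeholder medium/long-period (Fv) factor

def b_to_c_factor(period):
    """Return the B-to-C amplification factor at a given period (s).

    Fa applies for T <= 0.2 s and Fv for T >= 0.5 s, with linear
    interpolation for intermediate periods, as described in the text.
    """
    if period <= 0.2:
        return FA_BC
    if period >= 0.5:
        return FV_BC
    frac = (period - 0.2) / (0.5 - 0.2)
    return FA_BC + frac * (FV_BC - FA_BC)

def convert_spectrum(periods, ordinates_b):
    """Convert category-B spectral ordinates to category C, point by point."""
    return [sa * b_to_c_factor(t) for t, sa in zip(periods, ordinates_b)]
```

The same interpolation pattern applies regardless of the actual factor values substituted for the placeholders.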
The 50% confidence level is used for this comparison because these values are not affected significantly by epistemic uncertainty, which has not been included in the New Zealand hazard determination.
However, as noted in the earlier discussion of uncertainty, epistemic uncertainty is important and should be included in estimating ground motions for seismic design. In this instance, 84% confidence values for Vancouver and Victoria would be approximately twice those at the 50% level shown in the figure. A comparison of the curves in Figure 10 shows that the hazard in Christchurch is quite comparable to that in Victoria. Both locations have essentially the same hazard in the very short period region (i.e. T ≤ 0.2 s) and also at T = 2 s. In the region between those two extremes, the hazard in Christchurch is somewhat higher than in Victoria; this difference may well arise because hazard in all New Zealand locations is represented by a single spectral shape. While Victoria is the Canadian city having the highest level of seismic hazard, the highest hazard in New Zealand (at Wellington) is about 50% higher than that in Victoria. A recent comparison of Canadian and U.S. seismic code forces [Naumoski and Heidebrecht, 1995] suggests that the hazard in Victoria is comparable to that in Seattle. The seismic zoning map of the Uniform Building Code (UBC) places the Seattle hazard one zone below that in San Francisco. This would suggest that hazard in Wellington is likely to be comparable, in a very approximate sense, with that in San Francisco.
Figure 11 shows the elastic base shear coefficients for structures of normal importance located in the same four cities, all calculated for rock or stiff soil sites. The calculations for New Zealand are as prescribed in NZS 1992, including the truncation of the spectrum in the low period region and the application of the structural performance factor Sp.
For the Canadian locations, the elastic base shear coefficient is as given in Eq. 2.
Given the approximate equivalence of hazard between Christchurch and Victoria, the results in this figure indicate that the Canadian elastic base shear coefficients are quite conservative. However, the actual difference is not quite as great as it appears, since the Canadian values do not include the calibration factor U (equal to 0.6), which would be applied in the design of all structures, even those with very limited ductility. Nevertheless, even with that adjustment, elastic design forces in Victoria are about twice those in Christchurch in the short period region and about 50% larger in the medium to long period region.
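The effect of the calibration factor on the apparent difference can be illustrated with a small calculation. The elastic coefficients below are hypothetical placeholders, not values read from Figure 11; only U = 0.6 comes from the text:

```python
# Illustrative effect of the NBCC 1995 calibration factor U on the comparison.
# The two elastic coefficients are hypothetical, chosen only to show the scale
# of the adjustment.
U = 0.6
ce_victoria = 1.0        # hypothetical elastic base shear coefficient
ce_christchurch = 0.3    # hypothetical NZ coefficient at the same period

raw_ratio = ce_victoria / ce_christchurch             # apparent difference
adjusted_ratio = (ce_victoria * U) / ce_christchurch  # after applying U
```

With these placeholder numbers the apparent ratio of about 3.3 reduces to 2 once U is applied, consistent with the "about twice" comparison above.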
It is more informative to make comparisons among ductile structural systems, since, as indicated above, elastically responding structures are not expected to give good performance in zones of high seismicity. Figure 12 shows the comparable base shear coefficients for ductile reinforced concrete frames. Given the earlier discussion of seismic hazard equivalence, this figure also includes San Francisco, using the specifications in the 1991 edition of the UBC.
In NZS 1992 the structural ductility factor for ductile reinforced concrete frames is 6; the applicable force reduction factor in NBCC 1995 is 4. The equivalent UBC numerical coefficient Rw is 12.
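These factors imply different mappings from elastic to design-level coefficients, which can be sketched as follows. This is a simplified illustration: the elastic coefficient is hypothetical, and the calculation deliberately ignores code-specific details such as the Sp factor, the U calibration factor, importance factors, and the fact that the UBC Rw applies on a working-stress basis:

```python
# Simplified reduction of an elastic base shear coefficient to a design-level
# coefficient using each code's force-reduction factor (illustrative only).
REDUCTION = {
    "NZS 1992 (mu = 6)": 6.0,
    "NBCC 1995 (R = 4)": 4.0,
    "UBC 1991 (Rw = 12)": 12.0,
}

def design_coefficient(elastic_coeff, code):
    """Divide the elastic coefficient by the code's reduction factor."""
    return elastic_coeff / REDUCTION[code]

# Example with a hypothetical elastic coefficient of 0.8:
for code in REDUCTION:
    print(code, round(design_coefficient(0.8, code), 3))
```

Because the three factors are defined on different bases, the resulting numbers are not directly comparable between codes; the sketch only shows the mechanics of the reduction.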
Considering first the comparison between Christchurch and Victoria, the differences continue to be substantial (about a factor of 2) throughout the period range. Even the Wellington coefficient is well below that in Victoria, except for a small region in the neighbourhood of T = 0.5 s.
The comparison between Wellington and San Francisco is more consistent. The short period plateaus in each case have comparable values, with that for San Francisco being slightly higher than that for Wellington. The shapes in the medium to long period region are quite different, but the curves cross at a period of approximately 0.6 s. While the proportional differences become larger in the long period region, the actual differences between the coefficients are less than 0.02. Given the approximate nature of seismic design forces, it can be considered that the coefficients for Wellington and San Francisco are comparable, which is also consistent with the earlier discussion of seismic hazard equivalence. As for the historically high Canadian seismic design force levels, this difference can be traced back to the assertion, in the 1953 NBCC provisions, that the highest Canadian seismicity is comparable to that in California. Victoria is the most seismic urban area in Canada, but as has already been noted, recent comparisons place its seismic hazard somewhat below that of San Francisco. This would imply that, historically, Canadian seismic force levels have been too high; the comparison in this paper is consistent with that implication.
However, design forces are only one element of seismic design and may not even be the primary consideration in terms of the ability of a structure to resist strong earthquake ground motions.
It is the overall level of protection provided by the structure as designed and built which is important. Some considerations concerning the level of protection are presented in the following section of this paper.

LEVEL OF PROTECTION

Concept
While the previous section has compared design forces as prescribed in NBCC 1995 and NZS 1992, other features (such as those outlined in the section on performance expectations) have a greater impact on the level of protection than the prescribed levels of design forces. In this context, level of protection is defined in terms of the performance achieved relative to stated objectives (e.g. life safety, damageability, and functionality) in relation to the anticipated level of seismic hazard.
Statements of absolute level of protection are not very meaningful since these depend very much on societal expectations, which vary both within and between different jurisdictions. Comparative statements are more helpful since they provide information about relative performance expectations.
One type of comparison involves comparing protection against earthquakes with protection against other hazards, e.g. strong winds. Probabilistic risk analyses can help to determine whether the risk due to one type of hazard is comparable with that due to another type of hazard.
Another, and more immediately useful, kind of comparison concerns the seismic performance provided by codes and design practices in similar socio-economic jurisdictions. This kind of comparison is helpful because it enables the experience gained from earthquake damage in one country to be used in evaluating the level of protection in another country. Further discussion in this paper will focus on this second kind of comparison, which also has the advantage that some useful information can be obtained by direct comparison of design forces, assuming that other aspects of the design process (e.g. detailing and quality of construction) are comparable.

Equivalence of Seismic Hazard
Meaningful comparisons of level of protection for different geographical locations require that there be equivalence of seismic hazard between the jurisdictions being compared.
Without knowing that such an equivalence exists, comparisons of performance are meaningless.
There is a significant problem in determining whether hazard is equivalent because most code zoning maps are based on different hazard methodologies. Consequently, one cannot simply compare code hazard values, even if similar quantities (e.g. PGA) have been mapped. The sources of significant differences in methodology include: probability level, basis for source zoning, ground motion relations, how modelling uncertainty is taken into account (e.g. basic Cornell-McGuire vs logic tree approach), and reference ground conditions. Ideally, seismic hazard equivalence should be based on detailed seismo-tectonic comparisons, e.g. type of faulting (e.g. subduction), recurrence rates or characteristic intervals, ages of geological formations, and fault slip or seismic moment rates. Practically, however, it is usually not feasible to identify locations which are fully comparable. It is therefore necessary to identify as many comparable characteristics as possible, and then to make adjustments which account for differences (e.g. same ground conditions).
The earlier comparison of hazard results between Canada and New Zealand illustrates the difficulties in determining seismic hazard equivalence. The hazard results (Figure 10) for both countries are based on the Cornell-McGuire methodology and are presented in terms of uniform hazard spectra at approximately the same probability level. However, differences in the ground motion relations, the treatment of uncertainty and the site condition definitions all make it difficult to determine, with any degree of precision, locations which can be considered equivalent in terms of seismic hazard.

Issues in Comparing Performance
Assuming that locations with equivalent seismic hazard can be identified, the following issues are important in considering performance comparisons:
a. translation of hazard to design forces (e.g. loading combinations, load factor used for the ultimate limit state, variation of design force with height, and provisions for torsional effects)
b. design practices (e.g. whether or not capacity design is required)
c. detailing practices (e.g. material code requirements)
d. quality of construction
e. performance measures (e.g. damage indices, lateral drift and peak floor accelerations)
Considerable research incorporating the above issues is required in order to conduct realistic assessments of the performance of structures built in accordance with current codes and standards. With reference to the NBCC seismic provisions, CANCEE has recognized the need for an overall evaluation of the level of protection prior to the introduction of any further major changes in those provisions. CANCEE has given strong endorsement to a research project on the evaluation of the level of protection which has been initiated by the author.

CONCLUSIONS
The following are the primary conclusions which can be drawn from the various aspects of seismic design discussed in this paper:
1. Ground motion estimates determined from seismic hazard analyses are subject to a very high degree of uncertainty, much higher than that associated with design load parameters.
2. One single hazard parameter is insufficient to describe the damage potential of strong seismic ground motions. If peak ground motions are the basis for design, then both peak ground velocity and peak ground acceleration should be used.
For uniform hazard spectra, ordinates at both short and medium/long periods should be used rather than a spectrum with a standard shape scaled at one particular period.
3. The differences in design forces due to different site conditions can be very substantial, ranging from a factor of about 2 in the short period range to over 4 in the medium/long period range.
4. There is a real need to develop seismic design procedures which are based on deformations rather than forces.
5. Structures which are well designed can be expected to perform well during ground motions which are well in excess of design levels, provided that the quality of construction is good and that there are no "errors" in design or detailing which would dramatically impact the deformation capacity of the structure.

FIGURE 5. Portion of NBCC 1995 Velocity Zoning Map, showing hazard in eastern Canada.

FIGURE 6. Uniform hazard spectra for H and R models at 84% confidence level for Montreal and Quebec.

FIGURE 8. Response Spectra (5% damping) for Ensembles with Different A/V Ratios: a) Scaled to Peak Ground Acceleration and b) Scaled to Peak Velocity.

For comparison, it is interesting to examine the elastic spectra specified for the three site categories in NZS 1992, i.e. (a) rock or very stiff soil, (b) intermediate soil, and (c) flexible or deep soil. Table 4 indicates the approximate amplification factors for categories (b) and (c) relative to rock or very stiff soil in the short and long period ranges; the spectra include a smooth transition of amplification factors between those two period ranges.

It is reported that the ground motion relations used in developing the 1985 maps (which are still in use in NBCC 1995) are equivalent to BJF category B, i.e. β between 360 and 750 m/s. Given the site class definitions in Table 5, this means that the ground condition for the rock and stiff soil category (F = 1) in NBCC 1995 corresponds almost exactly to the Borcherdt category SCH and the Martin and Dobry category B. The establishment of a similar equivalence for NZS 1992 is somewhat more difficult. The NZS 1992 Commentary indicates that subsoil category (a) (rock or very stiff soil sites) corresponds to the Katayama Type I ground condition.

TABLE 4. Implicit Site Amplification Factors from NZS 1992 Elastic Design Spectra
Period Range | Category (b) relative to rock or very stiff soil | Category (c) relative to rock or very stiff soil
Short period, T < 0.45 s | 1.2 | 1.5

TABLE 7. History of Application of Seismic Hazard Information in the National Building Code of Canada.