Extreme financial losses that occurred during the 2007-2008 financial crisis reignited questions of whether existing methodologies, which are largely based on the normal distribution, are adequate and suitable for the purpose of risk measurement and management. The major assumptions employed in these frameworks are that financial returns are independently and identically distributed, and follow the normal distribution. However, weaknesses in these methodologies have long been identified in the literature. Firstly, it is now widely accepted that financial returns are not normally distributed; they are asymmetric, skewed, leptokurtic, and fat-tailed. Secondly, it is a known fact that financial returns exhibit volatility clustering, thus the assumption of independence is violated. The combined evidence concerning the stylized facts of financial returns necessitates adapting existing methodologies, or developing new ones, that account for these stylized facts explicitly. In this paper, I discuss two related measures of risk: extreme value-at-risk (EVaR) and extreme conditional value-at-risk (ECVaR). I argue that ECVaR is a better measure of extreme market risk than the EVaR utilized by Kabundi and Mwamba (2009), since it is coherent and captures the effects of extreme market events. In contrast, even though EVaR captures the effect of extreme market events, it is non-coherent.
The major toolkit of Markowitz (1952), Roy (1952), Sharpe (1964), Black and Scholes (1973), and Merton (1973) in the development of modern portfolio theory (MPT) and the field of financial engineering consisted of the means, variances, correlations, and covariances of asset returns. In MPT, the variance, or equivalently the standard deviation, was the measure of risk. A major assumption employed in this theory is that financial asset returns are normally distributed. Under this assumption, extreme market events rarely happen. When they do occur, risk managers can simply treat them as outliers and disregard them when modeling financial asset returns. The assumption of normally distributed asset returns is too simplistic for the financial modeling of extreme market events. During extreme market activity similar to the 2007-2008 financial crisis, financial returns exhibit behavior that is beyond what the normal distribution can model.
Starting with the work of Mandelbrot (1963), there is increasingly convincing empirical evidence that asset returns are not normally distributed. They exhibit asymmetric behavior, 'fat tails', and higher kurtosis than the normal distribution can accommodate. The implication is that extreme negative returns do occur, and are more frequent than predicted by the normal distribution. Therefore, measures of risk based on the normal distribution will underestimate the risk of portfolios and lead to huge financial losses, and potentially to insolvencies of financial institutions. To mitigate the effects of inadequate risk capital buffers stemming from an underestimation of risk by normality-based financial modeling, risk measures such as EVaR that go beyond the assumption of normally distributed returns have been developed. However, EVaR is non-coherent, just like the VaR from which it is developed. The implication is that, even though it captures the effects of extreme market events, it is not a good measure of risk since it does not reflect diversification – a contradiction of one of the cornerstones of portfolio theory. ECVaR naturally overcomes these problems since it is coherent and can capture extreme market events.
The purpose of this paper is to develop extreme conditional value-at-risk (ECVaR), and to propose it as a better measure of risk than EVaR under conditions of extreme market activity, when financial returns exhibit volatility clustering and are not normally distributed. Kabundi and Mwamba (2009) have proposed EVaR as a better measure of extreme risk than the widely used VaR; however, it is non-coherent. ECVaR is coherent and captures the effect of extreme market activity, thus it is better suited to model extreme losses during market turmoil, and it reflects diversification, which is an important requirement for any risk measure in portfolio theory.
RELEVANCE OF THE STUDY
The assumption that financial asset returns are normally distributed understates the possibility of infrequent extreme events whose impact is more detrimental than that of events that are more frequent. The use of VaR and CVaR under this assumption underestimates the riskiness of assets and portfolios and eventually leads to huge losses and bankruptcies during times of extreme market activity. There are many adverse effects of using the normal distribution in the measurement of financial risk, the most visible being the loss of money due to underestimating risk. During the global financial crisis, a number of banks and non-financial institutions suffered huge financial losses; some went bankrupt and failed, partly because of inadequate capital allocation stemming from an underestimation of risk by models that assumed normally distributed returns. Measures of risk that do not assume normality of financial returns have been developed. One such measure is EVaR. EVaR captures the effect of extreme market events; however, it is not coherent. As a result, EVaR is not a good measure of risk since it does not reflect diversification. In financial markets characterized by multiple sources of risk and extreme market volatility, it is important to have a risk measure that is coherent and can capture the effect of extreme market activity. ECVaR, advocated here, fulfills this role of measuring extreme market risk while conforming to portfolio theory's wisdom of diversification.
Chapter 2 will present a literature review of risk measurement methodologies currently used by financial institutions, in particular VaR and CVaR. I also discuss the strengths and weaknesses of these measures. Another risk measure, not widely known thus far, is EVaR. I discuss EVaR as an advancement in risk measurement methodologies, and argue that it is not a good measure of risk since it is non-coherent. This leads to the next chapter, which presents ECVaR as a better risk measure that is coherent and can capture extreme market events.
Chapter 3 will be concerned with extreme conditional value-at-risk (ECVaR) as a convenient modeling framework that naturally overcomes the normality assumption of asset returns in the modeling of extreme market events. This is followed by a comparative analysis of EVaR and ECVaR using financial data covering both the pre-financial crisis and the financial crisis periods.
Chapter 4 will be concerned with data sources, preliminary data description, and the estimation of EVaR, and ECVaR.
Chapter 5 will discuss the empirical results and the implication for risk measurement.
Finally, chapter 6 will give conclusions and highlight directions for future research.
CHAPTER 2: RISK MEASUREMENT AND THE EMPIRICAL DISTRIBUTION OF FINANCIAL RETURNS
Risk Measurement in Finance: A Review of Its Origins
The concept of risk was known for many years before Markowitz's portfolio theory (MPT). Bernoulli (1738) solved the St. Petersburg paradox and derived fundamental insights into risk-averse behavior and the benefits of diversification. In his formulation of expected utility theory, Bernoulli did not define risk explicitly; however, he inferred it from the shape of the utility function. Irving Fisher (1906) suggested the use of variance to measure economic risk. Von Neumann and Morgenstern (1947) used expected utility theory in the analysis of games and consequently deduced much of the modern understanding of decision making under risk or uncertainty. Therefore, contrary to popular belief, the concept of risk was known well before MPT. Even so, Markowitz (1952) was the first to provide a systematic algorithm to measure risk, using the variance in the formulation of the mean-variance model, for which he won the Nobel Prize in 1990. The development of the mean-variance model inspired research in decision making under risk and the development of risk measures. The study of risk and decision making under uncertainty (which is treated the same as risk in most cases) stretches across disciplines. In decision science and psychology, Coombs and Pruitt (1960), Pruitt (1962), Coombs (1964), Coombs and Meyer (1969), and Coombs and Huang (1970a, 1970b) studied the perception of gambles and how preferences among them are affected by their perceived risk. In economics, finance, and measurement theory, Markowitz (1952, 1959), Tobin (1958), Pratt (1964), Pollatsek and Tversky (1970), Luce (1980), and others investigated portfolio selection and the measurement of the risk of portfolios, and of gambles in general. Their collective work produced a number of risk measures that vary in how they rank the riskiness of options, portfolios, or gambles.
Though the risk measures vary, Pollatsek and Tversky (1970) recognize that they share the following:
Risk is regarded as a property of choosing among options.
Options can be meaningfully ordered according to their riskiness.
As suggested by Irving Fisher in 1906, the risk of an option is somehow related to the variance or dispersion in its outcomes. In addition to these basic properties, Markowitz regards risk as a ‘bad’, implying something that is undesirable.
Since Markowitz (1952), many risk measures, such as the semi-variance, the absolute deviation, and the lower semi-variance (see Brachinger and Weber (1997)), were developed; however, the variance continued to dominate empirical finance. It was in the 1990s that a new measure, VaR, was popularised and became the industry standard risk measure. I present this risk measure in the next section.
Definition and concepts
Besides these basic ideas concerning risk measures, there is no universally accepted definition of risk; as a result, risk measures continue to be developed. J.P. Morgan and Reuters (1996) pioneered a major breakthrough in the advancement of risk measurement with the use of value-at-risk (VaR), and the subsequent Basel Committee recommendation that banks could use it for their internal risk management. VaR is concerned with measuring the risk of a financial position due to the uncertainty regarding the future levels of interest rates, stock prices, commodity prices, and exchange rates. The risk resulting from the movement of these market factors is called market risk. VaR is the expected maximum loss of a financial position with a given level of confidence over a specified horizon. VaR provides answers to the question: what is the maximum loss that I can incur over, say, the next ten days with 99 percent confidence? Put differently, what is the maximum loss that will be exceeded only one percent of the time in the next ten days? I illustrate the computation of VaR using one of the available methods, namely parametric VaR. I denote by $r_t$ the rate of return and by $W_t$ the portfolio value at time $t$.
Then the return at time $t$ is given by
$$r_t = \frac{W_t - W_{t-1}}{W_{t-1}}. \quad (1)$$
The actual loss (the negative of the profit) is given by
$$L_t = -(W_t - W_{t-1}) = -W_{t-1} r_t. \quad (2)$$
When $r_t$ is normally distributed with mean $\mu$ and standard deviation $\sigma$ (as is commonly assumed), the variable
$$z = \frac{r_t - \mu}{\sigma} \quad (3)$$
has a standard normal distribution with a mean of zero and a standard deviation of one. The confidence level determines the quantile of interest: at a 99% confidence level,
$$P(z \le -2.33) = 0.01. \quad (4)$$
In (4) we have $-2.33$ as our standardized VaR at a 99% confidence level, and the standardized return will fall below it only 1% of the time. From (4), the 99% confidence VaR, expressed as a positive loss amount, is given by
$$\text{VaR}_{0.99} = -W_{t-1}(\mu - 2.33\sigma). \quad (5)$$
Generalizing from (5), we can state the $\alpha$-quantile VaR of the distribution as
$$\text{VaR}_{\alpha} = -W_{t-1}(\mu + z_{1-\alpha}\sigma), \quad (6)$$
where $z_{1-\alpha}$ is the $(1-\alpha)$-quantile of the standard normal distribution. VaR is an intuitive measure of risk that can be easily implemented, as is evident in its wide use in the industry. However, is it an optimal measure? The next section addresses the limitations of VaR.
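The parametric VaR computation above can be sketched in a few lines; the mean, volatility, and portfolio value below are illustrative assumptions, not figures from this paper.

```python
from scipy.stats import norm

# Parametric (normal) VaR sketch; mu, sigma and W are illustrative values.
mu, sigma = 0.0005, 0.02   # assumed daily mean and volatility of returns
W = 1_000_000              # assumed portfolio value
alpha = 0.99               # confidence level

z = norm.ppf(1 - alpha)            # about -2.33 at the 99% level
var_99 = -W * (mu + z * sigma)     # VaR expressed as a positive loss amount
print(round(var_99, 2))
```

With these inputs the 99% one-day VaR is roughly 4.6% of the portfolio value, driven almost entirely by the volatility term.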
Limitations of VaR
Artzner et al. (1997, 1999) developed a set of axioms such that, if a risk measure satisfies them, that risk measure is 'coherent'. The implication of coherent measures of risk is that "it is not possible to assign a function for measuring risk unless it satisfies these axioms". Risk measures that satisfy these axioms can be considered universal and optimal since they are founded on the same generally accepted mathematical axioms. Let $\rho$ be a risk measure, and let $X$ and $Y$ be two portfolios.
Then, the risk measure is coherent if it satisfies the following axioms:
Monotonicity: if $X \le Y$, then $\rho(X) \ge \rho(Y)$. We interpret the monotonicity axiom to mean that higher losses are associated with higher risk.
Homogeneity: $\rho(\lambda X) = \lambda \rho(X)$ for $\lambda \ge 0$. Assuming that there is no liquidity risk, the homogeneity axiom means that risk is not a function of the quantity of stock purchased; therefore we cannot reduce or increase risk by investing different amounts in the same stock.
Translation invariance: $\rho(X + \alpha r) = \rho(X) - \alpha$, where $r$ is a riskless security and $\alpha$ an amount invested in it. This means that investing an additional amount in a riskless asset reduces risk by that amount with certainty.
Sub-additivity: $\rho(X + Y) \le \rho(X) + \rho(Y)$. Possibly the most important axiom, sub-additivity ensures that a risk measure reflects diversification – the combined risk of two portfolios is no greater than the sum of the risks of the individual portfolios. VaR does not satisfy the most important axiom of sub-additivity, thus it is non-coherent. Moreover, VaR tells us what we can expect to lose if an extreme event does not occur; it does not tell us the extent of the losses we can incur if a "tail" event occurs. VaR is therefore not an optimal measure of risk.
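The failure of sub-additivity can be made concrete with the standard two-loan example (the default probabilities and loss sizes below are illustrative, not from this paper): each loan alone has a 95% VaR of zero, yet the portfolio of both has a positive 95% VaR, so VaR penalizes diversification here.

```python
# VaR of a discrete loss distribution: the smallest loss l with P(L <= l) >= alpha.
def var(dist, alpha):
    cum = 0.0
    for loss, prob in sorted(dist):
        cum += prob
        if cum >= alpha:
            return loss
    return max(loss for loss, _ in dist)

# Each loan loses 100 with probability 0.04, else nothing (illustrative).
loan = [(0, 0.96), (100, 0.04)]
# Two independent such loans combined:
portfolio = [(0, 0.96**2), (100, 2 * 0.96 * 0.04), (200, 0.04**2)]

print(var(loan, 0.95))       # 0: each loan alone looks riskless at the 95% level
print(var(portfolio, 0.95))  # 100: VaR(X + Y) > VaR(X) + VaR(Y) = 0
```

The combined default probability, $1 - 0.96^2 \approx 0.078$, pushes the loss event inside the 5% tail, so the diversified portfolio reports strictly more VaR than the sum of its parts.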
The non-coherence, and therefore non-optimality, of VaR as a measure of risk led to the development of conditional value-at-risk (CVaR) by Artzner et al. (1997, 1999), and Uryasev and Rockafellar (1999). I discuss CVaR in the next section.
CVaR is also known as "expected shortfall" (ES), "tail VaR", or "tail conditional expectation", and it measures risk beyond VaR. Yamai and Yoshiba (2002) define CVaR as the conditional expectation of losses given that the losses exceed VaR. Mathematically, CVaR is given by
$$\text{CVaR}_{\alpha} = E\left[L \mid L \ge \text{VaR}_{\alpha}\right]. \quad (7)$$
CVaR offers more insight concerning risk than VaR in that it tells us what we can expect to lose if the losses exceed VaR. Unfortunately, the finance industry has been slow in adopting CVaR as its preferred risk measure. This is despite the fact that "the actuarial/insurance community has tended to pick up on developments in financial risk management much more quickly than financial risk managers have picked up on developments in actuarial science". Hopefully, the effects of the financial crisis will change this observation.
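A minimal historical-simulation sketch of the two measures (the function and the toy loss history are my own illustration): CVaR averages the losses beyond the VaR quantile, so it always sits at or above VaR.

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Historical-simulation VaR and CVaR; losses are positive numbers."""
    losses = np.sort(np.asarray(losses, dtype=float))
    v = np.quantile(losses, alpha)    # VaR: the alpha-quantile of losses
    c = losses[losses >= v].mean()    # CVaR: the mean loss beyond VaR
    return v, c

# Toy history of 100 daily losses: 1, 2, ..., 100.
v, c = var_cvar(np.arange(1, 101), alpha=0.95)
print(v, c)   # CVaR exceeds VaR, reflecting the tail beyond the quantile
```

On this toy history the 95% VaR interpolates to 95.05, while CVaR averages the five largest losses to 98, illustrating the extra tail information CVaR carries.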
In most applications of VaR and CVaR, returns have been assumed to be normally distributed. However, it is widely accepted that returns are not normally distributed. The implication is that VaR and CVaR as currently used in finance will not capture extreme losses. This will lead to an underestimation of risk and inadequate capital allocation across business units. In times of market stress, when extra capital is required, it will be inadequate. This may lead to the insolvency of financial institutions. Methodologies that can capture extreme events are therefore needed. In the next section, I discuss the empirical evidence on financial returns and thereafter discuss extreme value theory (EVT) as a suitable framework for modeling extreme losses.
The Empirical Distribution of Financial Returns
Back in 1947, Geary wrote, "Normality is a myth; there never was, and never will be, a normal distribution". Today this remark is supported by a voluminous amount of empirical evidence against normally distributed returns; nevertheless, normality continues to be the workhorse of empirical finance. If the normality assumption fails to pass empirical tests, why are practitioners so obsessed with the bell curve? Could their obsession be justified? To uncover some of the possible responses to these questions, let us first look at the importance of being normal, and then look at the dangers of incorrectly assuming normality.
The Importance of Being Normal
The normal distribution is the most widely used distribution in statistical analysis, in all fields that utilize statistics in explaining phenomena. When the normal distribution is assumed for a population, it yields a rich set of mathematical results. In other words, the mathematical representations are tractable and easy to implement. A population can be fully described by its mean and variance when the normal distribution is assumed. The chief advantage is that the modeling process under the normality assumption is very simple. In fields that deal with natural phenomena, such as physics and geology, the normal distribution has unequivocally succeeded in explaining the variables of interest. The same cannot be said of finance. The normal probability distribution has been subjected to rigorous empirical rejection. A number of stylized facts of asset returns, statistical tests of normality, and the occurrence of extreme negative returns dispute the normal distribution as the underlying data-generating process for asset returns. I briefly discuss these empirical findings next.
Deviations From Normality
Ever since Mandelbrot (1963), Fama (1963), Fama (1965) among others, it is a known fact that asset returns are not normally distributed.
The combined empirical evidence since the 1960s points out the following stylized facts of asset returns:
Volatility clustering: periods of high volatility tend to be followed by periods of high volatility, and periods of low volatility tend to be followed by low volatility.
Autoregressive price changes: A price change depends on price changes in the past period.
Skewness: Positive price changes and negative price changes are not of the same magnitude.
Fat-tails: The probabilities of extreme negative (positive) returns are much larger than predicted by the normal distribution.
Time-varying tail thickness: More extreme losses occur during turbulent market activity than during normal market activity.
Frequency-dependent fat-tails: high-frequency data tends to be more fat-tailed than low-frequency data.
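The fat-tail stylized fact above can be checked directly from sample moments. In this sketch, simulated data stand in for actual returns: a Student-t with 10 degrees of freedom serves as a mildly fat-tailed alternative to the normal.

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: near 0 for the normal, positive for fat tails."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

rng = np.random.default_rng(0)
normal = rng.standard_normal(100_000)
fat = rng.standard_t(df=10, size=100_000)  # mildly fat-tailed alternative

print(excess_kurtosis(normal))  # close to 0
print(excess_kurtosis(fat))     # clearly positive: fatter tails than the normal
```

Empirical daily equity returns typically show far larger excess kurtosis than even this t-distribution, which is the point of the stylized facts above.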
In addition to these stylized facts of asset returns, the extreme events of the 1974 German banking crisis, the 1978 banking crisis in Spain, the 1990s Japanese banking crisis, September 2001, and the 2007-2008 US experience (BIS, 2004) could not have happened under the normal distribution. Alternatively, we could just have treated them as outliers and disregarded them; however, experience has shown that even those who are obsessed with the Gaussian distribution could not ignore the detrimental effects of the 2007-2008 global financial crisis. With these empirical facts known to the quantitative finance community, what is the motivation for the continued use of the normality assumption? It could be that those who stick with the normality assumption know only how to deal with normally distributed data. It is their hammer; everything that comes their way seems like a nail! As Esch (2010) notes, even those who do have other tools to deal with non-normal data continue to use the normal distribution on the grounds of parsimony. However, "representativity should not be sacrificed for simplicity". Better modeling frameworks to deal with the extreme values that are characteristic of departures from normality have been developed. Extreme value theory is one such methodology; it has enjoyed success in fields outside finance and has been used to model financial losses with success. In the next chapter, I present extreme value-based methodologies as a practical and better way to overcome non-normality in asset returns.
CHAPTER 3: EXTREME VALUE THEORY: A SUITABLE AND
Extreme Value Theory
Extreme value theory was developed to model extreme natural phenomena such as floods, extreme winds, and temperature, and is well established in fields such as engineering, insurance, and climatology. It provides a convenient way to model the tails of distributions so as to capture non-normal activity. Since it concentrates on the tails of distributions, it has been adopted to model asset returns in times of extreme market activity (see Embrechts et al. (1997); McNeil and Frey (2000); Danielsson and de Vries (2000)). Gilli and Kellezi (2003) point out two related ways of modeling extreme events. The first way describes the maximum loss through a limit distribution known as the generalized extreme value (GEV) distribution, a family of asymptotic distributions that describe normalized maxima or minima. The second way provides an asymptotic distribution that describes the limit distribution of scaled excesses over high thresholds, and is known as the generalized Pareto distribution (GPD). The two limit distributions give rise to two approaches to EVT-based modeling – the block maxima method and the peaks-over-threshold method, respectively.
The Block Maxima Method
Let us consider independent and identically distributed (i.i.d.) random variables $X_1, X_2, \ldots, X_n$ with common distribution function $F$. Let $M_n = \max(X_1, \ldots, X_n)$ be the maximum of the first $n$ random variables, and let $x_F = \sup\{x : F(x) < 1\}$ be the upper end-point of $F$. The corresponding results for minima can be obtained from the identity
$$\min(X_1, \ldots, X_n) = -\max(-X_1, \ldots, -X_n). \quad (8)$$
Since $P(M_n \le x) = F^n(x)$, $M_n$ converges almost surely to $x_F$, whether it is finite or infinite. Following Embrechts et al. (1997), and Shanbhag and Rao (2003), the limit theory finds norming constants $a_n > 0$ and $b_n$, and a non-degenerate distribution function $H$, such that the distribution function of the normalized version of $M_n$ converges to $H$ as follows:
$$P\left(\frac{M_n - b_n}{a_n} \le x\right) = F^n(a_n x + b_n) \to H(x) \quad \text{as } n \to \infty. \quad (9)$$
$H$ is an extreme value distribution function, and $F$ is in the domain of attraction of $H$ (written $F \in D(H)$) if equation (9) holds for suitable values of $a_n$ and $b_n$. It can also be said that two extreme value distribution functions $H_1$ and $H_2$ belong to the same family if $H_2(x) = H_1(ax + b)$ for some $a > 0$, $b$, and all $x$. Fisher and Tippett (1928), De Haan (1970, 1976), Weissman (1978), and Embrechts et al. (1997) show that the limit distribution function $H$ belongs to one of the following three distribution functions, for some $\alpha > 0$:
$$\text{Frechet:} \quad \Phi_\alpha(x) = \begin{cases} 0, & x \le 0, \\ \exp\{-x^{-\alpha}\}, & x > 0, \end{cases} \quad (10)$$
$$\text{Weibull:} \quad \Psi_\alpha(x) = \begin{cases} \exp\{-(-x)^{\alpha}\}, & x \le 0, \\ 1, & x > 0, \end{cases} \quad (11)$$
$$\text{Gumbel:} \quad \Lambda(x) = \exp\{-e^{-x}\}, \quad x \in \mathbb{R}. \quad (12)$$
Any extreme value distribution can be classified as one of these three types. For alternative characterizations of the three distributions, see Nagaraja (1988), and Khan and Beg (1987).
The Generalized Extreme Value Distribution
The three distribution functions given in (10), (11), and (12) above can be combined into one three-parameter distribution called the generalized extreme value (GEV) distribution, given by
$$H_{\xi,\mu,\sigma}(x) = \exp\left\{-\left(1 + \xi\,\frac{x - \mu}{\sigma}\right)^{-1/\xi}\right\}, \quad \text{with } 1 + \xi\,\frac{x - \mu}{\sigma} > 0. \quad (13)$$
We denote the GEV by $H_{\xi,\mu,\sigma}$, and the values $\xi > 0$, $\xi < 0$, and $\xi = 0$ give rise to the three distribution functions in (10)-(12). In equation (13), $\mu$, $\sigma$, and $\xi$ represent the location parameter, the scale parameter, and the tail-shape parameter respectively. $\xi > 0$ corresponds to the Frechet distribution, and $\xi < 0$ corresponds to the Weibull distribution. The case $\xi = 0$ reduces to the Gumbel distribution. To obtain the estimates of $(\mu, \sigma, \xi)$, we use the maximum likelihood method, following Kabundi and Mwamba (2009). To start with, we fit the sample of maximum losses $x_1, \ldots, x_m$ to a GEV. Thereafter, we use the maximum likelihood method to estimate the parameters of the GEV from the logarithmic form of the likelihood function, given by
$$\ln L(\mu, \sigma, \xi) = -m \ln \sigma - \left(1 + \frac{1}{\xi}\right) \sum_{i=1}^{m} \ln\left[1 + \xi\,\frac{x_i - \mu}{\sigma}\right] - \sum_{i=1}^{m} \left[1 + \xi\,\frac{x_i - \mu}{\sigma}\right]^{-1/\xi}. \quad (14)$$
To obtain the estimates $(\hat{\mu}, \hat{\sigma}, \hat{\xi})$, we take partial derivatives of equation (14) with respect to $\mu$, $\sigma$, and $\xi$, and equate them to zero.
The EVaR, defined as the maximum likelihood quantile estimator of the fitted GEV, is by definition obtained by inverting (13):
$$\hat{x}_\alpha = \hat{\mu} - \frac{\hat{\sigma}}{\hat{\xi}}\left[1 - (-\ln \alpha)^{-\hat{\xi}}\right]. \quad (15)$$
The quantity $\hat{x}_\alpha$ is the $\alpha$-quantile of $H_{\hat{\xi},\hat{\mu},\hat{\sigma}}$, and I denote it the alpha-percent VaR, specified as follows, following Kabundi and Mwamba (2009), and Embrechts et al. (1997):
$$\text{EVaR}_\alpha = \hat{x}_\alpha. \quad (16)$$
Even though EVaR captures extreme losses, it inherits non-coherence from the VaR from which it is developed. As such, it cannot be used for the purpose of portfolio optimization, since it does not reflect diversification. To overcome this problem, in the next section I extend CVaR to ECVaR so as to capture extreme losses coherently.
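The block maxima fit and the EVaR quantile can be sketched with scipy, whose genextreme distribution parameterizes the shape as $c = -\xi$. The simulated fat-tailed losses and the 21-day block length below are illustrative assumptions, not this paper's data.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
losses = rng.standard_t(df=4, size=21 * 500)   # simulated fat-tailed daily losses
maxima = losses.reshape(500, 21).max(axis=1)   # monthly (21-day) block maxima

# Maximum likelihood fit of the GEV; scipy's shape parameter c equals -xi.
c, loc, scale = genextreme.fit(maxima)
xi = -c                                        # positive xi: the Frechet (fat-tailed) case

# EVaR at 99%: the 0.99-quantile of the fitted GEV of block maxima.
evar_99 = genextreme.ppf(0.99, c, loc=loc, scale=scale)
```

For Student-t losses with 4 degrees of freedom, the theoretical tail index gives $\xi = 0.25$, so the fitted shape should come out positive, placing the maxima in the Frechet domain of attraction.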
Extreme Conditional Value-at-Risk (ECVaR): An Extreme Coherent Measure of Risk
I extend EVaR to ECVaR in a manner similar to the extension of VaR to CVaR. ECVaR can therefore be expressed as follows:
$$\text{ECVaR}_\alpha = E\left[L \mid L > \text{EVaR}_\alpha\right]. \quad (17)$$
In the following chapter, I describe the data and its sources.
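Given a fitted GEV, the conditional expectation defining ECVaR can be approximated by averaging the quantile function over the tail. The GEV parameters below are hypothetical stand-ins, not fitted values from any data in this paper.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical GEV parameters (scipy's shape c = -xi, so xi = 0.2 here).
c, loc, scale = -0.2, 1.0, 0.5
alpha = 0.99

evar = genextreme.ppf(alpha, c, loc=loc, scale=scale)

# ECVaR = E[L | L > EVaR]: average the quantile function over (alpha, 1).
u = np.linspace(alpha, 1.0, 100_001)[:-1]      # drop u = 1 (infinite quantile)
ecvar = genextreme.ppf(u, c, loc=loc, scale=scale).mean()

print(evar, ecvar)   # ECVaR lies beyond EVaR, as a tail average must
```

Because the quantile function is increasing, this tail average always exceeds the quantile at which the tail starts, which is exactly the sense in which ECVaR looks beyond EVaR.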
CHAPTER 4: DATA DESCRIPTION
I will use the stock market indexes of five advanced economies, comprising the United States, Japan, Germany, France, and the United Kingdom, and five emerging economies, comprising Brazil, Russia, India, China, and South Africa. Possible sources of the data are I-Net Bridge, Bloomberg, and individual country central banks.
CHAPTER 5: DISCUSSION OF EMPIRICAL RESULTS
In this chapter, I will discuss the empirical results. Specifically, the adequacy of ECVaR will be discussed relative to that of EVaR. Implications for risk measurement will also be discussed in this chapter.
CHAPTER 6: CONCLUSIONS
This chapter will give concluding remarks and directions for future research.
Markowitz, H.M.: 1952, Portfolio Selection. Journal of Finance, vol. 7 (1), pp. 77-91.
Roy, A.D.: 1952, Safety First and the Holding of Assets. Econometrica, vol. 20 no 3 p 431-449.
Sharpe, W.F.: 1964, Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. The Journal of Finance, vol. 19 (3), pp. 425-442.
Black, F., and Scholes, M.: 1973, The Pricing of Options and Corporate Liabilities. Journal of Political Economy, vol. 81 (3), pp. 637-654.
Merton, R. C.: 1973, The Theory of Rational Option Pricing. Bell Journal of Economics and Management Science, Spring.
Artzner, Ph., F. Delbaen, J.-M. Eber, And D. Heath .: 1997, Thinking Coherently, Risk 10 (11) 68–71.
Artzner, Ph., Delbaen, F., Eber, J-M., And Heath, D.: 1999, Thinking Coherently. Mathematical Finance, Vol. 9, No. 3 203–228
Bernoulli, D.: 1954, Exposition of a new theory on the measurement of risk, Econometrica 22 (1) 23-36, Translation of a paper originally published in Latin in St. Petersburg in 1738.
Butler, J.C., Dyer, J.S., and Jia, J.: 2005, An Empirical Investigation of the Assumption of Risk –Value Models. Journal of Risk and Uncertainty, vol. 30 (2), pp. 133-156.
Brachinger, H.W., and Weber, M.: 1997, Risk as a primitive: a survey of measures of perceived risk. OR Spektrum, vol. 19, pp. 235-250.
 Fisher, I.: 1906, The nature of Capital and Income. Macmillan.
 von Neumann, J. and Morgenstern, O.: 1947, Theory of games and economic behavior, 2nd ed., Princeton University Press.
Coombs, C.H., and Pruitt D.G.: 1960, Components of Risk in Decision Making: Probability and Variance preferences. Journal of Experimental Psychology, vol. 60 () pp. 265-277.
Pruitt, D.G.: 1962, Pattern and Level of Risk in Gambling Decisions. Psychological Review, vol. 69, pp. 187-201.
Coombs, C.H.: 1964, A Theory of Data. New York: Wiley.
Coombs, C.H., and Meyer, D.E.: 1969, Risk preference in Coin-toss Games. Journal of Mathematical Psychology, vol. 6 () p 514-527.
Coombs, C.H., and Huang, L.C.: 1970a, Polynomial Psychophysics of Risk. Journal of Experimental psychology, vol 7 (), pp. 317-338.
Markowitz, H.M.: 1959, Portfolio Selection: Efficient diversification of Investment. Yale University Press, New Haven, USA.
Tobin, J.: 1958, Liquidity Preference as Behavior Towards Risk. Review of Economic Studies, vol. 25, pp. 65-86.
Pratt, J.W.: 1964, Risk Aversion in the Small and in the Large. Econometrica, vol. 32 () p 122-136.
Pollatsek, A., and Tversky, A.: 1970, A Theory of Risk. Journal of Mathematical Psychology, vol. 7, pp. 540-553.
 Luce, D. R.:1980, Several possible measures of risk. Theory and Decision 12 (no issue) 217-228.
J.P. Morgan and Reuters.: 1996, RiskMetrics Technical document. Available at http://riskmetrics.comrmcovv.html Accessed…
Uryasev, S., and Rockafellar, R.T.: 1999, Optimization of Conditional Value-at-Risk. Available at http://www.gloriamundi.org
Mitra, S.: 2009, Risk measures in Quantitative Finance. Available online. [Accessed…]
Geary, R.C. 1947, Testing for Normality, Biometrika, vol. 34, pp. 209-242.
Mardia, K.V.: 1980, P.R. Krishnaiah, ed., Handbook of Statistics, Vol. 1. North-Holland Publishing Company. Pp. 279-320.
Mandelbrot, B.: 1963, The variation of certain speculative prices. Journal of Business, vol. 26, pp. 394-419.
Fama, E.: 1963, Mandelbrot and the stable Paretian hypothesis. Journal of Business, vol. 36, pp. 420-429.
Fama, E.: 1965, The behavior of stock market prices. Journal of Business, vol. 38, pp. 34-105.
Esch, D.: 2010, Non-Normality facts and fallacies. Journal of Investment Management, vol. 8 (1), pp. 49-61.
 Stoyanov, S.V., Rachev, S., Rachel-Iotova, B., & Fabozzi, F.J.: 2011, Fat-tailed Models for Risk Estimation. Journal of Portfolio Management, vol. 37 (2). Available at http://www.iijournals.com/doi/abs/10.3905/jpm.2011.37.2.107
Embrechts, P., Klüppelberg, C., and Mikosch, T.: 1997, Modelling Extremal Events for Insurance and Finance. Springer.
McNeil, A. and Frey, R.: 2000, Estimation of tail-related risk measures for heteroscedastic financial time series: an extreme value approach, Journal of Empirical Finance, Volume 7, Issues 3-4, 271- 300.
Danielsson, J., and de Vries, C.: 2000, Value-at-Risk and Extreme Returns. Annales d'Economie et de Statistique, vol. 60, pp. 239-270.
Gilli, M., and Kellezi, E.: 2003, An Application of Extreme Value Theory for Measuring Risk. Department of Econometrics, University of Geneva, Switzerland. Available from: http://www.gloriamundi.org/picsresources/mgek.pdf
Shanbhag, D.N., and Rao, C.R.: 2003, Extreme Value Theory, Models, and Simulation. Handbook of Statistics, Vol. 21. Elsevier Science B.V.
Fisher, R. A., and Tippett, L.H.C.: 1928, Limiting forms of the frequency distribution of the largest or smallest member of a sample. Proc. Cambridge Philos. Soc. Vol 24, 180-190.
De Haan, L.: 1970, On Regular Variation and Its Application to the Weak Convergence of Sample Extremes. Mathematical Centre Tract, Vol. 32. Mathematisch Centrum, Amsterdam
De Haan, L.: 1976, Sample extremes: an elementary introduction. Statistica Neerlandica, vol. 30, 161-172.
Weissman, I.: 1978, Estimation of parameters and large quantiles based on the k largest observations. J. Amer. Statist. Assoc. vol. 73, 812-815.
 Nagaraja, H. N.: 1988, Some characterizations of continuous distributions based on regressions of adjacent order statistics and record values. Sankhya A 50, 70-73.
Khan, A.H., and Beg, M.I.: 1987, Characterization of the Weibull distribution by conditional variance. Sankhya A, vol. 49, pp. 268-271.
Kabundi, A., and Mwamba, J.W.M.: 2009, Extreme Value at Risk: a Scenario for Risk Management. SAJE, forthcoming.