Heavy-Tail Phenomena: Probabilistic and Statistical Modeling





To make the pairs as close as possible, the parameters of the second distribution, i.e. the stable one, were matched to the first sample. More precisely, using the first sample we estimated the stable distribution parameters, and the obtained values were used for the generation of the second sample. For each case we simulated a sample of fixed length. In Fig 1 we present the simulated samples and in Fig 2 we illustrate the results of the algorithm.
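As a rough illustration of this pairing step, the sketch below fits stable parameters to a Gaussian sample and uses them to generate the matched stable sample. It is only a sketch under assumptions: the sample length and seed are arbitrary, and scipy's levy_stable.fit (which can be slow) stands in for whatever estimator the authors used.

    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(0)
    N = 5_000                                 # illustrative sample length

    # First sample of the pair: standard Gaussian.
    sample1 = rng.standard_normal(N)

    # Estimate stable parameters (alpha, beta, loc, scale) from the first sample...
    alpha, beta, loc, scale = levy_stable.fit(sample1)

    # ...and use them to generate the matched second (non-Gaussian stable) sample.
    sample2 = levy_stable.rvs(alpha, beta, loc=loc, scale=scale,
                              size=N, random_state=rng)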

For the Gaussian sample the estimated stability indices are equal to 2 in most cases, whereas for the non-Gaussian stable sample the stability index is always smaller than 2.


For both distributions the estimated values are almost independent of K, as the aggregation does not change the index of stability. Moreover, the boxplots become wider with increasing K because the estimation is performed on smaller samples, hence the variance of the estimator increases. This leads to the conclusion that the examined distributions are different.
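A minimal sketch of the aggregation-and-estimation step discussed above, stated as an assumed implementation rather than the authors' code: for each block length K, consecutive block sums are formed and the stability index is re-estimated on the aggregated (and therefore shorter) sample.

    import numpy as np
    from scipy.stats import levy_stable

    def aggregate(x, K):
        """Sum consecutive, non-overlapping blocks of length K."""
        x = np.asarray(x, dtype=float)
        n_blocks = len(x) // K
        return x[:n_blocks * K].reshape(n_blocks, K).sum(axis=1)

    def stability_index_vs_K(x, block_lengths=(1, 2, 5, 10, 20, 50)):
        """Estimate the stability index alpha for each aggregation level K."""
        estimates = {}
        for K in block_lengths:
            y = aggregate(x, K)
            # levy_stable.fit is used here as one possible estimator; it can be slow.
            alpha, beta, loc, scale = levy_stable.fit(y)
            estimates[K] = alpha   # stays near the true index, but gets noisier as K grows
        return estimates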

In contrast to this, the two-sample Kolmogorov-Smirnov test applied to the normalized samples does not reject the null hypothesis of a common distribution, with a p-value equal to 0.
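This comparison can be reproduced along the following lines, assuming that ordinary standardization and scipy's ks_2samp are acceptable stand-ins for the normalization and test used by the authors.

    import numpy as np
    from scipy.stats import ks_2samp

    def standardize(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / x.std(ddof=1)

    def ks_common_distribution(sample1, sample2, level=0.05):
        """Two-sample Kolmogorov-Smirnov test on normalized samples."""
        statistic, p_value = ks_2samp(standardize(sample1), standardize(sample2))
        return p_value, p_value >= level   # True: common distribution not rejected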

Let us mention that the tempered stable distribution was introduced in [60] and developed later in [61]. A general mathematical description of this class of distributions and processes was presented in [72]. It can be shown that in this case the Fourier transform of the random variable T takes an explicit form; one common parametrization is recalled at the end of this paragraph. In Fig 3 we present the simulated samples and in Fig 4 we illustrate the results of the algorithm. For the stable distribution the estimated values are almost independent of K, as the aggregation does not change the index of stability.
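For a totally skewed tempered stable random variable T with stability index alpha in (0, 1), tempering parameter lambda > 0 and scale c > 0, a commonly used form of the Fourier transform is the following; this is a standard textbook parametrization given as an assumption, not necessarily the exact formula used in the paper.

    \mathbb{E}\, e^{\mathrm{i} u T}
      = \exp\!\Bigl( c\,\Gamma(-\alpha)\bigl[(\lambda - \mathrm{i}u)^{\alpha} - \lambda^{\alpha}\bigr] \Bigr)

As lambda tends to 0 this reduces to the characteristic function of a totally skewed stable law, while for lambda > 0 the tails are exponentially tempered and all moments are finite.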

Therefore, based on the behavior of the estimates under aggregation, we conclude that the analyzed distributions are different. In contrast to this, the two-sample Kolmogorov-Smirnov test does not reject the null hypothesis of a common distribution, with a p-value equal to 0. The square Gaussian random variable W with zero mean and unit variance is defined as the square of a Gaussian variable; one explicit reading of this definition is recalled after this paragraph. The W random variable is a special case of the univariate non-Gaussian systems considered in [46]. In Fig 5 we present the simulated samples and in Fig 6 we illustrate the results of the algorithm.
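One natural reading of that definition, stated here as an assumption rather than as the paper's exact formula, is

    W = X^{2}, \qquad X \sim \mathcal{N}(0, 1),

so that W follows a chi-squared distribution with one degree of freedom: its tails are light (all moments are finite), yet the distribution is markedly non-Gaussian.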



Hence, we conclude that the examined distributions are different. The probability density function of the random variable Z is given by a closed-form expression. In Fig 7 we present the simulated samples and in Fig 8 we illustrate the results of the algorithm. This clearly indicates that the analyzed distributions are different. In Fig 9 we present the simulated samples and in Fig 10 we illustrate the results of the algorithm. For both cases the estimated values are almost independent of K, as the aggregation does not change the index of stability.

Finally, the boxplots become wider with increasing K because the estimation is performed on smaller samples, hence the variance of the estimator increases.

Discriminating between Light- and Heavy-Tailed Distributions with Limit Theorem

Hence, we conclude that the analyzed distributions are different. In contrast to this, the two-sample Kolmogorov-Smirnov test fails to reject the null hypothesis of a common distribution, with a p-value equal to 0. We now investigate data obtained in an experiment on a controlled thermonuclear fusion device. One of the most important questions related to these data concerns the statistical properties of the plasma fluctuations before and after the so-called L-H transition.

This is the name of a sudden transition from the low-confinement (L) mode to the high-confinement (H) mode, accompanied by suppression of turbulence and a rapid drop of turbulent transport at the edge of the thermonuclear device [74]. We consider two datasets and want to confirm statistically whether the L-H transition occurred.

For a detailed description of the experimental setup, see [19, 75]. From the first dataset we extract two subsamples: the first (data1) consists of observations from one range of indices, while the second (data2) contains observations from another range. In Fig 11 we present the analyzed vectors of observations after normalization, together with the corresponding empirical tails and PDFs.

We can observe that the estimated values behave differently for the two subsamples. For data1 they stay below 2, which suggests that either the data are non-Gaussian stable or they belong to the domain of attraction of this law. For data2 the estimates are close to 2, so we may claim that the data are Gaussian. Hence, we conclude that the underlying distributions are different. In order to confirm these results we also performed the Jarque-Bera (JB) test for Gaussianity on both subsamples [22, 27, 37].

For data1 the test rigorously rejects the hypothesis of Gaussianity, the obtained p-value being equal to 0. For data2 the p-value of the JB test is equal to 0. Moreover, we employed the Anderson-Darling (AD) test for the stable distribution [22, 27, 62]. It appears that data1 can be modeled by the non-Gaussian stable distribution; the p-value is equal to 0. This confirms the conclusions of our test.
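A sketch of this Gaussianity check using scipy's implementation of the Jarque-Bera test; data1 and data2 stand for the two extracted subsamples. The Anderson-Darling test for the stable law is not part of scipy and would require a dedicated implementation, so it is not shown here.

    from scipy.stats import jarque_bera

    def jb_gaussianity_rejected(sample, level=0.05):
        """Jarque-Bera test: returns (rejected, p_value) at the given significance level."""
        statistic, p_value = jarque_bera(sample)
        return p_value < level, p_value

    # Usage on the two subsamples (placeholders for the extracted data):
    # rejected1, p1 = jb_gaussianity_rejected(data1)
    # rejected2, p2 = jb_gaussianity_rejected(data2)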

In contrast to this, the two-sample Kolmogorov-Smirnov test does not reject the hypothesis of a common distribution; the p-value is quite high, equal to 0. For the second dataset, the first subsample (data1) again consists of observations from one range of indices, while the second (data2) contains observations from another range.


In Fig 13 we present the analyzed vectors of observations after normalization, together with the corresponding empirical tails and PDFs. We can observe that the estimated values behave differently for the two subsamples. For the first subsample they are close to 2 already for small block lengths, which suggests that the data are Gaussian. For the second subsample the values only converge to 2 as the aggregation level increases.


We may therefore claim that these data are not Gaussian but belong to the domain of attraction of the Gaussian law. Hence, the conclusion is that the underlying distributions are different. In order to confirm these results we also performed the JB test for Gaussianity on both subsamples. For data1 the test does not reject Gaussianity; the corresponding p-value is equal to 0. Moreover, we employed the AD test for the stable distribution. It appears that the stable distribution is rigorously rejected for data2, with the p-value being only 0. In contrast to this, the two-sample Kolmogorov-Smirnov test does not reject the hypothesis of a common distribution; the p-value is extremely high, equal to 0.



In this paper we introduced an algorithm for distinguishing between light- and heavy-tailed distributions based on the generalized limit theorem. The data are aggregated into blocks of increasing length, the stability index is estimated for each aggregation level, and the estimated values are then plotted against the increasing block lengths.
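For reference, the classical formulation of the generalized central limit theorem underlying this procedure, stated here in textbook form rather than as a quotation from the paper: if X_1, X_2, ... are i.i.d. random variables in the domain of attraction of an alpha-stable law with 0 < alpha <= 2, then there exist normalizing constants a_n > 0 and centering constants b_n such that

    \frac{1}{a_n}\left( \sum_{i=1}^{n} X_i - b_n \right) \;\xrightarrow{d}\; S_{\alpha}, \qquad n \to \infty.

For finite-variance (light-tailed) data the limit S_alpha is Gaussian, i.e. alpha = 2 and a_n is proportional to the square root of n; for heavy-tailed data with tail index smaller than 2 the limit is a non-Gaussian stable law. This is why the stability index estimated from aggregated blocks separates the two regimes.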

