Information Measure in Terms of the Hazard Function and Its Estimate
Sangun Park
1 Department of Statistics and Data Science, Yonsei University, Shinchon Dong 134, Seoul 03722, Korea
2 Department of Applied Statistics, Yonsei University, Shinchon Dong 134, Seoul 03722, Korea
Entropy 2021, 23(3), 298; https://doi.org/10.3390/e23030298
Submission received: 14 January 2021 / Revised: 22 February 2021 / Accepted: 25 February 2021 / Published: 28 February 2021
(This article belongs to the Special Issue Applications of Information Theory in Statistics)

Abstract

It is well-known that some information measures, including the Fisher information and the entropy, can be represented in terms of the hazard function. In this paper, we provide representations of further information measures, including the quantal Fisher information and the quantal Kullback-Leibler information, in terms of the hazard function and the reverse hazard function. We also provide some estimators of the quantal KL information, which include the Anderson-Darling test statistic, and compare their performances.

1. Introduction

Suppose that $X$ is a random variable with a continuous probability density function (p.d.f.) $f(x;\theta)$, where $\theta$ is a real-valued scalar parameter. The Fisher information, which plays an important role in statistical estimation and inference, is defined as
$$ I(\theta) = \int \Big\{ \frac{\partial}{\partial\theta} \log f(x;\theta) \Big\}^{2} f(x;\theta)\,dx. $$
The Fisher information identity in terms of the hazard function was provided by Efron and Johnstone [1] as
$$ I(\theta) = \int \Big\{ \frac{\partial}{\partial\theta} \log h(x;\theta) \Big\}^{2} f(x;\theta)\,dx, \tag{1} $$
where $h(x;\theta)$ is the hazard function, defined as $f(x;\theta)/(1-F(x;\theta))$, and $F(x;\theta)$ is the cumulative distribution function.
It is also well-known that the entropy (Teitler et al., 1986) and Kullback-Leibler information [2] can be represented in terms of the hazard function, respectively, as
$$ H(X) = 1 - \int f(x)\,\log h(x)\,dx $$
and
$$ KL(f:g) = \int f(x)\Big( \frac{h_g(x)}{h_f(x)} - \log\frac{h_g(x)}{h_f(x)} - 1 \Big)\,dx, $$
where $h_f(x)$ and $h_g(x)$ are the hazard functions, defined as $f(x)/(1-F(x))$ and $g(x)/(1-G(x))$, respectively.
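As a quick numerical illustration of these identities (an added sketch, not part of the original derivation; the integration limits and the differentiation step are arbitrary choices), the following Python snippet evaluates the Fisher information of the N(θ, 1) location family at θ = 0 from the density score and from the hazard score in (1); both quadratures should return values close to 1.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

EPS = 1e-5  # step for numerically differentiating the scores with respect to theta

def score_density(x):
    # d/dtheta log f(x; theta) at theta = 0 for the N(theta, 1) family
    return (norm.logpdf(x, loc=EPS) - norm.logpdf(x, loc=-EPS)) / (2 * EPS)

def score_hazard(x):
    # d/dtheta log h(x; theta) at theta = 0, where h = f / (1 - F)
    def log_h(theta):
        return norm.logpdf(x, loc=theta) - norm.logsf(x, loc=theta)
    return (log_h(EPS) - log_h(-EPS)) / (2 * EPS)

I_density = quad(lambda x: score_density(x) ** 2 * norm.pdf(x), -8, 8)[0]
I_hazard = quad(lambda x: score_hazard(x) ** 2 * norm.pdf(x), -8, 8)[0]
print(I_density, I_hazard)  # both should be approximately 1.0
```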
The quantal (randomly censored) Fisher information and the quantal (randomly censored) Kullback-Leibler information have been defined [3], respectively, as
$$ I_{QF}(\theta) = \int \Big\{ F(t;\theta)\Big(\frac{\partial}{\partial\theta}\log F(t;\theta)\Big)^{2} + \bar F(t;\theta)\Big(\frac{\partial}{\partial\theta}\log \bar F(t;\theta)\Big)^{2} \Big\}\,dW(t) $$
and
$$ QKL(F:G) = \int \Big\{ F(t)\log\frac{F(t)}{G(t)} + \bar F(t)\log\frac{\bar F(t)}{\bar G(t)} \Big\}\,dW(t), \tag{2} $$
where $W(t)$ is an appropriate weight function which satisfies $\int dW(x) = 1$.
The quantal Fisher information is related to the Fisher information in the ranked set sample, and the quantal Kullback-Leibler information is related to the cumulative residual entropy [4] and the cumulative entropy [5], defined as
$$ CRE(F) = -\int_{0}^{\infty} (1-F(x))\,\log(1-F(x))\,dx $$
and
$$ CE(F) = -\int_{0}^{\infty} F(x)\,\log F(x)\,dx. $$
The information representation in terms of the cumulative functions enables us to estimate the information measure by employing the empirical distribution function.
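For instance, the cumulative residual entropy and the cumulative entropy can be estimated by plugging the empirical distribution function into the integrals above. The following sketch is my own illustration (the function name, the sample size, and the Exp(1) example are assumptions, not taken from the paper); for an Exp(1) sample, the CRE estimate should be close to 1.

```python
import numpy as np

def cre_ce_empirical(x):
    """Plug-in estimates of CRE(F) = -int (1-F) log(1-F) dx and CE(F) = -int F log F dx,
    obtained by replacing F with the empirical distribution function, which is
    piecewise constant between consecutive order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    widths = np.diff(x)          # lengths of the intervals [x_(i), x_(i+1))
    Fn = np.arange(1, n) / n     # value of F_n on each interval (strictly between 0 and 1)
    Sn = 1.0 - Fn                # corresponding survival function values
    cre = -np.sum(Sn * np.log(Sn) * widths)
    ce = -np.sum(Fn * np.log(Fn) * widths)
    return cre, ce

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=2000)
print(cre_ce_empirical(sample))  # the CRE of Exp(1) equals 1
```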
The organization of this article is as follows: In Section 2, we discuss the relation between the quantal Fisher information and quantal Kullback-Leibler information. In Section 3, we provide the expression of the quantal Fisher information in terms of the hazard and reverse hazard functions as
$$ I_{QF}(\theta) = \int W(x)\Big\{\frac{\partial}{\partial\theta}\log r(x;\theta)\Big\}^{2} f(x;\theta)\,dx - \int W(x)\Big\{\frac{\partial}{\partial\theta}\log h(x;\theta)\Big\}^{2} f(x;\theta)\,dx, $$
where $h(x;\theta)$ and $r(x;\theta)$ are the hazard and reverse hazard functions, respectively.
We also provide the expression of the quantal (randomly censored) KL information in terms of the hazard and reverse hazard functions as
$$ QKL(F:G) = \int W(x)\,f(x)\Big\{ \frac{r_g(x)}{r_f(x)} - \log\frac{r_g(x)}{r_f(x)} - \frac{h_g(x)}{h_f(x)} + \log\frac{h_g(x)}{h_f(x)} \Big\}\,dx, $$
where $r_f(x)$ and $r_g(x)$ are the reverse hazard functions, defined as $f(x)/F(x)$ and $g(x)/G(x)$, respectively.
This representation enables us to estimate the quantal information by employing a nonparametric hazard function estimator. In Section 4, we discuss the choice of the weight function $W(x)$ in terms of maximizing the related Fisher information. In Section 5, we provide estimators of (2) and evaluate their performance as goodness-of-fit test statistics. Finally, in Section 6, some concluding remarks are provided.

2. Quantal Fisher Information and Quantal Kullback-Leibler Information

If we define the quantal response variable Y at t as
$$ Y = \begin{cases} 1 & \text{if } X \le t, \\ 0 & \text{if } X > t, \end{cases} $$
its probability mass function is
$$ f_Y(y;\theta) = F(t;\theta)^{\,y}\,\bar F(t;\theta)^{\,1-y}. $$
Then, the conditional Fisher information in the quantal response at t about θ can be obtained as
$$ I_{QF}^{t}(\theta) = F(t;\theta)\Big\{\frac{\partial}{\partial\theta}\log F(t;\theta)\Big\}^{2} + \bar F(t;\theta)\Big\{\frac{\partial}{\partial\theta}\log \bar F(t;\theta)\Big\}^{2}. $$
This conditional Fisher information has been studied in terms of censoring by Gertsbakh [6] and Park [7], and its weighted average has been defined to be the quantal (randomly censored) Fisher information [3] as
$$ I_{QF}(\theta) = \int \Big\{ F(t;\theta)\Big(\frac{\partial}{\partial\theta}\log F(t;\theta)\Big)^{2} + \bar F(t;\theta)\Big(\frac{\partial}{\partial\theta}\log \bar F(t;\theta)\Big)^{2} \Big\}\,dW(t), \tag{5} $$
where $W(t)$ is an appropriate weight function. The expression (5) shows that $I_{QF}(\theta)$ may be called the cumulative Fisher information and can be written more simply as
$$ I_{QF}(\theta) = \int \frac{\big\{\frac{\partial}{\partial\theta} F(x;\theta)\big\}^{2}}{F(x;\theta)\,(1-F(x;\theta))}\,dW(x). $$
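The cumulative form above lends itself to direct numerical evaluation. As an illustrative sketch (my own, not from the paper), the following code computes I_QF(θ) for the N(θ, 1) location family at θ = 0 with the weight dW(x) = dΦ(x); the result should be close to the value 0.4805 reported for dW_3(x) in Table 1.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def quantal_fisher_location(w_density, lo=-8.0, hi=8.0):
    """I_QF for the N(theta, 1) location family at theta = 0:
    the integral of (dF/dtheta)^2 / {F (1 - F)} against dW(x) = w_density(x) dx,
    where dF/dtheta = -phi(x), so (dF/dtheta)^2 = phi(x)^2."""
    integrand = lambda x: norm.pdf(x) ** 2 / (norm.cdf(x) * norm.sf(x)) * w_density(x)
    return quad(integrand, lo, hi)[0]

print(quantal_fisher_location(norm.pdf))  # weight dW(x) = dPhi(x); approximately 0.48
```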
Remark 1.
If we take $W(x)$ to be $F(x;\theta)$, $I_{QF}(\theta)$ is related to the Fisher information in the ranked set sample [8] as
$$ I_{RSS}(\theta) = I_{SRS}(\theta) + n(n+1)\,I_{QF}(\theta), $$
where $I_{SRS}(\theta)$ is the Fisher information in a simple random sample of size $n$, which is equal to $nI(\theta)$, and $I_{RSS}(\theta)$ is the Fisher information in a ranked set sample.
The result means that the ranked set sample has additional ordering information in the $n(n+1)$ pairs beyond the simple random sample. Hence, $1 + (n+1)\,I_{QF}(\theta)/I(\theta)$ represents the efficiency of the ranked set sample relative to the simple random sample.
In a similar way, the Kullback-Leibler (KL) information between two quantal random variables can be obtained as
$$ KL^{t}(F:G) = F(t)\log\frac{F(t)}{G(t)} + \bar F(t)\log\frac{\bar F(t)}{\bar G(t)}. $$
Then, the weighted average of $KL^{t}(F:G)$ has been defined to be the quantal (randomly censored) divergence [3], as
$$ QKL(F:G) = \int \Big\{ F(x)\log\frac{F(x)}{G(x)} + \bar F(x)\log\frac{\bar F(x)}{\bar G(x)} \Big\}\,dW(x). $$
We note that the quantal KL information (quantal divergence) with $dW(x) = dx$ is equal to the sum of the cumulative KL information (Park, 2015) and the cumulative residual KL information [9]. This quantal Kullback-Leibler information has been discussed in constructing goodness-of-fit test statistics by Zhang [10].
The following approximation of the KL information in terms of the Fisher information is well-known [11], as
$$ KL\big(f(x;\theta): f(x;\theta+\Delta\theta)\big) \approx \frac{1}{2}(\Delta\theta)^{2}\int\Big\{\frac{\partial}{\partial\theta}\log f(x;\theta)\Big\}^{2} f(x;\theta)\,dx. \tag{6} $$
Hence, we can also apply a Taylor expansion to (2) to obtain the approximation of the quantal KL information in terms of the quantal Fisher information as follows:
Lemma 1.
$$ QKL\big(F(x;\theta): F(x;\theta+\Delta\theta)\big) \approx \frac{1}{2}(\Delta\theta)^{2}\,I_{QF}(\theta). $$
Proof of Lemma 1.
By applying the Taylor expansion, we have
$$ QKL\big(F(x;\theta): F(x;\theta+\Delta\theta)\big) \approx -\frac{1}{2}(\Delta\theta)^{2}\int\Big\{ F(x;\theta)\frac{\partial^{2}}{\partial\theta^{2}}\log F(x;\theta) + \bar F(x;\theta)\frac{\partial^{2}}{\partial\theta^{2}}\log \bar F(x;\theta) \Big\}\,dW(x). $$
Then, we can show that
$$ -F(x;\theta)\frac{\partial^{2}}{\partial\theta^{2}}\log F(x;\theta) - \bar F(x;\theta)\frac{\partial^{2}}{\partial\theta^{2}}\log \bar F(x;\theta) = F(x;\theta)\Big\{\frac{\partial}{\partial\theta}\log F(x;\theta)\Big\}^{2} + \bar F(x;\theta)\Big\{\frac{\partial}{\partial\theta}\log \bar F(x;\theta)\Big\}^{2}, $$
which follows because $F(x;\theta) + \bar F(x;\theta) = 1$, so the terms involving $\frac{\partial^{2}}{\partial\theta^{2}}F$ and $\frac{\partial^{2}}{\partial\theta^{2}}\bar F$ cancel. Combining this with the definition in (5) gives the result. □
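A small numerical check of Lemma 1 (an illustrative sketch under the normal location model with dW(x) = dΦ(x); the step Δθ = 0.05 is an arbitrary choice): for a small Δθ, the quantal KL information computed from definition (2) should be close to ½(Δθ)² I_QF(θ).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

DTHETA = 0.05  # small location shift

def qkl_normal_location(dtheta, lo=-8.0, hi=8.0):
    """QKL(F(.;0) : F(.;dtheta)) for the N(theta, 1) family with weight dW(x) = dPhi(x)."""
    def integrand(x):
        F0, F1 = norm.cdf(x), norm.cdf(x, loc=dtheta)
        S0, S1 = norm.sf(x), norm.sf(x, loc=dtheta)
        return (F0 * np.log(F0 / F1) + S0 * np.log(S0 / S1)) * norm.pdf(x)
    return quad(integrand, lo, hi)[0]

def quantal_fisher_location(lo=-8.0, hi=8.0):
    integrand = lambda x: norm.pdf(x) ** 2 / (norm.cdf(x) * norm.sf(x)) * norm.pdf(x)
    return quad(integrand, lo, hi)[0]

print(qkl_normal_location(DTHETA))                    # direct evaluation of the definition
print(0.5 * DTHETA ** 2 * quantal_fisher_location())  # Lemma 1 approximation
```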

3. Quantal Fisher Information in Terms of the (Reversed) Hazard Function

It is well-known that the Fisher information can be represented in terms of the hazard function [1] as
$$ I(\theta) = \int \Big\{ \frac{\partial}{\partial\theta}\log h(x;\theta) \Big\}^{2} f(x;\theta)\,dx, \tag{7} $$
where $h(x;\theta)$ is the hazard function defined as $f(x;\theta)/(1-F(x;\theta))$.
The mirror image of (1) provides another representation of the Fisher information in terms of the reverse hazard function [12] as
$$ I(\theta) = \int \Big\{ \frac{\partial}{\partial\theta}\log r(x;\theta) \Big\}^{2} f(x;\theta)\,dx, \tag{8} $$
where $r(x;\theta)$ is the reverse hazard function defined as $f(x;\theta)/F(x;\theta)$.
Then, (6) can be written again in terms of both the hazard function and the reversed hazard function, in view of (7) and (8), as follows:
Lemma 2.
$$ KL\big(f(x;\theta): f(x;\theta+\Delta\theta)\big) \approx \frac{1}{2}(\Delta\theta)^{2}\int\Big\{\frac{\partial}{\partial\theta}\log h(x;\theta)\Big\}^{2} f(x;\theta)\,dx = \frac{1}{2}(\Delta\theta)^{2}\int\Big\{\frac{\partial}{\partial\theta}\log r(x;\theta)\Big\}^{2} f(x;\theta)\,dx. $$
Now, we show that the quantal Fisher information can also be expressed in terms of both the hazard function and the reversed hazard function, as follows.
Theorem 1.
Suppose that $W(x)$ is bounded and that the regularity conditions for the existence of the Fisher information hold. Then
$$ I_{QF}(\theta) = \int W(x)\Big\{\frac{\partial}{\partial\theta}\log r(x;\theta)\Big\}^{2} f(x;\theta)\,dx - \int W(x)\Big\{\frac{\partial}{\partial\theta}\log h(x;\theta)\Big\}^{2} f(x;\theta)\,dx. \tag{9} $$
Proof of Theorem 1.
In view of Park [7], we have the decomposition of the Fisher information as
$$ I(\theta) = I_{L}^{t}(\theta) + I_{QF}^{t}(\theta) + I_{R}^{t}(\theta), \tag{10} $$
where
$$ I_{L}^{t}(\theta) = \int_{-\infty}^{t}\Big\{\frac{\partial}{\partial\theta}\log r(x;\theta)\Big\}^{2} f(x;\theta)\,dx $$
and
$$ I_{R}^{t}(\theta) = \int_{t}^{\infty}\Big\{\frac{\partial}{\partial\theta}\log h(x;\theta)\Big\}^{2} f(x;\theta)\,dx. $$
Hence, $I_{QF}^{t}(\theta)$ can also be expressed from (10) as
$$ I_{QF}^{t}(\theta) = \int_{-\infty}^{t}\Big\{\frac{\partial}{\partial\theta}\log h(x;\theta)\Big\}^{2} f(x;\theta)\,dx - \int_{-\infty}^{t}\Big\{\frac{\partial}{\partial\theta}\log r(x;\theta)\Big\}^{2} f(x;\theta)\,dx. \tag{11} $$
We can take the expectation of (11) with respect to $dW(t)$ and apply Fubini's theorem; the unweighted terms cancel because (7) and (8) both equal $I(\theta)$, which yields the result. □
Example 1.
If $W(x)$ is taken to be $F(x;\theta)$, (9) can be written as
$$ I_{QF}(\theta) = \frac{1}{2}\big( I_{1:2}(\theta) + I_{2:2}(\theta) - 2I(\theta) \big) $$
because it has been shown in Park (1996) that
$$ I_{1:2}(\theta) = 2\int\Big\{\frac{\partial}{\partial\theta}\log h(x;\theta)\Big\}^{2} f(x;\theta)\,\big(1-F(x;\theta)\big)\,dx, \qquad I_{2:2}(\theta) = 2\int\Big\{\frac{\partial}{\partial\theta}\log r(x;\theta)\Big\}^{2} f(x;\theta)\,F(x;\theta)\,dx, $$
where $I_{i:n}(\theta)$ is the Fisher information in the $i$th order statistic from an independently and identically distributed sample of size $n$.
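As an illustrative numerical check of Example 1 (my own sketch, again for the N(θ, 1) location family at θ = 0), the following code evaluates I_QF(θ) with W(x) = F(x;θ) from the cumulative form and compares it with ½(I_{1:2}(θ) + I_{2:2}(θ) − 2I(θ)) computed from the order-statistic expressions above; the two values should nearly coincide.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

EPS = 1e-5

def d_log(func, x):
    # d/dtheta log func(x; theta) at theta = 0, by central differences in the location
    return (np.log(func(x, EPS)) - np.log(func(x, -EPS))) / (2 * EPS)

hazard = lambda x, t: norm.pdf(x, loc=t) / norm.sf(x, loc=t)        # h(x; theta)
rev_hazard = lambda x, t: norm.pdf(x, loc=t) / norm.cdf(x, loc=t)   # r(x; theta)

I = 1.0  # Fisher information of the N(theta, 1) location family
I12 = quad(lambda x: 2 * d_log(hazard, x) ** 2 * norm.pdf(x) * norm.sf(x), -8, 8)[0]
I22 = quad(lambda x: 2 * d_log(rev_hazard, x) ** 2 * norm.pdf(x) * norm.cdf(x), -8, 8)[0]
lhs = quad(lambda x: norm.pdf(x) ** 2 / (norm.cdf(x) * norm.sf(x)) * norm.pdf(x), -8, 8)[0]
print(lhs, 0.5 * (I12 + I22 - 2 * I))  # both approximately 0.48
```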

4. Quantal KL Information and Choice of the Weight Function in Terms of Maximizing the Quantal Fisher Information

Lemma 2 shows that the approximation of the Kullback-Leibler information can be represented in terms of the hazard function and the reverse hazard function. Correspondingly, the following representations of the KL information itself have been shown in Park and Shin [2]:
$$ KL(f:g) = \int f(x)\Big( \frac{h_g(x)}{h_f(x)} - \log\frac{h_g(x)}{h_f(x)} - 1 \Big)\,dx \tag{12} $$
and
$$ KL(f:g) = \int f(x)\Big( \frac{r_g(x)}{r_f(x)} - \log\frac{r_g(x)}{r_f(x)} - 1 \Big)\,dx. $$
In a similar way, Lemma 2 and Theorem 1 show that the approximation of the quantal Kullback-Leibler information can also be represented in terms of the hazard function and the reverse hazard function; hence, we can expect the following representation of the quantal KL information itself.
Theorem 2.
$$ QKL(F:G) = \int W(x)\,f(x)\Big\{ \frac{r_g(x)}{r_f(x)} - \log\frac{r_g(x)}{r_f(x)} - 1 \Big\}\,dx - \int W(x)\,f(x)\Big\{ \frac{h_g(x)}{h_f(x)} - \log\frac{h_g(x)}{h_f(x)} - 1 \Big\}\,dx. \tag{13} $$
Proof of Theorem 2.
We can show that
$$ \frac{d}{dx}\Big\{ F(x)\log\frac{F(x)}{G(x)} \Big\} = -f(x)\Big\{ \frac{r_g(x)}{r_f(x)} - \log\frac{r_g(x)}{r_f(x)} - 1 + \log\frac{g(x)}{f(x)} \Big\}, $$
$$ \frac{d}{dx}\Big\{ \bar F(x)\log\frac{\bar F(x)}{\bar G(x)} \Big\} = f(x)\Big\{ \frac{h_g(x)}{h_f(x)} - \log\frac{h_g(x)}{h_f(x)} - 1 + \log\frac{g(x)}{f(x)} \Big\}. $$
Then, we can apply the integration by parts to (2) to get the result. □
Equation (13) can be rewritten in terms of the cumulative distribution function as follows:
$$ QKL(F:G) = \int W(x)\,\frac{F(x) - G(x)}{G(x)\,(1-G(x))}\,dG(x) + \int W(x)\,\log\frac{G(x)/(1-G(x))}{F(x)/(1-F(x))}\,dF(x). $$
Hence, the quantal KL information has another representation in terms of the cumulative distribution function, which measures the weighted difference between the distribution functions and the weighted difference between their log odds.
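To illustrate Theorem 2 and the discussion above (a sketch with an arbitrarily chosen pair of normal distributions and the weight dW(x) = dF(x); none of these choices come from the paper), the following code evaluates QKL(F:G) both from the defining expression (2) and from the hazard/reverse-hazard form (13); the two quadratures should agree.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

F, G = norm(loc=0.0, scale=1.0), norm(loc=0.3, scale=1.2)  # arbitrary illustrative choices
w_density = F.pdf                                          # dW(x) = dF(x), so W(x) = F(x)

def qkl_definition(lo=-10, hi=10):
    def integrand(x):
        return (F.cdf(x) * np.log(F.cdf(x) / G.cdf(x))
                + F.sf(x) * np.log(F.sf(x) / G.sf(x))) * w_density(x)
    return quad(integrand, lo, hi)[0]

def qkl_hazard_form(lo=-10, hi=10):
    def integrand(x):
        rf, rg = F.pdf(x) / F.cdf(x), G.pdf(x) / G.cdf(x)  # reverse hazard functions
        hf, hg = F.pdf(x) / F.sf(x), G.pdf(x) / G.sf(x)    # hazard functions
        return F.cdf(x) * F.pdf(x) * ((rg / rf - np.log(rg / rf) - 1)
                                      - (hg / hf - np.log(hg / hf) - 1))
    return quad(integrand, lo, hi)[0]

print(qkl_definition(), qkl_hazard_form())  # the two values should (nearly) coincide
```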
Now, we consider the choice of the weight function $W(x)$ in $QKL(F:G)$, which has not been discussed much so far. Here, we adopt the criterion of maximizing the quantal Fisher information in Theorem 1. For the multi-parameter case, we have the quantal Fisher information matrix and can consider its determinant, which is called the generalized Fisher information.
For illustration, we take $F(x)$ to be the normal distribution and consider the following $dW(x)$'s, whose shapes are plotted in Figure 1; $dW_1(x)$ is a bimodal weight function, and the shapes become more concentrated around the center as $i$ in $dW_i(x)$ increases.
  • $dW_1(x) = d\{0.5\,\Phi(x-2) + 0.5\,\Phi(x+2)\}$
  • $dW_2(x) = (\pi/8)\,\Phi(x)^{-0.5}\,(1-\Phi(x))^{-0.5}\,d\Phi(x)$
  • $dW_3(x) = d\Phi(x)$
  • $dW_4(x) = 6\,\Phi(x)\,(1-\Phi(x))\,d\Phi(x)$,
where $\Phi(x)$ is the cumulative distribution function of the standard normal random variable.
We calculate the corresponding quantal Fisher information and summarize the results in Table 1. We can see from Table 1 that $I_{QF}(\theta)$ for the location parameter becomes larger as the weight function becomes more centralized, whereas $I_{QF}(\theta)$ for the scale parameter is maximized at the bimodal weight function. However, the generalized quantal Fisher information is maximized at $dW(x) = d\Phi(x)$.
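The determinant criterion can be evaluated numerically along the following lines (my own sketch; it computes the 2×2 quantal Fisher information matrix of the normal location-scale family at (μ, σ) = (0, 1) with dW(x) = dΦ(x), whose diagonal entries and determinant should be close to the dW_3 column of Table 1).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def qfim_normal(w_density, lo=-8.0, hi=8.0):
    """2x2 quantal Fisher information matrix for (mu, sigma) of N(mu, sigma^2) at (0, 1):
    entries are int (dF/da)(dF/db) / {F (1 - F)} w(x) dx, with
    dF/dmu = -phi(x) and dF/dsigma = -x phi(x)."""
    def entry(a, b):
        def integrand(x):
            deriv = {"mu": -norm.pdf(x), "sigma": -x * norm.pdf(x)}
            return deriv[a] * deriv[b] / (norm.cdf(x) * norm.sf(x)) * w_density(x)
        return quad(integrand, lo, hi)[0]
    return np.array([[entry("mu", "mu"), entry("mu", "sigma")],
                     [entry("sigma", "mu"), entry("sigma", "sigma")]])

M = qfim_normal(norm.pdf)  # weight dW(x) = dPhi(x)
print(np.diag(M))          # approximately (0.4805, 0.2701)
print(np.linalg.det(M))    # generalized quantal Fisher information, approximately 0.13
```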

5. Estimation of the Quantal KL Information

Suppose that we have an independently and identically distributed (IID) sample of size $n$, $(x_1, \ldots, x_n)$, from an assumed density function $f_\theta(x)$, and that $(x_{1:n}, \ldots, x_{n:n})$ are the ordered values. Then, the distance between the sample distribution and the assumed distribution can be measured by $KL(f_n : f_\theta)$, where $f_n$ is an appropriate nonparametric density function estimator. Its estimate has been studied as a goodness-of-fit test statistic by many authors, including Pakyari and Balakrishnan [13], Noughabi and Arghami [14], and Qiu and Jia [15], who considered piecewise uniform density function estimators or nonparametric kernel density function estimators. In the same manner, the estimate of (12) has been studied for the same purpose by Park and Shin (2015), who considered a nonparametric hazard function estimator. However, we note that the critical values based on those nonparametric density (hazard) function estimators depend on the choice of a bandwidth-type parameter.
We can also measure the distance between the sample distribution and the assumed distribution with $QKL(F_n : F_\theta)$ if we choose the weight function to be $F_n(x)$, in view of Section 4, which can be written as
$$ QKL(F_n:F_\theta) = \int \Big\{ F_n(x)\log\frac{F_n(x)}{F_\theta(x)} + \bar F_n(x)\log\frac{\bar F_n(x)}{\bar F_\theta(x)} \Big\}\,dF_n(x), \tag{14} $$
where $F_n$ is the empirical distribution function.
Then, $F_n(x_{i:n})$ is obtained as $i/n$, $dF_n(x)$ assigns mass $1/n$ only at the $x_{i:n}$'s, and (14) can be written as
$$ QKL_R(F_n:F_\theta) = \sum_{i=1}^{n}\Big\{ \frac{i}{n}\log\frac{i/n}{\xi_i} + \frac{n-i}{n}\log\frac{(n-i)/n}{1-\xi_i} \Big\} = -\frac{1}{n}\sum_{i=1}^{n}\big\{ i\log\xi_i + (n-i)\log(1-\xi_i) \big\} + C_1, $$
where $\xi_i = F_\theta(x_{i:n})$, $\xi_0 = 0$, $\xi_{n+1} = 1$, and $C_1 = \sum_{i=1}^{n}\big\{ (i/n)\log(i/n) + (1-i/n)\log(1-i/n) \big\}$.
However, because the empirical distribution function is only right-continuous, we may also take $\bar F_n(x_{i:n})$ to be $(n-i+1)/n$, so that $F_n(x_{i:n})$ is $(i-1)/n$; then we have
$$ QKL_L(F_n:F_\theta) = \sum_{i=1}^{n}\Big\{ \frac{i-1}{n}\log\frac{(i-1)/n}{\xi_i} + \frac{n-i+1}{n}\log\frac{(n-i+1)/n}{1-\xi_i} \Big\} = -\frac{1}{n}\sum_{i=1}^{n}\big\{ (i-1)\log\xi_i + (n-i+1)\log(1-\xi_i) \big\} + C_1. $$
Hence, we may take the average of the two and obtain
$$ QKL_n(F_n:F_\theta) = -\frac{1}{n}\sum_{i=1}^{n}\big\{ (i-0.5)\log\xi_i + (n-i+0.5)\log(1-\xi_i) \big\} + C_1, $$
which is actually equivalent to the Anderson-Darling test statistic.
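For concreteness (a minimal sketch with variable and function names of my own choosing), the following function computes QKL_R, QKL_L, and QKL_n for the composite normality test, with ξ_i = F_θ̂(x_{i:n}) evaluated at the estimated mean and standard deviation; the resulting values can be compared with the critical values in Table 2.

```python
import numpy as np
from scipy.stats import norm

def qkl_statistics(x):
    """QKL_R, QKL_L and QKL_n for testing normality; mu and sigma are estimated
    by the sample mean and the sample standard deviation."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    xi = norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))
    i = np.arange(1, n + 1)
    u = i / n
    # C1 = sum (i/n) log(i/n) + (1 - i/n) log(1 - i/n), with the 0 log 0 = 0 convention
    C1 = np.sum(u * np.log(u)) + np.sum((1 - u[:-1]) * np.log(1 - u[:-1]))
    qkl_r = -np.sum(i * np.log(xi) + (n - i) * np.log(1 - xi)) / n + C1
    qkl_l = -np.sum((i - 1) * np.log(xi) + (n - i + 1) * np.log(1 - xi)) / n + C1
    qkl_n = -np.sum((i - 0.5) * np.log(xi) + (n - i + 0.5) * np.log(1 - xi)) / n + C1
    return qkl_r, qkl_l, qkl_n

rng = np.random.default_rng(1)
print(qkl_statistics(rng.normal(size=50)))  # statistics for one simulated null sample of size 50
```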
Zhang [10] proposed a test statistic by choosing the weight function
$$ dW(x) = \frac{1}{F_n(x)\,\bar F_n(x)}\,dF_n(x), $$
as
$$ Z_A = -\sum_{i=1}^{n}\Big\{ \frac{\log\xi_i}{(n-i+0.5)/n} + \frac{\log(1-\xi_i)}{(i-0.5)/n} \Big\} + C_2, $$
where $C_2 = \sum_{i=1}^{n}\big\{ \log\big((i-0.5)/n\big)\big/\big((n-i+0.5)/n\big) + \log\big((n-i+0.5)/n\big)\big/\big((i-0.5)/n\big) \big\}$.
For example, we consider the performance of the above statistics for testing the following hypothesis:
$H_0$: the true distribution function is $N(\mu, \sigma)$
versus
$H_1$: the true distribution function is not $N(\mu, \sigma)$.
The unknown parameters, $\mu$ and $\sigma$, are estimated by the sample mean and the sample standard deviation, respectively. We also consider the classical Kolmogorov-Smirnov test statistic (Lilliefors test) for comparison,
$$ KS = \sup_z \big| F_n(z) - \Phi(z) \big|, $$
where $z = (x - \bar x)/s$, and $\bar x$ and $s$ are the sample mean and the sample standard deviation, respectively.
We provide the critical values of the above test statistics for $n = 10, 20, \ldots, 100$ in Table 2, obtained from Monte Carlo simulations of size 200,000.
Then, we compare the power estimates of the above test statistics, for illustration, against the following alternatives:
  • Symmetric alternatives: Logistic, t ( 5 ) , t ( 3 ) , t ( 1 ) , Uniform, Beta(0.5,0.5), Beta(2,2);
  • Asymmetric alternatives: Beta(2,5), Beta(5,2), Exponential, Lognormal(0,0.5), Lognormal(0,1).
We also employed Monte Carlo simulation to estimate the powers against the above alternatives for $n = 20, 50, 100$, where the simulation size is 100,000. The numerical results are summarized in Table 3, Table 4 and Table 5. These show that $QKL_n$ performs better than $QKL_R$ and $QKL_L$ against the symmetric alternatives, and the powers of $QKL_n$ against the asymmetric alternatives lie between those of $QKL_R$ and $QKL_L$. They all outperform the classical Kolmogorov-Smirnov test statistic. $Z_A$ generally performs better than $QKL_n$ against the asymmetric alternatives, but the simulation results suggest that $Z_A$ is a biased test, as can be seen from the power estimate against Beta(2,2) for $n = 20$.
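The Monte Carlo design of this section can be reproduced along the following lines (a compact sketch for QKL_n only, with far fewer replications than the 200,000 and 100,000 used in the paper; the sample size, the t(5) alternative, and the replication count in the code are illustrative choices).

```python
import numpy as np
from scipy.stats import norm, t

def qkl_n_statistic(x):
    """QKL_n for the composite normality test (see the formulas above)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    xi = norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))
    i = np.arange(1, n + 1)
    u = i / n
    C1 = np.sum(u * np.log(u)) + np.sum((1 - u[:-1]) * np.log(1 - u[:-1]))
    return -np.sum((i - 0.5) * np.log(xi) + (n - i + 0.5) * np.log(1 - xi)) / n + C1

def simulate(n=50, alpha=0.05, reps=20000, seed=0):
    rng = np.random.default_rng(seed)
    # upper-tail critical value estimated from the null (normal) distribution of the statistic
    null_stats = [qkl_n_statistic(rng.normal(size=n)) for _ in range(reps)]
    crit = np.quantile(null_stats, 1 - alpha)
    # power against a t(5) alternative at the same sample size and level
    alt_stats = [qkl_n_statistic(t.rvs(df=5, size=n, random_state=rng)) for _ in range(reps)]
    power = np.mean(np.asarray(alt_stats) > crit)
    return crit, power

print(simulate())  # compare with the n = 50 entries of Table 2 and the t(5) row of Table 4
```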

6. Concluding Remarks

It is well-known that both the Fisher information and the Kullback-Leibler information can be represented in terms of the hazard function or the reverse hazard function. We considered the quantal response variable and showed that the quantal Fisher information and the quantal KL information can also be represented in terms of both the hazard function and the reverse hazard function. We also provided the criterion of maximizing the quantal Fisher information for choosing the weight function in the quantal KL information. For illustration, we considered the normal distribution, studied the choice of the weight function, and compared the performance of the estimators of the quantal KL information as goodness-of-fit tests.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2018R1D1A1B07042581).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Efron, B.; Johnstone, I. Fisher information in terms of the hazard rate. Ann. Stat. 1990, 18, 38–62.
2. Park, S.; Shin, M. Kullback-Leibler information of a censored variable and its applications. Statistics 2014, 48, 756–765.
3. Tsairidis, C.; Zografos, K.; Ferentinos, K.; Papaioannou, T. Information in quantal response data and random censoring. Ann. Inst. Stat. Math. 2001, 53, 528–542.
4. Rao, M.; Chen, Y.; Vemuri, B.C.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228.
5. Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Plan. Inference 2009, 139, 4072–4087.
6. Gertsbakh, I. On the Fisher information in the type-I censored and quantal response data. Stat. Probab. Lett. 1995, 32, 297–306.
7. Park, S. On the asymptotic Fisher information in order statistics. Metrika 2003, 57, 71–80.
8. Chen, Z. The efficiency of ranked-set sampling relative to simple random sampling under multi-parameter families. Stat. Sin. 2000, 10, 247–263.
9. Baratpour, S.; Rad, A.H. Testing goodness-of-fit for exponential distribution based on cumulative residual entropy. Commun. Stat. Theory Methods 2012, 41, 1387–1396.
10. Zhang, J. Powerful goodness-of-fit tests based on the likelihood ratio. J. R. Stat. Soc. B 2002, 64, 281–294.
11. Kullback, S. Information Theory and Statistics; Wiley: New York, NY, USA, 1959.
12. Gupta, R.D.; Gupta, R.C.; Sankaran, P.G. Some characterization results based on factorization of the (reversed) hazard rate function. Commun. Stat. Theory Methods 2004, 33, 3009–3031.
13. Pakyari, R.; Balakrishnan, N. A general purpose approximate goodness-of-fit test for progressively Type-II censored data. IEEE Trans. Reliab. 2012, 61, 238–244.
14. Noughabi, H.A.; Arghami, N.R. Goodness-of-fit tests based on correcting moments of entropy estimators. Commun. Stat. Simul. Comput. 2013, 42, 499–513.
15. Qiu, G.; Jia, K. Extropy estimators with applications in testing uniformity. J. Nonparametr. Stat. 2018, 30, 182–196.
Figure 1. Shapes of the chosen weight functions.
Table 1. Quantal FI, I_QF(θ), for some weight functions.

Distribution | Parameter      | dW_1(x) | dW_2(x) | dW_3(x) | dW_4(x)
Normal       | Location       | 0.1983  | 0.4205  | 0.4805  | 0.5513
Normal       | Scale          | 0.3497  | 0.3014  | 0.2701  | 0.1883
Normal       | Generalized FI | 0.0693  | 0.1267  | 0.1298  | 0.1013
Table 2. Critical values of test statistics.

n   | QKL_R  | QKL_L  | QKL_n (AD_n) | Z_A    | KS
10  | 0.4360 | 0.4350 | 0.4322       | 2.7395 | 0.2619
20  | 0.4131 | 0.4141 | 0.4115       | 3.9108 | 0.1920
30  | 0.4028 | 0.4025 | 0.4008       | 4.6310 | 0.1586
40  | 0.3977 | 0.3978 | 0.3960       | 5.1486 | 0.1385
50  | 0.3928 | 0.3924 | 0.3918       | 5.5000 | 0.1244
60  | 0.3901 | 0.3905 | 0.3892       | 5.8764 | 0.1138
70  | 0.3914 | 0.3914 | 0.3905       | 6.1723 | 0.1060
80  | 0.3872 | 0.3873 | 0.3867       | 6.3988 | 0.0991
90  | 0.3874 | 0.3874 | 0.3867       | 6.6307 | 0.0935
100 | 0.3858 | 0.3851 | 0.3851       | 6.8319 | 0.0888
Table 3. Power estimate (%) of 0.05 tests against 10 alternatives of the normal distribution based on 100,000 simulations; n = 20.

Alternatives      | QKL_R | QKL_L | QKL_n (AD_n) | Z_A   | KS
N(0,1)            | 5.01  | 5.01  | 5.00         | 5.08  | 4.96
Logistic(0,1)     | 10.54 | 10.35 | 10.48        | 12.34 | 8.55
t(5)              | 16.98 | 16.86 | 17.04        | 19.69 | 13.15
t(3)              | 32.15 | 31.96 | 32.35        | 34.56 | 26.03
t(1)              | 88.06 | 88.04 | 88.23        | 86.49 | 84.63
Uniform           | 16.57 | 16.40 | 16.78        | 13.57 | 9.71
Beta(0.5,0.5)     | 60.66 | 60.52 | 61.10        | 66.55 | 31.82
Beta(1,1)         | 16.73 | 16.55 | 16.91        | 13.53 | 9.89
Beta(2,2)         | 5.52  | 5.41  | 5.52         | 3.26  | 5.08
Beta(2,5)         | 11.53 | 17.47 | 14.64        | 17.26 | 11.54
Beta(5,2)         | 17.92 | 11.48 | 14.77        | 17.62 | 11.51
Exponential(1)    | 72.62 | 81.23 | 77.59        | 86.72 | 58.54
Log normal(0,0.5) | 40.64 | 51.40 | 46.64        | 53.98 | 34.29
Log normal(0,1)   | 88.03 | 92.20 | 90.48        | 94.32 | 79.20
Table 4. Power estimate (%) of 0.05 tests against 10 alternatives of the normal distribution based on 100,000 simulations; n = 50.

Alternatives      | QKL_R | QKL_L | QKL_n (AD_n) | Z_A    | KS
N(0,1)            | 5.06  | 5.12  | 5.05         | 5.16   | 5.05
Logistic(0,1)     | 16.13 | 16.21 | 16.20        | 18.49  | 11.45
t(5)              | 30.25 | 30.31 | 30.41        | 33.63  | 21.10
t(3)              | 60.86 | 60.85 | 60.99        | 61.60  | 48.57
t(1)              | 99.72 | 99.72 | 99.73        | 99.48  | 99.33
Uniform           | 57.43 | 57.61 | 57.73        | 80.08  | 25.92
Beta(0.5,0.5)     | 99.06 | 99.08 | 99.08        | 99.97  | 80.21
Beta(1,1)         | 57.54 | 57.59 | 57.80        | 80.04  | 26.07
Beta(2,2)         | 13.18 | 13.30 | 13.33        | 14.75  | 8.21
Beta(2,5)         | 35.09 | 43.79 | 39.67        | 59.41  | 25.65
Beta(5,2)         | 43.56 | 35.09 | 39.47        | 59.03  | 25.57
Exponential(1)    | 99.50 | 99.76 | 99.65        | 99.99  | 96.05
Log normal(0,0.5) | 84.70 | 89.52 | 87.40        | 94.16  | 71.05
Log normal(0,1)   | 99.94 | 99.97 | 99.96        | 100.00 | 99.52
Table 5. Power estimate (%) of 0.05 tests against 10 alternatives of the normal distribution based on 100,000 simulations; n = 100.

Alternatives      | QKL_R  | QKL_L  | QKL_n (AD_n) | Z_A    | KS
N(0,1)            | 5.06   | 5.08   | 5.04         | 5.05   | 5.05
Logistic(0,1)     | 24.15  | 24.16  | 24.19        | 24.99  | 15.57
t(5)              | 48.18  | 48.23  | 48.26        | 50.34  | 33.23
t(3)              | 84.94  | 84.97  | 84.97        | 83.52  | 73.09
t(1)              | 100.00 | 100.00 | 100.00       | 100.00 | 100.00
Uniform           | 95.02  | 95.07  | 95.07        | 99.93  | 59.20
Beta(0.5,0.5)     | 100.00 | 100.00 | 100.00       | 100.00 | 99.47
Beta(1,1)         | 94.82  | 94.89  | 94.88        | 99.95  | 58.92
Beta(2,2)         | 31.96  | 32.10  | 32.10        | 54.81  | 15.39
Beta(2,5)         | 72.88  | 78.80  | 76.00        | 96.22  | 50.56
Beta(5,2)         | 78.73  | 72.98  | 76.04        | 96.11  | 50.70
Exponential(1)    | 100.00 | 100.00 | 100.00       | 100.00 | 100.00
Log normal(0,0.5) | 99.28  | 99.59  | 99.47        | 99.94  | 95.07
Log normal(0,1)   | 100.00 | 100.00 | 100.00       | 100.00 | 100.00