Article

Parameter Estimation of Linear Stochastic Differential Equations with Sparse Observations

1 School of Mathematics, Jilin University, Changchun 130012, China
2 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2022, 14(12), 2500; https://doi.org/10.3390/sym14122500
Submission received: 29 October 2022 / Revised: 13 November 2022 / Accepted: 21 November 2022 / Published: 25 November 2022

Abstract: We consider parameter estimation for linear stochastic differential equations with independent experiments observed at infrequent and irregularly spaced follow-up times. The maximum likelihood method is used to obtain an asymptotically consistent estimator. A kernel-weighted score function is proposed for the parameter in the drift term. The strong consistency and the rate of convergence of the estimator are obtained. Numerical results show that the proposed estimator performs well with moderate sample sizes.

1. Introduction

Linear stochastic differential equations (LSDEs) are frequently used to model the dynamic behavior of complex systems. In many real-world applications, the parameters that define the system must be estimated from data. For example, geometric Brownian motion (GBM) is one of the most popular stochastic processes and an effective instrument for modeling and predicting random changes in stock prices [1,2]. Pharmacokinetic and pharmacodynamic models contain both deterministic and stochastic components: although drug concentrations follow predictable trends, it is not always possible to establish the precise concentration at any particular time [3].
In biometrics, a GBM model and an estimation procedure were developed for predicting the height growth of even-aged forest stands as part of a methodology for modeling growth in forest plantations [4].
Owing to their growing use in a variety of domains, parameter estimation problems for stochastic differential equations have received considerable attention. Several methods have been proposed to estimate the parameters that characterize the system, such as the least squares method [5,6,7,8,9], the maximum likelihood method [10,11,12,13,14], and the numerical approximation approach [15]. Several other methods, such as generalized method of moments procedures [16,17], the local linearization method [11,18], and MCMC methods [19], have also been proposed.
Assume that n identically and independently distributed paths are observed. This situation arises, for example, in pharmacokinetics, where a number of patients can be monitored: for each patient, a bolus of the medication is given, and the "path" of its diffusion through the body can be observed [3]. Such observations are typically sparse, recorded only at infrequent and irregularly spaced follow-up times, and the above methods are no longer applicable. For this case, we develop a computationally efficient method that handles observations at infrequent and irregularly spaced follow-up times. In this paper, we apply kernel methods to the parameter estimation of LSDEs. At the heart of the proposed approach is to "smooth" each individual's contribution to the likelihood based on the distance of their observation times to the time of interest. Smoothing happens at the individual level, as opposed to the population level, where all individuals would be given the same weight. With a suitable choice of bandwidth, the consistency and asymptotic normality of the proposed estimator can be obtained. One can refer to [20,21] for statistical inference of diffusion processes.
In future research, we may consider parameter estimation for a more general drift term such as f(X_t, θ), as well as nonparametric estimation of the drift function f(X_t), where f(·) is a measurable function, based on independent and identically distributed observations.
Our paper is organized as follows. In Section 2, we propose an estimator for the drift parameter of the LSDE; we also obtain its consistency and show its asymptotic normality, where the convergence rate is $(nh_n)^{1/2}$, slower than $n^{1/2}$. In Section 3, numerical simulations are performed; the simulation findings show that the large-sample approximations are suitable for use in practice. Section 4 concludes.

2. Models and Methods

2.1. Description of Models

Consider an LSDE model as follows
$$dS(t) = \mu S(t)\,dt + \sigma S(t)\,dW(t), \qquad S(0) = s_0, \qquad (1)$$
where μ is an unknown parameter, σ is a constant, and W(t) is a standard Brownian motion, characterized by the following properties: (1) W(0) = 0; (2) W(t) has independent increments, that is, for every t > 0 the future increments W(t+u) − W(t), u ≥ 0, are independent of the past values W(s), s ≤ t, and W(t+u) − W(t) ∼ N(0, u); (3) W(t) is continuous in t. Under Lipschitz and linear growth conditions, the LSDE (1) has a unique strong solution S(t),
$$S(t) = s_0 \exp\left\{\left(\mu - \tfrac{1}{2}\sigma^2\right)t + \sigma W(t)\right\}. \qquad (2)$$
Let $\{S_i(t), s_{0i}, \mu_i, \sigma_i;\ i = 1, \ldots, n\}$ be n independent copies of $\{S(t), s_0, \mu, \sigma\}$. As is well known (see, e.g., [22]), for given t, the $S_i(t)$, $i = 1, \ldots, n$, follow a lognormal distribution:
$$\log S_i(t) \sim N\!\left(\log s_0 + \left(\mu - \tfrac{1}{2}\sigma^2\right)t,\ \sigma^2 t\right). \qquad (3)$$
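For intuition, the following Python sketch (our own illustration, not from the paper; NumPy and all parameter values are our choices) simulates independent copies of S(t) through the explicit solution (2) and checks the empirical mean and variance of log S(t) against Equation (3).

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative values only (the paper's simulation in Section 3 uses
# s0 = 1, mu = 2, sigma = 0.02).
s0, mu, sigma, t, n = 1.0, 2.0, 0.02, 0.5, 100_000

# Solution (2): S(t) = s0 * exp((mu - sigma^2/2) * t + sigma * W(t)),
# with W(t) ~ N(0, t) for a standard Brownian motion W.
w_t = rng.normal(0.0, np.sqrt(t), size=n)
log_s = np.log(s0) + (mu - 0.5 * sigma**2) * t + sigma * w_t

# Equation (3): log S(t) ~ N(log s0 + (mu - sigma^2/2) * t, sigma^2 * t).
print(log_s.mean(), np.log(s0) + (mu - 0.5 * sigma**2) * t)  # both ~ 0.9999
print(log_s.var(), sigma**2 * t)                             # both ~ 2e-4
```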
We aim to use the observations to estimate μ, where the observations consist of $y = \{y_i(t_{ik});\ i = 1, \ldots, n;\ k = 1, \ldots, d_i\}$ and $d_i < \infty$. The density of the ith subject's observation at time point $t_{ik}$ is
$$p(y_i(t_{ik}); \mu) = \frac{1}{\sqrt{2\pi\sigma^2 t_{ik}}}\,\exp\left(-\frac{\left(\log y_i(t_{ik}) - \phi(t_{ik})\right)^2}{2\sigma^2 t_{ik}}\right), \qquad (4)$$
where $\phi(t) = \log s_0 + \left(\mu - \tfrac{1}{2}\sigma^2\right)t$.
If the observations are continuous, consider a time point $t^*$. The $y_i^* = y_i(t^*)$ are independent and identically distributed random variables on the same probability space. Then we get the likelihood function
$$L_n(y^*;\mu) = \prod_{i=1}^{n} p(y_i^*;\mu) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma_i^2 t^*}}\,\exp\left(-\frac{\left(\log y_i^* - \phi(t^*)\right)^2}{2\sigma_i^2 t^*}\right),$$
where $y^* = (y_1^*, y_2^*, \ldots, y_n^*)$. The log-likelihood function is
$$l_n(y^*;\mu) = -\frac{1}{n}\sum_{i=1}^{n}\left[\frac{\left(\log y_i^* - \phi(t^*)\right)^2}{2\Sigma_i(t^*)} + \frac{1}{2}\log 2\pi + \frac{1}{2}\log \Sigma_i(t^*)\right],$$
where $\Sigma_i(t^*) = \sigma_i^2 t^*$, and the score function is
$$B_n(y^*;\mu) = \frac{\partial l_n(y^*;\mu)}{\partial \mu}.$$

2.2. Kernel Estimation with Forward and Lagged Observations

The data $y_i(t)$, $i = 1, \ldots, n$, are usually not observed continuously, and it is almost impossible for each individual to be observed exactly at $t^*$. Hence $l_n(y^*;\mu)$ is not computable from the observations. We propose a method that formalizes the forwarding and lagging strategy, with kernel weighting enabling the use of all available forward and lagged observations. We "smooth" each observation's contribution to the likelihood based on the distance of its observation time to the time of interest. If data continue to be collected on subjects after observation has occurred, as in the case of recurrent events, we use the kernel to impute missing values from both forward and backward-lagged observations. We construct a smoothed log-likelihood function by kernel estimation:
$$l_n(y^*;\mu) = -\sum_{i=1}^{n}\int K_{h_n}(s - t^*)\left[\frac{\left(\log y_i(s) - \phi_i(t^*)\right)^2}{2\Sigma_i^2(t^*)} + \log 2\pi + \log \Sigma_i^2(t^*)\right] dN_i(s), \qquad (5)$$
where $\Sigma_i^2(t^*) < \infty$ is the variance of $\log y_i^*$, $K_{h_n}(t) = K(t/h_n)/h_n$, $h_n$ is the bandwidth, and the kernel function $K(t)$ is a symmetric probability density with support $[-1, 1]$, mean 0, and a bounded first derivative. In addition, $E[dN_i(t)] = \lambda(t)\,dt$, where $\lambda(t)$ is twice continuously differentiable and strictly positive for $t \in [0, T]$. The score function is given by
$$U_n(\mu) = \frac{1}{n}\sum_{i=1}^{n}\int K_{h_n}(t^* - t)\,\frac{\log y_i(t) - \left(\log s_0 + \left(\mu - \tfrac{1}{2}\sigma_i^2\right)t^*\right)}{\Sigma_i^2(t^*)}\,dN_i(t). \qquad (6)$$
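To make the construction concrete, here is a minimal sketch of evaluating the smoothed score (6) on sparse data. It is our own illustration, not the authors' code: the data layout and function names are assumptions, we take a common $\sigma_i = \sigma$, and we use the Epanechnikov kernel adopted in Section 3.

```python
import numpy as np

def epanechnikov(u):
    # K(u) = (3/4)(1 - u^2)_+ : symmetric density on [-1, 1] with mean 0.
    return 0.75 * np.clip(1.0 - u**2, 0.0, None)

def smoothed_score(mu, times, log_y, t_star, h, s0=1.0, sigma=0.02):
    # times[i] holds subject i's observation times t_ik, and log_y[i] the
    # matching log y_i(t_ik); the integral against dN_i in Equation (6)
    # reduces to a sum over these observed points.
    var_star = sigma**2 * t_star                   # Sigma_i^2(t*)
    drift = np.log(s0) + (mu - 0.5 * sigma**2) * t_star
    total = 0.0
    for t_i, ly_i in zip(times, log_y):
        w = epanechnikov((t_star - t_i) / h) / h   # K_{h_n}(t* - t)
        total += np.sum(w * (ly_i - drift)) / var_star
    return total / len(times)                      # the 1/n factor

# The estimator solves smoothed_score(mu, ...) = 0 in mu.
```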
Assume that the following conditions hold:
(A.1) $\Theta_{\mu_0} = \{\mu : |\mu - \mu_0| < \rho\}$ for some $\rho > 0$ is an open subset of $\mathbb{R}$, where $\mu_0$ is the true parameter.
(A.2) $\lambda(t)$ is twice continuously differentiable.
(A.3) $K(z)$ is a symmetric density function satisfying $\int K(z)^2\,dz < \infty$. In addition, $h_n \to 0$, $nh_n \to \infty$, and $nh_n^5 \to 0$.
Condition (A.1) is a usual assumption for the proof of consistency, and condition (A.2) ensures that the score function admits a Taylor expansion to the second order. Our method depends on a proper choice of bandwidth, which is specified in condition (A.3); for example, $h_n = n^{-0.3}$ satisfies (A.3), since $nh_n = n^{0.7} \to \infty$ and $nh_n^5 = n^{-0.5} \to 0$. The estimator $\hat{\mu}_n$ is obtained by solving the score equation $U_n(\mu) = 0$ in (6), with the kernel bandwidth selected to obtain consistency.
Lemma 1.
Under conditions (A.1)–(A.3), we have
$$E\left[(nh_n)^{1/2}\,U_n(\mu_0)\right] \to 0$$
as $n \to \infty$.
Proof of Lemma 1.
From the smoothed likelihood function (5), we have the smoothed scoring function
$$(nh_n)^{1/2}\,U_n(\mu_0) = (nh_n)^{1/2}\,\frac{1}{n}\sum_{i=1}^{n}\int K_{h_n}(t^* - t)\,\frac{\log y_i(t) - \left(\log s_0 + \left(\mu - \tfrac{1}{2}\sigma_i^2\right)t^*\right)}{\Sigma_i^2(t^*)}\,dN_i(t) = (nh_n)^{1/2}\,\frac{1}{n}\sum_{i=1}^{n}\int \frac{1}{h_n}\,K\!\left(\frac{t^* - t}{h_n}\right)\frac{\log y_i(t) - \left(\log s_0 + \left(\mu - \tfrac{1}{2}\sigma_i^2\right)t^*\right)}{\Sigma_i^2(t^*)}\,dN_i(t) =: I.$$
Let $z = (t^* - t)/h_n$. We have
$$I = h_n^{1/2}\,n^{-1/2}\sum_{i=1}^{n}\int K(z)\,\frac{\log y_i(t^* - h_n z) - \left(\log s_0 + \left(\mu - \tfrac{1}{2}\sigma_i^2\right)t^*\right)}{\Sigma_i^2(t^*)}\,dN_i(t^* - h_n z).$$
Define $F(r, s) = E\left[\log y(r - s) - \left(\log s_0 + \left(\mu - \sigma^2/2\right)r\right)\right]/\Sigma^2(t^*)$. Obviously, $F(t^*, 0) = 0$. Taking expectations and applying a Taylor expansion, together with $\int K(z)\,dz = 1$ and $\int zK(z)\,dz = 0$, we have
$$E[I] = h_n^{1/2}\,n^{1/2}\int K(z)\,F(t^*, h_n z)\,\lambda(t^* - h_n z)\,dz = h_n^{1/2}\,n^{1/2}\left[-\frac{\partial F}{\partial s}\bigg|_{s=0}\frac{\partial \lambda}{\partial t}\bigg|_{t=t^*} + \frac{1}{2}\,\frac{\partial^2 F}{\partial s^2}\bigg|_{s=0}\lambda(t^*)\right] h_n^2 \int z^2 K(z)\,dz + o\!\left(n^{1/2} h_n^{5/2}\right) = O\!\left(n^{1/2}\,h_n^{5/2}\right).$$
From condition (A.3), $nh_n^5 \to 0$, and hence $E[I] = o(1)$. □
The following theorem shows the consistency of the proposed estimator $\hat{\mu}_n$ obtained by solving the score equation (6).
Theorem 1.
Under conditions (A.1)–(A.3), $\hat{\mu}_n$ is consistent as $n \to \infty$.
Proof of Theorem 1.
Solving $U_n(\mu) = 0$ in Equation (6), we have
$$\hat{\mu}_n\,t^* = \frac{1}{nh_n}\sum_{i=1}^{n}\frac{h_n\int K_{h_n}(t - t^*)\,\log y_i(t)\,dN_i(t)}{\int K_{h_n}(t - t^*)\,dN_i(t)} - \log s_0 + \frac{1}{2}\sigma^2 t^*.$$
By properties of the kernel function $K(\cdot)$, we have
$$\left|\hat{\mu}_n\,t^* - \mu\,t^*\right| = \left|\frac{1}{nh_n}\sum_{i=1}^{n}\frac{h_n\int K_{h_n}(t - t^*)\left(\log y_i(t) - \log s_0 - \left(\mu - \tfrac{\sigma^2}{2}\right)t^*\right)dN_i(t)}{\int K_{h_n}(t - t^*)\,dN_i(t)}\right| \le \left|\frac{1}{nh_n}\sum_{i=1}^{n} h_n\left(\log y_i(b_i) - \log s_0 - \left(\mu - \tfrac{\sigma^2}{2}\right)t^*\right)\right|,$$
where $b_i \in [t^* - h_n, t^* + h_n]$ is some point. By Equation (3), we have
$$\log y(t) \sim N\!\left(\log s_0 + \left(\mu - \tfrac{\sigma^2}{2}\right)t,\ \sigma^2 t\right).$$
Hence, we have
$$\log y_i(b_i) - \log s_0 - \left(\mu - \tfrac{\sigma^2}{2}\right)t^* \sim N\!\left(a,\ \sigma^2 b_i\right),$$
where $a = O(h_n)$ is a constant. By the Khinchin law of large numbers, we have
$$\left|\hat{\mu}_n\,t^* - \mu\,t^*\right| = O(h_n),$$
which goes to zero as $n \to \infty$. □
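The closed-form solution above translates directly into code. The sketch below is our own: it reuses the Epanechnikov kernel from the earlier sketch and simply skips subjects with no observation inside the kernel window, a boundary choice the paper does not spell out.

```python
import numpy as np

def mu_hat(times, log_y, t_star, h, s0=1.0, sigma=0.02):
    # Per-subject kernel-weighted average of log y_i(t) around t*, then
    # mu_hat * t* = mean_i(average) - log s0 + (1/2) sigma^2 t*.
    averages = []
    for t_i, ly_i in zip(times, log_y):
        w = 0.75 * np.clip(1.0 - ((t_i - t_star) / h) ** 2, 0.0, None)
        if w.sum() > 0.0:                  # subject observed near t* (our rule)
            averages.append(np.sum(w * ly_i) / w.sum())
    lhs = np.mean(averages) - np.log(s0) + 0.5 * sigma**2 * t_star
    return lhs / t_star
```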
The following theorem shows the asymptotic normality of μ ^ n .
Theorem 2.
Assume conditions (A.1)–(A.3) hold. Then $\hat{\mu}_n$ is consistent, and the asymptotic distribution of $\hat{\mu}_n$ satisfies
$$(nh_n)^{1/2}\left(\hat{\mu}_n - \mu_0\right) \to N\!\left(0,\ C_n(\mu_0)^2\,\Gamma(\mu_0)\right)$$
as $n \to \infty$, where
$$C_n(\mu_0) = -\left[\int_0^1 \frac{\partial U_n}{\partial \mu}\left(\mu_0 + \lambda(\hat{\mu}_n - \mu_0)\right)d\lambda\right]^{-1}$$
and
$$\Gamma(\mu_0) = \int K(z)^2\,\frac{1}{\sigma^2 t^*}\,\lambda(t^* - h_n z)\,dz.$$
Proof of Theorem 2.
Let $\{\mu_n\}$ be a strongly consistent sequence for $\mu_0$, i.e., $\mu_n \to \mu_0$ a.s., with $\{\mu_n\} \subset \Theta_{\mu_0}$. We can seek a solution $\hat{\mu}_n$ that maximizes the smoothed log-likelihood $l_n(y^*;\mu)$, and $\hat{\mu}_n$ is a strongly consistent sequence. Note that
$$U_n(\mu) = \frac{1}{n}\sum_{i=1}^{n}\int K_{h_n}(t^* - t)\,\frac{\log y_i(t) - \left(\log s_0 + \left(\mu - \tfrac{1}{2}\sigma_i^2\right)t^*\right)}{\Sigma_i^2(t^*)}\,dN_i(t),$$
and we denote $U_n(\mu) = \frac{1}{n}\sum_{i=1}^{n}\psi(y_i^*, \mu)$. Expand $U_n$ as
$$U_n(\mu) = U_n(\mu_0) + \int_{\mu_0}^{\mu}\frac{\partial U_n(u)}{\partial \mu}\,du = U_n(\mu_0) + \int_0^1 \frac{\partial U_n}{\partial \mu}\left(\mu_0 + \lambda(\mu - \mu_0)\right)d\lambda\,(\mu - \mu_0).$$
Setting $\mu = \hat{\mu}_n$ and using $U_n(\hat{\mu}_n) = 0$, we obtain
$$\hat{\mu}_n - \mu_0 = -\left[\int_0^1 \frac{\partial U_n}{\partial \mu}\left(\mu_0 + \lambda(\hat{\mu}_n - \mu_0)\right)d\lambda\right]^{-1} U_n(\mu_0). \qquad (7)$$
Multiplying both sides of Equation (7) by $(nh_n)^{1/2}$ and recalling the definition of $C_n(\mu_0)$, we obtain
$$(nh_n)^{1/2}\left(\hat{\mu}_n - \mu_0\right) = C_n(\mu_0)\,(nh_n)^{1/2}\,U_n(\mu_0).$$
Hence we compute the variance of $(nh_n)^{1/2}\,U_n(\mu_0)$. Note that
$$(nh_n)^{1/2}\,U_n(\mu_0) = \frac{(nh_n)^{1/2}}{n}\sum_{i=1}^{n}\psi(y_i^*, \mu_0) = n^{-1/2}\sum_{i=1}^{n} h_n^{1/2}\,\psi(y_i^*, \mu_0).$$
From Lemma 1, $E[h_n^{1/2}\,\psi(y_i^*, \mu_0)]$ is asymptotically negligible for $i = 1, \ldots, n$. Since the $\psi(y_i^*, \mu_0)$ are independent and identically distributed, $\operatorname{var}\!\left((nh_n)^{1/2}\,U_n(\mu_0)\right) = \operatorname{var}\!\left(h_n^{1/2}\,\psi(y_i^*, \mu_0)\right)$. Denote $\phi_i(t) = \log y_i(t) - \left(\log s_0 + \left(\mu_0 - \tfrac{1}{2}\sigma^2\right)t^*\right)$. Then
$$\operatorname{var}\!\left(h_n^{1/2}\,\psi(y_i^*, \mu_0)\right) = E\!\left[\left(h_n^{1/2}\,\psi(y_i^*, \mu_0)\right)^2\right] = h_n\,E\!\left[\sum_{k=1}^{d_i} K_{h_n}(t^* - t_{ik})\,\frac{\log y_i(t_{ik}) - \left(\log s_0 + \left(\mu_0 - \tfrac{1}{2}\sigma_i^2\right)t^*\right)}{\Sigma_i^2(t^*)}\right]^2 = \frac{h_n}{\Sigma_i^4(t^*)}\left[\iint_{t_1 \ne t_2} K_{h_n}(t^* - t_1)\,K_{h_n}(t^* - t_2)\,E[\phi_i(t_1)\,\phi_i(t_2)]\,E[dN(t_1)\,dN(t_2)] + \int_{t_1 = t_2} K_{h_n}(t^* - t_1)^2\,\lambda(t_1)\,E[\phi_i(t_1)^2]\,dt_1\right].$$
Assume that for $t_1 \ne t_2$, $P\!\left(dN(t_1) = 1 \mid N(t_2) - N(t_2^-) = 1\right) = g(t_1, t_2)\,dt_1$, where $g(t_1, t_2)$ is continuous for $t_1 \ne t_2$ and $g(t_1\pm, t_2\pm)$ exists. Then
$$E\!\left[\left(h_n^{1/2}\,\psi(y_i^*, \mu_0)\right)^2\right] = h_n\left[\frac{1}{\Sigma_i^4(t^*)}\iint_{t_1 \ne t_2} K_{h_n}(t^* - t_1)\,K_{h_n}(t^* - t_2)\,E[\phi_i(t_1)\,\phi_i(t_2)]\,g(t_1, t_2)\,E[dN(t_2)]\,dt_1 + \frac{1}{\Sigma_i^4(t^*)}\int_{t_1 = t_2} K_{h_n}(t^* - t_1)^2\,\lambda(t_1)\,E[\phi_i(t_1)^2]\,dt_1\right] = h_n(I_1 + I_2).$$
Using a change of variables, we have h n I 1 = O ( h n ) .
Noting that $E\!\left[\left(\phi_i(t^* - h_n z)\right)^2\right] = \sigma^2 t^*$, we have
$$h_n I_2 = h_n\,\frac{1}{\Sigma_i^4(t^*)\,h_n^2}\int K(z)^2\,E\!\left[\left(\phi_i(t^* - h_n z)\right)^2\right]\lambda(t^* - h_n z)\,h_n\,dz = \int K(z)^2\,\frac{1}{\sigma^2 t^*}\,\lambda(t^* - h_n z)\,dz.$$
From the Lyapunov central limit theorem, $(nh_n)^{1/2}\,U_n(\mu_0)$ converges in distribution to a Gaussian random variable $Z \sim N(0, \Gamma(\mu_0))$. Hence, we have
$$(nh_n)^{1/2}\left(\hat{\mu}_n - \mu_0\right) = C_n(\mu_0)\,(nh_n)^{1/2}\,U_n(\mu_0) \to N\!\left(0,\ C_n(\mu_0)^2\,\Gamma(\mu_0)\right). \qquad \square$$
Remark 1.
When there are several observations ($d_i > 1$), one can estimate the drift parameter μ by a standard maximum likelihood method. Let
$$z_{ik} = \log\frac{y_i(t_{i,k+1})}{y_i(t_{ik})},$$
for $k = 1, \ldots, d_i - 1$ and $i = 1, 2, \ldots, n$. Thus, conditional on the observation times,
$$z_{ik} \sim N\!\left(\left(\mu - \sigma^2/2\right)\left(t_{i,k+1} - t_{ik}\right),\ \sigma^2\left(t_{i,k+1} - t_{ik}\right)\right),$$
and they are independent. If we reparameterize μ as $\nu = \mu - \sigma^2/2$, a simple unbiased estimator of ν is given by
$$\hat{\nu} = \frac{\sum_{i=1}^{n}\sum_{j=1}^{d_i - 1} z_j^{(i)}/\Delta_j^{(i)}}{\sum_{i=1}^{n}(d_i - 1)}, \qquad (8)$$
where $\Delta_k^{(i)} = t_{i,k+1} - t_{ik}$. Then μ is estimated by $\hat{\mu} = \hat{\nu} + \sigma^2/2$.
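A minimal sketch of the increment-based estimator of Remark 1 (our own illustration; it implements Equation (8) as stated and skips subjects with a single observation, anticipating Remark 2):

```python
import numpy as np

def mu_hat_increments(times, log_y, sigma=0.02):
    # Estimator (8): average the ratios z_jk / Delta_jk over all consecutive
    # observation pairs of all subjects, then mu_hat = nu_hat + sigma^2 / 2.
    total, count = 0.0, 0
    for t_i, ly_i in zip(times, log_y):
        if len(t_i) < 2:
            continue                 # d_i = 1: no increments available
        z = np.diff(ly_i)            # z_ik = log(y_i(t_{i,k+1}) / y_i(t_ik))
        dt = np.diff(t_i)            # Delta_k^{(i)} = t_{i,k+1} - t_ik
        total += np.sum(z / dt)
        count += len(z)
    return total / count + 0.5 * sigma**2
```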
Remark 2.
When there is only one observation per subject ($d_i = 1$), the estimator proposed in Equation (8) is not applicable, since no increments are available. Our estimator performs reasonably well even in this extreme case, and we can give an explicit asymptotic variance for it, namely $\sigma^2 t^*$.

3. Simulation

In this section, we examine the kernel estimator utilizing both forward and backward-lagged observations. We generate 1000 datasets, each consisting of n = 100, 400, or 900 subjects, with different bandwidths (BD). The process is generated through model (1); we set the initial condition $s_0 = 1$, $\mu = 2$, and $\sigma = 0.02$. Then the solution is
$$S(t) = s_0 \exp\{1.9998\,t + 0.02\,w(t)\},$$
where $w(t)$ is a standard Brownian motion. The number of observation times for each subject is Poisson distributed with mean 5. The time points of each individual's observations are generated from a uniform distribution, Unif(0, 1). Results from other parameter choices are essentially identical and are omitted. All simulations were performed on a laptop running R 4.2.9 with 8 GB of RAM.
Based on Theorems 1 and 2, we obtain a kernel estimator with asymptotically negligible bias and employ bandwidths in the range $(n^{-1}, n^{-1/5})$ when calculating $\hat{\mu}_n$ using the smoothed score function (6). The kernel function we choose is the Epanechnikov kernel, $K(x) = \frac{3}{4}(1 - x^2)_+$. According to additional simulations (not reported), using other kernel functions has little effect on the estimator's empirical performance.
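The design above is easy to replicate end to end. The following sketch is our own reconstruction under assumptions the paper leaves implicit: we force at least one observation per subject, and we evaluate at t* = 0.5, an arbitrary choice of ours; mu_hat is the kernel estimator sketched after Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(1)
s0, mu0, sigma, n = 1.0, 2.0, 0.02, 400
t_star, h = 0.5, n ** (-0.3)           # bandwidth BD = n^{-0.3} from Table 1

times, log_y = [], []
for _ in range(n):
    d_i = max(rng.poisson(5), 1)       # observation count, Poisson with mean 5
    t_i = np.sort(rng.uniform(0.0, 1.0, size=d_i))
    # Exact GBM at the observed times: Brownian motion built from
    # independent Gaussian increments over consecutive intervals.
    w_i = np.cumsum(rng.normal(0.0, np.sqrt(np.diff(t_i, prepend=0.0))))
    times.append(t_i)
    log_y.append(np.log(s0) + (mu0 - 0.5 * sigma**2) * t_i + sigma * w_i)

print(mu_hat(times, log_y, t_star, h))  # close to mu0 = 2, cf. Table 1
```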
The simulation results show that the estimates of the model parameter are accurate. Table 1 summarizes the main findings from the 1000 simulations. We note that the bias is minor and diminishes as the sample size grows; performance improves with larger sample sizes and smaller bandwidths. The overall parameter estimates are evaluated by the bias and relative bias (RB), which are defined as
$$\mathrm{Bias}(\hat{\mu}) = \hat{\mu} - \mu_0, \qquad \mathrm{RB}(\hat{\mu}) = \frac{|\mathrm{Bias}(\hat{\mu})|}{|\mu_0|},$$
where $\mu_0$ denotes the true parameter.

4. Conclusions

In this paper, we have presented kernel-weighting methods for estimating the LSDE model (1) from repeatable experiments in which the observation times are random and the number of observations per individual is uncertain or even sparse. This is a real improvement, because the past literature usually supposed that observation intervals are equally spaced and could not deal with sparse observations. We consider maximum likelihood estimation of the drift parameter and, under mild assumptions, establish the asymptotic normality of the proposed estimator. In the numerical studies, we set the true parameters $\mu_0 = 2$ and $\sigma_0 = 0.02$ and the initial condition $s_0 = 1$ for each individual with sparse observations (the number of observations follows a Poisson distribution with mean 5). Using the smoothed score function, we obtain the estimate of the drift parameter.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the NSFC (grant 11871244) and the Fundamental Research Funds for the Central Universities, JLU.

Data Availability Statement

No data were used in this paper.

Conflicts of Interest

There are no competing interests to declare that arose during the preparation or publication of this article.

References

1. Black, F.; Scholes, M. The pricing of options and corporate liabilities. J. Political Econ. 1973, 81, 637–654.
2. Merton, R.C. Theory of rational option pricing. Bell J. Econ. Manag. Sci. 1973, 4, 141–183.
3. Donnet, S.; Samson, A. A review on estimation of stochastic differential equations for pharmacokinetic/pharmacodynamic models. Adv. Drug Deliv. Rev. 2013, 65, 929–939.
4. Garcia, O. A stochastic differential equation model for the height growth of forest stands. Biometrics 1983, 39, 1059–1072.
5. Hu, Y.; Long, H. Least squares estimator for Ornstein–Uhlenbeck processes driven by α-stable motions. Stoch. Process. Their Appl. 2009, 119, 2465–2480.
6. Hu, Y.; Nualart, D.; Zhou, H. Drift parameter estimation for nonlinear stochastic differential equations driven by fractional Brownian motion. Stochastics 2019, 91, 1067–1091.
7. Long, H.; Ma, C.; Shimizu, Y. Least squares estimators for stochastic differential equations driven by small Lévy noises. Stoch. Process. Their Appl. 2017, 127, 1475–1495.
8. Neuenkirch, A.; Tindel, S. A least square-type procedure for parameter estimation in stochastic differential equations with additive fractional noise. Stat. Inference Stoch. Process. 2014, 17, 99–120.
9. Gallant, A.R.; Long, J.R. Estimating stochastic differential equations efficiently by minimum chi-squared. Biometrika 1997, 84, 125–141.
10. Elerian, O.; Chib, S.; Shephard, N. Likelihood inference for discretely observed nonlinear diffusions. Econometrica 2001, 69, 959–993.
11. Shoji, I.; Ozaki, T. A statistical method of estimation and simulation for systems of stochastic differential equations. Biometrika 1998, 85, 240–243.
12. Shimizu, Y. M-estimation for discretely observed ergodic diffusion processes with infinitely many jumps. Stat. Inference Stoch. Process. 2006, 9, 179–225.
13. Shimizu, Y.; Yoshida, N. Estimation of parameters for diffusion processes with jumps from discrete observations. Stat. Inference Stoch. Process. 2006, 9, 227–277.
14. Aït-Sahalia, Y. Maximum likelihood estimation of discretely sampled diffusions: A closed-form approximation approach. Econometrica 2002, 70, 223–262.
15. Milshtein, G.N. A method of second-order accuracy integration of stochastic differential equations. Theory Probab. Its Appl. 1979, 23, 396–401.
16. Andersen, T.G.; Sørensen, B.E. GMM estimation of a stochastic volatility model: A Monte Carlo study. J. Bus. Econ. Stat. 1996, 14, 328–352.
17. Hu, Y.; Xi, Y. Estimation of all parameters in the reflected Ornstein–Uhlenbeck process from discrete observations. Stat. Probab. Lett. 2021, 174, 109099.
18. Shoji, I. Approximation of continuous time stochastic processes by a local linearization method. Math. Comput. 1998, 67, 287–298.
19. Martin, J.; Wilcox, L.C.; Burstedde, C.; Ghattas, O. A stochastic Newton MCMC method for large-scale statistical inverse problems with application to seismic inversion. SIAM J. Sci. Comput. 2012, 34, 1460–1487.
20. Brown, B.M.; Hewitt, J.I. Asymptotic likelihood theory for diffusion processes. J. Appl. Probab. 1975, 12, 228–238.
21. Bladt, M.; Sørensen, M. Statistical inference for discretely observed Markov jump processes. J. R. Stat. Soc. Ser. B 2005, 67, 395–410.
22. Øksendal, B. Stochastic Differential Equations: An Introduction with Applications; Springer: Berlin/Heidelberg, Germany, 2006.
Table 1. Simulation results with different n and bandwidths.

n     BD          μ̂       Bias(μ̂)   RB(μ̂)
100   n^{−0.25}   2.095    0.095     0.048
100   n^{−0.3}    2.065    0.065     0.033
400   n^{−0.25}   2.052    0.052     0.026
400   n^{−0.3}    2.030    0.030     0.015
900   n^{−0.25}   2.020    0.020     0.010
900   n^{−0.3}    2.020    0.020     0.010