Abstract
Identifying the functional relationship that regulates the variation of individual leaf biomass with associated leaf area in eelgrass allows the derivation of convenient proxies for the nondestructive estimation of the average biomass of leaves in shoots. The availability of such assessment methods is fundamental for evaluating the performance of restoration efforts for this species that rely on transplanting techniques. Prior developments proposed proxies for the nondestructive estimation of the aforementioned average derived from allometric models for the dependence of leaf biomass on linked area. The reproducibility strength of these methods depends heavily on the analysis method and on data quality. Indeed, previous results show that allometric proxies for the average biomass of leaves in shoots produced by parameter estimates fitted from quality-controlled data via nonlinear regression yield the highest reproducibility strength. Nevertheless, data quality control entails subtleties, mainly related to the subjectivity of the criteria for rejecting inconsistent replicates in raw data. Here we introduce efficient surrogates, free of data quality control, derived from a first-order Takagi-Sugeno-Kang fuzzy model aimed at approximating the mean response of eelgrass leaf biomass as a function of associated area. A comparison of the performances of the allometric and fuzzy model constructs identified from the available raw data shows that the Takagi-Sugeno-Kang paradigm for individual leaf biomass in terms of related area produced the most precise proxies for the observed average biomass of leaves in shoots. The present results show how gains derived from the outstanding approximation capabilities of the first-order Takagi-Sugeno-Kang fuzzy model for nonlinear dynamics can be extended to the realm of eelgrass allometry.
References
R.M. McCloskey, R.K.F. Unsworth, Decreasing seagrass density negatively influences associated fauna. PeerJ 3, e1053 (2015)
M.L. Plummer, C.J. Harvey, L.E. Anderson, A.D. Guerry, M.H. Ruckelshaus, The role of eelgrass in marine community interactions and ecosystem services: results from ecosystem-scale food web models. Ecosystems 16(2), 237–251 (2013)
E.I. Paling, M. Fonseca, M.M. van Katwijk, M. van Keulen, Seagrass restoration, ed. by G.M.E. Perillo, E. Wolanski, D.R. Cahoon, M.M. Brinson. Coastal Wetlands: An Integrated Ecosystem Approach, 1st edn. (Elsevier Science, 2009), pp. 1–62
W.T. Li, Y.K. Kim, J.I. Park, X.M. Zhang, G.Y. Du, K.S. Lee, Comparison of seasonal growth responses of Zostera marina transplants to determine the optimal transplant season for habitat restoration. Ecol. Eng. 71, 56–65 (2014)
M.S. Fonseca, Addy revisited: what has changed with seagrass restoration in 64 years? Ecol. Restor. 29(1–2), 73–81 (2011)
H. Echavarría-Heras, C. Leal-Ramírez, E. Villa-Diharce, E. Montiel-Arzate, On the appropriateness of an allometric proxy for nondestructive estimation of average biomass of leaves in shoots of eelgrass (Zostera marina). Submitted (2017)
H.A. Echavarría-Heras, C. Leal-Ramírez, E. Villa-Diharce, N.R. Cazarez-Castro, The effect of parameter variability in the allometric projection of leaf growth rates for eelgrass (Zostera marina L.) II: the importance of data quality control procedures in bias reduction. Theor. Biol. Med. Model. 12, 30 (2015)
S.L. Chiu, Fuzzy model identification based on cluster estimation. J. Intell. Fuzzy Syst. 2(3), 267–278 (1994)
J.R. Castro, O. Castillo, M.A. Sanchez, O. Mendoza, A. Rodríguez-Díaz, P. Melin, Method for higher order polynomial Sugeno fuzzy inference systems. Inf. Sci. 351, 76–89 (2016)
L.X. Wang, J.M. Mendel, Fuzzy basis functions, universal approximation, and orthogonal least-squares learning. IEEE Trans. Neural Netw. 3(5), 807–814 (1992)
J.S.R. Jang, C.T. Sun, E.S. Mizutani, Neuro-fuzzy and soft computing: a computational approach to learning and machine intelligence (Prentice Hall, USA, 1997)
L.I.K. Lin, A concordance correlation coefficient to evaluate reproducibility. Biometrics 45, 255–268 (1989)
C. Leal-Ramírez, H.A. Echavarría-Heras, O. Castillo, Exploring the suitability of a genetic algorithm as tool for boosting efficiency in monte carlo estimation of leaf area of eelgrass, ed. by P. Melin, O. Castillo, J. Kacprzyk. Design of Intelligent Systems Based on Fuzzy Logic, Neural Networks and Nature-Inspired Optimization. Stud. Comput. Intell. vol. 601, (Springer, 2015) pp. 291–303
C. Leys, O. Klein, P. Bernard, L. Licata, Detecting outliers: do not use standard deviation around the mean, use absolute deviation around the median. J. Exp. Soc. Psychol. 49(4), 764–766 (2013)
P.J. Huber, Robust statistics (Wiley, New York, 1981)
M. Sugeno, G.T. Kang, Structure identification of fuzzy model. Fuzzy Sets Syst. 28, 15–33 (1988)
J.R. Castro, O. Castillo, P. Melin, A. Rodríguez-Díaz, A hybrid learning algorithm for a class of interval type-2 fuzzy neural networks. Inf. Sci. 179(13), 2175–2193 (2009)
D.W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 11(2), 431–441 (1963)
M.K. Transtrum, J.P. Sethna, Improvements to the Levenberg-Marquardt algorithm for nonlinear least-squares minimization. arXiv preprint arXiv:1201.5885 (2012)
J.S.R. Jang, ANFIS: adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern. 23(3), 665–685 (1993)
D. Hui, R.B. Jackson, Uncertainty in allometric exponent estimation: a case study in scaling metabolic rate with body mass. J. Theor. Biol. 249, 168–177 (2007)
C. Leal-Ramírez, H.A. Echavarría-Heras, O. Castillo, E. Montiel-Arzate, On the use of parallel genetic algorithms for improving the efficiency of a monte carlo-digital image based approximation of eelgrass leaf area I: comparing the performances of simple and master-slaves structures, ed. by P. Melin, O. Castillo, J. Kacprzyk. Nature-Inspired Design of Hybrid Intelligent Systems. Stud. Comput. Intell. vol. 667 (Springer, 2016), pp. 431–455
J. Miller, Reaction time analysis with outlier exclusion: bias varies with sample size. Q. J. Exp. Psychol. 43(4), 907–912 (1991)
L.I.K. Lin, Assay validation using the concordance correlation coefficient. Biometrics 48, 599–604 (1992)
G.B. McBride, A proposal for strength-of-agreement criteria for Lin's concordance correlation coefficient. NIWA Client Report HAM2005-062 (National Institute of Water & Atmospheric Research, Hamilton, New Zealand, 2005). Available online: http://www.medcalc.org/download/pdf/McBride2005.pdf
Appendices
Appendix 1: Data Quality Control Approach
Previous results demonstrate that it is reasonable to assume that for eelgrass the leaf area-to-leaf weight variation pattern distributes about a power function-like trend [7]. Thus, by analyzing the spread of leaf biomass values in a leaf area-to-weight plot, we can detect values that deviate severely from the dominant power function trend. Since in the present settings leaf area is obtained by means of the length times width proxy [22], undue deviations of replicates from the expected trend could be attributed to errors in leaf length or width estimation, to imprecise gear for dry weight assessment, or to inappropriate recording of measurements. In order to conceive a data cleaning criterion, we first observed that the set of \(n_{rw}\) leaves constituting the raw data can be arranged into several groups \(G\left( a \right) = \left\{ {w_{i} \left( a \right) | 1 \le i \le n\left( g \right)} \right\}\) formed by the set of \(n\left( g \right)\) leaf biomass replicates \(w_{i} \left( a \right)\) that associate to an observed leaf area value \(a\). Considering that the median of a group of data is highly robust to outliers and underpins a robust estimator of scale, for groups of ten or more replicates we adapted a Median Absolute Deviation (MAD) data cleaning procedure [7, 14]. For each one of the groups \(G\left( a \right)\) the acquired median is denoted by means of the symbol \({\text{MED}}\left\{ {w_{1} \left( a \right), \ldots ,w_{n\left( g \right)} \left( a \right)} \right\}\), or simply by \({\text{MED}}\left( {G\left( a \right)} \right)\) for short. Then, for each replicate \(w_{j} \left( a \right)\) in \(G\left( a \right)\) its absolute deviation \(\delta_{j} \left( a \right)\) relative to the group median \({\text{MED}}\left( {G\left( a \right)} \right)\) is
\(\delta_{j} \left( a \right) = \left| {w_{j} \left( a \right) - {\text{MED}}\left( {G\left( a \right)} \right)} \right|.\)
Similarly, we obtained the median of the set of absolute deviations, denoted by the symbol \({\text{MED}}\left\{ {\delta_{1} \left( a \right), \ldots ,\delta_{n\left( g \right)} \left( a \right)} \right\}\). Following Huber [15], and recalling that eelgrass leaf biomass values are log-normally distributed [7], the Median Absolute Deviation of a group \(G\left( a \right),\) denoted through \({\text{MAD}}\left( {G\left( a \right)} \right),\) is given by
\({\text{MAD}}\left( {G\left( a \right)} \right) = b \cdot {\text{MED}}\left\{ {\delta_{1} \left( a \right), \ldots ,\delta_{n\left( g \right)} \left( a \right)} \right\},\)
where \(b = 1/Q\left( {0.75} \right)\), and \(Q\left( {0.75} \right)\) stands for the \(0.75\) quantile of the assumed lognormal distribution. Henceforth, for the deletion of inconsistent replicates in a group \(G\left( a \right)\) we used the decision criterion of removing \(w_{j} \left( a \right)\) whenever
\(\delta_{j} \left( a \right) > T \cdot {\text{MAD}}\left( {G\left( a \right)} \right),\)
where \(w_{j} \left( a \right)\) stands for the \(j\)th leaf in the group \(G\left( a \right)\) and \(T\) is the rejection threshold, which after Miller [23] we set at a value \(T = 3.\)
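For concreteness, the following is a minimal sketch of this cleaning rule. It is our own illustration: it applies the criterion to log-transformed biomass, where the usual normal consistency constant \(1/\Phi^{-1}\left( {0.75} \right)\) applies, rather than restating the chapter's lognormal quantile.

```python
import numpy as np
from scipy.stats import norm

def mad_clean(biomass_group, T=3.0):
    """MAD-based rejection of inconsistent replicates in a group G(a).

    Sketch of the Appendix 1 criterion. Because leaf biomass is taken
    as lognormal, we work on log-biomass, where the normal consistency
    constant b = 1/Phi^{-1}(0.75) is the conventional choice.
    """
    w = np.asarray(biomass_group, dtype=float)
    lw = np.log(w)
    med = np.median(lw)                      # MED(G(a)) on the log scale
    delta = np.abs(lw - med)                 # absolute deviations delta_j(a)
    mad = np.median(delta) / norm.ppf(0.75)  # MAD(G(a)) = b * MED{delta_j(a)}
    return w[delta <= T * mad]               # reject replicates beyond T * MAD
```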
Appendix 2: Lin’s Concordance Correlation Coefficient \(\left( \rho \right)\)
The value of \(\rho\), the concordance correlation coefficient [12, 24], is commonly used to assess how well a new set of observations \(y\) mimics an original set \(x\). In other words, the value of \(\rho\) provides a measure of reproducibility, as it evaluates the agreement between the variables \(x\) and \(y\) by assessing the extent to which they fall on the 45° line through the origin. Its value is characterized as one minus the ratio of the expected squared perpendicular distance from the diagonal \(y = x\) to the corresponding expected squared distance obtained when \(x\) and \(y\) are assumed independent. When \(\rho\) is computed on an \(m\)-length data set (i.e., two vectors \(\left( {x_{1} ,x_{2} , \ldots ,x_{m} } \right)\) and \(\left( {y_{1} ,y_{2} , \ldots ,y_{m} } \right)\)) the resulting statistic is denoted by means of \(\hat{\rho }\) and calculated through
\(\hat{\rho } = \frac{{2s_{xy} }}{{s_{x}^{2} + s_{y}^{2} + \left( {\bar{x} - \bar{y}} \right)^{2} }},\)
being
\(\bar{x} = \frac{1}{m}\sum\nolimits_{i = 1}^{m} {x_{i} } , \quad \bar{y} = \frac{1}{m}\sum\nolimits_{i = 1}^{m} {y_{i} } ,\)
and
\(s_{x}^{2} = \frac{1}{m}\sum\nolimits_{i = 1}^{m} {\left( {x_{i} - \bar{x}} \right)^{2} } , \quad s_{y}^{2} = \frac{1}{m}\sum\nolimits_{i = 1}^{m} {\left( {y_{i} - \bar{y}} \right)^{2} } , \quad s_{xy} = \frac{1}{m}\sum\nolimits_{i = 1}^{m} {\left( {x_{i} - \bar{x}} \right)\left( {y_{i} - \bar{y}} \right)} .\)
In the present work the value of \(\hat{\rho }\) provides a criterion to assess to what extent the allometric proxies \(w_{m} \left( {\alpha ,\beta ,t} \right)\) or the weighted output \(w_{TSK} \left( {a(t)} \right)\) of the considered Takagi-Sugeno-Kang model reproduce observed \(w_{m} \left( t \right)\) values. Agreement will be defined as poor whenever \(\hat{\rho } <\) 0.90, moderate for \(0.90 \le \hat{\rho } <\) 0.95, good for \(0.95 \le \hat{\rho } \le 0.99\), or excellent for \(\hat{\rho } > 0.99\) [25].
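A direct transcription of \(\hat{\rho }\) into code (a minimal sketch using the moment form above, with \(1/m\)-normalized variances and covariance as in Lin [12]):

```python
import numpy as np

def concordance_correlation(x, y):
    """Sample concordance correlation coefficient rho-hat (Lin, 1989)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))   # covariance s_xy
    sx2 = np.mean((x - x.mean()) ** 2)               # variance s_x^2
    sy2 = np.mean((y - y.mean()) ** 2)               # variance s_y^2
    return 2.0 * sxy / (sx2 + sy2 + (x.mean() - y.mean()) ** 2)
```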
Appendix 3: The Takagi-Sugeno-Kang Fuzzy Inference System
A Takagi-Sugeno-Kang inference system is a special characterization of what is known as a fuzzy inference system. Generally, in setting a fuzzy inference system, for \(1 \le j \le n\) we initially consider input variables \(x_{j}\) taking values in an input domain \(U\). The collection of input variables is denoted by the symbol
\(X = \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right), \quad (35)\)
and similarly, for \(1 \le s \le p\) we consider output values \(y_{s} \left( {x_{1} , \ldots ,x_{n} } \right)\) in a range \(V\); the collection of output variables will be denoted by the symbol
\(Y = \left( {y_{1} , \ldots ,y_{p} } \right). \quad (36)\)
To each input variable \(x_{j}\) we associate a set \(L_{j}\) containing a number \(q\left( j \right)\) of linguistic terms \(A_{k} \left( {x_{j} } \right)\), namely
\(L_{j} = \left\{ {A_{k} \left( {x_{j} } \right) | 1 \le k \le q\left( j \right)} \right\}. \quad (37)\)
The symbol \(A_{X}\) will stand for the collection of linguistic terms that associate to \(X\), formally
\(A_{X} = \mathop {\cup}\nolimits_{1}^{n} L_{j} . \quad (38)\)
The linguistic term \(A_{k} \left( {x_{j} } \right)\) associates to a membership function \(\mu A_{k} \left( {x_{j} } \right)\) that, by setting a mapping \(\mu A_{k} \left( {x_{j} } \right):U \to \left[ {0,1} \right]\), characterizes \(A_{k} \left( {x_{j} } \right)\) as the fuzzy set
\(A_{k} \left( {x_{j} } \right) = \left\{ {\left( {x_{jl} ,\mu A_{k} \left( {x_{jl} } \right)} \right) | 1 \le l \le b\left( j \right)} \right\}, \quad (39)\)
where \(x_{j1} ,x_{j2} , \ldots ,x_{jb\left( j \right)}\) stand for the values that \(x_{j}\) takes on.
The symbol \(\mu_{X}\) will stand for the collection of membership functions describing the set of input variables \(X\), that is,
\(\mu_{X} = \left\{ {\mu A_{k} \left( {x_{j} } \right) | A_{k} \left( {x_{j} } \right) \in A_{X} } \right\}. \quad (40)\)
Moreover, the pair \(\left( {A_{X} ,\mu_{X} } \right)\) will stand for what is known as a fuzzy partition of the input domain \(U\).
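As a concrete illustration (our own example with hypothetical parameters, not the partition identified in the chapter), a single input \(x_{1} = a\) (leaf area) could carry \(q\left( 1 \right) = 2\) Gaussian linguistic terms, say \(A_{1} \left( a \right)\) read as "small leaf" and \(A_{2} \left( a \right)\) as "large leaf", with membership functions
\(\mu A_{1} \left( a \right) = \exp \left( { - \frac{{\left( {a - c_{1} } \right)^{2} }}{{2\sigma_{1}^{2} }}} \right), \quad \mu A_{2} \left( a \right) = \exp \left( { - \frac{{\left( {a - c_{2} } \right)^{2} }}{{2\sigma_{2}^{2} }}} \right),\)
so that \(A_{X} = \left\{ {A_{1} \left( a \right),A_{2} \left( a \right)} \right\}\) and \(\mu_{X} = \left\{ {\mu A_{1} ,\mu A_{2} } \right\}\) compose the fuzzy partition \(\left( {A_{X} ,\mu_{X} } \right)\).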
Respectively, each output variable \(y_{s}\), with \(s = 1,2, \ldots ,p\), associates to a set \(L_{s}\) of linguistic terms \(B_{k} \left( {y_{s} \left( {x_{1} , \ldots ,x_{n} } \right)} \right)\), namely
\(L_{s} = \left\{ {B_{k} \left( {y_{s} \left( {x_{1} , \ldots ,x_{n} } \right)} \right)} \right\}. \quad (41)\)
Similarly,
\(B_{Y} = \mathop {\cup}\nolimits_{1}^{p} L_{s} \quad (42)\)
will stand for the collection of linguistic terms that characterize \(Y\). Also, to \(B_{k} \left( {y_{s} } \right)\) we associate a membership function \(\mu B_{k} \left( {y_{s} \left( {x_{1} , \ldots ,x_{n} } \right)} \right)\) such that the mapping \(\mu B_{k} \left( {y_{s} } \right): V \to \left[ {0,1} \right]\) establishes the fuzzy set
\(B_{k} \left( {y_{s} } \right) = \left\{ {\left( {y_{sl} ,\mu B_{k} \left( {y_{sl} } \right)} \right) | 1 \le l \le c\left( s \right)} \right\}, \quad (43)\)
where \(y_{s1} , \ldots ,y_{sc\left( s \right)}\) denote the values that \(y_{s} \left( {x_{1} , \ldots ,x_{n} } \right)\) acquires. Concurrently, we have the collection of membership functions tied to \(Y\),
\(\mu_{Y} = \left\{ {\mu B_{k} \left( {y_{s} } \right) | B_{k} \left( {y_{s} } \right) \in B_{Y} } \right\}, \quad (44)\)
and concomitantly we may also say that the pair \(\left( {B_{Y} ,\mu_{Y} } \right)\) sets a fuzzy partition for the output domain \(V.\)
Now, for \(n > 1,\) we will consider the Cartesian product \(L_{X}\) of the fuzzy sets that designate the set of input variables \(X,\) namely
\(L_{X} = L_{1} \times L_{2} \times \cdots \times L_{n} , \quad (45)\)
and use the symbol \(m\) to denote its cardinality, that is,
\(m = {\text{card}}\left( {L_{X} } \right). \quad (46)\)
This leads to
\(m = q\left( 1 \right)q\left( 2 \right) \cdots q\left( n \right). \quad (47)\)
Moreover, the \(i{\text{th}}\) element of the Cartesian product \(L_{X} ,\) with \(i = 1,2, \ldots ,m,\) associates univocally with an \(n\)-tuple of linguistic terms \(A_{k} \left( {x_{j} } \right).\) That is, we have an ordering
\(i \to \left[ {A_{{k\left( {j,i} \right)}} \left( {x_{j} } \right)} \right]_{1}^{n} , \quad (48)\)
where in the above \(n\)-tuple \(\left[ {A_{{k\left( {j,i} \right)}} \left( {x_{j} } \right)} \right]_{1}^{n}\) the index \(k\left( {j,i} \right)\) takes a value out of the set \(\left\{ {1,2, \ldots ,q\left( j \right)} \right\}\). Moreover, the elements of the Cartesian product \(L_{X}\) have the form of a conjunction \(Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right)\), namely
\(Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right) = \left( {x_{1} \,is\,A_{{k\left( {1,i} \right)}} \left( {x_{1} } \right)} \right)\,and\, \ldots \,and\,\left( {x_{n} \,is\,A_{{k\left( {n,i} \right)}} \left( {x_{n} } \right)} \right), \quad (49)\)
and we may accordingly write down the equivalence
\(Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right) \equiv \left[ {A_{{k\left( {j,i} \right)}} \left( {x_{j} } \right)} \right]_{1}^{n} . \quad (50)\)
Meanwhile, the set of membership functions associated to the conjunction \(Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right)\) will be denoted by means of the symbol \(\mu Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right)\) and formally expressed through
\(\mu Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right) = \left[ {\mu A_{{k\left( {j,i} \right)}} \left( {x_{j} } \right)} \right]_{1}^{n} . \quad (51)\)
Moreover, for each value of the index \(i = 1,2, \ldots ,m\) we can consider a relationship
\(R^{i} : Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right) \to y_{i} , \quad (52)\)
being \(R^{i}\) a rule that associates the antecedent \(Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right)\) to a consequent on the output value \(y_{i}\). In the general fuzzy inference system \(R^{i}\) takes the form
\(R^{i} : if\,x_{1} \,is\,A_{{k\left( {1,i} \right)}} \left( {x_{1} } \right)\,and\, \ldots \,and\,x_{n} \,is\,A_{{k\left( {n,i} \right)}} \left( {x_{n} } \right)\,then\,y_{i} \,is\,B_{k\left( i \right)} \left( {y_{i} } \right), \quad (53)\)
with the set of tied membership functions \(\mu Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right)\) given by Eq. (51).
For the case \(n = 1\) we have only one input variable \(x_{1}\). Hence, according to Eq. (37), the input space can be characterized by a number \(q\left( 1 \right) \ge 1\) of linguistic terms; that is, in this case we have
\(A_{X} = L_{1} = \left\{ {A_{k} \left( {x_{1} } \right) | 1 \le k \le q\left( 1 \right)} \right\}, \quad (54)\)
and correspondingly from Eq. (40) \(\mu_{X}\) becomes
\(\mu_{X} = \left\{ {\mu A_{k} \left( {x_{1} } \right) | 1 \le k \le q\left( 1 \right)} \right\}, \quad (55)\)
and by virtue of Eq. (47) for this case we have
\(m = q\left( 1 \right). \quad (56)\)
Regarding the arrangement of the elements of the Cartesian product established by Eq. (48), in this case, for \(i = 1,2, \ldots ,m,\) it suffices to advance a correspondence \(i \to A_{{k\left( {1,i} \right)}} \left( {x_{1} } \right)\), where \(k\left( {1,i} \right) = i\). Therefore, we can consider antecedent conjunctions
\(Q^{i} \left( {x_{1} } \right) = x_{1} \,is\,A_{i} \left( {x_{1} } \right) \quad (57)\)
sustaining inferential rules \(R^{i} :\)
\(R^{i} : if\,x_{1} \,is\,A_{i} \left( {x_{1} } \right)\,then\,y_{i} \,is\,B_{k\left( i \right)} \left( {y_{i} } \right). \quad (58)\)
We also have
\(\mu Q^{i} \left( {x_{1} } \right) = \mu A_{i} \left( {x_{1} } \right). \quad (59)\)
In summary, for the case \(n \ge 1\), as stated by Eqs. (35)–(59), we may consider a general fuzzy inference system \(F\) as a construct including an application \(F: X \to Y\) characterized by the fuzzy partitions \(\left( {A_{X} ,\mu_{X} } \right)\) and \(\left( {B_{Y} ,\mu_{Y} } \right),\) the set of inference rules \(R = \mathop {\text{U}}\nolimits_{1}^{m} \left\{ {R^{i} } \right\}\) and a defuzzification operator \(D\) that associates to the fuzzy set \(\left[ {y_{i} \,is\,B_{k\left( i \right)} \left( {y_{i} } \right)} \right]\) in Eq. (53) a crisp value \(y_{i}\) in \(V.\)
In the Takagi-Sugeno-Kang fuzzy inference system representation, we consider decision rules \(R^{i}\) having an antecedent \(Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right)\) of the form given by Eq. (49) but with a consequent taking a crisp functional form \(y_{i} = f^{i} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\). That is, for \(n > 1,\) in a TSK system we may envision inference rules
\(R^{i} : if\,x_{1} \,is\,A_{{k\left( {1,i} \right)}} \left( {x_{1} } \right)\,and\, \ldots \,and\,x_{n} \,is\,A_{{k\left( {n,i} \right)}} \left( {x_{n} } \right)\,then\,y_{i} = f^{i} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right), \quad (60)\)
for \(i = 1,2, \ldots ,m,\) and with \(\mu Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right)\) given by Eq. (51).
Particularly, in defining a TSK system the function \(f^{i} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) could have a linear form, that is,
\(f^{i} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = p_{1}^{i} x_{1} + p_{2}^{i} x_{2} + \cdots + p_{n}^{i} x_{n} + p_{n + 1}^{i} , \quad (61)\)
where the numbers \(p_{j}^{i}\) stand for parameters that can be empirically identified from data, with at least one of them nonzero. Nevertheless, in the general Takagi-Sugeno-Kang settings \(f^{i} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) can take a nonlinear form.
An important concept tied to a TSK fuzzy model is that of the firing strength \(\vartheta^{i} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) of the antecedent of a rule \(R^{i}\). For the case \(n > 1,\) where the rules involve in their antecedents conjunctions of the form \(Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right)\) (cf. Eq. (49)), the firing strength \(\vartheta^{i} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) is obtained through the algebraic product of the involved membership functions \(\mu Q^{i} \left( {x_{1} , \ldots ,x_{n} } \right),\) formally
\(\vartheta^{i} \left( {x_{1} , \ldots ,x_{n} } \right) = \prod\nolimits_{j = 1}^{n} {\mu A_{{k\left( {j,i} \right)}} \left( {x_{j} } \right)} , \quad (62)\)
and the normalized firing strength \(\varphi^{i} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) is defined through
\(\varphi^{i} \left( {x_{1} , \ldots ,x_{n} } \right) = \frac{{\vartheta^{i} \left( {x_{1} , \ldots ,x_{n} } \right)}}{{\sum\nolimits_{l = 1}^{m} {\vartheta^{l} \left( {x_{1} , \ldots ,x_{n} } \right)} }}. \quad (63)\)
The final output \(y\) of the Takagi-Sugeno-Kang inference system is the weighted average of all rule outputs, computed as
\(y = \sum\nolimits_{i = 1}^{m} {\varphi^{i} \left( {x_{1} , \ldots ,x_{n} } \right)f^{i} \left( {x_{1} , \ldots ,x_{n} } \right)} , \quad (64)\)
and identified as the output variable of the modeled system. The steps of the derivation of the formulae for the case \(n = 1\) of a TSK fuzzy inference system parallel those establishing Eqs. (54)–(59) of the general fuzzy inference system and are elucidated in the methods section.
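A minimal sketch of how this weighted-average output is evaluated, specialized to the one-input (leaf area), two-rule structure used later in Appendix 4; the Gaussian membership parameters and consequent coefficients below are hypothetical placeholders, not the fitted values of the chapter:

```python
import numpy as np

def tsk_output(a, centers, sigmas, p):
    """Weighted-average TSK output w_TSK(a) for one input and len(centers) rules.

    centers, sigmas : per-rule Gaussian membership parameters
    p               : per-rule consequent coefficients [slope, intercept]
    """
    a = np.atleast_1d(np.asarray(a, float))
    # firing strengths: with a single input, the antecedent product
    # reduces to one membership value per rule
    theta = np.exp(-((a[:, None] - centers) ** 2) / (2.0 * sigmas ** 2))
    phi = theta / theta.sum(axis=1, keepdims=True)   # normalized strengths
    f = p[:, 0] * a[:, None] + p[:, 1]               # linear consequents f^i(a)
    return (phi * f).sum(axis=1)                     # weighted-average output

# usage with made-up parameters
centers = np.array([5.0, 20.0])      # rule centers (cm^2), hypothetical
sigmas = np.array([4.0, 10.0])
p = np.array([[0.002, 0.001],        # rule 1: w = 0.002*a + 0.001
              [0.004, -0.010]])      # rule 2: w = 0.004*a - 0.010
print(tsk_output([3.0, 15.0, 30.0], centers, sigmas, p))
```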
Appendix 4: Parameter Estimation for the TSK Fuzzy Model
In performing the parameter identification tasks, we bear in mind that the specification of the structure and the estimation of the parameters of the TSK fuzzy model interrelate in such a way that neither can be achieved independently of the other. In the first stage of the involved identification tasks we made use of Subtractive Clustering [8, 9], an efficient procedure endowed with interesting advantages because it does not depend on optimization methods. Subtractive Clustering mainly relies on a measure of the density of each data point in the input space \(X\). The goal is to find regions in the input space with high data densities. The data point with the highest number of neighbors is selected as the center of a group. The data points of the selected group that fall within a pre-specified fuzzy radius are then removed, and the algorithm keeps searching for new data points with the largest number of neighbors. This process is applied recursively until all the data are examined. Once Subtractive Clustering is completed, the number of decision rules is known, since each group associates to one of them. Moreover, the parameters tied to the membership functions characterizing the fuzzy sets of the antecedents of the rules are also estimated at this SC stage, and these are plugged into a recursive least-squares (RLS) routine to obtain estimates of the parameters of the linear functions of the consequents of the rules. Following Jang et al. [20], we now explain the steps involved in the formalization of the RLS algorithm.
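Before turning to the RLS formalization, here is a simplified sketch of the density-and-squash loop underlying Subtractive Clustering. The full method of Chiu [8] additionally uses accept/reject potential thresholds, which we omit for brevity, and the default radii below are conventional choices rather than the chapter's settings:

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=None, eps=0.15):
    """Chiu-style subtractive clustering (simplified sketch).

    X  : (N, d) data, assumed scaled to the unit hypercube
    ra : neighborhood radius; rb (default 1.5*ra) is the squash radius
    eps: stop when the best remaining potential falls below eps * first one

    Returns the selected cluster centers; each center seeds one TSK rule.
    """
    rb = rb if rb is not None else 1.5 * ra
    alpha, beta = 4.0 / ra ** 2, 4.0 / rb ** 2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    P = np.exp(-alpha * d2).sum(axis=1)                  # data-point potentials
    centers, p_first = [], None
    while True:
        k = int(np.argmax(P))          # densest remaining point
        if p_first is None:
            p_first = P[k]
        elif P[k] < eps * p_first:
            break
        centers.append(X[k].copy())
        # squash potential around the new center so that nearby
        # points do not become centers themselves
        P = P - P[k] * np.exp(-beta * d2[k])
    return np.array(centers)
```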
In the general least-squares problem, the output of a model \(y\) is given by a linearly parameterized expression known as a regression function, formally
\(y = \gamma_{1} h_{1} \left( \varvec{u} \right) + \gamma_{2} h_{2} \left( \varvec{u} \right) + \cdots + \gamma_{n} h_{n} \left( \varvec{u} \right), \quad (65)\)
where \(\varvec{u} = \left[ {u_{1} , \ldots ,u_{p} } \right]^{T}\) is the model's input values vector, \(h_{1} \left( \varvec{u} \right), \ldots ,h_{n} \left( \varvec{u} \right)\) are known functions of \(\varvec{u}\), and \(\gamma_{1} , \ldots ,\gamma_{n}\), called regression coefficients, are parameters to be estimated.
Dropping the time variable for the sake of simplifying notation, we have that the general output of the TSK model of Eq. (7) becomes
\(w_{TSK} \left( a \right) = \varphi^{1} \left( a \right)f^{1} \left( a \right) + \varphi^{2} \left( a \right)f^{2} \left( a \right), \quad (66)\)
where the \(\varphi^{i} \left( a \right)\) are defined by Eq. (16), and
\(f^{1} \left( a \right) = p_{1}^{1} a + p_{2}^{1} , \quad (67)\)
\(f^{2} \left( a \right) = p_{1}^{2} a + p_{2}^{2} . \quad (68)\)
Replacing Eqs. (67) and (68) into (66) we have
\(w_{TSK} \left( a \right) = \varphi^{1} \left( a \right)\left( {p_{1}^{1} a + p_{2}^{1} } \right) + \varphi^{2} \left( a \right)\left( {p_{1}^{2} a + p_{2}^{2} } \right). \quad (69)\)
Then, rearranging, we obtain that the regression function tied to the model \(w_{TSK}\), expressed in the form (65), becomes
\(w_{TSK} \left( a \right) = p_{1}^{1} \left( {\varphi^{1} \left( a \right)a} \right) + p_{2}^{1} \varphi^{1} \left( a \right) + p_{1}^{2} \left( {\varphi^{2} \left( a \right)a} \right) + p_{2}^{2} \varphi^{2} \left( a \right), \quad (70)\)
where \(\varvec{a} = \left[ { a } \right]^{T}\) stands for the model's input values vector, and \(p_{1}^{1}\), \(p_{2}^{1}\), \(p_{1}^{2}\) and \(p_{2}^{2}\) are the unknown parameters.
In order to get estimates for the involved parameters, we take into account that in the present settings the target system to be modeled involves an input-output relationship \(a \to w\left( a \right)\), being \(a\) the descriptor variable leaf area and \(w\left( a \right)\) the leaf biomass response. Therefore, we have a training data set composed of data pairs \(\left( {a_{i} :w_{i} } \right)\), for \(i = 1, \ldots ,m,\) representing replicates of the addressed input-output relationship. Hence, in order to identify the unknown parameters \(p_{1}^{1}\), \(p_{2}^{1}\), \(p_{1}^{2}\) and \(p_{2}^{2}\), we substitute each data pair \(\left( {a_{i} :w_{i} } \right)\) into Eq. (70) to obtain the set of \(m\) linear equations
\(w_{i} = p_{1}^{1} \left( {\varphi^{1} \left( {a_{i} } \right)a_{i} } \right) + p_{2}^{1} \varphi^{1} \left( {a_{i} } \right) + p_{1}^{2} \left( {\varphi^{2} \left( {a_{i} } \right)a_{i} } \right) + p_{2}^{2} \varphi^{2} \left( {a_{i} } \right), \quad i = 1, \ldots ,m. \quad (71)\)
In seeking a solution to the above system we notice that it can be equivalently written in the concise form
\(w = BP, \quad (72)\)
where \(B\) is the \(m \times n\) matrix (here \(n = 4\)),
\(B = \left[ {\begin{array}{cccc} {\varphi^{1} \left( {a_{1} } \right)a_{1} } & {\varphi^{1} \left( {a_{1} } \right)} & {\varphi^{2} \left( {a_{1} } \right)a_{1} } & {\varphi^{2} \left( {a_{1} } \right)} \\ \vdots & \vdots & \vdots & \vdots \\ {\varphi^{1} \left( {a_{m} } \right)a_{m} } & {\varphi^{1} \left( {a_{m} } \right)} & {\varphi^{2} \left( {a_{m} } \right)a_{m} } & {\varphi^{2} \left( {a_{m} } \right)} \\ \end{array} } \right], \quad (73)\)
\(P\) the \(n \times 1\) vector of unknown parameters,
\(P = \left[ {p_{1}^{1} ,p_{2}^{1} ,p_{1}^{2} ,p_{2}^{2} } \right]^{T} , \quad (74)\)
and \(w\) the \(m \times 1\) output values vector,
\(w = \left[ {w_{1} , \ldots ,w_{m} } \right]^{T} . \quad (75)\)
The \(i\)th row of the data matrix \(\left[ {B \vdots \varvec{w}} \right]\), denoted by \(\left[ {b_{i}^{T} ,w_{i} } \right]\), is formally represented by
\(b_{i}^{T} = \left[ {\varphi^{1} \left( {a_{i} } \right)a_{i} ,\varphi^{1} \left( {a_{i} } \right),\varphi^{2} \left( {a_{i} } \right)a_{i} ,\varphi^{2} \left( {a_{i} } \right)} \right]. \quad (76)\)
Then, Eq. (72) is modified to incorporate an error vector \(e\) in order to account for random noise or modeling error, that is,
\(w = BP + e. \quad (77)\)
Since \(e = w - BP\), then \(e^{T} e = \left( {w - BP} \right)^{T} \left( {w - BP} \right)\), and if we let \(E\left( P \right) = e^{T} e\) we will have
\(E\left( P \right) = \left( {w - BP} \right)^{T} \left( {w - BP} \right) = w^{T} w - 2P^{T} B^{T} w + P^{T} B^{T} BP. \quad (78)\)
We call \(E\left( P \right)\) the sum of squared errors, and we need to search for \(\hat{P}\), the value of the vector \(P\) that minimizes \(E\left( P \right)\). The vector \(\hat{P}\) is called the least-squares estimator (LSE) of \(P\), and since \(E\left( P \right)\) is in quadratic form, \(\hat{P}\) is unique. It turns out that \(\hat{P}\) satisfies the normal equation
\(B^{T} B\hat{P} = B^{T} w. \quad (79)\)
Furthermore, \(\hat{P}\) is given by
\(\hat{P} = \left( {B^{T} B} \right)^{ - 1} B^{T} w. \quad (80)\)
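As a cross-check of the RLS route developed next, the batch estimator of Eq. (80) can be computed in one shot; a sketch, assuming the normalized firing strengths \(\varphi^{1}\) and \(\varphi^{2}\) have already been evaluated at the observed leaf areas:

```python
import numpy as np

def batch_lse(a, w, phi1, phi2):
    """One-shot least-squares estimate of P = [p11, p21, p12, p22]."""
    # rows of B: [phi1(a_i)*a_i, phi1(a_i), phi2(a_i)*a_i, phi2(a_i)]
    B = np.column_stack([phi1 * a, phi1, phi2 * a, phi2])
    # lstsq solves min ||B P - w||^2, i.e. the normal equation B^T B P = B^T w
    P_hat, *_ = np.linalg.lstsq(B, w, rcond=None)
    return P_hat
```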
A \(k\)-order least-squares estimator \(\hat{P}_{k}\) of \(\hat{P}\), defined by means of the expression
\(\hat{P}_{k} = \left( {B^{T} B} \right)^{ - 1} B^{T} w, \quad (81)\)
with \(B\) and \(w\) built from the first \(k\) rows of the data matrix, is a characterization of \(\hat{P}\) that associates to \(k\) data pairs taken out of the training data set \(\left( {a_{i} :w_{i} } \right)\). Once we have obtained \(\hat{P}_{k}\) we can get the succeeding estimator \(\hat{P}_{k + 1}\) with a minimum of effort by using the recursive least-squares estimator (RLSE) technique, a procedure where the \(k{\text{th}}\left( {1 \le k \le m} \right)\) row of \([ {B \vdots \varvec{w}} ]\), denoted by \([ {b_{k}^{T} \vdots w_{k} } ]\), is recursively incorporated. In what follows, we explain the formalism sustaining the RLSE method.
A new data pair \(\left( {b_{k + 1}^{T} ;w_{k + 1} } \right)\) becomes available as the \(\left( {k + 1} \right)\)th entry in the data set, producing the \(\hat{P}_{k + 1}\) estimate,
\(\hat{P}_{k + 1} = \left( {B^{T} B + b_{k + 1} b_{k + 1}^{T} } \right)^{ - 1} \left( {B^{T} w + b_{k + 1} w_{k + 1} } \right). \quad (82)\)
Further, in order to simplify the notation, the pair \(\left( {b_{k + 1}^{T} ;w_{k + 1} } \right)\) will be symbolized by \(\left( {b^{T} ;w} \right)\), and we also introduce the \(n \times n\) matrices \(H_{k}\) and \(H_{k + 1}\) defined by means of
\(H_{k} = \left( {B^{T} B} \right)^{ - 1} \quad (83)\)
and
\(H_{k + 1} = \left( {B^{T} B + bb^{T} } \right)^{ - 1} , \quad (84)\)
or equivalently
\(H_{k + 1}^{ - 1} = B^{T} B + bb^{T} .\)
Then \(H_{k}\) and \(H_{k + 1}\) are related through
\(H_{k + 1}^{ - 1} = H_{k}^{ - 1} + bb^{T} . \quad (85)\)
Therefore, using \(H_{k}\) from Eq. (83) and \(H_{k + 1}\) from Eq. (85), Eqs. (81) and (82) can be equivalently written in the form
\(\hat{P}_{k} = H_{k} B^{T} w \quad (86)\)
and
\(\hat{P}_{k + 1} = H_{k + 1} \left( {B^{T} w + bw} \right). \quad (87)\)
From (86) we have \(B^{T} w = H_{k}^{ - 1} \hat{P}_{k}\); then replacing this result in Eq. (87) we get
\(\hat{P}_{k + 1} = H_{k + 1} \left( {H_{k}^{ - 1} \hat{P}_{k} + bw} \right). \quad (88)\)
Now, from Eq. (85) we have \(H_{k}^{ - 1} \hat{P}_{k} = \left( {H_{k + 1}^{ - 1} - bb^{T} } \right)\hat{P}_{k}\), so replacing this result in the above expression we get
\(\hat{P}_{k + 1} = H_{k + 1} \left( {\left( {H_{k + 1}^{ - 1} - bb^{T} } \right)\hat{P}_{k} + bw} \right),\)
and simplifying yields
\(\hat{P}_{k + 1} = \hat{P}_{k} + H_{k + 1} b\left( {w - b^{T} \hat{P}_{k} } \right). \quad (89)\)
Thus \(\hat{P}_{k + 1}\) can indeed be recursively specified in terms of the previous estimate \(\hat{P}_{k}\) and the new data pair \(\left( {b^{T} ;w} \right)\). Moreover, the current estimate \(\hat{P}_{k + 1}\) is expressed as the previous one \(\hat{P}_{k}\) plus a correcting term based on the new data \(\left( {b^{T} ;w} \right)\); this amending term can be interpreted as an adaptation gain vector \(H_{k + 1} b\) multiplied by a prediction error \(\left( {w - b^{T} \hat{P}_{k} } \right)\) linked to the previous estimator \(\hat{P}_{k} .\)
Calculating \(H_{k + 1}\) as given by Eq. (84) is computationally costly, which motivates the adoption of a recursive formula. From Eq. (85), we have
\(H_{k + 1} = \left( {H_{k}^{ - 1} + bb^{T} } \right)^{ - 1} .\)
Applying the matrix inversion formula of Lemma 5.6 in Jang et al. [11] with \(A = H_{k}^{ - 1}\), \(B = b\), and \(C = b^{T}\), we obtain the following recursive formula for \(H_{k + 1}\) in terms of \(H_{k}\):
\(H_{k + 1} = H_{k} - H_{k} b\left( {1 + b^{T} H_{k} b} \right)^{ - 1} b^{T} H_{k} ,\)
or equivalently,
\(H_{k + 1} = H_{k} - \frac{{H_{k} bb^{T} H_{k} }}{{1 + b^{T} H_{k} b}}. \quad (90)\)
In summary, the recursive least-squares estimator for the problem \(w = BP + e\), where the \(k{\text{th}}{\mkern 1mu} \left( {1 \le k \le m} \right)\) row of \(\left[ {B \vdots w} \right]\), denoted by \(\left[ {b_{k}^{T} \vdots w_{k} } \right]\), is sequentially obtained, can be calculated as follows:
\(H_{k + 1} = H_{k} - \frac{{H_{k} b_{k + 1} b_{k + 1}^{T} H_{k} }}{{1 + b_{k + 1}^{T} H_{k} b_{k + 1} }}, \quad \hat{P}_{k + 1} = \hat{P}_{k} + H_{k + 1} b_{k + 1} \left( {w_{k + 1} - b_{k + 1}^{T} \hat{P}_{k} } \right), \quad (91)\)
where in establishing Eq. (91) we have taken into account Eq. (90) and the previously adopted convention that, to shorten the presentation, the pair \(\left( {b_{k + 1}^{T} ;w_{k + 1} } \right)\) is symbolized by the simpler expression \(\left( {b^{T} ;w} \right)\).
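A compact sketch of this recursion; the initialization \(\hat{P}_{0} = 0\), \(H_{0} = \gamma I\) with \(\gamma\) large is the standard convention of Jang et al. [11] and is our assumption here, since the chapter does not restate it:

```python
import numpy as np

def rls(B, w, gamma=1e6):
    """Recursive least-squares estimator for w = B P + e.

    Processes the rows b_k^T of B one at a time, applying the
    update pair of Eq. (91).
    """
    m, n = B.shape
    P_hat = np.zeros(n)
    H = gamma * np.eye(n)                          # H_0 = gamma * I, gamma large
    for k in range(m):
        b = B[k]                                   # new regressor row b^T
        Hb = H @ b
        H = H - np.outer(Hb, Hb) / (1.0 + b @ Hb)  # Eq. (90): update gain matrix
        P_hat = P_hat + H @ b * (w[k] - b @ P_hat) # Eq. (91): correct estimate
    return P_hat
```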