Abstract
We review the parameterization of orthogonal wavelet based filters of length 4, 6, 8, and 10, and present their inverse formulas, that is, formulas to determine the parameter values from the filter coefficients. Experimental results support the validity of these inverse formulas when the parameters are restricted to \([0, 2\pi )\) for practical applications, such as image processing, where the parameters are optimized to maximize the number of negligible wavelet coefficients.
1 Introduction
Wavelet analysis involves interesting mathematical properties, such as function approximation, and finds applications in signal processing through the discrete wavelet transform, which can be implemented with perfect reconstruction filters [1, 2].
Compactly supported orthogonal wavelets can be used to implement the non-redundant discrete wavelet transform through lowpass and highpass filters that satisfy the quadrature mirror properties [3]. These wavelets can be obtained from scaling functions through the dyadic dilation equation:
where \(N \in \mathbb {N}\), \(h_k \in \mathbb {R}\), and \(h_k=0\) for \(k<0\) and \(k \ge N\). The sequences \(h_k\) and \(g_k=(-1)^{k}h_{N-1-k}\), for \(k=0, 1, \dots , N-1\), represent the impulse responses of the filter bank formed by the lowpass filter and the highpass filter, respectively. From this point of view, the design of orthogonal wavelets can be viewed as a filter design problem.
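As a concrete numerical illustration (our own sketch, not code from the paper), the quadrature mirror relation \(g_k=(-1)^{k}h_{N-1-k}\) and the orthonormality conditions can be checked with the Daubechies D4 coefficients:

```python
import numpy as np

def qmf_highpass(h):
    """Highpass impulse response g_k = (-1)^k h_{N-1-k} from the lowpass h."""
    h = np.asarray(h, dtype=float)
    k = np.arange(len(h))
    return ((-1.0) ** k) * h[::-1]

# Daubechies D4 lowpass coefficients.
h = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
              3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2))
g = qmf_highpass(h)

assert abs(np.sum(h) - np.sqrt(2)) < 1e-12   # normalization of the lowpass
assert abs(np.sum(h * h) - 1.0) < 1e-12      # unit energy
assert abs(np.dot(h, g)) < 1e-12             # lowpass-highpass orthogonality
```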
Several design criteria have been proposed based on smoothness, differentiability, or compact support, such as those presented by Daubechies [4], which maximize the vanishing moments of the wavelets.
An algebraic methodology to construct wavelets relies on the parameterization of the \(h_k\) coefficients (see [3, 5,6,7,8]), since it reduces the search space of solutions while providing flexibility over the characteristics of the wavelets.
There are several works related to the construction of optimal wavelets for specific tasks. For example, in [9], an electrocardiogram compressor is proposed based on the optimal selection of wavelet filters and threshold levels in different subbands, to achieve maximum data volume reduction while guaranteeing the reconstruction quality; however, the computational complexity of the technique is the price paid for the improvement in compression performance. In [10], a secret key is implemented using parameterized wavelet filters to improve the security of digital watermarking, and the importance of security as a priority to protect the location of the watermark information is highlighted. In [11], the researchers support the utility of exploring wavelet parameterizations to maximize the classification accuracy in the development of biomarkers of psychiatric disease and neurological disorders, and report that the length of the wavelet filter has a greater impact on the estimated values of graph metrics than the type of the chosen wavelet. Since one of the main drawbacks of constructing wavelets is the computational cost required to optimize large parameterized filters, we contribute an approach based on inverse parameterizations, and we expect that other research areas can benefit from it.
In this work, we review the parameterization of length 4, 6, 8, and 10 filters given in [12,13,14], and present their inverse formulas to obtain the parameter values from a set of valid \(h_k\) coefficients. We also present an application of these inverse formulas in image compression, through the optimization of the parameters to maximize the number of negligible wavelet coefficients.
The article is organized as follows: in Sect. 2 we review the parameterization of filters and obtain their inverse formulas. In Sect. 3, we present several computational experiments to validate the inverse formulas, discuss some properties related to their domain, and present some results with applications in image processing. Finally, in Sect. 4, we present some conclusions.
2 Parameterization of filter coefficients
In this section we review the orthogonality conditions, the parameterization of filters with length 4, 6, 8, and 10, and present their inverse formulas.
From [12,13,14,15] we consider the orthonormality conditions:
and the zero moment condition:
where H(z) is the trigonometric polynomial expressed as \(H(z)= \sum _k h_k z^k\).
For a filter of length N we have:
where we use the notation \(_Nh_{k}\) to represent the k-th coefficient of a filter of length N. Also, we will use parameters with sub-indices to indicate the filter length, for example, \(_4h_{0}\), \(_4h_{1}\), \(_4h_{2}\), and \(_4h_{3}\) are the coefficients of the length four filter, and \(\alpha _4\) the free parameter.
2.1 Parameterization of the length four filters
The parameterization of a perfect reconstruction filter is not unique (see for example [3, 6, 12,13,14]). For our purposes, we will use the parameterizations of [12,13,14]. In particular, for the length four case, we have:
Then, from Eqs. (6) to (9), we proceed to calculate the inverse parameterizations, which consist in determining the parameter \(\alpha _4\) in terms of the \(_4h_k\) coefficients.
2.2 Inverse parameterization of the length four filters
It should be noted that the parameterization of a given filter is not unique; moreover, given a single parameterization, it is possible to derive several inverse equations for the parameters. For example, it is possible to calculate the \(\alpha _4\) parameter in terms of the filter coefficients \(_4h_{0}\), \(_4h_{1}\), \(_4h_{2}\), and \(_4h_{3}\) by considering Eqs. (6)–(9), as follows:
Additional formulas for \(\alpha _4\) can be obtained. For example, subtracting Eq. (8) from (6) we get:
and, subtracting Eq. (9) from (7) we get:
In Eqs. (10)–(13) the \(\alpha _4\) parameter depends on a single coefficient, whereas in Eqs. (14) and (15) it depends on two coefficients.
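To make the length four round trip concrete, the sketch below (ours) assumes the Lai–Roach form of the length four parameterization for Eqs. (6)–(9), which are not reproduced in this text, and combines the two-coefficient relations of Eqs. (14) and (15) so that both \(\sin \alpha _4\) and \(\cos \alpha _4\) are available and atan2 can resolve the quadrant in \([0, 2\pi )\):

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def h4(alpha):
    """Length-4 parameterization (Lai-Roach form, assumed for Eqs. (6)-(9))."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([1 - c + s, 1 + c + s, 1 + c - s, 1 - c - s]) / (2 * SQRT2)

def alpha_from_h4(h):
    """Inverse: coefficient differences give sin-cos combinations of alpha
    (cf. Eqs. (14)-(15)); atan2 resolves the quadrant in [0, 2*pi)."""
    s_minus_c = SQRT2 * (h[0] - h[2])   # sin(a) - cos(a)
    s_plus_c  = SQRT2 * (h[1] - h[3])   # sin(a) + cos(a)
    s = 0.5 * (s_plus_c + s_minus_c)
    c = 0.5 * (s_plus_c - s_minus_c)
    return np.arctan2(s, c) % (2 * np.pi)

# Round trip: alpha = pi/3 reproduces the Daubechies D4 filter.
a = np.pi / 3
h = h4(a)
assert abs(alpha_from_h4(h) - a) < 1e-12
assert abs(h[0] - (1 + np.sqrt(3)) / (4 * SQRT2)) < 1e-12
```

The atan2 form already avoids part of the quadrant ambiguity of the single-coefficient formulas of Eqs. (10)–(13), which rely on arcsin or arccos alone.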
2.3 Parameterization of the length six filters
The parameterizations of Eqs. (16)–(21) for the length six coefficients can be obtained following an algebraic approach, as described in detail in [12]:
where \(p_6=\frac{1}{2}\sqrt{1+\sin (\alpha _6+\frac{\pi }{4})}\), for \(\alpha _6, \beta _6 \in \mathbb {R}\).
Following the same algebraic approach, we now obtain the inverse formulas for \(\alpha _6\) and \(\beta _6\) from the \(_6h_k\) coefficients.
2.4 Inverse parameterization of the length six filters
From Eq. (18) we get:
but an alternative is to use Eq. (19), so:
In the case of the \(\beta _6\) parameter, we subtract Eq. (21) from Eq. (17), and Eq. (20) from Eq. (16). These partial results are then divided to get \(\tan \beta _6\) in terms of \(_6h_{0}, _6h_{1}, _6h_{4}\) and \(_6h_{5}\), from which \(\beta _6\) can be expressed as:
2.5 Parameterization of the length eight filters with constraints
We identify two kinds of parameterizations for the length eight filters, namely, with and without parameter constraints. First, we will work with the constrained case.
In the case of the constrained parameterization of the length eight filters (see [12]) the formulas depend on parameters \(\alpha _8\), \(\beta _8\), \(\gamma _8\), and \(\theta _8\) as follows:
Subject to the constraint:
In Sect. 2.6, we calculate the inverse formulas for \(\alpha _8\), \(\beta _8\), \(\gamma _8\) and \(\theta _8\).
2.6 Inverse parameterization of the length eight filters with constraints
To obtain \(\alpha _8\), the addition of Eqs. (25)–(29) yields:
and, once \(\alpha _8\) is known, \(\gamma _8\) can be calculated from Eq. (31) as follows:
and, from Eq. (25), we get:
Then, dividing Eq. (35) by Eq. (36) and solving for \(\gamma _8\), we get:
Now, since \(\alpha _8\) and \(\gamma _8\) are known, \(\beta _8\) can be solved from Eq. (25) as follows:
Since Eq. (26) involves \(\alpha _8\), \(\beta _8\) and \(\theta _8\), and the first two parameters are known, we can solve for \(\theta _8\) as follows:
The constraint given in Eq. (33) must be satisfied for valid filter coefficients.
2.7 Parameterization of the length eight filters with no constraints
An improved version of the constrained parameterization for length eight filters (see [13]) depends on only three parameters \(\alpha _8\), \(\beta _8\) and \(\gamma _8\), as shown in Eqs. (40)–(48):
From which we solve for \(\alpha _8, \beta _8\) and \(\gamma _8\) in Sect. 2.8.
2.8 Inverse parameterization of the length eight filters with no constraints
By adding Eq. (42) to Eq. (46) and Eq. (41) to Eq. (45), taking the quotient of these partial results, and applying the inverse tangent function, we get:
Also, by adding Eq. (42) to Eq. (48) and applying the two trigonometric identities \(\cos (x-y)= \cos x \cos y + \sin x \sin y\) and \(\sin (x-y)= \sin x \cos y - \cos x \sin y\) we get:
and
thus,
For the \(\gamma _8\) parameter, we divide \(_8h_6\) by \(_8h_7\) and apply the inverse tangent function to get:
or, alternatively, dividing Eq. (42) by Eq. (41) yields:
In this case, there are no constraints for \(\alpha _8\), \(\beta _8\) and \(\gamma _8\).
Note that, in contrast with the constrained length eight case, the inverse formulas of Eqs. (49)–(53) depend on the filter coefficients exclusively, and not on the other parameters as in Eqs. (37), (38) and (39) of Sect. 2.6.
2.9 Parameterization of the length ten filters with constraints
According to [13], a constrained parameterization for the length ten coefficients is:
where \(r_{10}=\sqrt{\frac{1}{2}-a_1^2-a_2^2-a_3^2-b_1^2-b_2^2-b_3^2}\), and subject to the constraint:
From which we will obtain the inverse formulas for \(\alpha _{10}, \beta _{10}, \gamma _{10}, \theta _{10},\) and \(\delta _{10}\).
2.10 Inverse parameterization of the length ten filters with constraints
For \(\alpha _{10}\) note that the signed summation of Eqs. (55), (57), (59), (61), and (63) produces:
hence,
For \(\gamma _{10}\), the signed summation of Eqs. (55), (59), and (63) gives:
and, subtracting Eq. (61) from Eq. (57) yields:
Then, dividing Eq. (69) by (68) and solving for \(\gamma _{10}\) we get:
For \(\beta _{10}\), once \(\gamma _{10}\) is known, we can solve Eq. (68) as follows:
For \(\theta _{10}\) note that
and
then, dividing Eq. (72) by (73) and solving for \(\theta _{10}\) we get:
Finally, since
and
we obtain:
The constraint for the five parameters \(\alpha _{10}, \beta _{10}, \gamma _{10}, \delta _{10}\) and \(\theta _{10}\) given in Eq. (65) must be fulfilled.
2.11 Parameterization of the length ten filters with no constraints
An alternative parameterization to that of Sect. 2.9 was presented in [14], with no parameter constraints and depending on four parameters \(\alpha _{10}\), \(\beta _{10}\), \(\gamma _{10}\), and \(\delta _{10}\), as shown in Eqs. (78)–(90):
From Eq. (78) to (90) we obtain the inverse formulas for \(\alpha _{10}\), \(\beta _{10}\), \(\gamma _{10}\), and \(\delta _{10}\).
2.12 Inverse parameterization of the length ten filters with no constraints
To solve for \(\alpha _{10}\) see that
and, since \( \cos 2\alpha _{10} - \sin 2\alpha _{10} = \sqrt{2} \cos (2\alpha _{10} + \frac{\pi }{4})\), we get:
To solve for \(\beta _{10}\) see that
and,
hence
To solve for \(\gamma _{10}\) see that
and,
hence
To solve for \(\delta _{10}\), the \(p_{10}\) value is required, so we use Eq. (81) to get:
And, from Eq. (78), we have for \(\delta _{10}\):
Note that there are no constraints on the four parameters \(\alpha _{10}, \beta _{10}, \gamma _{10}\) and \(\delta _{10}\), in contrast with the constrained case of Sect. 2.9.
3 Experiments
In this section, we develop five experiments to validate the inverse formulas and present a parameter repair technique. We also include an application in image compression.
3.1 Experiment 1
In Experiment 1, we deal with the problem of mismatching between the domain and the range of the parameters for which the filter parameterizations and their inverse formulas are defined.
Note that the filter parameterizations are defined over the \(\mathbb {R}\) domain but, in practical applications such as programming language implementations, it is convenient to use a finite interval, such as \([0, 2\pi )\), by appealing to the periodicity of the involved trigonometric functions.
Additionally, note that the ranges of the arcsin, arccos and arctan functions involved in the inverse formulas do not cover the whole \([0, 2\pi )\) interval, so the original parameter values cannot always be recovered directly. To deal with this case, we propose a repair technique, described next.
To repair a parameter we proceed to explore several alternatives by considering the symmetry and periodicity of the trigonometric functions. For example, given the \(\alpha \) parameter obtained from an inverse formula, the next alternatives are considered:
where \({\textit{MOD}}(x)\) is a function that maps \(x \in \mathbb {R}\) to \([0, 2\pi )\), described in Algorithm 1.

The same applies to each parameter. The number of parameter combinations is \(C=\vert A \vert ^P\), where \(\vert A \vert \) is the number of alternatives considered for each parameter and P is the number of free parameters involved in the corresponding filter parameterization.
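The repair procedure of Algorithms 1 and 2 can be sketched as follows (our code; the candidate set in `candidates` is only illustrative, since the actual set A used in the paper may contain more alternatives):

```python
import itertools
import math

TWO_PI = 2.0 * math.pi

def mod_2pi(x):
    """MOD of Algorithm 1: map any real x into [0, 2*pi)."""
    return x - TWO_PI * math.floor(x / TWO_PI)

def candidates(a):
    """Illustrative repair alternatives based on the symmetry and
    periodicity of sin/cos/tan (the paper's set A may be larger)."""
    return sorted({mod_2pi(v) for v in
                   (a, -a, math.pi - a, math.pi + a, TWO_PI - a)})

def repair(inverse_params, coeffs_from_params, target, eps=1e-10):
    """Exhaustive search in the spirit of Algorithm 2: try every
    combination of candidate values until the reconstructed coefficients
    match the target coefficients within eps."""
    pools = [candidates(p) for p in inverse_params]
    for combo in itertools.product(*pools):
        h = coeffs_from_params(*combo)
        if max(abs(a - b) for a, b in zip(h, target)) < eps:
            return combo
    return None
```

For example, if an inverse formula based on arccos returns a value in \([0, \pi ]\) while the true parameter lies in \((\pi , 2\pi )\), one of the candidates recovers it.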
In our experiments, we consider filters of length four and six for constrained and length eight and ten for unconstrained optimization. So, we do not include the constrained cases for length eight and ten filters because a random generation of their parameters in \([0, 2\pi )\) does not necessarily satisfy the restrictions of Eqs. (33) and (65). However, for further information about the optimization of constrained parameterizations, see for example [16].
We executed an exhaustive search algorithm to determine the repaired value of each parameter obtained from the inverse formulas following the steps of Algorithm 2.

With MAX=27000 iterations, and \(\epsilon < 10^{-10}\) we got the results of Table 1, where we can observe that a reduced number of parameter alternatives (column marked with *) are required to repair the parameters with an efficiency of \(100\%\).
The most notable time and search space savings are for the length ten case, where the elapsed time was reduced from 99.167 s to 0.562 s, and the search space was reduced from \(34^4\) to 135 combinations. The experiments were executed on a Linux-based system with 4 Intel Xeon CPUs @ 2.53 GHz and 16 GB RAM.
3.2 Experiment 2
In Experiment 2, we apply the inverse equations and the repair technique of Experiment 1 to calculate parameters of well known filters.
In Table 2 we report the parameters for the Daubechies filters, and in Table 3, we report the parameters for Coiflets and Symlets (see [17]). Of course, all the parameters are in \([0, 2 \pi )\).
We remark on the importance of detecting the cases where the inverse equations are not defined; for example, for the Haar filter the denominator of Eq. (53) is zero, so we apply Eq. (54) instead.
3.3 Experiment 3
In Experiment 3, we illustrate the nesting property of parameterized filters: shorter filters can be seen as special cases of larger filters.
The nesting property can be useful for optimization purposes because shorter filters have a lower number of parameters than larger filters. Therefore, once the parameters of a shorter filter are optimized, the coefficients can be exported to larger filters via the inverse formulas (see [16, 18]).
In Table 2, it is possible to see that the named parameters (\(\alpha , \beta , \gamma \) or \(\delta )\) of shorter filters do not always correspond to those of larger filters for the same nested filter, and the inverse formulas make it possible to export the \(h_k\) coefficients to larger filters to get the right parameter values.
As part of Experiment 3, we apply the nesting property to image compression by comparing two approaches: (1) Nesting optimization: the optimization of parameters of shorter filters and exporting the coefficients to larger filters via the inverse formulas, and (2) Direct optimization: the optimization of the length ten filter parameters with no parameter exportation. Note that we refer to image compression although we do not apply any encoding step, since we have studied the relation between compressibility and the energy concentration in the lowest number of wavelet coefficients [16].
We recall that the discrete wavelet transform of an image with width w and height h can be calculated with horizontal and vertical transformation steps [19], which generates a pyramidal decomposition with four sub-bands HH, HL, LH and LL, each with \(\frac{w}{2} \times \frac{h}{2}\) wavelet coefficients. The HH sub-band stores the most detailed information of the image, whereas the LL sub-band stores the coarsest information. When a second transformation level is applied to the LL sub-band, it generates another four sub-bands, and the process continues while the width and height of the new sub-bands are larger than the parameterized filter.
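A minimal sketch (ours) of one 2D transformation level with periodic extension, assuming an orthonormal filter pair; production implementations may handle boundaries differently:

```python
import numpy as np

def analyze_1d(x, h, g):
    """One analysis step with periodic extension:
    y[n] = sum_k f[k] * x[(2n + k) mod L], for f = h (lowpass)
    and f = g (highpass), i.e. filtering plus downsampling by 2."""
    L = len(x)
    idx = (np.arange(0, L, 2)[:, None] + np.arange(len(h))[None, :]) % L
    return x[idx] @ h, x[idx] @ g

def dwt2_level(img, h, g):
    """One 2D DWT level: transform rows, then columns, giving LL, LH, HL, HH."""
    rows = [analyze_1d(r, h, g) for r in img]
    lo_r = np.array([a for a, _ in rows])
    hi_r = np.array([d for _, d in rows])
    out = []
    for band in (lo_r, hi_r):
        cols = [analyze_1d(c, h, g) for c in band.T]
        out.append((np.array([a for a, _ in cols]).T,
                    np.array([d for _, d in cols]).T))
    (LL, LH), (HL, HH) = out
    return LL, LH, HL, HH

# Example with the Daubechies D4 pair on an 8x8 image.
h = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
              3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2))
g = ((-1.0) ** np.arange(4)) * h[::-1]
img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = dwt2_level(img, h, g)
# Orthonormal filters with circular extension conserve the energy.
assert abs(np.sum(img**2) - sum(np.sum(b**2) for b in (LL, LH, HL, HH))) < 1e-6
```

Applying `dwt2_level` recursively to LL produces the pyramidal decomposition described above.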
As we have already mentioned, we estimate the image compressibility in terms of the concentration of energy in the lowest number of wavelet coefficients that minimizes the entropy in the wavelet domain [16, 20]. The energy of a gray-scale image with \(w \times h\) pixels (in the time domain) is given by:
where \(p_i\) is a pixel value.
After applying the discrete wavelet transform, the wavelet coefficients \(W_i\) conserve the energy, so the energy in the time-frequency domain is:
By sorting the \(w\times h\) wavelet coefficients in descending order, we define the cumulative energy of the signal as follows:
Then, we use a genetic algorithm to optimize the filter parameters to maximize the number of negligible wavelet coefficients (NC), that is, those less than a threshold value (Th). The threshold Th is automatically calculated when the percentage of cumulative energy \(PE = \frac{E_c}{E}\) reaches a predetermined value.
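The counting of negligible coefficients can be sketched as follows (function name ours): the coefficient energies are sorted in descending order, the cumulative sum is compared against \(PE \cdot E\), and the coefficients beyond that point are counted as negligible:

```python
import numpy as np

def negligible_count(W, PE):
    """Count wavelet coefficients below the implicit threshold Th, i.e.
    those not needed to reach the fraction PE of the total energy."""
    e = np.sort(np.abs(W.ravel()))[::-1] ** 2      # energies, descending
    cum = np.cumsum(e)
    k = int(np.searchsorted(cum, PE * cum[-1]))    # index where PE is reached
    return e.size - (k + 1)                        # the rest are negligible

# Toy example: energies sorted are 16, 9, 4, 4e-4, 1e-4, 2.5e-5.
W = np.array([4.0, 0.01, 3.0, 0.02, 0.005, 2.0])
nc = negligible_count(W, 0.999999)
assert nc == 1   # only the smallest coefficient (0.005) is negligible
```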
The fitness function for the genetic algorithm is:
and the genetic algorithm is characterized by [21]:
- A population of even size P, with a temporal population of size 2P.
- Pairwise crossover between the i-th and the \((P-1-i)\)-th sorted individuals, \(i \in [0, P -1]\).
- Deterministic selection: the best P individuals from the temporal population.
- Annular crossover.
Each individual encodes the parameter values in the interval \([0, 2\pi )\), and we set \(PE=0.999999\). The genetic algorithm parameters were \(Pm=0.03\), \(Pc=0.97\), and \(P=8\) individuals, with 30 bits of precision.
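A sketch (ours) of the genetic algorithm described above; where the text does not fix a detail, such as the annular crossover segment length or the per-bit mutation scheme, we make an explicit assumption in the code:

```python
import random

TWO_PI = 6.283185307179586

def decode(bits, n_bits, lo=0.0, hi=TWO_PI):
    """Map each group of n_bits to a parameter value in [lo, hi)."""
    params = []
    for i in range(0, len(bits), n_bits):
        v = int("".join(map(str, bits[i:i + n_bits])), 2)
        params.append(lo + (hi - lo) * v / (1 << n_bits))
    return params

def evolve(fitness, n_bits, n_params, P=8, Pm=0.03, Pc=0.97, G=20, seed=0):
    """Even population P, temporal population 2P, pairing i with P-1-i,
    annular crossover, per-bit mutation, deterministic selection."""
    rng = random.Random(seed)
    L = n_bits * n_params
    pop = [[rng.randint(0, 1) for _ in range(L)] for _ in range(P)]
    for _ in range(G):
        pop.sort(key=fitness, reverse=True)
        children = []
        for i in range(P // 2):
            a, b = pop[i][:], pop[P - 1 - i][:]
            if rng.random() < Pc:
                # Annular crossover: swap a circular segment of the
                # chromosome (segment length assumed uniform in [1, L-1]).
                start, length = rng.randrange(L), rng.randrange(1, L)
                for j in range(length):
                    k = (start + j) % L
                    a[k], b[k] = b[k], a[k]
            for child in (a, b):
                for k in range(L):
                    if rng.random() < Pm:
                        child[k] ^= 1
            children += [a, b]
        # Deterministic selection: best P of the temporal population (2P).
        pop = sorted(pop + children, key=fitness, reverse=True)[:P]
    return pop[0]

# Toy check: maximizing the number of 1-bits climbs towards all ones.
best = evolve(lambda bits: sum(bits), n_bits=8, n_params=2, G=40)
```

In the experiments above, `fitness` would decode the bits into filter parameters with `decode`, build the parameterized filter, apply the wavelet transform, and return NC.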
3.4 Results of Experiment 3
Table 4 shows the results for the Lena image with \(64 \times 64\) pixels. The first column is the genetic algorithm generation, the second column corresponds to the length four filter (F4), the third column to the length six filter (F6), the fourth column to the length eight filter (F8), and the fifth column to the length ten filter (F10), each exporting its best individual to the next larger filter (\(F4 \rightarrow F6 \rightarrow F8 \rightarrow F10\)). The sixth column corresponds to the direct optimization of the length ten filter (F10*).
Note that the nesting property accelerates the convergence and keeps the number of negligible coefficients above that of the direct counterpart along the 20 generations. In each generation, the larger filters dominate the shorter ones, taking advantage of their best individuals.
Also, in Table 4, a similar result is shown for the Lena image of \(128 \times 128\) pixels. The seventh column corresponds to the length four filter (F4), the eighth column to the length six filter (F6), the ninth column to the length eight filter (F8), and the tenth column to the length ten filter (F10), each exporting its best individual to the next larger filter (\(F4 \rightarrow F6 \rightarrow F8 \rightarrow F10\)). The last column corresponds to the direct optimization of the length ten filter (\(F10*\)).
Figure 1 illustrates the data of Table 4: a) for the Lena image with \(64 \times 64\) pixels that reaches \(11.7\%\) of negligible coefficients (left side), and b) for the Lena image of \(128 \times 128\) pixels that reaches \(13.9\%\) of negligible coefficients (right side), with an almost perfect reconstruction preserving \(99.9999\%\) of the energy.
Comparing the percentage of F4 with the percentage of F10 after 20 generations, we have a difference of \(0.53\%\) for Lena \(64\times 64\) and \(1.39\%\) for Lena \(128\times 128\).
For the Lena image with \(64 \times 64\) pixels, the optimized parameters for F10 were \(\alpha _{10}=0.43752732136\), \(\beta _{10}=1.11193750642\), \(\gamma _{10}=1.11193750642\), and \(\delta _{10}=4.35859793125\) (cf. Table 2).
For the Lena image with \(128 \times 128\) pixels, the optimized parameters for F10 were \(\alpha _{10}=0.57094882076\), \(\beta _{10}=0.99754652901\), \(\gamma _{10}=0.99754652901\), and \(\delta _{10}=4.49793963189\) (cf. Table 2).
3.5 Experiment 4
In Experiment 4, we processed the Lena image with \(512 \times 512\) pixels and let the genetic algorithm execute \(G=100\) generations to get a better perspective of the convergence towards the optimal value. We tested with energy conservation values \(PE=0.999999\) and \(PE=0.999995\).
3.6 Results of Experiment 4
Table 5 shows the results for the Lena image with \(512 \times 512\) pixels every 5 generations. Figure 2 illustrates the Table 5 data with the results of all the 100 generations.
On the one hand, note that for a cumulative energy \(PE=0.999999\), the filter F10 produced \(NC=40160\) negligible coefficients (\(15.3\%\)) with an \(RMSE=0.137444\), and for \(PE=0.999995\) the number of negligible coefficients increased to \(NC=67702\) (\(25.8\%\)) with an \(RMSE=0.307334\), so PE can be used as a quality factor.
For the Lena image with \(512\times 512\) pixels, the optimized parameters for F10 were \(\alpha _{10}=2.77460611264\), \(\beta _{10}=1.54593953461\), \(\gamma _{10}=1.53378153259\), and \(\delta _{10}=0.58070206633\) (cf. Table 2).
On the other hand, note that the smallest filter F4 almost reaches its maximum in 10 generations, but it is outperformed by the nested filters (\(F4 \rightarrow F6 \rightarrow F8 \rightarrow F10\)) from the beginning. In fact, F10 is better than F4 by \(1.16\%\) from the first generation, even though both were randomly initialized.
Also, note that the \(F10*\) with no inverse parameterizations follows a similar behavior to F4 in the first 40 generations.
It is possible to see that the application of the inverse parameterizations allows F10 to reach \(99\%\) of the global value in the first 10 generations, a level never reached by the direct optimization \(F10*\).
The average time for each generation was 5.96 s.
Experiment 4 supports the advantages of the inverse parameterizations and the nesting property, by guaranteeing that a larger filter will not be overtaken by a shorter filter and, at the same time, by providing better flexibility in the search for additional solutions without sacrificing the acceleration of convergence to the global solution.
3.7 Experiment 5
In Experiment 5, we processed 28 face images from a repository (see [22]). The images were resized to \(1024 \times 1024\) pixels in grayscale, and each corresponds to a unique person.
To see the convergence towards the optimal value, we executed the genetic algorithm with \(G=100\) generations and a population of \(P=4\) individuals, which spent an average of 23 s per generation.
In this case, we calculated the average of the negligible coefficients over the 28 images for each filter F4, F6, F8, F10 with nesting optimization, and \(F10*\) with direct optimization.
In Table 6, we show these average results for 28 images every 5 generations, and in Fig. 3 (left subfigure) we illustrate the performance of each filter F4, F6, F8, F10 and \(F10*\) for \(G=100\) generations.
In Fig. 3 (right subfigure) we illustrate a box plot with the performance of each filter F4, F6, F8, F10 and \(F10*\).
3.8 Results of Experiment 5
In Fig. 3 (left subfigure) and Table 6, we can see that F10 outperforms all the filters, from the first generation.
Note that both filters, F10 and \(F10*\), converge to the same result after a large number of generations.
After 85 generations, \(F10*\) approaches, but does not overtake, F10. This shows how the inverse formulas and the nesting property accelerate the convergence towards the global optimum, even from the first generation.
Although the small filter F4 converges fast, almost reaching its maximum value in a few generations (bold values in Table 6), it converges asymptotically to a sub-optimal solution. Moreover, F4 never reaches even the worst case of F10 (its first generation).
In the box plot of Fig. 3, we can see that, effectively, F10 outperforms the other filters and provides a consistent improvement of \(3.4\%\) while conserving \(99.9999\%\) of the energy.
Also, in the box plot of Fig. 3, we can see that all the boxes are placed under the F10 box, which corroborates the utility of the parameter repair, the nesting property, and the inverse parameterizations.
As the size of the images increases, the difference in the percentage of negligible wavelet coefficients between small and large filters also increases, so we consider that it can lead to applications with higher resolution images.
4 Conclusions
The parameterization of orthogonal filters has been widely studied, motivated by the problem of filter design and the construction of wavelet-based systems. In this sense, we contribute the inverse formulas to determine the parameters from valid filter coefficients.
One of the main drawbacks of personalizing filters is the time required to compute them. However, since there is evidence that standard wavelets can be outperformed by personalized wavelets, and that short filters lead to sub-optimal solutions, we present a nesting property of parametric filters where the optimization of shorter filters can be used to improve the performance of larger filters by exporting the best coefficients through the inverse formulas. This property can be useful in the optimization of larger filters for applications where their use is justified: for example, in digital watermarking to enhance security, since optimized filters help protect the location of the watermark information; in the classification of neurological signals to enhance accuracy; and in image compression, where we have shown some advantages of using the inverse parameterizations. We expect that other research areas can also benefit from the inverse parameterizations.
Change history
14 September 2018
The article “Inverse formulas of parameterized orthogonal wavelets”, written by Oscar Herrera-Alcántara and Miguel González-Mendoza, was originally published electronically on the publisher’s internet portal (currently SpringerLink) on 25 January 2018 without open access. The original article has been corrected.
References
Mallat SG (1989) A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans Pattern Anal Mach Intell 11(7):674–693
Vetterli M, Herley C (1992) Wavelets and filter banks: theory and design. IEEE Trans Signal Process 40(9):2207–2232
Zou H, Tewfik AH (1993) Parametrization of compactly supported orthonormal wavelets. IEEE Trans Signal Process 41(3):1428–1431
Daubechies I (1992) Ten lectures on wavelets. Society for Industrial and Applied Mathematics, Philadelphia
Pollen D (1990) SU_I(2, F[z,1/z]) for F a subfield of C. J Am Math Soc 3:611–624
Wells RO Jr (1993) Parameterizing smooth compactly supported wavelets. Trans Am Math Soc 338(2):919–931
Schneid J, Pittner S (1993) On the parametrization of the coefficients of dilation equations for compactly supported wavelets. Computing 51:165–173
Lina JM, Mayrand M (1993) Parametrizations for Daubechies wavelets. Phys Rev E 48(6):R4160–R4163
Sabah MA (2008) Optimal selection of threshold levels and wavelet filters for high quality ECG signal compression. J Eng Sci 36(5):1225–1243
Suhail MA, Dawoud MM (2001) Watermarking security enhancement using filter parametrization in feature domain. In: Proceedings of the 2001 workshop on multimedia and security: new challenges (MM&Sec '01), New York. ACM, pp 15–18
Zhang Z, Telesford QK, Giusti C, Lim KO, Bassett DS (2016) Choosing wavelet methods, filters, and lengths for functional brain network construction. PLOS ONE 11(6):e0157243
Lai MJ, Roach DW (2002) Parameterizations of univariate orthogonal wavelets with short support. In: Chui CK, Schumaker LL, Stoeckler J (eds) Approximation theory X: splines, wavelets, and applications. Vanderbilt University Press, Nashville, pp 369–384
Roach DW (2008) The parameterization of the length eight orthogonal wavelets with no parameter constraints. In: Neamtu M, Schumaker LL (eds) Approximation theory XII: San Antonio 2007. Nashboro Press, Brentwood, pp 332–347
Roach DW (2010) Frequency selective parameterized wavelets of length ten. J Concr Appl Math 8(1):165–179
Schneid J, Pittner S (1993) On the parametrization of the coefficients of dilation equations for compactly supported wavelets. Computing 51(2):165–173
Herrera O, González M (2011) Optimization of parameterized compactly supported orthogonal wavelets for data compression. Springer, Berlin, pp 510–521
Soman KP, Ramachandran KI (2005) Insight into wavelets: from theory to practice. Prentice-Hall, New Delhi
Herrera O, Mora R (2011) Aplicación de algoritmos genéticos a la compresión de imágenes con evolets. Sociedad Mexicana de Inteligencia Artificial, Mexico, pp 157–165
Mallat S (1998) A wavelet tour of signal processing. Academic Press Inc., Cambridge
Herrera O (2010) On the best evolutionary wavelet based filter to compress a specific signal. Springer, Berlin, pp 394–405
Kuri A (1999) A comprehensive approach to genetic algorithms in optimization and learning. National Polytechnic Institute, Mexico
Weber M (1999) Frontal face dataset. California Institute of Technology. http://www.vision.caltech.edu/Image_Datasets/Caltech256
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Herrera-Alcántara, O., González-Mendoza, M. Inverse formulas of parameterized orthogonal wavelets. Computing 100, 715–739 (2018). https://doi.org/10.1007/s00607-018-0585-x