Joon-Hyuk Chang
2020 – today
- 2024
- [j125] Yungyeo Kim, Joon-Hyuk Chang: Acoustic-Scene-Aware Target Sound Separation With Sound Embedding Refinement. IEEE Access 12: 71606-71616 (2024)
- [j124] Chee-Hyun Park, Joon-Hyuk Chang: Robust Time-of-Arrival-Based Splitting Mean Moving Object Localization. IEEE Signal Process. Lett. 31: 226-230 (2024)
- [j123] Jaeuk Lee, Yoonsoo Shin, Joon-Hyuk Chang: Differentiable Duration Refinement Using Internal Division for Non-Autoregressive Text-to-Speech. IEEE Signal Process. Lett. 31: 3154-3158 (2024)
- [j122] Jae-Hong Lee, Joon-Hyuk Chang: Partitioning Attention Weight: Mitigating Adverse Effect of Incorrect Pseudo-Labels for Self-Supervised ASR. IEEE ACM Trans. Audio Speech Lang. Process. 32: 891-905 (2024)
- [j121] Jeong-Hwan Choi, Joon-Young Yang, Joon-Hyuk Chang: Efficient Lightweight Speaker Verification With Broadcasting CNN-Transformer and Knowledge Distillation Training of Self-Attention Maps. IEEE ACM Trans. Audio Speech Lang. Process. 32: 4580-4595 (2024)
- [j120] Mun-Hak Lee, Joon-Hyuk Chang: Proper Error Estimation and Calibration for Attention-Based Encoder-Decoder Models. IEEE ACM Trans. Audio Speech Lang. Process. 32: 4919-4930 (2024)
- [c85] Pil Moo Byun, Joon-Hyuk Chang: Generalized Specaugment via Multi-Rectangle Inverse Masking For Acoustic Scene Classification. ICASSP 2024: 841-845
- [c84] Dail Kim, Min-Sang Baek, Yungyeo Kim, Joon-Hyuk Chang: Improving Target Sound Extraction with Timestamp Knowledge Distillation. ICASSP 2024: 1396-1400
- [c83] Donghyun Kim, Yungyeo Kim, Joon-Hyuk Chang: Class: Continual Learning Approach for Speech Super-Resolution. ICASSP 2024: 1401-1405
- [c82] Won-Gook Choi, Donghyun Seong, Joon-Hyuk Chang: Adversarial Learning on Compressed Posterior Space for Non-Iterative Score-based End-to-End Text-to-Speech. ICASSP 2024: 10946-10950
- [c81] Dong-Hyun Kim, Jae-Hong Lee, Joon-Hyuk Chang: Text-Only Unsupervised Domain Adaptation for Neural Transducer-Based ASR Personalization Using Synthesized Data. ICASSP 2024: 11131-11135
- [c80] Jae-Hong Lee, Joon-Hyuk Chang: Continual Momentum Filtering on Parameter Space for Online Test-time Adaptation. ICLR 2024
- [c79] Jae-Hong Lee, Joon-Hyuk Chang: Stationary Latent Weight Inference for Unreliable Observations from Online Test-Time Adaptation. ICML 2024
- [i7] Da-Hee Yang, Dail Kim, Joon-Hyuk Chang, Jeonghwan Choi, Han-gil Moon: DM: Dual-path Magnitude Network for General Speech Restoration. CoRR abs/2409.08702 (2024)
- 2023
- [j119] Chee-Hyun Park, Joon-Hyuk Chang: Robust Localization Method Based on Non-Parametric Probability Density Estimation. IEEE Access 11: 61468-61480 (2023)
- [j118] Da-Hee Yang, Joon-Hyuk Chang: Attention-based latent features for jointly trained end-to-end automatic speech recognition with modified speech enhancement. J. King Saud Univ. Comput. Inf. Sci. 35(3): 202-210 (2023)
- [c78] Jeong-Hwan Choi, Jehyun Kyung, Ju-Seok Seong, Ye-Rin Jeoung, Joon-Hyuk Chang: Extending Self-Distilled Self-Supervised Learning For Semi-Supervised Speaker Verification. ASRU 2023: 1-8
- [c77] Jae-Hong Lee, Do-Hee Kim, Joon-Hyuk Chang: AWMC: Online Test-Time Adaptation Without Mode Collapse for Continual Adaptation. ASRU 2023: 1-8
- [c76] Mun-Hak Lee, Sang-Eon Lee, Ji-Eun Choi, Joon-Hyuk Chang: Cross-Modal Learning for CTC-Based ASR: Leveraging CTC-Bertscore and Sequence-Level Training. ASRU 2023: 1-8
- [c75] Ji-Hwan Mo, Jae-Jin Jeon, Mun-Hak Lee, Joon-Hyuk Chang: Knowledge Distillation From Offline to Streaming Transducer: Towards Accurate and Fast Streaming Model by Matching Alignments. ASRU 2023: 1-7
- [c74] Da-Hee Yang, Joon-Hyuk Chang: Towards Robust Packet Loss Concealment System With ASR-Guided Representations. ASRU 2023: 1-8
- [c73] Jehyun Kyung, Jeong-Hwan Choi, Ju-Seok Seong, Ye-Rin Jeoung, Joon-Hyuk Chang: A Multi-modal Teacher-student Framework for Improved Blood Pressure Estimation. EMBC 2023: 1-5
- [c72] Jae-Heung Cho, Joon-Hyuk Chang: CAN2V: Can-Bus Data-Based Seq2seq Model for Vehicle Velocity Prediction. ICASSP 2023: 1-5
- [c71] Jin-Seong Choi, Jae-Hong Lee, Chae-Won Lee, Joon-Hyuk Chang: M-CTRL: A Continual Representation Learning Framework with Slowly Improving Past Pre-Trained Model. ICASSP 2023: 1-5
- [c70] Sohee Jang, Jiye Kim, Yeon-Ju Kim, Joon-Hyuk Chang: Adaptive Time-Scale Modification for Improving Speech Intelligibility Based On Phoneme Clustering For Streaming Services. ICASSP 2023: 1-5
- [c69] Ye-Rin Jeoung, Joon-Young Yang, Jeong-Hwan Choi, Joon-Hyuk Chang: Improving Transformer-Based End-to-End Speaker Diarization by Assigning Auxiliary Losses to Attention Heads. ICASSP 2023: 1-5
- [c68] Jae-Hong Lee, Dong-Hyun Kim, Joon-Hyuk Chang: Repackagingaugment: Overcoming Prediction Error Amplification in Weight-Averaged Speech Recognition Models Subject to Self-Training. ICASSP 2023: 1-5
- [c67] Ju-Seok Seong, Jeong-Hwan Choi, Jehyun Kyung, Ye-Rin Jeoung, Joon-Hyuk Chang: Noise-Aware Target Extension with Self-Distillation for Robust Speech Recognition. ICASSP 2023: 1-5
- [c66] Da-Hee Yang, Joon-Hyuk Chang: Selective Film Conditioning with CTC-Based ASR Probability for Speech Enhancement. ICASSP 2023: 1-5
- [c65] Yoon-Ah Park, Joon-Hyuk Chang: Audio Captioning Using Semantic Alignment Enhancer. IC-NIDC 2023: 374-378
- [c64] Pil Moo Byun, Joon-Hyuk Chang: Effective Masking Shapes Based Robust Data Augmentation for Acoustic Scene Classification. IC-NIDC 2023: 404-408
- [c63] Donghyun Kim, Joon-Hyuk Chang: Restoration of Face Mask-Induced Speech Intelligibility Degradation Via Neural Bandwidth Extension. IC-NIDC 2023: 409-413
- [c62] Won-Gook Choi, Joon-Hyuk Chang: Resolution Consistency Training on Time-Frequency Domain for Semi-Supervised Sound Event Detection. INTERSPEECH 2023: 286-290
- [c61] Do-Hee Kim, Daeyeol Shim, Joon-Hyuk Chang: General-purpose Adversarial Training for Enhanced Automatic Speech Recognition Model Generalization. INTERSPEECH 2023: 889-893
- [c60] Do-Hee Kim, Ji-Eun Choi, Joon-Hyuk Chang: Intra-ensemble: A New Method for Combining Intermediate Outputs in Transformer-based Automatic Speech Recognition. INTERSPEECH 2023: 2203-2207
- [c59] JungPhil Park, Jeong-Hwan Choi, Yungyeo Kim, Joon-Hyuk Chang: HAD-ANC: A Hybrid System Comprising an Adaptive Filter and Deep Neural Networks for Active Noise Control. INTERSPEECH 2023: 2513-2517
- [c58] Ye-Rin Jeoung, Jeong-Hwan Choi, Ju-Seok Seong, Jehyun Kyung, Joon-Hyuk Chang: Self-Distillation into Self-Attention Heads for Improving Transformer-based End-to-End Neural Speaker Diarization. INTERSPEECH 2023: 3197-3201
- [c57] Min-Sang Baek, Joon-Young Yang, Joon-Hyuk Chang: Deeply Supervised Curriculum Learning for Deep Neural Network-based Sound Source Localization. INTERSPEECH 2023: 3744-3748
- [c56] Jae-Heung Cho, Joon-Hyuk Chang: SR-SRP: Super-Resolution based SRP-PHAT for Sound Source Localization and Tracking. INTERSPEECH 2023: 3769-3773
- [c55] Won-Gook Choi, So-Jeong Kim, Tae-Ho Kim, Joon-Hyuk Chang: Prior-free Guided TTS: An Improved and Efficient Diffusion-based Text-Guided Speech Synthesis. INTERSPEECH 2023: 4289-4293
- [c54] Jehyun Kyung, Ju-Seok Seong, Jeong-Hwan Choi, Ye-Rin Jeoung, Joon-Hyuk Chang: Improving Joint Speech and Emotion Recognition Using Global Style Tokens. INTERSPEECH 2023: 4528-4532
- [c53] Pil Moo Byun, Jeong-Hwan Choi, Joon-Hyuk Chang: Class Activation Mapping-Driven Data Augmentation: Masking Significant Regions for Enhanced Acoustic Scene Classification. WASPAA 2023: 1-5
- [c52] Da-Hee Yang, Donghyun Kim, Joon-Hyuk Chang: Masked Frequency Modeling for Improving Packet Loss Concealment in Speech Transmission Systems. WASPAA 2023: 1-5
- [i6] Ye-Rin Jeoung, Joon-Young Yang, Jeong-Hwan Choi, Joon-Hyuk Chang: Improving Transformer-based End-to-End Speaker Diarization by Assigning Auxiliary Losses to Attention Heads. CoRR abs/2303.01192 (2023)
- 2022
- [j117] Joo-Hyun Lee, Joon-Hyuk Chang, Jae-Mo Yang, Han-Gil Moon: NAS-TasNet: Neural Architecture Search for Time-Domain Speech Separation. IEEE Access 10: 56031-56043 (2022)
- [j116] Chee-Hyun Park, Joon-Hyuk Chang: Robust Localization Based on Mixed-Norm Minimization Criterion. IEEE Access 10: 57080-57093 (2022)
- [j115] Jeong-Hwan Choi, Joon-Hyuk Chang: Supervised Learning Approach for Explicit Spatial Filtering of Speech. IEEE Signal Process. Lett. 29: 1412-1416 (2022)
- [j114] Joon-Young Yang, Joon-Hyuk Chang: VACE-WPE: Virtual Acoustic Channel Expansion Based on Neural Networks for Weighted Prediction Error-Based Speech Dereverberation. IEEE ACM Trans. Audio Speech Lang. Process. 30: 174-189 (2022)
- [j113] Moa Lee, Junmo Lee, Joon-Hyuk Chang: Non-Autoregressive Fully Parallel Deep Convolutional Neural Speech Synthesis. IEEE ACM Trans. Audio Speech Lang. Process. 30: 1150-1159 (2022)
- [j112] Joon-Young Yang, Joon-Hyuk Chang: Task-Specific Optimization of Virtual Channel Linear Prediction-Based Speech Dereverberation Front-End for Far-Field Speaker Verification. IEEE ACM Trans. Audio Speech Lang. Process. 30: 3144-3159 (2022)
- [j111] Chee-Hyun Park, Joon-Hyuk Chang: Revisiting Skipped Filter and Development of Robust Localization Method Based on Variational Bayesian Gaussian Mixture Algorithm. IEEE Trans. Signal Process. 70: 5639-5651 (2022)
- [c51] Won-Gook Choi, Joon-Hyuk Chang: Confidence Regularized Entropy for Polyphonic Sound Event Detection. DCASE 2022
- [c50] Joo-Hyun Lee, Jeong-Hwan Choi, Pil Moo Byun, Joon-Hyuk Chang: Multi-Scale Architecture and Device-Aware Data-Random-Drop Based Fine-Tuning Method for Acoustic Scene Classification. DCASE 2022
- [c49] Mun-Hak Lee, Joon-Hyuk Chang: Knowledge Distillation from Language Model to Acoustic Model: A Hierarchical Multi-Task Learning Approach. ICASSP 2022: 8392-8396
- [c48] Mun-Hak Lee, Joon-Hyuk Chang, Sang-Eon Lee, Ju-Seok Seong, Chanhee Park, Haeyoung Kwon: Regularizing Transformer-based Acoustic Models by Penalizing Attention Weights. INTERSPEECH 2022: 56-60
- [c47] Jeong-Hwan Choi, Joon-Young Yang, Ye-Rin Jeoung, Joon-Hyuk Chang: Improved CNN-Transformer using Broadcasted Residual Learning for Text-Independent Speaker Verification. INTERSPEECH 2022: 2223-2227
- [c46] Joon-Hyuk Chang, Won-Gook Choi: Convolutional Recurrent Neural Network with Auxiliary Stream for Robust Variable-Length Acoustic Scene Classification. INTERSPEECH 2022: 2418-2422
- [c45] Jeong-Hwan Choi, Joon-Young Yang, Ye-Rin Jeoung, Joon-Hyuk Chang: HYU Submission for the SASV Challenge 2022: Reforming Speaker Embeddings with Spoofing-Aware Conditioning. INTERSPEECH 2022: 2873-2877
- [c44] Jaeuk Lee, Joon-Hyuk Chang: One-Shot Speaker Adaptation Based on Initialization by Generative Adversarial Networks for TTS. INTERSPEECH 2022: 2978-2982
- [c43] Jaeuk Lee, Joon-Hyuk Chang: Advanced Speaker Embedding with Predictive Variance of Gaussian Distribution for Speaker Adaptation in TTS. INTERSPEECH 2022: 2988-2992
- [c42] Dong-Hyun Kim, Jae-Hong Lee, Ji-Hwan Mo, Joon-Hyuk Chang: W2V2-Light: A Lightweight Version of Wav2vec 2.0 for Automatic Speech Recognition. INTERSPEECH 2022: 3038-3042
- [c41] Jae-Hong Lee, Chae Won Lee, Jin-Seong Choi, Joon-Hyuk Chang, Woo Kyeong Seong, Jeonghan Lee: CTRL: Continual Representation Learning to Transfer Information of Pre-trained for WAV2VEC 2.0. INTERSPEECH 2022: 3398-3402
- [c40] Da-Hee Yang, Joon-Hyuk Chang: FiLM Conditioning with Enhanced Feature to the Transformer-based End-to-End Noisy Speech Recognition. INTERSPEECH 2022: 4098-4102
- [c39] Min-Kyung Kim, Joon-Hyuk Chang: Adversarial and Sequential Training for Cross-lingual Prosody Transfer TTS. INTERSPEECH 2022: 4556-4560
- [i5] Won-Gook Choi, Joon-Hyuk Chang, Jae-Mo Yang, Han-Gil Moon: Instance-level loss based multiple-instance learning for acoustic scene classification. CoRR abs/2203.08439 (2022)
- 2021
- [j110] Chee-Hyun Park, Joon-Hyuk Chang: Modified MM Algorithm and Bayesian Expectation Maximization-Based Robust Localization Under NLOS Contaminated Environments. IEEE Access 9: 4059-4071 (2021)
- [j109] Sung-Woong Hwang, Joon-Hyuk Chang: Document-Level Neural TTS Using Curriculum Learning and Attention Masking. IEEE Access 9: 8954-8960 (2021)
- [j108] Tae-Jun Park, Joon-Hyuk Chang: Deep Q-network-based noise suppression for robust speech recognition. Turkish J. Electr. Eng. Comput. Sci. 29(5): 2362-2373 (2021)
- [j107] Seong-Chel Park, Kwan-Ho Park, Joon-Hyuk Chang: Luminance-Degradation Compensation Based on Multistream Self-Attention to Address Thin-Film Transistor-Organic Light Emitting Diode Burn-In. Sensors 21(9): 3182 (2021)
- [j106] Jin-Young Son, Joon-Hyuk Chang: Attention-Based Joint Training of Noise Suppression and Sound Event Detection for Noise-Robust Classification. Sensors 21(20): 6718 (2021)
- [j105] Chee-Hyun Park, Joon-Hyuk Chang: Robust Localization Employing Weighted Least Squares Method Based on MM Estimator and Kalman Filter With Maximum Versoria Criterion. IEEE Signal Process. Lett. 28: 1075-1079 (2021)
- [c38] Jeong-Hwan Choi, Joon-Young Yang, Joon-Hyuk Chang: Short-Utterance Embedding Enhancement Method Based on Time Series Forecasting Technique for Text-Independent Speaker Verification. ASRU 2021: 130-137
- [c37] Jaeuk Lee, Jiye Kim, Joon-Hyuk Chang: Zero-Shot Voice Cloning Using Variational Embedding with Attention Mechanism. IC-NIDC 2021: 344-348
- [c36] Mun-Hak Lee, Joon-Hyuk Chang: Deep Neural Network Calibration for E2E Speech Recognition System. Interspeech 2021: 4064-4068
- [c35] Jung-Hee Kim, Jeong-Hwan Choi, Jinyoung Son, Gyeong-Su Kim, Jihwan Park, Joon-Hyuk Chang: MIMO Noise Suppression Preserving Spatial Cues for Sound Source Localization in Mobile Robot. ISCAS 2021: 1-5
- [i4] Jae-Hong Lee, Joon-Hyuk Chang: Attribution Mask: Filtering Out Irrelevant Features By Recursively Focusing Attention on Inputs of DNNs. CoRR abs/2102.07332 (2021)
- [i3] Mun-Hak Lee, Joon-Hyuk Chang: Knowledge distillation from language model to acoustic model: a hierarchical multi-task learning approach. CoRR abs/2110.10429 (2021)
- 2020
- [j104] Inyoung Hwang, Joon-Hyuk Chang: End-to-End Speech Endpoint Detection Utilizing Acoustic and Language Modeling Knowledge for Online Low-Latency Speech Recognition. IEEE Access 8: 161109-161123 (2020)
- [j103] Jung-Hee Kim, Jeong-Hwan Choi, Sang Won Nam, Joon-Hyuk Chang: Delayless Block Individual-Weighting-Factors Sign Subband Adaptive Filters With an Improved Band-Dependent Variable Step-Size. IEEE Access 8: 185796-185803 (2020)
- [j102] Kyoung Jin Noh, Joon-Hyuk Chang: Deep neural network ensemble for reducing artificial noise in bandwidth extension. Digit. Signal Process. 102: 102760 (2020)
- [j101] Chee-Hyun Park, Joon-Hyuk Chang: Robust range estimation algorithm based on hyper-tangent loss function. IET Signal Process. 14(5): 314-321 (2020)
- [j100] Kyoung Jin Noh, Joon-Hyuk Chang: Joint Optimization of Deep Neural Network-Based Dereverberation and Beamforming for Sound Event Detection in Multi-Channel Environments. Sensors 20(7): 1883 (2020)
- [j99] Song-Kyu Park, Joon-Hyuk Chang: Multi-TALK: Multi-Microphone Cross-Tower Network for Jointly Suppressing Acoustic Echo and Background Noise. Sensors 20(22): 6493 (2020)
- [j98] Kwang-Sub Song, Ku-young Chung, Joon-Hyuk Chang: Cuffless Deep Learning-Based Blood Pressure Estimation for Smart Wristwatches. IEEE Trans. Instrum. Meas. 69(7): 4292-4302 (2020)
- [j97] Chee-Hyun Park, Joon-Hyuk Chang: Robust Localization Based on ML-Type, Multi-Stage ML-Type, and Extrapolated Single Propagation UKF Methods Under Mixed LOS/NLOS Conditions. IEEE Trans. Wirel. Commun. 19(9): 5819-5832 (2020)
- [c34] Joon-Young Yang, Joon-Hyuk Chang: Virtual Acoustic Channel Expansion Based on Neural Networks for Weighted Prediction Error-Based Speech Dereverberation. INTERSPEECH 2020: 3930-3934
- [c33] Jung-Hee Kim, Joon-Hyuk Chang: Attention Wave-U-Net for Acoustic Echo Cancellation. INTERSPEECH 2020: 3969-3973
2010 – 2019
- 2019
- [j96] Chee-Hyun Park, Joon-Hyuk Chang: WLS Localization Using Skipped Filter, Hampel Filter, Bootstrapping and Gaussian Mixture EM in LOS/NLOS Conditions. IEEE Access 7: 35919-35928 (2019)
- [j95] Chee-Hyun Park, Joon-Hyuk Chang: Robust LMedS-Based WLS and Tukey-Based EKF Algorithms Under LOS/NLOS Mixture Conditions. IEEE Access 7: 148198-148207 (2019)
- [j94] Moa Lee, Jee-Hye Lee, Joon-Hyuk Chang: Ensemble of jointly trained deep neural network-based acoustic models for reverberant speech recognition. Digit. Signal Process. 85: 1-9 (2019)
- [j93] Chee-Hyun Park, Joon-Hyuk Chang: Shrinkage sinusoidal phase estimation based on spherical simplex unscented transform and bootstrap method. Trans. Emerg. Telecommun. Technol. 30(7) (2019)
- [j92] Jihwan Park, Joon-Hyuk Chang: State-Space Microphone Array Nonlinear Acoustic Echo Cancellation Using Multi-Microphone Near-End Speech Covariance. IEEE ACM Trans. Audio Speech Lang. Process. 27(10): 1520-1534 (2019)
- [j91] Kwang-Sub Song, Joon-Hyuk Chang: Smart Wristwatches Employing Finger-Conducted Voice Transmission System. IEEE Trans. Ind. Informatics 15(2): 965-975 (2019)
- [j90] Chee-Hyun Park, Joon-Hyuk Chang: Robust Shrinkage Range Estimation Algorithms Based on Hampel and Skipped Filters. Wirel. Commun. Mob. Comput. 2019: 6592406:1-6592406:9 (2019)
- [c32] Junmo Lee, Kwang-Sub Song, Kyoung Jin Noh, Tae-Jun Park, Joon-Hyuk Chang: DNN based multi-speaker speech synthesis with temporal auxiliary speaker ID embedding. ICEIC 2019: 1-4
- [c31] Joon-Young Yang, Joon-Hyuk Chang: Joint Optimization of Neural Acoustic Beamforming and Dereverberation with x-Vectors for Robust Speaker Verification. INTERSPEECH 2019: 4075-4079
- 2018
- [j89] Bong-Ki Lee, Kyoung Jin Noh, Joon-Hyuk Chang, Kihyun Choo, Eunmi Oh: Sequential Deep Neural Networks Ensemble for Speech Bandwidth Extension. IEEE Access 6: 27039-27047 (2018)
- [j88] Tae-Jun Park, Joon-Hyuk Chang: Dempster-Shafer theory for enhanced statistical model-based voice activity detection. Comput. Speech Lang. 47: 47-58 (2018)
- [j87] Chee-Hyun Park, Joon-Hyuk Chang: Sequential source localisation and range estimation based on shrinkage algorithm. IET Signal Process. 12(2): 182-187 (2018)
- [c30] Moa Lee, Joon-Hyuk Chang: DNN-based Speech Recognition System dealing with Motor State as Auxiliary Information of DNN for Head Shaking Robot. IROS 2018: 1859-1863
- [i2] Moa Lee, Joon-Hyuk Chang: Augmenting Bottleneck Features of Deep Neural Network Employing Motor State for Speech Recognition at Humanoid Robots. CoRR abs/1808.08702 (2018)
- 2017
- [j86] Soojeong Lee, Joon-Hyuk Chang: Deep Belief Networks Ensemble for Blood Pressure Estimation. IEEE Access 5: 9962-9972 (2017)
- [j85] Soojeong Lee, Sreeraman Rajan, Gwanggil Jeon, Joon-Hyuk Chang, Hilmi R. Dajani, Voicu Z. Groza: Oscillometric blood pressure estimation by combining nonparametric bootstrap with Gaussian mixture model. Comput. Biol. Medicine 85: 112-124 (2017)
- [j84] Soojeong Lee, Joon-Hyuk Chang: Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation. Comput. Methods Programs Biomed. 151: 1-13 (2017)
- [j83] Ji-Won Cho, Jong-Hyeon Park, Joon-Hyuk Chang, Hyung-Min Park: Bayesian feature enhancement using independent vector analysis and reverberation parameter re-estimation for noisy reverberant speech recognition. Comput. Speech Lang. 46: 496-516 (2017)
- [j82] Chee-Hyun Park, Joon-Hyuk Chang: TOA source localization and DOA estimation algorithms using prior distribution for calibrated source. Digit. Signal Process. 71: 61-68 (2017)
- [j81] Chee-Hyun Park, Joon-Hyuk Chang: Adaptive robust time-of-arrival source localization algorithm based on variable step size weighted block Newton method. EURASIP J. Wirel. Commun. Netw. 2017: 121 (2017)
- [j80] Soojeong Lee, Joon-Hyuk Chang: Spectral difference for statistical model-based speech enhancement in speech recognition. Multim. Tools Appl. 76(23): 24917-24929 (2017)
- [j79] Ku-young Chung, Kwang-Sub Song, Kangsoo Shin, Jinho Sohn, Seok Hyun Cho, Joon-Hyuk Chang: Noncontact Sleep Study by Multi-Modal Sensor Fusion. Sensors 17(7): 1685 (2017)
- [j78] Soojeong Lee, Joon-Hyuk Chang: Oscillometric Blood Pressure Estimation Based on Deep Learning. IEEE Trans. Ind. Informatics 13(2): 461-472 (2017)
- 2016
- [j77] Inyoung Hwang, Hyung-Min Park, Joon-Hyuk Chang: Ensemble of deep neural networks using acoustic environment classification for statistical model-based voice activity detection. Comput. Speech Lang. 38: 1-12 (2016)
- [j76] Bong-Ki Lee, Joon-Hyuk Chang: Novel adaptive muting technique for packet loss concealment of ITU-T G.722 using optimized parametric shaping functions. EURASIP J. Audio Speech Music. Process. 2016: 11 (2016)
- [j75] Chee-Hyun Park, Joon-Hyuk Chang: Robust time-of-arrival source localization employing error covariance of sample mean and sample median in line-of-sight/non-line-of-sight mixture environments. EURASIP J. Adv. Signal Process. 2016: 89 (2016)
- [j74] Chee-Hyun Park, Joon-Hyuk Chang: Closed-form two-step weighted-least-squares-based time-of-arrival source localisation using invariance property of maximum likelihood estimator in multiple-sample environment. IET Commun. 10(10): 1206-1213 (2016)
- [j73] Chee-Hyun Park, Joon-Hyuk Chang: Time-of-arrival source localization based on weighted least squares estimator in line-of-sight/non-line-of-sight mixture environments. Int. J. Distributed Sens. Networks 12(12) (2016)
- [j72] Soojeong Lee, Joon-Hyuk Chang: On using multivariate polynomial regression model with spectral difference for statistical model-based speech enhancement. J. Syst. Archit. 64: 76-85 (2016)
- [j71] Bong-Ki Lee, Joon-Hyuk Chang: Packet Loss Concealment Based on Deep Neural Networks for Digital Speech Transmission. IEEE ACM Trans. Audio Speech Lang. Process. 24(2): 378-387 (2016)
- [j70] Soojeong Lee, Chee-Hyun Park, Joon-Hyuk Chang: Improved Gaussian Mixture Regression Based on Pseudo Feature Generation Using Bootstrap in Blood Pressure Estimation. IEEE Trans. Ind. Informatics 12(6): 2269-2280 (2016)
- [j69] Chee-Hyun Park, Joon-Hyuk Chang: Closed-Form Localization for Distributed MIMO Radar Systems Using Time Delay Measurements. IEEE Trans. Wirel. Commun. 15(2): 1480-1490 (2016)
- [c29] Seng Hyun Huang, Jihwan Park, Joon-Hyuk Chang: Dual-microphone voice activity detection based on using optimally weighted maximum a posteriori probabilities. ICASSP 2016: 5360-5364
- [c28] Myungin Lee, Joon-Hyuk Chang: Blind estimation of reverberation time using deep neural network. IC-NIDC 2016: 308-311
- [c27] Ku-young Chung, Bong-Ki Lee, Kwang-Sub Song, Kangsoo Shin, Seok Hyun Cho, Joon-Hyuk Chang: An experimental study: The sufficient respiration rate detection technique via continuous wave Doppler radar. IC-NIDC 2016: 471-475
- [i1] Jee-Hye Lee, Myungin Lee, Joon-Hyuk Chang: Ensemble of Jointly Trained Deep Neural Network-Based Acoustic Models for Reverberant Speech Recognition. CoRR abs/1608.04983 (2016)
- 2015
- [j68] Soojeong Lee, Sreeraman Rajan, Chee-Hyun Park, Joon-Hyuk Chang, Hilmi R. Dajani, Voicu Z. Groza: Estimated confidence interval from single blood pressure measurement based on algorithmic fusion. Comput. Biol. Medicine 62: 154-163 (2015)
- [j67] Chee-Hyun Park, Soojeong Lee, Joon-Hyuk Chang: Shrinkage-based biased signal-to-noise ratio estimator using pilot and data symbols for linearly modulated signals. IET Commun. 9(11): 1388-1395 (2015)
- [j66] Chungsoo Lim, Joon-Hyuk Chang: Efficient implementation techniques of an SVM-based speech/music classifier in SMV. Multim. Tools Appl. 74(15): 5375-5400 (2015)
- [j65] Chee-Hyun Park, Soojeong Lee, Joon-Hyuk Chang: Robust closed-form time-of-arrival source localization based on α-trimmed mean and Hodges-Lehmann estimator under NLOS environments. Signal Process. 111: 113-123 (2015)
- [c26] Inyoung Hwang, Jaeseong Sim, Sang-Hyeon Kim, Kwang-Sub Song, Joon-Hyuk Chang: A statistical model-based voice activity detection using multiple DNNs and noise awareness. INTERSPEECH 2015: 2277-2281
- 2014
- [j64] Chee-Hyun Park, Joon-Hyuk Chang: Shrinkage estimation-based source localization with minimum mean squared error criterion and minimum bias criterion. Digit. Signal Process. 29: 100-106 (2014)
- [j63] Soojeong Lee, Chungsoo Lim, Joon-Hyuk Chang: A new a priori SNR estimator based on multiple linear regression technique for speech enhancement. Digit. Signal Process. 30: 154-164 (2014)
- [j62] Chungsoo Lim, Soojeong Lee, Jae-Hun Choi, Joon-Hyuk Chang: Efficient Implementation of Statistical Model-Based Voice Activity Detection Using Taylor Series Approximation. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 97-A(3): 865-868 (2014)
- [j61] Chee-Hyun Park, Kwang-Seok Hong, Sang Won Nam, Joon-Hyuk Chang: Biased SNR estimation using pilot and data symbols in BPSK and QPSK systems. J. Commun. Networks 16(6): 583-591 (2014)
- [j60] Chungsoo Lim, Jae-Hoon Choi, Sang Won Nam, Joon-Hyuk Chang: A new television audience measurement framework using smart devices. Multim. Tools Appl. 73(3): 1757-1776 (2014)
- [j59] Jihwan Park, Joon-Hyuk Chang: Frequency-Domain Volterra Filter Based on Data-Driven Soft Decision for Nonlinear Acoustic Echo Suppression. IEEE Signal Process. Lett. 21(9): 1088-1092 (2014)
- [j58] Jae-Hun Choi, Joon-Hyuk Chang: Dual-Microphone Voice Activity Detection Technique Based on Two-Step Power Level Difference Ratio. IEEE ACM Trans. Audio Speech Lang. Process. 22(6): 1069-1081 (2014)
- [c25] Inyoung Hwang, Joon-Hyuk Chang: Voice Activity Detection Based on Statistical Model Employing Deep Neural Network. IIH-MSP 2014: 582-585
- [c24] Bong-Ki Lee, Inyoung Hwang, Jihwan Park, Joon-Hyuk Chang: Enhanced muting method in packet loss concealment of ITU-T G.722 using sigmoid function with on-line optimized parameters. INTERSPEECH 2014: 2814-2818
- 2013
- [j57] Joon-Hyuk Chang: Noisy speech enhancement based on improved minimum statistics incorporating acoustic environment-awareness. Digit. Signal Process. 23(4): 1233-1238 (2013)
- [j56] Tae-Ho Jung, Jung Hee Kim, Joon-Hyuk Chang, Sang Won Nam: Online Sparse Volterra System Identification Using Projections onto Weighted l1 Balls. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 96-A(10): 1980-1983 (2013)
- [j55] Jong-Woong Kim, Joon-Hyuk Chang, Sang Won Nam, Dong Kook Kim, Jong Won Shin: Improved Speech-Presence Uncertainty Estimation Based on Spectral Gradient for Global Soft Decision-Based Speech Enhancement. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 96-A(10): 2025-2028 (2013)
- [j54] Soojeong Lee, Joon-Hyuk Chang, Sang Won Nam, Chungsoo Lim, Sreeraman Rajan, Hilmi R. Dajani, Voicu Z. Groza: Oscillometric Blood Pressure Estimation Based on Maximum Amplitude Algorithm Employing Gaussian Mixture Regression. IEEE Trans. Instrum. Meas. 62(12): 3387-3389 (2013)
- [c23] Bong-Ki Lee, Chungsoo Lim, Jihwan Park, Joon-Hyuk Chang: Enhanced muting method in packet loss concealment of ITU-T G.722 employing optimized sigmoid function. INTERSPEECH 2013: 3458-3462
- 2012
- [j53] Yun-Sik Park, Joon-Hyuk Chang: Integrated acoustic echo and background noise suppression technique based on soft decision. EURASIP J. Adv. Signal Process. 2012: 11 (2012)
- [j52] Chungsoo Lim, Joon-Hyuk Chang: Improvement of SVM-Based Speech/Music Classification Using Adaptive Kernel Technique. IEICE Trans. Inf. Syst. 95-D(3): 888-891 (2012)
- [j51] Jae-Hun Choi, Joon-Hyuk Chang: A Statistical Model-Based Speech Enhancement Using Acoustic Noise Classification for Robust Speech Communication. IEICE Trans. Commun. 95-B(7): 2513-2516 (2012)
- [j50] Chungsoo Lim, Joon-Hyuk Chang: Enhancing support vector machine-based speech/music classification using conditional maximum a posteriori criterion. IET Signal Process. 6(4): 335-340 (2012)
- [j49] Sang-Kyun Kim, Joon-Hyuk Chang: Voice activity detection based on conditional MAP criterion incorporating the spectral gradient. Signal Process. 92(7): 1699-1705 (2012)
- [j48] Jae-Hun Choi, Joon-Hyuk Chang: On using acoustic environment classification for statistical model-based speech enhancement. Speech Commun. 54(3): 477-490 (2012)
- [j47] Chungsoo Lim, Seong-Ro Lee, Joon-Hyuk Chang: Efficient implementation of an SVM-based speech/music classifier by enhancing temporal locality in support vector references. IEEE Trans. Consumer Electron. 58(3): 898-904 (2012)
- [c22] Chungsoo Lim, Seong-Ro Lee, Yeonwoo Lee, Joon-Hyuk Chang: New techniques for improving the practicality of an SVM-based speech/music classifier. ICASSP 2012: 1657-1660
- [c21] Jae-Hun Choi, Sang-Kyun Kim, Joon-Hyuk Chang: Adaptive noise power estimation using spectral difference for robust speech enhancement. ICASSP 2012: 4649-4652
- [c20] Jae-Hun Choi, Joon-Hyuk Chang: On using spectral gradient in conditional MAP criterion for robust voice activity detection. IC-NIDC 2012: 370-374
- 2011
- [j46] Jae-Hun Choi, Joon-Hyuk Chang, Dong Kook Kim, Suhyun Kim: Speech Enhancement Based on Adaptive Noise Power Estimation Using Spectral Difference. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 94-A(10): 2031-2034 (2011)
- [j45] Yu Gwang Jin, Nam Soo Kim, Joon-Hyuk Chang: Speech Enhancement Based on Data-Driven Residual Gain Estimation. IEICE Trans. Inf. Syst. 94-D(12): 2537-2540 (2011)
- [j44] Woojung Lee, Ji-Hyun Song, Joon-Hyuk Chang: Minima-controlled speech presence uncertainty tracking method for speech enhancement. Signal Process. 91(1): 155-161 (2011)
- [c19] Jae-Hun Choi, Sang-Kyun Kim, Joon-Hyuk Chang: A Soft Decision-Based Speech Enhancement Using Acoustic Noise Classification. INTERSPEECH 2011: 1193-1196
- 2010
- [j43] Jong Won Shin, Joon-Hyuk Chang, Nam Soo Kim: Voice activity detection based on statistical models and machine learning approaches. Comput. Speech Lang. 24(3): 515-530 (2010)
- [j42] Sang-Ick Kang, Joon-Hyuk Chang: Voice Activity Detection Based on Discriminative Weight Training Incorporating a Spectral Flatness Measure. Circuits Syst. Signal Process. 29(2): 183-194 (2010)
- [j41] Jong-Mo Kum, Yun-Sik Park, Joon-Hyuk Chang: Improved minima controlled recursive averaging technique using conditional maximum a posteriori criterion for speech enhancement. Digit. Signal Process. 20(6): 1572-1578 (2010)
- [j40] Sang-Kyun Kim, Joon-Hyuk Chang: Discriminative Weight Training for Support Vector Machine-Based Speech/Music Classification in 3GPP2 SMV Codec. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 93-A(1): 316-319 (2010)
- [j39] Jong-Mo Kum, Joon-Hyuk Chang: Improved Global Soft Decision Incorporating Second-Order Conditional MAP in Speech Enhancement. IEICE Trans. Inf. Syst. 93-D(6): 1652-1655 (2010)
- [j38] Jae-Hun Choi, Joon-Hyuk Chang, Seong-Ro Lee: Efficient Speech Reinforcement Based on Low-Bit-Rate Speech Coding Parameters. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 93-A(9): 1684-1687 (2010)
- [j37] Yun-Sik Park, Joon-Hyuk Chang: Double-talk detection based on soft decision for acoustic echo suppression. Signal Process. 90(5): 1737-1741 (2010)
- [j36] Kyu-Ho Lee, Joon-Hyuk Chang, Nam Soo Kim, Sangki Kang, Yongserk Kim: Frequency-Domain Double-Talk Detection Based on the Gaussian Mixture Model. IEEE Signal Process. Lett. 17(5): 453-456 (2010)
- [c18] Yun-Sik Park, Ji-Hyun Song, Sang-Ick Kang, Woojung Lee, Joon-Hyuk Chang: A statistical model-based double-talk detection incorporating soft decision. ICASSP 2010: 5082-5085
- [c17] Ji-Hyun Song, Kyu-Ho Lee, Yun-Sik Park, Sang-Ick Kang, Joon-Hyuk Chang: On using Gaussian mixture model for double-talk detection in acoustic echo suppression. INTERSPEECH 2010: 2778-2781
- [c16] Sang-Kyun Kim, Jae-Hun Choi, Sang-Ick Kang, Ji-Hyun Song, Joon-Hyuk Chang: Toward detecting voice activity employing soft decision in second-order conditional MAP. INTERSPEECH 2010: 3082-3085
2000 – 2009
- 2009
- [j35] Sang-Ick Kang, Joon-Hyuk Chang: Discriminative weight training-based optimally weighted MFCC for gender identification. IEICE Electron. Express 6(19): 1374-1379 (2009)
- [j34] Sang-Kyun Kim, Joon-Hyuk Chang: Speech/Music Classification Enhancement for 3GPP2 SMV Codec Based on Support Vector Machine. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 92-A(2): 630-632 (2009)
- [j33] Kye-Hwan Lee, Joon-Hyuk Chang: Acoustic Environment Classification Based on SMV Speech Codec Parameters for Context-Aware Mobile Phone. IEICE Trans. Inf. Syst. 92-D(7): 1491-1495 (2009)
- [j32] Jae-Hun Choi, Woo-Sang Park, Joon-Hyuk Chang: Speech Reinforcement Based on Soft Decision under Far-End Noise Environments. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 92-A(8): 2116-2119 (2009)
- [j31] Ji-Hyun Song, Joon-Hyuk Chang: Efficient Implementation of Voiced/Unvoiced Sounds Classification Based on GMM for SMV Codec. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 92-A(8): 2120-2123 (2009)
- [j30] Yun-Sik Park, Joon-Hyuk Chang: Frequency Domain Acoustic Echo Suppression Based on Soft Decision. IEEE Signal Process. Lett. 16(1): 53-56 (2009)
- [j29] Joon-Hyuk Chang, Q-Haing Jo, Dong Kook Kim, Nam Soo Kim: Global Soft Decision Employing Support Vector Machine For Speech Enhancement. IEEE Signal Process. Lett. 16(1): 57-60 (2009)
- [j28] Jong-Mo Kum, Joon-Hyuk Chang: Speech Enhancement Based on Minima Controlled Recursive Averaging Incorporating Second-Order Conditional MAP Criterion. IEEE Signal Process. Lett. 16(7): 624-627 (2009)
- [c15] Jong-Mo Kum, Yun-Sik Park, Joon-Hyuk Chang: Speech enhancement based on minima controlled recursive averaging incorporating conditional maximum a posteriori criterion. ICASSP 2009: 4417-4420
- [c14] Yun-Sik Park, Ji-Hyun Song, Jae-Hun Choi, Joon-Hyuk Chang: Enhanced minimum statistics technique incorporating soft decision for noise suppression. INTERSPEECH 2009: 1347-1350
- [c13] Yun-Sik Park, Ji-Hyun Song, Jae-Hun Choi, Joon-Hyuk Chang: Soft decision-based acoustic echo suppression in a frequency domain. INTERSPEECH 2009: 2599-2602
- 2008
- [j27]Jong Kyu Kim, Jung Su Kim, Hwan Sik Yun, Joon-Hyuk Chang, Nam Soo Kim:
Frame Splitting Scheme for Error-Robust Audio Streaming over Packet-Switching Networks. IEICE Trans. Commun. 91-B(2): 677-680 (2008) - [j26]Q-Haing Jo, Yun-Sik Park, Kye-Hwan Lee, Joon-Hyuk Chang:
A Support Vector Machine-Based Voice Activity Detection Employing Effective Feature Vectors. IEICE Trans. Commun. 91-B(6): 2090-2093 (2008) - [j25]Kye-Hwan Lee, Sang-Ick Kang, Deok-Hwan Kim, Joon-Hyuk Chang:
A Support Vector Machine-Based Gender Identification Using Speech Signal. IEICE Trans. Commun. 91-B(10): 3326-3329 (2008) - [j24]Yun-Sik Park, Joon-Hyuk Chang:
A Probabilistic Combination Method of Minimum Statistics and Soft Decision for Robust Noise Power Estimation in Speech Enhancement. IEEE Signal Process. Lett. 15: 95-98 (2008) - [j23]Ji-Hyun Song, Kye-Hwan Lee, Joon-Hyuk Chang, Jong Kyu Kim, Nam Soo Kim:
Analysis and Improvement of Speech/Music Classification for 3GPP2 SMV Based on GMM. IEEE Signal Process. Lett. 15: 103-106 (2008) - [j22]Sang-Ick Kang, Q-Haing Jo, Joon-Hyuk Chang:
Discriminative Weight Training for a Statistical Model-Based Voice Activity Detection. IEEE Signal Process. Lett. 15: 170-173 (2008) - [c12]Sang-Ick Kang, Ji-Hyun Song, Kye-Hwan Lee, Yun-Sik Park, Joon-Hyuk Chang:
A statistical model-based voice activity detection employing minimum classification error technique. INTERSPEECH 2008: 103-106 - [c11]Kye-Hwan Lee, Sang-Ick Kang, Ji-Hyun Song, Joon-Hyuk Chang:
Group delay function for improved gender identification. INTERSPEECH 2008: 1513-1516 - 2007
- [j21]Joon-Hyuk Chang:
Complex laplacian probability density function for noisy speech enhancement. IEICE Electron. Express 4(8): 245-250 (2007) - [j20]Joon-Hyuk Chang, Hyoung-Gon Kim, Sangki Kang:
Residual echo reduction based on MMSE estimator in acoustic echo canceller. IEICE Electron. Express 4(24): 762-767 (2007) - [j19]Yun-Sik Park, Joon-Hyuk Chang:
A Novel Approach to a Robust a Priori SNR Estimator in Speech Enhancement. IEICE Trans. Commun. 90-B(8): 2182-2185 (2007) - [j18]Joon-Hyuk Chang, Dong Seok Jeong, Nam Soo Kim, Sangki Kang:
Improved Global Soft Decision Using Smoothed Global Likelihood Ratio for Speech Enhancement. IEICE Trans. Commun. 90-B(8): 2186-2189 (2007) - [j17]Jong Won Shin, Joon-Hyuk Chang, Nam Soo Kim:
Speech Enhancement Based on Perceptually Comfortable Residual Noise. IEICE Trans. Commun. 90-B(11): 3323-3326 (2007) - [j16]Joon-Hyuk Chang, Saeed Gazor, Nam Soo Kim, Sanjit K. Mitra:
Multiple statistical models for soft decision in noisy speech enhancement. Pattern Recognit. 40(3): 1123-1134 (2007) - [j15]Jong Won Shin, Joon-Hyuk Chang, Nam Soo Kim:
Voice activity detection based on a family of parametric distributions. Pattern Recognit. Lett. 28(11): 1295-1299 (2007) - [j14]Dong Kook Kim, Keun Won Jang, Joon-Hyuk Chang:
A New Statistical Voice Activity Detection Based on UMP Test. IEEE Signal Process. Lett. 14(11): 891-894 (2007) - [c10]Keun Won Jang, Dong Kook Kim, Joon-Hyuk Chang:
A uniformly most powerful test for statistical model-based voice activity detection. INTERSPEECH 2007: 2917-2920 - [c9]Q-Haing Jo, Yun-Sik Park, Kye-Hwan Lee, Ji-Hyun Song, Joon-Hyuk Chang:
Voice activity detection based on support vector machine using effective feature vectors. INTERSPEECH 2007: 2937-2940 - 2006
- [j13]Joon-Hyuk Chang:
Perceptual weighting filter for robust speech modification. Signal Process. 86(5): 1089-1093 (2006) - [j12]Joon-Hyuk Chang, Nam Soo Kim:
A new structural approach in system identification with generalized analysis-by-synthesis for robust speech coding. IEEE Trans. Speech Audio Process. 14(3): 747-751 (2006) - [j11]Joon-Hyuk Chang, Nam Soo Kim, Sanjit K. Mitra:
Voice activity detection based on multiple statistical models. IEEE Trans. Signal Process. 54(6-1): 1965-1976 (2006) - [c8]Joon-Hyuk Chang, Woohyung Lim, Nam Soo Kim:
Signal modification incorporating perceptual weighting filter. INTERSPEECH 2006 - 2005
- [j10]Joon-Hyuk Chang, Sanjit K. Mitra:
Multiband Vector Quantization Based on Inner Product for Wideband Speech Coding. IEICE Trans. Inf. Syst. 88-D(11): 2606-2608 (2005) - [j9]Joon-Hyuk Chang, Nam Soo Kim, Sanjit K. Mitra:
Pitch estimation of speech signal based on adaptive lattice notch filter. Signal Process. 85(3): 637-641 (2005) - [j8]Jong Won Shin, Joon-Hyuk Chang, Nam Soo Kim:
Statistical modeling of speech signals based on generalized gamma distribution. IEEE Signal Process. Lett. 12(3): 258-261 (2005) - [j7]Joon-Hyuk Chang, Jong Won Shin, Nam Soo Kim, Sanjit K. Mitra:
Image probability distribution based on generalized gamma function. IEEE Signal Process. Lett. 12(4): 325-328 (2005) - [j6]Joon-Hyuk Chang:
Warped discrete cosine transform-based noisy speech enhancement. IEEE Trans. Circuits Syst. II Express Briefs 52-II(9): 535-539 (2005) - [c7]Jong Won Shin, Joon-Hyuk Chang, Hwan Sik Yun, Nam Soo Kim:
Voice Activity Detection based on Generalized Gamma Distribution. ICASSP (1) 2005: 781-784 - [c6]Joon-Hyuk Chang, Jong Won Shin, Seung Yeol Lee, Nam Soo Kim:
A new structural preprocessor for low-bit rate speech coding. INTERSPEECH 2005: 2729-2732 - 2004
- [j5]Joon-Hyuk Chang, Nam Soo Kim:
Distorted Speech Rejection for Automatic Speech Recognition in Wireless Communication. IEICE Trans. Inf. Syst. 87-D(7): 1978-1981 (2004) - [j4]Joon-Hyuk Chang, Nam Soo Kim, Sanjit K. Mitra:
A Statistical Model-Based V/UV Decision under Background Noise Environments. IEICE Trans. Inf. Syst. 87-D(12): 2885-2887 (2004) - [j3]Nam Soo Kim, Joon-Hyuk Chang:
Signal modification for robust speech coding. IEEE Trans. Speech Audio Process. 12(1): 9-18 (2004) - [c5]Jong Won Shin, Joon-Hyuk Chang, Nam Soo Kim:
Speech probability distribution based on generalized gamma distribution. INTERSPEECH 2004: 2477-2480 - [c4]Seung Yeol Lee, Nam Soo Kim, Joon-Hyuk Chang:
Inner product based-multiband vector quantization for wideband speech coding at 16 kbps. INTERSPEECH 2004: 2653-2656 - 2003
- [c3]Joon-Hyuk Chang, Jong Won Shin, Nam Soo Kim:
Likelihood ratio test with complex laplacian model for voice activity detection. INTERSPEECH 2003: 1065-1068 - 2002
- [j2]Nam Soo Kim, Joon-Hyuk Chang:
A preprocessor for low-bit-rate speech coding. IEEE Signal Process. Lett. 9(10): 318-321 (2002) - [c2]Nam Soo Kim, Joon-Hyuk Chang:
Generalized analysis-by-synthesis based on system identification. ICASSP 2002: 633-636 - 2000
- [j1]Nam Soo Kim, Joon-Hyuk Chang:
Spectral enhancement based on global soft decision. IEEE Signal Process. Lett. 7(5): 108-110 (2000) - [c1]Joon-Hyuk Chang, Nam Soo Kim:
Speech enhancement: new approaches to soft decision. INTERSPEECH 2000: 1133-1136
last updated on 2024-12-02 21:26 CET by the dblp team
all metadata released as open data under CC0 1.0 license