SY Shih, FK Sun, H Lee. "Temporal pattern attention for multivariate time series forecasting." Machine Learning 108(8), 1421–1441, 2019. Cited by 221.
AT Liu, S Yang, PH Chi, P Hsu, H Lee. "Mockingjay: Unsupervised speech representation learning with deep bidirectional transformer encoders." ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and …, 2020. Cited by 172.
YA Chung, CC Wu, CH Shen, HY Lee, LS Lee. "Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder." arXiv preprint arXiv:1603.00982, 2016. Cited by 164.
AT Liu, SW Li, H Lee. "TERA: Self-supervised learning of transformer encoder representation for speech." IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 2351–2366, 2021. Cited by 114.
J Chou, C Yeh, H Lee, L Lee. "Multi-target voice conversion without parallel data by adversarially learning disentangled audio representations." arXiv preprint arXiv:1804.02812, 2018. Cited by 110.
J Chou, C Yeh, H Lee. "One-shot voice conversion by separating speaker and content representations with instance normalization." arXiv preprint arXiv:1904.05742, 2019. Cited by 101.
L Lee, J Glass, H Lee, C Chan. "Spoken content retrieval—beyond cascading speech recognition with text retrieval." IEEE/ACM Transactions on Audio, Speech, and Language Processing 23(9), 1389 …, 2015. Cited by 99.
S Yang, PH Chi, YS Chuang, CIJ Lai, K Lakhotia, YY Lin, AT Liu, J Shi, et al. "SUPERB: Speech processing universal performance benchmark." arXiv preprint arXiv:2105.01051, 2021. Cited by 86.
TR Su, HY Lee. "Learning Chinese word representations from glyphs of characters." arXiv preprint arXiv:1708.04755, 2017. Cited by 76.
YA Chung, HY Lee, J Glass. "Supervised and unsupervised transfer learning for question answering." arXiv preprint arXiv:1711.05345, 2017. Cited by 71.
PH Chi, PH Chung, TH Wu, CC Hsieh, YH Chen, SW Li, H Lee. "Audio ALBERT: A lite BERT for self-supervised learning of audio representation." 2021 IEEE Spoken Language Technology Workshop (SLT), 344–350, 2021. Cited by 69.
YS Wang, HY Lee, YN Chen. "Tree transformer: Integrating tree structures into self-attention." arXiv preprint arXiv:1909.06639, 2019. Cited by 63.
S Shen, H Lee. "Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection." arXiv preprint arXiv:1604.00077, 2016. Cited by 61.
JY Hsu, YJ Chen, H Lee. "Meta learning for end-to-end low-resource speech recognition." ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and …, 2020. Cited by 51.
FK Sun, CH Ho, HY Lee. "LAMOL: Language modeling for lifelong language learning." arXiv preprint arXiv:1909.03329, 2019. Cited by 51.
CH Li, SL Wu, CL Liu, H Lee. "Spoken SQuAD: A study of mitigating the impact of speech recognition errors on listening comprehension." arXiv preprint arXiv:1804.00320, 2018. Cited by 50.
T Tu, YJ Chen, C Yeh, HY Lee. "End-to-end text-to-speech for low-resource languages by cross-lingual transfer learning." arXiv preprint arXiv:1904.06508, 2019. Cited by 48.
BH Tseng, SS Shen, HY Lee, LS Lee. "Towards machine comprehension of spoken content: Initial TOEFL listening comprehension test by machine." arXiv preprint arXiv:1608.06378, 2016. Cited by 48.
YL Tuan, HY Lee. "Improving conditional sequence generative adversarial networks by stepwise evaluation." IEEE/ACM Transactions on Audio, Speech, and Language Processing 27(4), 788–798, 2019. Cited by 45.
YS Wang, HY Lee. "Learning to encode text as human-readable summaries using generative adversarial networks." arXiv preprint arXiv:1810.02851, 2018. Cited by 44.