Alexei Baevski
Facebook AI Research
Verified email at fb.com
Title
Cited by
Year
fairseq: A fast, extensible toolkit for sequence modeling
M Ott, S Edunov, A Baevski, A Fan, S Gross, N Ng, D Grangier, M Auli
arXiv preprint arXiv:1904.01038, 2019
Cited by 1911 · 2019
wav2vec 2.0: A framework for self-supervised learning of speech representations
A Baevski, H Zhou, A Mohamed, M Auli
arXiv preprint arXiv:2006.11477, 2020
Cited by 1535 · 2020
wav2vec: Unsupervised pre-training for speech recognition
S Schneider, A Baevski, R Collobert, M Auli
arXiv preprint arXiv:1904.05862, 2019
Cited by 761 · 2019
Pay less attention with lightweight and dynamic convolutions
F Wu, A Fan, A Baevski, YN Dauphin, M Auli
arXiv preprint arXiv:1901.10430, 2019
Cited by 471 · 2019
vq-wav2vec: Self-supervised learning of discrete speech representations
A Baevski, S Schneider, M Auli
arXiv preprint arXiv:1910.05453, 2019
Cited by 389 · 2019
Unsupervised cross-lingual representation learning for speech recognition
A Conneau, A Baevski, R Collobert, A Mohamed, M Auli
arXiv preprint arXiv:2006.13979, 2020
Cited by 262 · 2020
Facebook FAIR's WMT19 news translation task submission
N Ng, K Yee, A Baevski, M Ott, M Auli, S Edunov
arXiv preprint arXiv:1907.06616, 2019
Cited by 261 · 2019
Adaptive input representations for neural language modeling
A Baevski, M Auli
arXiv preprint arXiv:1809.10853, 2018
Cited by 257 · 2018
Cloze-driven pretraining of self-attention networks
A Baevski, S Edunov, Y Liu, L Zettlemoyer, M Auli
arXiv preprint arXiv:1903.07785, 2019
Cited by 200 · 2019
Data2vec: A general framework for self-supervised learning in speech, vision and language
A Baevski, WN Hsu, Q Xu, A Babu, J Gu, M Auli
arXiv preprint arXiv:2202.03555, 2022
Cited by 159 · 2022
Effectiveness of self-supervised pre-training for speech recognition
A Baevski, M Auli, A Mohamed
arXiv preprint arXiv:1911.03912, 2019
Cited by 148* · 2019
Unsupervised speech recognition
A Baevski, WN Hsu, A Conneau, M Auli
Advances in Neural Information Processing Systems 34, 27826-27839, 2021
Cited by 122 · 2021
Pre-trained language model representations for language generation
S Edunov, A Baevski, M Auli
arXiv preprint arXiv:1903.09722, 2019
Cited by 114 · 2019
XLS-R: Self-supervised cross-lingual speech representation learning at scale
A Babu, C Wang, A Tjandra, K Lakhotia, Q Xu, N Goyal, K Singh, ...
arXiv preprint arXiv:2111.09296, 2021
Cited by 95 · 2021
Robust wav2vec 2.0: Analyzing domain shift in self-supervised pre-training
WN Hsu, A Sriram, A Baevski, T Likhomanenko, Q Xu, V Pratap, J Kahn, ...
arXiv preprint arXiv:2104.01027, 2021
Cited by 94 · 2021
Self-training and pre-training are complementary for speech recognition
Q Xu, A Baevski, T Likhomanenko, P Tomasello, A Conneau, R Collobert, ...
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 93 · 2021
On generative spoken language modeling from raw audio
K Lakhotia, E Kharitonov, WN Hsu, Y Adi, A Polyak, B Bolte, TA Nguyen, ...
Transactions of the Association for Computational Linguistics 9, 1336-1354, 2021
Cited by 75 · 2021
Multilingual speech translation with efficient finetuning of pretrained models
X Li, C Wang, Y Tang, C Tran, Y Tang, J Pino, A Baevski, A Conneau, ...
arXiv preprint arXiv:2010.12829, 2020
Cited by 62 · 2020
The Zero Resource Speech Benchmark 2021: Metrics and baselines for unsupervised spoken language modeling
TA Nguyen, M de Seyssel, P Rozé, M Rivière, E Kharitonov, A Baevski, ...
arXiv preprint arXiv:2011.11588, 2020
Cited by 47 · 2020
Improved language identification through cross-lingual self-supervised learning
A Tjandra, DG Choudhury, F Zhang, K Singh, A Conneau, A Baevski, ...
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
Cited by 18 · 2022
Articles 1–20