Philip Pham
Verified email at google.com – Homepage
Title
Cited by
Year
Big bird: Transformers for longer sequences
M Zaheer, G Guruganesh, KA Dubey, J Ainslie, C Alberti, S Ontanon, ...
Advances in Neural Information Processing Systems 33, 2020
Cited by 799 · 2020
Long Range Arena: A Benchmark for Efficient Transformers
Y Tay, M Dehghani, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
arXiv preprint arXiv:2011.04006, 2020
Cited by 223 · 2020
ETC: Encoding Long and Structured Inputs in Transformers
J Ainslie, S Ontanon, C Alberti, V Cvicek, Z Fisher, P Pham, A Ravula, ...
Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020
Cited by 147 · 2020
The perils of balance testing in experimental design: Messy analyses of clean data
DC Mutz, R Pemantle, P Pham
The American Statistician 73 (1), 32-42, 2019
Cited by 133 · 2019
Drosophila Muller F elements maintain a distinct set of genomic properties over 40 million years of evolution
W Leung, CD Shaffer, LK Reed, ST Smith, W Barshop, W Dirkes, ...
G3: Genes, Genomes, Genetics 5 (5), 719-740, 2015
Cited by 93 · 2015
Fair Hierarchical Clustering
S Ahmadian, A Epasto, M Knittel, R Kumar, M Mahdian, B Moseley, ...
Advances in Neural Information Processing Systems 33, 2020
Cited by 24 · 2020
Omninet: Omnidirectional representations from transformers
Y Tay, M Dehghani, V Aribandi, J Gupta, PM Pham, Z Qin, D Bahri, ...
International Conference on Machine Learning, 10193-10202, 2021
Cited by 18 · 2021
Signal transduction in a compliant short loop of Henle
AT Layton, P Pham, H Ryu
International journal for numerical methods in biomedical engineering 28 (3 …, 2012
Cited by 13 · 2012
Neural structured learning: training neural networks with structured signals
A Gopalan, DC Juan, CI Magalhaes, CS Ferng, A Heydon, CT Lu, P Pham, ...
Proceedings of the 14th ACM International Conference on Web Search and Data …, 2021
Cited by 9 · 2021
ReadTwice: Reading Very Large Documents with Memories
Y Zemlyanskiy, J Ainslie, M de Jong, P Pham, I Eckstein, F Sha
arXiv preprint arXiv:2105.04241, 2021
Cited by 4 · 2021
Just how Easy is it to Cheat a Linear Regression?
P Pham
University of Pennsylvania, 2016
Cited by 2 · 2016
Attention neural networks with sparse attention mechanisms
JT Ainslie, S Ontañón, P Pham, M Zaheer, G Guruganesh, KA Dubey, ...
US Patent 11,238,332, 2022
2022
Articles 1–12