Xinyun Chen
Google DeepMind
Verified email address - Homepage
Delving into transferable adversarial examples and black-box attacks
Y Liu, X Chen, C Liu, D Song
arXiv preprint arXiv:1611.02770, 2016
Targeted backdoor attacks on deep learning systems using data poisoning
X Chen, C Liu, B Li, K Lu, D Song
arXiv preprint arXiv:1712.05526, 2017
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
Competition-level code generation with AlphaCode
Y Li, D Choi, J Chung, N Kushman, J Schrittwieser, R Leblond, T Eccles, ...
Science 378 (6624), 1092-1097, 2022
Adversarial example defense: Ensembles of weak defenses are not strong
W He, J Wei, X Chen, N Carlini, D Song
11th USENIX workshop on offensive technologies (WOOT 17), 2017
Learning to perform local rewriting for combinatorial optimization
X Chen, Y Tian
Advances in Neural Information Processing Systems 32, 2019
Tree-to-tree neural networks for program translation
X Chen, C Liu, D Song
Advances in neural information processing systems 31, 2018
Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses
M Goldblum, D Tsipras, C Xie, X Chen, A Schwarzschild, D Song, ...
IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (2), 1563-1580, 2022
Teaching large language models to self-debug
X Chen, M Lin, N Schärli, D Zhou
arXiv preprint arXiv:2304.05128, 2023
Execution-guided neural program synthesis
X Chen, C Liu, D Song
International Conference on Learning Representations, 2018
Large language models can be easily distracted by irrelevant context
F Shi, X Chen, K Misra, N Scales, D Dohan, EH Chi, N Schärli, D Zhou
International Conference on Machine Learning, 31210-31227, 2023
Large language models as optimizers
C Yang, X Wang, Y Lu, H Liu, QV Le, D Zhou, X Chen
arXiv preprint arXiv:2309.03409, 2023
Larger language models do in-context learning differently
J Wei, J Wei, Y Tay, D Tran, A Webson, Y Lu, X Chen, H Liu, D Huang, ...
arXiv preprint arXiv:2303.03846, 2023
Refit: a unified watermark removal framework for deep learning systems with limited data
X Chen, W Wang, C Bender, Y Ding, R Jia, B Li, D Song
Proceedings of the 2021 ACM Asia Conference on Computer and Communications …, 2021
Fooling vision and language models despite localization and attention mechanism
X Xu, X Chen, C Liu, A Rohrbach, T Darrell, D Song
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension
X Chen, C Liang, AW Yu, D Zhou, D Song, QV Le
International Conference on Learning Representations, 2019
Latent attention for if-then program synthesis
C Liu, X Chen, EC Shin, M Chen, D Song
Advances in Neural Information Processing Systems 29, 2016
Compositional generalization via neural-symbolic stack machines
X Chen, C Liang, AW Yu, D Song, D Zhou
Advances in Neural Information Processing Systems 33, 1690-1701, 2020
Robustart: Benchmarking robustness on architecture design and training techniques
S Tang, R Gong, Y Wang, A Liu, J Wang, X Chen, F Yu, X Liu, D Song, ...
arXiv preprint arXiv:2109.05211, 2021
LEGO: Latent execution-guided reasoning for multi-hop question answering on knowledge graphs
H Ren, H Dai, B Dai, X Chen, M Yasunaga, H Sun, D Schuurmans, ...
International conference on machine learning, 8959-8970, 2021