Dongxian Wu
ByteDance
Verified email at bytedance.com
Title
Cited by
Year
Adversarial Weight Perturbation Helps Robust Generalization
D Wu, ST Xia, Y Wang
Conference on Neural Information Processing Systems (NeurIPS 2020), 2020
Cited by 785 · 2020
Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
D Wu, Y Wang, ST Xia, J Bailey, X Ma
International Conference on Learning Representations (ICLR 2020), 2020
Cited by 378 · 2020
Adversarial Neuron Pruning Purifies Backdoored Deep Models
D Wu, Y Wang
Conference on Neural Information Processing Systems (NeurIPS 2021), 2021
Cited by 284 · 2021
Targeted Attack for Deep Hashing based Retrieval
J Bai, B Chen, Y Li, D Wu, W Guo, S Xia, E Yang
European Conference on Computer Vision (ECCV 2020), 2020
Cited by 93 · 2020
When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
Y Mo, D Wu, Y Wang, Y Guo, Y Wang
Conference on Neural Information Processing Systems (NeurIPS 2022), 2022
Cited by 58 · 2022
Not all samples are born equal: Towards effective clean-label backdoor attacks
Y Gao, Y Li, L Zhu, D Wu, Y Jiang, ST Xia
Pattern Recognition 139, 109512, 2023
Cited by 44 · 2023
On the effectiveness of adversarial training against backdoor attacks
Y Gao, D Wu, J Zhang, G Gan, ST Xia, G Niu, M Sugiyama
IEEE Transactions on Neural Networks and Learning Systems, 2023
Cited by 23 · 2023
DIPDefend: Deep image prior driven defense against adversarial examples
T Dai, Y Feng, D Wu, B Chen, J Lu, Y Jiang, ST Xia
Proceedings of the 28th ACM International Conference on Multimedia, 1404-1412, 2020
Cited by 23 · 2020
Towards Robust Model Watermark via Reducing Parametric Vulnerability
G Gan, Y Li, D Wu, ST Xia
International Conference on Computer Vision (ICCV 2023), 2023
Cited by 8 · 2023
Backdoor attack on hash-based image retrieval via clean-label data poisoning
K Gao, J Bai, B Chen, D Wu, ST Xia
arXiv preprint arXiv:2109.08868, 2021
Cited by 7 · 2021
Universal adversarial head: Practical protection against video data leakage
J Bai, B Chen, D Wu, C Zhang, ST Xia
ICML 2021 Workshop on Adversarial Machine Learning, 2021
Cited by 5 · 2021
Matrix Smoothing: A Regularization for DNN with Transition Matrix Under Noisy Labels
X Lv, D Wu, ST Xia
2020 IEEE International Conference on Multimedia and Expo (ICME 2020), 1-6, 2020
Cited by 3 · 2020
Temporal Calibrated Regularization for Robust Noisy Label Learning
D Wu, Y Wang, Z Zheng, S Xia
International Joint Conference on Neural Networks (IJCNN 2020), 2020
Cited by 2 · 2020
On the Adversarial Transferability of Generalized "Skip Connections"
Y Wang, Y Mo, D Wu, M Li, X Ma, Z Lin
arXiv preprint arXiv:2410.08950, 2024
Cited by 1 · 2024
Rethinking the Necessity of Labels in Backdoor Removal
Z Xiong, D Wu, Y Wang, Y Wang
ICLR 2023 Workshop on Backdoor Attacks and Defenses in Machine Learning, 2023
Cited by 1 · 2023
Does Adversarial Robustness Really Imply Backdoor Vulnerability?
Y Gao, D Wu, J Zhang, ST Xia, G Niu, M Sugiyama
Cited by 1 · 2021
Towards Reliable Backdoor Attacks on Vision Transformers
Y Mo, D Wu, Y Wang, Y Guo, Y Wang
Do We Really Need Labels for Backdoor Defense?
Z Xiong, D Wu, Y Wang, Y Wang