Jonathan Peck
Title | Cited by | Year
Lower bounds on the robustness to adversarial perturbations
J Peck, J Roels, B Goossens, Y Saeys
Advances in Neural Information Processing Systems, 804-813, 2017
Cited by 56 | 2017
CharBot: A Simple and Effective Method for Evading DGA Classifiers
J Peck, C Nie, R Sivaguru, C Grumer, F Olumofin, B Yu, A Nascimento, ...
IEEE Access 7, 91759-91771, 2019
Cited by 23 | 2019
Inline detection of DGA domains using side information
R Sivaguru, J Peck, F Olumofin, A Nascimento, M De Cock
IEEE Access 8, 141910-141922, 2020
Cited by 3 | 2020
Distillation of Deep Reinforcement Learning Models using Fuzzy Inference Systems
A Gevaert, J Peck, Y Saeys
The 31st Benelux Conference on Artificial Intelligence (BNAIC 2019) and the …, 2019
Cited by 3 | 2019
Hardening DGA Classifiers Utilizing IVAP
C Grumer, J Peck, F Olumofin, A Nascimento, M De Cock
IEEE Big Data, 2019
Cited by 2 | 2019
Detecting Adversarial Examples with Inductive Venn-ABERS Predictors
J Peck, B Goossens, Y Saeys
European Symposium on Artificial Neural Networks, Computational Intelligence …, 2019
Cited by 2 | 2019
Calibrated multi-probabilistic prediction as a defense against adversarial attacks
J Peck, B Goossens, Y Saeys
Artificial Intelligence and Machine Learning, 85-125, 2019
Cited by 1 | 2019
Detecting adversarial manipulation using inductive Venn-ABERS predictors
J Peck, B Goossens, Y Saeys
Neurocomputing 416, 202-217, 2020
2020
Regional Image Perturbation Reduces Norms of Adversarial Examples While Maintaining Model-to-model Transferability
U Ozbulak, J Peck, W De Neve, B Goossens, Y Saeys, A Van Messem
arXiv preprint arXiv:2007.03198, 2020
2020
Robustness of Classifiers to Adversarial Perturbations
J Peck
2017