Flavien Prost
Google Brain
Verified email address at
Fairness without demographics through adversarially reweighted learning
P Lahoti, A Beutel, J Chen, K Lee, F Prost, N Thain, X Wang, E Chi
Advances in neural information processing systems 33, 728-740, 2020
Debiasing embeddings for reduced gender bias in text classification
F Prost, N Thain, T Bolukbasi
First Workshop on Gender Bias in Natural Language Processing ACL 2019, 2019
Measuring Recommender System Effects with Simulated Users
S Yao, Y Halpern, N Thain, X Wang, K Lee, F Prost, A Beutel, EH Chi, J Chen
2nd Workshop on Fairness, Accountability, Transparency, Ethics and Society …, 2020
Practical compositional fairness: Understanding fairness in multi-component recommender systems
X Wang, N Thain, A Sinha, F Prost, EH Chi, J Chen, A Beutel
Proceedings of the 14th ACM International Conference on Web Search and Data …, 2021
Toward a better trade-off between performance and fairness with kernel-based distribution matching
F Prost, H Qian, Q Chen, EH Chi, J Chen, A Beutel
NeurIPS 2019 Workshop on Machine Learning with Guarantees, 2019
Understanding and improving fairness-accuracy trade-offs in multi-task learning
Y Wang, X Wang, A Beutel, F Prost, J Chen, EH Chi
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data …, 2021
Measuring model fairness under noisy covariates: A theoretical perspective
F Prost, P Awasthi, N Blumm, A Kumthekar, T Potter, L Wei, X Wang, ...
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 873-883, 2021
Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations
F Prost, B Packer, J Chen, L Wei, P Kremp, N Blumm, S Wang, T Doshi, ...
arXiv preprint arXiv:2210.07755, 2022