Valerie Chen
Secure Computation for Machine Learning With SPDZ
V Chen, V Pastro, M Raykova
arXiv preprint arXiv:1901.00329, 2019
Interpretable machine learning: Moving from mythos to diagnostics
V Chen, J Li, JS Kim, G Plumb, A Talwalkar
Communications of the ACM 65 (8), 43-50, 2022
Understanding the role of human intuition on reliance in human-AI decision-making with explanations
V Chen, QV Liao, J Wortman Vaughan, G Bansal
Proceedings of the ACM on Human-Computer Interaction 7 (CSCW2), 1-32, 2023
Ask Your Humans: Using Human Instructions to Improve Generalization in Reinforcement Learning
V Chen, A Gupta, K Marino
arXiv preprint arXiv:2011.00517, 2020
Do LLMs exhibit human-like response biases? A case study in survey design
L Tjuatja, V Chen, ST Wu, A Talwalkar, G Neubig
arXiv preprint arXiv:2311.04076, 2023
Use-case-grounded simulations for explanation evaluation
V Chen, N Johnson, N Topin, G Plumb, A Talwalkar
Advances in neural information processing systems 35, 1764-1775, 2022
On the importance of application-grounded experimental design for evaluating explainable ML methods
K Amarasinghe, KT Rodolfa, S Jesus, V Chen, V Balayan, P Saleiro, ...
Proceedings of the AAAI Conference on Artificial Intelligence 38 (19), 20921 …, 2024
Task-aware novelty detection for visual-based deep learning in autonomous systems
V Chen, MK Yoon, Z Shao
2020 IEEE International Conference on Robotics and Automation (ICRA), 11060 …, 2020
Bayesian persuasion for algorithmic recourse
K Harris, V Chen, J Kim, A Talwalkar, H Heidari, SZ Wu
Advances in Neural Information Processing Systems 35, 11131-11144, 2022
Best practices for interpretable machine learning in computational biology
V Chen, M Yang, W Cui, JS Kim, A Talwalkar, J Ma
bioRxiv, 2022.10.28.513978, 2022
Learning Personalized Decision Support Policies
U Bhatt, V Chen, KM Collins, P Kamalaruban, E Kallina, A Weller, ...
arXiv preprint arXiv:2304.06701, 2023
Assisting Human Decisions in Document Matching
JS Kim, V Chen, D Pruthi, NB Shah, A Talwalkar
arXiv preprint arXiv:2302.08450, 2023
Simulated user studies for explanation evaluation
V Chen, G Plumb, N Topin, A Talwalkar
eXplainable AI approaches for debugging and diagnosis., 2021
The RealHumanEval: Evaluating Large Language Models' Abilities to Support Programmers
H Mozannar, V Chen, M Alsobay, S Das, S Zhao, D Wei, M Nagireddy, ...
arXiv preprint arXiv:2404.02806, 2024
FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines
M Barker, E Kallina, D Ashok, K Collins, A Casovan, A Weller, A Talwalkar, ...
Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms …, 2023
A Case Study on Designing Evaluations of ML Explanations with Simulated User Studies
A Martin, V Chen, S Jesus, P Saleiro
arXiv preprint arXiv:2302.07444, 2023
AdvisingNets: Learning to Distinguish Correct and Wrong Classifications via Nearest-Neighbor Explanations
G Nguyen, V Chen, A Nguyen
arXiv preprint arXiv:2308.13651, 2023
Perspectives on incorporating expert feedback into model updates
V Chen, U Bhatt, H Heidari, A Weller, A Talwalkar
Patterns 4 (7), 2023
Video-Text Compliance: Activity Verification Based on Natural Language Instructions
M Jaiswal, F Liu, A Jagannathan, A Gattiker, I Hwang, J Lee, M Tong, ...
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2019
Simulating Iterative Human-AI Interaction in Programming with LLMs
H Mozannar, V Chen, D Wei, P Sattigeri, M Nagireddy, S Das, A Talwalkar, ...
NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023