Nina Grgić-Hlača
Publications
- Laypeople's Egocentric Perceptions of Copyright for AI-Generated Art
Gabriel Lima, Nina Grgić-Hlača, and Elissa Redmiles.
To appear at CHI, April 2025.
- Lay Perceptions of Algorithmic Discrimination in the Context of Systemic Injustice
Gabriel Lima, Nina Grgić-Hlača, Markus Langer, and Yixin Zou.
To appear at CHI, April 2025.
- (De)Noise: Moderating the Inconsistency Between Human Decision-Makers
Nina Grgić-Hlača, Junaid Ali, Krishna P. Gummadi, and Jennifer Wortman Vaughan.
CSCW, San José, Costa Rica, November 2024.
[arXiv]
- Blaming Humans and Machines: What Shapes People's Reactions to Algorithmic Harm
Gabriel Lima, Nina Grgić-Hlača, and Meeyoung Cha.
CHI, Hamburg, Germany, April 2023.
[acm]
- Who Should Pay When Machines Cause Harm? Laypeople's Expectations of Legal Damages for Machine-Caused Harm
Gabriel Lima, Nina Grgić-Hlača, Jin Keun Jeong, and Meeyoung Cha.
FAccT, Chicago, Illinois, June 2023.
[acm]
- Taking Advice from (Dis)Similar Machines: The Impact of Human-Machine Similarity on Machine-Assisted Decision-Making
Nina Grgić-Hlača, Claude Castelluccia, and Krishna P. Gummadi.
HCOMP, Online Virtual Conference, November 2022.
[arXiv]
- Dimensions of Diversity in Human Perceptions of Algorithmic Fairness
Nina Grgić-Hlača, Gabriel Lima, Adrian Weller, and Elissa M. Redmiles.
EAAMO, Arlington, Virginia, October 2022.
New Horizons Award
[arXiv]
- The Conflict Between Explainable and Accountable Decision-Making Algorithms
Gabriel Lima, Nina Grgić-Hlača, Jin Keun Jeong, and Meeyoung Cha.
FAccT, Seoul, South Korea, June 2022.
[arXiv]
- “Look! It's a Computer Program! It's an Algorithm! It's AI!”: Does Terminology Affect Human Perceptions and Evaluations of Algorithmic Decision-Making Systems?
Markus Langer, Tim Hunsicker, Tina Feldkamp, Cornelius J. König, and Nina Grgić-Hlača.
CHI, New Orleans, Louisiana, May 2022.
[arXiv]
- Machine Advice with a Warning about Machine Limitations: Experimentally Testing the Solution Mandated by the Wisconsin Supreme Court
Christoph Engel and Nina Grgić-Hlača.
Journal of Legal Analysis, 2021.
[Publisher's version]
- Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making
Gabriel Lima, Nina Grgić-Hlača, and Meeyoung Cha.
CHI, Virtual, May 2021.
[arXiv]
- Human Decision Making with Machine Assistance: An Experiment on Bailing and Jailing
Nina Grgić-Hlača, Christoph Engel, and Krishna P. Gummadi.
CSCW, Austin, Texas, November 2019.
[PDF]
- A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices
Till Speicher, Hoda Heidari, Nina Grgić-Hlača, Krishna P. Gummadi, Adish Singla, Adrian Weller, and Muhammad Bilal Zafar.
KDD, London, United Kingdom, August 2018.
[PDF]
- Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
Nina Grgić-Hlača, Elissa M. Redmiles, Krishna P. Gummadi, and Adrian Weller.
The Web Conf, Lyon, France, April 2018.
[PDF] [code & data]
- Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning
Nina Grgić-Hlača, Muhammad Bilal Zafar, Krishna P. Gummadi, and Adrian Weller.
AAAI, New Orleans, Louisiana, February 2018.
[PDF] [code & data]
- On Fairness, Diversity and Randomness in Algorithmic Decision Making
Nina Grgić-Hlača, Muhammad Bilal Zafar, Krishna P. Gummadi, and Adrian Weller.
FAT/ML Workshop @ KDD, Halifax, Canada, August 2017.
[arXiv]
- The Case for Process Fairness in Learning: Feature Selection for Fair Decision Making
Nina Grgić-Hlača, Muhammad Bilal Zafar, Krishna P. Gummadi, and Adrian Weller.
Symposium on Machine Learning and the Law @ NeurIPS, Barcelona, Spain, December 2016.
Notable Paper Award
[PDF]