Biography
I am an Assistant Professor of Computer Science and Engineering at the University of South Florida. Prior to this, I received my PhD in Computer Science from UC Davis. My research focuses on analyzing and improving the safety of Machine Learning (ML) and Artificial Intelligence (AI) models, with respect to security, robustness, and fairness, to facilitate their broader adoption in society. I also work to translate these ideas into impact on real-world models deployed in production systems, which are closed-source by default. Previously, I worked on developing ML approaches for improving networked systems.
Email: anshumanc[at]usf[dot]edu
Find me on: Github || Google Scholar || Twitter/X
I am looking to recruit exceptional PhD students for Spring/Fall 2025 to work on research related to (1) AI/ML safety, (2) Multimodal Generative AI, and (3) using Generative AI to mitigate social media harms. Please reach out by email and apply here if interested.
RECENT NEWS:
8/12/2024: Our paper "Incentivizing News Consumption on Social Media Platforms Using Large Language Models and Realistic Bot Accounts" was accepted for publication in PNAS Nexus [pdf]
3/13/2024: Our paper "Revisiting Zero-Shot Abstractive Summarization in the Era of Large Language Models from the Perspective of Position Bias" was accepted at the NAACL 2024 Main Conference as an oral talk [pdf] [code]
1/16/2024: Our paper "What Data Benefits My Classifier? Enhancing Model Performance and Interpretability Through Influence-Based Data Selection" was accepted as an oral talk (top 1.2% of papers) at ICLR 2024 [pdf] [code]
12/1/2023: Invited to attend a research convening on LLMs and social media interventions at Google NYC organized by Google/Jigsaw and Prosocial Design Network [blog post]
11/29/2023: Our paper "Towards Fair Video Summarization" was accepted for publication in Transactions on Machine Learning Research (TMLR) [pdf] [code]
9/21/2023: Our paper "Auditing YouTube’s Recommendation System for Ideologically Congenial, Extreme, and Problematic Recommendations" was accepted for publication in PNAS [pdf] [code]
1/20/2023: Our paper "Robust Fair Clustering: A Novel Fairness Attack and Defense Framework" was accepted at the ICLR 2023 Main Conference [pdf] [code] [poster]
PAST NEWS:
10/15/2022: I was invited to give a seminar talk on Robust Clustering at Brandeis University, Boston by Prof. Hongfu Liu
9/14/2022: Our paper "On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses" was accepted at the NeurIPS 2022 Main Conference [pdf] [supplementary] [code] [poster]
6/15/2022: Our paper "Updatable Clustering via Patches" was accepted as a poster at the Updatable Machine Learning (UpML) workshop @ ICML 2022 [pdf]
1/25/2022: Invited by Kyle Polich as a guest on the Data Skeptic podcast [Spotify link]
10/21/2021: Our paper "Fair Clustering Using Antidote Data" was accepted at the AFCR workshop @ NeurIPS 2021 for a contributed talk (top 6 papers), and was published in PMLR [pdf] [supplementary]
10/21/2021: Our paper "Fairness Degrading Adversarial Attacks Against Clustering" was accepted at the AFCR workshop @ NeurIPS 2021 as a poster [pdf] [supplementary] [code]
10/11/2021: Invited keynote at MTD workshop @ ACM CCS 2021 for our paper on MTD for adversarial machine learning (talk by Prof. Mohapatra) [pdf] [supplementary]
9/17/2021: Our survey paper on fairness in clustering was accepted for publication in IEEE Access [pdf]
1/10/2020: Our paper "Suspicion-Free Adversarial Attacks Against Clustering Algorithms" was accepted in the AAAI 2020 Main Technical Conference [pdf] [code] [poster]
1/25/2019: Invited to talk about our research on adversarial attacks against clustering at Uber AI in SF by Ryan Turner
12/1/2018: Presented our paper on the Tensorflex framework at the MLOSS workshop @ NeurIPS 2018 [pdf] [code]