I am a PhD candidate in the Computer Science Department at the University of California, Davis, advised by Prof. Prasant Mohapatra. My research focuses on improving the robustness of Machine Learning (ML) models to facilitate their broader adoption in society, along two dimensions: (i) adversarial robustness (adversarial attacks and defenses against learning models) and (ii) social robustness (fair machine learning and trustworthy AI).
I am also interested in other applied problems, such as (i) designing learning-based debiasing interventions for social media platforms (YouTube and Twitter, among others) and (ii) ML-based approaches for improving networked systems. I am the author of the Tensorflex framework, presented at NeurIPS 2018 (paper link). From time to time, I write technical articles on my blog; if you are interested, do have a look here.
10/15/2022: Invited by Prof. Hongfu Liu to give a seminar talk on robust clustering at Brandeis University, Boston
9/14/2022: Our paper "On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses" was accepted at the NeurIPS 2022 Main Conference [pdf] [supplementary] [code] [poster]
6/15/2022: Our paper on "Updatable Clustering via Patches" was accepted as a poster at the Updatable Machine Learning (UpML) workshop @ ICML 2022 [pdf]
3/21/2022: The preprint of our paper on analyzing and debiasing YouTube recommendations is online on arXiv [pdf] [code]
1/25/2022: Invited by Kyle Polich as a guest on the Data Skeptic podcast [Spotify link]
10/21/2021: Our paper on "Fairness Degrading Adversarial Attacks Against Clustering" was accepted at the AFCR workshop @ NeurIPS 2021 as a poster [pdf] [supplementary] [code]
10/21/2021: Our paper on "Fair Clustering Using Antidote Data" was accepted at the AFCR workshop @ NeurIPS 2021 for a contributed talk, and is published in PMLR [pdf] [supplementary]
10/11/2021: Invited keynote at MTD workshop @ ACM CCS 2021 for our paper on MTD for adversarial machine learning (talk by Prof. Mohapatra) [pdf] [supplementary]
9/17/2021: Our survey paper on fairness in clustering was accepted for publication in IEEE Access [pdf]
1/10/2020: Our paper "Suspicion-Free Adversarial Attacks Against Clustering Algorithms" was accepted at the AAAI 2020 Main Technical Conference [pdf] [code] [poster]
1/25/2019: Invited by Ryan Turner to talk about our research on adversarial attacks against clustering at Uber AI in San Francisco
12/1/2018: Presented our paper on the Tensorflex framework at the MLOSS workshop @ NeurIPS 2018 [pdf] [code]