About / Bio


I am a PhD student in the Computer Science Department at the University of California, Davis, advised by Prof. Prasant Mohapatra. As an undergraduate (2014-2018), I researched security policies and routing protocols for wireless networks and networked systems. My graduate research focuses on machine learning and clustering algorithms, with a special emphasis on adversarial examples, fairness guarantees, and machine learning security and privacy. I am also the author of the Tensorflex framework (presented at NeurIPS 2018; paper link). From time to time, I also write technical articles on my blog; if you are interested, have a look here.


News
  • 3/21/2022: The preprint of our paper on analyzing and debiasing YouTube recommendations is online on arXiv [pdf] [code]

  • 1/25/2022: Invited by Kyle Polich as a guest on the Data Skeptic podcast [Spotify link]

  • 10/21/2021: Our paper on "Fairness Degrading Adversarial Attacks Against Clustering" was accepted at the AFCR workshop @ NeurIPS 2021 as a poster [pdf] [supplementary]

  • 10/21/2021: Our paper on "Fair Clustering Using Antidote Data" was accepted at the AFCR workshop @ NeurIPS 2021 for a contributed talk, and is published in PMLR [pdf] [supplementary]

  • 10/11/2021: Invited keynote at MTD workshop @ ACM ICSE 2021 for our paper on MTD for adversarial machine learning (talk by Prof. Mohapatra) [pdf] [supplementary]

  • 9/17/2021: Our survey paper on fairness in clustering was accepted for publication in IEEE Access [pdf]

  • 1/10/2020: Our paper on adversarial attacks against clustering algorithms was accepted at the AAAI 2020 Main Technical Conference [pdf] [code]

  • 1/25/2019: Invited by Ryan Turner to talk about our research on adversarial attacks against clustering at Uber AI in San Francisco

  • 12/1/2018: Presented our paper on the Tensorflex framework at the MLOSS workshop @ NeurIPS 2018 [pdf] [code]


Email: chhabra[at]ucdavis[dot]edu

LinkedIn: Click here

Github: Click here

Google Scholar: Click here