Aashish Kolluri

AI Security | Distributed Machine Learning | Trustworthy Machine Learning

Bio

šŸ‘‹ Hi! I’m an AI security researcher at Microsoft Research, working in the Confidential Computing group in Cambridge, U.K. My current focus is on designing agents and agentic frameworks with strong, robust, and interpretable security guarantees.

Previously, I completed my Ph.D. under the guidance of Prateek Saxena at the National University of Singapore (NUS), where I built efficient and secure systems for distributed machine learning, with a particular emphasis on distributed graph learning.

Research Interests

My work spans multiple research areas with both theoretical and systems contributions. Please drop me a message if you are interested in any of them!

🧠 AI Agent Security: I have been developing formal security foundations for autonomous AI agents. Our recent work introduces FIDES, a planner that tags every piece of data an agent ingests with confidentiality-and-integrity labels and then deterministically enforces information-flow policies before any tool call is executed. A formal model and task taxonomy show exactly which In benchmarks, FIDES blocks all prompt-injection attacks and completes up to 16% more tasks than a baseline planner when paired with OpenAI’s o-series model—offering auditable, proof-carrying security for multi-tool agent stacks. (FIDES on arxiv)

šŸ›”ļø Defending Poisoning Attacks: HiDRA attack (S&P’24) and RandEigen defense (CCS’25, TBA) address the fundamental compuational limitations of desigining theoretically sound defenses for poisoning attacks.

HiDRA revealed a computational hardness barrier for any deterministic robust aggregator, like the classical iterative filtering, with dimension-independent guarantees. The attack breaks state-of-the-art defenses against targeted poisoning and backdoor attacks. RandEigen is a randomized aggregator that overcomes the computational vulnerabitlity of deterministic aggregators while matching their theoretical guarantees. RandEigen runs up to 300Ɨ faster than state-of-the-art theoretically sound defenses while preventing all existing poisoning and backdoor attacks in both centralized and federated settings—making bias-bounded defenses finally practical.

🌐 Distributed Machine Learning: My Ph.D. research laid the groundwork for designing efficient and privacy-preserving distributed graph learning. I built a system Retexo (github), which optimizes GNN training efficiency across data centers and edge devices. LPGNet (CCS’22) and PrivaCT (CCS’21) are differentially private GNN and hierarchical clustering algorithms that protects graph data against leakage. These contributions have advanced the state of the art in scalable, privacy-aware learning over distributed graphs.

🌐 Broader Interests: I actively engage in solving algorithmic problems and enjoy building systems. My contributions extend to diverse areas such as program synthesis, translation, debugging, and the security of decentralized applications, with publications in top-tier venues.

Academic History & Internships
drawing
Researcher (Feb'25-)
drawing
Intern (Jun-Aug'23)
drawing
Intern (Jun-Jul'20)
drawing
Ph.D. (2018-24)
drawing
Intern (Aug'17-May'18)
drawing
Intern (May-Jul'16)
drawing
B.Tech. (2013-17)

news

Jun 1, 2025 [New] Our paper on securing AI agents with information flow control is now live on arxiv!
May 1, 2025 [New] Our paper on desiging efficiently and theoretically sound defenses for poisoning attacks is accepted at CCS’25!
Feb 1, 2025 [New] I have joined Microsoft Research in Cambridge, UK!
Mar 9, 2024 Our paper on attacking byzantine robust aggregation protocols is published at IEEE S&P’24 (see arXiv).
Feb 1, 2024 Please find our updated work on scalable neural network training on distributed graphs on arXiv.