Aashish Kolluri
AI Security | Distributed Machine Learning | Trustworthy Machine Learning

Bio
š Hi! Iām an AI security researcher at Microsoft Research, working in the Confidential Computing group in Cambridge, U.K. My current focus is on designing agents and agentic frameworks with strong, robust, and interpretable security guarantees.
Previously, I completed my Ph.D. under the guidance of Prateek Saxena at the National University of Singapore (NUS), where I built efficient and secure systems for distributed machine learning, with a particular emphasis on distributed graph learning.
Research Interests
My work spans multiple research areas with both theoretical and systems contributions. Please drop me a message if you are interested in any of them!
š§ AI Agent Security: I have been developing formal security foundations for autonomous AI agents. Our recent work introduces FIDES, a planner that tags every piece of data an agent ingests with confidentiality-and-integrity labels and then deterministically enforces information-flow policies before any tool call is executed. A formal model and task taxonomy show exactly which In benchmarks, FIDES blocks all prompt-injection attacks and completes up to 16% more tasks than a baseline planner when paired with OpenAIās o-series modelāoffering auditable, proof-carrying security for multi-tool agent stacks. (FIDES on arxiv)
š”ļø Defending Poisoning Attacks: HiDRA attack (S&Pā24) and RandEigen defense (CCSā25, TBA) address the fundamental compuational limitations of desigining theoretically sound defenses for poisoning attacks.
HiDRA revealed a computational hardness barrier for any deterministic robust aggregator, like the classical iterative filtering, with dimension-independent guarantees. The attack breaks state-of-the-art defenses against targeted poisoning and backdoor attacks. RandEigen is a randomized aggregator that overcomes the computational vulnerabitlity of deterministic aggregators while matching their theoretical guarantees. RandEigen runs up to 300Ć faster than state-of-the-art theoretically sound defenses while preventing all existing poisoning and backdoor attacks in both centralized and federated settingsāmaking bias-bounded defenses finally practical.
š Distributed Machine Learning: My Ph.D. research laid the groundwork for designing efficient and privacy-preserving distributed graph learning. I built a system Retexo (github), which optimizes GNN training efficiency across data centers and edge devices. LPGNet (CCSā22) and PrivaCT (CCSā21) are differentially private GNN and hierarchical clustering algorithms that protects graph data against leakage. These contributions have advanced the state of the art in scalable, privacy-aware learning over distributed graphs.
š Broader Interests: I actively engage in solving algorithmic problems and enjoy building systems. My contributions extend to diverse areas such as program synthesis, translation, debugging, and the security of decentralized applications, with publications in top-tier venues.
Academic History & Internships







news
Jun 1, 2025 | [New] Our paper on securing AI agents with information flow control is now live on arxiv! |
---|---|
May 1, 2025 | [New] Our paper on desiging efficiently and theoretically sound defenses for poisoning attacks is accepted at CCSā25! |
Feb 1, 2025 | [New] I have joined Microsoft Research in Cambridge, UK! |
Mar 9, 2024 | Our paper on attacking byzantine robust aggregation protocols is published at IEEE S&Pā24 (see arXiv). |
Feb 1, 2024 | Please find our updated work on scalable neural network training on distributed graphs on arXiv. |