ARJAN KOHLI
ML_RESEARCHER
I build and break AI systems.
SPECIALIZATION: Generative AI • Adversarial AI • Explainable AI • Computer Vision • LLM Safety • Graph Neural Networks
ABOUT
EDUCATION
Yale University
BS / MA Statistics and Data Science
2022 – 2026 • GPA: 3.8
→ Teaching Assistant for Intermediate Machine Learning
→ Director of Quantitative Research – Yale Undergraduate Hedge Fund Association
RESEARCH_FOCUS
My work centers on developing secure, interpretable deep learning and GenAI systems. I research adversarial machine learning techniques, explainability and interpretability methods, and computer vision.
TECHNICAL_STACK
EXPERIENCE
Adversarial Machine Learning Researcher
MirageLabs
2025-Present
→ Researched adversarial machine learning attack techniques and defenses
→ Built black-box audio attacks on voice agents
→ Evaluated the security, interpretability, and explainability of deep learning systems
→ Advised 3+ clients in sensitive sectors, including healthcare and finance, on safe AI integration
Machine Learning Researcher
Various Government Projects
2024-2025
Washington, DC
→ Contributed to multimodal vision transformer + LLM research projects
→ Developed deep learning, GenAI, and computer vision tools addressing technological threats
→ Developed supervised and self-supervised object detection models for satellite imagery
Machine Learning Researcher
Antithesis
Summer 2024
Vienna, VA
→ Researched self-supervised anomaly detection for autonomous software testing
→ Derived optimal gradient decay rates for the SGD optimizer, improving test accuracy by 10%
→ Built and deployed an ML anomaly detector into a Google BigQuery workflow
→ Prototyped an ML system to quantify the "interestingness" of distributed system behavior
→ Identified vulnerabilities and flagged anomalous system logs in Apache Spark and Hadoop deployments
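The learning-rate decay work above can be illustrated with a minimal sketch; the `1/(1 + decay·t)` schedule and all names here are illustrative assumptions, not the rates actually derived:

```python
import numpy as np

def sgd_with_decay(grad_fn, w0, eta0=0.1, decay=0.01, steps=100):
    """Plain SGD with a 1/(1 + decay*t) learning-rate decay schedule."""
    w = np.asarray(w0, dtype=float)
    for t in range(steps):
        eta_t = eta0 / (1.0 + decay * t)  # decayed step size at step t
        w = w - eta_t * grad_fn(w)
    return w

# Toy quadratic objective f(w) = ||w||^2 / 2, whose gradient is w itself
w_final = sgd_with_decay(lambda w: w, w0=[4.0, -2.0])
```

Slower decay keeps step sizes large and speeds early progress; faster decay stabilizes late training, which is the trade-off such derivations tune.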
RESEARCH & PROJECTS
Dynamic Curvature Optimization for Hyperbolic GCNs
Dr. Zhuoran Yang (Yale)
Dec. 2024
Developed novel optimization methods for learnable manifold curvature in Hyperbolic Graph Convolutional Networks, achieving a 9% improvement over state-of-the-art methods.
Explainable Deep Learning — DeepFRI
Publication pending (Elsevier)
May 2024
Implemented GradCAM, PGExplainer, DeepFool perturbation, and Excitation Backpropagation on the Flatiron Institute's Functional Residue Identification (DeepFRI) model. Discovered spurious correlations in its protein function classifications.
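Grad-CAM's core computation is compact; a minimal NumPy sketch (array shapes and names are illustrative, not DeepFRI-specific):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM relevance map from a conv layer's activations and the
    gradients of the target class score w.r.t. those activations.

    activations, gradients: arrays of shape (C, H, W).
    Returns a non-negative (H, W) heatmap.
    """
    # Channel weights: global-average-pool the gradients per channel
    weights = gradients.mean(axis=(1, 2))             # shape (C,)
    # Weighted sum over channels, then ReLU to keep positive evidence
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    return np.maximum(cam, 0.0)

acts = np.random.rand(8, 7, 7)
grads = np.random.randn(8, 7, 7)
heatmap = grad_cam(acts, grads)
```

Regions the map highlights that are unrelated to the true signal are exactly the kind of spurious correlation such an analysis surfaces.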
Defensive Stochastic Activation Pruning
Dr. Rex Ying (Yale)
Apr. 2024
Demonstrated the efficacy of several stochastic activation pruning approaches on ResNet-18 as a defense against adversarial image perturbations, validating robustness against FGSM and PGD attacks.
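FGSM, one of the attacks defended against above, perturbs the input in the direction of the loss gradient's sign. A self-contained sketch on a toy logistic model with an analytic gradient (all names and values are illustrative):

```python
import numpy as np

def fgsm_attack(x, grad_loss_x, epsilon=0.03):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(dL/dx),
    clipped back to the valid input range [0, 1]."""
    x_adv = x + epsilon * np.sign(grad_loss_x)
    return np.clip(x_adv, 0.0, 1.0)

# Toy logistic model p = sigmoid(w . x); for true label y = 1,
# the cross-entropy gradient w.r.t. x is (p - 1) * w
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
p = 1.0 / (1.0 + np.exp(-(w @ x)))
grad = (p - 1.0) * w
x_adv = fgsm_attack(x, grad)
```

Even this tiny perturbation lowers the model's confidence in the true label, which is what makes single-step attacks a useful robustness baseline.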
Portfolio Diversification with Graphical Lasso
Jan. 2025
Applied graphical lasso to optimize portfolio diversification, estimating sparse precision matrices that capture conditional dependencies between assets.