
- Ronak Mehta, ML/CS PhD
- ronakrm [at] the big G's mail
I’m currently working on accelerating and automating AI alignment and safety research.
My dissertation research focused on methods for efficiently identifying important subsets of features, parameters, and samples in modern ML settings. My current interests revolve around applying these ideas to interpretability and safety, and more broadly around alignment. I'm working on a couple of projects in the guaranteed-safe and provably-safe AI space; stay tuned!
I occasionally write blog posts, technical and otherwise.
Recent News
- I'm co-founding a new startup, Coordinal Research, focused on accelerating and automating alignment research.
- We presented our joint work on A Benchmark for Formally Verified Code Generation at the LLM4Code workshop at ICSE 2025.
- I participated in the Catalyze Impact Incubator in London.
- I participated in the ML Alignment & Theory Scholars (MATS) program in Berkeley this summer, and am continuing work on projects in guaranteed-safe AI.