Mehrdad Farahani

PhD candidate at Chalmers & University of Gothenburg

I study how large language models work under the hood, tracing and editing their internal representations to make them more interpretable and controllable. My research centers on mechanistic interpretability, causal analysis, and behavior editing in LLMs.

Curious about how LLMs really think? Let’s dig in.

News

Selected Publications

  1. Deciphering the Interplay of Parametric and Non-Parametric Memory in RAG Models
    Mehrdad Farahani and Richard Johansson
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Nov 2024