Principal Investigator

Niranjan Balasubramanian
Associate Professor, Computer Science
PhD Students

Mahnaz Koupaee
My research is in the field of Natural Language Processing, more specifically using Large Language Models (LLMs) as primitives to reason about events and model how real-life scenarios unfold. I am also very interested in abstractive text summarization, with a focus on more controllable, factual summary generation and automatic summary evaluation using LLMs.
Research: Commonsense Reasoning, Text Summarization

Xueying Bai
My current research focuses on continual learning with pre-trained language models. My research interests generally lie in continual learning, transfer learning, and generalization.
Research: Continual Learning, Machine Learning

Yash Kumar Lal
I work on implicit reasoning problems across diverse types of text. In particular, I focus on evaluating and improving the abilities of NLP models to answer "why" questions that elicit different aspects of reasoning about stories and plans. I am also interested in understanding how well large language models (LLMs) grasp the social and personal aspects of reasoning expressed in language.
Research: Human-Centric Planning, Commonsense Reasoning, Robust Reasoning

Nikita Soni
Language is more than words; it expresses identities, psychologies, cultures, and much more. My research goal is to build human-context-aware language models that capture the meaning of language within the context of its generators, aiming to endow NLP systems with robust generalization, personalization, and more human-like language understanding.
Research: Human Language Models, Human Context in LLMs and NLP

Md. Saqib Hasan
My research focuses on the intersection of NLP and formal verification, and on how each can benefit the other. My current projects involve incorporating feedback from code verification systems to design and develop algorithms that improve various aspects of LLM program synthesis, such as the security of generated code.
Research: Program Synthesis, Formal Verification, Alignment, Synthetic Distillation

Pegah Alipoormolabashi
I am interested in authorship attribution methods. I want to find out how reliable the existing authorship attribution methods are and how to evaluate them in an insightful way using more realistic data and better metrics.
Research: Evaluating NLP models and Assessing Reliability

Biddut Sarker Bijoy
I focus on developing efficient, low-parameter agents for real-world applications, with a particular interest in enhancing the capabilities of large language models (LLMs) for complex reasoning and planning tasks. I am also interested in how LLMs can be refined to improve decision-making consistency and bridge the gap between language understanding and practical problem-solving.
Research: Efficient Reasoning, Decision-Making in LLMs, Lightweight AI Agents

Dikshya Mohanty
I work on human-centered NLP—the idea that natural language processing models can be improved by incorporating knowledge of who wrote the text. I’m also interested in how AI can be leveraged to enhance scientific discovery (AI for Science) and in understanding how media narratives shape public perception, particularly in the context of global crises (e.g., the Russia-Ukraine war).
Research: Human-Centered NLP, AI for Science, AI for Society

Syed Mostofa Monsur
I am working to improve the reasoning capabilities of LLMs to understand scientific text. I want to apply NLP/AI techniques to improve how these models interpret and reason over complex scientific literature. I am broadly interested in extending these techniques to assist scientific discoveries and support research innovation through AI models.
Research: LLM Reasoning, Implicit Reasoning, Scientific Discovery