Language Understanding and Reasoning (LUNR)

Address: New Computer Science 244

Contact: (631) 632-2457

The Language Understanding and Reasoning (LUNR) lab at Stony Brook University, New York, led by Dr. Niranjan Balasubramanian, focuses on building, evaluating, and analyzing systems that can identify, extract, understand, and reason about actions and events described in natural language as well as through programmatic constructs.

Agents

Autonomous Agents for Solving Tasks

Agents, augmented with language models and tools, are designed to solve a variety of tasks ranging from day-to-day activities like emailing to complex software engineering problems. We focus on developing benchmarks and methodologies for improving autonomous agents. We developed AppWorld, a controllable world of apps for evaluating language models as autonomous agents. This work won the ACL 2024 Best Resource Paper Award and a National Artificial Intelligence Research Resource (NAIRR) Pilot Award.

Planning & Reasoning in NLP

Planning & Reasoning in Language Models

We analyze the planning and reasoning capabilities of language models by creating novel evaluation benchmarks. We developed CaT-Bench, a testbed for assessing a model's ability to understand causal dependencies within plans, and CustomPlans, which evaluates how well models can customize real-world plans.

Software Verification

Verifying Complex Software with NLP

This NSF-funded project develops semantic parsing systems that convert system-domain specification texts into formal statements. We developed SpecNFS, a dataset of specifications from the Network File System documents paired with intermediate formal representations in a custom semantic representation called SpecIR. We also formulated ROLex, a retrieval-augmented parsing mechanism that overcomes the problem of open-vocabulary constructs when formalizing specifications.

Commonsense Knowledge

Understanding Commonsense Knowledge About Events

  • Modeling conditional knowledge about events - Events typically alter the state of the world. This NSF-supported project focuses on developing techniques for acquiring such knowledge from news texts and simple narratives.

  • Modeling Schematic Event Knowledge - This DARPA-funded project focuses on developing event language models that can reason about events.

  • Papers - SageViz, POQue, PASTA, Event LMs, etc.

Natural Language Inference

Explainable Natural Language Inference

This NSF-supported project focuses on developing explainable inference algorithms. Applications include question answering and relation extraction. We formulated multiple tasks and benchmarks to evaluate the limitations of question answering systems in multi-hop settings (DiRE, MuSiQue). We also developed task datasets to evaluate model capabilities in different domains such as biology (BioNLI).

Efficient LLMs

Building Efficient LLMs

Models are getting larger, consuming more compute, memory, and energy. Our work explores ways to make NLP models faster, smaller, and more energy-efficient. We have focused on predicting the energy usage of models on diverse hardware, improving their sparsity, and making them accessible on increasingly smaller devices.

Latest News

  • Feb 2025
    Harsh Trivedi successfully defended his dissertation. Congratulations, Dr. Harsh!
  • Dec 2024
    Sayontan Ghosh successfully defended his dissertation. Congratulations, Dr. Sayontan!
  • Sept 2024
    CaT-Bench has been accepted to EMNLP'24.
  • Aug 2024
    AppWorld just won the ACL'24 Best Resource Paper Award!
  • July 2024
    Two papers accepted to COLM 2024.
  • Apr 2024
    AppWorld has been accepted to ACL'24.