EagleCam

Improve a video-analysis model that identifies the type of food (rodent, lizard, fish) an eagle brings to its nest, for the U.S. Fish and Wildlife Service.
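
As a rough illustration of the underlying task, the sketch below classifies a single nest-camera frame into the three prey classes with a small image classifier. The class list, the ResNet-18 backbone, and the untrained classification head are all assumptions for illustration, not the project's actual pipeline.

```python
# Minimal sketch (assumed setup, not the project's pipeline) of the core
# task: classify a video frame by the prey type an eagle is carrying.
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["rodent", "lizard", "fish"]  # illustrative class list

# ResNet-18 with its head resized to the three prey classes; in practice
# this head would be fine-tuned on labeled nest-camera frames.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_frame(frame: Image.Image) -> str:
    """Predict the prey class for a single video frame."""
    batch = preprocess(frame).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return CLASSES[logits.argmax(dim=1).item()]
```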

Entities in Context

Partner: Meta Platforms Inc.
Participants: Saeed Goodarzi, Nikhil Kagita, Dennis Minn
Description: We revisited the generalization effectiveness of LLMs by focusing on named entities. Named entities are ubiquitous in current Natural Language Understanding benchmarks, yet their impact on models' reasoning capabilities has been largely ignored. We subjected models to the same evaluation data while swapping in named entities from a large, demographically diverse pool.
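
A minimal sketch of the entity-substitution idea follows; the `ENTITY_POOL` names, the `substitute_entity` helper, and the dummy model are illustrative placeholders, not the project's actual code.

```python
# Minimal sketch of entity-substitution evaluation (illustrative only):
# swap the named entity in a benchmark example for names drawn from
# diverse demographics, then check whether the model's prediction is
# stable across substitutions.

# Hypothetical pool of replacement names from different demographics.
ENTITY_POOL = ["Maria Garcia", "Wei Chen", "Aisha Okafor", "John Smith"]

def substitute_entity(example: str, original: str, replacement: str) -> str:
    """Replace one named entity everywhere it appears in an example."""
    return example.replace(original, replacement)

def stability(model_predict, example: str, original: str) -> float:
    """Fraction of substitutions that leave the model's answer unchanged."""
    baseline = model_predict(example)
    variants = [substitute_entity(example, original, name)
                for name in ENTITY_POOL]
    same = sum(model_predict(v) == baseline for v in variants)
    return same / len(variants)

if __name__ == "__main__":
    # Dummy model that ignores the entity entirely, so stability is 1.0.
    dummy = lambda text: "entailment"
    ex = "Alice Johnson moved to Paris. Alice Johnson lives in France."
    print(stability(dummy, ex, "Alice Johnson"))
```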

Simple Strategies to Select Layers for Fine-Tuning Language Encoders

Partner: Microsoft, MAIDAP
Participants: Gayatri Belapurkar, Saloni Chalkapurkar, Abhilasha Lodha, Yuanming Tao
Description: We proposed two layer-selection methods for fine-tuning language encoders that make transfer learning on common NLP benchmarks such as GLUE and SuperGLUE more resource-efficient.
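
A minimal sketch of the general approach, assuming a HuggingFace BERT encoder: freeze all parameters, then unfreeze only the layers a selection strategy picks. The particular layers chosen here are illustrative, not the paper's recommendation.

```python
# Selective layer fine-tuning sketch (assumed setup, not the published
# code): freeze the whole encoder, then unfreeze a chosen subset of
# layers before training on a downstream task.
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
layers_to_tune = [10, 11]  # illustrative: tune only the top two layers

# Freeze everything first.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the selected encoder layers.
for idx in layers_to_tune:
    for param in model.encoder.layer[idx].parameters():
        param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Fine-tuning {trainable / total:.1%} of parameters")
```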

Editing Transformer Models with Common Sense Knowledge (EMNLP Conference, Dec. 2023)

Partner: Allen Institute for AI
Participants: Anshita Gupta, Debanjan Mondal, Akshay Krishna Sheshadri
Description: Memory editing for updating encyclopedic knowledge in transformers has received increasing attention, but it is unclear whether these methods can be adapted for nuanced common-sense knowledge. In this research, we proposed an adaptation of MEMIT to edit common-sense mistakes in GPT-2 Large and XL. We extended editing to various token locations and employed a robust layer-selection strategy. Our results suggest a promising path for improving GPT models by incorporating context-specific user feedback about common sense through direct model editing, and for fixing and customizing model behavior with human-in-the-loop systems.
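
MEMIT's actual rank-one update procedure is beyond a short snippet, but the sketch below shows one way an edit's effect on common-sense judgment could be probed: comparing the log-probability GPT-2 assigns to a plausible versus an implausible completion. The probe sentences are invented examples, not from the paper.

```python
# Illustrative probe only: checks whether a model prefers a plausible
# common-sense statement over an implausible one. It does not reproduce
# MEMIT's editing procedure (rank-one updates to selected MLP layers).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sequence_logprob(text: str) -> float:
    """Total log-probability GPT-2 assigns to a sentence."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # loss is the mean NLL per predicted token; convert to a total log-prob.
    return -out.loss.item() * (ids.shape[1] - 1)

# After a successful edit, the model should prefer the plausible fact.
plausible = "A salad is made of vegetables."
implausible = "A salad is made of concrete."
print(sequence_logprob(plausible) > sequence_logprob(implausible))
```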
