Publications

Can language representation models think in bets?

Abstract

In recent years, transformer-based language representation models (LRMs) have achieved state-of-the-art results on difficult natural language understanding problems, such as question answering and text summarization. As these models are integrated into real-world applications, evaluating their ability to make rational decisions is an important research agenda, with practical ramifications. This article investigates LRMs’ rational decision-making ability through a carefully designed set of decision-making benchmarks and experiments. Inspired by classic work in cognitive science, we model the decision-making problem as a bet. We then investigate an LRM’s ability to choose outcomes that have optimal, or at minimum, positive expected gain. Through a robust body of experiments on four established LRMs, we show that a model is able to ‘think in bets’ if it is first fine-tuned on bet questions with an identical structure …
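To make the bet framing concrete, the expected-gain criterion the abstract refers to can be sketched in a few lines. The payoffs and probabilities below are hypothetical illustrations, not drawn from the paper's benchmark:

```python
# Minimal sketch of the expected-gain criterion behind "thinking in bets".
# Probabilities and payoffs here are made up for illustration only.

def expected_gain(outcomes):
    """Expected value of a bet given (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# A coin-flip bet: win $10 with probability 0.5, lose $5 with probability 0.5.
bet = [(0.5, 10.0), (0.5, -5.0)]
print(expected_gain(bet))  # 2.5 -> positive expected gain, rational to accept

# A lottery-style bet: win $100 with probability 0.01, lose $2 otherwise.
lottery = [(0.01, 100.0), (0.99, -2.0)]
print(expected_gain(lottery))  # approx. -0.98 -> negative expected gain
```

A rational decision-maker, in this framing, should accept the first bet and decline the second; the paper's benchmarks probe whether an LRM makes the same choices.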

Date: 2023
Authors: Zhisheng Tang, Mayank Kejriwal
Journal: Royal Society Open Science
Volume: 10
Issue: 3
Pages: 221585
Publisher: The Royal Society