Generating bios with large language models

Update March 19th, 2024: this page now uses the gpt-3.5-turbo model.

It’s the holiday season of 2022, and everyone is talking about large language models! OpenAI’s GPT-3 and others are generating quite a bit of interest in the CS research community. Larger datasets and models are significantly improving results on many NLP tasks. Even in databases (my field), large language models are being used to tune systems and to translate natural language to SQL.

Large language models are so good that some folks are tricking themselves into thinking they are sentient. It is important to remember that there is no there there – GPT-3 and the like only model the distribution of the next word given the previous words.1 As a result, large language models can easily produce misinformation, state biased viewpoints, or conjure total nonsense!
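Concretely, an autoregressive language model just factors the probability of a word sequence into a chain of next-word predictions (this is the standard formulation, nothing specific to GPT-3):

\[ p(w_1, \dots, w_T) = \prod_{t=1}^{T} p(w_t \mid w_1, \dots, w_{t-1}) \]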

Let’s try it out. Enter the name of a computer scientist below to search for their DBLP entry. Click their name to ask GPT-3 to generate a bio for them, using a list of their publications as input.

A standalone version of this bio generator is available at https://bio.rmarcus.info.
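For the curious, the search box is essentially a thin wrapper around DBLP’s public author-search API. Here’s a minimal sketch of what that lookup looks like in Python (the endpoint and its q/format/h parameters come from DBLP’s documented API; the helper function is just for illustration):

import requests

def search_authors(query, limit=5):
    # Query DBLP's author-search API; returns (name, profile URL) pairs.
    resp = requests.get(
        "https://dblp.org/search/author/api",
        params={"q": query, "format": "json", "h": limit},
    )
    resp.raise_for_status()
    hits = resp.json()["result"]["hits"].get("hit", [])
    return [(h["info"]["author"], h["info"]["url"]) for h in hits]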

To generate a bio, we fetch the author’s 15 most recent publications from DBLP, then ask GPT-3 to complete the following text (the “prompt”):

Given the below list of publications, produce a short summary of the research of {name}. Use succinct language when possible. Focus on common themes across all of their works. Do not enumerate specific topics or papers. Don't use phrases like "such as" or "including." Do not include specific dates, conferences, venues, or journals. Use gender-neutral language.

{publication list}

Bio: 
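Putting the pieces together, a rough sketch of the whole pipeline looks like this. The DBLP publication-search endpoint and the openai client calls are real APIs, but the rest is a simplification: a name query stands in for the author’s exact publication list, the prompt string is abbreviated to the text shown above, and error handling is skipped.

import requests
from openai import OpenAI

PROMPT = (
    "Given the below list of publications, produce a short summary of the "
    "research of {name}. Use succinct language when possible. ..."
)  # abbreviated; the full instructions are exactly the prompt shown above

def recent_titles(author, n=15):
    # Fetch up to n recent publication titles for a query from DBLP.
    resp = requests.get(
        "https://dblp.org/search/publ/api",
        params={"q": author, "format": "json", "h": n},
    )
    resp.raise_for_status()
    hits = resp.json()["result"]["hits"].get("hit", [])
    return [h["info"]["title"] for h in hits]

def generate_bio(name):
    # Assemble prompt + publication list + "Bio:" and let the model complete it.
    titles = "\n".join(recent_titles(name))
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # per the March 2024 update above
        messages=[{
            "role": "user",
            "content": f"{PROMPT.format(name=name)}\n\n{titles}\n\nBio:",
        }],
    )
    return reply.choices[0].message.content

(The original 2022 version of this page asked GPT-3 to complete the text directly; the sketch follows the March 2024 update to gpt-3.5-turbo and the chat-completions interface.)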

GPT-3 then starts generating text after Bio:, which normally results in a plausible description of the researcher. However, the model can certainly make mistakes. For example, despite the instruction to use gender-neutral language, it still outputs the pronoun “he” for Nesime Tatbul:

Nesime Tatbul is a computer scientist and researcher. His work focuses on machine programming, query optimization, explainable anomaly detection, cloud observability, and other topics related to data management. He has been involved in multiple projects, from developing a benchmarking platform for explainable anomaly detection to creating a pluggable metrics storage engine for the age of observability.

Additionally, despite being instructed not to enumerate specific topics or papers, the model frequently does exactly that. This is somewhat understandable, since it might be impossible to determine an author’s general subfield from only their 15 most recent publications.

Because large language models are prone to bias, capable of producing nonsense, and generally unpredictable, you probably shouldn’t use them in any “real” application yet. But that doesn’t mean you can’t play around with them and have fun! If you haven’t tried out ChatGPT, give it a shot here: https://chat.openai.com

Footnotes

  1. A portion of the machine learning community subscribes to the belief that human beings are similar to such models (“there is no ghost in the machine”). We simply map inputs to outputs, and our consciousness is a manifested illusion. Once we multiply sufficiently large matrices together, AGI is achieved! I’m not an expert here, so I’ve gone with my intuition, which is that we are at least not entirely stochastic machines.