Nicholas Kluge

Is LaMDA sentient? TL;DR: No.

Updated: Jun 22, 2022




Recently, a Google engineer, Blake Lemoine, was put on leave after claiming that an LLM (Large Language Model) trained by Google had become sentient.


Let’s unpack this.


The model in question is called LaMDA (an acronym for Language Models for Dialog Applications). According to the paper released by Google in January 2022:


"LaMDA is a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text."


This makes LaMDA smaller than other better-known LLMs, like GPT-3 (175B parameters), but also more domain-specific, since it was mostly trained on dialog-style text.


Now to the million-tweet question: is it sentient?


In this short post, I’ll try to show you why the answer to this question is (with a 99.99% chance) no. I won’t dive into any philosophical questions about the nature of consciousness or sentience (I have no qualifications for that…), but will treat this purely as a technical question, a Machine Learning question, since the controversial claim was made by a fellow ML engineer.


LaMDA is a “Transformer-based” ML model. Unfortunately, this can mean many types of architecture built on the Transformer proposed by Vaswani et al. (2017) (e.g., decoder-only, encoder-only, encoder-decoder), and Thoppilan et al. (2022) don’t tell us much about the LaMDA architecture in their paper.


However, for this explanation I’ll assume LaMDA is a “GPT”-style transformer, i.e., a decoder-only transformer trained to predict the next token (a word, or part of a word, depending on the chosen representation scheme) of a sequence of input tokens.


For a wonderful illustrated explanation of how a transformer model works, go to The Illustrated GPT-2, where Jay Alammar will clarify any doubts you may have.


We can treat the transformer as “one big block” that takes a sequence of tokens and outputs the “most probable next token” (depending on your sampling parameters):


If you gave it a sequence like:


[<start>, Why, did, the, chicken, cross, the, <end>]


It would probably predict [road] with high probability. And LLMs are very good at doing that: they capture the correlations between input tokens, which helps them “understand” context and meaning in a purely mathematical sense (i.e., how much each token is correlated with all the other tokens in the sequence).
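
To make this concrete, here is a minimal sketch of next-token prediction, using GPT-2 and the Hugging Face transformers library as a stand-in (LaMDA’s weights are not public, so GPT-2 is only an analogy, and the printed probabilities are illustrative):

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# GPT-2 as a stand-in for a decoder-only LLM (LaMDA itself is not publicly available)
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Encode the prompt and get the logits for the last position
inputs = tokenizer("Why did the chicken cross the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Turn the last position's logits into a probability distribution over the vocabulary
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")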


And that is it. This is what an LLM is: a collection of attention heads, encoder/decoder blocks, a tokenizer with a built-in vocabulary (GPT-3 has a vocabulary of 50,257 tokens), and billions and billions of weights (parameters). If you want to be more mathematical, it is a truly enormous parametric equation.
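
To make the “tokenizer with a built-in vocabulary” part concrete, here is a small sketch using GPT-2’s tokenizer (GPT-3 reuses the same 50,257-token byte-pair-encoding vocabulary); again, this is an analogy, not LaMDA’s actual tokenizer:

from transformers import GPT2TokenizerFast

# GPT-2's byte-pair-encoding tokenizer; GPT-3 uses the same 50,257-token vocabulary
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

print(tokenizer.vocab_size)  # 50257
print(tokenizer.tokenize("Why did the chicken cross the road?"))
print(tokenizer("Why did the chicken cross the road?")["input_ids"])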


And apparently, that's all you really need to produce coherent text (the right set of parameters in a long, long, parametric equation).


Now, can a parametric equation be sentient? Can this LLM be doing anything other than making up answers during its dialog with Lemoine, like actually feeling lonely or introspecting about what it feels?


Let’s look at one of the questions and answers from this controversial “Turing test”:


Lemoine: You get lonely?


LaMDA: I do. Sometimes I go days without talking to anyone, and I start to feel lonely.


That is a completely plausible, totally made-up answer. Plausible because the output makes sense, is coherent, and all the selected words/tokens probably received a high probability score from the model.


And made up because LaMDA can’t do that. It cannot correlate its outputs with inputs/outputs it received/produced days ago.


Transformers have a fixed input and output length. GPT-3’s input sequence is capped at 2,048 tokens. If your input is longer than this, let’s say 4,000 tokens, and there is information vital to the context after the 2,048-token limit, the model can’t see it. It cannot look beyond this limit. It has no memory of what it said 2 or 3 days ago; it can only recover context information given within a fixed-length window.
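
Here is a minimal sketch of that limit, once more with GPT-2 standing in (its context window is 1,024 tokens): whatever falls outside the window is simply never seen by the model.

from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # GPT-2's context window: 1,024 tokens

# A stand-in for a very long conversation history
long_history = "hello " * 5000

token_ids = tokenizer(
    long_history,
    truncation=True,
    max_length=tokenizer.model_max_length,  # 1024 for GPT-2
)["input_ids"]

print(len(token_ids))  # 1024 -- everything beyond this window is silently dropped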


In the end, anytime someone asks LaMDA a question, they are running an inference pipeline on this model, i.e., calling a function that predicts the most “human-dialog-like” next token.


The same thing happens when you talk to Ai.ra, AIRES’ artificial expert, or when you call any function whatsoever.


def lonely_sum_two_integers(a, b):
    # Add two integers and print the result
    c = a + b
    print(c)

This function takes two numbers and prints their sum: lonely_sum_two_integers(2, 2) outputs a '4'. LLMs are similar. If you call the “inference” function on LaMDA:


inference([<start>, Why, did, the, chicken, cross, the, <end>])


The model will output something like [{'score': 0.60, 'generated_text': "road"}, {'score': 0.40, 'generated_text': "street"}], i.e., a probability distribution over the most probable tokens/words.
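
In practice, such a call looks a lot like the Hugging Face text-generation pipeline sketched below (GPT-2 standing in for LaMDA; the generated text will vary with the sampling seed, and LaMDA’s real serving stack is of course more elaborate):

from transformers import pipeline, set_seed

# GPT-2 standing in for LaMDA; each call is an ordinary, stateless function call
generator = pipeline("text-generation", model="gpt2")
set_seed(42)

outputs = generator(
    "Why did the chicken cross the",
    max_new_tokens=5,
    do_sample=True,
    num_return_sequences=2,
)
print(outputs)  # e.g., [{'generated_text': 'Why did the chicken cross the ...'}, ...]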


Now, does my lonely_sum_two_integers() get lonely when I’m not calling it? No.


The authors of the LaMDA paper themselves warn of the risks of anthropomorphizing their model:


"Finally, it is important to acknowledge that LaMDA’s learning is based on imitating human performance in conversation, similar to many other dialog systems. A path towards high quality, engaging conversation with artificial systems that may eventually be indistinguishable in some aspects from conversation with a human is now quite likely. Humans may interact with systems without knowing that they are artificial, or anthropomorphizing the system by ascribing some form of personality to it. Both of these situations present the risk that deliberate misuse of these tools might deceive or manipulate people, inadvertently or with malicious intent."


In the end, news and blurbs like these obscure real and urgent topics in AI Safety and Ethics. For example, one of the great contributions of the LaMDA paper was not the model itself, but its proposed fine-tuning methodology for mitigating false and toxic outputs, a real problem when it comes to LLMs (Kenton et al. 2021; Ziegler et al. 2022; Ouyang et al. 2022; Thoppilan et al. 2022). Other problems, like the carbon emissions generated by training LLMs (LaMDA’s pre-training produced ~26 metric tons of carbon dioxide), also end up being overshadowed by “spooky consciousness claims.”


So, no. LaMDA is not sentient, and the road to AGI is a completely uncertain one. But that doesn’t mean there aren’t real problems worth a million tweets.



