The "Evidence" of an AI Smile

By Mary Warner posted 04-11-2023 08:30 AM


I am not photogenic. I can’t turn on a million-megawatt smile and pose naturally in front of a camera. As my family is all too eager to point out, most photos of me show my eyes closed in mid-blink.

The encouragement to say, “Cheese!” and smile big for the camera was all around me while I was growing up. And it’s gotten stronger since the advent of the internet and the rise of online personalities. “Brand You” is an admonishment to post your best profile photo on social media services, and that photo better come with a warm, friendly, full-toothed smile.

It did not dawn on me how culturally specific this smile was until I read an article called “AI and the American Smile: How AI misrepresents culture through a facial expression.”

The article shows a variety of AI-generated photos of groups of people across time and cultures, with all of the individuals featuring the broad “selfie” smile we are expected to don in our digitally driven world.

The author, jenka, is originally from the Soviet Union, where smiling in this fashion is not typical. From the article:

As the old Soviet joke goes, how can you tell that someone is an American in Russia?

They’re smiling.

But how does AI tell when someone is most likely lying? They’re smiling like an American.

The article made me feel better about my less-than-American smile (I’m not alone!), while simultaneously giving me further pause about AI-generated content.

In the legal tech community, it's hard to avoid the excited chatter about OpenAI, ChatGPT, and generative AI text tools. The technology is expected to transform the way law is practiced through the efficiencies it will bring to analyzing, comparing, and summarizing large amounts of data quickly.

Granted, the legal tech community is also well aware of the problems with generative AI text, namely “hallucinations,” or false information (including made-up citations), that these tools serve up in a confident manner.

A primary key to reducing hallucinations is knowing what data the AI model was trained on (in the case of text tools, the sources behind the large language model). It’s also critical to be well versed in your specific subject area, so you can spot when the AI is wrong.

I recently experimented with Character.AI, asking it to create a character from central Minnesota in the 1850s, a place and era I’ve studied in some depth. After telling me that the prompt was very specific, it returned some general information that was off in a way I instantly spotted as wrong, but that others without this knowledge would not have recognized. It also gave the character a bland name, John Robert, not indicative of the variety in names at the time. When I asked it for the name of this character’s Ojibwe friend, it returned the suggestion “Running Deer” … in English! I had to prompt it again to provide a name in Ojibwe, but I wasn’t able to save the generated text in order to check the accuracy of the suggestion.

Like the AI-generated American grins plastered on the faces of Native Americans or Māori warriors or Eastern European soldiers from the American Smile article, Character.AI showed me that large language models behind generative AI text tools have their own cultural and historical biases.

When it comes to legal tools built on AI, so long as the models are trained on law-related sources, like caselaw databases, and the legal practitioner is knowledgeable enough to catch and correct any errors, these tools should prove handy in managing many aspects of legal practice.

However, when it comes to the intersection of law and the diversity of humanity’s cultures, emotions, and languages, it’s best to remember how much of AI is being trained on a specific strain of the American experience, one that leaves out the true diversity of the United States, as well as the rest of the world. When generative AI can convincingly misrepresent the typical facial expression of a cultural group and we don’t know enough to question it, the “evidence” of a Māori warrior with an American smile will mislead us.



After writing this blog post, I spotted this article from PC Magazine: “Only Half of Americans Can Differentiate Between AI and Human Writing.”

It appears law firms will need a new field of expertise for e-discovery: AI Forensics.