Using deep learning to detect sarcasm

Would you trust an AI to recognize irony in social media posts?

From comedy shows to casual conversations, sarcasm is a commonplace part of our lives, but it remains an elusive form of communication for AI.

New research — funded in part by DARPA — indicates that might be changing.

The challenge: For better or worse, intelligence agencies around the world scan social media for possible threats to national security. Natural language understanding has come a long way in the past decade — seen most clearly in the capabilities of OpenAI’s GPT-3 — but detecting sarcasm remains notoriously difficult.

Sarcasm might seem straightforward to us, but humans are far more skilled than AI at inferring tone and meaning. The nuance inherent in sarcasm, such as saying something ironic that you don't literally mean, can produce false positives when a statement is taken out of context or stripped of its tone of voice, even for humans.

For computer models designed to detect genuine threats, it can be impossible to discern intent.

The opportunity: If algorithms had a better grasp of sarcasm, including the ability to learn about new trends and slang, they could differentiate jokes from real threats. 

The development: In a new paper, University of Central Florida researchers outlined a method for training neural networks to understand human sarcasm. It involves detecting specific word combinations that function as indicators of sarcasm in a social media post — even without further context.

These neural networks were trained on a variety of datasets, from social media platforms like Twitter and Reddit to headlines from The Onion. This allowed researchers Ramya Akula and Ivan Garibay to identify relationships between words and punctuation that could indicate a sarcastic tone.

“For instance, words such as ‘just’, ‘again’, ‘totally’, ‘!’ … are the words in the sentence that hint at sarcasm and, as expected, these receive higher attention than others,” they write.

What Akula and Garibay propose is that this “self-attention architecture” can be an effective way to train neural networks to weight some words more heavily than others based on the words that appear around them.
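
To make the idea concrete, here is a minimal sketch of scaled dot-product self-attention in NumPy. This is not the researchers' code: the sentence, embedding size, and projection matrices are random stand-ins, so the printed weights are only illustrative; in a trained model, attention would concentrate on cue words like the ones quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tokenized post; a real model would use learned subword embeddings.
tokens = ["oh", "great", "another", "monday", "!", "totally", "thrilled"]
d = 16                                   # embedding dimension (arbitrary)
X = rng.normal(size=(len(tokens), d))    # stand-in embeddings, one row per token

# Query/key/value projections are learned in a real model; random here.
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Scaled dot-product self-attention: every token scores every other token.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row softmax
context = weights @ V   # the attended representation a model would pass onward

# Average attention each token receives; in a trained model, high values
# would flag candidate sarcasm cues such as "totally" or "!".
received = weights.mean(axis=0)
for tok, w in sorted(zip(tokens, received), key=lambda p: -p[1]):
    print(f"{tok:>8s}  {w:.3f}")
```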

“Attention is a mechanism to discover patterns in the input that are crucial for solving the given task,” Garibay told Defense One.

In AI, attention refers to the way that models can be programmed to weight certain aspects of the data relative to others. The human brain automatically makes these adjustments all the time — if you’re hungry, you’ll notice food; if you’re trying to find your phone, you’ll pay extra attention to shapes that look like your phone.

In “self-attention architecture,” the researchers essentially apply this technique in a more granular way: the models are instructed to weight certain words within a sequence and then to identify the patterns that occur within those particular sequences.

“In deep learning, self-attention is an attention mechanism for sequences, which helps learn the task-specific relationship between different elements of a given sequence,” Garibay said.
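
The same mechanism is available off the shelf in deep learning frameworks. As a hedged sketch (this is not the UCF model; the sizes and inputs below are arbitrary), PyTorch's nn.MultiheadAttention layer returns the attention weights alongside its output, which is exactly the signal one can inspect to see which tokens a model focused on:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

seq_len, embed_dim, num_heads = 7, 32, 4   # toy sizes, chosen arbitrarily

# Stand-ins for the embedded tokens of one post: (seq_len, batch, embed_dim).
x = torch.randn(seq_len, 1, embed_dim)

attn = nn.MultiheadAttention(embed_dim, num_heads)

# Self-attention: the sequence attends to itself (query = key = value).
# The returned weights have shape (batch, seq_len, seq_len), averaged
# over the heads by default.
out, weights = attn(x, x, x)

# Averaging over the query dimension gives the attention each token
# receives, the raw material for the interpretability discussed below.
print(weights[0].mean(dim=0))
```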

Oh, a sarcasm detector. Oh, that’s a real useful invention: Like the ill-fated sarcasm detector in The Simpsons, most attempts to use AI to identify sarcasm haven’t had much success.

Some attempts rely on hand-picked keywords, which limits how widely they can be used in practice; a fixed list can't keep up with new slang terms that researchers aren't yet aware of (the toy sketch below illustrates the problem). Other approaches built on neural networks have performed better, but they suffer from the black box issue: researchers have no way to understand how a model came to a given conclusion.
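
As a rough illustration of the keyword problem (the cue list and example posts below are invented, not drawn from the paper), a fixed-list detector catches only the words it was given and silently misses newer slang:

```python
# Toy keyword-based sarcasm detector with a hand-picked cue list.
SARCASM_CUES = {"totally", "obviously", "sure", "great"}

def keyword_detector(post: str) -> bool:
    words = (w.strip("!?.,").lower() for w in post.split())
    return any(w in SARCASM_CUES for w in words)

print(keyword_detector("Oh totally, best meeting ever!"))   # True: known cue word
print(keyword_detector("Yeah that plan slaps, no notes"))   # False: newer slang, missed
```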

Akula and Garibay claim that their self-attention architecture differentiates itself by combining the capabilities of neural networks with explainable AI. Being able to understand how and why an AI came to a particular conclusion is especially important with a topic like sarcasm, which is constantly evolving and can vary widely from person to person.

Intent matters: Still, using AI to classify people's habits, tendencies, or intent comes with challenges and demands deep ethical consideration. Just as biased training data has exacerbated racial bias in law enforcement and medicine, ambiguity and limited data could trip up a sarcasm detector.

What one culture or group reads as sarcasm can differ widely from what another does, and it is incumbent on researchers and funders to build rigorous research and inclusion into the development process.

This issue is especially pressing given the uses sarcasm-detecting AI may be put to: miscategorizing a joke as a genuine threat could put people's lives and liberty in jeopardy.

