Awesome Artificial Humour

A curated list of great examples of AI-driven humour.

Licensed under Creative Commons Zero v1.0 Universal (CC0-1.0).

What is Artificial Humour? I am glad you asked! It is anything that automates the generation of jokes (in any format, but usually text).

What is this? A curated list of the best examples of Artificial Humour online.

Is something cool missing? Pull requests welcome!

Contents

  • Generative bots (In English, In Russian)
  • Conversational bots
  • Research papers

Generative bots

In English

  • Supa Bot Fire - Jokes. I tweet that.
  • Deep Drumpf - I Make America Rich Again for my friends.
  • Burned your tweet - Burning tweets by the aforementioned politician.
  • InspiroBot - An endless source of inspirational posters.
  • SciGen - A generator of bogus scientific papers that have been accepted to a bunch of conferences.
  • HEADLINERTRON - Imitating standup comedy with the Botnik engine.

Markov chains (a minimal sketch of the technique follows this list):

  • Neuroneca - Making Stoicism Great Again.
  • King James Programming - Commutativity of addition is a single theorem because it depends on the kind of unholy rapport he felt to exist between his mind and that lurking horror in the distant black valley.
  • At the Modules of Madness - Mixing software documentation with novels.
  • Taylor Swift Bot - In case you think the world doesn't have quite enough Taylor Swift lyrics.
  • The Great Botsby - Because F. Scott Fitzgerald is awesome and The Great Gatsby is awesome.
  • That can be my next tweet - Do what they did to Trump, the Bible, and Taylor Swift to your own tweets.
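
The bots above all rely on the same basic trick: build a word-level Markov chain from one or more corpora and sample from it (King James Programming, for example, simply mixes the KJV Bible with a programming textbook). A minimal sketch; the two tiny corpora below are invented stand-ins for the real source texts.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words that can follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Random-walk the chain, restarting whenever we hit a dead end."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:
            state = random.choice(list(chain))
            out.extend(state)
            continue
        out.append(random.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

# Tiny invented stand-ins for the two corpora being mixed.
scripture = "and the evening and the morning were the first day and the light was good"
manual = "the procedure returns the first element and the rest of the list is ignored"
print(generate(build_chain(scripture + " " + manual)))
```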

And Darius Kazemi's 76 bots.

In Russian

Conversational bots

  • ELIZA - In case you can't afford a therapist. Convinced more than a few users it understood them, using just a few regular expressions (see the sketch below this list).
  • Mitsuku - Makes you wonder if you pass the Turing test.
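
To make the "just a few regular expressions" point concrete, here is a minimal ELIZA-style responder. The rules are invented for illustration and are far cruder than Weizenbaum's original script.

```python
import re

# A tiny ELIZA-style rule set: (pattern, response template) pairs.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)\?", "Why do you ask that?"),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching canned response, therapist-style."""
    text = utterance.lower().rstrip(".!")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am sad about my Markov chain"))
# How long have you been sad about my markov chain?
```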

Research papers

Humoroids - Conversational Agents That Induce Positive Emotions With Humour

In this paper, we propose a definition of "Humoroids" - a new class of humor-equipped talking agents. We summarize existing research, discuss the concept of Humoroids, and introduce our pun-telling agent, which (as shown in an evaluation experiment) induces positive emotion in human interlocutors.

That's What She Said: Double Entendre Identification

Humor identification is a hard natural language understanding problem. We identify a subproblem — the “that's what she said” problem — with two distinguishing characteristics:

  1. Use of nouns that are euphemisms for sexually explicit nouns
  2. Structure common in the erotic domain.

We address this problem in a classification approach that includes features that model those two characteristics. Experiments on web data demonstrate that our approach improves precision by 12% over baseline techniques that use only word-based features.
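
A hedged sketch of the kind of pipeline the abstract describes: a couple of hand-crafted features feeding an off-the-shelf classifier. The lexicons, feature choices, and training sentences below are toy placeholders, not the paper's features or data.

```python
from sklearn.linear_model import LogisticRegression

# Toy placeholder lexicons; the paper derives its euphemism and erotic-domain
# statistics from corpora, which are not reproduced here.
EUPHEMISM_NOUNS = {"banana", "sausage", "package", "wood"}
EROTIC_STYLE_WORDS = {"hard", "wet", "long", "deep"}

def features(sentence: str):
    words = set(sentence.lower().split())
    return [
        len(words & EUPHEMISM_NOUNS),     # characteristic 1: euphemistic nouns
        len(words & EROTIC_STYLE_WORDS),  # characteristic 2: erotic-domain style
    ]

# Tiny invented training set: 1 means "that's what she said" applies.
sentences = [
    "that banana is really long",
    "please review the quarterly report",
    "this sausage is too hard to cut",
    "the meeting starts at nine",
]
labels = [1, 0, 1, 0]

classifier = LogisticRegression().fit([features(s) for s in sentences], labels)
print(classifier.predict([features("that wood is hard and long")]))  # -> [1]
```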

Dank Learning: Generating Memes Using Deep Neural Networks (GitHub)

We introduce a novel meme generation system, which given any image can produce a humorous and relevant caption. Furthermore, the system can be conditioned on not only an image but also a user-defined label relating to the meme template, giving a handle to the user on meme content. The system uses a pretrained Inception-v3 network to return an image embedding which is passed to an attention-based deep-layer LSTM model producing the caption - inspired by the widely recognized Show and Tell Model. We implement a modified beam search to encourage diversity in the captions. We evaluate the quality of our model using perplexity and human assessment on both the quality of memes generated and whether they can be differentiated from real ones. Our model produces original memes that cannot on the whole be differentiated from real ones.
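
A rough PyTorch sketch of the architecture described above: an Inception-v3 image embedding used to condition an LSTM caption decoder. It leaves out attention, the template-label conditioning, beam search, and training; the class names and dimensions are invented, so treat it as a shape-level illustration (assuming a recent torchvision), not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import inception_v3

class MemeCaptioner(nn.Module):
    """Image embedding -> LSTM decoder over caption tokens (heavily simplified)."""

    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.encoder = inception_v3(weights=None)   # load pretrained weights in practice
        self.encoder.fc = nn.Identity()             # expose the 2048-d pooled feature
        self.img_proj = nn.Linear(2048, embed_dim)
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        self.encoder.eval()                         # keep the CNN in inference mode
        with torch.no_grad():
            feats = self.encoder(images)            # (batch, 2048)
        img_token = self.img_proj(feats).unsqueeze(1)           # image as the first "token"
        seq = torch.cat([img_token, self.token_embed(captions)], dim=1)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                     # per-step vocabulary logits

model = MemeCaptioner()
logits = model(torch.randn(2, 3, 299, 299), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 13, 10000])
```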

Unsupervised joke generation from big data

Humor generation is a very hard problem. It is difficult to say exactly what makes a joke funny, and solving this problem algorithmically is assumed to require deep semantic understanding, as well as cultural and other contextual cues. We depart from previous work that tries to model this knowledge using ad-hoc manually created databases and labeled training examples. Instead we present a model that uses large amounts of unannotated data to generate "I like my X like I like my Y, Z" jokes, where X, Y, and Z are variables to be filled in. This is, to the best of our knowledge, the first fully unsupervised humor generation system. Our model significantly outperforms a competitive baseline and generates funny jokes 16% of the time, compared to 33% for human-generated jokes.
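
A toy illustration of the unsupervised idea: choose a Z that co-occurs with both X and Y in plain text. The mini-corpus is invented and the scoring is far simpler than the model in the paper.

```python
from collections import Counter

# Invented mini-corpus standing in for "large amounts of unannotated data".
CORPUS = [
    "the coffee was dark and bitter",
    "his humour was dark and dry",
    "the coffee was strong and hot",
    "her humour was strong and sharp",
]
STOPWORDS = {"the", "his", "her", "was", "and"}

def attributes(noun):
    """Count words that share a sentence with the noun (a very crude attribute model)."""
    counts = Counter()
    for sentence in CORPUS:
        words = sentence.split()
        if noun in words:
            counts.update(w for w in words if w != noun and w not in STOPWORDS)
    return counts

def joke(x, y):
    shared = attributes(x) & attributes(y)   # attributes supported by both nouns
    if not shared:
        return None
    z = shared.most_common(1)[0][0]
    return f"I like my {x} like I like my {y}: {z}."

print(joke("coffee", "humour"))  # e.g. "I like my coffee like I like my humour: dark."
```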

Generating Original Jokes

Computational joke generation is a complex problem in the field of artificial intelligence and natural language processing. If successful, however, computational humor would play an essential role in interpersonal communication between humans and computers. In this paper, we use natural language processing (NLP) techniques paired with various models to generate original puns. We found that a character-based recurrent neural network (RNN) is a more solid approach for generating original jokes, comparing its results with those generated by trigram and word-based RNN models. Using jokes from sources like Reddit.com, Twitter, and joke-specific websites to train our models, we evaluate the results and present our conclusions.
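
Of the baselines compared above, the character trigram model is the simplest to write down. A small sketch trained on an invented string (the real models were trained on scraped joke collections).

```python
import random
from collections import defaultdict

def train_char_trigrams(text):
    """For every two-character context, record which characters can follow it."""
    model = defaultdict(list)
    for i in range(len(text) - 2):
        model[text[i:i + 2]].append(text[i + 2])
    return model

def sample(model, seed, length=80):
    """Grow text one character at a time from the last two characters."""
    out = list(seed)
    for _ in range(length):
        followers = model.get("".join(out[-2:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return "".join(out)

# Invented one-joke "corpus"; the paper scrapes Reddit, Twitter and joke sites.
jokes = "why did the chicken cross the road? to get to the other side. "
print(sample(train_char_trigrams(jokes * 3), seed="wh"))
```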

Computer, Tell Me a Joke ... but Please Make it Funny: Computational Humor with Ontological Semantics

Computational humor is a subdiscipline of computational linguistics with applications in human computer interfaces, edutainment, affective computing, intelligent agents, and other areas. Based on ontological semantics, we develop the resources and algorithms the computer needs to understand and produce humor, in principle and on a detailed example. Our ultimate output will not be that of a toy system that fills in templates, previously the only available approach, but rather true natural language generation, based on the computer approximation of the human understanding of the joke. This paper shows how ontological semantics provides for and computes the full computer understanding of humor, a sine qua non of humor generation.