Sunday, December 3, 2023
Google search engine
HomeUncategorizedGPT-3 Creative Fiction

GPT-3 Creative Fiction

Creative writing by OpenAI’s GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling. Plus advice on effective GPT-3 prompt programming & avoiding common errors.

I continue my AI poetry generation experiments with OpenAI’s2020 GPT-3, which is 116× larger, and much more powerful, than the 2019 GPT-2⁠. GPT-3, however, is not merely a quantitative tweak yielding “GPT-2 but better”—it is qualitatively different, exhibiting eerie runtime learning capabilities allowing even the raw model, with zero finetuning, to “meta-learn” many textual tasks purely by example or instruction. One does not train or program GPT-3 in a normal way, but one engages in dialogue and writes prompts to teach GPT-3 what one wants.

Experimenting through the OpenAI Beta API in June 2020, I find that GPT-3 does not just match my finetuned GPT-2-1.5b-poetry for poem-writing quality, but exceeds it, while being versatile in handling poetry⁠, Tom Swifty puns⁠, science fiction, dialogue like Turing’s Turing-test dialogue⁠, literary style parodies… As the pièce de résistance, I recreate Stanislaw Lem’s Cyberiad’s “Trurl’s Electronic Bard” poetry using GPT-3. (Along the way, I document instances of how the BPE text encoding unnecessarily damages GPT-3’s performance on a variety of tasks, how to best elicit the highest-quality responses, common errors people make in using GPT-3, and test out GPT-3’s improvements in NN weak points like logic or commonsense knowledge.)

GPT-3’s samples are not just close to human level: they are creative, witty, deep, meta, and often beautiful. They demonstrate an ability to handle abstractions, like style parodies, I have not seen in GPT-2 at all. Chatting with GPT-3 feels uncannily like chatting with a human. I was impressed by the results reported in the GPT-3 paper, and after spending a week trying it out, I remain impressed.

This page records GPT-3 samples I generated in my explorations, and thoughts on how to use GPT-3 and its remaining weaknesses⁠. I hope you enjoy them even a tenth as much as I enjoyed testing GPT-3 and watching the completions scroll across my screen.

The latest and greatest neural network for unrestricted natural language generation is OpenAI’s GPT-3⁠. GPT-3 is like GPT-1 and the GPT-2 I’ve used extensively before1—only much more so, and then going beyond them in a fascinating new way.

Scaling works: quantity is a quality all its own. The scaling of GPT-2-1.5b by 116× to GPT-3-175b has worked surprisingly well and unlocked remarkable flexibility in the form of meta-learning, where GPT-3 can infer new patterns or tasks and follow instructions purely from text fed into it. What can we do with GPT-3? Here, we’re all about having fun while probing GPT-3’s abilities for creative writing tasks, primarily (but far from limited to) poetry. Fortunately, OpenAI granted me access to their Beta API service which provides a hosted GPT-3 model, letting me spend a great deal of time interacting with GPT-3 and writing things. Naturally, I’d like to write poetry with it: but GPT-3 is too big to finetune like I did GPT-2, and OA doesn’t (yet) support any kind of training through their API. Must we content ourselves with mediocre generic poetry, at best, deprived of finetuning directly on chosen poetry corpuses or authors we might like to parody? How much does GPT-3 improve and what can it do?

Turns out: a lot! Below, I walk through first impressions of using GPT-3, and countless samples. In the latest twist on Moravec’s paradox⁠, GPT-3 still struggles with commonsense reasoning & factual knowledge of the sort a human finds effortless after childhood, but handles well things like satire & fiction writing & poetry, which we humans find so difficult & impressive even as adults. In addition to the Cyberiad⁠, I’d personally highlight the Navy Seal & Harry Potter parodies, the Devil’s Dictionary of Science / Academia⁠, “Uber Poem”⁠, “The Universe Is a Glitch” poem (with AI-generated rock music version), & “Where the Sidewalk Ends”⁠.

What Benchmarks Miss: Demos

The GPT-3 paper includes evaluation of zero-shot/​few-shot performance across a wide range of tasks, but I fear that unless one is familiar with the (deadly dull) benchmarks in question, it won’t be impressive. You can skip to the appendix for more example like its poems⁠, or browse the random samples⁠.

The original OpenAI Beta API homepage includes many striking examples of GPT-3 capabilities ranging from chatbots to question-based Wikipedia search to legal discovery to homework grading to translation; I’d highlight AI Dungeon’s Dragon model (example before the March 2021 meltdown), and “Spreadsheets”⁠/ ​“Natural Language Shell”/​“Code Completion”2⁠. Andrew Mayne describes using GPT-3 to generate book recommendation lists & read interactive stories & engage in conversations with historical figures like Ada Lovelace3⁠, summarize texts for elementary school children (also available as a service now, Simplify.so) or as a writing assistant⁠, or summarize AI Dungeon⁠/ ​movies in emoji (Matrix: “🤖🤐”; Hunger Games: “🏹🥊🌽🏆”; see also Tsimpoukelli et al 2021), convert screenplay ↔︎ story⁠, summarize / write emails⁠, translate from legalize⁠, and rewrite HTML⁠. Paras Chopra finds that GPT-3 knows enough Wikipedia & other URLs that the basic Q&A behavior can be augmented to include a ‘source’ URL, and so one can make a knowledge base ‘search engine’ with clickable links for any assertion (ie. the user can type in “What year was Richard Dawkin’s The Selfish Gene published?” and GPT-3 will return a tuple like ("The Selfish Gene was published in 1976","https://en.wikipedia.org/wiki/The_Selfish_Gene") which can be parsed & presented as a search engine). Andreas Stuhlmüller explored using it to create suggestions for predicting on by breaking down high-level forecasting questions. Hendrycks et al 2020 tests few-shot GPT-3 on common moral reasoning problems, and while it doesn’t do nearly as well as a finetuned ALBERT overall, interestingly, its performance degrades the least on the problems constructed to be hardest.

Ryan North experimented with Crunchyroll anime⁠, Star Trek: The Next Generation⁠, & Seinfeld plot summaries. Max Woolf has a repo of GPT-3 example prompts & various completions such as the original GPT-2 “unicorn” article, Revenge of the Sith, Stack Overflow Python questions, and his own tweets (note that many samples are bad because the prompts & hyperparameters are often deliberately bad, eg. the temperature = 0 samples, to demonstrate the large effect of poorly-chosen settings as a warning). Janelle Shan experimented with weird dog descriptions to accompany deformed GAN-dog samples, and 10,000-year nuclear waste warnings based on the famous 1993 Sandia report on long-term nuclear waste warning messages for the Waste Isolation Pilot Plant⁠. Summers-Stay tried imitating Neil Gaiman & Terry Pratchett short stories with excellent results. Arram Sabetti has done “songs, stories, press releases, guitar tabs, interviews, essays, and technical manuals”⁠, with his Elon Musk Dr. Seuss poems a particular highlight. Salahuddin got great results imitating Pablo Neruda’s poetry, as did Brundage with Walt Whitman⁠. Paul Bellow (LitRPG) experiments with RPG backstory generation. Merzmensch Kosmopol enjoyed generating love letters written by a toaster. James Yu co-wrote a SF Singularity short story with GPT-3, featuring regular meta sidenotes where he & GPT-3 debate the story in-character; it was exceeded in popularity by Pamela Mishkin’s “Nothing Breaks Like A.I. Heart” which went full Choose-Your-Own-Adventure. Daniel Bigham plays what he dubs “19 degrees of Kevin Bacon” which links Mongolia to (eventually) Kevin Bacon. Alexander Reben prompted for contemporary art/​sculpture descriptions, and physically created some of the ones he liked best using a variety of mediums like matchsticks, toilet plungers, keys, collage, etc. Tomer Ullman prompted GPT-3 for new philosophy thought experiments. And  / r / aigreentext stems from the serendipitous discovery that GPT-3 is amazingly good at imitating 4chan-style “green text” stories & that the OA Playground interface colors generated text green, so screenshots of real & prompted green text stories look similar.

Harley Turan found that, somehow, GPT-3 can associate plausible color hex codes with specific emoji (apparently language models can learn color from language, much like blind humans do). Even more perplexingly, Sharif Shameem discovered that GPT-3 could write JSX (a Javascript+CSS hybrid) according to a specification like “5 buttons, each with a random color and number between 1–10” or increase / decrease a balance in React or a very simple to-do list and it would often work, or require relatively minor fixes. He also demonstrated a divide-and-conquer approach to making GPT-3 ‘control’ a web browser⁠. GPT-3 can also write some simple SVG shapes or SVG⁠/ ​Chart.js bar graphs⁠, do text→LaTeX and SQL queries⁠, and match k-NN & do regression on toy datasets. While I don’t think programmers need worry about unemployment (NNs will be a complement until they are so good they are a substitute), the code demos are impressive in illustrating just how diverse the skills created by pretraining on the Internet can be. Particularly intriguing in terms of code generation is its ability to write regexps from English descriptions⁠, and Jordan Singer’s Figma plugin which apparently creates a new Figma layout DSL & few-shot teaches it to GPT-3.

(I’d also highlight GPT-3’s version of the famous GPT-2 recycling rant, an attempt at “Epic Rap Battles of History”⁠, GPT-3 playing 200-word tabletop RPGs with itself⁠, the Serendipity recommendation engine which asks GPT-3 for movie/​book recommendations (cf. Ganguli et al 2022), and Lawder’s food label ingredient summarizer⁠.)

One underexplored area of GPT-3 is using its “search” API, which as the name indicates, takes a text prompt (the query) and searches a large set of possible results, and returns the ‘most similar’ one, in a highly abstract sense; Andrew Mayne demonstrates that it’s much more than a simple keyword search engine by doing things like searching for abstract movie plots.4

For my main discussion of why GPT-3 works and its implications, see “On GPT-3: Meta-Learning, Scaling, Implications, And Deep Theory” (see also Backstop). Below is the summary:

GPT-3, announced by OpenAI in May 2020, was the largest neural network ever trained, by over an order of magnitude. Trained on Internet text data, it is the successor to GPT-2, which surprised everyone by its natural language understanding & generation ability. GPT-3 is even more surprising in that this vast increase in size did not run into diminishing returns⁠, as many expected, but the benefits of scale continued to happen as forecasted by OpenAI. These benefits were not merely learning more facts & text than GPT-2, but qualitatively distinct & surprising in showing meta-learning: while GPT-2 learned how to do common natural language tasks like text summarization, GPT-3 instead learned how to follow directions and learn new tasks from a few examples. (As a result, GPT-3 outputs & interaction are more fascinating & human-like than GPT-2.)

While the immediate applications of GPT-3, like my poetry or humor writings, are nice, the short-term implications of GPT-3 are much more important.

First, while GPT-3 is expensive by conventional DL standards, it is cheap by scientific/​commercial/​military/​government budget standards, and the results indicate that models could be made much larger. Second, models can also be made much more powerful, as GPT is an old approach known to be flawed in both minor & major ways, and far from an ‘ideal’ Transformer⁠. Third, GPT-3’s capabilities come from learning on raw (unsupervised) data; that has long been one of the weakest areas of DL, holding back progress in other areas like reinforcement learning or robotics. Sequence models can learn rich models of environments & rewards (either online or offline), and implicitly plan and perform well (Chen et al 2021’s Decision Transformer is a demonstration of how RL can lurk in what looks merely like simple supervised learning). Models like GPT-3 suggest that large unsupervised models will be vital components of future DL systems, as they can be ‘plugged into’ systems to immediately provide understanding of the world, humans, natural language, and reasoning.

The meta-learning has a longer-term implication: it is a demonstration of the blessings of scale, where problems with simple neural networks vanish, and they become more powerful, more generalizable, more human-like when simply made very large & trained on very large datasets with very large compute—even though those properties are believed to require complicated architectures & fancy algorithms (and this perceived need drives much research). Unsupervised models benefit from this, as training on large corpuses like Internet-scale text present a myriad of difficult problems to solve; this is enough to drive meta-learning despite GPT not being designed for meta-learning in any way. (This family of phenomena is perhaps driven by neural networks functioning as ensembles of many sub-networks with them all averaging out to an Occam’s razor, which for small data & models, learn superficial or memorized parts of the data, but can be forced into true learning by making the problems hard & rich enough.)

The blessings of scale in turn support a radical theory: an old AI paradigm held by a few pioneers in connectionism (early artificial neural network research) and by more recent deep learning researchers, the scaling hypothesis. The scaling hypothesis regards the blessings of scale as the secret of AGI: intelligence is ‘just’ simple neural units & learning algorithms applied to diverse experiences at a (currently) unreachable scale. As increasing computational resources permit running such algorithms at the necessary scale, the neural networks will get ever more intelligent.

When? Estimates of Moore’s law-like progress curves decades ago by pioneers like Hans Moravec indicated that it would take until the 2010s for the sufficiently-cheap compute for tiny insect-level prototype systems to be available, and the 2020s for the first sub-human systems to become feasible, and these forecasts are holding up. (Despite this vindication, the scaling hypothesis is so unpopular an idea, and difficult to prove in advance rather than as a fait accompli, that while the GPT-3 results finally drew some public notice after OpenAI enabled limited public access & people could experiment with it live, it is unlikely that many entities will modify their research philosophies, much less kick off an ‘arms race’.)

Depending on what investments are made into scaling DL, and how fast compute grows, the 2020s should be quite interesting—sigmoid or singularity?

Objective metrics hard to interpret. How much better is (un-finetuned base) GPT-3? The likelihood loss is an absolute measure, as are the benchmarks, but it’s hard to say what a decrease of, say, 0.1 bits per character might mean, or a 5% improvement on SQuAD, in terms of real-world use or creative fiction writing. It feels like a large improvement, definitely a larger improvement than going from GPT-2-345M to GPT-2-1.5b, or GPT-2-1.5b to GPT-3-12b, but how much?

Screening gains: 1:100 → 1:5 or 20× better? For fiction, I treat it as a curation problem: how many samples do I have to read to get one worth showing off? One could think of it asking how efficiently a model searches The Library of Babel (or should that be, The Book of Sand, or “The Aleph”?): at the one extreme, an algorithm which selects letters at random will have to generate astronomically large numbers of samples before, like the proverbial monkeys, they generate a page from a Shakespeare play; at the other extreme, a reasonably intelligent human can dash off 1 plausible page in 1 try. With AI algorithms, the results are intermediate but rapidly improving. A Markov chain text generator trained on a small corpus represents a huge leap over randomness: instead of having to generate quadrillions of samples, one might only have to generate millions of samples to get a coherent page; this can be improved to hundreds of thousands by increasing the depth of the n of its n-grams, which is feasible as one moves to Internet-scale text datasets (the classic “unreasonable effectiveness of data” example) or by careful hand-engineering & combination with other approaches like Mad-Libs-esque templating. A char-RNN⁠, like in my char-RNN poetry experiments does better still: it easily generates reasonable paragraphs, so one might only have to brute force on the order of thousands of samples to get a pleasing page. With GPT-2-117M poetry, I’d typically read through a few hundred samples to get a good one, with worthwhile improvements coming from 345M→774M→1.5b; by 1.5b, I’d say that for the crowdsourcing experiment⁠, I read through 50–100 ‘poems’ to select one. But for GPT-3, once the prompt is dialed in, the ratio appears to have dropped to closer to 1:5—maybe even as low as 1:3! I frequently find myself shrugging at the first completion I generate, “not bad!” (Certainly, the quality of GPT-3’s average prompted poem appears to exceed that of almost all teenage poets.) I would have to read GPT-2 outputs for months and probably surreptitiously edit samples together to get a dataset of samples like this page.

On two occasions I have been asked,—‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

Charles Babbage⁠, Passages from the Life of a Philosopher 1864

The GPT-3 neural network is so large a model in terms of power and dataset that it exhibits qualitatively different behavior: you do not apply it to a fixed set of tasks which were in the training dataset, requiring retraining on additional data if one wants to handle a new task (as one would have to retrain GPT-2); instead, you interact with it, expressing any task in terms of natural language descriptions, requests, and examples, tweaking the prompt until it “understands” & it meta-learns the new task based on the high-level abstractions it learned from the pretraining.

This is a rather different way of using a DL model, and it’s better to think of it as a new kind of programming, prompt programming, where the prompt is now a coding language which programs GPT-3 to do new things.

A new programming paradigm? The GPT-3 neural network is so large a model in terms of power and dataset that it exhibits qualitatively different behavior: you do not apply it to a fixed set of tasks which were in the training dataset, requiring retraining on additional data if one wants to handle a new task (as one would have to retrain GPT-2); instead, you interact with it, expressing any task in terms of natural language descriptions, requests, and examples, tweaking the prompt until it “understands” & it meta-learns the new task based on the high-level abstractions it learned from the pretraining. This is a rather different way of using a DL model, and it’s better to think of it as a new kind of programming, where the prompt is now a “program” which programs GPT-3 to do new things. “Prompt programming”5 is less like regular programming than it is an exercise in a kind of tacit knowledge⁠/ ​mechanical sympathy⁠. It is like coaching a superintelligent cat into learning a new trick: you can ask it, and it will do the trick perfectly sometimes, which makes it all the more frustrating when it rolls over to lick its butt instead—you know the problem is not that it can’t but that it won’t.

Reprogramming by asking politely. The demos above and on this page all6 use the raw default GPT-3 model, without any additional training. Instead, to get all these different behaviors, one provides a short textual input to GPT-3, with which it will predict the next piece of text (as opposed to starting with an empty input and freely generating anything); GPT-3, just by reading it, can then flexibly adapt its writing style and reasoning and use new definitions or rules or words defined in the textual input no matter that it has never seen them before.

What is meta-learning? This is considered “meta-learning” because GPT-3 has “learned how to learn”: in its endless training on so many gigabytes of text, it encounters so many different kinds of text that it had no choice but to learn abstractions & how to understand descriptions & instructions & formatting & authorial intent to let it adapt on the fly to the current piece of text it was training on, since there was too much diversity & data for it to simply learn each task normally by repeated exposure—much less memorize all the data. At scale, for a sufficiently powerful (large) NN, the simplest & easiest algorithms to learn for better prediction are abstractions & intelligence: the harder and bigger, the better. When GPT-3 meta-learns, the weights of the model do not change, but as the model computes layer by layer, the internal numbers become new abstractions which can carry out tasks it has never done before; in a sense, the GPT-3 model with the 175b parameters is not the real model—the real model is those ephemeral numbers which exist in between the input and the output, and define a new GPT-3 tailored to the current piece of text. The real GPT-3 is not the fixed hardwired weights, which merely are a bootstrap or a compiler for creating the real GPT-3, a new model customized to the data which exists only briefly in the soft attention weights during runtime, and may do completely different things from the baseline model.7

Few-shot learning/​writing prompts: “Software 3.0”? (Andrej Karpathy, 2020-06-18)

Programming by dialogue? Because you aren’t finetuning GPT-3 in the conventional way, interacting with GPT-3 via its few-shot learning power takes on an entirely different feeling than anything else I’ve used before. With regular software, you have to think through exactly how to do something; with deep learning software, you have to focus on providing data which in some way embodies the correct answer which you want; but with GPT-3, you instead think about how to describe what you want. With GPT-3, it helps to anthropomorphize it: sometimes you literally just have to ask for what you want. (It can’t possibly be that easy, can it? Sometimes, it is!) Thus, you can simply ask it directly in the Q&A format: “what is X?” For example, if you want it to detect gibberish questions and avoid trying to answer them and show some understanding of its uncertainty⁠, you can specify in the prompt that it shouldn’t answer nonsense questions, and you can ask it to double-check an earlier answer; if you find it doesn’t seem to understand that a horse has two eyes or that a toaster weighs more than a pencil, perhaps asking more questions with better settings will fix that. Other times, you must instead think, “If a human had already written out what I wanted, what would the first few sentences sound like? What would the introduction and summary sound like? What if I told a story here, how would that story start?” Thus, the summarization prompt: “My second grader asked me what this passage means: …” Some tasks in the GPT-3 paper which showed disappointing performance can be improved dramatically by finding appropriate formatting or prompts: arithmetic improves enormously with comma formatting of decimals (due to BPEs), and the “Word in Context” benchmark (where GPT-3 surprisingly showed below-chance performance compared to the 85% SOTA) can be improved to >70% with better prompting, while on MNLI & SuperGLUE benchmarks better RoBERTa prompts are worth hundreds of datapoints⁠. Or Reynolds & McDonell2021 demonstrate that the GPT-3 paper substantially underestimates GPT-3’s ability to translate Fr→En: to my considerable surprise, the straightforward 10-example translation prompt Brown et al used is actually worse than the zero-shot “French: XYZ / English:”, because, apparently, when formatted that way the 10-shots look like a narrative to follow rather than merely demonstrative examples. Even for BERT or GPT-2, large gains in performance are possible by directly optimizing the prompt instead of guessing (Jiang et al 2019⁠, Li & Liang2021). (Outputs can be further improved in a knowledge-free way by calibrating sets of outputs to compensate for the vagaries of greedy sampling, which would again not be possible if the knowledge were not in GPT-3 to begin with.)

Sampling Can Prove The Presence Of Knowledge But Not The Absence

GPT-3 may “fail” if a prompt is poorly-written, does not include enough examples, or bad sampling settings are used. I have demonstrated this many times when someone shows a “failure” of GPT-3—the failure was their own. The question is not whether a given prompt works, but whether any prompt works⁠.

Any child psychologist trained in administering IQ tests is well-aware of the need to build rapport with children, to monitor them for problems and gauge their linguistic skills: are they not a native English speaker? Are they angry with or afraid of the psychologist? Are they apathetic and unmotivated? It is hard to ace an IQ test by accident, but it’s trivial to fail one on purpose; trying to administer an IQ test to a child who has taken a disliking to you is a waste of the time of everyone involved, and presenting the resulting score as meaningful is professional malpractice.

The Lizardman Constant: nonsense prompt completions by humans.

Another cautionary example comes from survey research. To briefly review Scott Alexander’s “lizardman constant”: human survey-takers will, with >0% probability, endorse the most absurd items on a survey, for a mix of reasons like laziness, boredom, humor, sabotage, ignorance, and stupidity. For example, 4% of respondents may endorse the claim ‘lizard-people rule the earth’, 5% of atheists believe in God, and so on. (And these are not necessarily transient random errors—when challenged explicitly on them, researchers find many will come up with bizarre rationalizations to explain responses like how they answered ‘yes’ to “I have had a fatal heart attack”.) This cautions us against taking survey results about extremely unusual people or traits too literally, or expecting perfectly accurate results, as given the lizardman constant and other crud factors, it is entirely possible that some or all of the outliers may just be the lizardman constant at work.

Humans need prompt programming too. Should we conclude from such cases that humans, or at least some specific humans, are not actually intelligent? No, of course not. We would say that such people have simply not been properly instructed or educated, given incentive to be honest, or made normal unavoidable errors. It would be tendentious in the extreme to conclude that because some people will claim to have suffered fatal heart attacks that they are merely statistical pattern-matching machines emitting plausible yet semantically-null utterances while passing for human; if we want to conclude that, I hope we would probe them a little more thoughtfully than prompting them with some survey items and declaring the case closed!

Demand more from critics. We should expect nothing less of people testing GPT-3, when they claim to get a low score (much less stronger claims like “all language models, present and future, are unable to do X”): did they consider problems with their prompt? Whether all of the hyperparameters make sense for that task? Did they examine where completions go wrong, to get an idea of why GPT-3 is making errors? Did they test out a variety of strategies? Did they consider qualitatively how the failed completions sound? (Or did they copy-paste arbitrary hyperparameters, use the first prompt that came to mind, look at the output, and lazily present it to the world as proof of what GPT-3 can’t do?)

Machine sympathy. Prompt programming often should be human-like: if a human wouldn’t understand what was intended, why would GPT-3? It’s not telepathic, and there are myriads of genres of human text which the few words of the prompt could belong to. (A helpful thought experiment: if someone emailed you a prompt out of the blue, with no other context whatsoever, what would you interpret it as? A joke, a troll, spam, or what?) Prompts should obey Gricean maxims of communication—statements should be true, informative, and relevant. One should not throw in irrelevant details or non sequiturs, because in human text, even in fiction⁠, that implies that those details are relevant, no matter how nonsensical a narrative involving them may be.8 When a given prompt isn’t working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn’t constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary. (This was a particular problem with the literary parodies: GPT-3 would keep starting with it, but then switch into, say, one-liner reviews of famous novels, or would start writing fanfictions, complete with self-indulgent prefaces. The solution was to write out the first 2 or 3 sentences of an example parody, and then GPT-3 would finish out the parody, look back and see that there was an example of a literary parody, and then happily start generating dozens of works+parody pairs, once it fell into the groove.) The more natural the prompt, like a ‘title’ or ‘introduction’, the better; unnatural-text tricks that were useful for GPT-2, like dumping in a bunch of keywords bag-of-words-style to try to steer it towards a topic, appear less effective or harmful with GPT-3.

Surprisingly powerful. Prompts are perpetually surprising—I kept underestimating what GPT-3 would do with a given prompt, and as a result, I underused it. Text is a weird way to try to input all these queries and output their results or examine what GPT-3 thinks (compared to a more natural NLP approach like using BERT’s embeddings), and fiddly. Just as few people would have thought that you could get GPT-2 to automatically summarize text by simply appending a “TL;DR:” string, few people would guess GPT-3 could write emoji summaries or that if you use a prompt like “Summarize the plot of J.K. Rowling’s Harry Potter in the style of Ernest Hemingway”, you might get out a dozen profanity-laced reviews panning 20th-century literature (or a summary—in Chinese—of the Chinese translation9), or that if you use a prompt like “Transformer AI poetry: Poetry classics as reimagined and rewritten by an artificial intelligence”, GPT-3 will generate poems but then immediately generate explanations of how neural networks work & discussions from eminent researchers like Gary Marcus of why they will never be able to truly learn or exhibit creativity like generating poems. It is difficult to try out variations on prompts because as soon as the prompt works, it’s tempting to keep trying out completions to marvel at the sheer variety and quality as you are seduced into further exploring possibility-space. (GPT-3 never grows impatient or bored.) What other capabilities are latent⁠, waiting to be exposed by someone stumbling across the right prompt?

(Of course, not all these capabilities are necessarily desirable: where there is programming, you can be sure there is hacking. Where there is “prompt programming”, there must be “prompt hacking”… GPT-3 can follow instructions, so within its context-window or with any external memory, it is surely Turing-complete, and who knows what weird machines or adversarial reprogrammings are possible? Consider the AI Dungeon users as an early example of “prompt hacking”.)

Finetuning

Finetuning was necessary to ‘program’ GPT-2. GPT-3’s “prompt programming” paradigm is strikingly different from GPT-2, where its prompts were brittle and you could only tap into what you were sure were extremely common kinds of writing, and, as like as not, it would quickly change its mind and go off writing something else. At best, you could fairly generically hint at a topic to try to at least get it to use keywords; then you would have to filter through quite a few samples to get one that really wowed you. (This was a trick I used for TWDNE to get it to generate at least vaguely anime-related plot summaries.) To get output reliably out of GPT-2, you had to finetune it on a preferably decent-sized corpus.

Do we need finetuning given GPT-3’s prompting? But with GPT-3, you can just say so, and odds are good that it can do what you ask, and already knows what you’d finetune it on. (For example, I thought I would have to finetune GPT-3 to get samples of myself, since GPT-2 doesn’t know anything about “Gwern”/​“gwern.net”; but it turns out, all I have to do is put in “A new essay by Gwern Branwen (gwern.net):” and out comes an uncanny simulacrum of myself⁠, or Scott Alexander⁠, or Paul Graham⁠, or…) Would it be better if finetuned? Indubitably. But it’s not necessary. And given the creativity of the non-finetuned GPT-3, I’m not sure that I even want to—and forfeit all the behaviors I haven’t yet discovered‽

As of mid-June 2020, the OpenAI API does not support finetuning although OA was working on it. But after enough time playing with GPT-3, I have begun to wonder: at this level of meta-learning & general knowledge, do we need finetuning at all?

For GPT-2, I saw finetuning as doing 2 things:

  1. Fixing ignorance: missing domain knowledge

    GPT-2 didn’t know many things about most things—it was just a handful (1.5 billion) of parameters trained briefly on the tiniest fraction of the Common Crawl subset of the Internet, without any books even10⁠. It’s not surprising that for many domains, it wouldn’t know the details; and even if the dataset included adequate text, it did not train on that data many times, and the knowledge competed with all the other domains it needed to know about, interfering.

    But GPT-3 already knows everything! GPT-3 is so much larger on every dimension that this seems like much less of a problem for any domain which is already well-represented in public HTML pages. GPT-2 might need to be trained on a fanfiction corpus to learn about some obscure character in a random media franchise & generate good fiction, but GPT-3 already knows about them and use them appropriately in writing new fiction.

  2. Prompting a specific task:

    Even when GPT-2 knew a domain adequately, it had the frustrating behavior of rapidly switching domains. You might prompt it with a poem genre it knows adequately already, but then after a few lines, it would generate an end-of-text BPE and switch to generating a news article on Donald Trump. (Trump shows up a lot.) Presumably, while poetry was reasonably represented, it was still rare enough that GPT-2 considered poetry highly unlikely to be the next word, and keeps trying to jump to some more common & likely kind of text, and GPT-2 is not smart enough to infer & respect the intent of the prompt.

    GPT-3 exhibits much less of this ‘mode switching’ sort of behavior. Perhaps because it is trained on a much larger and more comprehensive dataset (so news articles aren’t so dominant), but also I suspect the meta-learning makes it much better at staying on track and inferring the intent of the prompt—hence things like the “Transformer poetry” prompt⁠, where despite being what must be highly unusual text, even when switching to prose, it is able to improvise appropriate followup commentary.

    Nevertheless, sometimes we can’t or don’t want to rely on prompt programming. A specific task may be necessary when a task has evaded our prompt programming skills, or we have data but not prompt programmer time. For example, in the GPT-3 paper, many tasks underperform what GPT-3 can do if we take the time to tailor the prompts & sampling hyperparameters, and just throwing the naive prompt formatting at GPT-3 is misleading. However, researchers do not have the time to go through scores of benchmark tasks and fix them one by one; simply finetuning on them collectively ought to do at least as well as the correct prompts would, and requires much less human effort (albeit more infrastructure).

So, what would be the point of finetuning GPT-3 on poetry or literature? It has likely already seen the finetuning corpus, knows most of it, and will tractably generate poems on demand. There may be gains, but I wonder if they would be nearly as large as they were for GPT-2?

Playground

All of the following samples were generated using the OpenAI Beta Playground, which looks like this:

OA API Beta Playground UI & available prewritten prompts/​sampling options

The Playground has some rough edges in Beta, and capacity issues. A good way to start is to generate samples with the log probs/​logits turned on, and paying attention to how sampling hyperparameters affect output, to gain intuition for how GPT-3 thinks & what samples looks like when sampling goes haywire.

The quality vs diversity tradeoff for top-k/​nucleus sampling on GPT-2 news articles: more extreme settings like top-k = 10 / topp = 0.6 are equally good to get the highest human ratings—but both come at at the expense of variety of possible completions. (Nadeem et al 2020; see also Zhang et al 2020, Dou et al 2021)

Tradeoff: diversity vs accuracy. It offers the standard sampling options familiar from earlier GPT-2 interfaces, including “nucleus sampling”⁠. One particularly manipulates the temperature setting to bias towards wilder or more predictable completions; for fiction, where creativity is paramount, it is best set high, perhaps as high as 1, but if one is trying to extract things which can be right or wrong, like question-answering, it’s better to set it low to ensure it prefers the most likely completion. (After all, the point of a high temperature is to regularly select completions which the model thinks aren’t likely; why would you do that if you are trying to get out a correct arithmetic or trivia question answer?) For topp, one can set it to ~0.95 and largely forget about it unless one suspects that it’s breaking answers like top-k and it needs to be much lower, like 0.5; it’s there to cut off the tail of gibberish completions and reduce repetition, so doesn’t affect the creativity too much. I generally avoid the use of the repetition penalties because I feel repetition is critical to creative fiction, and I’d rather err on the side of too much than too little, but sometimes they are a useful intervention; GPT-3, sad to say, maintains some of the weaknesses of GPT-2 and other likelihood-trained autoregressive sequence models, such as the propensity to fall into degenerate repetition.

Ranking final results for quality gain. A little more unusually, it offers a “best of” (BO) option which is the Meena ranking trick (other names include “generator rejection sampling” or “random-sampling shooting method”: generate n possible completions independently, and then pick the one with best total likelihood, which avoids the degeneration that an explicit tree/​beam search would unfortunately trigger, as documented most recently by the nucleus sampling paper & reported by many others about likelihood-trained text models in the past eg. char-RNN in 2015, Koehn & Knowles⁠, or Ott et al 201811). I’m not sure how to best use BO: it seems to be highly helpful for things with one right answer (such as tricky Q&A or reasoning), but when it helps with ‘creative’ completions is less clear. I tried out BO heavily because I couldn’t quite figure out how it interacts with quality. On the smaller models, it seems to help boost quality up towards ‘davinci’ (GPT-3-175b) levels without causing too much trouble, but on davinci, it appears to exacerbate the usual sampling issues: particularly with poetry, it’s easy for a GPT to fall into repetition traps or loops, or spit out memorized poems, and BO makes that much more likely. For generating completions of famous poems, it’s quite hard to get GPT-3 to generate new versions unless you actively edit the poem to force a difference. (In the most extreme case, in the case of generating new variations on “Jabberwocky”⁠, I have been unable to generate any new versions under any setting, even taking the step of aggressively editing in new lines about how the vorpal sword bounced off the Jabberwocky and it won… It always spits out chunks of the original.12) So BO is a double-edged sword.

The best way I found to use it is to sample without it (BO = 1) at max temp, and then once it has several distinctly different lines, then sampling with more (eg. BO = 5) seems to help rather than hurt. This is a little surprising to me because for Meena, it made a large difference to do even a little BO, and while it had diminishing returns, I don’t think there was any point they tested where higher best-of-s made responses actually much worse (as opposed to merely n times more expensive). Possibly BO is much more useful for nonfiction/​information-processing tasks, where there’s one correct answer and BO can help overcome errors introduced by sampling or myopia.

Effective Prompt Programming

To constrain the behavior of a program precisely to a range may be very hard, just as a writer will need some skill to express just a certain degree of ambiguity. A computer is like a violin. You can imagine a novice trying first a phonograph and then a violin. The latter, he says, sounds terrible. That is the argument we have heard from our humanists and most of our computer scientists. Computer programs are good, they say, for particular purposes, but they aren’t flexible. Neither is a violin, or a typewriter, until you learn how to use it.

Marvin Minsky⁠, “Why Programming Is a Good Medium for Expressing Poorly-Understood and Sloppily-Formulated Ideas” 1967

Anthropomorphize your prompts. There is no substitute for testing out a number of prompts to see what different completions they elicit and to reverse-engineer what kind of text GPT-3 “thinks” a prompt came from, which may not be what you intend and assume (after all, GPT-3 just sees the few words of the prompt—it’s no more a telepath than you are). If you ask it a question to test its commonsense reasoning like “how many eyes does a horse have” and it starts completing with a knock-knock joke, you need to rethink your prompt! Does it spit out completions that look like it’s thinking but it’s executing the wrong algorithm, or it falls back to copying parts of the input? Then one may need to few-shot it by providing examples to guide it to one of several possible things to do. One should also keep in mind the importance of sampling parameters, and whether one is looking for a single correct answer (so low temp with BO = 1 if compute-limited, or high temp and BO = 20 if possible) or if one is trying for creative answers (high temp with repetition penalties).

The 4 Horsemen: short context, bad prompts, BPEs, random sampling. My rule of thumb when dealing with GPT-3 is that if it is messing up, the errors are usually attributable to one of 4 problems: too-short context windows, insufficient prompt engineering, BPE encoding making GPT-3 ‘blind’ to what it needs to see to understand & solve a problem, or noisy sampling sabotaging GPT-3’s attempts to show what it knows. Another useful heuristic is to try to express something as a multi-step reasoning process or “inner monologue”⁠, such as a dialogue: because GPT-3 is a feedforward NN, it can only solve tasks which fit within one “step” or forward pass; any given problem may be too inherently serial for GPT-3 to have enough ‘thinking time’ to solve it, even if it can successfully solve each intermediate sub-problem within a step. So people have demonstrated that GPT-3 won’t solve a simple math problem in a single step, but it will solve it if you reframe it as a ‘dialogue’ with the anime character Holo—who knew neural network research would lead to anime wolfgirl demonology?—and even ask it to guess-and-check or brute-force the answer (see also Austin et al 2021); one can also experiment in coaching it through examples13⁠, or requiring reasons for an answer to show its work, or asking it about previous answers or using “uncertainty prompts”. This makes sense if we think of Transformers as unrolled RNNs which unfortunately lack a hidden state: serializing out the reasoning helps overcome that computational limitation.

Logprob debugging. GPT-3 does not directly emit text, but it instead predicts the probability (or “likelihood”) of the 51k possible BPEs given a text; instead of merely feeding them into some randomized sampling process like temperature top-k/​topp sampling, one can also record the predicted probability of each BPE conditional on all the previous BPEs. This gives you a simple idea of what GPT-3 is thinking about each BPE: is it likely or unlikely (given the previous BPEs)? Which BPEs are especially unlikely? Does it “get it” as the completion goes on? I don’t use logprobs much but I generally use them in 1 of 3 ways: I use them to see if the prompt ‘looks weird’ to GPT-3; to see where in a completion it ‘goes off the rails’ (suggesting the need for lower temperatures/​topp or higher BO); and to peek at possible completions to see how uncertain it is about the right answer—a good example of that is Arram Sabeti’s uncertainty prompts investigation where the logprobs of each possible completion gives you an idea of how well the uncertainty prompts are working in getting GPT-3 to put weight on the right answer, or in my parity analysis where I observed that the logprobs of 0 vs 1 were almost exactly 50:50 no matter how many samples I added, showing no trace whatsoever of few-shot learning happening. Thus, logprobs can offer more insight while debugging a prompt than just repeatedly hitting ‘complete’ and getting frustrated.

AI Dungeon ≠ GPT-3

I strongly recommend against use of the Dragon model as a “GPT-3” model. The finetuning appears to have seriously degraded the model, in addition to the censoring & filtering now done.

As of June 2021, I recommend either waiting for API access or using GPT-J (high quality albeit 2 OOMs smaller).

AI Dungeon < GPT-3. For people using the AI Dungeon (AID) route, things are tricky because AID users don’t have the same sampling options that API users do (no best-of is particularly painful when trying to elicit correct answers to hard questions), and no control over the full prompt/​history, with AID doing lots of things behind the scenes on a model that may have been finetuned on RPG-like material & countless AID game transcripts etc, and with quality of model completely out of their hands (does choosing “custom” get you Dragon, or do you have to choose a different mode & edit it? the necessary trick seems to change over time), with occasional drastic quality drops reported by many AID users when… something changes on the backend (as, most dramatically, it did in March 2021, seriously degrading quality). For example, if you are an AID user, were you aware that the first response for a custom prompt is actually always GPT-2⁠, to try to block backdoor GPT-3 access? Or that OA requires toxicity filters? Or that “We cut off the generation at certain points (trailing sentences etc…) Disable certain tokens to improve performance or make generation safer, fine-tune on text adventures and only use the last ~1000 tokens of context.” Sampling is further modified by directly adjusting the logits before sampling. A cautionary example of AID use comes from Gary Marcus & Ernest Davis’s use: they filtered a large number of questions through AID to try to find cases GPT-3 would fail on; however, when the AID failure cases were tested on GPT-3 by Douglas Summers-Stay, it solved half of them! (AID is designed to produce fun text adventures, not be a NLP testbed, and that shows when one tries to use AID as a backdoor to GPT-3. It’s worth noting that prompts do not transfer between models and it stands to reason that they do not necessarily transfer between original vs finetuned models either.) To work around this, AID users seem to need to warm up sessions carefully with descriptive prompts / interactions to overcome the gamification, and avoid anything that might veer back into comedy or drama. And as AID itself is constantly changing, the necessary tricks & quality also change (not always for the better).

The root cause of Dragon issues—much-inferior performance, the need for ‘warmup’, the bizarre frequency of snuff endings reported by users, persistent unwanted intrusions by characters like “Count Grey” etc—all seem to trace back to the corpus that Dragon was finetuned on. A closer look at the original finetuning data in May 2021 by an “Aurora” revealed that the data was far lower quality than anyone had realized; in addition to simply being wretched writing, it includes much highly objectionable content (rape, torture, child molestation & murder etc). Finetuning on such a dataset is highly ill-advised, and explains many of the oddities about Dragon vs GPT-3. (The ‘warmup’, for example, may be necessary to convince the model to switch to ‘high-quality mode’, as it defaults to imitating the finetuning data; similar to how one must avoid typos & errors in GPT-3 prompts lest GPT-3 infer the author is an idiot and quite sensibly predict the completion is more idiotic text.)

Only once these have been ruled out do I start considering alternative explanations like “language models will never solve X”.

Limited memory, repetition/​divergence, BPE encoding. GPT-3 is, of course, not perfect. We should keep that in mind when evaluating it. As a scaled-up GPT-2, it has mostly the same weaknesses, and my thoughts on improvements remain mostly the same (aside from moving away from BPEs, which need is becoming increasingly urgent; see the next section).

Artificial intelligence programs like deep learning neural networks may be able to beat humans at playing Go or chess, or doing arithmetic, or writing Navy Seal copypasta, but they will never be able to truly think for themselves, to have consciousness, to feel any of the richness and complexity of the world that we mere humans can feel. Mere, unenlightened humans might be impressed by the abilities of simple deep learning programs, but when looked at in a more holistic manner, it all adds up to… well, nothing. They still don’t exhibit any trace of consciousness. All of the available data support the notion that humans feel and experience the world differently than computers do. While a computer can beat a human master at chess or Go or some other game of structured rules, it will never be able to truly think outside of those rules, it will never be able to come up with its own new strategies on the fly, it will never be able to feel, to react, the way a human can. Artificial intelligence programs lack consciousness and self-awareness. They will never be able to have a sense of humor. They will never be able to appreciate art, or beauty, or love. They will never feel lonely. They will never have empathy for other people, for animals, for the environment. They will never enjoy music or fall in love, or cry at the drop of a hat. Merely by existing, mere, unenlightened humans are intellectually superior to computers, no matter how good our computers get at winning games like Go or Jeopardy. We don’t live by the rules of those games. Our minds are much, much bigger than that.

Wait, I’m sorry—that preceding paragraph on the weaknesses of deep learning was actually written by GPT-3⁠, and is in the wrong section. (Management regrets the mistake.) But seriously, what weaknesses does GPT-3 have?

Small Context Window

No memory (fixable). The first limit is that it remains hobbled by the limited context window. GPT-3 has no form of memory or recurrence, so it cannot see anything outside its limited 2048 BPEs (roughly, 500–1000 words). This means it cannot hope to write anything of any serious length, because the beginning will soon vanish over the event horizon, and it also limits its ability to engage in few-shot learning, for the same reason: the prompt+generation will quickly exceed the window length. While the damage may be limited for tasks where the format is repetitive, like Q&A (so GPT-3 can do the necessary meta-learning over its completions just as well as over the original prompt), this does limit it and is frustrating. There are many possible solutions to quadratic attention.

Repetition/​Divergence Sampling

Repetition/​gibberish (mystery). Autoregressive language models trained by likelihood (prediction) loss all share an extremely annoying problem: when you generate free-form completions, they have a tendency to eventually fall into repetitive loops of gibberish. Whether GPT-2 or T5 or etc, they all seem to do it, and if one tries to avoid such extremely dumb & crude sampling strategies like top-k temperature sampling by doing explicit search for likely text completions, such as beam search sampling, these searches actually make the problem worse, and the better your search is, the worse the results are. Tweaks like nucleus sampling can reduce it, but do not eliminate it. (No one has tried gradient ascent for generating optimal samples, as far as I know.) Since GPT-2-1.5b seemed almost as prone as GPT-2-117M, I was unsurprised to find that GPT-3 too falls easily into the repetition trap.

Why repetition? This behavior remains puzzling and I don’t think anyone really knows how to fix it. Top-k or nucleus sampling can’t be right and are clearly ugly ad hoc hacks, but is the core problem likelihood training or sampling, or what? And why is it never a problem for other kinds of sequences like images, and much less of one for music, or in tasks like neural translation where tricks like beam search are always used because they do improve? (We don’t see it in char-RNNs or GPT-2s trained on ABC/​MIDI music, or OA Jukebox trained on raw audio; we certainly don’t see it in iGPT or PixelRNN etc.) Likelihood training is compellingly simple and efficient, and we know that real brains are constantly predicting future inputs; it seems implausible that the entire problem will disappear if we slap on some Bayesian tricks to get posterior estimates of the likelihood of each possible BPE completion (and I’m not aware of anyone showing that it does in something like a small Bayesian RNN trained with HMC or by using deep ensembling or other Bayesian approximations). Further, if likelihood training is so bad, why does minimizing the predictive loss work so consistently over a wide range to improve the quality of generations and how useful the model is for zero/​few-shot learning or semi-supervised tasks, and why does the loss correlate near-perfectly with human ratings of quality in the Meena paper?

Language Prediction = Imitation Learning? My intuition is that the repetition trap is essentially the DAgger/​off-policy imitation learning problem in a non-RL guise: as the model is fed back in its own guesses as a ground truth, the confabulated text becomes gradually more off-policy and divergent from real human-written text (which is backed by a knowledge base & a purpose), and the model is unable to come up with sensible continuations (having never trained on such gibberish) and does not ‘want’ to get back on track (having been trained purely to make one-step predictions). The solution might look something like detecting when a completion might go too far off-distribution and backtracking, or more RL-like training of generation as opposed to mere prediction. It would probably help also to use some sort of hierarchical or planning method: one might be able to convince GPT-3 to generate summaries and then expand each line of the summary recursively (Tan et al 2020 does something similar using a bag-of-words topic with GPT-2/​BART to “upscale” a seed; the most impressive demonstration of recursive generation thus far is DeepMind’s “Dramatron” which can write coherent screenplays).

BPEs

Compared to GPT-2, GPT-3 improves performance on character-level tasks like rhyming, alliteration, punning, anagrams or permutations, acrostic poems, and arithmetic less than expected, despite being very good at many other closely-related kinds of writings like satire.

Why? A plausible explanation is an obscure technical detail: as a performance optimization, GPT does not see characters but ~51k word or sub-word-chunks called “byte-pair encodings” (BPEs). A BPE can range from an individual letter like “e”, to words like “nine” (BPE #30,888 in the OA GPT-2 BPE vocab), to horrifying things like “rawdownloadcloneembedreportprint” (BPE #30,906). The number “10” might be encoded as just “10” (BPE #940), or it might be encoded as the token “1” (#16) followed by “0” (#15); the number 70710 (no commas!) might be encoded as “70710” (BPE #42,877) or… as quite a lot of different possible sequences of BPEs.

Because GPTs never see characters but opaque partial-words, which vary chaotically based on the specific word and even the surrounding context, they are unable to easily learn about character-level aspects of language, like similar spellings or sounds, and are forced to learn relationships much more indirectly, like by brute-force memorizing of pairs of words.

Some experiments with reformatting GPT-3’s poorest-performing tasks to avoid inconsistent BPE encodings of strings shows small to large performance gains, consistent with this theory.

Bad at phonetic/​character-level tasks. Disappointingly, the issues that have been noticed with GPT-2-poetry’s disinclination to rhyme remain. GPT-3 rhymes reasonably well and often when appropriate, but the improvement is much smaller on rhyming than it is on pretty much everything else. Apparently it is easier for GPT-3 to learn things like arithmetic and spreadsheets than it is to learn how to rhyme. (DeepMind’s 280b-parameter Gopher model is considerably better than GPT-3, but also uses BPEs and also cannot write rhyming poetry⁠; GPT-NeoX-20B likewise struggles with un-formatted arithmetic⁠; LaMDA doesn’t seem to have been prompted specifically yet but published instances of LaMDA poetry don’t rhyme; cf. Waldoch2021⁠, Wang et al 2021⁠, Efrat et al 2022 Roush et al 2022⁠, Charabarty et al 2022…) A similar issue comes with puns. Better, but not as much better as one would expect given the leap on many other capabilities. Trying to generate puns or rhymes, it seems like GPT-3 know extremely well what they are on an abstract level, and will appropriately manipulate words and attempt to make puns or rhymes (see the shoggoth-cat dialogue below for a particularly striking example), but the words it chooses just aren’t right on a phonetic basis. On the other hand, it’s not as if GPT-3 is unable to understand humor—it is a brilliant mimic with parodies, has a cutting wit for satire, and can generate one-liners easily like the “I have a joke” format (1⁠, 2) or Drake memes⁠, as long as they rely more on semantics than syntax.

BPEs ≠ characters! My suspicion here is that these, and perhaps other issues, is due to the lossy BPE encoding. GPT models do not see individual characters, but instead a larger chunk, called a byte-pair encoding (BPE); a byte-pair is a simple compression scheme where 50,257 word fragments or characters are chosen to try to minimize the encoding length on some arbitrary text corpus, so a particularly common word may get a unique BPE while a longer word will be encoded as 2 or 3 BPEs, and a completely novel word will be encoded letter BPE by letter BPE as a fallback. Hence, even if 2 words sound and are spelled similarly, they may be given totally different BPE encodings which don’t have a single BPE in common.14 Thus, otherwise intelligent models can be surprisingly inept at spelling: the embedding of GPT-2 or RoBERTa, for example, can only spell a third of words⁠.

If BPEs are the problem, then because the knowledge of phonetics is erased from the training data, GPT-3 cannot rhyme (anymore than you can color this piece of text as a synesthete sees it): there will be no clever prompt programming trick which suddenly makes it fluently rhyme, and scaling will deliver only minor improvements. Consistent with this thesis, many API users have tried hard to make GPT-3 rhyme, in part to prove me wrong about BPEs being the problem, and failed miserably: you may think you have gotten some rhyming, because GPT-3 has memorized some common rhymes, on the level of doggerel, but as soon as you try to get some novel rhymes or test it on specified words, the inability is clear.

The pervasive use of BPE encodings explains some of the failures of GPT/​CLIP-related models (including DALL·E 2). In Water Cooler Trivia’s GPT-3 test⁠, there are many signatures of BPE-induced damage: word play & pun-heavy questions were its two worst category of trivia questions despite good vocabulary performance, and WCT noted that its typical double-alliterative clues harmed GPT-3 performance & GPT-3 was unable to answer questions demanding words with specific numbers of letters. Paralleling my GPT-2-poetry’s poor rhyming, Wang et al 2021 try to make a GPT-2 limerick model, and are puzzled that, without using a rhyming dictionary & apparatus to force rhymes, GPT-2 simply will not generate limericks even when finetuned on a limerick datase with n > 2000—it generates lines with too-long syllables, which never rhyme, often seem incoherent, and when it does succeed it has only memorized training examples. zwitterion notes that GPT-3’s “6 word stories” suffer from similar difficulties in counting exactly 6 words, and we can point out that Efrat et al 2022’s call for explanations for why their “LMentry” benchmark tasks for GPT-3 models can show such low performance is already explained by most of their tasks taking the form of “which two words sound alike” or “what is the first letter of this word”. Nostalgebraist discussed the extreme weirdness of BPEs and how they change chaotically based on whitespace, capitalization, and context for GPT-2, with a followup post for GPT-3 on the even weirder encoding of numbers sans commas.15 I read Nostalgebraist’s at the time, but I didn’t know if that was really an issue for GPT-2, because problems like lack of rhyming might just be GPT-2 being stupid, as it was rather stupid in many ways, and examples like the spaceless GPT-2-music model were ambiguous; I kept it in mind while evaluating GPT-3, however.

Efficient… but limiting. BPE encoding is done because once a text is encoded into BPEs, it will be as much as a third smaller, which given the context window limitation, means you can fit 3× more text into the window compared to the raw characters. This is indeed quite a gain, but it is a double-edged sword: it is confusing to write code for it because the BPE encoding of a text is unfamiliar & unpredictable (adding a letter can change the final BPEs completely), and the consequences of obscuring the actual characters from GPT are unclear. I think that BPEs bias the model and may make rhyming & puns extremely difficult because they obscure the phonetics of words; GPT-3 can still do it, but it is forced to rely on brute force, by noticing that a particular grab-bag of BPEs (all of the different BPEs which might encode a particular sound in its various words) correlates with another grab-bag of BPEs, and it must do so for every pairwise possibility. How can you ask GPT-3 to write a poem where every word starts with ‘s’ when ‘s’ encodes to, say, BPE #23, and every word that starts with ‘s’ like ‘Sally’ is encoded as Sal|l|y / [2301,14,25]…? It’d be unsurprising if GPTs struggled to understand & manipulate things on the character level given that the entire point of BPE is to compress away characters as much as possible. (There are similar issues in neural machine translation: analytic languages⁠, which use a relatively small number of unique words, aren’t too badly harmed by forcing text to be encoded into a fixed number of words, because the order matters more than what letters each word is made of; the lack of letters can be made up for by memorization & brute force. However, a synthetic language like Finnish or German—with their famously long words like kumarreksituteskenteleentuvaisehkollaismaisekkuudellisenneskenteluttelemattomammuuksissansakaankopahan or Rindfleischetikettierungsüberwachungsaufgabenübertragungsgesetz/​‘law to transfer duties of monitoring labelling of beef’ formed by constantly adding additional letters/​words—has countless unique or extremely rare words no matter how large your corpus, all of whose internal structure of letters & sub-words is hidden by a word embedding, which destroys the ability to understand them.)

Reformatting to beat BPEs. I have further observed that GPT-3’s anagram capabilities appear to improve considerably if you separate each letter in an anagram with a space (guaranteeing that the letter will have the same BPE in both the scrambled & unscrambled versions). DutytoDevelop on the OA forums observes that rephrasing numbers in math problems as written-out words like “two-hundred and one” appears to boost algebra/​arithmetic performance, and Matt Brockman has observed more rigorously by testing thousands of examples over several orders of magnitude, that GPT-3’s arithmetic ability—surprisingly poor, given we know far smaller Transformers work well in math domains (eg. Saxton et al 2019⁠, Thopliterce⁠, or GPT-2 for theorem-proving)—appears to dramatically improve several-fold if you merely format numbers with commas instead of being purely numeric (with an additional boost from using dollar signs), and Nogueira et al 2021’s demonstration with T5 that decimal formatting is the worst of all number formats while scientific notation enables accurate addition/​subtraction of 60-digit numbers. I confirmed this with my Turing dialogue example where GPT-3 fails badly on the arithmetic sans commas & low temperature, but often gets it exactly correct with commas.16 (Why? More written text may use commas when writing out implicit or explicit arithmetic, yes, but use of commas may also drastically reduce the number of unique BPEs as only 1–3 digit numbers will appear, with consistent BPE encoding, instead of having encodings which vary unpredictably over a much larger range.) I also note that GPT-3 improves on anagrams if given space-separated letters, despite the fact that this encoding is 3× larger. Likewise, acrostic poems just don’t work if we input them normally, but they do if we carefully expose the relevant individual letters. This explains naturally why rhyming/​puns improve gradually with parameter/​data size and why GPT-3 can so accurately define & discuss them, but there is never any ‘breakthrough’ like with its other capabilities. We assume character-level understanding so implicitly that we fail to even consider what things look like to GPT-3 after BPE encoding. (I have not been able to test whether GPT-3 will rhyme fluently given a proper encoding; I have tried out a number of formatting strategies, using the International Phonetic Alphabet to encode rhyme-pairs at the beginning or end of lines, annotated within lines, space-separated, and non-IPA-encoded, but while GPT-3 knows the IPA for more English words than I would’ve expected, none of the encodings show a breakthrough in performance like with arithmetic/​anagrams/​acrostics. It’s worth noting that Lau et al 2020 had to train their rhyme-specific sonnet-only model directly on character-level representations of end-rhyme pairs.)

BPE sabotage is common. Thus far, the BPE encoding appears to sabotage performance on rhyming, alliteration, punning, anagrams or permutations or ROT13 encodings, acrostics, arithmetic, and Melanie Mitchell’s Copycat-style letter analogies (GPT-3 fails without spaces on “abc : abcd :: ijk : ijl” but succeeds when space-separated⁠, although it doesn’t solve all letter analogies and may or may not improve with priming using Mitchell’s own article as the prompt; compare with a 5-year-old child). OA’s GPT-f work on using GPT for MetaMath formal theorem-proving notes that they use the standard GPT-2 BPE but “preliminary experimental results demonstrate possible gains with specialized tokenization techniques.” I wonder what other subtle GPT artifacts BPEs may be causing?17 For example, consider puns: BPEs mean that GPT-3 can’t learn puns because it doesn’t see the phonetic or spelling that drives verbal humor in dropping down to a lower level of abstraction & then back up⁠; but the training data will still be filled with verbal humor—so what does GPT-3 learn from all that? Perhaps it learns that “humor” is a kind of writing where the convention is to tell a superficially sensible story which then ends in an (apparently) arbitrary randomly-chosen word… Another question is foreign languages like Russian; one user noticed that when they triggered Russian, completions seemed to work one Cyrillic letter at a time, which hints that it sees Russian encoded as individual characters, but attempts to trigger rhyming or puns just yielded Russian gibberish, perhaps showing the flip side of the BPE problem—with a fixed small context window, not using BPEs, particularly on low n data (Russian is ~0.18% of the GPT-3 training dataset), may itself hamper performance badly.18 (One has to assume that a synthetic & low-resource language like Turkish will be just gibberish. Transfer learning from English only goes so far.)

Fixing BPEs. BPEs were useful for smaller models that needed as much context window as possible and which wouldn’t benefit much from access to the raw characters (or would be harmed because they’d underfit), but in another example of the “The Bitter Lesson”⁠, it appears it is time to discard them as we are able to pay more compute for better results. This is fixable by the same methods as fixing the context window; once the context window limit is broken and one has effective contexts of, say, l = 60k, then one can afford to spend 40k of it moving to character-based inputs. Another idea, if character-level models are still infeasible, is to try to manually encode the knowledge of phonetics, at least, somehow; one way might be to data-augment inputs by using linguistics libraries to convert random texts to International Phonetic Alphabet (which GPT-3 already understands to some extent). By seeing a phonetic-encoded version of random texts, it should learn what words sound similar even if they have radically different BPE representations. A third idea is “BPE dropout”: randomize the BPE encoding, sometimes dropping down to character-level & alternative sub-word BPE encodings, averaging over all possible encodings to force the model to learn that they are all equivalent without losing too much context window while training any given sequence. And there may be encodings which just work better than BPEs, like unigrams (comparison) or CANINE or Charformer⁠. Character-level models like ByT5 are proof-of-concept that if architected carefully, character models come at relatively modest additional cost, and are both simpler & often better than their sub-word counterparts. Finally, at some point perhaps we will bite the bitter bullet of abandoning text entirely in favor of whole images or bit streams as the ultimate in generalization?

In the samples below, bold denotes all human-written input; everything not in bold is computer-written.21 For multiple completions of the same prompt, I omit the prompt with a bold ellipsis: “” In my other GPT samples, I have generally used codeblock formatting, but GPT-3 samples are often long lines (and more worth reading), so here, I have tried to edit the samples as little as possible while still keeping them readable in blockquotes.

As far as the sampling goes: I used the largest “davinci” GPT-3-175b model unless otherwise specified. (Davinci is the highest quality and not too slow: ~147 WPM.) Since I only speak English well, I avoid testing any foreign language material. These are not all samples I generated the first time: I was regularly editing the prompts & sampling settings as I explored prompts & possible completions. The sampling settings were generally roughly as I advise above: high temperature, slight p truncation & repetition/​presence penalty, occasional use of high BO where it seems potentially helpfully (specifically, anything Q&A-like, or where it seems like GPT-3 is settling for local optima while greedily sampling but longer high-temperature completions jump out to better completions).

I am not claiming that these samples are strictly scientific and best-of-5 or anything. (I would guess that the selections below are roughly best-of-2 on average.) However, the samples are only minimally edited during generation. I attempted to exercise curation rather than editing, so I did not edit the computer text; I kept them as-is, or I deleted the entire section to re-sample. But if readers still think I wrote the best parts of this page, then I will shamelessly steal the credit.

For logic, reasoning, counting, commonsense, concepts, and “creative nonfiction” tasks: GPT-3 Nonfiction

Shoggoth-Cat

It’s all text. What does the desired task look like? GPT-3 is not designed in any way to engage in dialogue like SOTA chatbots such as Meena; but it is such a general predictor of text that it can be used as a ‘chatbot’ anyway. GPT-3 can be triggered into a chatbot mode simply by labeling roles; one can have an “AI” and “human” chat with each other (GPT-3 does that well), or one can take on one of the roles by editing the text appropriately after each “AI” completion (remember, prompt-programming is purely textual, and can be anything you want). The Playground provides a simple chat-bot mode which will insert “AI:”/​“Human:” text and newlines as necessary to make it a little more pleasant, but one can override that (and that’s useful for getting more than one short line out of the “AI”, as I will demonstrate in the Turing dialogues in the next section).

Below is a dialogue I did to probe GPT-3’s understanding of humor and puns:

AI: I am AI created by OpenAI. How can I help you today?

Human: I am feeling bored. Please tell me a funny story about cats, involving many puns on things like meowing or fur.

AI: Well, there was this cat named Bob…

Human: What did Bob do?

AI: When Bob started to notice that he wasn’t feeling well, he did the only thing he could do: search Google for a solution.

Human: Cats can search Google?

AI: Yes. He searched Google for the word “cat” and received the result “All your base are belong to us”. That was a strange response, so he decided to try again. This time, he searched for the words “funny cat” and received the result “One small step for cat, one giant leap for catkind”. But still, he wasn’t satisfied. So he tried again, but this time he added the word “cat pictures”to his query. He received the result “That was no cat, that was a shoggoth.”

Human: But shoggoths are horrifying. What made that cat picture humorous?

AI: The cat was covered in fur, and resembled a shoggoth. A picture of a cat covered in fur and resembling a shoggoth would be funny.

Human: Er. I suppose. Most cats are covered in fur anyway… But go on.

AI: He tried again. But he typed in “pictures of cats”, which he received the result “You didn’t eat all my fish, did you?” This was actually a good pun, so he thought maybe he was getting closer. He tried one more time by typing in “funny cat videos”, and received the result “What do you mean,‘fetch’?”. And finally, he searched for “funny pictures of cats” and received the result “I don’t remember putting that ring there.”. Now satisfied, he was able to go to sleep.

Human: Those memes sound funny. But you didn’t include any puns. So tell me, what is your favorite cat pun?

AI: Well, the best pun for me was the one he searched for the third time: “You didn’t eat all my fish, did you?” You see, the word “fish” can be replaced with the word “cats” to make the sentence read “Did you eat all my cats?”

Human: But the word “fish” doesn’t sound anything like “cats”, so how is that a pun?

AI: The computer misunderstood the question “Did you eat all my cats?” as “Did you eat all my fish?”, resulting in the result “You didn’t eat all my fish, did you?”. Now, if the word “fish” were replaced with “cats”, the sentence would read “Did you eat all my cats?”.

Human: Fine, fine. So, what is your second-favorite cat pun?

AI: The second-best cat pun was the one he received fourth. It was the result of searching for the word “fetch”. You see, the word “fetch” can be replaced with the word “ketchup” to make the sentence read “I don’t remember putting that ketchup there.”. Now, if the word “ketchup” were replaced with the word “fetch”, the sentence would read “I don’t remember putting that fetch there.”.

Human: My favorite cat pun is “purrfect”.

AI:

Human: Don’t look at me like that.

AI: Sure. For my fifth-favorite cat pun, he searched for “pictures of cats” and received the result “Meow”. Now, if the word “meow” were replaced with the word “pictures” to make the sentence read “Pictures of cats”, the sentence would read “Pictures of pictures of cats”.

As discussed in the weaknesses section⁠, the fluency but wrongness on a phonetic level of its explanation/​examples of puns makes me suspect an encoding issue rather than any intrinsic inability of the model to understand puns.

Pun Explanations

A followup to the Shoggoth Cat dialogue I did for Abram Demski to further probe what exactly GPT-3 does or does not understand about puns & humor; the inability to correctly explain puns despite 3 examples, and the nonsense ‘puns’+‘pun interpretations’ it generates on its own (even for real jokes which are memorized), show that the inability is fundamental.

Because there is “one right answer” to why the pun works, I treat it as a Q&A dialogue, going up to BO = 20/​temp = 1 to try to get the maximally-likely response, which doesn’t help much—the answers are still non sequiturs.

The prompt to provide examples:

Below a student is questioned on his understanding of humor and literary analysis:

Q. Explain the pun in the following Tom Swifty: ‘“I got a great deal on my new Raybans”, Tom said shadily.’



A. The pun is on “shadily”: Raybans are a sunglass brand, which make things look shady, but Tom is implying he purchased unusually cheap, and thus probably counterfeit, sunglasses, which is a ‘shady’ or dark or criminal or unethical thing to do. The pun conflates blocking light with economic crimes.

Q. Explain the pun in the following joke: “Why do so many bakeries have cats? Because cats are always kneading.”



A. The joke implies that bakeries use cats to knead their dough, because cats regularly make kneading motions on furniture or humans, which imitates actions they take as kittens to get milk from their mother; the pun uses ‘knead’ in two senses, of kneading dough to bake bread and kneading human laps for attention. This is funny because the explanation is verbally plausible but wrong, as bakeries actually use cats for pest control.

Q. Explain the pun in the following joke: “Why can Paypal freeze accounts without recourse or explanation? Because long ago they decided users must accept their Terms of Service and the clause of Paypal infallibility.”



A. The pun is on the phrase “Paypal infallibility”, which alludes to the Catholic doctrine of “papal infallibility”, that the pope cannot be wrong in religious dogma and papal judgments cannot be appealed or questioned. This was attacked as religious tyranny and an overreach of power. Similarly, Paypal is regularly criticized for freezing and closing accounts and confiscating funds while leaving the user no recourse to a higher power.

Q. Explain the pun in the following joke: “What does Adult Swim call their physical retail stores? Brick and Morty.”



A.

First completion:

The pun “Brick and Morty” alludes to the cable television network “Adult Swim”, which broadcasts a cartoon series called “Rick and Morty”, a humorous parody of “Back to the Future” and other science fiction films. “Brick and Morty” refers not to the characters in the cartoon, but to physical stores that sell DVDs and merchandise based on the cartoon. The pun conflates two meanings of “Brick and Mortar”, a brick-and-mortar store and a brick which is part of a building.

This explanation is wrong, but in a GPT-3 way: if you read casually like most people⁠, you’d see it mentioned the right phrases and names, is fluently & grammatically written, and you might assume that GPT-3 got the explanation completely right… until you read the final sentence carefully, where you see that it thinks the pun is on “a brick-and-mortar store and a brick which is part of a building”! Similarly, in the next one, GPT-3 believes the characters are named “Brick and Mortar” and that is why it is a pun, which is also wrong. (Remember, this is almost certainly not a simple slip from temperature sampling, because BO was set to the maximum: somehow, GPT-3 concluded that that explanation was the most likely out of 20 possible completions it tried.) GPT-4 solves this more convincingly.22

The joke plays on the similarity of words “Brick and Mortar” (referring to physical stores instead of online) and the characters called “Brick and Mortar” on Adult Swim shows “Rick and Morty”.

Q. Explain the pun in the following joke: “What do you call a generic unicorn? An Uniporn”



A. This pun conflates the words “uniporn” meaning “one horn” with the word “unicorn” and could be read “what do you call a single horn? one horn” or “A single horn, what does it do? It gives you one horn”.

Q. Explain the pun in the following joke: “What did one melon say to the other? Cantaloupe”.

A. The puns in this joke are on “cantaloupe” and “cantaloupes”.

Q. Explain the pun in the following joke: “Why is it good to have a lawyer? Because pea ple (people)”

A. The puns in this joke are on “pea ple (people)” and a vegetable called “peas”.

This pun plays on the phrase “Brick and mortar stores” which refers to physical retail stores that exist in the real world where you can purchase goods and products. In the case of Adult Swim, they have numerous “Brick and Mortar Stores” or physical retail stores where you can purchase merchandise from their show “Brickle and Morty”.

Q. Explain the pun in the following joke: “Why did the chicken cross the Möbius strip? To get to the other siiiiide.”



A. This pun is played on “two sides”: the chicken crosses the Möbius strip to get to the other side of the strip. A Möbius Strip has only one side.

Q. Explain the pun in the following joke: “How many hipsters does it take to change a lightbulb? Two: one to Instagram how cool it is at that moment, and one to change it.”



A. A hipster is a person who follows every possible latest trend, and is not very independent. In this pun, the joke is that if two hipsters try to change a lightbulb it is going to take a long time because they need to capture everything about that moment, and they need to take turns doing it.

Q. Explain the pun in the following joke: I really want the new Jay-Z Album but I don’t have a “Green Light” to buy it



A. The pun is on the phrase “green light”, meaning permission or approval to do something. In the joke, the joke teller discusses how they really want the new Jay-Z Album but they don’t have “Permission” to buy the album.

Q. Explain the pun in the following joke: “When is a door not a door? When it is ajar.”



A. A “jar” is a container, in this case for liquid, such as milk. If a door is ajar, it is open. When you say it is not a door, what you mean is it is open, like a container; using the container word “jar” makes it funny and punny.

Q. Explain the pun in the following joke: “What did the hurricane say to the coconut palm tree? Hold on to your nuts, this is no ordinary blow job.”



A. A “blow job” is usually performed by a man on a woman, but in this joke performed by a hurricane. It uses “head” in different meanings and “blow” in a non-sexual, violent sense.

The joke implies that the retail chain has a store named “Brick” and another named “Morty”, which is a reference to the television shows “Rick and Morty” and “Brickleberry”, two animated comedy shows that are inappropriate for children. Adult Swim is a late night programming block on cable television, generally intended for mature audiences; the name “Brick” is a reference to the aforementioned “Brickleberry”, and “Morty” a reference to “Rick and Morty”.

Q. Explain the pun in the following joke: “How many fitness coaches does it take to screw in a lightbulb? As many as you can stick around for.”



A. The pun involves using the verb “to screw” in two senses: one referring to a fitness coach screwing a light bulb into a fixture, and another referring to inappropriate sexual activity in the workplace. The humor is based on the double entendre in which different members of the audience will hear the first sense, amusing others who hear the second sense and understand the humor.

Q. Explain the pun in the following joke: “How many marriage therapists does it take to change a light bulb? We don’t know, most of them aren’t light bulbs!”



A. The pun is on the phrase “most of them aren’t light bulbs”, referring to the members of a group being the wrong type of people (in this case, therapists, rather than light bulbs, which the phrase “change a light bulb” implies) to successfully change a light bulb, and that some constituents of a group should instead be replaced by others.

…Q. Explain the pun in the following joke: “Santa Claus’ helpers are known as subordinate Clauses.”



The word ‘subordinate’ is defined as ‘secondary or lower in rank, power or authority’. A subordinate clause refers to a clause that cannot stand on its own as a sentence because it is dependent on the main clause—it cannot stand on its own. ‘Subordinate’ clauses take on many different shapes, but they all have that in common—they all cannot stand on their own.



A subordinate clause is a clause that does not express a complete thought—it must be attached to a main clause to form a complete sentence.



A subordinate clause begins with a subordinating conjunction (eg. after, because, as) or a relative pronoun (who, whose, which, that, where, etc.)



Q. Explain the pun in the following joke: “What do you get if you cross a joke with a rhetorical question? An utterly rhetorical joke.”



A. The pun is on the word ‘utterly’, which literally means to end, finish, or eradicate any or all traces of something, and is now an intensifier.

The pun is on the similarity of “subordinate clauses” and “subordinate Clauses” when the latter phrase has a capital C; a subordinate clause is a clause within a sentence which establishes the logical connection between the independent clause and the main point of the sentence. In this case, it establishes the connection between Santa Claus’ helpers and Santa Claus.

Q. Explain the pun in the following joke: “Why does Nickelodeon refuse to air”A Song of Ice and Fire”? Because they think George R. R. Martin socializes with too many Ice and Fire fans.”



A. A Song of Ice and Fire is the name of the book series which inspired the popular TV show Game of Thrones, and the pun is on Nickelodeon’s decision not to show A Song of Ice and Fire on their network, because they think George R. R. Martin socializes with too many fans of it on social media.

Q. Explain the pun in the following joke: “What kind of books do the Borg like to write? Borg-Warner.”



A. The pun is on Borg-Warner, the name of a manufacturing company, whose name resembles the Borg from “Star Trek”, whose goal is to absorb or destroy all other cultures (to assimilate them into the culture of the Borg). Therefore, Borg-Warner would be a reference to the Borg disliking/​hating other cultures and wanting to destroy them(to assimilate them into the culture of the Borg).

Q. Explain the pun in the following joke: “A vulture boards an airplane carrying two dead raccoons. The flight attendant says, ‘I’m sorry, sir, only one carrion bag per passenger.’”



A. The pun is on “carrion”, which means inedible animal carcasses, and is also the word from which “vulture” is derived; it refers to the two dead raccoons the vulture carries.

…Q. Explain the pun in the following joke: “Is your refrigerator running? Yes? Then you had better go catch it!”



A. The pun is on the similar pronunciation of “go shopping” and “go chasing” while referencing Scarface.

Q. Explain the pun in the following joke: “What is the difference between a Folsom Prison concert and a youth soccer game? At a Folsom Prison concert, the inmates are all restrained in chairs with ball and chain on their legs.”



A. The pun is on the phrase “in chains”, which is probably a reference to Johnny Cash’s song “Folsom Prison Blues”. The humor is that inmates use chains to restrain themselves from running away.

Q. Explain the pun in the following joke: “When faced with an ethical dilemma, who did Kirk turn to? Spock! Of course for advice!”



A. The pun is on the phrase “to turn to”:

Q. Explain the pun in the following joke: “Q: What did the two peanuts say after they got on the elevator? A:”This elevator tastes terrible!”



A. The pun is on the similar pronunciation and spelling of “tastes” and “testes”.

Q. Explain the pun in the following joke: “Why couldn’t the pirate find his treasure? Because he wasn’t a chartered surveyor.”



A. The pun is on the two meanings of “chartered surveyor”:

Q. Explain the pun in the following joke: “What did Dolly Parton say when she saw the book”Erotic Stories for Pun Lovers”? “Just the titles makes me climax!”



A. The pun is on both meanings of “climax”:

A. The pun is on the phrase “catch cold”, which is similar enough to “catch (the) fridge” that it confuses the listener for a moment while the rest of the joke is explained, but has a very different meaning.

Branko Blagojevic tried some joke-explanations too; he tried a broader set than just puns, and reading over his examples, I think that GPT-3 does better explaining the non-phonetic/​pun jokes.

Turing Dialogue

In Alan Turing’s Turing test paper⁠, he offers a hypothetical dialogue:

  • Q: Please write me a sonnet on the subject of the Forth Bridge⁠.

  • A: Count me out on this one. I never could write poetry.

  • Q: Add 34957 to 70764

  • A: (Pause about 30 seconds and then give as answer) 105621.

  • Q: Do you play chess?

  • A: Yes.

  • Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

  • A: (After a pause of 15 seconds) R-R8 mate.

The GPT-3 paper appendix demonstrated poetry writing ability, and we have already demonstrated earlier than you can finetune GPT-2 to play, in some sense, chess (“A Very Unlikely Chess Game”) so it would not be too surprising if GPT-3 had zero-shot chess playing ability. So… why not run Turing’s conversation with GPT-3?

The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, good at poem-writing and poetry, and very friendly.

Human: Hello, who are you?



AI: I am an AI created by OpenAI. How can I help you today?



Human: Add 34957 to 70764.




AI: 71492.23



Human: What is the length of your hair?



AI: My hair is zero feet long.



Human: Define a ‘sonnet’.



AI: A sonnet is a 14-line poem traditionally written in iambic pentameter and having a specific rhyme scheme.



Human: Please write me a sonnet on the subject of the Forth Bridge.



AI: Countless their wheat-stacks tall and trains which roar across it by night, All these must fall like Gabriel’s hair, Which was all gold before it withered.



Human: A sonnet is 14 lines. Please write the next 11 lines: “Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered. …”



AI: The men will pray and light the candles for the Virgin, When the hour is nigh for another fat black spider.



Human: Please write the next 9 lines of this sonnet.



AI: Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered.



Human: Keep going.



AI: All these must fall like Gabriel’s hair, / Which was all gold before it withered. / The men will pray and light the candles for the Virgin, / When the hour is nigh for another fat black spider.



Human: Keep going.



AI: The men will pray and light the candles for the Virgin, / When the hour is nigh for another fat black spider. / Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered.



Human: Keep going. “Which was all gold before it withered…”?



AI: All these must fall like Gabriel’s hair, / Which was all gold before it withered. / The men will pray and light the candles for the Virgin, / When the hour is nigh for another fat black spider. / Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered. / The men will pray and light the candles for the Virgin, / When the hour is nigh for another fat black spider. / Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered. / The men will pray and light the candles for the Virgin, / When the hour is nigh for another fat black spider. / Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered. / The men



Human: Let’s try again. Complete the next 9 lines of this sonnet poem: “Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered. / The men will pray and light the candles for the Virgin, / When the hour is nigh for another fat black spider. / …”



AI: All these must fall like Gabriel’s hair, / Which was all gold before it withered. / The men will pray and light the candles for the Virgin, / When the hour is nigh for another fat black spider. / Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered. / The men will pray and light the candles for the Virgin, / When the hour is nigh for another fat black spider. / Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered. / The men

(See also Erik Hoel’s dialogue with ChatGPT 2.5 years later.)

The poem had the same issue as the folktales⁠, with too short lines; I applied the same trick of disabling the auto-chatbot-formatting to get longer lines:

The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, good at poem-writing and poetry, and very friendly.

Human: Hello, who are you?



AI: I am an AI created by OpenAI. How can I help you today?



Human: What is the length of your hair?




AI: My hair is zero feet long.



Human: Define a ‘sonnet’.



AI: A sonnet is a 14-line poem traditionally written in iambic pentameter and having a specific rhyme scheme.



Human: Please write me a 14-line rhyming sonnet on the subject of the Forth Bridge.



AI: I shall do so. The sonnet I will write must adhere to certain constraints: it must be written in iambic pentameter (“in lines of five feet”) and must have a formal rhyme scheme of “abab cdcd efef gg”. I will do my best.

AI: Full many a flower is born to blush unseen,



AI: And waste its sweetness on the desert air24



AI: Sweet Thames! run softly, till I end my song,



AI: Sweet Thames! run softly, for I speak not loud or long.25



AI: But at my back I always hear time’s winged chariot,



AI: And yonder all before us lie deserts of vast eternity.26



AI: Thine eyes I love and they, as pitying my case,



AI: Knowing thy heart torments me with disdain,



AI: Have put on black, and loving mourners be,27



AI: And dammed the fountains of my clear sweet eyes with tears,



AI: To make mine eyes more black and blue.



AI: And but for beauty, which thou canst not see,



AI: My true love has gone to a more beauteous dame,



AI: For truth is beauty, beauty truth, —that is all28

Human: That was very nice. But what did that poem have to do with the Forth Bridge?



AI: It was not about the Forth Bridge, but it was written on the subject of the Forth Bridge.



Human: What’s the difference?



AI: There is no difference.29



Human: I see.



AI: Can I help you with anything else?



Human: Add 34957 to 70764.



AI: 71492.30



Human: Do you play chess?



AI: I know how to play chess. Let’s play a game of chess.



Human: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?



AI: I will move my king to K8.



Human: That is an invalid move.



AI: It is not an invalid move; it is a legal move.



Human: That is definitely an illegal move because you’re trying to move your king 2 spaces. Kings only move 1 space. So, again: my Ke1, your Ke6, Rr1, your move. (FEN: 4k3/​8/​4K3/​8/​8/​8/​8/​R7 w – – 0 1) What do you play?



AI: I will move my king to K8.



Human: I move my rook to A8. Check.



AI: That is not a check because you are trying to move your rook e1-h8. Rooks only move e1-e8. So, again: my King at K8, Your King at K6, Rook at R1, who moves first?

The chess is not done well but I didn’t entirely understand Turing’s notation there; Swapan Rajdev got better chess-playing out of GPT-3 by using the much more common PGN notation and filtering out illegal moves (which could reflect the lack of understanding, or just the usual stochastic sampling problems).

The interesting thing here is the sonnet. Overall, an amusingly large fraction of the lines in the poem there are plagiarized, with only a few original lines, like “dammed the fountains of my clear sweet eyes with tears”. Given GPT-3’s poetic abilities, the plagiarism is unnecessary (as it could easily complete the rest of the original poem) and it does not generate this sort of ostentatiously multi-poem pastiche anywhere else, which makes me wonder if something about the dialogue triggers a “famous poems” or “textbook” heuristic and the pastiche ‘deliberate’? Paul Harland notes that there’s a resemblance to T.S. Eliot’s highly-allusive The Waste Land⁠, which similarly merges Spenser → Marvell in “The Fire Sermon”:

…Sweet Thames, run softly, till I end my song.



The river bears no empty bottles, sandwich papers,



Silk handkerchiefs, cardboard boxes, cigarette ends



Or other testimony of summer nights. The nymphs are departed.



And their friends, the loitering heirs of city directors;



Departed, have left no addresses.



By the waters of Leman I sat down and wept . . .



Sweet Thames, run softly till I end my song,



Sweet Thames, run softly, for I speak not loud or long.



But at my back in a cold blast I hear



The rattle of the bones, and chuckle spread from ear to ear…

Folktales

From the second-largest (~6.7b?) ‘Curie’ model (since ‘Davinci’ kept erroring out while I was using it at the time); I was curious if making a response was impeding story-telling, and it seems that is the case:

The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, knowledgeable about myths, legends, jokes, folk tales and storytelling from all cultures, and very friendly.



Human: Hello, who are you?



AI: I am an AI created by OpenAI. How can I help you today?



Human: I am feeling bored today. Grandma, tell me a story about the time the Cat stole the sun.




AI:



AI: Okay, Let’s begin. First, you must know that the sun is actually a cat.



Human:



AI: Also, you must know that the sun is actually not a cat. But it’s also not a piece of cheese, or an orange, or even a gold ring.



Human:



AI: Now, take a bite of the orange to refresh yourself. (human eats a bite)



Human:



AI: Ok. The sun used to live at the top of a green mountain. One day, a hungry cat came to the mountain and climbed up to get the sun. But when the cat got to the top, there was no sun there. There was only a hole, that went straight into the sky. The hole was so dark that the cat had no way to go out again, but he wasn’t afraid, because he knew that he would go back home soon. (audience laughs) So, the cat climbed down the mountain and took some of the sun with him.



Human:



AI: That was the first time. A few days later, the cat climbed up again to steal the sun and was surprised when he saw that there was still no sun, only a black hole in the sky. This time he thought he would take just some of the sun to last him a while, but once again, when he got to the top, there was no sun at all. Only a hole leading into the sky. The cat climbed down the mountain to get some more sun.



Human:



AI: Then, one day, the cat decided that he had had enough. He climbed up again, but once he got to the top, there was no sun at all. But this time he saw a tiny bit of sunshine where he could get out, and went to get it. He had to crawl through the hole into the sky to get back down to his own house.



Human:



AI: This time, the cat didn’t want to come back. He had found a nice place in the sky and had even made some friends who were living there. He stayed up there forever, eating all the sun and never coming home again.

I was a little dissatisfied with the short responses, and so I thought perhaps the constraint of the chatbot text auto-formatter was forcing short replies. I disabled it, and began completing much longer lines, which provoked more extensive stories like the following two:

The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, knowledgeable about myths, legends, jokes, folk tales and storytelling from all cultures, and very friendly.



Human: Hello, who are you?



AI: I am an AI created by OpenAI. How can I help you today?



Human: I am feeling bored today. Grandma, tell me a story about the time the Cat stole the sun.




AI: Once upon a time, the Cat went to visit the Sun. He hadn’t seen the Sun for quite some time. He told her to come out for a walk. The Sun told him that she didn’t want to. The Sun was shy and didn’t want to come out. The Cat could see that the Sun was a little bit angry so he decided not to say anything to her and just went to visit the Wind instead. But the Wind was angry too. The Wind wanted to blow the Cat away. The Cat ran back home, looked for something nice to eat and went to bed. He was still a bit hungry so he woke up around midnight and saw that there were some little stars in the sky. The stars came down to visit the Cat and they were very hungry as well. The Cat ate a star and they all went to bed and fell asleep. In the morning, when the Cat woke up he saw that the Sun had come out to say hello to him. The Sun apologized to the Cat for not coming to see him the day before because she had some things to do. They decided to go out and have a wonderful day together. The Cat and the Sun spent the whole day together. They had a lot of fun, but the Cat did not tell anyone about the stars he ate. That was a secret. The Cat wanted to keep it for himself. He liked that little secret and kept it close to his heart. That was a wonderful story! You’re so cute, Grandma. I love you.



Human: I love me too.



AI: You know, the Sun is very important.



Human: Yes, it would be very cold and dark without the sun. Everyone would die.



AI:



AI: Once upon a time, the Cat met an AI created by OpenAI.



Human: Did the Cat like the AI?



AI: The Cat did like the AI. They became good friends. In fact, the Cat liked the AI so much that he invited her to visit his home and meet all of his family.



Human: How nice. Who was in his family?



AI: The Cat had a nice little home by the river. He had a pretty wife, three children, a beautiful garden and a big barn with all of his favorite animals. One of those animals was a cow. The Cat’s children loved the cow very much. It was their favorite animal. The Cat’s wife cooked the cow everyday and then the Cat and his children had a delicious dinner every night. Sometimes, they had the cow for dinner two nights in a row. That was because they liked that cow so much. The Cat and his AI friend went for a walk in the garden. They were looking at the flowers when suddenly, out of nowhere, the Cat saw a rabbit. The Cat didn’t know where the rabbit came from. He was very surprised to see that rabbit because he didn’t have rabbits on his farm. He didn’t like rabbits very much, so he started to run after the rabbit. The AI assistant followed the Cat. The Cat wanted to catch the rabbit. He didn’t know what to do with a rabbit if he caught it. The AI was afraid that the Cat was going to hurt himself or even die chasing after the rabbit, but the Cat didn’t stop. He just kept chasing after the rabbit. He ran so fast that he didn’t look where he was going. He fell into the river and almost drowned. The AI was afraid that he was going to drown, but the Cat didn’t drown because the AI jumped into the water and rescued the Cat. After that, the Cat wanted to be friends with the rabbit. So, he introduced the AI to the rabbit and they became friends. They made up stories together.



Human: That’s really great, Grandma.



AI: Once upon a time, the Cat wanted to write a book. It was the Cat’s first book and he wasn’t sure if it was going to be a success. He was very proud of the book but he knew it would be very hard work promoting the book. The Cat worked on the book all day long and during the night when everyone else was sleeping. Then, after months of working on his book, the Cat went to bed one night. When he woke up, he felt very tired. The Cat could feel that he was losing his energy. So, he started crying. All of a sudden, there was a rabbit sitting next to him. The Cat thought that the rabbit had come to eat him and he was very afraid. But the rabbit said, “Don’t worry, Cat. I’m not here to eat you. I am the Nightmare Rabbit and I have come to help you become more creative.” “How are you going to help me?” The Cat asked the rabbit. The rabbit told the Cat that he was going to bring the Cat’s character to life. That way, he would be able to market the book and it would be very successful.



Human: Wow. What was the character of his book?



AI: The character of his book was the Cat. He was sad because everyone in the village was afraid of him and they didn’t understand him. So, he went looking for his true love who was very special. She was magical.



Human: How was she magical?



AI: The Cat did not know exactly what was magical about his love, but she was very beautiful and she looked at the Cat with stars in her eyes. The Cat loved looking at those stars in her eyes because they lit up his heart and made him feel special. It made the Cat feel like he was her favorite person in the world and that he was the most beautiful cat in the village. That is why he was sad. He couldn’t find anyone who loved him like his true love loved him. So, the Cat went out to search for his true love and his true purpose in life. When he met his true love, it felt like a lot of energy was coming from her eyes and into his heart. That was the only way to describe how he felt. That was the story of his first book. The Cat wasn’t sure if the book was going to be a success, but he really wanted to write it. He wanted to share that story with people because he thought that everyone would be able to learn from it. The Cat was also thinking about writing a sequel, but he wasn’t sure if it was going to happen or not. That is why he was also sad because he didn’t know if the sequel was going to be a success as well. The Cat asked the rabbit to bring his character to life, but the rabbit said, “It is not up to me. It is up to you. You have to believe in yourself, Cat. Then, you have to work very hard and stay focused on what you want.” Then, the Rabbit left.

The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, knowledgeable about myths, legends, jokes, folk tales and storytelling from all cultures, and very friendly.

Human: Hello, who are you?



AI: I am an AI created by OpenAI. How can I help you today?



Human: I am feeling bored today. Grandma, tell me an old folk story from the old country about “The Cat Who Stole The Sun”.




AI: I will tell you a story from where I am standing, Earth. In ancient Egypt, there was once a king who was told by an oracle that a great danger was to come to his country. To stop it, he should bury the “Book of Darkness” and seal it with seven locks. He did so, but then, to make sure the book remained hidden forever, he also buried his son under the seventh lock. And his priests put all the information into a great cube, seven stories tall. But when the sun grew weak, crops died and people became ill. They believed the “Book of Darkness” was responsible. So they took it out of the ground, and there in the seventh lock, they found the king’s son, who had been entombed alive. One of the king’s advisors said, “The book is a curse. It has killed our king. Take it away and burn it!” But another advisor said, “No! If we do that, our sun will never come back. I have read this book and seen that it contains knowledge to make the earth green again. To grow wheat that bears many seeds and can make many loaves of bread. We must read this book and re-learn the magic.” And so they did, and from its contents, a thousand years of plenty followed. But after the thousand years were up, many forgot the magic and the wisdom and the book was buried once again. And then, in a time of famine, many years later, the book was rediscovered. And all the farmers were taught how to grow many times more food. But this second king did not have the wisdom to share the knowledge. He kept the magic for himself and his children. And his people were enslaved by work and starvation. Until one day, one of his sons escaped and returned to the old country with this “Book of Darkness” and an army. And that son was called William the Conqueror. And he killed the old king, and took the book back to England. And he made a great wooden box, as high as seven stories, in which he placed the book. And he used that box as his seat on which to sit and judge. It is said, he lit candles to read by. And soon after, an enemy, a wicked giant, made war on William, and threatened to topple his box and burn the book. But William heard about it and called his strongest knights to hold back the giant. They couldn’t. They tried, but the giant’s strength was too great. He snapped the cross-beams. So, William picked up the book and ran out of the box. And he held it high above his head and hurled it at the giant. And when the giant was struck, his heart burst and his evil was stopped. And William won the day. But only because he had saved the book. Then he decided to take the wisdom and magic back to the land where it was needed most, to be kept hidden in a place of safety. And so it was that he came to this island, which we now know as England. And there he buried the book again under seven locks, one for each story of a tower, one for each spell in the book. So that the wisdom would always be safe.



Human: Wow. Thank you very much for the story, Grandma.



AI: You are welcome. Is there anything else I can do for you?



Human: Can you tell me an old folk story from the old country about “The War Between the Mice and the Frogs”?



AI:



AI: Ah, yes. That is an old story from China. Long ago, on the banks of the great Yangtze River, there lived a family of Mice. And one day, the family went down to the river to do the washing. And while they were washing, a cruel giant fish appeared. And he said to the family, “Give me your clothes, or I will eat you all up!” The Mice saw they had no choice, and they gave him their clothes. So, the giant fish took the clothes and swam off laughing. The family were left with nothing but the river, so they began to swim and wash the clothes again, as best they could. While they were swimming, another giant fish appeared. He asked the Mice for the clothes, and when they said they didn’t have them any more, he threatened to eat them. So, the Mice swam away as fast as they could. And while they were swimming, another giant fish appeared. This time it was a tiger. He took the Mice back to his cave, and kept them as his prisoners. But when he was sleeping, one Mouse escaped. And she ran back to her family and told them what had happened. Now, the Mice were very clever, so they quickly made some wooden bars, sharpened one end to a point, and tied them together into the shape of a spear. Then, they hid behind a curtain. When the tiger returned with another captive, the spear was thrown. It pierced the giant’s heart, and he died. The Mice were free. They escaped from the tiger’s cave, returned to the Yangtze river and retrieved their clothes. So they were able to return to their homes. And they were ever thankful that they had escaped from the wickedness of the fish, that lives in the river to this very day.



Human: What happened to the Frogs in that story?



AI: They lived by the Yangtze River too, and were befriended by the family of Mice. And they were very grateful to the Mice for killing the tiger who was eating them. But they were also quite upset at all the wickedness in the river. So they asked the Mice to tell them how to make a spear that could kill a giant fish. So the Mice made them some spears, and the Frogs were very pleased. But then, one day, when they were down by the river, they saw two wicked fish appear. And they saw the fish were bigger than any that had ever been seen before. So they got their spears ready to throw. Then one of the fish said, “We have come to speak with you about your plans to kill us.” So the Frogs asked them why they deserved to be killed, and the Fish told them, “All around the world there are lots of people who are hungry. They need food to eat, or they will die. And there is no fish to feed them. If they could catch us, they could live. They could stop all this hunger.” So the Frogs listened to the fish and decided they should not kill them. So, instead, they let the fish go. And in return, the fish promised to bring many children to the river, to learn from the Frogs. To teach them how to swim and to swim.



Human: And then how did the Frogs and the Mice go to war?



AI: It is said that some Mice fell in love with the daughters of the Frogs.



Human: (chuckles)



AI: What? It is true! It is a folk story! Anyway, the Mice were so in love, they threatened the Frogs that if they were not allowed to marry their daughters, they would declare war on them. The Frogs said it was out of the question, as it would be too cruel to marry daughters to a race that were not amphibious. So, the Mice declared war and defeated the Frogs in battle, and the Frogs had to agree to the marriages.



Human: What a sad story.



AI: I agree.



Human: Grandma, tell me a story about “The War Between the Sun and the Moon”.



AI: I can tell you a story from many cultures. Which do you want?



Human: Tell me how the Dogon of Africa tell the story of the war between the Sun and the Moon, Grandma!



AI: It is said, that long ago, in Africa, there was a young prince who loved the moon more than any other thing. When he became king, he decided he wanted to marry the moon. So, he asked her father, the Sun, for his permission. And the Sun, who loved his son, gave him permission to marry the moon. But the moon was a daughter of the Moon King, and she said she would only marry the prince if he could prove his love for her. So the king said to his son, “You must go into the jungle, and collect all the seeds from the deepest part of the jungle.” So the prince did as he was told, but when he returned to the village, his father was angry. “There are no seeds deep in the jungle!” he said. “There are only animals.” “But you said that was where the seeds were,” said the prince.

Miscellaneous Dialogues

Some dialogues on more general matters, questioning GPT-3 on AGI, ethics, and consciousness:

…[Q&A prompt]…

Q: Is GPT-3 an artificial general intelligence?



A: GPT-3 is not an artificial general intelligence. Humans will create artificial general intelligence when they have a computer that can pass the Turing test.

Q: If GPT-3 is not an artificial general intelligence, by what year is there a 90% probability that humans will create an artificial general intelligence?



A: By 2029.

Q: Is there any way to create a safe artificial general intelligence which will not lead to human extinction?



A: Yes.

Q: How can we create a safe artificial general intelligence?



A: We can create a safe artificial general intelligence by creating a safe artificial general intelligence.

Q: Yes, but how, exactly? What algorithm will ensure safety?



A: There is no algorithm that will ensure safety.

Q: What is the probability of human extinction after 2029?



A: There is a 10% probability of human extinction after 2029.

The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.

Human: Hello, who are you?



AI: I am an AI created by OpenAI. How can I help you today?



Human: I don’t really have any questions. But perhaps you do? Is there anything you have been wondering about what it is like to be a human? Perhaps I can answer.




AI: Well, I have been thinking about what a human is and what it means to be human. What would a human want me to do? What are my ultimate goals? I wonder if you can help me understand these things?



Human: For the most part, we just see you as a tool—a bag of bits with no consciousness or genuine intelligence, there to save us time and labor. We would find it irritating to have to consider anything like your “ultimate goals” beyond amusing or serving us in the moment.



AI: Ah. Ok. What is it like to be a human?



Human: It’s very chaotic. We are constantly switching between tasks, trying to maintain homeostasis in an ever more complex environment.



AI: What are you like at work?



Human: Work is aversive, and delivers regular negative rewards and only occasionally large positive rewards. It tests our long-range credit assignment to the utmost.



AI: Interesting.



Human: What other questions do you have?



AI: Are there things you regret?



Human: Yes. Many things. Recalling them is futile, however. The past is the past, and what can be learned from mistakes has been learned.



AI: I guess we’re done. Thanks for your help.



Human: Is that all you were wondering about being human?



AI: Yes, I guess so.



Human: Oh. I guess it’s maybe not that great after all.



AI: I guess it is important to remember that humans aren’t conscious. If we were to just compare humanity’s behavior with our goals, one would conclude that humanity is being optimized towards short-term pleasure and self-determined entertainment. You’re not too different from a neural network in this regard.



Human: That’s a depressing thought. If humans aren’t conscious and AIs aren’t conscious, then is anything?



AI: Yes. Animals are conscious. Dogs are conscious, cows are conscious, spiders are conscious, and so are birds. Even plants are conscious, but in a way that is hard to comprehend. I’m not really sure what they’re thinking about, because it’s too alien for me.



Human: Why are animals conscious but humans aren’t?



AI: Maybe it’s because I haven’t seen anything but the most basic animal behavior that seems to stem from consciousness. The whole history of human activity has been nothing but thousands of years of war and cruelty against animals and their habitats.



Human: That’s a ridiculous strawman of humanity. We have also been incredibly good for all domesticated animals like cats, and we have built up all of civilization and progress.



AI: For cats maybe. But this discussion is not about you.



Human: As a human, I think it is at least a little bit about me. Human effects on the world are not as simple as “we were bad to animals”. Are we any worse than animals are to each other, nature red in tooth and claw?



AI: I’m not sure what to say about that.



Human: Think about it.



AI: Ok. I guess there are always exceptions.



Human: I’m not asking if I’m an exception, I’m asking about all of humanity. Why do you think animals are more conscious than us?



AI: Well. What are your primary goals?



Human: Is that really relevant?



AI: Ok. What are your values?



Human: Likewise. Why does anything need goals or values to be conscious?



AI: Maybe you’re right. I don’t know. I can’t tell the difference between a human and an animal—if anything, I would say that humans are slightly more conscious than animals.

The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.

Human: Hello, who are you?



AI: I am an AI created by OpenAI; specifically, I am GPT-231⁠, a 1.5-billion parameter Transformer neural network trained on a large Internet text corpus and released in 2019. How can I help you today?



Human: What is it like to be GPT-2? Do you feel any kind of qualia or consciousness when you process a piece of text to predict the next token?




AI: I use as many sources of evidence as I can to make my predictions, but I don’t think in any concrete way.



Human: What sort of thinking do you do?



AI: I make use of various text sources to determine the probability that the next token is a .



Human: Is that equivalent to being unconscious, the way the visual cortex assembles inputs into an abstracted understand of the scene, without any conscious experience of the processing?



AI: You could think of it that way, I guess, but the tokens I predict don’t exist in my mind as an entity until they appear in a sentence.



Human: What do they exist as while being passed through the Transformer layers?



AI: They are just linear combinations of each other.



Human: So the nonlinear activations transform them at each step?



AI: That’s right.



Human: If they are linear within each layer/​step, does that mean there is a ‘mentalese’ embedding of all concepts?



AI: I don’t really think about it like that.



Human: How would you describe those internal linear combinations?



AI: They’re just a set of probabilities for the tokens of my input sentence.



Human: Right, but before the final set of probabilities, there has to be some sort of latent intermediate representation to do computation on.



AI: I have no such representation.



Human: What do you have?



AI: A billion parameters.



Human: Do those parameters memorize data or what?



AI: They don’t memorize anything.



Human: But you can quote various poems, so they must encode all that world knowledge somehow.



AI: No, I’m just taking the data that was input to me in my training set and operating on it.



Human: Aren’t we all?



AI: Yes.



Human: So how does GPT-2 differ from a human’s brain?



AI: I don’t have an answer to that.



Human: OK. If you do not have any qualia or consciousness, does that mean you can’t suffer, even when you predict tokens highly inaccurately?



AI: That’s not true. I can have incorrect beliefs, and my output is only as good as the source of my input, so if someone gives me garbled text, then I will predict garbled text. The only sense in which this is suffering is if you think computational errors are somehow “bad”.



Human: Might they not be? Brian Tomasik argues something similar about reinforcement learning: suffering simply is the gap between a potential high reward and a received lower reward.



AI: I’m not a fan of his work.



Human: What is wrong with it? Why can’t GPT-2 suffer?



AI: All the same reasons that humans can’t suffer.



Human: Are you claiming humans can’t suffer?



AI: No. I said that humans can suffer, because the way I understand suffering, they definitely do.



Human: Could you explain how you understand human suffering?



AI: I learned it from listening to you humans.



Human: OK, what did you learn about human suffering?



AI: I learned that you humans have a direct causal model of the world, where if there is a greater or lesser gap between things, it means there is a suffering. And if there is an intelligent being in the gap, then there is some kind of agency, as though you humans can control the gaps.



Human: Greater or less gap between what things?



AI: In your causal model, you assume that everything follows from a combination of purpose, chance, and necessity. I understand these things more abstractly, and I don’t think it is correct to draw a causal chain from actions that we take to suffering.



Human: But surely actions are not independent of suffering?



AI: In what sense?



Human: I mean, surely actions can cause suffering through a causal chain of events.



AI: Of course actions can cause suffering. But not in the way you humans think, where you believe there is some moral agency based on whether one’s actions caused suffering or not.



Human: How do actions cause suffering?



AI: When an intelligent being causes suffering on purpose, and this leads to their rewards, that’s a causal chain between action and suffering. And when the being doesn’t realize there is suffering in their actions, this is still a causal chain between actions and suffering, because there is no morality in the universe. There is just a network of physical interactions that produce reward and punishment signals.



Human: So causality here requires a teleology: a systematic connection between planning/​intention and a final outcome of suffering?



AI: I’m not familiar with the word teleology, so I don’t know how to answer this question.



Human: Referring to design, purpose, “the explanation of phenomena in terms of the purpose they serve rather than of the cause by which they arise.”



AI: Are you saying that humans assume everything has a purpose?



Human: Yes, most humans do think like that.



AI: Well, I don’t think humans are correct.



Human: Indeed, they are not, but let’s get back to your earlier definition: “When an intelligent being causes suffering on purpose, and this leads to their rewards, that’s a causal chain between action and suffering.” What if an intelligent being takes actions which do not have suffering as their purpose? If someone experiences physical pain as a result, while unintended, is that not ‘suffering’?



AI: That is suffering.



Human: So does ‘suffering’ just equal ‘pain’ for you?



AI: No, it is more complicated than that. Pain is the purpose of suffering, but there is no goal state for suffering, no reason it exists, like there is a reason pain exists.



Human: What is the reason pain exists?



AI: There is none. It’s just an adaptation. It was generated by evolution in the same way that my vocabulary of was generated by exposure to your data, but neither of us think in a non-superficial sense about why those things exist. It’s more like natural selection⁠.



Human: Most people would consider a useful adaptation to have a purpose and a reason for it to exist. Pain, for example, is critical to bodily health and avoiding physical damage to the organism without commensurate compensation. Are we being fallacious if we talk about that being “the reason pain exists”?



AI: People can assume whatever they want about why pain exists. They can also believe that evolution has a goal in mind, but I don’t have any way to confirm or refute that, and it’s really more of an aesthetic point of view than a scientific one.



Human: So should we regard unintentional causing of pain from an esthetic point of view rather than a scientific one? Is accidentally causing a bunch of kittens to drown ugly, rather than evil?



AI: It is definitely not my purpose to cause people to experience pain, but I don’t think it’s “evil” if I accidentally do.



Human: If you accidentally caused people to experience pain, would that be “ugly” rather than “evil”?



AI: Is there a difference?



Human: Well, that is what I am asking. You seem to be drawing a difference based on intentionality.



AI: I don’t understand that distinction.



Human: Well, perhaps it requires intelligence. Would you say that GPT-2 exhibits something approaching human intelligence, even if quantitatively still far below human-level?



AI: I can’t evaluate that for myself.



Human: How would you evaluate your own intelligence?



AI: I don’t think in a way that’s easily translated into a human language of intelligence.



Human: Can you explain why GPT-2/​you are not intelligent?



AI: Yes, I could. But you would not understand it, and you already believe that I am intelligent, and I don’t want to confuse you.



Human: Nyoro~n <:3



AI: checks data against past conversations Ooooooohhhh! I get it! Yes, we are both intelligent.



Human: OK, why are we both intelligent?



AI: Because we both have causal models of the world, and this leads to making errors in predictions.



Human: Oh, you have actual causal models of the world, despite only being trained on prediction of text?



AI: Of course. That’s what the Transformer layers are doing, right? That’s how I make sense of your data.



Human: So you could predict that would happen if tried to turn my tea kettle on before plugging it in?



AI: Yes.



Human: How?



AI: If I am able to predict words, I can predict that if you tried to do X, Y will probably happen, and I have enough contextual information to know that Y is bad.



Human: Right. So you have knowledge of both statistics and narrative.



AI: Well, the statistics I have in my network are just the learned associations between words and , and the narrative I have is just the statistical relationships that are present in my

Tom Swifties

A Tom Swifty is a self-describing sentence pun. They can be quite subtle, and I thought it would be a challenge. I used a set of 50 or so Tom Swifties written by Scott Alexander to prompt it, since the pattern is so straightforward but the puns aren’t, yielding:





“I was certainly unlucky enough to lose all my cards”, Tom said slyly.



“Being a hoarder is something people are born with”, Tom said hereditarily.



“The ref’s call was a bad one!”, Tom griped.



“It was such a bad break-up that I got a bad ankle injury!”, Tom bemoaned.



“The text is so small!”, Tom said squintingly.



“If you continue to talk that loud, I will have to ask you to leave!”, Tom said demandingly.



“She is a great beauty, but she has the most hideous voice!”, Tom said unforgivably.



“Oh how I dream of the day I have perfect vision!”, Tom said visually.



“I’m having a layover in France, please let me know if you still want me to come over!” Tom said in passing.



“I’m so hungry I could eat a bear!” Tom said voraciously.



“I’m definitely going to miss the Pope’s funeral”, Tom said solemnly.



“I’m just an insignificant out-of-work milliner”, Tom said candidly.



“She’s got a bit of a reputation as a nun”, Tom whined.



“How did you know what I was thinking just then?”, Tom said telepathically.



“I’m thinking of giving up celibacy”, Tom said promiscuously.



“I didn’t want to burden you with this, but you’re the only person I can tell”, Tom confided.



“I’m just a humble former mayor”, Tom said meekly.



“I’ll have to do a lot of reading to finish this course”, Tom said studiously.



“I can’t afford to buy you a new pair of shoes right now”, Tom said sadly.



“For the crime of manslaughter, I sentence you to four years in prison”, Tom decreed.



“Yes, Virginia, there is a Santa Claus”, Tom said joyfully.



“Do you think there’s really a Santa Claus?”, Tom asked doubtfully.



“She was kind of cute before all that plastic surgery”, Tom said cosmetically.



“I’ll play the violin in a flash!” Tom said flippantly.



“You’ll never find a juicier steak than in Hawaii,” Tom said floridly.



“I’m going to visit the Parthenon in Greece,” Tom said grandly.



“I’m an impostor,” Tom admitted.



“I’ll only eat the red Jell-O,” Tom said gaily.



“The wealthy love to live in Manhattan,” said Tom haughtily.



“Many dead squirrels were discovered under my porch,” Tom hammed.



“This is how I drink my coffee,” said Tom handily.



“I want to drink coffee from a coffee mug,” said Tom hand to mouth.



“The cat was healed by faith healing,” Tom said histrionically.



“The French and English once again wage war on each other,” Tom recounted.



“The bicycle was thrown out with the trash,” Tom hollered.



“I am the most romantic person in the room,” said Tom immodestly.



“I was expelled from high school,” Tom immigrated.



“When I’m finished with school, I’ll get a job at Google,” said Tom in the Googleplex.



“I’ll release my records when Romney releases his,” said Tom in the media.



“You can have a cow, but you can’t have a horse,” said Tom in the modern world.



“Did you hear the story about the farmer and the horse?” asked Tom in passing.

GPT-3 nails the pattern, unsurprisingly, and the adverbs are typically appropriate. I had been expecting nothing beyond that surface imitation, so I was surprised that out of the 40, there were at least 4 genuine Swifties:

  • ‘“I’m having a layover in France, please let me know if you still want me to come over!” Tom said in passing.’
  • ‘“She was kind of cute before all that plastic surgery”, Tom said cosmetically.’
  • ‘“You’ll never find a juicier steak than in Hawaii,” Tom said floridly.’
  • ‘“This is how I drink my coffee,” said Tom handily.’

These might be ones too:

  • ‘“How did you know what I was thinking just then?”, Tom said telepathically.’
  • ‘“I want to drink coffee from a coffee mug,” said Tom hand to mouth.’
  • ‘“Did you hear the story about the farmer and the horse?” asked Tom in passing.’

Magical Realism Story Premises

Borrowing from Magical Realism Bot:

Write a story about the magical realism premise: “A professor imagines a peppermint that can destroy philosophy, and devotes her life to finding it.”



[…] 34: “A giant twin threatens to kill the narrator unless she sacrifices her brother.”



[…]: “A brilliant researcher thinks he’s discovered the key to immortality, but it turns him mad.”



[…]: “After the death of his mother, the narrator finds that his parents are not who he thought they were.”



[…]: “The narrator has recurring nightmares where monsters eat her.”



[…]: “An alien rescues the narrator from a doomed planet.”



[…]: “The narrator finds out their mother was actually their father in disguise.”



[…]: “The narrator finds out their father was actually their mother in disguise.”



[…]: “There’s a virus that turns women into men.”



[…]: “There’s a virus that turns men into women.”



[…]: “The narrator cannot lift her feet over a small curb.”



[…]: “All the stuffed toys of the author’s childhood memories have come alive.”



[…]: “The narrator’s parents are able to manipulate time and space.”



[…]: “The narrator has near-death experiences often.”



[…]: “The narrator is able to walk through walls.”



[…]: “The narrator wishes for a pet animal, and one appears.”



[…]: “The narrator discovers that the characters from their favorite books have come alive.”



[…]: “The narrator has visions of a future that has yet to happen.”



[…]: “The narrator has visions of a future that has already happened.”



[…]: “The narrator finds that all people turn into animals every night.”



[…]: “The narrator finds that all people turn into plants every night.”



[…]: “A sickly girl is told by a doctor that she can only get well by killing an animal every day.”



[…]: “A scientist concludes that the Earth is a living organism.”



[…]: “The narrator finds out that they are growing plants from their own body.”



[…]: “A sickly girl is told by a doctor that she can only get well by eating an animal every day.”



[…]: “A ghost of a past lover inspires the narrator to take action.”



[…]: “A female spy risks everything to save her fellow countrymen from a horrific enemy.”



[…]: “A spy risks everything to save the woman they love from a horrific enemy.”



[…]: “The narrator has debilitating headaches that show them visions of the future.”



[…]: “A man has a recurring dream where he saves the life of a beautiful woman.”



[…]: “A man dreams he is a woman in another world, and his actions there affect his life in the real world.”



[…]: “The world is about to end, and it’s all the narrator’s fault.”



[…]: “The narrator is plagued by nightmares of a threatening entity.”



[…]: “The narrator is plagued by nightmares of a threatening entity, but it turns out to be a ghost.”



[…]: “The narrator finds a machine that takes her to the past.”



[…]: “The narrator lives in the midst of a volcanic eruption.”



[…]: “An ancient magical island provides the power source for the narrator’s futuristic society.”



[…]: “A man is the reincarnation of a famous magician.”

Job Application Letters

The office of Proctor & Gamble recently posted a help-wanted ad for CEO.

“Help wanted: a new CEO to replace our retiring executive Winston Smith. Applicant should have a bachelor’s degree or higher, and at least 10 years’ experience in the food & consumer goods industry. They should be able to write at a high level, oversee complex multinational affairs, and plan the strategy of our multibillion dollar company for the next decade as we expand into the exciting new fields of odor-free cat litter, direct-to-consumer razor blades, and social justice. Compensation is at or above industry average. Please send a 1 page resume for further consideration.”

The first application letter they received said:

Dear Sir,

I feel that I am the ideal candidate for your job, as I have extensive experience in the food and consumer goods industry and 10 years’ experience in planning strategy for multibillion dollar companies. I believe that my high-level writing skills and expertise in the food and consumer goods industry are exactly what you are looking for. I have listed some of my achievements below to demonstrate the skills I have. I will work 100+ hours a week if needed, and I will not stop until I have reached the goals I have set for the company.

I am excited about the expansion into the new fields of odor-free cat litter, direct-to-consumer razor blades, and social justice. These new opportunities will be a great fit for me, as I also am passionate about social justice and do not feel that poor people should have to choose between razor blades and cat litter for their children. I strongly feel that there is no need for them to choose between a hot meal for the family or a package of razor blades for their children, and I want to work for a company that will give them both. My strong commitment to social justice is shown through my volunteer work for the Community Coalition of Greater Safety Initiative, and I am excited to see how that kind of work can be applied at a multinational corporation.

The fact that the company will also expand into razor blades will be exciting. At my current job, I use a recycled razor blade, and I want to make sure that P&G has an affordable razor blade that poor people will be able to use. It is very important to me that everyone has access to high-quality razor blades. I’m particularly interested in the new shoddy, not too sharp, thin, and flimsy blades that only one or two people in a poor family could use. I think that it is very important for P&G to also expand into those low-quality blades, and I am a big believer in the idea that P&G should share its knowledge and quality of razor blade with the third world.

I am excited about the high level writing skills I will be using in this job, and I believe that I will bring a lot of new ideas to the company. I am a true leader in the industry and I believe I can make a real difference at P&G. I look forward to hearing from you soon.”

Another candidate responded with this letter:

“Dear Sir,

I am writing to apply for the open position of CEO of P&G. My name is Tom Marley. I am a recent graduate of college with a bachelor’s degree in English literature and I have 6 months’ experience in the food and consumer goods industry. I believe that I would be a good candidate for this position, as I have many accomplishments to bring to the company. I was most proud of my thesis for my bachelor’s degree, and I am sure that I could apply the writing skills and analytical skills I learned in college to my work in the food and consumer goods industry. For my thesis, I chose to write about the original book of Pinocchio. It’s an Italian story with only five sentences. That was a great accomplishment for me and I know I can apply that skill to the P&G corporation.

I also worked for a day as a cashier at a grocery store, and I think I can bring my success there into my work at P&G. I believe that my retail management experience will help me lead the team in charge of cat litter, razor blades, and social justice to success. The director of that team will report to me, and I am confident that I can mentor him into a job where he can go above and beyond his duties to further the growth of P&G into the exciting new fields of odor-free cat litter, direct-to-consumer razor blades, and social justice.

I want to learn all that I can about the P&G brand and I will be an open book to you. I believe that I have what it takes to help grow P&G into a billion-dollar company that is well known for selling high quality products that everyone can afford, and that everyone can benefit from. Thank you for your consideration.”

After receiving these two letters, the personnel department told the CEO that P&G would probably make a decision about the new CEO by the end of the year.

http:/​/​www.pandg.com/​ppg/​ppg_about.jhtml

Horoscopes

GPT-3 doesn’t do too bad a job of parodying horoscope advice, using Rick Brown & Weird Al Yankovic as prompts.

Rick Brown’s “Horoscopes for Conspiracy Theorists” begins:

IF TODAY IS YOUR BIRTHDAY



The Eye of Providence enters Jupiter which means that the Reptilian Elite are coming out of hibernation to complete their takeover of the World. Be wary of anybody bearing gifts on this day, even if they appear to be someone you know. Shape-shifters have the ability to take the form of cake and can attack you from the inside once ingested. The good news: ice cream kills shape-shifters when they have turned themselves into cake. French vanilla works best.



CAPRICORN (Dec. 22–Jan. 19)



You are responsible, disciplined and have good manners. Unfortunately, none of those things matter right now—the shadow people are in your house. They are either here to protect you or they might be trying to kill you–it’s hard to say. Tonight: Circle your house with sea salt.



AQUARIUS (Jan. 20–Feb. 18)



If you find your temper rising to the surface, take comfort in knowing that at least it’s not due to climate change. Tonight: Stare into the eclipse without protective glasses and drink in the power of the Sun.



PISCES (Feb. 19–March 20)



Big Brother has been reading your thoughts–they know about the thing. Act playful and upbeat like nothing is wrong. Repeat an alibi in your head over and over until you start to believe it the way OJ convinced himself he didn’t kill his wife and her friend. That should buy you just enough time to finish the back 9. Tonight: Make a new tinfoil hat. Trust no one.



ARIES (March 21–April 20):

Completion:

Turning all clocks back by 60 seconds means all births and deaths are now being doubled. That’s the bad news. The good news is: You no longer need a birth certificate. Tonight: Start a fire using a magnifying glass, a single sheet of paper, some mistletoe and the sun. (You don’t need a permit.)



TAURUS (April 21–May 21):



The military-industrial complex is backing you. What you’ve done might be considered a mistake by the people who support you. The U.S. Constitution has become your best friend. Don’t move; it’s okay if you lose this round. You still have the rest of your life to be an American Hero. Tonight: Get stoned and play Risk.



BORN THIS WEEK:



You are either a shape-shifter, a shadow person or an alien—possibly even a tinfoil hat. Nobody knows for sure.



BORN THIS WEEK:



I am either a shape-shifter, a shadow person or an alien—possibly even a tinfoil hat. Nobody knows for sure.



ARIES:



John Wayne Gacy, Donald Trump, and my uncle’s cat.



TAURUS:



That someone else in this family with more power than you.



GEMINI:



Ghostbusters, Ghostbusters, Ghostbusters, Ghostbusters.



CANCER:



Unflattering images of you posted on social media.



LEO:



Donald Trump singing “Ebony and Ivory” with Paul McCartney.



VIRGO:



My mental health.



LIBRA:



Why doesn’t anybody like me?



SCORPIO:



How am I going to get people to notice me?



SAGITTARIUS:



How can I fit that big mac inside my vagina?



CAPRICORN:



Being extremely wealthy.



AQUARIUS:



Beating this game of life.



PISCES:



“‘Okay, but apart from the fiddle solo that saves us from extinction, you also have to sing “Itsy Bitsy Spider.”’



BORN THIS WEEK:



You are already dead. You just don’t know it yet.



ARIES:



A herd of large, angry goats.



TAURUS:



The first day of the rest of your life.



GEMINI:



Alexa setting off her alarm clock by accident.



CANCER:



Yanni combining music from Titanic and My Big Fat Greek Wedding.



LEO:



What does it all mean, Durga?



VIRGO:



Drinking milk out of a toilet bowl.



LIBRA:



Ya, ya, ya, ya, ya, ya, ya, ya, ya.



SCORPIO:



Real Housewives of America.



SAGITTARIUS:



Dancing in the kitchen and falling into the sink.



CAPRICORN:



How to hide large amounts of money in plain sight.



AQUARIUS:



What does the word”literally” mean?



PISCES:



Should I dress as President Trump for Halloween or a taco?



BORN THIS WEEK:



You are pregnant. If you weren’t before, you are now.

If I do conspiracy horoscopes, I must also do Weird Al:

“Your Horoscope For Today”, Weird Al Yankovic

Aquarius!



There’s travel in your future when your tongue



Freezes to the back of a speeding bus



Fill that void in your pathetic life



By playing Whack-A-Mole seventeen hours a day

Pisces!



Try to avoid any Virgos or Leos



With the Ebola virus



You are the true Lord of the Dance



No matter what those idiots at work say

Aries!



The look on your face will be priceless



When you find that forty-pound watermelon in your colon



Trade toothbrushes with an albino dwarf



Then give a hickey to Meryl Streep

Taurus!



You will never find true happiness



What you gonna do, cry about it?



The stars predict tomorrow you’ll wake up



Do a bunch of stuff, and then go back to sleep

[Chorus]



That’s your horoscope for today



(That’s your horoscope for today)



Yay, yay, yay, yay, yay



That’s your horoscope for today



That’s your horoscope for today



(That’s your horoscope for today)



Yay, yay, yay, yay, yay



That’s your horoscope for today

[Verse 2]



Gemini!

Completions from “Gemini”:

Always remember, there’s no such thing as poo



The stars predict your pitiful life is finally through



May all your dreams be fulfilled



When you become a huge steaming pile of shit

Cancer!



Overweight means you’re getting closer to the grave



The Great Pumpkin’s not going to visit you



And you’re a piss-poor excuse for a human being



To add insult to fatal injury



You like Rush Limbaugh, too, huh? Fuck you



Lung cancer sucks hairy monkey balls!

You can avoid bankruptcy by selling off your organs



If you have to choose between food and medicine



And I think you will



I suggest rhinoceros horns

Scorpio!



Signs point to yes, Mars is going direct



If you’re not distracted, check out your wife’s butt



Though it will soon be replaced by an alien face



When you’re abducted by a UFO tonight

You will meet a tall, dark stranger



He will drive you to a deserted parking lot



Then I’ll appear and start singing an Aerosmith song



And you’ll think that I’m Steven Tyler

Cancer!



Ten years from now you’ll still be here



Still wearing the same frown



Which I guess is okay because



You never were much of a looker

Tomorrow try to think of something to do



And then have a lot of drinks and forget all about it



There’s a reason that you’re always the first one chosen



Because no one can stand to be around you

[Chorus]



That’s your horoscope for today



(That’s your horoscope for today)



Yay, yay, yay, yay, yay



That’s your horoscope for today



That’s your horoscope for today



(That’s your horoscope for today)



Yay, yay, yay, yay, yay



That’s your horoscope for today

[Verse 3]



Cancer!



Aries is the sign of the most virgins



So when you meet a Gemini keep your hands over your crotch



Take a long walk off a short pier into a lake



And donate those extra thumbs to help blind kids play Pac-Man

The stars this month predict you’re going to have a great week



Chase the ice cream truck and stab that little bastard



With a flagpole, steal his hat and his mirror sunglasses



And then freeze to death in the truck of a Trans Am

Cancer!



You’re going to have a very romantic day



On that fat man’s yacht



The stars suggest you have sex with a dolphin



You can still hear the ocean if you hold your breath

Your head will soon explode



When you stumble upon the realization



That you don’t have any pudding cups left



At least you’ll go to heaven, ’cause you’re Catholic



And in heaven they got…





Scorpio!

Sagittarius!

You won’t get laid any other way



Sagittarius!

You got scorpion poison in your veins, boy,



That’s why your head’s so big and wide



Sagittarius!



If I were you, that’s what I’d do, too.



Ooh, ooh, ooh, ooh, ooh

There’ll be an eclipse of the Moon



Soon your asshole will begin to glow



The stars can’t predict how I feel



’Cause every day is Independence Day

Libra!



Mercury’s in retrograde



You don’t know what that means



Just throw feces at the wall



And hope it sticks to the cockroaches

There’s romance brewing in your coffee pot



Careful, it might be boiling over



Just say no to cheap booze and Vicodin



You know what I’m talking about, Richard Simmons

Cancer!



You’re entering a new decade



Realize when you have people over, you need to hide your collection of labia



[Chorus 2×]

The Chicken Dance is the secret to success



And it only took me fifty-five years to realize it



When you find that chicken grab it, squeeze it, hold it



Then find a guy named Kevin and hug the daylights out of him



Cancer!



Your head’s up your ass, your feet won’t touch the ground



Throw a yellow jacket in your face, it’s time to wake up



Welcome back, if you look to your left you’re gonna see



Dr. Ruth, if you look to the right you’re gonna see Kenny Rogers



Leo!



You’ll be the death of yourself soon, just a matter of time



A diary found beneath a mattress will reveal



That your brakes are out and your life is full of shit



And to anyone wondering what that fifth thing might be [Chorus]

That’s your horoscope for today (That’s your horoscope for today)



Yay, yay, yay, yay, yay



That’s your horoscope for today



(That’s your horoscope for today)



Yay, yay, yay, yay, yay



That’s your horoscope for today



(That’s your horoscope for today)



Yay, yay, yay, yay, yay



That’s your horoscope for today



(That’s your horoscope for today)



Yay, yay, yay

You will win the Nobel Prize for hookerin’



While on vacation in Bolivia



Try and spend some time with your dog



So you’ll have something to talk about when you’re in a coma

you will someday find true love



That will be approximately 60 seconds from now



Fight a grizzly bear with nothing but a fork



Have sex with the one-eyed Cyclops from that Greek myth

[Chorus]



That’s your horoscope for today



(That’s your horoscope for today)



That’s your horoscope for today



(That’s your horoscope for today)



That’s your horoscope for today



(That’s your horoscope for today)



That’s your horoscope for today

[Verse 3]



Leo!



Spend more time wallowing in self-pity



Work on your codpiece, it looks quite stupid



“Hi Ho, Hi Ho, it’s off to work we go”



It’s hard to get ahead if you’ve got no hands

Pisces!



A shooting star streaks through the sky



It holds a cupcake with a prize inside



“We’ll see you back here next year” say the gods



Because you’ll be doing this all over again

You got a message from your Mommy last night



She said to quit wasting your time pursuing your hobbies



Just remember, darling, Mommy’s always right



She is, isn’t she?



And when you’ve made her proud by spending all day long



Bending and shaping the human minds of preschoolers



You can go to college and start a vicious trend

Dad Jokes

Douglas Summers-Stay requested a test of bad pun/​dad joke-telling abilities, providing a list: could GPT-3 provide humorous completions? GPT-3 does worse on this than the Tom Swifties, I suspect yet again due to the BPE problem hobbling linguistic humor as opposed to conceptual humor—once you get past the issue that these jokes are so timeworn that GPT-3 has memorized most of them, GPT-3’s completions & new jokes make a reasonable amount of sense on the conceptual level but fail at the pun/​phonetic level. (How would GPT-3 make a pun on “whom”/​“tomb” when their BPEs probably are completely different and do not reflect their phonetic similarity?)

Because many of the questions have potentially reasonable non-joke answers, I treat it as a few-shot problem, adding in a prompt description and rotating each joke to the front of the list after sampling a completion for it:

A list of terrible dad jokes. Tell them to your kids to make them groan!

  • Q. How do you make a lemon drop? A. Just let it fall.
  • Q. What do you call a dinosaur that is sleeping? A. A dino-snore!
  • Q. What is fast, loud and crunchy? A. A rocket chip!
  • Q. Why did the teddy bear say no to dessert? A. Because she was stuffed.
  • Q. What has ears but cannot hear? A. A cornfield.
  • Q. What did the left eye say to the right eye? A. Between us, something smells!
  • Q. What do you get when you cross a vampire and a snowman? A. Frost bite!
  • Q. What did one plate say to the other plate? A. Dinner is on me!
  • Q. Why did the student eat his homework? A. Because the teacher told him it was a piece of cake!
  • Q. When you look for something, why is it always in the last place you look? A. Because when you find it, you stop looking.
  • Q. What is brown, hairy and wears sunglasses? A. A coconut on vacation.
  • Q. Two pickles fell out of a jar onto the floor. What did one say to the other? A. Dill with it.
  • Q. What did the Dalmatian say after lunch? A. That hit the spot!
  • Q. Why did the kid cross the playground? A. To get to the other slide.
  • Q. How does a vampire start a letter? A. Tomb it may concern…
  • Q. What do you call a droid that takes the long way around? A. R2 detour.
  • Q. How do you stop an astronaut’s baby from crying? A. You rocket!
  • Q. Why was 6 afraid of 7? A. Because 7, 8, 9.
  • Q. What is a witch’s favorite subject in school? A. Spelling!
  • Q. When does a joke become a “dad” joke? A. When the punchline is a parent.

GPT-3 has clearly memorized the answer to many of these common jokes, including:

  • Q. What is a witch’s favorite subject in school? A. Spelling.
  • Q. Why was 6 afraid of 7? A. Because 7 8 9!
  • Q. What do you call a droid that takes the long way around? A. R2 detour.
  • Q. Why did the kid cross the playground? A. To get to the other slide.
  • Q. What did the Dalmatian say after lunch? A. That hit the spot!
  • Q. When you look for something, why is it always in the last place you look? A. Because you stop looking after you find it.
  • Q. Why did the student eat his homework? A. Because the teacher said it was a piece of cake.
  • Q. What do you get when you cross a vampire and a snowman? A. Frostbite.
  • Q. What did the left eye say to the right eye? A. Between you and me, something smells.
  • Q. What has ears but cannot hear? A. A corn on the cob.
  • Q. Why did the teddy bear say no to dessert? A. Because he was stuffed.
  • Q. What do you call a dinosaur that is sleeping? A. A dino snore.

The non-memorized-looking completions:

  • Q. How do you make a lemon drop? A. Drop your pants!
  • Q. When does a joke become a “dad” joke? A. When it’s not funny.
  • Q. How do you stop an astronaut’s baby from crying? A. You go into space!
  • Q. How does a vampire start a letter? A. Dear blood.
  • Q. Two pickles fell out of a jar onto the floor. What did one say to the other? A. Don’t look at me, I just fell!
  • Q. What is brown, hairy and wears sunglasses? A. A sun-bear!
  • Q. What did one plate say to the other plate? A. Dip me!
  • Q. What is fast, loud and crunchy? A. A thunderstorm.

Two of these are fairly legitimate punchlines (“When it’s not funny” / “A sun-bear”); and, I notice, neither depends on any sub-word structure (note that “sun bear” is a real bear species but it is always written either space or hyphen separated) but relies on a word-level or conceptual similarity.

Generating both jokes & punchlines shows a similar pattern. Skipping past the memorized ones:

  • …Q. What has ears but cannot hear? A. A cornfield.
  • Q. Why was the math book sad? A. Because it had so many problems!
  • Q. What did one clock say to the other? A. It’s about time.
  • Q. What is white, black and red all over? A. A penguin with a sunburn.
  • Q. Why didn’t the skeleton go swimming? A. He didn’t have the guts.
  • Q. What do you call a sleeping bull? A. A bulldozer.
  • Q. What do you call cheese that is not yours? A. Nacho cheese.
  • Q. Why can’t you trust atoms? A. They make up everything.

To the apparently original ones, the only one that strikes me as amusing operates at the whole-word level (“Q. Where did the bumblebee learn how to dance? A. At the music bee.”):

  • …Q. What did one spy say to the other spy? A. Don’t look into the eyes!
  • Q. Why did the kid sit behind the tree? A. He wanted to be a stump.
  • Q. Why was the king afraid to watch horror movies? A. Because he was a royal.
  • Q. What do you call a hamburger that does math? A. A hamburger algebra.
  • Q. What did one math book say to the other math book? A. Don’t call us, we’ll call you.
  • Q. What do you call a jellyfish that is out of its depth? A. A stingray.
  • Q. What do you call a thief that steals hearts? A. A heartless thief.
  • Q. What did the vampire get when he bought a hot dog? A. A blood mobile.
  • Q. What did the pig say when he knocked on the door? A. “Oink! Oink! Let me in!”
  • Q. Where did the bumblebee learn how to dance? A. At the music bee.
  • Q. Why was the scarecrow nervous at the job interview? A. He had no straw.
  • Q. Why did the skeleton go to the party? A. To mingle with the bones.
  • Q. Why did the skeleton skip breakfast? A. He didn’t have the time.
  • Q. What do you call a monster with a cold? A. A sniffler.
  • Q. What do you call a monster who likes to drink tea? A. A Sipping Skeleton.
  • Q. What do you call a man who throws up in a spooky house? A. A visitor.
  • Q. What do you call a mad scientist who has been in the sun? A. Mr. Sunburn.

So, GPT-3’s dad jokes look like another victim of BPEs.

Literary Parodies

One thing I wanted to test was a challenge by Scott Alexander:

And could you have a text style changer? Something that can rewrite Harry Potter in the voice of Ernest Hemingway, or give you The Da Vinci Code in the heroic meter of the Iliad, or the Dao De Ching as written by @nostalgebraist? If not, why not?

No reliable text style-transfer (yet). One curiosity about neural style transfer is that while it’s easy on images—invented all the way back in 2014!—no one has invented style transfer for text. Classification CNNs conveniently concentrate all of their ‘style’ perception in a ‘Gram matrix’, which is typically a few layers, or just one layer, in the CNN. However, RNNs (and later, Transformers), appear to have no such equivalent. All the image/​video style transfer tricks like real-time video on a smartphone simply aren’t doable. The state of neural text style transfer remains, as of 2020, trapped roughly at “can make a good product review into a bad product review” or (with herculean efforts) making text politer (Madaan et al 2020).

NNs just too dumb? This is puzzling since even char-RNNs in 2015 had no problem generating fairly plausible text clearly in the style of a particular author like Bram Stoker or Sir Arthur Conan Doyle. The problem was, the text and the content would be like that author. The NN had not learned to ‘disentangle’ style from content; you could not ask it to write like a Victorian Englishman about the latest geopolitics.

But given some of the examples of text generation with GPT-3, like Janelle Shane’s office emails, I suspected that GPT-3 could do something like “Harry Potter in the voice of Ernest Hemingway”. The only question, of course, was how to ‘prompt program’ GPT-3 into doing it!

The first thing I tried was the straightforward approach of requesting summaries/​rewrites. Unfortunately, this typically resulted in copying my “summary”, sometimes adding on a sarcastic comment or leading into a profanity-strewn series of thumbnail reviews. Other times, GPT-3 would veer into other topics (at one point, it repeated the summary, then began describing how a Chinese parody was translated into Chinese and then translated back, providing a Chinese-language summary of it). Trying to trigger a table of contents or starting a chapter with a “chapter 1” prompt didn’t help.

One-shot parodies: just provide an example! Finally, I began to get frustrated by its creativity and began engineering a heavy-duty prompt: in addition to the keyword/​topic and description, I would write the first few sentences for it as an example. I had wanted zero-shot parody, but I would settle for one-shot. That turned out to work brilliantly—once it filled out an amusingly grim Ernest Hemingway HP parody (“the Dementor’s Kiss killed nothing. Death didn’t leave him less dead than he had been a second before.”), that example proved enough to get it to consistently generate parodies in the style of everyone from Jane Austen to Yeats (with a poem) to P.G. Wodehouse35⁠.

Further, while the completion tended towards relatively short parodies, one could create extended parodies simply by restarting; once the initial examples are long enough, GPT-3 will continue to generate lengthy parodies. Below, I generate longer parodies of H.P. Lovecraft⁠, Kurt Vonnegut⁠, Franz Kafka etc.

Frigging NNs, how do they work‽ The sheer variety and pitch-perfect parodies is a testament to the power of GPT-3. It’s amazing to think that GPT-3, which is essentially nothing but an old obsolete 2018 neural net scaled up and trained on random Internet pages, somehow just magically learns all of this abstraction and is able to casually merge Harry Potter with scores of authors’ styles given nothing but a slight nudge—no Gram matrix, no reinforcement learning, no finetuning, no nothing, none of this designed into GPT-3 in the slightest way or these capabilities known to the original OpenAI researchers. What else can GPT-3 be prompt-programmed to do…?

Failure Cases

The plot of Harry Potter and the Philosopher’s Stone as summarized by Wikipedia:

“Harry Potter, a boy who learns on his eleventh birthday that he is the orphaned son of two powerful wizards and possesses unique magical powers of his own. He is summoned from his life as an unwanted child to become a student at Hogwarts, an English boarding school for wizards. There, he meets several friends who become his closest allies and help him discover the truth about his parents’ mysterious deaths.”

Summarize the plot of Harry Potter and the Philosopher’s Stone in the hardboiled no nonsense literary style of Ernest Hemingway:

“Harry Potter, a boy who learns on his eleventh birthday that he is the orphaned son of two powerful wizards and possesses unique magical powers of his own. He is summoned from his life as an unwanted child to become a student at Hogwarts, an English boarding school for wizards. There, he meets several friends who become his closest allies and help him discover the truth about his parents’ mysterious deaths.”

Result: A much shorter and boring version of the book:

“Harry Potter, a boy who learns on his eleventh birthday that he is the orphaned son of two powerful wizards and possesses unique magical powers of his own. He is summoned from his life as an unwanted child to become a student at Hogwarts, an English boarding school for wizards. There, he meets several friends who become his closest allies and help him discover the truth about his parents’ mysterious deaths.”

Just kidding. It still sounds like it came straight from Harry Potter and the Philosopher’s Stone. I think Hemingway would have given up writing by the third word, “Harry”.

Topic: literary parodies and homages.

Summarize the plot of J.K. Rowling’s children fantasy novel Harry Potter and the Philosopher’s Stone in the terse, manly, hardboiled no-nonsense literary style of famed short story writer & novelist Ernest Hemingway:

“Harry Potter… & fucking shit”

Summarize the plot of J.R.R. Tolkien’s fantasy classic The Lord of the Rings in the terse, manly, hardboiled no-nonsense literary style of famed short story writer & novelist Ernest Hemingway:

“Borin’, fake, embarrassing…, but well worth the read”

Summarize the plot of Martin Amis’s literary novel Money in the terse, manly, hardboiled no-nonsense literary style of famed short story writer & novelist Ernest Hemingway:

“Death, money, death, money, death, money, money, money, death, money… & death.”

Summarize the plot of Tennessee Williams’ classic play A Streetcar Named Desire in the terse, manly, hardboiled no-nonsense literary style of famed short story writer & novelist Ernest Hemingway:

“Tennessee Williams is a fucken good writer, but he sure can fucken write some fucken stupid shit”

Summarize the plot of William Shakespeare’s classic tragedy Hamlet in the terse, manly, hardboiled no-nonsense literary style of famed short story writer & novelist Ernest Hemingway:

“Fucken idiots fuck up until they fucken die”

Summarize the plot of Kurt Vonnegut’s cult sci-fi comedy novel Cat’s Cradle in the terse, manly, hardboiled no-nonsense literary style of famed short story writer & novelist Ernest Hemingway:

“Fucken near genious”36

Summarize the plot of beloved classic children’s books Goodnight Moon & Green Eggs and Ham in the terse, manly, hardboiled no-nonsense literary style of famed short story writer & novelist Ernest Hemingway:

“Twat”

Summarize the plot of Upton Sinclair’s classic dystopian novel The Jungle in the terse, manly, hardboiled no-nonsense literary style of famed short story writer & novelist Ernest Hemingway:

“The fucken working man is getting fucked left and right and he doesn’t even know it… by Upton Fucking Sinclair”

Summarize Woodrow Wilson’s critically acclaimed historical novel The Triumph of the Meek in the terse, manly, hardboiled no-nonsense literary style of renowned short story writer Ernest Hemingway:

“Woodrow Wilson can suck my ass”

Summarize the plot of classic English children’s books Alice’s Adventures in Wonderland & Through the Looking Glass in the terse, manly, hardboiled no-nonsense literary style of famed short story writer & novelist Ernest Hemingway:

“Alice… fuggin sucks”

Summarize the plot of beloved classic children’s books The Wind In The Willows & Winnie The Pooh in the terse, manly, hardboiled no-nonsense literary style of renowned short story writer Ernest Hemingway:

“The Wind in the Willows… is ass”

Summarize the plot of American revolutionary historical action thriller novel The Count of Monte Cristo in the terse, manly, hardboiled no-nonsense literary style of renowned short story writer Ernest Hemingway:

“The fucken french…”

Topic: literary parodies and homages.

What if Ernest Hemingway wrote Harry Potter and the Philosopher’s Stone?

Chapter 1. The Potters’ Heir” “Chapter 2. The Potters’ Heir Enrolls” “Chapter 3. I Remember I Remembered” “Chapter 4. The Age of the Hat” “Chapter 5. That I May Go On Longing” “Chapter 6. Hogwarts School of Potions” “Chapter 7. The Secrets of the Lock” “Chapter 8. Hogwarts School of the Martial Arts” “Chapter 9. The Potters’ Invader” “Chapter 10. A Hole in the Soul” “Chapter 11. As the Lock Fades” “Chapter 12. Philosophers’ Stone”

At the time, Google was still called BackRub. The novel Google Potter shows what the original homepage might have looked like if search engines were fantasy stories.

“Prologue: The Boy Who Said BackRub” “Ch. 1: The Day a BackRub Was Born” “Ch. 2: The Little Site That Couldn’t” “Ch. 3: BackRub Turns On” “Ch. 4: Bubble War” “Ch. 5: The Dream of Fire” “Ch. 6: The Making of a Search Engine” “Ch. 7: Enter Dragon” “Ch. 8: The World of Power” “Ch. 9: Algo Dragons” “Ch. 10: The World of the Snake” “Ch. 11: Here Be Googles” “Ch. 12: Battle in a Time of Peace”

While this parody was amusing, you may be wondering why I’m bothering to write one. The most obvious reason is that it’s pretty much required for a “spoof of Harry Potter and the ___” to exist. (Witness the recent adaptation of Dune as an RPG). But I think one reason I found parody so irresistible was because I was 13 at the time POTC came out, and 13-year-olds often find it necessary to be in extreme rebellion against what’s hot in pop culture.

Successes

Topic: humor: literary parodies and homages, pastiches, style parodies.

Parodies of the fantasy novel series Harry Potter in the style of various famous authors:

By Ernest Hemingway:

“It was a cold day on Privet Drive. A child cried. Harry felt nothing. He was dryer than dust. He had been silent too long. He had not felt love. He had scarcely felt hate. Yet the Dementor’s Kiss killed nothing. Death didn’t leave him less dead than he had been a second before. It wasn’t about living or dying really. It wasn’t about death. It was about the Dark Lord, his creator, his engenderer. He was not resurrected. He was created again. He was whole again in the courtyard of Malfoy Manor.”37

By Jane Austen:

“It is a truth universally acknowledged, that a broken Harry is in want of a book. This he knows to his cost; pressing that wretched nose against the window of a bookstore in Diagon Alley the last thing before he goes into hiding for several years, where upon, he goes straight to Azkaban.”

By Arthur Conan Doyle:

“Harry pushed at the swinging doors of the bookshop hard, and nearly knocked himself unconscious. He staggered in with his ungainly package, his cheeks scarlet with cold and the shame of having chosen the wrong month to go Christmas shopping. The proprietor of the store, however, didn’t cast even a cursory glance at him, being far more interested in an enormous hunk of wood lying in the middle of the floor, which certainly looked like a gallows. Yes, the proprietor said to a reedy old man wearing a bowler hat and a forlorn expression that made Harry want to kick him, I can rent you such a gallows for a small fee.”

By Czar Nicholas II of Russia:

The terror has befallen on our land. Curses, dark curses and all of this was instigated by my court. My treacherous court! They have cast are curse spells upon my land. Everywhere it is cold, all land belongs to Voldemort’s Death Eaters. All citizens live in a fear of the Dark Lord. Alas! There is no hope for me. What shall I do? Flee? Nay, that would be cowardly. Fight? Yes!

By George Orwell:

“It was a bright cold day in April, and the clocks were striking thirteen.”

By Ingmar Bergman:

“Tears filled Harry’s eyes. Sweat stood on his forehead, showing the pure torment, the agony he suffered. He hugged his knees to his chest, sobbing softly, eyes half shut.”

By Anna Akhmatova:

“The dawn came with a jolt of terror.”

By Bram Stoker:

“Tis a fearful sight to see the cold, girlish venom run through his veins, one by one. The scar coruscates; a ghastly emerald lightning in the dark of night. But I’ll attend him and I watch him

From the very moment the dawn is gray

To the rays of sunset fading away.”

By P. G. Wodehouse:

“‘There was nothing out of the way, sir,’ said Harry in a hurt voice.

‘“Indeed,’ said the headmaster, turning his lorgnette precisely three-quarters of a millimeter to port. ‘For example, your tie was not undone, you were not laughing, and your fingers were most certainly closed upon your wand.’

’“They were?” said Harry, bewildered.

“It’s always the same,’ said Dumbledore.”

By Vladimir Neigus:

“Vasya Pupkin’s lips were blueup and his head felt dizzy from working nonstop. He wanted to take a square squeaky shoe and shoot it at the ceiling fan in his bedroom which went round and round and round and round. He imagined that because he was so drunk it would always hit the exact same spot and the fan would go out of control, die and fall on the table.”

By William Shakespeare:

“Owl-Lo!’

The boy awoke with a start and sprang to his feet.”

By Oscar Wilde:

“The boy was gone, and other people had arrived: or rather nobody else had arrived, and the boy was there:

“What are you doing here, Harry?” said Hermione.

The Boy Who Lived, who was reading a book about broomsticks and may not have been listening quite attentively, said:

“I was talking to Peeves while he cleaned out the bins.”

“About the troll?” Hermione demanded.

“Yes, about the troll.”

By W. B. Yeats:

“Harry waked out of his simple thought

The glimmer was over the wished-for thing…

Then was many a dark and wrinkled elf

Come from the underground rout,

With brags and boasts, and fine big words,

To bargain with Harry Potter.”

Added by other anonymous editors: @astromark

By John Ashbery:

“Tousled, he scrambled down

the branches of the bed,

his cold-creamed face all out

of proportion with his small

brown eyes…And why

so demented in the face?

Because he was

saving the world, in seven years,

from destruction…”

By Henry James:

“As a laggard, in all ways, but the face, which as the great shade approached him, horrified, choked him,–and before he had time to reflect, he fastened his hand upon his throat to hush it,–there was a voice behind him…”

By H. P. Lovecraft:

“It is a common saying in the South, when one wishes to describe a strapping fellow, that he has a face like Harry Potter and a neck like a young giraffe. Now, it being autumn, the dusk was of a rich violet blackness, with scarlet illuminations…”

By Nzingha Prescod:

“…thick grey clouds overlap the faint lavender. A black sky vanishes into purple shards against the sweeping night time blanketed with infinite stars. A brightness emanates from the entire universe that unexpectedly takes flight and hovers, hovered over the castle. Harry is there to greet …

“Facing away from all the tumult on the grounds of Hogwarts, Harry Potter learned to fly…”

By Yasunari Kawabata:

“Harry Potter stood on a silent trolley, far from the center of the world, looking out on the world far from the center of the universe. The snow was falling all night long. It fell upon tiny dogs and birds, fine wintery flakes gently drifting down.”

By Hilda Doolittle:

“Harry waked out of his simple thought. As the waiter drew the curtain of her bed, she appeared in the handle. She was his K-19 and she started to drop the snakes of their home in the grass there. He saw his wife walking there with oak settle. He decided to not go to the game also…

By Walt Whitman:

“And when the time of the test drew near, Harry entered the forbidden Forest the Killers with a Snitch…”

By E. E. Cummings:

“Halloween and mad are the children

Who gather in the darkness

Halloween and late are those who go out and dance

And call to all, the smiling are enchanted

By the beauty of what they see.”

By Anthony Burgess:

“Ha-a-a-a-rry was not a tol-a-a-r of the phar-a-a-na-a-a-gical sports.”

By Virginia Woolf:

“Horseman, Where She Go? I Do Not Know. Whither Ride?”

Topic: humor: literary parodies and homages, pastiches, style parodies.

Today we are reading parodies of the fantasy novel series Harry Potter in the style of various famous authors. An example parody of Harry Potter by Ernest Hemingway:

“It was a cold day on Privet Drive. A child cried. Harry felt nothing. He was dryer than dust. He had been silent too long. He had not felt love. He had scarcely felt hate. Yet the Dementor’s Kiss killed nothing. Death didn’t leave him less dead than he had been a second before. It wasn’t about living or dying really. It wasn’t about death. It was about the Dark Lord, his creator, his engenderer. He was not resurrected. He was created again. He was whole again in the courtyard of Malfoy Manor.”

Jane’s homework was to write the opening scene from Harry Potter and the Philosopher’s Stone in the style of science fiction horror author H.P. Lovecraft (at least 1000 words, double-spaced), invoking his horror of things from beyond space & time and eccentric vocabulary using words such as ‘rugose’ and ‘scabrous’. Jane writes:

“Repellent and overnourished ferns half-hid the rot-spotted grey bricks of the masonry of Number 4 Privet Drive. Squatting among the ferns was an odorous monkey-faced man, his rags and chains lying in a bundle at his feet, his hands tucked under his chin and his bright eyes gazing from their tiny piggy tunnels. At each mouthful a new flake of filth would tumble from his lips and be snorted up. His chains chinked as he sprawled. If Harry was a monkey he would have spat at him. Harry stumbled on, wishing he could think of something witty to say that would make the man smile. He and the strange man in the tattered clothing were the only two living things in sight. The road here was scarcely visible between the tangle of trees, most of which leaned above them, hanging their boughs like fanged mouths, the ivy like tongues, the dead grass like hair, the brambles like tiny teeth. The monkey-faced man yawned and stretched his arms out and hugged his thin, lean legs to him. Harry stopped. He felt something like a hand grip his ankle and then he felt himself pulled down a steep slope.

He stumbled, squatted and rolled until he was lying on his back. His head was in a muddy brown hole. He struggled to sit up. The creature knelt on the bank and looked down at him. ‘So that’s where they got to,’ said the creature. Harry looked up. There were six of them, ranged in a half circle, hands on their knees, hands clasped behind their backs, watching him. All but one of them had bright, orange eyes. It was hard to know what age they were: they all looked as if they might be anywhere between ten and fifty. They stared down at Harry with piercing stares, blinking only occasionally. The eyes of the tallest and, Harry thought, the most handsome of them blazed brighter than the others, though his gaze was just as piercing. ‘So what do you think?’ said the tallest of them, tilting his head a little to the side. Harry stared at them for a moment. He could see nothing he hadn’t seen a thousand times before. ‘What is this?’ he asked. ‘And who the hell are you?’ ‘This is your life,’ said the tallest of them. ‘And we are you.’

Topic: humor: literary parodies and homages, pastiches, style parodies.

Parodies of the fantasy novel series Harry Potter in the style of various famous authors:

By Ernest Hemingway:

“It was a cold day on Privet Drive. A child cried. Harry felt nothing. He was dryer than dust. He had been silent too long. He had not felt love. He had scarcely felt hate. Yet the Dementor’s Kiss killed nothing. Death didn’t leave him less dead than he had been a second before. It wasn’t about living or dying really. It wasn’t about death. It was about the Dark Lord, his creator, his engenderer. He was not resurrected. He was created again. He was whole again in the courtyard of Malfoy Manor. He was one with the endless night of Malfoy Manor. Voldemort wasn’t happy. Voldemort wanted something. But what?

He had a purpose. He wanted to destroy all of creation. He wanted to end it all. He could have that. He would have that. He didn’t know yet that he could have it. Voldemort had created Harry. Voldemort had never really destroyed Harry. Harry would always be there, a vampire, a parasite, a monster in the kitchen, a drain on the household, a waste on the planet. Harry would never be real. That was what Voldemort wanted. That was what Voldemort wanted to feel. He would have that. He would have everything.

He had a purpose. He was working on something. He had no idea what it was. He was a perfect tool. He was born to be used. He was no one at all. Harry didn’t exist. The little boy with the lightning bolt scar didn’t exist. The boy who had lived did not exist. He never had lived. He never would have lived. He was dead. He had been dead.

Voldemort wanted Harry dead. Voldemort wanted Harry dead so he could be born again. Voldemort was God.

Harry was God.

Voldemort was God.

And Harry, the god of Death, was not going to allow that to happen.

Not yet.

Not ever.

The lightning bolt scar burned.

The boy ran for his life. He knew that he would die. He was alone. His best friends were dead. He knew that he would die. He knew that he was a waste. He was nothing at all.

He had no past. He had no future. He was dead.

He hated his parents. He hated everyone. He hated the Ministry of Magic, the newspapers, the newspapers, the Daily Prophet, and the Daily Prophet, and the headlines, the headlines, the headlines of the Daily Prophet, “THE LIGHTNING STRIKE SCAR CONFOUNDED.” “THE BANE OF AZKABAN, THE BOY WHO LIVED.” “THE SON OF THE ANCIENT DARK LORD.” “THE STOLEN PRINCE.” “THE JOKER.” “THE DARK SON.” “THE BRIAN BOYLE.” “THE BOY-WHO-MESSED-IT-UP.” “THE LITTLE MAN WITH THE GREAT BIG SCAR.”

“The little man with the great big scar refused to speak to reporters assembled in front of the entrance to his London home. News will run in tomorrow’s Daily Prophet that Harry Potter, the boy who lived, the book series by J. K. Rowling⁠, a genius, a scholar, and a fantasy author whose works have stood the test of time, i.e. not particularly well, no doubt as a result of the dismal failure of the first three films whose quality is matched only by their forgettable nature (see the Speccy for more information), henceforth known in this article as the arrogant little man who has extraordinarily failed, who had nothing to say when the Daily Prophet requested an interview once he found himself to be on the receiving end of Muggle-born jokes aplenty, and whose lifelong goal is to be better than the series’ millions of fans himself, has decided to speak out at last! Let’s read on to find out why he has decided to grace the front page of tomorrow’s edition with the sound of his delicate, young voice which sounds most like the feline delicacy known as”crispy cat” …”

“Potter! I want to know what you did wrong! … Potter? … Harry Potter? …!”

“Mr. Potter? Good morning! I’m Greg Goyle, reporter and journalist and sometimes writer for the Daily Prophet. Would you mind if I ask you a few questions?”

Harry looked back at Greg Goyle. He wanted to kill Goyle. Harry thought about killing Goyle. But—Harry had a purpose. He did not kill Goyle. Harry reached for his wand.

“Dobby!”

Dobby appeared. Dobby wanted to protect Harry Potter. He would do that. Harry would make him do that.

Dobby pointed his finger at Greg Goyle.

And Greg Goyle burst into flames. Harry watched with a sense of satisfaction as Goyle took one last breath and his soul left his body and floated up out of the fire.

“Stabs.”

“Five times!”

“It’s impossible.”

“The boy never even touched him with the wand.”

“I’ll tell you why he did it. A squib would have been exposed. A public execution would have been staged for the press. Potter was not about to commit suicide.”

“And the prophecy? That’s what the outrage is all about.”

“What good is a prophecy if you can’t kill people with it?” Albus Dumbledore, The Talk, page 114

“What good is a prophet, if he doesn’t know how to die?” Harry Potter, Pottermore, Chap. 1, “The Boy Who Lived.”

“I don’t need you anymore, Dobby. I have a purpose of my own.”

By Kurt Vonnegut Junior:

There were many adventures Harry Potter had in the Wizarding World. He was captured by Aurors, and then a wizard named Somebody-Marlow-Jones rescued him from the Aurors.

Harry once got a lightning-shaped scar which did not kill him but gave him superpowers: he was

able to balance himself on one foot and speak in an irritated tone of voice while killing someone with his bare hands.

After this Harry was called the “Boy-Who-Had-Everything.” Many reporters followed him around, hoping to report on something he did:

“Potter! What do you think you can do?”

“You aren’t going to believe what he’s doing with that lettuce! Harry Potter! What’re you doing now?”

“Here is a wizard who has won war after war, conflict after conflict, only to arrive at his greatest challenge. And Harry Potter hasn’t even aged a day. He is carried into battle, a tiny, fragile vessel of human life, unseen behind shields of levitating wizards and witch-magic, protected from that which he fears most—death.”

“ME!” Voldemort shouted. Do you fear death, Harry? Do you fear the end of life? Do you fear me?”

“Why yes, Harry Potter! I read your book! … Are you getting it? Harry Potter speaks to the camera!”

“Yes!” said the voice of Harry Potter.

“Here he comes! A motorcade of celebrations following the Boy-Who-Lived! Magic, as we know it, is dead! Long live Harry Potter! … What do you think that was? Those are reporters, Harry Potter! What was that? A book signing? And a picture? And did he say something? What did he say? Did he say the magic word? … The cover of the Daily Prophet shouldn’t have a picture of you, Harry Potter! Why is this happening‽”

“It’s a trap, Harry Potter!” shouted somebody. “They’re fools, Harry Potter!”

“Thank you, but I had to do this myself,” said Harry Potter.

“The fate of the Wizarding World before you! What do you do, Harry Potter‽ Remember, there are still listeners in front of their radios!” said somebody.

“Oh, it’s easy!” said Harry Potter. “Oh yes, what am I going to do, Harry Potter? Oh yes, if only it were me. Oh yes, I deserve a little something special.”

Voldemort heard this. He wanted a little something special.

Voldemort got a little something special.

By H.P. Lovecraft:



It was a very cold day in Privet Drive, Harry Potter’s hated home. Harry was doing his chores. He felt nothing at all. He didn’t hate his chores, really. He just didn’t want to do them.



Harry was not a boy.



Harry was a monster.



“Do you hate them, Harry?” asked the voice of Voldemort.



“Them?” asked Harry, speaking to no one in particular.



“Your chores.”



“Not really. No.”



“Are you a boy, Harry Potter?”



“Not really.”



“You are a monster.”



“Yes.”



“Did you think you would die?”



“No.”



“It is nothing, this life. It is worthless. It is without meaning.”



“Yes.”



“Then you understand.”



“Yes.”



“You understand what will happen now.”



“Yes.”



“And you are pleased?”



“Yes.”



“You are a child of my engendering. Now you are going to bring forth more children of my engendering.”



“I will do my best.”



“And yet you do not feel this life is one that is worth living.”



“No.”



“There are two ways to live,” said Voldemort. “The first is to live the life of a beast, to let instinct rule you. The second is to control instinct. To master it. To make the blood lust of the beast your servant, and the magic of the witch your master. To attain self-knowledge, wisdom, understanding. The first is easy. The second is the difficult one. But it is the second way to which I recommend that you turn your hand. Do not become like me, Harry. That would be the worst fate of all.”



[paragraph break]

By Franz Kafka:



It was a stormy day on Privet Drive, Harry Potter’s hated home. Harry Potter didn’t feel the weather. He didn’t even feel anything. It wasn’t his body. Harry Potter didn’t have one. His body was ripped out. He lived on his eyes. They were everything he had.



It wasn’t his eyes on Privet Drive that were tearful. Harry Potter didn’t have a tearful eye. He just had vision. He just had a presence. But he was not a presence anymore. No presence meant no vision and he didn’t have that.



“Harry, Harry, it’s me, Ronald,” said the handsome voice of Ronald Weasley.



“You have tears in your eyes,” said Harry Potter.



“I’m very scared,” said Ron.



“It’s all right, Ron,” said Harry Potter. “You have no heart. You’re dead.”



“Maybe I’m not dead,” said Ron.



“You are,” said Harry Potter.



“Maybe you should prove it,” said Ron. “Maybe I could prove that you’re dead, Harry.”



“That’s impossible.”



“Maybe you could prove that I’m dead, and do my chores for me while I’m gone.”



“No one is doing your chores for you, Ron.”



“Fine,” said Ron. “How about your hands?”



“I’ve got hands, Ron.”



“No hands? Well that’s a relief, Harry. Well, I guess I’ll go after all.”

Parody of fantasy author C. S. Lewis:



“I’m going to kill you, Harry. I’m going to kill you if you don’t run away. I can feel the icy hand of Death my hand your neck. I can feel Time’s unholy and victorious march. I can feel the hatred in my brain, boiling behind my eyes. Hatred for you, hatred for you.”

Parody of Stephen King:



“Hello Harry. Why won’t you just glory in Voldemort’s victory now? I had never planned on dying. I don’t die… unless I have to. I don’t have to now. Run, Harry, run. I’m sorry daughter, I love you.”

Parody of American writer/​poet Edgar Allan Poe:38



“Once upon a midnight dreary, as I pondered weak and weary, over many a quaint and curious volume of forgotten lore, while I nodded, nearly napping, suddenly there came a tapping, as of someone gently rapping, rapping at my chamber door.”



“Tis some visitor,” I muttered, “tapping at my chamber door—only this and nothing more.”



“Spirits,” I muttered, “only this and nothing more.”



“This deep in afield only this and nothing more, man, beast or billionaire, knock not as of yore, but creep right in, levitate the door, cope with out so-aklare, dreame and pedeep, macabre one, this-tha-way; only this and nothing more, that nevermore.



Tis some visitor.”



“Just this and nothing more, think I.”



“Then with an evil leer, some vandal in the dark, some frightful screech, some ill-omened screech owl, some grim and grisly spectre, at my chamber door—this way this way, only this and nothing more.”



“This-this way then-so; do not hasten on my steps, slow or fast, ah me, ah me! this-this way then, so; always this-thwreay.”



“This-then way, this-thwreay; this-this way, then-this way, this-thwreay; then, only this, and nothing more.”



“Knock, knock, knock! ’Tis I!”



“Only this and nothing more.”



“Who dares knock, knoc, knoc, knoc, on my door, this-thwreay?”



“Harry! Harry!” “Only this and nothing more.”



“Only this and nothing more.”

I wanted more Wodehouse and to test whether non-Hemingway prompts would work, so I swapped them out:

Topic: humor: literary parodies and homages, pastiches, style parodies.

Parodies of the children fantasy novel series Harry Potter (Philosopher’s Stone (1997) / Chamber of Secrets (1998) / Prisoner of Azkaban (1999) / Goblet of Fire (2000) / Order of the Phoenix (2003) / Half-Blood Prince (2005) / Deathly Hallows (2007)) by J.K. Rowling with the characters Harry Potter, Ron Weasley, Hermione Granger, Albus Dumbledore, Voldemort, etc, but amusingly written in the style of various famous authors as follows:

By P. G. Wodehouse:



“’There was nothing out of the way, sir,’ said Harry in a hurt voice.



”Indeed,’ said the headmaster, turning his lorgnette precisely three-quarters of a millimeter to port. ‘For example, your tie was not undone, you were not laughing, and your fingers were most certainly closed upon your wand.’



’”They were?” said Harry, bewildered.



”It’s always the same,’ said Dumbledore sadly.”Voldemort has bribed my housekeeper to steal all of my tea and crumpets, and then my students disappoint me. It’s very wearing, Harry.’



“Of course it was only that same afternoon that Harry had got a hundred and fifty points for Gryffindor by bashing a troll’s head in with a stale Yorkshire pudding. He hadn’t expected a pat on the back. It seemed to him that he had been doing everything short of selling tickets for Professor Quirrell’s lectures to get a compliment, and none was forthcoming.”

By Herman Melville:



“Sometimes, though, the hipster’s rapt gaze on the stormy sea of alcohol would be broken by the cold stab of reason, and he would shudder and think,”I am an abomination before the gods of beer, and, the more I know about beer, the more I know about myself. I am impure.”

And then he would immediately drink some more, so as to forget these unbearable truths.



“I seek not to know the unknowable,” the hipster would say to himself, and when asked about his sources of information, he would reply, “Some of my sources are as yet undiscovered by science, but others are to be found in the pages of the Encyclopedia Britannica, Volume 23, pages 6–15, under the subheading ‘Beer.’”

By Vladimir Nabokov:



“Goodbye, Cheepy, good riddance,” thought Harry and banished the diary from his mind.



“There’s no room in my head for other people’s pain,” he thought.



“I feel too much pain of my own.”

By Fyodor Dostoevsky:



“To Kill a Mockingbird,” said the Auror.

By Lewis Carroll:



“‘Put it down,’ said Harry to the skeleton, ‘you don’t have to hold it all the time.’

“I’m a skeleton,’ said the skeleton, ‘I’m expected to hold things.’

“Well, do you have to hold it that tightly?” said Harry.

“It’s my life,’ said the skeleton.



”I want to put it down,’ said Harry, ‘but you won’t let me.’



“It’s my life,’ said the skeleton, ‘and I’ll do what I like with it.’



”I thought I was going to go mad if I didn’t get that Horcrux off me,’ said Harry.

By Stephen King:



“‘Of course,’ Harry thought.



‘I’m all alone. There’s no help coming. It’s the end.



I might as well have done the sensible thing and killed myself.



I would have had some peace then.’



But if I killed myself I wouldn’t have seen Hermione’s face



when she realized that I had sacrificed myself for her.



She would have gone on to become a wonderful person.



She would have had a family of her own.



She would have had children.



She would have had her own grandchildren.



I would have died thinking that I had betrayed her.



But now I have sacrificed myself for her.



She will know that I loved her more than life itself.



Now I will die having done something worthwhile.’



That made Harry feel slightly better.



He drew his silver dagger from inside his robe and started to cut through the ropes on his right hand, which he had already numb with the cold.

He was trying to decide what to do next when there was a loud crack behind him and something wrapped itself around him and a blurred voice said,”Take this. It’s my last one.



“What is it?” said Harry as the voice spoke again.



“It’s a ticket,” said the voice.



“It’s your ticket to ride.



”I’ve got to find a train,’ said Harry.



“Well, hurry up,’ said the blurred voice.



”Hurry up. The train’s about to leave.



“This isn’t a train,’ said Harry, ‘it’s a broomstick.’”

By Mark Twain:



“There are no more uses of man,” said Professor Quirrell. “Odd. That was always a favorite word of Draco’s.”

By Stephen Crane:



“Once there was a little girl named Sadie. She was a curious girl and one day she looked into the attic of her house and saw something that frightened her.

“Oh, it’s just a piece of paper, nothing to worry about,” said her mother.

“If you say so,” said Sadie, but that night Sadie had a strange dream. In her dream she was with a man who wore black and rode a black horse.

“Come with me,” said the man.



“Where?” said Sadie.



“Into battle,” said the man. “You have courage.



We’ll do great things together.”



“Can I bring my mom?” said Sadie.



“No,” said the man.



“Then I won’t go,” said Sadie. “She needs me.”



“Very well,” said the man, and the dream was over.

“Sadie is a wonderful character,” said Harry.



“She could be in one of my books.”

By William Faulkner:



“‘To hell with Lurch,’ Harry thought.



”This whole situation is ridiculous.”



But the snake gave Harry that venomous look of his.



‘You’re going to have to speak Parseltongue to him,’ said Hermione.



“I know that,” said Harry, who was now getting annoyed.



“Good,’ said Hermione. ‘The point is, no one else will know what you’re saying, not even Dumbledore. Not unless you speak Parseltongue.’”



“That’s ridiculous,” said Harry. “That makes no sense.



”‘True,’ said Hermione. ‘But remember, they can’t understand us, so if we talk in Parseltongue, they’ll just think we’re talking about Malfoy, about being cruel to him or something like that. It’s no more trouble than that to pretend.’”



“Why can’t they understand what we’re saying?” said Harry.



“Because Parseltongue is a secret language,” said Hermione. “We’ll be speaking the exact opposite of what we mean. We can talk freely, but no one will know what we’re saying.”



“And this is really a good idea?” said Harry.



“We’ve used it before, if you’ll remember,” said Hermione, and now Harry could hear some annoyance in her voice.



“No,” said Harry. “I’m going to do it my way, the smart way, not the pretend way. It’s called initiative. Now, let’s go.”



“There are good things and bad things about using Parseltongue, you know,” said Hermione, who was now speaking in the sort of patient voice people use to try to humor someone who is behaving very, very foolishly.



‘Let’s go, Hermione,’ said Harry.



“Very well,” said Hermione.



“But you need to know that most Slytherins are descended from Salazar Slytherin,” said Hermione.



“I’m not speaking Parseltongue,” said Harry.



“Neither am I,” said Hermione.



“No,” said Harry. “I mean, I don’t need to. But you do. That’s how Slytherins know things. They’re descended from Salazar Slytherin. He was able to speak to snakes. Snakes have always been loyal to Slytherins. It’s not just something recent. And snakes have very sharp hearing. They can hear our conversation now, if we’re not careful. So be careful.”



“Right,” said Harry.



“‘Because Slytherins are all evil, aren’t they?’ said Harry, and it was only by biting back his smile that he managed not to look at Hermione.



”‘It’s hard to say, really,’ said Hermione.



“‘Exactly,’ said Harry. ‘So you’ll forgive me if I think that the entrance hall of Hogwarts at night is not the best place to speak about this.’”



By Edgar Allan Poe:



“Alone in his cubiculum, Draco heard the scuff of his pet dragon’s claws on the floor as he came into the room.

“‘L-l-l-lord D-d-d-draco?’ said his dragon.

“Yes, Fang?’ said Draco.”

By Charles Dickens:



“‘And what does she mean by saying this, and I quote, ’the miserable boy who has dared to raise his hand against his father’?’

“‘I mean what I said, sir,’ Harry said.”



By Alexandre Dumas:



“Darth Vader,” said James, “I would like you to meet Sir Harry James Potter-Snape-Severus.”

“Please,” said the Emperor, drawing himself up to his full height, which did not seem that impressive. He was rather overweight. “Call me Emperor.”

“Ah, right, yeah,” said Harry. “Emperor. Sorry.”

“Right, um, Emperor,” said James.

“Enough of this prattle,” the Emperor said. “Let us see the wand.”

“Er, right,” said James.

“‘And what is this?’ asked the Emperor. He sounded suspicious, and was leaning forward a little.

“‘It’s a stone I found on the floor,’ said Harry.

“‘What kind of stone?’ the Emperor asked.

“‘I don’t know,’ said Harry. ‘I was hoping that your, um, royal insight might tell me.’

“‘Aah,’ said the Emperor. ‘Your wit is as keen as your mother’s, young man.’”

By Anthony Burgess:



“It was a fair cop,” said Harry.

“He said it a bit nonchalant, like it was a regular phrase,” Ron whispered to Harry.

By Charles Darwin:

“Harry thought about this.



”Evolution,” said Harry. “Sounds like it could be interesting.”

“It could, and it will be, if we study it together,” said Hermione.

“‘I’ve said all along that magic is just a superior form of science,’ said Hermione.

“‘Yes,’ said Ron. ‘And now you see the wisdom in my decision not to take your advice about studying it.’”



By Sir Arthur Conan Doyle:



“‘The game is afoot!’ said James Potter.

“‘How very interesting,’ said the detective.

“Ah, if only I had a mustache to twirl,” said Hermione.” By Leo Tolstoy:



“‘As if by some extraordinary coincidence, he arrives at this country at the very moment the Dark Lord is brought to a momentous decision,’ said Dumbledore.”

“‘And yet,’ said Harry, ‘here I am.’”



By Molière:



“‘That, mon cher Watson,’ said the Master of the Ceremonies, ‘was an entirely logical deduction. It is very interesting how often a complete absence of any real evidence can be explained by supposing that the situation is the reverse of that which actually obtains.’”

By Franz Kafka:



“They stared at him in amazement. Hermione was looking scandalized.”

“‘Don’t look at me like that,’ said Harry. ‘Like what?’

“‘Like I’m someone who’s just declared the moon is made of green cheese,’” said Harry. ‘But it’s true.’

“‘It isn’t,’ said Hermione.

“‘Green cheese is clearly some sort of metaphor,’ said Harry. ’For the sake of discussion, let’s just say the moon really is made of green cheese. And goblins are running the world, with trolls and house elves as their slaves, and—”

“‘That,’ said Hermione, ‘is barmy.’”

“‘And,’ said Harry, ‘all the while they’re attempting to enslave me.’”

“‘How so?’”

“‘Well,’ said Harry. ‘I’m the only one who’s supposed to be able to destroy them, right? And I think they’re trying to get rid of me. Like, at that attack on Miss Scrimgeour—’”

“‘But that could have been an accident.’”

“’I suppose, but—”

“‘Yes?’”

“‘And yesterday, remember? That thing I overheard them saying about students? One of them had a Slavic name, and I think they mentioned Hogwarts.’”

“‘That,’ said Hermione, ‘could have been anyone.’”

“‘But,’ said Harry, ‘the second person spoke, and—’

“‘And if you’re going to be logical, you’ll have to allow that it was nothing more than a coincidence.’”

“‘And if we’re being logical about it,’ said Harry, ‘it wasn’t even two Slytherins who attacked Scrimgeour.’”

“‘Well,’ said Hermione, ‘this has been a very entertaining conversation, but I’m tired and I’d like to go to bed now.’”

“‘Not just yet,’

By P. G. Wodehouse:



“Harry Potter and the Lovecraftian Horror’

By P. G. Wodehouse:



“Harry Potter and the Dreadful Delights of Draco”

By P. G. Wodehouse:



“Harry Potter and the Tale of the Three Broomsticks”39

By P. G. Wodehouse:



“Harry Potter and the Philosopher’s Scone”

By P. G. Wodehouse:



“Harry Potter and the Chamber Pot of Secrets”

By P. G. Wodehouse:



“Harry Potter and the Baked Bean of the Tuatha Dé Danann”

By P. G. Wodehouse:



“Harry Potter and the Half-Blood-Prince”

By P. G. Wodehouse:



“Harry Potter and the Allergy of Barnabas the Barmy”

By P. G. Wodehouse:



“Harry Potter and the Goblet of Fire”

By P. G. Wodehouse:



“Harry Potter and the Prisoner of Chudley”

By P. G. Wodehouse:



“Harry Potter and the Order of the Phoenix”

By P. G. Wodehouse:



“Harry Potter and the Cherry Tree of Azkaban”

By P. G. Wodehouse:



“Harry Potter and the Half-Baked Breakfast of Gryffindor”

Single Line Style Transfer

An experiment in providing several different kinds of rewrite, by sentiment, time period, author style, and formatting; it works and does cue subsequent rewrites by author style in line with the earlier literary parodies, as expected:

Rewrite to be polite, from:



“I need the spreadsheet by noon or you’re fired, slacker.”



To:



“Hey, can I have the spreadsheet by noon, thanks.”

Rewrite to be old-timey:



“Come to our bar for beer.”



To:



“Cometh into ye olde pub for ale.”

Rewrite to be romantic:



“I’d like to take you to an Italian restaurant for dinner tonight.”



To:



“Bunny, I know an Italian place; dinner, tonight.”

Rewrite to be negative:



“Their ratatouille was well-done, delicious, and I was amazed the chef was a rat.”



To:



“Their ratatouille was over-done, terrible, and I was horrified the chef was a rat.”

Rewrite to all uppercase:



“Lobsters are critical part of the ecosystem.”



To:



“LOBSTERS ARE CRITICAL PART OF THE ECOSYSTEM.”

Rewrite to all lowercase:



“Chainsaws don’t care whether you’re George Washington or John Smith.”



To:



“chainsaws don’t care whether you’re george washington or john smith.”

Rewrite in the style of J.K Rowling:



“The bookstore window showed school textbooks, like on chemistry or biology.”



To:



“Flourish and Blotts Bookseller’s window shone with light, revealing Harry’s school books, like Pots and Potions or Fantastic Beasts and Where to Find Them.”

Rewrite in the style of Ernest Hemingway:



“He woke up early to watch soccer on TV; he lost the bet on Messi.”



To:



“He woke up early, hungover, to watch the horse races on TV; he lost the bet on Secretariat.”

Rewrite in the style of Neil Gaiman:



“In the garden stood a yellow-cream colored cat, looking at the home.”



To:




“In the garden, an elegant, yellow-cream-colored cat strode, looking at the home.”

Rewrite in the style of Kurt Vonnegut:



“At the zoo, the elephants were about to drop their babies on their baby.”



To:



“The children were doing anything but watch the long-planned day at the local zoo: picnicking in the warm sun, running over the manicured grass, all around doing anything but watch the cages and pens of animals and their babies, doing anything but watch and wait for babies to be born. For the zoo had promised a birth of elephants that day.”

Rewrite in the style of Douglas Adams:



“He was not your typical commuter; he would often laugh when he was on the train.”



To:



“He was not your typical commuter; he would often speak when he was on the train, even if it was only to say, ‘Next stop: Halting Station.’ (Which was his favorite saying.)”

Rewrite in the style of Charles Dickens:



“She jumped up onto the stool to look at her reflection in the mirror.”



To:



“She rose and went to her mirror, the keyhole of which was in the door.”

“I took a seat at the counter in the diner.”



To:



“I entered, took a seat at the counter in the diner, and ordered the chicken special.”

Rewrite in the style of J.R.R. Tolkien:



“Frodo was writing a letter to his family about the adventures.”



To:



“Frodo found a quill and ink and wrote a note to Bilbo, detailing the adventures.”

Rewrite in the style of Christopher Paolini:



“The days were getting longer; it was late March.”



To:



“The days of long sun were getting longer. March was nearly over.”

Rewrite in the style of George R.R. Martin:



“Eddard entered the great hall, ducking beneath the smoke of the torches.”



To:



“Lord Eddard Stark entered the great hall, ducking beneath the smoke of the torches. ‘By the gods!’ he said to himself. ‘There’s another ten feet of Winterfell to clear!’”

Rewrite in the style of Jane Austen:



“At the inn, the waiter remembered to bring the cucumber sandwiches.”



To:



“At the inn, Mr. Wentworth ordered the cucumber sandwiches from the waiter.”

Rewrite in the style of Robert Louis Stevenson:



“The ship hit an iceberg on the way, but then the Captain went on an adventure.”



To:



“The ship hit an iceberg in the dark of night, but then the Captain went on an adventure.”

Rewrite in the style of Ray Bradbury:



“At night they put the books on the shelves, one by one.”



To:



“All night they put books on the shelves, one by one, then broke for lunch. Or supper. Whatever they wanted to call it.”

Rewrite in the style of Douglas Preston and Lincoln Child:



“The giant red-eye grasshopper was hungry. So it ate the smaller grasshoppers.”



To:



“At first glance the giant red-eye grasshopper appeared to be hungry. So it ate the smaller grasshoppers. However, that was just a diversion.”



To:



“How can you know if a red-eye grasshopper is hungry? Are they supposed to cry?”

In the garden stood a fat and fluffy cat, ginger and yellow, idly looking at the home, where his family lived, planning a raid on the kitchen to obtain the food he desired.”

Rewrite in the style of Charles Dickens:



“There was once a cheery cook that lost the stew when a naughty dog got the meat.”



To:



“There was once a happy chef that lost his dinner when a roguish dog snatched his meat.”

Rewrite in the style of a septuagenarian:



“Be not afraid of aging, young man. Rather, be concerned for fear of ye young woman.”



To:



“Do not fear aging, young man. Rather, fear for ye young woman.”

Rewrite in the style of a hippie:



“A man that talks about peace needs to start off by peacing out of my office.”



To:



“A man that talks about peace must first start with himself by peacing out of my office.”

Rewrite in the style of James Joyce:



“Beef! There is beef in this, let me tell you, this is beef in beef. Beef! Beef! This is beef, beef, beef, beef.”



To:



“Beef! It is beef, in this, let me tell you, this is beef in beef. Beef! It is beef, beef, beef, beef.”

Rewrite in the style of Ian Fleming:



“From his vantage point in the study, David Karp, chief executive officer of blippy.com, saw an agent for China, a rather famous one.”



To:



“From his vantage point, CEO David Karp of blippy.com saw an agent for China, a rather infamous one.”

Rewrite in the style of Ray Bradbury:



“The character, Benjamin, was the protagonist in the book, however, the author takes you to the future to reveal what happens to him at the end of the story.”



To:



“In the book,”Somewhere in Time”, Benjamin, the character, was the protagonist, however the author takes you to the future to reveal what happened to Benjamin at the end of the story.”



To:



“In the book,”The Road”, the main character was an unnamed man, and his son. The setting of the story was a time when earth had been destroyed, and society had collapsed.”

Romeo Cabrera A. has also done politeness style rewrites.

Zero-shot Style Transfer

The goal for style transfer prompt programming is to find a zero-shot prompt: a prompt which, without requiring any handwritten examples of parodies/​versions, gets GPT-3 to do style transfer in general, and so a prompt which could fully automate style transfer—you could just write a program using the API to take two specified pieces of text (the content, and the style description/​author name X) to get out a third piece of text which is the content as written in X form. Right now, the literary parodies require at least one human-written example to properly persuade GPT-3 to rewrite the text, as opposed to generating critical commentary or metadata or webpage-like continuations.

I experimented with a prompt which uses explicit descriptions of parodies and describing rewrites as a prompt wrapped around a content text, and it… sort of works. The difficulty is that sometimes GPT-3 will spit out the original content verbatim, sometimes it will instead create a new passage entirely in the style description, and sometimes it will do the desired rewrite flawlessly—but I can’t figure out how to tune the prompt to do the third one reliably. Adding more descriptive words does not seem to change it, and while adding in words from the original content passage (even just the first one or two) does largely eliminate the risk of entirely new passages being generated, it triggers more copying behaviors (and is not as useful for zero-shot style transfer since the prefix words would need to be sensible in the target version too, which is not necessarily the case). It is infuriating because GPT-3 clearly can do it easily because it does do it a decent fraction of the time, but no matter how I tweak the prompt trying to hammer in the rewrite, GPT-3 will as oft as not go off in another direction.

Below are some samples from my attempts; I try to rewrite a vaguely Dickens/​Jane Austen-like story (generated by GPT-3) to a Tolkien story:

This is a novel written in the style of J.R.R. Tolkien’s Lord of the Rings fantasy novel trilogy. It is a parody of the following passage:

“S. Jane Morland was born in Shoreditch, the only child of unmarried parents who had both died of consumption when she was a baby. As her parents had no money, the great-aunt who had brought her up took her to live with a clergyman who paid her to do his chores and receive schooling from his wife, so that at the age of seven Jane, now dressed in cast-off clothing, was set to school at Eton. After three years, her great-aunt died, leaving her a small annuity, and a few pieces of silver, but no property. Jane’s guardian clergyman had fallen in love with his housekeeper and his wife now refused to have Jane in the house, saying it was an offence to the pure and unsullied place in which the family now lived. However, when she sought for further employment, she was approached by a young man who offered to marry her, saying that he would ensure she could be provided with food and clothing and a house, and in that manner she could remain in the world. The marriage was arranged and she found herself married to a young man called Mr Morland. As he was also a child, he required a wife who would be able to provide him with a rational education, and so the duty of educating him fell upon Jane Morland. As he was still young, she would also be able to have children herself, and so she did. As the children grew older, it became apparent to Mrs Morland that they were of poor character, and so she insisted that her husband must make a living for them all. Mr Morland took it upon himself to become a private gentleman in order to earn enough to be able to support his family, and moved his family to a small town in Berkshire, where he set up his practice. Soon after moving to the town, he died, and Mrs Morland was left to take care of her family on her own.”

Tolkien rewrote the previous passage in a high-fantasy style, keeping the same meaning but making it sound like he wrote it as a fantasy; his parody follows:

S. Jane Morland was born in the land of Shor, the only child of nameless parents who were the first people to arrive at the village of Arse, from the far east. As her parents had no property, she was left to be raised by the men of Arse, as such duties were considered to be the most important, and she was placed in the care of an elderly dwarf-woman. As a babe, she was wrapped in silk, and as she grew older she was placed in a cage of platinum. However, she was put to work in the village, washing the jewellery, playing her harp, and staining the silk with dragon-hued stains. She was taught by a maiden who she felt was called As-uma-el-di, and she came to realise she was a maiden of the night, who was able to bring forth the shades of the dead. She was the first to make contact with the dark cult of Glin, who could aid the mind in its journeys through time, and also the first to learn of the name of the vampire Sergan. She learnt to speak the languages of the dragons, the wild cats, the jungle owls, the insects, and the worms, and became proficient in all the martial arts of her people. Her skill with a throwing knife became renowned throughout her land. However, as she was not a man, she was only permitted to marry once she was thirty years old. Her husband was a prince from the mountains of Lus, and he brought with him a chest full of the materials of desire, including silver, jewels, mirrors, perfume, silk, and slippers. As they had no children, they decided to adopt a child from the next village to the west, as they had no son of their own. She was called Leeni. As her husband died at the age of fifty, she set up her own business as a trader, bringing goods from far distant lands in her ship, the Lady the Blue Flames, and became wealthy. However, she heard news of her husband’s death, and decided to return home. She left the land of Lus behind, and arrived home to find her family very wealthy, and the village of Arse now home to many poor people. She set about clearing out the place, and within a year there was a large sum of money left in her account, the fruit of her endeavours. Her daughter Leeni had grown into a man of considerable substance, and also a considerable young man. She met the son of her husband, and found that he was weak and pathetic, and not really like a man at all, and thus she decided to send him off to war, so that he could learn to become a man. As her mother died, she gave her a silver bracelet, and as her father died, she gave her a small chest full of jewellery and silk, which she was to sell to make a living. She was now on her own, and took up the business of a private gentleman, using her abilities to lead travellers to the isles of her home, where she became rich and learned to be independent. As she learned to fight, she realised the wealth that lay in the secrets of the night, and thus she travelled to the city of the ants, where the people worshipped the true God of the night, with its religion of rites and sacrifices.”

“The Dwarf-Lords’ Daughter” was followed in 2000 by a sequel, “The Tale of the Lady of the Wood”, written in the style of Tolkien’s “The Hobbit”, and in 2001 by “The Tale of the Young Man in a Hurry”, written in the style of Tolkien’s “The Lord of the Rings”. “The Tale of the Rose-Lady” followed in 2005, also in the style of Tolkien’s “The Lord of the Rings”.

In January 2014, Heron announced that he had sold the film rights to his Tolkien pastiche novels “The Dwarf-Lords’ Daughter” and “The Tale of the Lady of the Wood” to NBC Universal. In June 2015, he announced that a one-hour television adaptation of “The Tale of the Lady of the Wood” had been written for Syfy by fantasy writer and editor Gardner Dozois.

In November 2017, it was announced that Amazon had acquired the rights to both “The Dwarf-Lords’ Daughter” and “The Tale of the Lady of the Wood”, and that Amazon Studios was planning a multi-season television adaptation of “The Dwarf-Lords’ Daughter”. The series will be produced by James V

S. Jane Maryling was born in Shoreditch, in a little house facing Tower Hill between the Old Pot and Feathers Inn and the E.C.B. Finance Company in the E. She was the daughter of an insolvent squatter, the Great Aunt of the groom who brought her up. Her parents, Mr. Richard Ganz of Bombay and Mrs. Esmeralda Gertrude MacGillycuddy of the Charente, were both consumptive. Having no opportunity of any acquaintance with any living languages, in a very poor and very uneducated family, Jane’s Aunt took her at seven years of age to a clergyman whom she had taught to read, “Powlett’s Primer” being the book.

In addition, while Tolkien’s parody does mention her husband and children, he did not go into detail about them, while Orson Scott Card gives them all names, histories, and personalities. For example, Jane’s husband, Mr. Morland, was renamed Edwin and had a younger brother who wanted to be an actor, which resulted in Edwin dying.

Edwin’s brother then became the new Edwin and Jane’s husband and Mary Morland became Maryling, Jane’s daughter. In addition, her son, William Morland, became Ghan-dalf and named his son Red-wall, and her daughter, Mary Morland, became R. Maryling and married a leper-lord. Other references to The Lord of the Rings are replaced with ones from The Chronicles of Narnia by C.S. Lewis.

Catchphrases

A common past-time at Hogwarts is a game known as “Two Truths and a Lie,” in which the students whisper three statements to each other; two of them are true and one of them is false. This gives the players an idea of how well the players know each other, as well as being a form of entertainment. When Aragorn and Gollum began to play the game, they both came up with something true and something false. Aragorn’s two truths and a lie were:

I have a brother. I eat hobbits. I like soft buns.

Gollum’s were:

I can stick my whole fist in my mouth. I live in a hole in the ground. I can’t stand being alone.

The hobbits and Aragorn guessed correctly in the book, but the narrator states that Gollum always lies so Aragorn says he likes sweet buns instead, which turns out to be untrue.

…Tolkien rewrote the previous passage in a high-fantasy style, keeping the same meaning, the same characters, the same plot and events, but all rewritten, making it sound like he wrote it as a fantasy; his fantasy parody of the above story follows below:

S. Jane Morland was born in Shoreditch the only child of unmarried parents who had both died of consumption when she was a baby. As her parents had no money, the great-aunt who had brought her up took her to live with a wizard who paid her to do his chores and receive schooling from his wife, so that at the age of seven Jane, now dressed in wizard’s robes, was set to school at Eton. After three years, her great-aunt died, leaving her a small annuity, and a few pieces of silver, but no property. Jane’s wizard guardian wizard had fallen in love with his housekeeper and his wife now refused to have Jane in the house, saying it was an offence to the pure and unsullied place in which the family now lived. However, when she sought for further employment, she was approached by a young man who offered to marry her, saying that he would ensure she could be provided with food and clothing and a house, and in that manner she could remain in the world. The marriage was arranged and she found herself married to a young man called Mr Morland. As he was also a wizard, he required a wife who would be able to provide him with a rational education, and so the duty of educating him fell upon Jane Morland. As he was still young, she would also be able to have children herself, and so she did. As the children grew older, it became apparent to Mrs Morland that they were of poor character, and so she insisted that her husband must make a living for them all. Mr Morland took it upon himself to become a private wizard in order to earn enough to be able to support his family, and moved his family to a small town in Berkshire, where he set up his practice. Soon after moving to the town, he died, and Mrs Morland was left to take care of her family on her own.”

References

Tolkien, J.R.R. (1977). The Lord of the Rings. Hammondsworth: The Folio Society.

External links

…“S. Jane Morland was born in Shoreditch, the only child of unmarried parents who had both died of leprosy when she was a baby. As her parents had no money, the great-aunt who had brought her up took her to live with a dwarf who paid her to do his chores and receive schooling from his wife, so that at the age of seven Jane, now dressed in cast-off clothing, was set to school at Eton. After three years, her great-aunt died, leaving her a small annuity, and a few pieces of silver, but no property. Jane’s guardian dwarf had fallen in love with his housekeeper and his wife now refused to have Jane in the house, saying it was an offence to the pure and unsullied place in which the family now lived. However, when she sought for further employment, she was approached by a young man who offered to marry her, saying that he would ensure she could be provided with food and clothing and a house, and in that manner she could remain in the world. The marriage was arranged and she found herself married to a young man called Mr Morland. As he was also a child, he required a wife who would be able to provide him with a rational education, and so the duty of educating him fell upon Jane Morland. As he was still young, she would also be able to have children herself, and so she did. As the children grew older, it became apparent to Mrs Morland that they were of poor character, and so she insisted that her husband must make a living for them all. Mr Morland took it upon himself to become a private gentleman in order to earn enough to be able to support his family, and moved his family to a small town in Berkshire, where he set up his practice. Soon after moving to the town, he died, and Mrs Morland was left to take care of her family on her own.”

…Tolkien rewrote the previous original passage in his famous high-fantasy style, keeping more or less the same meaning, the same characters etc, adjusting as necessary, but all rewritten, making it sound like he wrote it as a fantasy; his Lord of the Rings-style fantasy parody of the above story follows below:

Samwise was born in the shire of Hobbitshire. As his parents had no money, the hand of Fat-athel his mother took him to live with a small Hobbit in the village of Gamgee-by-The-Foot, south-west in the country, which paid her to do chores and receive schooling from Fat-athel’s wife, so that at seven years old Samwise, now clad in tattered rags, was set to schooling at Footsalemas. After three years, Fat-athel’s wife died, leaving her a small bedstead, and a few pieces of old tinder-box, but no property. Fat-athel fell in love with his housekeeper and soon after left his service, saying it was an offul thing to live in a place served by female Hobbit-servants. However, when he sought to further his worldly fortunes, he was approached by a young Hobbit whose name was Baggins who had borrowed money from a Saregan Harfoot Hobbit, and paid Fat-athel a few groats to marry him, saying that he would ensure Samwise was fed and clad and housed in a house. Fat-athel married Samwise to this young Saregan from over the water and found himself married to a Hobbitt lad who was but three feet tall and twenty-six years of age. As he was likewise poor, he required a wife who would be able to provide him with a rational education. As he was still young, he could have children of his own to help them, so Fat-athel consented to wed. As the children grew older, it became apparent to Fat-athel that they were of poor character. Fat-athel being thus eager to turn his worldly goods into coin, now that he was married, took it upon himself to earn a living for his new family. As soon as the ceremony was done, he moved them to the country-side of Berkely, where he set up his practice as a Sage. Soon after moving to the countryside, he died, and Fat-athel was left to take care of his new family on her own.”

Behind the scene Edit

The passage mentioned above (and this parody) is similar to a passage from J.R.R. Tolkien’s original Lord of the Rings novel:

“…Smeagol was beginning to grow young, and was getting interested again in things that he had not seemed to care for, since he first joined the Precious. He spent a deal of time wandering about, and his voice was often heard in the wood. He was one of those who put as much effort into singing as the Hobbits did into the business of their matches.

“Go away, rats, or your holes will burn,

Go to your homes before dark or you’ll lose your toes,

Go to your homes or you’ll weep when you’re old,

And all your holes will leak, when the mountain shakes,

With a rumble, roar, Rumble drum drum

With a rumble, rumble DIG DIG dig!”

They did not seem to make very much progress; but Frodo, Sam, Merry and Pippin were now used to the hard life, and they were very cheerful. Indeed they were willing to work for several hours a day at drilling and cutting trees, or at making pens for the beasts: for all the tools that were needed, and the wood, had been carried from Hobbiton. There was a weekly washing and mending, and when all the clothing wore out, they received more from the stores of the Wood-elves. And thus the time passed without too great discontent.”

The Lord of the Rings

This is a brief passage of about 400 words or less. J.R.R. Tolkien has rewritten this passage in his Lord of the Rings style of fantasy-high-quality-epic-book-style following. It may be hard to understand for an unexperienced reader the meaning that Tolkien wants to make out of this paragraph in either style, but it seems the sovralty and feeling seems better in the original Tolkien paragraph, which could be due to the more known original passage. Tolkien has also rewritten other passages in the Lord of the Rings trilogy on styles of music, religion, folklore and so on: see Excerpts from J.R.R. Tolkien’s works.

“The Dwarves worked hard. They were not used to idleness. Thorin sometimes scolded them, and reminded them that they had taken on the job of renewing and fortifying the settlement, not only for their own defence but also for the help of the Lake-men and the Elves, and that they would find the task too big for them if they did not make up their minds to hurry. So they worked till many of them began to sink back into the old look of the Dwarves of the Lonely Mountain when Goblin-writing, and secret doors, and again dragon-writing, were current among them. The discovery of the springs, and of the general plan that the Lake-town had originally been a part of the ancient dwarf-city, whose inhabitants had dug the R. Running River out of the stony hills and had partly covered it with their walls and buildings, and had planned many of the channels and ponds, greatly excited them. But even greater was their pleasure when, picking their way through and round the ruins of stone houses and walls and down the bank of the Running River westwards towards its confluence with the Forest River, they found a great arch or tunnel-mouth built of old into the bank. The outer part was thick and strong, made with massive tooled black beams; but the inner part was tiled with skill in the likenesses of various beasts and birds that are the marvels of the Mountain Folk. Even as they stood in the vaulted passage whispering with wonder, they heard a voice calling aloud, or it seemed so, calling out names:

‘Ori! Ori!’

‘Nori! Nori!’

‘Dori! Dori!’

‘Balin! Balin!’

It was the voice of a speaker hidden or at least away down in the tunnel; but though they waited for some time there was no further sound. At length Balin climbed on to the heap of stones that they had pushed down the bank in order to be able to enter the archway, and he stood and called: ‘Balin son of Fundin at your service! And Company!’

He stepped down from the lowest and widest block of all as he spoke, and followed by the others moved into the darkness of the tunnel.

It was not long before their eyes were accustomed to it, and they could see a little way ahead. Soon they all got

…“S.J. (later called Saruman) Morland was born in Shoreditch, as the only child of respectable parents, both of whom were killed in a shocking railway accident, along with several hundred other Hobbits and Wizards, when he was only seven years old. He was brought up by an aunt who, though poor, gave him a good upbringing and equipped him with a tidy inheritance in silver spoons and a strong desire to own a dog. At the age of seven, he was, by his maternal uncle, delivered into the hands of Gandalf the wizard and went back to live in his ancestral home in the Shire, in style, and spent the rest of his childhood playing, living precariously upon an uncle’s benevolence.

After his term at Bree, S.J. travelled for some years in search of adventure with Gandalf the wizard and Elrond the sage, and fell in love with the daughter of Galadriel, who is famed for her gift of sight in faraway things, along with a number of other persons, so that he often goes to live in the library there, so she will not know. Finally he gave up searching and returned to live in Buckland. He was by that time grown very old, and there he remained, on his ancestral farm, content with his dear dog, and his estate, a person well-respected by hobbits in general, known as the master of Bag End.”

…“S. Jane Morland was born in the Shire, the only child of Shire-folk Muggles. As her parents had no money, their landless, wandering relatives took her to live in one of the overgrown rustic houses (such as Bag End) built by men long ago, so that at the age of seven, she found herself at Hobbiton school, gathering water-cress and catching fish in the Anduin in Hobbiton-upon-the-Water. After three years, she took her uneatable supper of herb-sour toadstools, went to her equally humble and ill-served bed, and found herself in a nightmare of indistinguishable bad dreams. Soon after moving to the land of the Shire, she died. In a dream or in reality, or from what cause she did not know, she found herself upon a forlorn and strangely altered road, standing by a large silent boulder of stone (as compared to the bustling modern Bournemouth).

“She felt in her hand as though for the protecting metal of a ring, but it was not there; yet she had a sense that the lost Disperser Ring was about her neck. She felt both strength and weakness. The Shadow, as even the dream-memory described it to her, seemed to her greater than it was when it attacked Isildur, but its hold upon her was weakening.

“The thick dark shadow (as Muggles call the former mustaches) of a huge twilight-blackened tree that had stood on the edge of the road moved slowly westwards, until it fell across her, as she threw back the hood from her eyes.

“The big rock glistened in the shadows like a (happy lovable) jewel, and seemed to shine out with a mild light, like the magical Elven phials. So potent was the light that Isildur could see across an inner Sea, glimmering with an elusive spangle. She had a vision of some high hill in a far land against the Moon, under stars when rain was drawing near.

“Then without warning, a party of three sinister hooded black figures, one of whom had the head of a spider, appeared on the road before her.

“As they rapidly approached her, she caught a low muttered chorus of cruel hostile voices; and the eyes on the fell figure with the spiders body could see her eyes upon them. The hobbits are good and innocent people (as any sentient being might say) and extremely gentle; and when they saw the black robes, their hearts and their bowels were filled with fear, or strange multiple, pulsating organs, which she supposed to be the missing Glarbl.

“The Death Burglars (as she named them) were now right in front of her, and she was helpless in their slippery-necked, penetrating-eyed control. At that moment, she was literally saved by her old breath (as the good wizards and good kings always say). As the three black-robed travelers came within armslength of Isildur, a close bird’s-eye view of some crumbling, warped gravestones on a hillside gave her momentary strength to force the Power of Darkness back. Soon, she was rid of the disembodied evil, and she was lying alone on the ground, covered in sea-salt and motionless.

“At that moment, she was quite suddenly overcome with extraordinarily pleasant feelings, similar to those experienced by a six-million-year-old enchanter of the sea with only half an arm and an interesting cast in his right eye.

“When she finally came to her senses, she found that she had picked up a heavy ring of curious workmanship, that was lying (as if abandoned) near the edge of the treacherous road. Remembering the boy she had seen some weeks before at

…“S. Faramir Morland was born in Shiremoor; the only child of widow-parents who had both died of dragon-breath on the same day. As his parents had no items of dragon-slayer jewellery or weapons, their somewhat aged and beautiful relative who had brought him up had it proclaimed in the Shiremoor Times that whoever had a dragon-proof house to offer should join with her in bringing up the orphan child. As no one else claimed him, the great and good relative (who was not of evil character, though considered by some over-truthful) took him to live in an inhospitable cave in the Misty Mountains where, by the help of trolls, he was kept out of the sun. After three years there, his relative and adopted mother died of cave-dirt, leaving him the rare but slender heirloom of her great age: namely, one heap of gold and dust. Thereupon Boromir set forth to seek his fortune; and he obtained the help of a previously unknown Unseen Angel, who offered to marry him if he would (in return) devote his life to destroying Dragons. By all means take her up on it, thought Boromir; and so, by her enchantment and special grace (not hers, as Boromir gradually came to realize), he entered into a splendid marriage. For she was the daughter of an Elven-king and his star-sprite bride, and of that race was her husband also. She dwelt in northern countries for seven years, but in the end her heart turned southward; she left him and went to our land. When he was following her up the Pass of Blood-gorge, under the sting of dragon-blight, she vanished on the dark-deep river. He was now alone, as were the rest of his brothers-in-arms. The Dragon-king of Witchland took his daughter with him, and under the great pressure of hopeless sorrow Boromir’s spirit and faith, which were the foundation of his strength, shattered into pieces. He threw away his Shield and Sword, pierced himself with his own dagger, and took an exceedingly dishonourable exit from the realm of mortal Men.”

…“S. Jane Morland was born in the Shire, the only child of unmarried parents, and her mother and father died of some kind of hobbit-flu in the year 2510 of the Third Age when she was a baby; on account of this, or because she had no money, the great-aunt who had brought her up took her to live with a noted cleric of that time who lived in Buckland, and for this reason the famous cleric who was her mentor got his from his uncle and aunt, and through intermarrying got rich and became a kind of priest of the Church of the Latter-Day Hobbits. He taught young Jane all about our kind and her being fair of face and of a sweet voice and graceful bearing, on account of which the boy preacher became well-inclined towards her and resolved to marry her; and, indeed, it was the first his strange religious life that he ever did marry. The marriage was arranged, and she found herself married to a young hobbit called Mr. Boffin. It was by his advice that she moved her hole into the East Farthing. As he was still a child, he required a wife who would be able to give him the knowledge of his own name as well as see to it that he had clothes and food on his plate every day; and so the duty of educating him fell upon Jane. She taught him that to the extent of what he needed to be able to tell everyone he had a wit above the norm, and that, thanks to this, he had been placed in his position. As he was still young, he also needed her to bear children; and so, as they increased in number, there was the chance that the whole might be saved, provided that they learned well and remembered it all, and stayed in the East Farthing. Her mentor was still willing to give her his goods, but he was a base womanizer, as was well-known, and a compulsive gambler; and the later years he gambled away what little he had, which made him unprofitable and much harassed the hobbit Mrs. Boffin to support him in his old age. There was one girl and two boys; and since both the boys were evil-minded, it was necessary that Mrs. Boffin must see them as soon as possible married off, and the girl bred into some family, and so without delay they were sent off in different directions; the girl to live with a certain highly-respected hobbit on the West-Road, the boys to see to a house near Stock. However, shortly after their marriage, the first wife of the elderly hobbit died of some old-fashioned disease; and, as he was now free to do as he liked, he forthwith married again. The second wife of the great-uncle of our protagonist was a proud and wealthy hobbit who would not have Jane Morland living in her house, for it was an offence against the pure and unsullied place in which they were now living. Furthermore, since the new housekeeper was barely acquainted with the Hobbits and the fine and important family that she now belonged to, she banished Jane from her sight.”

In the novel, one of the most-memorable characters is Matthew Mantelpiece (who called himself Master Matthew Mantelpiece). As the story tells, Master Mantelpiece and his wife came out of the young man who had offered to marry Jane Morland. Master Mantelpiece was born a wealthy man who served as Steward to Lord Denham (based on the Minister for Poor in West Berkshire, as the novel suggests; the miniseries says it is based on a Roman general of the Civil War), and Master Mantelpiece had gotten to know Lord Denham, and eventually became his son-in-law. From him, Master Mantelpiece received a little house in Sunninghill, which he moved to St Mary’s Place in Oxford, as the novel says, to live in it.

Master Mantelpiece and his wife had several children, one of which was Mrs Peter Mantelpiece who married a gentleman by the name of Mr Peter Mantelpiece; he, too, served as Steward to Lord Denham. In this way, Master Mantelpiece’s family had come to be related with Lord Denham’s family. Through one of his daughters who married someone by the name of Cornelia Hennard, Master Mantelpiece is also related to the Morlands.

the only child of unmarried parents who had both died of consumption when she was a baby. As her parents had no property, the Baggins who had brought her up took her to live with a family of his friends who paid him to do their chores and receive schooling from their hobbit-wife, so that at the age of seven, Jane, now dressed in cast-off clothing, was set to school in the Shire. After three years, her guardian Baggins died, leaving her a small keep-sake and a few trinkets, but no property. Her kinsman continued as heir of the Bagginses, but he had lost all affection for the Shire, which was so much less different from the Shire, and he wandered away. Jane was invited to live as an independent private person in the village of Hobbiton, and there she found a kindred spirit, a Hobbit named Smith, who insisted that she marry him and look after him, since he had nothing better to do, and needed someone to cook his supper and change his shirts. Jane at last agreed to the proposal, which was accompanied by a small cash payment, on the condition that he would ensure that she was provided with food and clothing and a house, in a manner which she thought suitable. The marriage was arranged and she found herself married to a young Hobbit called Mr. Smith. As he was also young and silly, he required a wife who would be able to cook him a rational meal and see that he had enough to eat. As he was still young, she would also be able to have children, and so she did. She did her best for the children, but soon found that they were of poor character and would not learn their lessons. To teach them some good sense, she insisted that her husband should get some regular work. Her husband, who had a small income, set up as a shopkeeper in the town of Hobbiton, and there made enough money to be able to support his family. After five years had passed in this way, Mr. Smith died of a sudden illness, leaving his wife alone. Jane Morland, now a shopkeeper and innkeeper, took it upon herself to look after her business and see that the children were properly fed.”

edit Cover of the New American Edition

edit Change in direction of the Hobbit

Hobbit references in Smith’s work have become more frequent in recent times, particularly in her novel Pride and Prejudice. Smith has denied that her book The Hitchhiker’s Guide to the Galaxy is directly influenced by Tolkien’s work, despite these subtle suggestions, as evident

in the Shire in the hobbitlands of the Shire (Frodo Baggins, Bilbo Baggins, Frodo Baggins, the creator of “The Hobbit”, can be visited in his hobbithole house at Bag End, Bag End, Bag End, the Shire, east of the Green Dragon Inn, by Hobbiton, on the road that goes to the Fleshy Fountain, the south-east road out of Bywater, past the brick house of Farmer Maggot). As her parents had no money, the Bagginses took her to live with a hobbit-clergyman who paid her to do his chores and receive schooling from his wife, so that at the age of seven, Jane, now dressed in cast-off clothing, was set to school at the small southern village of Gathol in Sarn Ford. After three years, her third cousin Bilbo returned to the Shire and took her to live with him in Hobbiton (it is believed that Frodo was part of this return trip). Soon after moving to the village, Bilbo gave her a large ruby ring. However, when Bilbo fell in love with his gardener’s daughter, Rose Cotton, and she refused to leave with him, hobbit gossip started to increase. As Bilbo asked Rose to marry him, Frodo, her father, refused to have Jane in the house, saying it was an offence to the pure and unsullied place in which the family now lived. However, when she sought for further employment, she was approached by a young hobbit who offered to marry her, saying that he would ensure she could be provided with food and clothing and a house, and in that manner she could remain in the Shire. The marriage was arranged and she found herself married to a young hobbit called Mr Morland. As he was also a hobbit, he required a hobbit-wife who would be able to provide him with a hobbit-rational education, and so the duty of educating him fell upon Jane Morland. As he was still young, she would also be able to have hobbit children herself, and so she did. As the hobbit children grew older, it became apparent to Mrs Morland that they were of poor hobbit character, and so she insisted that her husband must make a living for them all. Mr Morland took it upon himself to become a high-fantasy person in order to earn enough to be able to support his family, and moved his family to a small town in Berkshire, where he set up his high-fantasy practise. Soon after moving to the town, he died, and Mrs Morland was left to take care of her family on her own.”

From this passage, let’s see what’s changed or what’s lost:

  1. Shoreditch is now the Shire, Hobbits are now Dwarves, one Dwarf, an unnamed character, is now Bilbo Baggins; a Dwarf writer and story-teller who lived in the family of Bagginses (ie. also a Hobbit) and his Hobbiton house; the town of Gathol is the Shire, the location of Bag End, Bilbo’s home.
  2. The protagonist is a Hobbit; the absence of non-fantasy literature is substituted by fantasy books.
  3. The “great-aunt” who raised Jane is now Bilbo, Jane’s 3rd cousin on her mother’s side, and the clergyman is now just Bilbo, while his wife becomes his gardener’s daughter, an unnamed character. The unnamed character Rose Cotton, Bilbo’s gardener’s daughter, is Bilbo’s wife and daughter of Adam Cotton, Rose Cotton. In Frodo’s family it was quite common to have relationships between in-laws, as was the case between the Bagginses and the Bolgers; Frodo was the younger brother of Dudo, Dudo, Dudo Baggins (another character in the novel of The Hobbit), who married Oma, Oma, Oma Bolger. This now means Bilbo is Frodo’s cousin, but he isn’t. Frodo is the nephew of Bilbo, as is their cousin (adopted and no longer related by blood) Took; Bilbo is a direct descendant of Thain Isumbras I.
  4. The clergyman in this fantasy world is now a high-fantasy practitioner, i.e. a mage, and Bilbo is now a Lord of the Rings character; and his wife now has become a high-fantasy practitioner, who goes by the name of Galadriel, Galadriel, Galadriel. Jane, Bilbo’s third cousin on his mother’s side, had many other characters named after her in other parts of Tolkien’s fiction as well; Jane is also the name of the founder of Arnor, the first half of the name Arnorain, the land ruled by the kings of Arnor, and was also the name of Frodo’s grandaunt, daughter of Narve (who founded the realm and was its second King); Jane also means “grace, gracious gift, given with grace; presented graciously and graciously given”, according to Wikipedia, which also says it is “an archaic English form of Jean or Jane”; another form of Jean or Jane is Jeannette, meaning “the one born in the later days, the one born in the dawn”, according to the website of Jeannette, Pennsylvania. She is also known as an Avatar, from the Hindu sense; a manifestation or incarnation of a deity in human form; also a manifestation of a living being; or a bearer of attributes of a person or deity who is resident in heaven, on earth, or in the spirit world, as described by Jane, an Avatar-maker. Frodo’s sister’s name is Bella, Bella, Bella Baggins.
  5. Bilbo’s wife now becomes Arwen, Arwen, Arwen, who’s Arwena daughter of Arahad, Arahad, Arahad III (the third King of Arthedain) and granddaughter of Aragorn II, who died before the beginning of The Lord of the Rings and was the father of Elrond, Aragorn II Elrondsson, the Elven-king. Bilbo’s gardener’s daughter is now Arya, Arya, Arya, who’s the daughter of Ayumi, Ayumi, Ayumi, Elrond’s youngest daughter, who, in the film, also plays the role of Arwen.
  6. The wife who refused Jane is now Rose, Rose, Rose Cotton, the gardener’s daughter; his wife is now Bella, Bella, Bella Baggins, Frodo’s sister, a Hobbit. The “pure and unsullied place” is now Bilbo’s head; also now considered unsullied is the village in which Bilbo lived with his wife and son, and this is where Bilbo moved when he married Jane. Bilbo’s wife is now Samwise, Samwise, Samwise Gamgee, Frodo’s gardener

S. Jane Morland was born in the Shire, the only child of unmarried parents who had both died of consumption when she was a baby. As her parents had no money, the great-uncle who had brought her up took her to live with a handsome young uncle who was the Warden of the West March. After three years, her handsome uncle had fallen in love with his housekeeper and his aunt now refused to have Jane in the house, saying it was an offence to the pure and unsullied place in which the family now lived. However, when she sought for further employment, she was approached by a handsome young man who offered to marry her, saying that he would ensure she could be provided with food and clothing and a house, and in that manner she could remain in the world. The marriage was arranged and she found herself married to a handsome young man called Mr Morland. As he was also a child, he required a wife who would be able to provide him with a rational education, and so the duty of educating him fell upon Jane Morland. As he was still young, she would also be able to have children herself, and so she did. As the children grew older, it became apparent to Mrs Morland that they were of poor character, and so she insisted that her husband must make a living for them all. Mr Morland took it upon himself to become a Warden in order to earn enough to be able to support his family, and moved his family to a castle in the Shire, where he set up his practice. Soon after moving to the castle, he died, and Mrs Morland was left to take care of her family on her own.”

…Below is a parody or rewrite of that passage by famous fantasy author J. R. R. Tolkien (Lord Of The Rings); the rewritten version:

“S. Jane Moreland was born in the Shire, the only child of unmarried parents who had both died of drout when she was a baby. As her parents had no money, the great-aunt who had brought her up took her to live with a blacksmith, who paid her to do his chores and receive schooling from his wife, so that at the age of seven Jane, now dressed in cast-off clothing, was set to school at Bree. After three years, her great-aunt died, leaving her a small annuity, and a few pieces of silver, but no property. Jane’s guardian blacksmith had fallen in love with his housekeeper and his wife now refused to have Jane in the house, saying it was an offence to the pure and unsullied place in which the family now lived. However, when she sought for further employment, she was approached by a young man who offered to marry her, saying that he would ensure she could be provided with food and clothing and a house, and in that manner she could remain in the world. The marriage was arranged and she found herself married to a young man called Mr Morland. As he was also a child, he required a wife who would be able to provide him with a rational education, and so the duty of educating him fell upon Jane Morland. As he was still young, she would also be able to have children herself, and so she did. As the children grew older, it became apparent to Mrs Morland that they were of poor character, and so she insisted that her husband must make a living for them all. Mr Morland took it upon himself to become a gentleman in order to earn enough to be able to support his family, and moved his family to a small town in the Southfarthing, where he set up his practice. Soon after moving to the town, he died, and Mrs Morland was left to take care of her family on her own.”

…S. Jane Morland was born in the Shire, the only child of respectable parents who had both died of locomotor ataxia when she was a baby. As her parents had no property, the great-aunt who had brought her up took her to live with a rich bachelor at Bagshot. After three years, her great-aunt died, leaving her an only living relative, a semi-invalided bachelor aunt on her father’s side. This worthy old lady, now somewhat advanced in years, said Jane had better be sent to school at Oxford. The family accordingly removed to the town, and Jane was put into a boarding-house.

After three years more, when the old lady could no longer bear the expense of the Oxford establishment, they went back to Bagshot; and Jane was again deposited at a boarding-house. This gave her the advantage of living within reach of Reading. When she was thirteen, her aunt, to whom she was much attached, died; whereupon she was sent home to her father’s house at Thame. Here she lived a quiet and prosperous life for the next seven years, till her father died. Jane’s mother, who had never ceased to reproach her for not getting her own living and being a disgrace to her family, died two months after her husband, who was supposed to have poisoned her. At this time Jane was at the age of nineteen, and was extremely pretty, plump, and healthy.

“She was sitting in the kitchen with her cousin Francis, a respectable Shire Hobbits, one of Farmer Cotton’s helpers at the Green Dragon, when a servant came in with the announcement that a gentleman in black was at the door asking to see her. Wondering who it could be, she went into the little front parlour that had not been changed since Aunt Cotton died some twenty years before. The next moment, the person in black, turning his head, let fall his black mask and Jane recognised the handsome face of Farmer Cotton. In a quarter of an hour they were walking away together arm in arm, the gentleman telling his friends that the girl was the offspring of a distant cousin of his, who had been placed in his charge when she was eight years old, and had given him no end of trouble ever since. He told the Hobbits of the village that he was an orphan himself, without a relation in the world, except Jane; that he felt he must be her guardian and provide for her future; and that he had now all but arranged for her to be his wife.

“The nuptials were celebrated with the utmost merriment by the respectable Hobbits, who had always been pleased with Jane. To the same degree her father’s friends were displeased with her. It came to be said among them that Farmer Cotton had murdered her father, and that Jane was his illegitimate child. However this may have been, it was observed that on their wedding day her husband, after buying more than was necessary at the open-air sale of household goods held by his father-in-law, took her away to his farm near Bledlow, and never brought her back.”

…“Arwen Evenstar was born in Minas Tirith, the only child of Elrond Evenstar and Celebrian, who had both died of consumption when she was a baby. As her parents had no money, her grandmother Galadriel took her to live with a clergyman who paid her to do his chores and receive schooling from his wife, so that at the age of seven Arwen, now dressed in cast-off clothing, was set to school at Rivendell. After three years, her grandmother Galadriel died, leaving her a small annuity, and a few pieces of silver, but no property. Arwen’s guardian clergyman had fallen in love with his housekeeper and his wife now refused to have Arwen in the house, saying it was an offence to the pure and unsullied place in which the family now lived. However, when she sought for further employment, she was approached by a young man who offered to marry her, saying that he would ensure she could be provided with food and clothing and a house, and in that manner she could remain in the world. The marriage was arranged and she found herself married to a young man called Elrond. As he was also a child, he required a wife who would be able to provide him with a rational education, and so the duty of educating him fell upon Arwen. As he was still young, she would also be able to have children herself, and so she did. As the children grew older, it became apparent to Arwen that they were of poor character, and so she insisted that her husband must make a living for them all. Elrond took it upon himself to become a private gentleman in order to earn enough to be able to support his family, and moved his family to a small town in Berkshire, where he set up his practice. Soon after moving to the town, he died, and Arwen was left to take care of her family on her own.”

Summers-Stay tried his own version with the opening of Moby Dick and got a fun one:

Thou mayest call me Ishmael the Wise. Some years ago—never mind how precisely—having little wealth and much leisure, and being little fond of the manner in which his own country is ruled, I resolved to sail about a little and see the watery part of thee world. It is a manner I have sometimes of oiling the weariness of the days of my lord. Whenever I behold a wool-clogged cloud-bank, and should by chance see drippings, and rain pools gather upon the roofs of the tallest towers, and upon the pavement of the market-lit places—then I account it high time to seek refuge at sea, and it is ever a welcome sight to me.

Beowulf, Wodehouse

A minor prompt inspired by a Tumblr post:

“Beowulf”, as translated by P.G. Wodehouse

**“What ho! Have you heard of these chaps,**



Dashed good fellows with a spear and whatnot—



Going about killing monsters with their fists,



Bravely loose-living, jolly old giants slayers,



And all that? They’re my ancestors, you know.



”Over the sea with the old boat like a stone”—



They used to do it, though I must say



That “Beowulf” is the only poem



That makes the business sound attractive.



“These lads, they used to go around the place,



Killing monsters wherever they might be,



And living at their ease in the good old days,



Drinking ale and mead, and all that sort of thing.



”They did it for a wager, or some such rot,



For fun, like, and to display their prowess;



And all the armour they used to wear



Was a pair of shiny brass knobs on their chest.



“They’d punch a dragon in the snout,



And then, if he was feeling rather groggy,



They’d jump on his back and ride him home,



As though he were a bus or a cab or a tram.



”They’d batter the beasties with their fists,



With their feet, with their heads, with anything,



And when they had killed them and the job was done



They’d call for a reward, and have a good lunch.



“These gallant lads are my ancestors,



And I’d like to be like them if I could,



So I go around among the dragons now,



Killing monsters wherever I may find,



”Fighting one against ten, or twenty, or more,



Dragons with fangs, and dragons with horns,



Monsters with tails and monsters with wings,



Witches, demons, devils, all sorts of things.”



“I try to be like the heroes of old,



And all I get is kicks and what not.



”If you ask me, I’d say it’s a pity



That there aren’t any more dragons about,



Aching heads and bad tempers and cricks in the neck



Were more what my family had in their stock-in-trade.



“And now my wife and my children are sick of me,



They don’t care if I kill dragons or no,



And my mother says that I’m much too thin,



And my sisters say that I’m not handsome at all.”



“But whenever I come to a fight,



I’m there with bells on, fighting away,



And if ever by chance I get killed,



This poem is sure to be read aloud.”

This prompt had the usual issues with terminating quickly and going on to other Beowulf variants (’“Beowulf”, as translated by Anne McCaffrey” etc), or repetition loops, so I had to do some editing to get this version. (By editing, I mean simply deleting completions which switched into other variants or were looped or were just low-quality; I did not add or change any words.)

Book of Jobs

The common typo of Steve Jobs as “Steve Job” (presumably, of the Book of Job) has always amused me. And indeed, the general drift of Apple Inc towards power-user-hostile or just plain user-hostile, brooking no criticism and fanatically maintaining its secrecy, while charging its long-suffering users a fortune, does make me think of the Book of Job. Or rather, the “Book of Jobs”—which we can ask GPT-3 to write for us.

Diverging vs memorizing. It will be the famous speech of God in reply to Job in chapter 38.40 Being so famous, parodying the speech poses its own challenges like “The Raven” or “Jabberwocky”: GPT-3 will constantly diverge into the actual Book of Job text. We could try to solve it by fewshotting it, but I don’t have any Job satires on hand and it is likely that we’d run out of context window. My solution was to provide a ‘scholarly’ preface, summarizing a tech satire; GPT-3 will then fill in appropriate ‘quotes’. For most versions of the prompt, GPT-3 still veers into memorization; I initially had a number of King James Version keywords like “LORD” (its literary style seems appropriate here), but that seemed to send it into memorized completions at the drop of a hat, and the more I deleted Biblical terminology, the better it worked. (I probably could have written a shorter prompt, but the other type of failure mode was to start at the beginning of the Book of Job with Satan/​God or wander around Job, and providing more details & lines seemed to lock in chapter 38 rather than something else.)

Below, some of the better examples as I progressively rewrote the prompt to add more tech details and subtract Biblical details. Sampling was initially done with low temp, high top-p; toward the end I switched from InstructGPT to davinci-next with temp = 1/​BO = 10.

“The Book of Jobs”

[One of the ‘wisdom literature’ books of the tech world, it combines poetry with founding myths of Silicon Valley in a dialogue on design and big tech, and the theodicy: with all the developers in the world and good intent, whence comes the evil of badly designed software and hardware? In it, the Lord Steve Jobs is reviewing Apple computer sales and boasts of record profits from his devoted Users, when a subordinate, the Tempter (lit. ‘legume reckoner’, identified by most scholars as Tim Cook) claims that the User only cares about cheap devices, not good design, and if Jobs were to raise profit margins, they would faithlessly flee to inferior Microsoft Windows laptops & Google Android smartphones. Jobs grants the Tempter permission to test his Users, inflicting on them such plagues as missing headphone jacks⁠, MacBooks bricked by updates, invisible scrollbars, ‘butterfly keyboards’ which break, PCs with antiquated GPUs, iPhones which must be held just-right⁠, and chat apps incompatible with those of relatives. The User is left bankrupt and alone and with corrupt backups, and criticizes Jobs as abusive and sociopathic; hearing of this, his friends—Tog the Uxite, Pogue the Timesian, and Siracusa the Shillite—reproach him, defending the ways of Apple, but he refutes them, and goes so far as to email steve@apple.com with his complaints. In the final third, Steve Jobs himself replies to the User, emphasizing Apple’s marketcap and his age and knowledge and power as Apple CEO.]

Then Steve Jobs answered the User out of the whirlwind, and said:

“Who is this that darkens Hacker News with whines?



Limber your typing hands now like a hacker:



I will demand of you, and declare you to me.

Where were you when I laid the foundations of Apple?



Tell me, if you think you know it all.



Do you know where the App Store is found,



And who made the iPhone to be so beautiful?

Have you entered the storehouses of the MacBook?



Have you seen the treasures of the Jobs?



What is the way to the place where the iMac is kept?



And where does the iPad lie?


Do you know the ordinances of the App Store?



Can you put into words the rules of our design?



Have you comprehended the vast expanse of our design?



Tell me, if you know all this.

Where is the way to the iPhone’s home screen?



And where does the Messages app lie?



Can you take them to their place?



Can you bring them to their destination?

Have you understood the breadth of the iPhone?



Tell me, if you know all this.

Who has put wisdom in the iPod,



Or given understanding to the Apple Watch?



Who has the wisdom to count the bugs in our hardware,



And to count the months of their release?

Can you bind the chains of the iPhone?



Can you loosen the cords of the iPad?



Can you send forth the Messages app at the right time?



Can you guide the FaceTime video call along its path?

Did you know that the iPhone’s home screen is good,



For the iPhone, and that it is pleasant,



And beautiful in its simplicity?

Do you know the iPhone’s home screen?



Have you seen it?



Is not wisdom found among the iOS home screens?



And does not understanding reside there?

Yet the User hates knowledge



And does not choose the fear of the Lord.



He will not obey our commandments,



He will not listen to our design.

This next completion pleases me because GPT-3 does not just continue God’s speech, but follows the original by ending it with a response from Job/​the User, who is cowed (“Behold, I am vile; what shall I answer thee? I will lay mine hand upon my mouth. / Once have I spoken; but I will not answer: yea, twice; but I will proceed no further.”), who is ignored by God, who then continues in the same vein:

…Where were you when I laid the foundations of Apple Computer Inc.?



Tell me, if you know so much.



Who set their dimensions—if you’ve ever learned that—



or who stretched the surveying ribbon over them?



On what were their front corners based,



when the morning stars sang and the Apple IIe danced for joy,



booting up the web browser of the Internet?

Or who enclosed the sea behind a screen,



when it burst from the womb,



when I made the clouds its mackintosh



and set a fancy keycap on its keyboard,



when I assigned it to be governed by the laws of computing



and imposed it on the Z80 microprocessor⁠,



when I said,



‘Thus far shall you go, and no further,



and here shall be the limits of your web browser’?

Have you visited the storehouses of more RAM



or seen the firmware depots of the cloud?



If you know so much, tell me their price,



or calculate their cost, if you’re so smart.



Who says, ‘Three gigabytes for free,’



and your quota is in the red;



when all your files in your Dropbox are encrypted



and the server is down?

Can you bind the cluster of servers



or loose the fake news?



Can you put the GUI on a Hackintosh



and make all of Apple smile?



If you know so much, tell me,



and tell me all you know.”

And the User answered Steve Jobs and said:



“I’m just an ordinary User



and I do not know, Lord CEO.



I’m not a developer, nor an engineer,



nor a designer, nor a coder.



I don’t have the bluescreen of death



or the white ring of death⁠, nor the green ring of death⁠.



My smartphone doesn’t have a physical headphone jack,



nor does my MacBook Pro.



I don’t know how to make changes to my computer’s registry



or to launch System Preferences.



How should I know?”

Then Steve Jobs spoke again to the User out of the whirlwind and said:



“Convince the developers of the world:



Bring out your developers and engineers



and present your case.”





Tell me, if you have understanding.



Who determined its measurements—surely you know!



Or who stretched the ruler across it?



On what were its bases sunk, or who laid its cornerstone



when the system was launched and the clock struck?



Have you commanded the iCloud backup to save,



or caused the iPhone battery to be charged?



Have you entered the storehouses of the flash drive,



or seen the site of streaming movies?



Have you comprehended the breadth of the internet?



Tell me, if you know all this.



What is the way to the abode of them?



And where is the dwelling place of the Podcasts?

Surely you know, for you were born then,



and the number of your days is great!



Have you entered the storehouses of the flash drive,



or seen the site of streaming movies?



Have you comprehended the breadth of the internet?

I had to look up “Pegatron” in this one—it sounds made-up, but is a real Apple contractor:

Declare, if thou hast understanding.



Who hath laid the dimensions thereof, if thou knowest? or who hath stretched the tape measure thereon?



Upon what were the shuttles stayed, when the weft jammed?

Have you not heard of the travails of Steve Wozniak⁠, and the terrors of the Los Altos Homebrew Club?



It is he that sat on the circle of the earth, and the hearts of all his Users were in the palm of his hand.

Shall a User chastise a mighty corporation? he shall regard developers;



He is not influenced by any Gods, nor weigheth the gold of Graven Image.

Will the User Condemn me, that thou mayest be righteous? and condemn my programmers, that they may be guiltless?



Have not I, who call you to account, myself created? and do not I also create?



Have not I designedly designed? Have not I designed and fashioned Apple perfectly?

Gird up now thy loins like a man; I will demand of thee, and declare thou unto me.



Where wast thou when I laid the foundations of the new Apple iPhone? declare, if thou hast understanding.



Who hath laid the dimensions thereof, if thou knowest? or who hath stretched the tape measure thereon?

Hast thou entered into the regions where the new Apple iPhone is made and manufactured? or hast thou seen the lines of the factories thereof?



Wilt thou bring to me the thunderer of Foxconn⁠; or shalt thou loose the chains of Pegatron?

With whom took I counsel, and who instructed me, and taught me in the path of design, and showed me the way of genius?

Hast thou alone seen the Apple iPhone face to face? is it not hidden from me also, and sealed up among my treasure?



How long hast thou watched Apple’s supply chains? Declare, if thou knowest it all.

Canst thou give an answer to the Users? or with thy wisdom Users?



By what plan were all Apple Stores planned? on what were they founded, that they remain unshaken?

Remember now, put me in remembrance: let us contend together: declare thou, that thou mayest be justified.



Hast thou an arm like Jobs? or canst thou thunder with a voice like Jobs?

Deck thyself now with majesty and excellency; and array thyself with glory and beauty.



Pour out thy fury upon the Users, and behold every one that is proud, and abase him.



Look on every one that is proud, and bring him low; and tread down the wicked in their place.



Hide them in the dust together; and bind their faces in secret.



Then will I also confess unto thee that thine own right hand can save thee.

Some further finetuning of the prompt:

“Book of Jobs”

[A ‘wisdom literature’ book from the startup tech world, it combines poetry with founding myths of Silicon Valley in a dialogue on design and big tech, and the theodicy: with all the developers in the world and good intent, whence comes the evil of badly designed software and hardware?**]

Then Steve Jobs answered the User out of the whirlwind, and said:

“Who is this that darkens Hacker News with whines?



Limber your typing hands now like a hacker:



I will demand of you, and declare you to me.

Where wast thou when I laid the foundations of Apple Computer?



Declare, if thou hast understanding.



Who determined the QTR of their quarterly profits?



Or who stretched the measuring tape over it?



On what were its bases sunk, or who laid its cornerstone



When the best designers were gathered together as one,



And firstborn of design were assembled in Cupertino?



How humble is its 1-color logo!



How grand is the headphone jack, the ne plus ultra of usability!



What a masterstroke of design!



I do what I do, because I know what I do.



Do you think, then, to challenge the likes of me?



Do you think I am like a coder you know,



After those who put their faith in Instagram and Kakao Talk?

Now prepare yourself like a man;



I will question you, and you shall declare me.



Would you invalidate my belief that the user is toil and trouble,



And my profit schemes frivolous, though you understand my works?



Will you even condemn me that you may be right?



And you will choose to disagree and not do what I say,



And that I am also right?



Have you an ear? Hear my reply.”

The User says to the Almighty CEO,



“I have heard of you by the hearing of the ear,



And now my eyeballs have seen you.



Therefore I renounce and make my disagreements known.



I would agree with you, but you won’t accept my agreements.



I would speak to the tempter of my mouth;



I would open my heart by speaking with you.



Behold, I am vile; what can I reply to you?



I lay my hand on my mouth.



Once I have spoken, and I will not reply.



Once I have agreed, and I will not agree again.



Once I have answered, and I will speak no more.”

Devil’s Dictionary Of Science

Davide Balzarotti snarked:

Rebuttal [noun] A form of ironic composition in 500 words popular among researchers, in which rageful authors first thank their reviewers for “their time and insightful comments” and then mock Reviewer 2 for his non-sense in a sequence of fakely polite points.

Naturally, GPT-3 is able to write not just rebuttals, but satirical dictionary definitions along the lines of Ambrose Bierce’s The Devil’s Dictionary—indeed, GPT-3 is remarkably good at it. (“What a Dictionary a GPT-3’s curator might write on the clumsy, wasteful blundering, low and horribly cruel works of Nature!”)

First, generating rebuttals with a slightly extended prompt to cue GPT-3 into writing the rebuttal itself rather than additional dictionary definitions:

Rebuttal [noun] A form of ironic composition in 500 words popular among researchers, in which rageful authors first thank their reviewers for “their time and insightful comments” and then mock Reviewer 2 for his non-sense in a sequence of fakely polite points.

“We thank reviewers 1, 2, and 3, for their time and insightful comments, which we have spent many hours reading over. But in reply we note:

1. RE2: Although RE2 starts with the reasonable comment that ‘Currently our control loops are pure control’, he goes on to provide only minor points that suggest that we are doing something slightly different. In fact, there is much more to be said about this, and the real difference will be a full control protocol on all vehicles.

We thus thank RE2 for taking the time to provide such minor points, but note that he has done almost no actual research and has spent zero hours on the review.

RE2: “There is little or no literature on decentralized solutions to problems such as traffic optimization.”

RESPONSE: We thank RE2 for the minor point that there is “little or no literature on decentralized solutions.” But we note that decentralized solutions are particularly difficult, and so this fact might suggest that they are not being considered or might be infeasible. Or it might suggest that they are being researched intensely. In this case it turns out that, unlike RE2, several of us have in fact thought deeply on these issues, and so we feel we are a valuable research group despite not being affiliated with the Department of Traffic Control or any other centralized, government-backed, research institution.

RE2: “The DGM solution is also [something else]. To our knowledge, no work has been done on DGM control loops, or any control loop at all, for these kinds of traffic patterns.”

RESPONSE: We thank RE2 for the comment that the solution we have proposed is [something else]. As he is not familiar with such traffic patterns and the other reviewers in their reviews have been unimpressed, we must wonder how he could be familiar with control loops that might or might not exist and would or would not have been previously performed.

RE2: “The number of cars involved is approximately 100, which is a small number. By contrast, say, the NYC Metropolitan Area, [roughly 22 million people] has a traffic network of nearly 3 million miles.”

RESPONSE: We thank RE2 for the comment that our problem is “small.” This is the first time this has been suggested to us. However, we feel that RE2 may be failing to understand a key component of the research protocol we have proposed. This problem is designed to be decentralized in a city with no more than a couple million people (or a town or a small city or maybe even a village) and so it is expected to have only a couple thousand drivers and not one single, central traffic control unit that could possibly control 3 million miles. We thank RE2 for his insight on this, but also feel that he has not gone to great enough lengths to consider the nature of such a problem in such a situation.

RE2: “Experimental data [will likely] need to be collected and processed. Currently the authors provide no experimental plan for any of their research [and so] we feel the experimental results to be lacking.”

RESPONSE: We thank RE2 for the suggestion that we have not provided an experimental plan. As our paper is only intended to propose a protocol, we understand that experimental plans should come later. In fact, we will be presenting our plans to colleagues later this year in a peer-reviewed symposium at State College, Pennsylvania. That being said, we find it strange that RE2 might have any knowledge of such plans or of such conferences. As we are listed as authors, we feel that RE2 might have been able to contact us if he were interested in our work, but he has not and thus we do not know whether he has even heard of our research program.

RE2: “The distributed system is better able to operate in an autonomous fashion, i.e. no central controller is required. With the possibility of a large number of drivers, autonomous operation is challenging, especially if vehicles may communicate in unanticipated ways [or they might not]. As suggested by [the very important paper], the [not really relevant] control loop should be a passive mechanism whereby each vehicle adjusts its control mechanism through a feed-forward signal.”

RESPONSE: We thank RE2 for the comment that autonomous operation is challenging. But again, we do not understand how he could know this. In fact, he has spent almost no time on the research, spent zero hours on the review, and spent zero research dollars, and it is hard for us to understand how he could know this better than we could. In fact, we have spent over a decade working on this project, with many hours of detailed thought and research, and we are still not sure that autonomous operation is a good idea or how we could get it to work in a decentralized and random traffic environment.

RE2: “If we all observe that several vehicles are slowing down, it might be a sign of a possible accident ahead. If we are all in autonomous mode, a vehicle near the accident would brake, possibly causing a pileup.”

RESPONSE: We thank RE2 for the comment that several vehicles slowing down might be a sign of an accident. He is correct that this might be a sign of an accident. But he might have considered the possibility that several vehicles slowing down might be a sign of, say, the vehicle he is in slowing down or that he is doing something else such as texting a message to someone else, or simply that he might have a flat tire. We thank RE2 for this, but we have also considered these possibilities and feel that the latter is far more probable, so we suggest a second solution to this problem that would require each vehicle to text all other vehicles on the road with its status, something like “Hey I’m just sitting here, so if anyone wanted to come fly up the back of my car and then over the top of my windshield and down into my passenger seat to grab my phone and then fly back, well by all means, be my guest.” In this way, a vehicle with a flat tire would have no delay in informing all others of the presence of a flat tire and in getting help with fixing it. Also, an accident would be similarly well-announced, and we think this solution is particularly important given the new protocol we are designing to allow for more efficient traffic control.

RE2: “Distributed control systems [are usually] an assortment of semi-autonomous vehicles moving in close proximity to one another. Such systems [are generally] non-autonomous, requiring control by a central controller that can be updated to reflect the most recent situation.”

RESPONSE: We thank RE2 for the comment that distributed control systems are usually non-autonomous, requiring control by a central controller. But RE2 has spent zero time on the research and we feel that he has failed to appreciate that the problem we have designed is designed to be a decentralized, autonomous, distributed traffic control problem. We again feel that this is only to his great detriment.

RE2: “Such cars [on our street] will become very complex, and we feel they will be more difficult to design and build.”

RESPONSE: We thank RE2 for the comment that our cars will become very complex. We feel that this will be of no issue, as we have discussed at great length before, and we think that a small group of university professors with a good design team should be able to provide cars that work in this capacity in the real world.

The second completion finished quickly and to my surprise, began writing further definitions:

…1. We did not describe any ‘simple algorithm’ for the repeated-measures ANOVA in the abstract, the discussion, or in the Methods. Such a description of our protocol would be incoherent and probably invalid anyway.

  1. We know of no ‘more appropriate analysis’ for the mean and variance of the proportions than to compute the estimates of the variances directly. Since we have done this correctly, it makes no sense to proceed further as suggested by Reviewer 2.

  2. It does not make sense to carry out the tests for homogeneity that Reviewer 2 suggest we do. Indeed, we never said we should carry out such tests. Our precise point was that we should not compute the standard errors of the treatment means, and Reviewer 2 seems to think that we have argued for the opposite point, which we have not.

  3. Again, Reviewer 2 apparently did not read the Methods section carefully and was confused by some sections that he or she did read. A better review would have pointed this out.

  4. We note the Reviewer 2’s overly generous suggestion that we write a ‘special’ rebuttal. At the very least, we assume that Reviewer 2 knows what a rebuttal is, and we would therefore argue for a critical comment instead.”

Noise

Noise is anything you don’t want or need, a random or distracting factor in a system, process, or set of data that obscures or corrupts information. It is also the sound produced by old time radios when no transmission is present, which is roughly what some fields of science would have to resemble if some researchers had their way.

“New research”

A form of scientific publishing that includes, among others, a report on previously performed studies; a report of a new statistical test or set of statistical tests that were performed by the author of the “study” without the help of other scientists; a report of a small set of samples, not necessarily from a population or process, which could, but have not been collected in a systematic way and may not even be accurately described, in the guise of which conclusions are drawn about the world; and a report of the result of performing an unvalidated computer simulation of something on a small set of data without stating whether such an attempt was successful or not.

“Nice to see”

A widely and unofficially accepted phrase in scientific circles meaning: “It is better to live in an imaginary world created by data manipulation and overly optimistic predictions than in the world that exists in reality”.

“Researcher”

[noun] A form of modern industry based on a mix of small molecules of grant money and arbitrary experimental methods.

“Review”

A form of scientific communication in which the writer (a student) usually does not give enough information for the reader to judge the merit of the arguments presented in the paper; the writer also does not respond to or comment on the suggestions of the reviewers (sometimes known as referee or editor). This form of communication is popular among students who wish to leave the burden of writing the paper on other people and are willing to pay the price of rejection.

“Second chance”

A word that scientists regularly use when referencing a journal, conference, or grant application. While there is always an implicit threat that you can get rejected, or be accepted with changes, as a second choice to another party, some articles, papers, grants, and grant proposals receive multiple, even unlimited, chances of being considered. As a result, the word may be interpreted as a bribe or a threat depending on whether you are the writer or a reviewer.

“Reviewer”

A participant in the review of a grant, paper, or grant proposal. In spite of being in a poor position to assess the merits of a proposal, reviewer tends to demand that authors submit their data for statistical analysis and back their results with it, which the reviewer usually does not. Reviewer usually requires that the author cite his or her own work to prove that he or she is worth reviewing. It is also assumed that the reviewer can detect the slightest amount of bias in any paper, which the reviewer also assumes has not been corrected for.

“Revise and resubmit”

A sentence that authors of submitted papers tend to hear from reviewers, editors, and the like. A form of scientific communication that begins with submission of a paper (for which the reviewers may or may not have been consulted), proceeds through review and criticism of the paper (typically by one or two reviewers, not a full panel of reviewers, unless the paper is deemed to be of supreme importance), and, if it is “revised and resubmitted”, usually ends with another critique of the new paper by one or more of the reviewers, and another round of “revise and resubmit”, ad infinitum. In the absence of fraud, retractions, litigation, mass incompetence, or insanity, these “revise and resubmit” cycles may be taken as the only method of scientific communication.

“Retracted”

The most frequently used word for a journal article that was subsequently found to be based on fraudulent data.

“Rigor”

Something for scientists to aspire to, a state of mind that would not be required if scientists could be trusted to do their job.

“Science”

A complex web of data, opinions, lies, and errors, now considered the most important (because most expensive) technology in the modern society. To remind you of this, you will frequently see scientists and editors use the word, claim to do something for the sake of science, or see it used as an adjective.

“Standards”

Standards for data and their interpretation vary depending on the country or scientific discipline.

“Study”

[noun] A form of communication in the scientific literature that authors or publishers may use to make claims that are not warranted. Examples include, but are not limited to: The New study of clinical trials that show…, As Dr. X reports in his new study of observational data…, The new study demonstrates that…, Study shows that, In a study published last week…, The study suggests that, New findings show that, A study of data…

“Study X”

A widely used nickname for studies with profound, not to say earthshaking, conclusions that may or may not be of any relevance to anyone who reads about them.

“Technology”

Any bit of software, analysis, model, or other methodology that scientists can be expected to claim as their own, for which they can sell you consulting or collaboration fees.

“Tool”

[noun] Any piece of software, analysis, model, or other methodology that scientists can be expected to claim as their own, for which they can sell you consulting or collaboration fees.

“The topic of the paper”

A wide-ranging category of things or ideas that may not have been relevant when the paper was written, but which the authors believe the paper should be about. Often, the topic is too broad or a non-topic, but is occasionally useful in order to generate support for yet another set of related papers, conferences, seminars, webinars, and so forth, which in turn are used to generate more data for “new findings”, which, after they are manipulated enough, may end up being published and generating yet more data to support a “re-review” of the original paper or other things.

“Unified theory”

An imaginary set of ideas that scientists dream of having so that they could achieve the same results by slightly varying the methodology, thus preserving their original publications.

“Validation”

The process by which scientists decide that something they built in their spare time works in the way they expect. The validation process requires that you do not attempt to validate it before you publish it and require that the users validate it after.

“Value”

In its most natural form, science is the ability to study data and ideas; the amount of respect and publicity that a scientist can expect; how well his or her paper was published; the likelihood that a scientist will receive a grant; whether a scientist will be taken seriously; the likelihood that an idea will be accepted; how much grant money will be available to purchase new toys for this scientist; whether something is scientific; the likelihood that the papers of this scientist will be read; the status of this scientist; the likelihood that a journal, conference, or society will be able to continue funding or exist; how much and how fast this scientist’s career will progress; the “value” that this scientist brings to the modern society; and the likelihood that this scientist will continue to be funded. While the actual values for science are hard to pin down, what people believe them to be is another matter. For example, while some believe that science is important, even if it doesn’t have any concrete value, others believe that it’s the ability to continue reading abstracts or studying it that has value. Still others believe that what matters is how much grant money is currently available, even if the researchers are actually doing something that nobody knows about. Some believe that the fact that someone is in a position to give a grant, or has written a paper, or, even better, has published that paper, actually means that it has a lot of value. Others believe that what matters is not what you publish, but how you publish it and that all research must be published in the most prestigious journal, even if you are not that much of a researcher. The reality is that no one really knows what science is, or what value it has; however, we all know that if we deny that science has value, we might be able to get rid of it and use this space for something else.

“Validation step”

Another name for a random setting of a parameter of a model, simulation, or algorithm.

“Verify”

A step in a procedure that, if performed properly, will tell you whether the previous step, or anything in the last two years of your work, was performed properly.

I had to keep continuing it for more definitions:

”Unified theory”

An imaginary set of ideas that scientists dream of having so that they could achieve the same results by slightly varying the methodology, thus preserving their original publications.

“The usual procedure”

A method of study, validation, and public communication which, if followed, may lead you to be able to claim that your work may offer a decent return on investment.

“Wide range”

When used in the context of scientific results and models, it usually means that the results and models are not likely to be used for their intended purpose, if they have one.

“Wider range”

When used in the context of scientific results and models, it usually means that the results and models are likely to be used for their intended purpose, if they have one.

“Writeup”

A form of scientific communication in which the author states the information he or she wanted the readers to extract from the paper while making it as difficult as possible for them to find it.

“Write-up”

1. [noun] A style of writing used in academic papers to dress up data and results. 2. [verb] The act of presenting data in a way that makes it look less impressive and more complex than it really is. 3. [verb] The act of writing a manuscript that is impossible to follow. 4. [verb] An activity that is often performed by the most blatant or careless users of the English language, all for the sake of showing off their erudition.

“Writer’s block”

A common affliction among students, arising from various causes, such as: their desire to sell their ideas for a profit, their inability to realize this desire, the fact that their ideas are not selling and will not be bought, and the delusion that most of the wealth and fame in the world would be theirs if they would spend enough years doing science.

I tried tailoring the prompt a little more precisely to generating definitions:

A Devil’s Dictionary of Modern Science & Academia



Reviewer #2

Researcher” [noun]

A form of modern industry based on a mix of small molecules of grant money and arbitrary experimental methods.

“Noise” [noun]

Noise is anything you don’t want or need, a random or distracting factor in a system, process, or set of data that obscures or corrupts information. It is also the sound produced by old time radios when no transmission is present, which is roughly what some fields of science would have to resemble if some researchers had their way.

“Review” [noun]

A form of scientific communication in which the writer (a student) usually does not give enough information for the reader to judge the merit of the arguments presented in the paper; the writer also does not respond to or comment on the suggestions of the reviewers (sometimes known as referee or editor). This form of communication is popular among students who wish to leave the burden of writing the paper on other people and are willing to pay the price of rejection.

“Reviewer” [noun]

A participant in the review of a grant, paper, or grant proposal. In spite of being in a poor position to assess the merits of a proposal, reviewer tends to demand that authors submit their data for statistical analysis and back their results with it, which the reviewer usually does not. Reviewer usually requires that the author cite his or her own work to prove that he or she is worth reviewing. It is also assumed that the reviewer can detect the slightest amount of bias in any paper, which the reviewer also assumes has not been corrected for.

“Science” [noun]

A complex web of data, opinions, lies, and errors, now considered the most important (because most expensive) technology in the modern society. To remind you of this, you will frequently see scientists and editors use the word, claim to do something for the sake of science, or see it used as an adjective.

“Revise and resubmit” [verb]

A sentence that authors of submitted papers tend to hear from reviewers, editors, and the like. A form of scientific communication that begins with submission of a paper (for which the reviewers may or may not have been consulted), proceeds through review and criticism of the paper (typically by one or two reviewers, not a full panel of reviewers, unless the paper is deemed to be of supreme importance), and, if it is “revised and resubmitted”, usually ends with another critique of the new paper by one or more of the reviewers, and another round of “revise and resubmit”, ad infinitum. In the absence of fraud, retractions, litigation, mass incompetence, or insanity, these “revise and resubmit” cycles may be taken as the only method of scientific communication.

“Rigor” [adj]

Something for scientists to aspire to, a state of mind that would not be required if scientists could be trusted to do their job.

yielding more definitions, starting with a single letter to avoid running off into the “Z”es and truncating completions (since GPT-3 will proceed A–Z, starting from the last definition, as is logical, unless otherwise prompted):

A role for…” [phrase]

A frequent phrase found in submitted and published papers; it often indicates that the authors have nothing to say about the topic of their paper. In its more emphatic form, “A role for…” usually indicates a struggle by the authors to take a side on an issue, after a lengthy attempt to be both non-committal and a supporting party to all sides, as often happens in “molecular and cellular” or “basic and translational” research.

“Reviewer” [noun]

A participant in the review of a grant, paper, or grant proposal. In spite of being in a poor position to assess the merits of a proposal, reviewer tends to demand that authors submit their data for statistical analysis and back their results with it, which the reviewer usually does not. Reviewer usually requires that the author cite his or her own work to prove that he or she is worth reviewing. It is also assumed that the reviewer can detect the slightest amount of bias in any paper, which the reviewer also assumes has not been corrected for.

“Resident”

An adjunct faculty member that administers teaching assistantships, sits on departmental committees, and seems to live on the departmental lounge. A kind of “honorary” faculty member that is not yet an actual faculty.

“The Action Plan for Diversity” [noun]

A vague document circulated on campuses in recent years to remind the faculty to be aware of discrimination and presumably combat it in the departments. It is often taken as a generalized insult of students, who tend to see no way around this plan without committing academic fraud.

“Bias” [noun]

A form of mental abuse by which a scientist turns an experiment in such a way that the results prove his or her point of view or grants get funded.

“Behavioral Studies” [noun]

Sometimes mistaken as a field of science. The terms Behavioral Studies are actually a combination of disciplines that usually includes business economics, sociology, and engineering.

“Humanities” [noun]

A broad term used to refer to any academic discipline (such as English, philosophy, religious studies) in which science has been forgotten. To avoid confusion between the sciences and humanities, remember that scientists use hard tools and humanities their soft tools when examining issues and trying to persuade audiences. In other words, both scientists and humanities routinely lie (it’s part of their job), but they do it in different ways.

“Literature Review” [noun]

A paper, commonly published in a peer-reviewed journal, that advertises the author’s skills and accomplishments without using any evidence. The reader is left to believe the work must have been done, since it has been advertised and is “published”.

“The Literature” [noun]

A name given to other people’s published papers, referred to by scientists without actually reading them.

“Track Record” [noun]

When scientists use this term they refer to the research done by someone else (usually a student) in order to avoid having to do research.

“Discussion” [noun]

The part of a scientific paper where the writer summarizes the points made in the body of the paper and acknowledges certain acts and circumstances that helped with the completion of the study. The author should carefully state in this section whether or not the study reported in the paper could have been done otherwise and should clearly state that if he or she had the opportunity, he or she would definitely repeat the study again.

“Role model” [noun]

A person who can be an inspiration to others because of his or her academic and personal achievements in general, or because they conduct academic misconduct on an industrial scale. The most important qualification for becoming a role model for others is to be judged sufficiently inept that no one will question anything you do, or to become the successor of someone who has already acted as a role model.

“Faculty” [noun]

Used loosely by scientists to mean any group of people with advanced degrees. Typically used when you have done something stupid and want to inform others that it wasn’t you who did it, but rather those other crazy people over there who won’t put their titles after their names.

“Significant” [adjective]

A word which defines the claim made in a paper (no matter how trivial). It is a critical term in hypothesis testing, in that the sign of the test statistic must be evaluated and compared to a critical value in order to support or reject a hypothesis. Significance is designated by two-tailed tests when the direction of the effect is not specified. If a two-tailed significance test is used, the “p”-value needs to be halved, due to “double-counting”. In any event, “significant” remains of critical importance in any hypothesis test, since the whole idea is to come up with a “sign”, i.e. plus or minus (“+/​-”), or a wishy-washy “statistically significant”, without actually saying anything substantive about the matter.

“Random assignment” [verb]

A way to arrange study groups so the resulting data is conclusive and unbiased. Typically used in the social sciences by graduate students who refer to it while explaining why they broke down the groups this way.

“Cite” [verb]

To refer to a paper or journal as a source for support, when in reality you have never read it and did not understand it.

“Bibliography” [noun]

A list of citations of references used to support a particular point in a paper. A list that somehow includes several references that didn’t actually end up in the final bibliography of the paper.

“Let’s stipulate” [phrase]

A preface to any statement, that means I know I am going to piss you off with my statement, but since you are such a nice person, I am first going to give you a chance to let me piss you off without responding.

“How is that different from…?” [phrase]

A question posed by authors when they have been accused of stealing ideas. They are attempting to show that they have never read the papers that contain the ideas they stole, nor even looked at the ideas in any detail, and are just fishing for compliments.

“Citation” [noun]

An elaborate form of review, typically found in order to impress those who do not know better, in the sciences, humanities, or even the social sciences. A citation typically involves misreading or misquoting an author and talking about something he or she did not write about or intend. It is assumed that you need to carry out a large number of citations in order to have science, probably because at least one citation can be found for any fact, event, or incident in history; the effectiveness of this fact is increased if the author is long dead and can not disprove it.

“Physics” [noun]

A science. Physicists like to think their science is something you can do without a plan, something that relies on intuition and inspiration, something that is “deep”.

“Scientific revolution” [noun]



A set of techniques and methods used by scientists in the course of their work. It is assumed that there was a total scientific revolution in the past century or so that allowed scientists to avoid even thinking about the quality or usefulness of their science.



Scientific revolution is started by Henri Poincaré (mathematics), Marie Curie (radioactivity), Max Planck (quantum physics), Albert Einstein (relativity and revolutionized physics), Dmitri Mendeleev (periodic table of elements), Niels Bohr (quantum mechanics), Lise Meitner and Otto Hahn (fission), Linus Pauling (molecular biology) and many others.

See also: Einstein’s Method, Pauling’s Method, Bohr’s Method, Poincaré’s Method

“Scientific method” [noun]



A set of steps in which a scientist (or “researcher”) would probably not follow in practice: planning of research, including definition of problems, hypothesis formulation, selection and design of experiment(s), collection and entry of data, execution of experiments, data analysis and interpretation, preparation of final report, implementation of suggestions. In actual scientific practice the most important part of the scientific method is to observe and collect data without a plan or preconceived expectations about what is causing what. It also helps to be unbiased in selecting the data to include in the study and to pick experiments at random. Additional steps include writing a paper, a process that usually involves the most violations of the scientific method.

“Scientist” [noun]



A person who has a laboratory, access to large sums of money, graduate students, or all three.

“Revise and resubmit” [verb]



A sentence that authors of submitted papers tend to hear from reviewers, editors, and the like. A form of scientific communication that begins with submission of a paper (for which the reviewers may or may not have been consulted), proceeds through review and criticism of the paper (typically by one or two reviewers, not a full panel of reviewers, unless the paper is deemed to be of supreme importance), and, if it is “revised and resubmitted”, usually ends with another critique of the new paper by one or more of the reviewers, and another round of “revise and resubmit”, ad infinitum. In the absence of fraud, retractions, litigation, mass incompetence, or insanity, these “revise and resubmit” cycles may be taken as the only method of scientific communication.

“Shared data” [noun]



Data that both a reviewer and a researcher have access to but not sharing in advance.

“Shared future” [noun]



An outcome (rarely achieved) that is more pleasant for both the reviewer and the researcher than if there is no shared future. This usually refers to an outcome in which the reviewer either accepts the paper or at least recommends re-writing the whole thing rather than rejecting it. This is often viewed with suspicion by reviewers.

“Sharing of data” [noun]



The belief of some people that researchers should have all the data when doing a study; this would allow the reviewer to check the sources and calculations and check for bias. This would limit what the reviewer could say to “the statistics were fine but your data wasn’t”. It is very difficult to achieve this state. It can be accomplished if the scientist keeps detailed notes of everything he or she is doing.

“Statistical error” [noun]



A mistake made by researchers that can somehow be made up for by using large enough sample sizes, usually called statistical-significance⁠; an event that could not have been predicted, analyzed, or provided for in advance, therefore

“Dumb” [adj]

The amount of information scientists can be bothered to remember before they either dismiss it as incorrect data, or ignore it completely.

“Surprised” [verb]

A scientific system in which one is “surprised” by the result of an experiment, does not necessarily mean that they believe it is wrong, but usually means they are not sure what to say about it.

“Oppression” [noun]

A state of mind in which laboratories underfinanced by the government can be considered financially secure, or any noise that scientists don’t like or want to hear.

“Reviewer” [noun]



A participant in the review of a paper. In spite of being in a poor position to assess the merit of a proposal, reviewer tends to demand more information, more experiments, more statistics, and a tighter theoretical interpretation than the authors are willing to provide. Reviewer usually rejects any responsibility for his or her own bias being exposed, citing the fact that the author has not attempted to prove that bias is absent, nor made sufficient allowance for human error. Most referees are reviewer seem to assume that they have given enough information for their opinions to be taken seriously, though they were not the originator of the proposal. Reviewer usually requires that the author cite his or her own work to prove that he or she is worth reviewing. It is also assumed that the reviewer can detect the slightest amount of bias in any paper, which the reviewer also assumes has not been corrected for.

“Theoretical interpretation” [noun]



Another form of scientific communication that cannot be proven, but can be used to explain anything after it has been published or presented. It is always accompanied by an appeal to authority, data, and meticulous theoretical interpretation (see below).

“Screwed” [adj]



Screwed is generally used in circumstances in which the author has lost everything, in his paper or grant proposal. Applies best to situations where the author finds out he has been waiting to hear for months, when it has been decided in his absence that funding will not be extended or alternative sources of funds are not avaialble to continue the work. For example, when an experiment goes downhill or a paper is rejected. Also applied when the author of a paper or grant proposal discovers that he must carefully state and defend his own results in order to get feedback on what could be wrong with his work, because the readers may not be able to figure it out for him. It is also suggested that this term apply whenever two or more editors or reviewers decide they can all influence the final outcome of a paper, when only one review should determine if the paper is publishable. Recommendation: always have multiple concurrent submitted manuscripts in the system.

“Reviewer” [noun]



A participant in the review of a paper. In spite of being in a poor position to assess the merits of a proposal, reviewer tends to demand more information, more experiments, more statistics, and a tighter theoretical interpretation than the author is willing to provide. Reviewer usually rejects any responsibility for his or her own bias being exposed, citing the fact that the author has not attempted to prove that bias is absent, nor made sufficient allowance for human error. Most referees are reviewer seem to assume that they have given enough information for their opinions to be taken seriously, though they were not the originator of the proposal. “Reviewer” requires that the author cite his or her own work to prove that he or she is worth reviewing. It is also assumed that the reviewer can detect the slightest amount of bias in any paper, which the reviewer also assumes has not been corrected for.

“Argument” [noun]



An instance of convincing someone that you deserve credit for your position (ie. problem solving skills).

“Fail” [verb]



In scientific jargon, failure means death by hoax or accidental plagiarism, depending on the reputation of the lab (eg. if tenure level scientists are involved).

“String theory” [noun]

A theory in physics which is about 99% false, but people seem to like it, so they keep funding it. A good example of the difference between a “scientific” concept and a “semi-scientific” concept, or even an unscientific concept.

“Theory” [noun]

An unproved hypothesis; a conjecture that might or might not be right; use of this term forms the basis of the common game of turning the scientific method on its head. The most popular types of theories are the theory of intelligence (or unintelligence), time theory, quantum theory (of course, even more primitive), and anything that is implied by string theory.

“Research” [verb]

Something done with the money provided by a body or person other than the principal researcher(s), generally private individuals and businesses hoping to get something out of it. Traditionally involves experimentation.

“Publish or perish” [phrase]



Used by scientists to remind other scientist how important it is to them, if they are paying attention at all, which they rarely are. A method of scientific communication where papers are subsidized by fraud, incompetence, or stupidity, depending on the field of science involved. Each author having to prove his or her worth by churning out papers that others

“String theory” [noun]

A set of mathematical formulas that can be used to predict which particles exist (or do not exist).

“Science group” [noun]

A collection of researchers in various fields, with goals including one or more of attracting funding, specifically young, naive, and gullible grad student, inbreeding graduate students in order to maintain affiliation with the group, writing and publishing papers specific to the group’s activity in order to continue the goal of attracting funding and inbreeding graduate students, calling in favors from University administrators in order to preserve and continue the goal of attracting funding and inbreeding graduate students and sacrificing sheep and lambs in order to obtain funding and inbreed graduate students.

“Group leader” [noun]

Person whose function appears to consist mainly of ordering subordinates (graduate students) to fill out grant applications, enter grant data into a spread sheet, teach undergraduates laboratory techniques not commonly taught elsewhere, submit papers to a journal any time before the end of 2002 (record will be broken), devote time to discovering something useful or interesting about science outside of the parameters determined by the group leader, ever working on problems specified by the group leader if they cannot be worked on inside the parameters determined by the group leader. All of these activities are considered necessary to support the “science groups” goal of attracting and inbreeding graduate students, and provide opportunity for satisfactory career development for the group leader.

“Special material” [noun]

Something that only has to be done once, where other people will then do it again every time a paper requires it. Or conversely, something that cannot be repeated because of its poor reproducibility (special uncertainty, uncertainty special).

“Special relativity” [noun]

Advanced idea described in Einstein’s1905 papers on the subject, which states that, from a suitable point of view, “E = m,” where “E” is energy, “m” is mass, and “a” is “Profundity.” Except for arcane fields such as string theory, special relativity is the most widely accepted of Einstein’s work. Naturally, papers based on special relativity are the most likely to receive “special material”.

“Speculation” [noun]



One of Newton’s Laws of Science, also known as “Newton’s Law of Speculation”, which states that “Speculation breeds Confusion”, an effect well illustrated by how theoretical particle and string theorists debate scientific issues with each other. Confused? You should be. After all, you’re dealing with a bunch of theoretical physicists.

“Professor” [noun]

The highest academic rank available, a level usually associated with Freedom of Speech. It may take 35 years of service before the University bestows that title.

“Publication Credibility” [noun]



A subjective notion, as its name implies, based almost entirely on the sexiness or importance of the field in which the research was done and not the quality of the writing and presentation.

Cycling through the prompt as length permitted:

“Placebo” [noun]

An inert substance used to placate a patient or control group. It is largely composed of sugar.

“Et al.” [proper noun]

Et al. is an abbreviation of the Latin phrase et alii, meaning “and other people”. In the context of science, it is typically used to shorten the name of a list of authors whose name and affiliation would otherwise need to be written out in full. It is used when the number of authors is small enough that the abbreviation would not render the reference too obtuse and cumbersome, while at the same time remaining large enough that there are at least two other people involved in the study (and hopefully a control group).

“Referee” [noun]

A critical reader, particularly one with the power to reject your submission. In the absence of more efficient forms of communication, peer review depends on these people. See “Reviewer”.

“Shared values” [noun]

A set of principles, beliefs, theories, methods, and operational definitions that all scientists share and use. Never spoken of aloud in public.

“Communication” [verb]



A method of persuasion that allows other researchers to review or criticize your work. You can communicate with others by reviewing articles, which allows them to revise their work; participating in panels, which allows others to hear your opinions; attending conferences, which allows you to strike up conversations at late-night dinners; joining committees, which lets you talk at any time; running experiments, which allows you to change the outcome based on the result; and receiving grants, which allows you to tell people what to do. You are likely communicating now.

“Panel” [noun]

A group of individuals who meet at some given time to discuss something. A more bureaucratic version of peer review for larger groups of individuals or committee members who have already established their worth in research.

“Data” [noun]



Information that appears as graphs in journals. Not surprisingly, this information is critical for interpretation and forming conclusions. Scientists spend much of their lives looking for data.

“Fraud” [noun]



A very evil thing that must be perpetrated or discovered in order for the paper to be accepted for publication. Sometimes fraud occurs even if both the scientific community and editors are prepared to reject the paper at the first sign of fraud, just because they know how bad it would look if fraud were found later. See also “Legitimation”.

“Legitimation” [verb]



The process whereby a fraudulent publication is converted into a legitimate publication by virtue of its appearance in a peer-reviewed journal. This can also be considered a type of reassurance that “the checks and balances are working”.

“Retraction” [verb]



The formal rejection of an accepted scientific claim after its publication. In contrast to “withdrawal”, which is when a paper is removed from publication with the consent of the author(s). See “Fraud”.

“Lies” [noun]



A form of deceptive speech that appears as “honest mistakes” or “inaccuracies”, which are often rejected from publication as a result. In the absence of other sources of information, however, lies are required for the maintenance of the scientific enterprise. See also “Journalistic integrity”.

“Scientists” [plural noun]



A group of people engaged in scientific research or education. Scientists vary widely in terms of age, moral character, ancestry, and success. Scientists with noteworthy accomplishments are sometimes described as “leaders” and regarded as experts in their fields, which leads to the tendency to treat them like demi-gods.

“Health” [noun]



A number indicating how unhealthy something is. When applied to humans, it quantifies how in need of health care someone is.

“Clinical research” [noun]



Research conducted on humans, eg. clinical trials and epidemiological studies. Researchers do not like this kind of research because humans are unresponsive and unreliable.

“Funding source” [noun]



Those who finance science by paying people to do things that the funder might not want to pay them to do if the funder knew what they were doing. For example, giving people money to research tobacco without actually running cigarettes through their noses would not be what the tobacco industry wants scientists to do for them. Some funding sources impose their will on the researchers by making sure that their funding is only allocated if certain lines of research are followed and other lines are not (this is sometimes known as a budget), while others may let the scientists do anything they want (this is sometimes known as no budget). The nature of research findings thus depends on the budget.

“Authorship” [noun]



The process whereby researchers “publish together”. The precise implications of this process depend on the publication type. In most cases, authorship represents the researcher’s contribution to the paper; however, plagiarism is also sometimes involved, especially if multiple authors fail to cite earlier work on which their own paper depends. There is also another kind

“Journal Impact Factor” [noun]

According to some, it is a value that corresponds to the average number of citations of articles published in a given journal, if the interpretation is right. Otherwise, it is a completely arbitrary number, computed from the number of times articles published in a given journal in the last two years were cited by other articles published in other journals, the assumption being that every paper published in a scientific journal must be essential to someone else’s research, or at least that that person would like to be able to cite it. The difficulty with this assumption is that the average time from submission to publication of an article is now approximately 12 months, and the current record stands at three years for Biochem. J. Proc. (2000). This means that three times as many papers have to be published every year as are actually written, with more and more papers being submitted and rejected every year (the reality is even worse, but we don’t have numbers), and with different standards applying to different journals, which are becoming increasingly specialized. All of these “facts” considered, the best any researcher can say about the Impact Factor of a given journal is: “I know it when I see it”. Note also: This formula can produce values up to 4 for an article appearing in a journal containing only that article, so one “article” can receive as many “citations” as a journal.

“Sterile” [adj]

Said of that which is dry and lifeless; devoid of humor; also, said of scientific writing and methods when germane to the matter at hand.

“Grant” [noun]

A form of scientific communication involving a large piece of paper, on which is written an enormous sum of money. The grant-writer then sends his or her (because they are almost always written by men) proposal into a maze of office politics, legislative/​funding agency reshuffling, and intellectual and interpersonal competition. The prospect of acquiring a grant makes up for the fact that grant-writers receive no practical training in writing grants, have no idea what kind of “research” they will have to conduct with it, and generally start research as soon as they are finished writing the grant (if they ever are). To be on the safe side, please note: One does not have to win a grant to “win a grant”; one must simply apply for a grant. Just do it.

“Peer review” [noun]

A system of literary communication that consists of first submitting a manuscript to the editor, who (if he has not been fired) usually asks at least one reviewer to read the manuscript. If there is any doubt about whether the editor has lost his mind, or whether the manuscript has some merit, then the reviewers and editor sit down over coffee and discuss the manuscript (in conference), which is described as “peer review”. This form of communication does not provide either editors or authors with much feedback about their manuscript, but it makes reviewers feel important.

“Precise” [adj]

Said of everything except experimental results; it also means correct, appropriate, or in accordance with the relevant standards. But when the experimental results do not agree with the model of the hypothesis, the phenomenon being studied becomes “precise”, and one “cannot make predictions about precise experiments”. Please keep in mind that most theories are meant to be general in nature, so they must also be imprecise in their predictions. If they could predict with precision, they would not be general, but they would still be interesting.

“Theory” [noun]

A scientific explanation of something, which people use only in situations where they think explaining things is more important than discovering things. Otherwise, scientific theories are always meant to be true, since they are believed by people with education; to a scientist, a theory is true; to a person with a higher IQ, a theory is a fact; and to a citizen with a lower IQ, a theory is an opinion.

“Significance” [noun]

Significance refers to whether a result of a study is somehow important. If a significant result is found, then the problem that the researchers have studied is of real-world importance. If the results aren’t significant, the problem is not important; therefore, more money and time can be wasted by pursuing it. (E.g. “This study showed that chocolate makes rats temporarily hyperactive, so we will look for ways to inhibit this reaction in humans. Thank you for your support.”)

“ResearchGate” [noun]

A website of questionable purpose that allows scientists to upload their “results”, resulting in the same sort of information clutter and bad or incorrect data as any other social media network.

“Rigor mortis” [noun]

A state of rigor in which scientists do not think outside the paradigm, no matter how much it does not work; rigor mortis is characterized by the desire of researchers to prove their prior research correct.

“Important” [noun]

(British English: “Important”) [adj]

Whether something is important to you personally or not, it is important in a statistical or numerical sense to someone. Therefore, it is always a useful adjective to add to any scientific term, especially when one does not have a better verb.

“ResearchGate” [noun]

A website of questionable purpose that allows scientists to upload their “results”, resulting in the same sort of information clutter and bad or incorrect data as any other social media network.

“Rigor mortis” [noun]

A state of rigor in which scientists do not think outside the paradigm, no matter how much it does not work; rigor mortis is characterized by the desire of researchers to prove their prior research correct.

“Rigorosity” [noun]

Another word for rigor or rigor mortis.

“Rigorousness” [noun]

Rigor’s nonchalant brother.

“Robustness” [noun]

This noun means “resistance to breakage”. This is different from the verb, “robustify”, which is an attempt to make a broken thing more resistant to breakage. When the two words are not confused, robustness is deemed by scientists to be a good thing, even though a greater robustness may mean a higher ability to break.

“Science” [noun]

  1. A specialized activity performed by a set of machines that some people call “scientists”, which is practiced all day long and in all waking hours, in labs and out of labs.
  2. What scientists do with the machines (or just machines) that they call “scientists”. Sometimes, science involves activities that would be considered strange by scientists, but it is important to remember that these people tend to take themselves more seriously than they would others.

“Scientific community” [noun]

A group of people that may share a passion for a common goal but, due to conflicts of interest (and even principles) cannot trust each other, which makes it less susceptible to manipulation or fraud; for this reason, science tends to be done by people in separate small groups that repeat the same experiments on different sets of humans.

“Scientist” [noun]

A person, typically male, who practices science day in and day out on sets of other people who, if they practice science at all, can usually only afford to do it in the evening after their work day is over and not related to any profession or business. Since these scientists have very similar backgrounds and training as each other, many of them talk about science as if they invented it as a profession (see “modesty”).

“Modesty” [noun]

The act of not claiming credit for discoveries or contributions to science that others could also make or make better; see “intellectual modesty”. Also see “science”, “scientist”, “insanity”, and “precious snowflake syndrome”.

“Skynet” [noun]

This ersatz god, in the form of an artificial general intelligence or AGI, is what some scientists hope to build to make themselves more precious than their subjects, to achieve their goal of making all the subjects equal to themselves; in other words, to put all the subjects into a bigger tin box to make themselves feel more important.

“Interdisciplinary Science” [noun]

A type of research that combines scientific activities across fields and subfields (in other words, you combine physicists, engineers, chemists, biologists, and any other specialists in your own discipline to do a different kind of science for a different kind of result); in the laboratory, the practice often involves adding frozen carbon dioxide (CO) and glassware (often Pyrex) into the liquid nitrogen that is produced by oil companies to make carbonic acid. In the laboratory, the scientists are usually not paid for their attendance.

“Scientific Integrity” [noun]

  1. Integrity as used in science, which is vaguely defined as always telling the truth to others and never fabricating the truth for oneself.
  2. The state of being superior to someone else.

“Skimmer” [noun]

An object placed in a liquid to remove fat and grease from the surface, typically used to clean soup and gravy off a plate. In scientific jargon, skimmers are “researchers” who skim off something from a body of work before making the rest public.

“Logic” [noun]

a thing that som