Visual Poetry with GPT-2

The text included in these concrete pieces is mostly output from GPT-2, a machine learning model that, once trained on a body of writing, can mimic its tone and style. You can also feed the model a prompt and it will continue from there, finishing the sentence or line.
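For the curious, here is a minimal sketch of that prompt-and-continue process, assuming the Hugging Face transformers library and the publicly released gpt2 checkpoint; the author-specific fine-tuned models used in these pieces are not assumed to be available:

```python
# A minimal sketch of prompt-and-continue generation with GPT-2.
# Assumes the public "gpt2" checkpoint from Hugging Face; the
# fine-tuned author models described in this piece are not public.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Two roads diverged in a"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; do_sample=True picks tokens probabilistically
# instead of always taking the single most likely next word.
output = model.generate(
    **inputs,
    max_length=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```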

Experimenting with models trained on the work of different authors led to these visual pieces. Their shapes are the result of strange questions now facing contemporary poetry: how does a human edit hundreds of thousands of words per minute of digital output? What happens when an author is gone but their voice can keep talking? How does this new kind of language collage let us braid and stitch our influences together?

Black text is output from a Robert Frost-trained AI model; red text is manually written by the author

An illustration of the branching paths the model can choose given an input phrase
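The branching in that illustration can be reproduced in code: sampling several continuations of the same prompt reveals the different paths the model might take. A short sketch, reusing the model, tokenizer, and inputs from the example above:

```python
# Sketch: sample several continuations of one prompt to see the
# branching paths the model can take (same setup as above).
outputs = model.generate(
    **inputs,
    max_length=25,
    do_sample=True,
    top_k=50,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for i, seq in enumerate(outputs):
    print(f"path {i}:", tokenizer.decode(seq, skip_special_tokens=True))
```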

Black text is manually written; red text is output from a model trained on the author's voice

White text is manually written; red text is output from a model trained on the author's voice

Red text is output from a model trained on Jhave Johnston's Rewrites; white text is output from a model trained on the author's voice

White text is manually written; red text is output from a model trained on Jhave Johnston's Rewrites