GPT-2 Looper

About

GPT-2 Looper is an ongoing exploration into a hands-on approach to working with language models, influenced by the book Knots. Instead of fine-tuning GPT-2 to mimic specific text, I use small parts of stories as seeds to generate text, playing with different sampling parameters. I then hand-edit the output and feed it back into the model, repeating the process until the text more or less breaks down. This combination of generation, re-feeding, and hands-on manipulation produces some curious results. Sometimes it exposes odd corners of the model's training data; other times it reveals gaps where the model gets stuck, or surfaces a single coherent line in a sea of garble. Mostly, though, it is an experiment in how a text can "feel" through repetition and manipulation.
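For the curious, here is a minimal sketch of that generate, edit, re-feed loop in Python, using Hugging Face's transformers library. The seed text, sampling parameters, and number of rounds are illustrative assumptions rather than the exact settings used in the project, and the hand-editing step is only marked by a comment.

    # A rough sketch of the loop: generate from a seed, then re-feed the
    # output as the next round's prompt. All parameters are assumptions.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    text = "A small part of a story goes here."  # hypothetical seed text
    for round_num in range(5):  # repeat until the text breaks down
        inputs = tokenizer(text, return_tensors="pt")
        output = model.generate(
            **inputs,
            do_sample=True,      # sample instead of greedy decoding
            temperature=1.1,     # the "different parameters" to play with
            top_k=40,
            max_new_tokens=120,
            pad_token_id=tokenizer.eos_token_id,
        )
        text = tokenizer.decode(output[0], skip_special_tokens=True)
        # In the actual process, the text is hand-edited here before re-feeding.
        text = text[-2000:]  # keep the tail so the prompt fits GPT-2's context
        print(f"--- round {round_num} ---\n{text}\n")

In practice, nudging the temperature and top_k between rounds changes how quickly the text drifts into garble.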

Check out the GitHub repo for the fanfic zine.

Or the Flickr album.

There are also some blog posts.

Documentation

Generated Items

Printed

I eventually turned some of these experiments into small printed fanfics, and even into business cards.