GPT-2 Looper

About

GPT-2 Looper is an ongoing exploration into a hands-on approach to working with language models, influenced by the book Knots. Instead of fine-tuning GPT-2 on large amounts of text to try to mimic something, I use very small fragments of text to generate looping results by playing with different parameters. These are texts I write myself and things I find online (planning documents, fics, comments, etc.). I then hand-edit the output and re-feed the results back into GPT-2 until it breaks down. This combination of generation, re-feeding, and hands-on manipulation produces some curious results. Sometimes it exposes weird parts of the model's base dataset; other times it shows gaps where the model gets stuck, or brings back a single line in a sea of garble. Mostly, though, it is an experiment into how a generated text can feel through repetition.
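A minimal sketch of what that generate / re-feed loop could look like, assuming the Hugging Face transformers implementation of GPT-2. The model name, sampling parameters, and number of passes are illustrative assumptions only, not the project's actual setup, and the hand-editing step happens outside the code.

```python
# Sketch of the looping process: start from a small text fragment,
# generate a continuation, then feed the result back in as the next prompt.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = "a very small fragment of text"  # seed fragment (placeholder)

for step in range(5):  # re-feed until the text starts to break down
    inputs = tokenizer(text, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=60,
        do_sample=True,
        temperature=1.1,  # illustrative sampling parameters
        top_k=40,
        pad_token_id=tokenizer.eos_token_id,
    )
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    # In the actual process the output is hand-edited here before the next pass.
    print(f"--- pass {step + 1} ---\n{text}\n")
```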

Check out the GitHub repo for the fanfic zine.

Or the GPT-2 Flickr album.

There are also some blog posts.

Documentation

Generated Items

Printed

I eventually turned some of these experiments into small printed fanfics, and even into business cards.