To be honest, this week has been a wash; I feel as though I need a whole extra week dedicated to studying Markov chains just to understand what's going on in the syntax, let alone how to manipulate it for poetry. Working through the code examples from class with my own source material helped clarify the process itself, but overall it remains overwhelmingly mysterious to me.
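For my own future reference, here's the core mechanism boiled down as far as I can boil it. This is my own simplification in Python, not Allison's code: a Markov text generator just tallies which word follows which in the source text, then takes a random walk through that tally.

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the source."""
    words = text.split()
    chain = {}
    for current_word, next_word in zip(words, words[1:]):
        chain.setdefault(current_word, []).append(next_word)
    return chain

def generate(chain, length=20):
    """Walk the table: each next word is drawn at random from the words
    that actually followed the current word somewhere in the source."""
    word = random.choice(list(chain.keys()))
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: this word only ever appeared last
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

captions = open("captions.txt").read()  # placeholder name for my scraped captions
print(generate(build_chain(captions)))
```

That's the whole trick: no grammar, no meaning, just "what tended to come next."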
The poetified self
To get a better understanding of what's going on code-wise, I repopulated the examples from Allison's git with a scrape of my Instagram captions (the corpus swap itself is tiny; there's a sketch of it below). The captions on this particular account are fairly abstract to begin with, since I use it as a place to practice poetically expressive language, so the scrape reads as pretty avant-garde even without the context of the images. Here's a sampling:
I don't really broadcast my daily activities/emotions/experiences, the kind of captioning I think would produce a more standardized/predictable language model ("today I went to the..." "this makes me feel..." "checking out the [insert] event tonight at the [insert]"). It's interesting to consider poetry as a weapon against predictive-modeling bots, and to wonder what it would look like to build a whole cryptography network that obscures messages into poetry that can only be deciphered by a human who knows the language...
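As for the corpus swap I promised above, it really was small. This is a reconstruction from memory, assuming a markovify-style setup like the class notebooks; the library choice and filename are my stand-ins, not Allison's exact code:

```python
import markovify

# my scraped Instagram captions, saved out as plain text, one caption per line
with open("captions.txt") as f:
    text = f.read()

# NewlineText treats each line as its own "sentence," which suits a
# one-caption-per-line scrape; state_size is the n-gram order, so 2 means
# each next word depends on the previous two
model = markovify.NewlineText(text, state_size=2)

for _ in range(10):
    line = model.make_short_sentence(140)  # returns None when it can't build one
    if line:
        print(line)
```

In general, a lower state_size makes the output more chaotic, while a higher one on a small corpus tends to quote the source back almost verbatim.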
Thusly, my results were predictably (lol) chaotic. But stunningly beautiful, too? I actually found myself envious of some of the outputs, and felt at times like I was being out-poet-ed by the machine, or like I was looking into a reflection of myself as the purely poetified self. Here are some favorites:
The title of this blog post, "the algorithm is Wilson," was one of my absolute favorites, and it is so meta that I'm actually speechless. The "Wilson" reference comes from a joke I made about putting googly eyes on pComp assignments and calling them Wilson, since I am cast away on the island of grad school. This assignment fills the Wilson role quite well (I got mesmerized watching it produce), and it actually is an algorithm.
I can see now that my work in learning Markov models lies in "tuning" the results so they read more naturally. This is definitely where spaCy and textgenrnn come in, but with my current state of overload/illness I just wasn't able to get that far this week (a rough sketch of the idea is below). I also see how easily caption and comment bots can be created, which is pretty daunting. Like, scrape the comments section of a Fox News article and you've got one nasty armchair-warrior bot ready to deploy.
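Since I didn't get there myself, here's only a sketch of what I imagine that tuning might look like, not anything from class: use spaCy to part-of-speech tag each generated line and keep only the ones that read like sentences. The filtering rule is my own guess at a starting point.

```python
import markovify
import spacy

nlp = spacy.load("en_core_web_sm")  # needs: python -m spacy download en_core_web_sm

def reads_like_a_sentence(line):
    """My rough heuristic: keep lines that contain at least one verb and
    don't trail off on a preposition, conjunction, or article."""
    doc = nlp(line)
    words = [t for t in doc if not t.is_punct]
    if not words:
        return False
    has_verb = any(t.pos_ in ("VERB", "AUX") for t in words)
    dangles = words[-1].pos_ in ("ADP", "CCONJ", "SCONJ", "DET")
    return has_verb and not dangles

model = markovify.NewlineText(open("captions.txt").read())

kept, attempts = [], 0
while len(kept) < 5 and attempts < 200:  # cap attempts so a tiny corpus can't loop forever
    attempts += 1
    line = model.make_short_sentence(140)
    if line and reads_like_a_sentence(line):
        kept.append(line)

print("\n".join(kept))
```

textgenrnn would be the heavier route, training a neural net on the captions instead of a lookup table as I understand it, but that's a next-week problem.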