Gabriella Garcia

Live Web weeks 4-5: Sharing Poetry

After struggling through the first few weeks of class, I sort of flipped the table over on the chatroom-focused iterative process and basically started from scratch in week four. It ended up being a super productive decision; here's what I figured out in that time:


- I wasn't grasping the logic by working off the in-class example code, so I decided to follow tutorials line by line so I could trace the order of operations as I typed it out.

- Most of the extracurricular tutorials I found were using Express (even socket.io now uses Express in its documentation), and thus I was getting stuck when looking outside of class code for examples. I decided to figure out Express in order to overcome that snag; a rough sketch of that basic server setup follows this list.

- I levelled up my code documentation by learning the "industry standard" presentation, including keeping a git repo.

- Inline JavaScript was roadblocking my progress, so I reconnected with p5.js and learned how to call the p5 sketch from the HTML. This allowed me to figure out the interactive elements apart from everything else and then smoosh them together afterwards.
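
For reference, here's roughly what that basic server setup looks like. This is a minimal sketch using current Express + socket.io syntax, with placeholder file names and a placeholder "click" event rather than the exact code from my repo:

```javascript
// server.js: minimal Express + socket.io relay (placeholder names throughout)
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

// serve index.html and sketch.js from a /public folder
app.use(express.static('public'));

io.on('connection', (socket) => {
  console.log('connected:', socket.id);

  // forward each client's click data to everyone else
  socket.on('click', (data) => {
    socket.broadcast.emit('click', data);
  });

  socket.on('disconnect', () => console.log('disconnected:', socket.id));
});

server.listen(3000, () => console.log('listening on http://localhost:3000'));
```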


Poetry chat

Our assignment was to use mouse data toward making a collaborative experience based on mouse events. Drawing boards using mouseDragged are fairly common, so I focused instead on using "click" data for interactivity. For a minimum viable test I created a sketch that would generate random lines plucked from an array wherever you clicked the mouse:

Presentation mode because this CMS is defaulting to a tiny canvas: https://editor.p5js.org/medusamachina/present/2TLr33MwH
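
For anyone curious, the core of that first sketch is only a few lines. This is a simplified reconstruction, with placeholder text standing in for the actual lines of poetry I used:

```javascript
// sketch.js: drop a random line of text wherever the mouse is clicked
let phrases = [
  'placeholder line one',
  'placeholder line two',
  'placeholder line three'
];

function setup() {
  createCanvas(windowWidth, windowHeight);
  background(255);
  fill(0);
  textSize(18);
}

function mousePressed() {
  // random() called on an array returns a random element
  text(random(phrases), mouseX, mouseY);
}
```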


The next step was calling the sketch in the HTML. From there I began the sockets portion, warping Shiffman's collaborative drawing board tutorial to serve my purpose (sketched below). I intended for both clients to experience the same line of text, but was pleasantly surprised when each side plucked its own random line from the array, and I kept it as a happy accident/bug-turned-feature. I also made smaller changes to make the text more legible.
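
Sketching what the client side looked like at this stage (the event name and data shape are my shorthand here, not necessarily what the repo uses): the HTML only needs three script tags, one for the p5 library, one for /socket.io/socket.io.js (which the socket.io server exposes automatically), and one for sketch.js. The sketch then emits each click's position and draws whatever comes back:

```javascript
// sketch.js: socketed version. A click shows up on every connected canvas,
// but each side plucks its own random phrase.
let socket;
let phrases = ['placeholder line one', 'placeholder line two', 'placeholder line three'];

function setup() {
  createCanvas(windowWidth, windowHeight);
  background(255);
  fill(0);
  textSize(18);

  socket = io(); // connect back to the server that served this page

  // when another client clicks, draw a (locally chosen) random phrase at their position
  socket.on('click', (data) => {
    text(random(phrases), data.x, data.y);
  });
}

function mousePressed() {
  text(random(phrases), mouseX, mouseY);          // draw locally
  socket.emit('click', { x: mouseX, y: mouseY }); // broadcast only the position
}
```

Because only the position travels over the socket and each client picks its own phrase, the two sides end up with different lines, which is exactly the bug-turned-feature described above.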


From there I added an audio function that would let one client record while the mouse was held down and then send the audio over to any other users on release (mouseReleased). This became the basis for my midterm, and the documentation for that project covers much of the thought and production process that was initially iterated here. With that in mind, here's the git repo for version one of poetry chat (minus SSL certs, node modules, and the p5 dom/sound libraries for the sake of upload time):
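
As a standalone illustration of that record-while-pressed / send-on-release flow (separate from the repo itself), here's a rough sketch using p5.sound's p5.AudioIn and p5.SoundRecorder; the 'voice' event name and the blob handling are assumptions for the sake of the example, not my exact implementation:

```javascript
// sketch.js (audio portion): hold the mouse to record, release to send
let socket, mic, recorder, clip;

function setup() {
  createCanvas(windowWidth, windowHeight);
  socket = io();

  mic = new p5.AudioIn();
  mic.start(); // the browser will ask for microphone permission

  recorder = new p5.SoundRecorder();
  recorder.setInput(mic);
  clip = new p5.SoundFile(); // the recorder writes into this sound file

  // play back whatever audio another client sends over (arrives as an ArrayBuffer)
  socket.on('voice', (data) => {
    const url = URL.createObjectURL(new Blob([data], { type: 'audio/wav' }));
    new Audio(url).play();
  });
}

function mousePressed() {
  recorder.record(clip); // start recording while the mouse is held down
}

function mouseReleased() {
  recorder.stop();                      // stop writing into the sound file...
  socket.emit('voice', clip.getBlob()); // ...and send it to everyone else
}
```

On the server side, the 'voice' event just gets relayed with socket.broadcast.emit, the same way 'click' does in the earlier sketch.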



A quick screen recording of the oldest poetry chat prototype, for visual reference; please refer to the midterm documentation for a more fleshed-out version of the project, including screen capture documentation:

