Web Final update

Continuing my midterm idea for my Live Web final.  Basically, I want to turn this into a more finished project.  I’m working on adding an actual frontend with some direction, so someone could come to the page and understand what’s going on without my instruction.

[Screenshot: in-progress frontend for the final]

I’m getting a general color theme and direction going, but I have a couple of other ideas I may switch to. Functionally, the main feature I need to add is the ability to “continue” the tour. Right now, if you’re in the panorama, you’d have to refresh the page to go somewhere new. That’s not ideal, so I want an input box that’s always on the page (obviously not directly in the middle of the screen), so you could visit multiple locations and have it work as a functional chat tool. Early user testing got me some good feedback, including potentially switching the headtracking control to the other party. I’m also planning to add a drawing tool to help with annotating locations on the screen; the drawings would fade out after a few seconds so they don’t muddy up the display. I’m working off of my code here.
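The fading annotations could work something like this – a minimal sketch, not final code (the element name and fade duration are placeholders of my own): each stroke gets a timestamp, and the canvas is redrawn every frame with opacity decreasing until the stroke disappears.

```javascript
// Sketch: strokes fade out a few seconds after they're drawn.
const canvas = document.getElementById('annotations'); // assumed overlay canvas
const ctx = canvas.getContext('2d');
const FADE_MS = 3000;          // how long a drawing stays visible
let strokes = [];              // each: { points: [{x, y}], t: timestamp }
let current = null;

canvas.addEventListener('mousedown', (e) => {
  current = { points: [{ x: e.offsetX, y: e.offsetY }], t: Date.now() };
  strokes.push(current);
});
canvas.addEventListener('mousemove', (e) => {
  if (current) current.points.push({ x: e.offsetX, y: e.offsetY });
});
canvas.addEventListener('mouseup', () => { current = null; });

function draw() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  const now = Date.now();
  // drop strokes that have fully faded (but keep one still being drawn)
  strokes = strokes.filter((s) => now - s.t < FADE_MS || s === current);
  for (const s of strokes) {
    ctx.globalAlpha = Math.max(0, 1 - (now - s.t) / FADE_MS);
    ctx.beginPath();
    s.points.forEach((p, i) => (i ? ctx.lineTo(p.x, p.y) : ctx.moveTo(p.x, p.y)));
    ctx.stroke();
  }
  requestAnimationFrame(draw);
}
draw();
```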

Live Web Inspiration

[Screenshots: Phantom Terrains signal map]

Inspiration for Live Web: a project by Frank Swain and Daniel Jones, with visualizations from the awesome Stefanie Posavec. Phantom Terrains is a visualization and sonification of Wi-Fi networks, heard through a hacked Bluetooth hearing aid. The map shows a test walk around the BBC Broadcasting House. “Network identifiers, data rates and encryption modes are translated into sonic parameters, with familiar networks becoming recognizable by their auditory representations.” “Stronger network signals are shown as wider shapes; the colour of each shape corresponds to the router’s broadcast channel (with white denoting modern 5Ghz routers), and the fill pattern denotes the network’s security mode.”

This is a really interesting project.  By combining hardware, live software and data visualization, the creators have managed to display some of the invisible signals we’re living amongst every day.  Even though we all know we’re using computers and smart devices, we often forget about (or don’t know about) the invisible connections and information flow around us.

Alter HTML through Arduino

For Live Web this week, I have a potentiometer dynamically adding and removing paragraph tags on an HTML page. The Arduino sketch feeds a websocket client script, which sends the potentiometer values to a node server listening on port 3000. When the server receives those values, it emits them using socket.io on port 4000. My index.html page listens for those values and adds or removes paragraph elements based on the number that comes in from the potentiometer (0-250). The full code is on github here. Code from Tom Igoe and David Tracy was used in getting this up and running.
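The relay portion looks roughly like this – a minimal sketch with my own naming, not the actual repo code (it assumes the ‘ws’ package for the incoming socket):

```javascript
// Sketch of the relay: values arrive over a plain websocket on port 3000
// and are re-emitted to any listening browsers via socket.io on port 4000.
const WebSocketServer = require('ws').Server;
const io = require('socket.io')(4000);           // browsers connect here

const wss = new WebSocketServer({ port: 3000 }); // the Arduino-side client connects here
wss.on('connection', (ws) => {
  ws.on('message', (data) => {
    const value = parseInt(data, 10);            // potentiometer reading, roughly 0-250
    if (!isNaN(value)) io.emit('pot', value);
  });
});
```

On the page, the socket.io client just grows or shrinks the paragraph list until it matches the incoming value (again a sketch; the divisor is an arbitrary choice of mine):

```javascript
// Sketch of the index.html side: one <p> per ~10 units of the 0-250 range.
const socket = io('http://localhost:4000');
const container = document.getElementById('paragraphs'); // assumed container div

socket.on('pot', (value) => {
  const target = Math.floor(value / 10);
  while (container.children.length < target) {
    container.appendChild(document.createElement('p')).textContent = 'p';
  }
  while (container.children.length > target) {
    container.removeChild(container.lastChild);
  }
});
```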

It’s a small prototype for now, but it makes me think of a physical mixer tool, where you can press a button to add an element like a div, adjust its size and other attributes with knobs and switches, and build a digital web page in real time through physical controls.

Live Web Recording & Playback

[Screenshot: soundboard with a recorded clip and download link]

My Live Web homework for the week uses socket.io, node.js and recorderjs to make a live audio soundboard. Allow microphone access, press record to begin recording yourself, and press stop to finish. An audio player and download link will then appear below. Multiple users can use it and play sounds concurrently, and any individual file can be downloaded. The full code is on github here.
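The record/stop flow boils down to something like this – a minimal sketch rather than my actual file (it assumes Matt Diamond’s recorderjs is loaded on the page, uses the promise-based getUserMedia, and the element ids are placeholders):

```javascript
// Sketch of the record/stop/playback flow with recorderjs.
let recorder;

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const audioCtx = new AudioContext();
  const source = audioCtx.createMediaStreamSource(stream);
  recorder = new Recorder(source); // recorderjs taps the Web Audio node
});

document.getElementById('record').onclick = () => recorder && recorder.record();

document.getElementById('stop').onclick = () => {
  if (!recorder) return;
  recorder.stop();
  recorder.exportWAV((blob) => {
    const url = URL.createObjectURL(blob);
    const audio = document.createElement('audio'); // player for the new clip
    audio.controls = true;
    audio.src = url;
    const link = document.createElement('a');      // and a download link
    link.href = url;
    link.download = 'clip.wav';
    link.textContent = 'download';
    document.body.append(audio, link);
    recorder.clear(); // reset buffers for the next take
  });
};
```

The socket.io piece that announces new clips to the other connected users is omitted here.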


Live Web Midterm // Interactive Storytelling

For my Live Web midterm, I used the Google Maps API, socket.io, node.js, headtrackr.js, and peerjs to make a new tool for people to share stories and experiences with each other. Right now, two users log on – one to the “sender” page, and one to the “receiver”. The sender inputs a location in the text box and submits it. On submission, the location is geocoded by Google, and the JSON object returned is parsed to get the lat/lon, which is used to set the Street View panorama. There is also a headtracker on the sender page, which…tracks the sender’s head. Values from the head tracker are mapped to the heading and pitch of Street View, creating the effect of looking around within it. There is also live audio streaming between the browser windows.
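Stripped down, the geocode-to-panorama step looks something like this (a sketch with placeholder element ids, not the actual snippet from my code):

```javascript
// Sketch: geocode the submitted text, pull the lat/lon out of the result,
// and point a Street View panorama at it.
const geocoder = new google.maps.Geocoder();

document.getElementById('locationForm').onsubmit = (e) => {
  e.preventDefault();
  const address = document.getElementById('locationInput').value;
  geocoder.geocode({ address: address }, (results, status) => {
    if (status !== google.maps.GeocoderStatus.OK) return;
    const loc = results[0].geometry.location; // the parsed lat/lon
    new google.maps.StreetViewPanorama(document.getElementById('pano'), {
      position: loc,
      pov: { heading: 0, pitch: 0 }
    });
  });
};
```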


I see this as a tool to enhance people’s storytelling across long distances – instead of just talking about a place or looking at a picture, they can log on and actually look around the location they’re talking about. In the future, I think annotations (either temporary drawings, or long-term markers left by people) could add another meaningful layer of interactivity. There is a short video below, and the full code is on github.

[Video: midtermDoc from John Farrell on Vimeo]

Live Web: WebRTC

Our homework assignment was to augment a previous homework using webRTC to send video, images or files in real time to other users. I decided to pursue something I had seen using the Google Street View API and a library called headtrackr. True to its name, headtrackr tracks your head. By mapping the head positions to the variables controlling Street View’s heading and pitch, you can control Google Street View by moving your head. I’ve had a hard time adapting this to actually use webRTC. I’m still working on it, but at the moment that part just doesn’t work. I’d like to adapt this so people can set locations for others to look around in, showing them their favorite places on Earth.
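The mapping itself is only a few lines – a sketch with my own guessed scaling factors, assuming an existing StreetViewPanorama called panorama and the usual headtrackr video/canvas inputs:

```javascript
// Sketch: map the tracked face position to Street View's point of view.
const videoInput = document.getElementById('inputVideo');
const canvasInput = document.getElementById('inputCanvas');

const htracker = new headtrackr.Tracker();
htracker.init(videoInput, canvasInput);
htracker.start();

// headtrackr fires 'facetrackingEvent' on document with the face's x/y position
document.addEventListener('facetrackingEvent', (event) => {
  const pov = panorama.getPov();
  pov.heading = (event.x - videoInput.width / 2) * 1.5;  // horizontal -> heading
  pov.pitch = -(event.y - videoInput.height / 2) * 0.5;  // vertical -> pitch
  panorama.setPov(pov);
});
```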


Node servers & web sockets

Live Web homework this week was focused on running web sockets on a remote node server. I made a little canvas drawing tool, which displays a different color for each user drawing on the canvas (the circles I drew in my active window were blue, but showed up orange in the other). It’s fine, and I think I understand sockets a little better, but I’m not too happy with it. I wasted A LOT of time trying to get the node server and sockets up and running on my Yun, with the hopes of streaming data from the Yun to the web or vice versa. After a lot of frustration, I realized I needed to go back and just try to understand sockets from the beginning, and had to spend a good deal of time on that. Lesson learned – don’t start with the most complicated thing you can think of. Screenshot below:

[Screenshot: shared canvas with another user’s circles in orange]
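The core drawing-and-broadcast logic is roughly this – a minimal sketch rather than the original code (it assumes the server rebroadcasts ‘draw’ events tagged with a color per sender):

```javascript
// Sketch of the shared canvas: draw locally in one color, emit the point,
// and draw points from other users in whatever color the server assigned.
const socket = io();
const canvas = document.getElementById('draw');
const ctx = canvas.getContext('2d');
const myColor = 'blue'; // what I see for my own circles

function drawCircle(x, y, color) {
  ctx.beginPath();
  ctx.arc(x, y, 5, 0, Math.PI * 2);
  ctx.fillStyle = color;
  ctx.fill();
}

canvas.addEventListener('mousemove', (e) => {
  if (e.buttons !== 1) return; // only draw while the mouse button is held
  drawCircle(e.offsetX, e.offsetY, myColor);
  socket.emit('draw', { x: e.offsetX, y: e.offsetY });
});

socket.on('draw', (d) => drawCircle(d.x, d.y, d.color || 'orange'));
```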



Websocket + Node chat

Week 2 of Live Web, we learned about creating a live chat service using websockets and node.js. We created a basic example in class and were supposed to see if we could expand on it at all. Fortunately, we had some leeway, since we barely got the topic covered in class. I spent about 8 hours trying to send picture URLs and have the photo appear on the other client, but I could never quite get it to work.
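For reference, the approach I was circling looks something like this – a minimal, untested sketch (event and element names are placeholders):

```javascript
// Sketch of the picture-url idea: send the URL as a normal chat message,
// and build an <img> on the receiving side when the text looks like an image.
const socket = io();
const input = document.getElementById('message');

document.getElementById('send').onclick = () => {
  socket.emit('chatmessage', input.value);
};

socket.on('chatmessage', (text) => {
  const li = document.createElement('li');
  if (/\.(png|jpe?g|gif)$/i.test(text)) {
    const img = document.createElement('img');
    img.src = text;         // render picture URLs as images
    li.appendChild(img);
  } else {
    li.textContent = text;  // everything else stays plain text
  }
  document.getElementById('messages').appendChild(li);
});
```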

I did manage to modify the chat application to send and receive JSON objects.  I’m not sure what exactly that’s useful for in the context of live chat, but it’s something different.

In this image, the user typed “hey!” and the resulting JSON object showing type, text, id and time is visible in the console.

[Screenshot: sender’s console showing the outgoing JSON object]

On the other side, the user gets the full JSON object, with a very clarifying “they said: ” in the console, as well as the time parsed to their local time. The id of the sender is hidden from the recipient.

[Screenshot: recipient’s console showing the parsed message]
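Stripped down, the JSON handling looks like this (a sketch; the field names follow the screenshots, everything else is placeholder):

```javascript
// Sending side: wrap the typed text in an object with type, id and time.
const socket = io();
document.getElementById('send').onclick = () => {
  socket.emit('chatmessage', JSON.stringify({
    type: 'chat',
    text: document.getElementById('message').value,
    id: socket.id,
    time: Date.now()
  }));
};

// Receiving side: parse it back, show the text and a localized time,
// and keep the sender's id out of what gets displayed.
socket.on('chatmessage', (raw) => {
  const msg = JSON.parse(raw);
  console.log('they said: ' + msg.text + ' at ' +
              new Date(msg.time).toLocaleTimeString());
});
```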


Overall this was pretty frustrating as I couldn’t get what I wanted to work, but I think with a little more time on this I’ll be more comfortable and capable of better realizing my project ideas.

Chat Experience

Tonight I spent a few minutes on ICQ chat. I filtered to the most popular chat room, “20-somethings,” and joined. There were roughly 250 people online at the time. Aside from the waves of people signing in and out of the room clogging the feed, there were probably only 5-7 people regularly talking. Instead of telling them about a recent personal experience, I regaled them with quotes from the Cormac McCarthy book “Blood Meridian.” Quotes included: “When the lamb is lost in the mountain, they is cry. Sometime come the mother. Sometime the wolf.” and “War endures. As well ask men what they think of stone. War was always here. Before man, war waited for him.” They’re slightly biblical in nature, and I was curious if people would respond to that at all. One guy played along, telling me that he had once been the lamb, as well as that “war is a dick.”

It seems difficult to have much real conversation on a random chat, especially with a notification landing mid-feed every time someone signed in or out. Most people wanted to talk about their favorite video game or make up incredibly vulgar nicknames for other people in the chat, which I guess is something an anonymous chat service facilitates. The medium forces rapid-fire conversation, often pointless in nature. This probably has to do with the fact that it’s an anonymous general forum rather than one focused on a given topic; I think a more focused room would go a long way towards a more enjoyable and refined experience.