Temporary Expert Final :: Surveillance


For my final topic in Temporary Expert, I wanted to look at the incredibly broad topic of surveillance.  It’s an old topic, and one that came back to the forefront last year through the Snowden leaks.  While it was always something I had an interest in, it rarely went past fleeting interest.  Sometime in the last couple of months, I came across the concept of Van Eck Phreaking, and it’s something that has just stuck in the back of my mind.  Van Eck Phreaking is a way to reconstruct the contents of a display, originally CRTs but now flat panels as well, by interpreting the electromagnetic signals that the electronics give off.

Two things about that technology stood out to me: One, the complete and utter privacy invasion into the personal portal that is your laptop screen.  It’s the ultimate Big Brother version of someone leering over your shoulder; it presents the opportunity to steal everything personally and professionally important that you do.  The other was the sheer “magic” factor of it – listening to stray electronic signals, the byproduct of our devices, and being able to reconstruct our activities through unexpected patterns.  Thinking about the topic was enough to send me down the path of surveillance.

Because my interest in the topic had been so random, my breadth of knowledge was insufficient.  I set out to explore surveillance in all its forms.  I researched everything from the origins of spying to the technical programs commissioned by the NSA that spy on Americans to this day.  I read about famous government moles, the Stasi, methods of computer encryption, artists working in this space, and recently publicized breaches.  I also had conversations with Alexander Galloway (previously of Carnivore fame) and Lauren McCarthy.  I had a very nice conversation with Alex about ethics and approaches, although it seemed he doesn’t do much work in this realm anymore.  I didn’t have the most productive conversation with Lauren, though much of that could be blamed on me, as I wasn’t totally sure what I was going in to ask her.  It was helpful to have to articulate my own thoughts, but not much past that.

From that research, I developed a few initial conclusions.  The space is broad, exceedingly complex, and very futuristic.  The technology is incredibly sophisticated, and pervasive enough that it’s tough to escape while living in modern society.  The other thing I realized is people’s general ambivalence toward the topic.  Edward Snowden leaked the details of the largest government data collection system ever in June of 2013, and nothing has really changed.  There were protests (full disclosure – I didn’t go, and had to look up whether there were even protests), but a couple of weeks ago, reform of the NSA was voted down.  You can even hear people defend the collection (“If you don’t have anything to hide…”).  I’m really curious why people don’t seem to care about this.  Is the technology and scope just too far over many people’s heads?  Is it a sense of hopelessness?  Is it removed just enough from your daily life that you forget and carry on?  Before I could try to answer such a complex question, I began a series of experiments to examine the nature of surveillance, in various forms, to try to break things down for myself.

I wanted to examine what people do, say, and make.  And I thought it was important that my experiments involve me being on both sides of the situation.  Watching and being watched.  My first experiment involved watching myself, watching what I do, as an ode to Van Eck Phreaking (something I wasn’t going to accomplish in this timeframe).  I wrote a script to capture my desktop every 5 minutes for a week, and compiled the images.
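The script itself was simple.  A minimal sketch of the idea in node.js – my reconstruction, not the original script – shelling out to OS X’s screencapture utility, which is also what produces the shutter sound mentioned below:

```javascript
// Capture the desktop every 5 minutes via OS X's screencapture utility.
// Run with: node capture.js
var exec = require('child_process').exec;

var INTERVAL = 5 * 60 * 1000; // 5 minutes, in milliseconds

function takeScreenshot() {
  var filename = 'desktop-' + Date.now() + '.png';
  // screencapture plays the system shutter sound by default,
  // which is the audible feedback described below.
  exec('screencapture ' + filename, function (err) {
    if (err) console.error('capture failed:', err);
  });
}

takeScreenshot();
setInterval(takeScreenshot, INTERVAL);
```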

Right away I noticed behavioral changes.  The script still played the Mac screenshot sound, so I knew every time it took a screenshot.  I began thinking constantly about what I had up on the screen.  Even though I was controlling the app, and the photos were saving locally, I knew this was something I wanted to put online, so I was hesitant.  I’d quickly do sensitive tasks right after a screenshot, or try to sneak in a Facebook check; I almost felt like I should always have work up on the screen.  But if I had music playing, or the volume muted, or headphones plugged in, I wouldn’t hear the sound, and would forget it was happening.  That was my first indication that feedback was potentially a really important aspect of these experiments.

My second experiment focused on what people say.  Using an analog circuit (thanks to some internet resources, of course – I didn’t come up with this), I built a laser microphone.  Sound makes things vibrate: speakers vibrate when they play music, just as glass vibrates when you talk behind it.  By placing a small mirror on a speaker, I could shine a laser at it and catch the reflection on a photoresistor.  Running that signal through a transistor to a headphone jack, you can hear the music playing simply by catching the reflections of a laser.  When I showed this to people, their reaction was always the same – it’s like magic.  This is an old technology that has been around for decades, but it was leaving people speechless when they realized what was happening.  That says a lot about the state of today’s technology.

Lastly, I did two experiments on what people make.  The first, as the one who surveils.  Thanks to some help from Surya, I was able to engage in what’s called packet sniffing, the act of watching the packets a computer sends over a network when it makes requests to a server.  Again, this is not a new technology.  But it’s incredibly interesting to see network traffic flashing across your computer.  I only did it against localhost on my own machine, but you immediately feel a sense of power when you can identify the packets containing “payloads,” the actual content being sent, open them, and read them.  You can also inject packets, in what’s called a man-in-the-middle attack: I see a packet come through requesting information, and step in to impersonate the response, sending what I want.  I used this to control a small networked game of pong, where I could intercept my own commands and change them.  There are obviously security measures to prevent me from doing this to your Gmail, but in many cases all that’s really stopping me is my own technical ability.  This is a thing that is done, and there’s a whole range of emotions you go through realizing you can perform actions like this.
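The sniffing itself was done with existing tools, but the man-in-the-middle pattern is simple to sketch.  Below is a toy stand-in (not the pong code): a tiny node.js TCP proxy that sits between a client and a server on localhost, reads each payload as it passes through, and rewrites it before forwarding.

```javascript
// A toy man-in-the-middle: a TCP proxy that inspects and rewrites payloads.
// Point a client at port 9000 instead of the real server on port 9001.
var net = require('net');

var TARGET_HOST = '127.0.0.1'; // the real server (hypothetical)
var TARGET_PORT = 9001;

net.createServer(function (clientSocket) {
  var serverSocket = net.connect(TARGET_PORT, TARGET_HOST);

  clientSocket.on('data', function (chunk) {
    console.log('payload seen:', chunk.toString());
    // "Inject" by rewriting the payload before it reaches the server,
    // e.g. flipping a pong command from UP to DOWN.
    var tampered = chunk.toString().replace('UP', 'DOWN');
    serverSocket.write(tampered);
  });

  // Relay responses back to the client untouched.
  serverSocket.on('data', function (chunk) {
    clientSocket.write(chunk);
  });

  clientSocket.on('error', function () { serverSocket.end(); });
  serverSocket.on('error', function () { clientSocket.end(); });
}).listen(9000);
```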

For the last experiment, I wanted to get a sense of what I make.  And, as far as the government or advertisers are concerned, I make data.  Data to track me, analyze my actions, place me in a demographic, and sell things to me.  Every one of us does this, and it drives both government surveillance and the entire online advertising world.  I thought back to the feedback sounds from my desktop experiment, and to examples like We Live in Public, a great documentary.  People are usually fine with surveillance until they have to be confronted with it.  The realization that it’s constantly happening, the cognizance of being watched, is what breaks people down mentally and emotionally.  I wrote a Chrome extension that blinks a small red light every time you (theoretically) create a data point.  I didn’t have time to track the data you actually generate, but it simulates this by tracking clicks, text inputs, web page changes, etc.  For any action that would create a data point used to quantify you, a light blinks.  I haven’t had time to really test it, as I just finished it, but I’m optimistic(?) about the potential.  I don’t know if another blinking light is what we really need, if it will help anything or drive awareness, or even what the goal really is, but I think it’s a step in the right direction.
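At its core, the extension is a content script that listens for the kinds of events that typically generate data points and flashes an indicator.  A minimal sketch (the event list and styling here are simplified stand-ins, not the extension’s exact code):

```javascript
// Content script: flash a small red dot whenever a "data point" event fires.
var dot = document.createElement('div');
dot.style.cssText = 'position:fixed;top:10px;right:10px;width:12px;' +
  'height:12px;border-radius:50%;background:red;opacity:0;' +
  'z-index:999999;transition:opacity 0.15s;';
document.documentElement.appendChild(dot);

function blink() {
  dot.style.opacity = '1';
  setTimeout(function () { dot.style.opacity = '0'; }, 200);
}

// Events standing in for data-point creation: clicks, typing,
// scrolling, and page (history) changes.
['click', 'keydown', 'scroll', 'popstate'].forEach(function (evt) {
  window.addEventListener(evt, blink, true);
});
```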

Live Web Inspiration


Inspiration for Live Web: a project by Frank Swain and Daniel Jones, with visualizations from the awesome Stefanie Posavec.  Phantom Terrains is a visualization and sonification of wifi networks, heard through a hacked Bluetooth hearing aid.  The map shows a test walk around the BBC Broadcasting House.  “Network identifiers, data rates and encryption modes are translated into sonic parameters, with familiar networks becoming recognizable by their auditory representations.”  “Stronger network signals are shown as wider shapes; the colour of each shape corresponds to the router’s broadcast channel (with white denoting modern 5Ghz routers), and the fill pattern denotes the network’s security mode.”

This is a really interesting project.  By combining hardware, live software and data visualization, the creators have managed to display some of the invisible signals we’re living amongst every day.  Even though we all know we’re using computers and smart devices, we often forget about (or don’t know about) the invisible connections and information flow around us.

Alter HTML through Arduino

For Live Web this week, I have a potentiometer dynamically adding and removing paragraph tags on an HTML page.  The Arduino sketch sends the potentiometer values to a websocket client script.  That client sends the values to a node server listening on port 3000.  When it receives those values, it emits them using socket.io on port 4000.  My index.html page listens for those values, and adds or removes paragraph elements based on the number that comes in from the potentiometer (0-250).  The full code is on GitHub here.  Code from Tom Igoe and David Tracy was used in the process of getting this up and running.
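The browser end of the chain looks roughly like the sketch below (the event name ‘pot’ is a stand-in): the page reconciles the number of paragraph elements with each incoming value.

```javascript
// index.html client: map incoming potentiometer values (0-250) to a
// number of <p> elements.  Assumes /socket.io/socket.io.js is loaded.
var socket = io('http://localhost:4000');

socket.on('pot', function (value) {
  var target = Math.floor(value / 10);                  // 0-250 -> 0-25 paragraphs
  var paragraphs = document.getElementsByTagName('p');  // live collection

  // Add paragraphs until we reach the target...
  while (paragraphs.length < target) {
    var p = document.createElement('p');
    p.textContent = 'paragraph ' + (paragraphs.length + 1);
    document.body.appendChild(p);
  }
  // ...or remove them if the knob was turned down.
  while (paragraphs.length > target) {
    document.body.removeChild(paragraphs[paragraphs.length - 1]);
  }
});
```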

It’s a small prototype for now, but it makes me think of a physical mixer tool, where you can press a button to add an element like a div, adjust its size and other attributes with knobs and switches, and build a digital web page in real time through physical controls.

Temporary Expert Update

After narrowing down to the extremely broad topic of “surveillance,” I started to work on some daily experiments.  Without a clear project idea, I was hoping these would help to clarify my thoughts and lead to an interesting project.  Using this idea generator, with areas to plug in mood, time period, arc and audience, I did daily sketches of scenarios based on what was presented to me.  Below are some examples of scenarios, as well as pages of sketching.

[Images: scenario examples and sketchbook pages]

So, that sort of helped.  It most definitely got me thinking differently, but still didn’t solidify an idea.  To start thinking about the effects of surveillance, I wrote a script that screenshots my desktop every 5 minutes.  I have a large collection now, and am probably going to overlay them all with very low alpha, or make a grid photo.  I began to feel the psychological effects of this almost immediately, even though it was something I was doing myself, that only I controlled and could delete at any time.  I found myself closing my laptop more, trying to vary what I was leaving up on the screen if I stepped away, and trying to keep more work-oriented things up on the screen instead of something like a sports article or anything I would consider non-serious.

The last class proved extremely helpful for me.  As I talked through the project more with Kina and Dan, I began to solidify the idea of breaking “surveillance” down into the various components and methods used to do surveillance, and trying different experiments within those areas.  Most interesting to me has been the idea of passive surveillance: observing the byproducts of human activity, which leave a trail of data that can be used to track people.  Van Eck Phreaking got me interested in this idea, which has been reinforced by discovering things like stealing audio signals through gyroscopes, or deciphering sound from vibrating glass.  I have a list of resources and people to talk to after class, and I’ve been trying to read and watch a lot of classic cyberpunk and near-future fiction to immerse myself more in the topic.  This update is a few days late, so I will have another coming shortly after I reach out to a few people like Eric Rosenthal, Kyle McDonald, and Alex Galloway.  I’ve also talked to Shawn VE and Surya, both of whom were excited about what I was doing, and were very willing to help and offer resources.

Live Web Recording & Playback


My Live Web homework for the week uses socket.io, node.js and recorderjs to make a live audio soundboard.  Allow microphone access, press record to begin recording yourself, and press stop to stop.  An audio player and download link will then appear below.  Multiple users can use it and play sounds concurrently, and any individual file can be downloaded.  HTML file below, full code on GitHub here.
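The recording flow is roughly the following sketch, assuming the common Recorder.js API (record / stop / exportWAV) and the 2014-era getUserMedia; the element IDs and the ‘clip’ event name are stand-ins:

```javascript
// Record mic audio with Recorder.js, make a playable clip, and share it.
// Assumes recorder.js and /socket.io/socket.io.js are loaded on the page.
var socket = io();
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var recorder;

navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
navigator.getUserMedia({ audio: true }, function (stream) {
  recorder = new Recorder(audioContext.createMediaStreamSource(stream));
}, function (err) { console.error('no mic access:', err); });

document.getElementById('record').onclick = function () { recorder.record(); };

document.getElementById('stop').onclick = function () {
  recorder.stop();
  recorder.exportWAV(function (blob) {
    // Local playback + download link...
    var url = URL.createObjectURL(blob);
    var player = document.createElement('audio');
    player.controls = true;
    player.src = url;
    document.body.appendChild(player);
    // ...and hand the raw blob to the server to broadcast to other users.
    socket.emit('clip', blob);
  });
  recorder.clear();
};
```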

 

Temporary Expert: Carnisseur


[Images: Carnisseur in vitro meat patent, pages 1-7]

The above patent represents the latest efforts of the Carnisseur Corporation.  We have been the leading force in the in vitro meat industry, and our constant innovations have kept millions sustainably satisfied for years.  Recently, our patent on the accelerated growth of meat cultures expired, allowing many to enter the in vitro meat market and begin experimenting.  Some people have taken to hacking our printers and culture cartridges, creating unsafe meat combinations and printing more meat from a cartridge than is appropriate, which can lead to potentially cancerous cells.

As a result, Carnisseur has had to respond with changes to our meat printing system.  The first is our new Culture Lock Connector, a patented digital connector exclusive to the Carnisseur line of printers and cartridges.  The second is a system of Digital Rights Management software loaded onto the cartridges and printers.  Only Carnisseur-approved cartridges will function on the printer, and any tampering will result in destruction of the cells within the cartridge.  The cartridges will also stop printing after producing 4 lbs of meat, to prevent the risk of potentially harmful meat.  When consumers are finished with their cartridges, they are encouraged to send them back to Carnisseur for reuse, and will receive a discount in the process.

_________________________________________________________

In this project, we wanted to explore the ways the current landscape of regulations and litigious business practices could negatively affect the world of near-future food, which is so often viewed through rose-colored glasses.  In vitro meat is an expensive endeavor, and people will want to profit off of its creation.  A combination of patents ensures the Carnisseur corporation’s competitive domination of the in vitro meat landscape.  Having already dominated the printer market, Carnisseur printers now only print from Carnisseur meat cartridges, freezing out any competitors.  Partially under the guise of consumer safety and brand quality, Carnisseur no longer has to worry about advances from large producers like Tyson, Perdue or Applegate.

Carnisseur was made with Jason Sigal and Karam Byun

Social Bieber Analysis

An exploration into the world of Justin Bieber, on Instagram.  Posted media that included the tag #belieber was pulled, parsed and analyzed.  Our attempt at leapfrogging each other’s data requests was unsuccessful, and is shown below.

After that is an OpenOrd hashtag co-occurrence graph displaying the many communities using the #belieber tag, grouped by the other hashtags they use.  Some were expected, like those posting about other teeny bopper bands and Justin Bieber.  Others were not, like someone trying to promote their jewelry brand, the cast of the TV show Teen Wolf, or a group that mainly posted about guns and 2nd Amendment rights.

The map shows the locations of the most frequent #belieber posters.  Made with David Tracy.

 

[Images: #belieber data requests, hashtag co-occurrence graph, and poster location map]

Maps Finale

Finishing off Maps Lies & Storytelling with two maps – one two-color map, and a final.  For my two-color:

My first map in d3, required to be only two colors.  This shows all of the airports in the world, with nodes sized according to the elevation of the airport.  The larger dots end up showing the topography, with mountain ranges (notably, the Andes) visible.  Hovering highlights a dot with a larger ellipse, as shown below, and you can pan and zoom.  I’d like to refine the aesthetic more, and focus on some better interaction, but it’s OK for a first d3 map.  The map lives here.  Code is below.

[Screenshot: d3 airport map with a hovered dot highlighted]

 

I abandoned my ebola map plan out of frustration with inconsistent data and a lack of inspiration to make a good map.  I turned back to some social data-inspired map making, exploring the community of people talking about chemtrails on Instagram.  I used Python to access the Instagram API, found 270k posts using the tag #chemtrails, then pulled the most recent 3.3k posts and filtered them down to those with location data.  Various filtering and parsing left me with some lists and dictionaries that I could then turn into a network graph using the networkx library.  Nodes represent users, with node size mapped to their number of posts.  Edges represent relationships between users using the same hashtags, colored by modularity class.

I highlighted the most prominent users, displaying their name, followers, following and post counts.  Code is at the bottom of the post.

[Image: #chemtrails network graph]

d3 map code:
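In lieu of the full listing, a minimal sketch of the approach in the d3 v3 idiom (the CSV filename and its column names are assumptions):

```javascript
// Plot every airport as a circle sized by elevation (d3 v3 idiom).
// Assumes airports.csv with columns: name, latitude, longitude, elevation.
var width = 960, height = 500;

var projection = d3.geo.mercator()
    .translate([width / 2, height / 2])
    .scale(150);

var svg = d3.select('body').append('svg')
    .attr('width', width)
    .attr('height', height);

// Square-root scale so circle *area* tracks elevation (feet).
var radius = d3.scale.sqrt().domain([0, 14000]).range([0.5, 6]);

d3.csv('airports.csv', function (error, airports) {
  if (error) throw error;

  svg.selectAll('circle')
      .data(airports)
    .enter().append('circle')
      .attr('cx', function (d) { return projection([+d.longitude, +d.latitude])[0]; })
      .attr('cy', function (d) { return projection([+d.longitude, +d.latitude])[1]; })
      .attr('r', function (d) { return radius(+d.elevation); })
      .attr('fill', 'black')          // two colors: black dots on white
    .append('title')
      .text(function (d) { return d.name; });
});
```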
chemtrails code:
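Likewise, the original chemtrails pipeline used Python with networkx, but the core graph-building step looks roughly like this (sketched in JavaScript for consistency, with assumed field names):

```javascript
// Build graph data from Instagram posts: one node per user (sized by
// post count), one weighted edge per pair of users sharing a hashtag.
// `posts` is assumed to look like: { user: 'name', tags: ['chemtrails', ...] }
var posts = require('./chemtrails_posts.json');

var postCounts = {};  // user -> number of posts (node size)
var tagUsers = {};    // hashtag -> set of users who used it

posts.forEach(function (post) {
  postCounts[post.user] = (postCounts[post.user] || 0) + 1;
  post.tags.forEach(function (tag) {
    tagUsers[tag] = tagUsers[tag] || {};
    tagUsers[tag][post.user] = true;
  });
});

// Every pair of users co-occurring on a tag gets an edge; the weight is
// the number of hashtags they share.
var edges = {};
Object.keys(tagUsers).forEach(function (tag) {
  var users = Object.keys(tagUsers[tag]);
  for (var i = 0; i < users.length; i++) {
    for (var j = i + 1; j < users.length; j++) {
      var key = users[i] + '|' + users[j];
      edges[key] = (edges[key] || 0) + 1;
    }
  }
});

console.log('nodes:', Object.keys(postCounts).length,
            'edges:', Object.keys(edges).length);
```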

Live Web Midterm // Interactive Storytelling

For my Live Web midterm, I used the Google Maps API, socket.io, node.js, headtrackr.js, and peerjs to make a new tool for people to share stories and experiences with each other.  Right now, two users log on – one to the “sender” page, and one to the “receiver.”  The sender inputs a location in the text box and submits it.  On submittal, the location is geocoded by Google, and the returned JSON object is parsed to get the lat/lon, which is used to set the Street View panorama.  There is also a headtracker on the sender page, which…tracks the sender’s head.  Values from the head tracker are mapped to the heading and pitch of Street View, creating the effect of looking around within the panorama.  There is also live audio streaming between the browser windows.
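The interesting glue is the mapping from head position to the Street View point of view.  A rough sketch (the frame dimensions and output ranges are stand-ins; headtrackr reports face position via its ‘facetrackingEvent’):

```javascript
// Map headtrackr face position to the Street View camera (sender page).
// `panorama` is an existing google.maps.StreetViewPanorama instance.
document.addEventListener('facetrackingEvent', function (event) {
  // event.x / event.y are the tracked face's position in the video frame.
  // Horizontal movement drives heading, vertical movement drives pitch.
  var heading = map(event.x, 0, 320, -90, 90);
  var pitch = map(event.y, 0, 240, 30, -30);
  panorama.setPov({ heading: heading, pitch: pitch, zoom: 1 });
});

// Simple linear re-mapping helper.
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}
```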

 

I see this as a tool to enhance people’s storytelling experience across long distances – instead of just talking about a place, or looking at a picture, they can log on and really look around the location while talking about it.  In the future, I think annotations (either temporary drawings, or long-term markers left by people) could add another meaningful layer of interactivity.  There is a short video below, with a code snippet below that.  The full code is on GitHub.

midtermDoc from John Farrell on Vimeo.

Temporary Expert Update

Karam, Jason and I have decided to further pursue the topic of in vitro meat.  While it’s painted by many as one of the panaceas for world hunger, we feel it’s a solution often looked at through rose-colored glasses.  This is an extremely complicated, complex undertaking.  It will take many years and an enormous financial investment for this to ever potentially become a scalable product for the masses.  Should in vitro meat reach that level, it will be subject to all of the intricacies, restrictions, regulations, and _____ that other consumer products deal with on a regular basis.

There is already a US patent for “…the production of tissue engineered meat for human consumption, wherein muscle and fat cells would be grown in an integrated fashion to create food products such as beef, poultry and fish.”  So there’s a patent on the overall process.  But what about further down the line, when that patent has expired?  Our project is seeking to explore a new patent within the consumer world of in vitro meat.  

In this near future, in vitro meat has achieved market saturation.  Consumer products are regularly sold which allow the printing of various meats at home.  Our company, Carnisseur, is the leader in countertop meat printers, which print from individual cartridges, or “cultures,” containing the animal cells and serum.  However, companies have begun to sell their own cultures to use on our machines, which is bad for business.  That’s why we are planning to debut a new digitally-encrypted style of cartridge, with a proprietary connector, to ensure the quality of meats printed on our machines, as well as keep business within the company.  We want consumers to be locked into our ecosystem, which is why we have also talked about protecting the meat itself.  Our printers will print the cells in a very specific crosshatch pattern.  Only the knives our company makes, either by way of a specialized blade or an EMP to break down the hatched pattern, will be able to easily cut through the meat.

We want to explore how the nuance of business will stifle true innovation in the world of in vitro meat, and consolidate power and money in the holdings of only the most powerful companies.  Below are some pictures of whiteboard diagrams and brainstorming sessions, where we first came up with the idea of “DRM for in vitro meat”, as well as an early mockup of our proprietary connector.

 

[Images: whiteboard diagrams, brainstorming sessions, and an early mockup of the proprietary connector]