Flirting with Re-joining Facebook: Algorithmic Surveillance Awaits


As I continue to flirt with the idea of re-joining Facebook, I am considering how to do so while protecting my privacy. Sure, I’m concerned about identity theft, password hacks, and the distribution of my images and words to other sites, but what I am most concerned about is how to protect my privacy from Facebook itself. Since Facebook has a history of loosening privacy on the site (Fletcher, 2010; Goel, 2013; Manjoo, 2011; Vargas, 2010), I am not one to trust the basic privacy settings outlined on Facebook’s pages.

Why?

Like all Facebook users, I am subject to algorithmic surveillance–a term first used by Clive Norris and Gary Armstrong in their book The Maximum Surveillance Society (1999) to describe surveillance systems that follow sequences. Broad, huh? Well, Lucas Introna and David Wood (2004) remarked elsewhere that researchers use the term in connection with surveillance and computer technologies that capture complex data about events, people, and systems. No stranger to algorithmic surveillance, Facebook uses complex (and proprietary) algorithms to filter content for users based on their activities within the site. And, recently, Facebook announced it will use browser web history to capture more data for advertising revenue (currently, users can opt out of this practice).

While Facebook uses data for advertising revenue, compliance with federal requests, and research (among other activities defined in its data use policy), the question remains: is the benefit of social networking worth the cost of sharing our information? Given that Facebook uses data to manipulate the ways we experience information on our screens, it may not be after all.

Let’s think about this another way. Earlier this year, commentators, citizens, academics, and journalists raised concerns over the emotional contagion research conducted in 2012. The researchers of that study performed algorithmic manipulation in a controlled study to learn whether the emotions of users could change based on what the users experienced in the Facebook ecosystem. Setting aside commentary on the ethics and legality of the study, what makes the fracas engaging is the acknowledgement of purposeful manipulation of the emotions of users by particular people, at a particular time, and in a particular context. In print, there was proof that Facebook had the ability to shape content to affect people’s lives. People reacted to what they thought and felt was wrong. There were names, faces, and decisions–all made by people. But the algorithms Facebook uses still manipulate people, their emotions, and the information in their feeds. Do we feel more comfortable pointing the finger at people and excusing the unknown variables of the algorithms?

I do not necessarily have an answer to that question, but on further reflection, consider the recent controversy over Facebook’s algorithms. The political and social outpouring on Twitter since the shooting of Michael Brown in Ferguson, Missouri, and the near domination of the ice bucket challenge on Facebook illustrate algorithmic manipulation. Just yesterday, John McDermott argued that the implications of this algorithmic disparity are considerable, given how many millions rely on the site for information. He wrote, “The implications of this disconnect are huge for readers and publishers considering Facebook’s recent emergence as a major traffic referrer. Namely, relying too heavily on Facebook’s algorithmic content streams can result in de facto censorship. Readers are deprived a say in what they get to see, whereas anything goes on Twitter” (2014, para. 3).

Censorship isn’t the only issue with Facebook’s algorithms, however. Ideological concerns arise over which political, social, and cultural events, ideas, and information play out in algorithmic culture, especially on Facebook. The Facebook ALS/Twitter Ferguson story illustrates this concern quite well. As long as the social media company continues to use algorithms that hide news stories, events, posts, images, and videos from users, algorithmic manipulation will continue to happen every time someone logs on to the site.

So, what does algorithmic manipulation have to do with protecting privacy and data from Facebook? Well, the more content a user shares with the site, whether voluntarily or through web browsing histories, cookies, and/or widget data, the more data the algorithms have to manipulate what the user experiences in the space. It’s kinda tricky, right?
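To make that relationship concrete, here is a toy sketch of interest-based feed ranking. This is not Facebook’s actual (proprietary) algorithm–just a minimal illustration, with made-up post and topic names, of the general principle: the more interest signals a platform collects about a user, the more decisively it can reorder what that user sees.

```python
# Toy sketch of signal-driven feed filtering. All names are hypothetical;
# real ranking systems weigh far more (and hidden) variables.

def rank_feed(posts, interest_signals):
    """Order posts by how many of the user's known interests they match."""
    def score(post):
        return len(post["topics"] & interest_signals)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "ferguson-coverage", "topics": {"news", "politics"}},
    {"id": "ice-bucket-video", "topics": {"viral", "charity", "friends"}},
    {"id": "vacation-photos", "topics": {"friends", "travel"}},
]

# With no signals, every post scores zero and the order is arbitrary.
# With signals harvested from likes, browsing history, and widget data,
# the platform's inferences about the user dominate the feed.
sparse = rank_feed(posts, {"friends"})
rich = rank_feed(posts, {"viral", "charity", "friends"})
```

The point of the sketch is that the ranking function itself never changes; only the volume of collected data does, and that alone is enough to reshape the feed.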

As I continue to think about re-joining Facebook, I know that some first steps will be to use a VPN to access the site, keep a clean browser history, and browse in a private window. But I also know that I will have to put the basics on the page I create–enough for people to recognize me professionally. And, of course, I won’t be able to “like” anything or share any interests. I am also not sure if this will be enough. So, if anyone out there has any suggestions or resources, please email me or send a comment.


“Invasion of Privacy” vs. Desire for Surveillance


Dennis Baron’s blog post, “The phrase of the year for 2013 is ‘invasion of privacy’,” walks readers through the state of privacy as it connects with mass digital surveillance. Baron makes the point that privacy stretches back (in the US, at least) well into the late 19th century, with its first mention in a Harvard Law Review article by Warren and Brandeis, and moves on to discuss recent developments in social media, specifically Mark Zuckerberg’s position on privacy as it connects to the operations of Facebook.

Baron shares that new digital technologies have allowed for an invasion of privacy that perhaps private citizens have not yet fully realized. However, thanks to Edward Snowden’s efforts to inform citizens across the globe about the mass surveillance programs of the NSA, people are hearing and seeing more about surveillance in the news and in social media.

I agree that newer digital technologies like computer cookies, web beacons, widgets, and various other tracking technologies (see “Know Your Elements,” provided by Ghostery) have given corporations and governments access to metadata about individual keystrokes. Thankfully, employees and volunteers at the Electronic Frontier Foundation (EFF) have worked to provide resources for people about tracking technologies, as well as engage in legal battles over privacy concerns.
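For readers unfamiliar with how a third-party tracker links visits across unrelated sites, here is a minimal sketch. The site names and class are hypothetical, and real trackers (cookies, web beacons, fingerprinting scripts) are far more elaborate, but the core mechanism is just this: a persistent identifier set once and sent back on every subsequent page that embeds the tracker.

```python
# Minimal sketch of cross-site tracking via a persistent cookie ID.
# All names are invented for illustration.

import uuid

class Tracker:
    """Simulates a tracking pixel embedded on many unrelated sites."""
    def __init__(self):
        self.profiles = {}  # tracking-cookie ID -> list of pages visited

    def pixel_request(self, cookie, page_url):
        """Called each time a page carrying the tracker's 1x1 image loads."""
        if cookie is None:
            cookie = str(uuid.uuid4())  # first visit: assign a persistent ID
        self.profiles.setdefault(cookie, []).append(page_url)
        return cookie  # the browser stores and resends this on later visits

tracker = Tracker()
cookie = tracker.pixel_request(None, "news-site.example/ferguson")
cookie = tracker.pixel_request(cookie, "shop.example/cart")
cookie = tracker.pixel_request(cookie, "health-forum.example/thread")

# One tracker, three unrelated sites, one linked browsing profile.
profile = tracker.profiles[cookie]
```

Because the same identifier returns from every embedding site, three visits that the user experienced as separate become a single profile on the tracker’s side.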

However, I also think that digital technologies like closed-circuit cameras and televisions, biometric devices such as facial scanners, and license plate scanners are all part of a larger multi-actor and actant (to borrow from Latour) surveillance assemblage (to borrow from Deleuze and Guattari). No one corporation or government is wholly responsible for the mass surveillance systems that have been globalized across fields from medicine to education to government; rather, multiple corporations, governments, businesses, and individuals may all be responsible for an overall surveillance state (that is, if people use any type of device for surveillance purposes).

If we begin to think about this larger multi-surveillance assemblage, then we also need to think about the desire to have so many types of surveillance systems. Simply put, what motivates people to have so many types of surveillance systems?

Sometimes I think that the argument with regard to surveillance is not so much about invasion of privacy or privacy protection–both extremely important arguments that public leaders, policy makers, researchers, attorneys, journalists, and many others need to continue to make and engage with–but that the argument rests more with a desire: a desire to constantly watch, to hear, to record, to observe.

If we are to make changes in privacy laws and privacy protection, then I also think we need to address a larger component of this conversation–desire. If we don’t address our motivations for action, then we are missing a large part of the discussion on what even motivates us to engage in surveillance, much less to desire privacy.

I agree with Baron that the public/private blurring of privacy has already become an issue, especially with online tracking technologies capturing data about individuals’ whereabouts on the web.

I think any headway people make in journalistic, legal, scholarly, and other circles about “invasion of privacy” and privacy protection will occur when we can also address the root cause of why people use surveillance tools to begin with–desire.