Flirting with Re-joining Facebook: Algorithmic Surveillance Awaits


As I continue to flirt with the idea of re-joining Facebook, I am considering how to do so while protecting my privacy. Sure, I'm concerned about identity theft, password hacks, and the distribution of my images and words to other sites, but what I am most concerned about is how to protect my privacy from Facebook itself. Since Facebook has a history of loosening privacy on the site (Fletcher, 2010; Goel, 2013; Manjoo, 2011; Vargas, 2010), I am not one to trust the basic privacy settings outlined on Facebook's pages.

Why?

Like all Facebook users, I am subject to algorithmic surveillance, a term first used by Clive Norris and Gary Armstrong in their book, The Maximum Surveillance Society (1999), to describe surveillance systems that follow programmed sequences of rules. Broad, huh? Well, Lucas Introna and David Wood (2004) remarked elsewhere that researchers use the term in connection with surveillance and computer technologies that capture complex data about events, people, and systems. No stranger to algorithmic surveillance, Facebook uses complex (and proprietary) algorithms to filter the content users see based on their activities within the site. And, recently, Facebook announced it will use browser web history to capture more data for advertising revenue (currently, users can opt out of this practice).
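Facebook keeps its ranking logic proprietary, so any concrete example is guesswork. But as a toy model of what "filtering content based on activity" can look like in practice, here is a minimal, hypothetical feed ranker; every name and weight below is invented for illustration, not taken from Facebook:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    comments: int
    author_affinity: float  # how often the viewer interacts with this author (0-1)

def score(post: Post) -> float:
    """Toy relevance score: weight raw engagement by the viewer's
    affinity for the author. Real systems use far more signals."""
    engagement = post.likes + 2 * post.comments
    return engagement * (0.5 + post.author_affinity)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Show the highest-scoring posts first; everything else sinks,
    which is the quiet 'filtering' users never see happen."""
    return sorted(posts, key=score, reverse=True)
```

The point of the sketch is the sorting step: nothing is deleted, yet what sinks below the fold may as well not exist.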

While Facebook uses data for advertising revenue, compliance with federal requests, and research (among other activities defined in the data use policy), the question remains: is the benefit of social networking worth the cost of sharing our information? Given that Facebook uses data to manipulate the ways we experience information on our screens, it may not be after all.

Let's think about this another way. Earlier this year, commentators, citizens, academics, and journalists raised concerns over the emotional contagion research conducted in 2012. The researchers manipulated users' feeds algorithmically to learn whether users' emotions could change based on what they experienced in the Facebook ecosystem. Setting aside commentary on the ethics and legality of the study, what's engaging about the fracas is the acknowledgement of purposeful manipulation of users' emotions by particular people, at a particular time, and in a particular context. In print, there was proof that Facebook had the ability to shape content to affect people's lives. People reacted to what they thought and felt was wrong. There were names, faces, and decisions, all made by people. But the algorithms Facebook uses still manipulate people, their emotions, and the information in their feeds. Do we feel more comfortable pointing the finger at people and excusing the unknown variables of the algorithms?
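According to the published paper, the experiment worked by omitting posts containing positive or negative words from sampled News Feeds. A toy sketch of that filtering step, assuming a crude word-list sentiment check rather than the study's actual text-analysis software, might look like this:

```python
import random

# Toy stand-in for the study's word lists; the real study used
# dedicated text-analysis software, not a four-word set.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}

def contains_positive(text: str) -> bool:
    """Crude check: does the post use any 'positive' word?"""
    return any(word in text.lower().split() for word in POSITIVE_WORDS)

def suppress_positive(feed: list[str], omit_rate: float = 0.5) -> list[str]:
    """Drop a random fraction of positive posts, mimicking the
    experimental condition that reduced positive content in feeds."""
    return [post for post in feed
            if not (contains_positive(post) and random.random() < omit_rate)]
```

A dozen lines are enough to tilt the emotional tone of a feed, which is precisely why the study unsettled so many people.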

I do not necessarily have an answer to that question, but as further reflection, consider the recent controversy over Facebook's algorithms. The political and social outpouring on Twitter after the shooting of Michael Brown in Ferguson, Missouri, and the near domination of the ice bucket challenge on Facebook illustrate algorithmic manipulation. Just yesterday, John McDermott argued that the implications of this algorithmic disparity are considerable, given how many millions rely on the site for information. He argued, "The implications of this disconnect are huge for readers and publishers considering Facebook's recent emergence as a major traffic referrer. Namely, relying too heavily on Facebook's algorithmic content streams can result in de facto censorship. Readers are deprived a say in what they get to see, whereas anything goes on Twitter" (2014, para. 3).

Censorship isn't the only issue with Facebook's algorithms, however. Ideological concerns arise over which political, social, and cultural events, ideas, and information surface in algorithmic culture, especially on Facebook. The Facebook ALS/Twitter Ferguson story illustrates this concern quite well. As long as the social media company continues to use algorithms that hide news stories, events, posts, images, and videos from users, algorithmic manipulation will continue to happen every time someone logs on to the site.

So, what does algorithmic manipulation have to do with protecting privacy and data from Facebook? Well, the more content a user shares with the site, either voluntarily or through web browsing histories, cookies, and/or widget data, the more data the algorithms have to manipulate what the user experiences in the space. It's kinda tricky, right?
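One way to picture the feedback loop: every additional data source is just another input widening the profile that ranking and ad-targeting systems can draw on. A hypothetical sketch (the function and field names are mine, not Facebook's):

```python
def build_profile(likes: list[str],
                  browsing_history: list[str] = (),
                  cookie_interests: list[str] = ()) -> set[str]:
    """Each additional data source widens the interest profile,
    giving ranking and targeting algorithms more levers to pull."""
    profile = set(likes)
    profile.update(browsing_history)
    profile.update(cookie_interests)
    return profile

# Sharing only page likes:
minimal = build_profile(["photography"])
# Sharing likes plus tracked browsing and cookie data:
maximal = build_profile(["photography"],
                        ["running shoes", "local news"],
                        ["travel deals"])
# The richer profile gives the algorithms far more to work with.
```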

As I continue to think about re-joining Facebook, I know that some first steps will be to use a VPN to access the site (see the quick sanity check below), keep a clean browser history, and browse only in a private window. But I also know that I will have to put the basics on the page I create, enough for people to recognize me professionally. And, of course, I won't be able to "like" anything or share any interests. I am also not sure if this will be enough. So, if anyone out there has any suggestions or resources, please email me or send a comment.
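For anyone trying the VPN route, here is a quick, hypothetical sanity check, assuming a public IP-echo service such as api.ipify.org, to confirm the tunnel is actually up before logging in:

```python
from urllib.request import urlopen

def public_ip() -> str:
    """Ask an IP-echo service which address the outside world sees."""
    return urlopen("https://api.ipify.org").read().decode()

print("Public IP before connecting the VPN:", public_ip())
input("Connect the VPN, then press Enter... ")
print("Public IP after connecting the VPN: ", public_ip())
# If the two addresses match, traffic is not going through the tunnel.
```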
