Flirting with Re-joining Facebook: Algorithmic Surveillance Awaits

Standard

As I continue to flirt with the idea of re-joining Facebook, I am considering how to do so while protecting my privacy. Sure, I’m concerned about identity theft, password hacks, and the distribution of my images and words to other sites, but what I am most concerned about is how to protect my privacy from Facebook itself. Since Facebook has a history of loosening privacy on the site (Fletcher, 2010; Goel, 2013; Manjoo, 2011; Vargas, 2010), I am not one to trust the basic privacy settings outlined on Facebook’s pages.

Why?

Like all Facebook users, I am subject to algorithmic surveillance–a term first used by Clive Norris and Gary Armstrong in their book, The Maximum Surveillance Society (1999), and defined as surveillance systems that follow sequences. Broad, huh? Well, Lucas Introna and David Wood (2004) remarked elsewhere that researchers use the term in connection with surveillance and computer technologies that capture complex data about events, people, and systems. No stranger to algorithmic surveillance, Facebook uses complex (and proprietary) algorithms to filter content for users based on their activities within the site. And, recently, Facebook announced it will use browser web history to capture more data for advertising revenue (currently, users can opt out of this practice).
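To make the idea of algorithmic filtering concrete, here is a minimal, purely illustrative sketch of how an engagement-based ranker might score and order posts. The field names, weights, and “affinity” values are invented for illustration; Facebook’s actual algorithm is proprietary and far more complex.

```python
# Purely illustrative: a toy engagement-based feed ranker.
# All fields and weights are invented; the real algorithm is proprietary.

def score_post(post, user_affinity):
    """Score a post by (hypothetical) friend affinity, engagement, and recency."""
    engagement = post["likes"] + 2 * post["comments"] + 3 * post["shares"]
    recency_decay = 1.0 / (1.0 + post["hours_old"])
    return user_affinity[post["author"]] * engagement * recency_decay

def rank_feed(posts, user_affinity):
    """Return posts sorted so the highest-scoring appear first."""
    return sorted(posts, key=lambda p: score_post(p, user_affinity), reverse=True)

posts = [
    {"author": "alice", "likes": 10, "comments": 2, "shares": 0, "hours_old": 1},
    {"author": "bob", "likes": 50, "comments": 20, "shares": 5, "hours_old": 24},
]
# Hypothetical measure of how often the user interacts with each friend.
affinity = {"alice": 0.9, "bob": 0.1}

for post in rank_feed(posts, affinity):
    print(post["author"], round(score_post(post, affinity), 2))
```

Even in this toy version, the point is visible: the posts a user sees first depend on weights the user never chose, and the more interaction data the system holds, the more confidently it decides for them.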

While Facebook uses data for advertising revenue, compliance with federal requests, and research (among other activities defined in its data use policy), the question remains: is the benefit of social networking worth the cost of sharing our information? Given that Facebook uses data to manipulate the ways we experience information on our screens, it may not be after all.

Let’s think about this another way. Earlier this year, commentators, citizens, academics, and journalists raised concerns over the emotional contagion research conducted in 2012. The researchers of the study performed algorithmic manipulation in a concentrated study to learn if the emotions of users could change based on what the users experienced in the Facebook ecosystem. Setting aside commentary on the ethics and legality of the study, what’s engaging about the fracas springs from the acknowledgement of purposeful manipulation of the emotions of users by particular people, at a particular time, and in a particular context. In print, there was proof that Facebook had the ability to shape content to affect people’s lives. People reacted to what they thought and felt was wrong. There were names, faces, and decisions–all made by people. But the algorithms Facebook uses still manipulate people, their emotions, and the information in their feeds. Do we feel more comfortable pointing the finger at people and excusing the unknown variables of the algorithms?

I do not necessarily have an answer to that question, but on further reflection, consider the recent controversy over Facebook’s algorithms. The political and social outpouring on Twitter since the shooting of Michael Brown in Ferguson, Missouri and the near domination of the ice bucket challenge on Facebook illustrate algorithmic manipulation. Just yesterday, John McDermott argued that the implications of this algorithmic disparity are considerable, given that millions rely on the site for information. He argued, “The implications of this disconnect are huge for readers and publishers considering Facebook’s recent emergence as a major traffic referrer. Namely, relying too heavily on Facebook’s algorithmic content streams can result in de facto censorship. Readers are deprived a say in what they get to see, whereas anything goes on Twitter” (2014, para. 3).

Censorship isn’t the only issue with Facebook’s algorithms, however. Ideological concerns arise over which political, social, and cultural events, ideas, and information surface in algorithmic culture, especially on Facebook. The Facebook ALS/Twitter Ferguson story illustrates this concern quite well. As long as the social media company continues to use algorithms that hide news stories, events, posts, images, and videos from users, algorithmic manipulation will continue to happen every time someone logs on to the site.

So, what does algorithmic manipulation have to do with protecting privacy and data from Facebook? Well, the more content a user shares with the site, either voluntarily or through web browsing histories, cookies, and/or widget data, the more data the algorithms have to manipulate what the user experiences in the space. It’s kinda tricky, right?

As I continue to think about re-joining Facebook, I know that some first steps will be to use a VPN to access the site, keep a clean browser history, and browse in a private window. But I also know that I will have to put the basics on the page I create–enough for people to recognize me professionally. And, of course, I won’t be able to “like” anything or share any interests. I am also not sure if this will be enough. So, if anyone out there has any suggestions or resources, please email me or send a comment.


Why is Breaking Up with Facebook Hard to Do?


Over a year ago, I left Facebook after a seven-year relationship with the social media space. I wrote about my reasons in an article published by Hybrid Pedagogy titled, “Breaking Up with Facebook: Untethering from the Ideological Freight of Online Surveillance.” Essentially, Facebook tracks and monitors user movements and actions throughout its ecosystem using complex algorithms.

I began noticing the algorithmic movements when I saw personalized advertisements on the sides of the Facebook newsfeed. And, while I ultimately deleted my account because of Facebook’s Graph Search feature, I also felt uncomfortable with Facebook’s algorithms deciding what content I would experience in my newsfeed by promoting some posts over others.

About a week after publication, a new scandal erupted on social media networks, in mainstream media, and in academic circles. A Facebook employee, a university faculty member, and a graduate student reported on a study conducted in 2012 focusing on emotional contagion, revealing that they were able to manipulate users’ newsfeeds to learn if emotional contagion could occur. There were several accounts of this study, from ethics (Albergotti & Dwoskin, 2014; Arthur, 2014; Junco, 2014) to questions about methodology (Albergotti, 2014; Grohol, 2014; Hill, 2014) to commentary about the experiment (Auerbach, 2014; Boyd, 2014; Crawford, 2014). The tools that allowed the researchers to manipulate the newsfeeds were the algorithms Facebook used to control how users experience content on their screens.

Facebook is back in the news this week because its algorithms have displayed little content about the ongoing political and social events in Ferguson, Missouri. Many users of Facebook and Twitter have reported that while Twitter shows real-time events in their streams, their Facebook newsfeeds are decidedly quiet about the events.

If algorithms control what users experience on Facebook, then what, really, is the benefit of being a Facebook user if users cannot experience what they want to in the space?

I ask this question because recently I was encouraged to rejoin Facebook for professional reasons. The person who brought this up to me is someone I have a great deal of respect for and whose advice, wisdom, and experience I trust in several areas. This person is also aware of, and a supporter of, my research.

And, here’s the rub: I know this person is right–right about re-joining a social media space that can provide professional benefits through online social networking.

But I also can’t shake the feeling that re-joining this space calls into question my ethos as a researcher and private citizen who is aware of the surveillance and algorithmic practices of Facebook. This isn’t necessarily because I wrote an article about leaving (well, in small part it is), but because to re-join means I am subject to surveillance and algorithmic manipulation, and that I become a commodity to Facebook again–all in the service of finding benefit from online networking.

I spoke with a dear friend and colleague about this earlier, and she advised me to consider re-joining, but to do so as connected to my research. Perhaps re-joining (if I decide to do so) will foster a new research project. 

In the meantime, I find myself in a dilemma. Even though I have officially cut ties with Facebook, it seems that breaking up is really hard to do. 

“Invasion of Privacy” vs. Desire for Surveillance


Dennis Baron’s blog post, “The phrase of the year for 2013 is ‘invasion of privacy’” enlightens readers about the state of privacy in connection with mass digital surveillance. Baron makes the point that the legal concept of privacy stretches back (in the US, at least) well into the late 19th century, with its first mention in an issue of the Harvard Law Review by Warren and Brandeis, and he moves on to discuss recent developments in social media, specifically Mark Zuckerberg’s position on privacy as it connects to the operations of Facebook.

Baron shares that new digital technologies have allowed for an invasion of privacy that perhaps private citizens have not yet fully realized. However, thanks to Edward Snowden’s efforts to inform citizens across the globe about the mass surveillance programs of the NSA, people are hearing and seeing more about surveillance in the news and in social media.

I agree that newer digital technologies like computer cookies, web beacons, widgets, and various other tracking technologies (see “Know Your Elements,” provided by Ghostery) have given corporations and governments access to metadata about individual keystrokes. Thankfully, employees and volunteers at the Electronic Frontier Foundation (EFF) have worked to provide resources for people about tracking technologies, as well as engage in legal battles over privacy concerns.
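As a concrete illustration of one of these technologies, here is a minimal sketch of how a “web beacon” works: a tiny (often 1x1-pixel) image embedded in a page whose URL encodes identifying data, so that simply fetching the image reports the visit to a third-party server. The domain and parameter names below are invented for illustration; real trackers vary in their specifics.

```python
# Illustrative only: the tracker domain and parameter names are invented.
# A web beacon is typically a 1x1 transparent image; the act of requesting
# it is what reports the page visit (plus any encoded data) to the tracker.
from urllib.parse import urlencode, urlparse, parse_qs

def beacon_url(tracker_host, visitor_id, page):
    """Build the kind of image URL a tracking pixel might use."""
    params = {"uid": visitor_id, "page": page}
    return f"https://{tracker_host}/pixel.gif?{urlencode(params)}"

url = beacon_url("tracker.example.com", "abc123", "/blog/privacy-post")
print(url)

# On the tracker's side, the visit is recovered from the query string
# of the logged request:
logged = parse_qs(urlparse(url).query)
print(logged)
```

The user never clicks anything; the browser’s routine request for an invisible image is enough to tell the tracker who (via the identifier) viewed which page, which is exactly why blockers like those Ghostery and the EFF document intercept these requests.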

However, I also think that digital technologies like closed-circuit television cameras, biometric devices such as facial scanners, and license plate scanners are all part of a larger multi-actor and actant (to borrow from Latour) surveillance assemblage (to borrow from Deleuze and Guattari). No one corporation or government is wholly responsible for the mass surveillance systems that have been globalized across fields from medicine to education to government; rather, multiple corporations, governments, businesses, and individuals may all be responsible for an overall surveillance state (that is, if people use any type of device for surveillance purposes).

If we begin to think about this larger multi-surveillance assemblage, then we also need to think about the desire to have so many types of surveillance systems. Simply put, what’s motivating people to build and adopt them?

Sometimes I think that the argument with regard to surveillance is not so much about invasion of privacy or privacy protection, both extremely important arguments that public leaders, policy makers, researchers, attorneys, journalists, and many others need to continue to make and engage with, but that the argument rests more with a desire: a desire to constantly watch, to hear, to record, to observe.

If we are to make changes in privacy laws and privacy protection, then I also think we need to address a larger component of this conversation–desire. If we don’t address our motivations for action, then we are missing a large part of the discussion about what motivates us to engage in surveillance, much less to desire privacy.

I agree with Baron that the public/private blurring of privacy has already become an issue, especially with online tracking technologies capturing data about individuals’ whereabouts on the web.

I think any headway people make in journalistic, legal, scholarly, and other circles regarding “invasion of privacy” and privacy protection will occur when we can also address the root cause of why people use surveillance tools to begin with–desire.

Skype Twitter Hacked/BGSU Outlook 360


Around 3:30 pm today, the Twitter account for Skype (@skype) issued the following tweet (re-tweeted by @AnonyOps):

The Syrian Electronic Army (SEA), a group of Syrian youth who use their knowledge and skills to disrupt Western websites, mainly launches denial-of-service attacks and defaces prominent media or commercial outlets. The group maintains that media outlets and politicians in the West are promoting false stories about what’s happening in Syria.

Since the group’s first hack in 2011, the members have led very public DNS attacks against several organizations, including The Onion, a Gmail account for President Barack Obama’s Organizing for Action campaign, and even the New York Times.

At times, these attacks have had an overt political message. For instance, when the group took control of Organizing for Action’s Gmail account, a link to a propaganda video about the United States’ capabilities to mount warfare upon other states appeared on YouTube. Also, members of SEA disrupted the Associated Press Twitter account to report that Obama had been injured.

The latest disruption by SEA, through the hacking of Skype’s Twitter account, represents a thematic effort to disseminate political messages about companies in the West engaging in questionable practices.

Since Edward Snowden’s disclosures to Glenn Greenwald of the Guardian in May of 2013, the stream of news regarding mass surveillance by not only the American government but also corporations like Microsoft, Google, Yahoo!, and Facebook has become a daily staple of news stories, tweets, blog posts, and so forth.

As someone who began researching surveillance in late 2012, once I learned about the tracking technologies and surveillance practices of certain websites, I became increasingly wary of engaging or interacting within certain virtual spaces.

In July of 2013, I deleted (not “deactivated” but deleted) my seven-year-old Facebook account. I realized that hundreds of tracking technologies were tracking me across the web, and Facebook was one of them. Facebook used my data to turn a profit.

I have also started the process of moving my personal email account from Gmail to the Hushmail platform.

While I also use a host of alternative sites for tracking the trackers, what concerned me most about the tweet by SEA rested with my connection to my university email account.

Recently, Bowling Green State University upgraded its communications system, including the email system, to Outlook 360. While I have had concerns over the design of the mail system, which my friends can attest to, I have long wondered if my university email metadata was being tracked in some way.

If there is any truth to the tweet sent by the SEA (and I do give weight to this tweet, given news reports on mass surveillance and tracking technologies), then I find myself in a state of limbo. As much as I am moving away from mainstream technologies that use tracking devices, and as much as I align myself with supporting Internet freedom from mass surveillance, I am still contracted to perform a job, a service, and most importantly, to complete a degree. All the while, my metadata, along with that of tens of thousands of other BGSU faculty, staff, students, and alumni, may potentially be tracked and sold.

What to do?