Writing in the Surveillance State

For about a year now, I’ve been thinking about a project where I navigate in and out of online surveillance.

Some of this thinking comes from an article published in Hybrid Pedagogy about why I left Facebook.

Since then, I rejoined Facebook in July/August 2014 for the academic job market and, with the conclusion of the job market, left again in April 2015.

I’ve come to realize that a great deal of online surveillance is happening, and as a regular, everyday consumer and citizen, I want to know what I might learn by exploring surveillance online through behavioral changes to how I connect digitally.

I also pitched this project for a conference in my discipline next year, and while the proposal goes through the review process over the summer of 2015, I had better get started on the project so I have data to show in a year.

Going Full Blast w/Notifications on Facebook

Just a quick update:

Over the past couple of weeks, I’ve been using Facebook more socially. And, I’ve noticed a few things about Facebook’s algorithms:

  1. The more I interact with another Facebook friend, the more their posts appear in my newsfeed.
  2. Also, I receive notifications when the friends I interact with most post new images or updates.

Because of the selective nature of Facebook’s algorithms, I will be changing my notification settings for my Facebook friends to “get notifications” and “close friends” to learn how much detail I will see from my friends. I suspect I will experience information at “full blast,” with all of my friends’ activity appearing in my newsfeed.

I will post results from this experiment within two weeks.
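To keep that comparison honest, I plan to keep a simple hand-recorded log of whose posts show up each time I open the newsfeed, before and after switching to “get notifications.” Below is a minimal Python sketch of how I might tally such a log; the CSV filename and its columns (date, friend, post_type) are just my own conventions for this experiment, not anything Facebook provides.

```python
# Tally a hand-kept log of newsfeed observations to see whose posts
# appear most often. Assumes a CSV with columns: date, friend, post_type.
import csv
from collections import Counter

def tally_feed(path="newsfeed_log.csv"):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["friend"]] += 1
    return counts

if __name__ == "__main__":
    for friend, seen in tally_feed().most_common():
        print(f"{friend}: {seen} posts seen")
```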

Sponsored Ads on Facebook Serve Up “Liked” Pages from Female Friends

I’ve been on Facebook for about a week now. Every time I log in, I feel dizzy. The interface is clunky. There are too many ads: ads in the newsfeed, ads along the sides, and ads underneath photo pop-ups. Facebook, circa 2006-2009, used to have a pretty clean interface. Now, not so much.

Since I log in via VPN, many of the ads I see are personalized from the log data, which includes the IP address. As of this post, I am logged in through an address in Chicago, so I am experiencing some ads for DePaul University on Facebook. Here are some other ads I am experiencing:

Olive Garden sponsored advertisement as seen on Facebook, September 1, 2014

This Olive Garden ad might be pretty standard for several Facebook users. There is nothing in my limited Facebook data to suggest that I would be interested in Olive Garden.

I am seeing some advertisements, however, where a Facebook friend “likes” the brand; because the friend likes the page, Facebook’s algorithms show the ad to me in case I might be interested in the page as well:

Facebook friend “likes” one of the sponsored ads, so I see it as well.

And, what’s interesting about these sponsored ads that I experience in my newsfeed is how Facebook’s algorithms build “sex” into their equations. All of the sponsored ads I currently experience that are “liked” by a Facebook friend come from friends whose sex category is “female.”

I am curious to know if others experience these “personalized” ads based on their sex or if there are extreme differences. If you experience either, please let me know.

Oh, and beginning this week, I will be playing with the relationship status on Facebook to learn what content Facebook’s algorithms will present to me. For the week of 9/1/14 through 9/7/14, I will be “widowed.”

Reasons for Re-Joining & Why I Use a VPN for FB

I re-joined Facebook yesterday, and I want to take a moment to comment on my reasons, how I am using the site for research, and what I do to log in to the site.

Reasons for Re-Joining

About a week ago, I was encouraged to re-join Facebook as I embark on the job market this fall in my discipline of rhetoric and writing/composition. People do post information about jobs, universities, departments, etc. on Facebook, and I would miss out on opportunities to learn more about these things by not being on the site.

However, as a digital surveillance/algorithm/rhetorics researcher, I left Facebook over a year ago because of the algorithmic surveillance happening in that space, and I even wrote an article outlining my concerns, which appeared in Hybrid Pedagogy. Re-joining the space gave me considerable pause.

After I talked with a dear friend and colleague about my concerns, she suggested I approach my time with Facebook as a site of research, and this helped me evaluate the benefits and constraints of re-joining differently.

How I am Using the Site for Research

In addition to blogging about my experiences with re-joining and interacting in Facebook, I will be intentionally playing with the data I input into the site. Here are some parameters that I’ve set:

1. While there are untagged images of me in the Facebook ecosystem, I have decided I will not post or allow my face to be tagged in Facebook because of Facebook’s facial recognition research program, DeepFace. Yes, I am aware that images of me appear on Twitter and on other sites, including Google’s image search.

For now, I do not want to participate in Facebook’s DeepFace project or I want to limit my involvement as much as possible.

2. Since Facebook’s “like” feature has legal, commercial, and privacy implications, I will not use the “like” feature in Facebook. Besides, it is just another way to contribute data for Facebook to offer personalization through their algorithms.

3. I will not be adding any interests because Facebook uses that data for personalization.

4. I have intentionally provided a bogus birth date. I really didn’t want to provide a birth date at all, but Facebook mandated that I provide one to have a better age experience. So, I chose a year outside my demographic range to learn what advertisements Facebook would serve me for that age range.

5. I will be “hiding” items from my timeline such as friend acceptances, and other items that I haven’t encountered yet, but once I do, I will write about what I hide to be transparent.

6. Since I am not approaching the research from a human subjects perspective, I will not (at this time) be including participants. If I do happen to make mention in my blog about an interaction (from my perspective), I will keep details of the person’s identity anonymous, and also not use direct quotes or information that may aid in uncovering the identity of the person.

7. I will also be playing with different features on the site, like changing my demographic information and viewing certain pages more than others to see what appears more often in my newsfeed.

What I Do to Log In

When logging into Facebook, I use a VPN. So far, I’ve logged into locations in Los Angeles, Chicago, and New York. Each time I log in, I notice different advertisements on my screen. For example, here’s the most recent ad I saw this morning:

Advertisement appearing on my Facebook feed.

This personalized advertisement pulls from my log data (I used a VPN to log in through a Chicago IP address) and the demographic information I added: sex, “female,” and age, “40.”
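Since the ads follow the IP address in the log data, I check which exit the VPN has given me before each session. Here is a small sketch of the kind of check I have in mind, using a public IP-geolocation lookup (ipinfo.io here, though any similar service would do); this is my own verification step, not anything Facebook provides.

```python
# Look up the public IP address and location Facebook will see for this
# session, so I know which VPN exit (e.g., Chicago) I am browsing from.
import json
from urllib.request import urlopen

def vpn_exit_location(url="https://ipinfo.io/json"):
    with urlopen(url, timeout=10) as resp:
        info = json.load(resp)
    return info.get("ip"), info.get("city"), info.get("region")

if __name__ == "__main__":
    ip, city, region = vpn_exit_location()
    print(f"Logging in from {ip} ({city}, {region})")
```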

Additional steps I take to set up my Firefox browser include turning on the privacy settings “tell sites I don’t want to be tracked” and “Firefox will never remember my history.”
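For anyone who prefers to set these outside the preferences menu, here is a rough sketch of writing the equivalent settings into a Firefox profile’s user.js file. To the best of my knowledge, privacy.donottrackheader.enabled and browser.privatebrowsing.autostart are the preferences behind those two options, and the profile path below is only a placeholder.

```python
# Write the two privacy preferences mentioned above into a Firefox
# profile's user.js file. The profile path is a placeholder; adjust it
# to your own profile folder.
from pathlib import Path

PREFS = {
    "privacy.donottrackheader.enabled": "true",   # "tell sites I don't want to be tracked"
    "browser.privatebrowsing.autostart": "true",  # "never remember my history"
}

def write_user_js(profile_dir):
    lines = [f'user_pref("{name}", {value});' for name, value in PREFS.items()]
    (Path(profile_dir) / "user.js").write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    # Hypothetical profile folder; replace with your own.
    write_user_js(Path.home() / ".mozilla" / "firefox" / "abc123.default")
```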

I also installed EFF’s Privacy Badger in my browser to help block spying ads and invisible trackers on Facebook.

As I continue interacting with the site, I will update the blog with more information.

Flirting with Re-joining Facebook: Algorithmic Surveillance Awaits

As I continue to flirt with the idea of re-joining Facebook, I am considering how to do so while protecting my privacy. Sure, I’m concerned about identity theft, password hacks, and distribution of my images and words to other sites, but what I am most concerned about is how to protect my privacy from Facebook. Since Facebook has a history of loosening privacy on the site (Fletcher, 2010; Goel, 2013; Manjoo, 2011; Vargas, 2010), I am not one to trust the basic privacy settings outlined on Facebook’s pages.

Why?

Like all Facebook users, I am subject to algorithmic surveillance–a term first used by Clive Norris and Gary Armstrong in their book, The Maximum Surveillance Society (1999), defined as surveillance systems that follow sequences. Broad, huh? Well, Lucas Introna and David Wood (2004) remarked elsewhere that researchers use the term in connection with surveillance and computer technologies that capture complex data about events, people, and systems. No stranger to algorithmic surveillance, Facebook uses complex (and proprietary) algorithms to filter content for users based on their activities within the site. And, recently, Facebook announced it will use browser web history to capture more data for advertising revenue (currently, users can opt-out of this practice).
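To make the filtering idea concrete, here is a deliberately toy sketch (my own illustration, not Facebook’s actual, proprietary ranking) in which posts are scored by how often the viewer has interacted with each author and only the top few are surfaced:

```python
# Toy illustration of engagement-based filtering: score each post by how
# often the viewer has interacted with its author and keep the top few.
# Facebook's real ranking is proprietary and far more complex.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def filter_feed(posts, interactions, top_n=3):
    """Return the top_n posts whose authors the viewer engages with most."""
    ranked = sorted(posts, key=lambda p: interactions.get(p.author, 0), reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    interactions = {"Alice": 42, "Carol": 17, "Bob": 3}  # made-up likes/comments per friend
    posts = [Post("Alice", "vacation photos"), Post("Bob", "news link"),
             Post("Carol", "status update"), Post("Dana", "event invite")]
    for post in filter_feed(posts, interactions):
        print(post.author, "-", post.text)
```

Even in this toy version, the post from the friend the viewer never interacts with simply never appears.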

While Facebook uses data for advertising revenue, compliance with federal requests, and research (among other activities defined in the data and use policy), the question remains: is the benefit of social networking worth the cost of sharing our information? Given that Facebook uses data to manipulate the ways we experience information on our screens, it may not be after all.

Let’s think about this another way. Earlier this year, commentators, citizens, academics, and journalists raised concerns over the emotional contagion research conducted in 2012. The researchers of the study performed algorithmic manipulation in a concentrated study to learn if the emotions of users could change based on what the users experienced in the Facebook ecosystem. Setting aside commentary on the ethics and legality of the study, what’s engaging about the fracas springs from the acknowledgement of purposeful manipulation of the emotions of users by particular people, at a particular time, and in a particular context. In print, there was proof that Facebook had the ability to shape content to affect people’s lives. People reacted to what they thought and felt was wrong. There were names, faces, and decisions–all made by people. But the algorithms Facebook uses still manipulate people, their emotions, and the information in their feeds. Do we feel more comfortable pointing the finger at people and excusing the unknown variables of the algorithms?

I do not necessarily have an answer to that question, but in further reflection, consider the recent controversy over Facebook’s algorithms. The political and social outpouring on Twitter since the shooting of Michael Brown in Ferguson, Missouri, and the near domination of the ice bucket challenge on Facebook illustrate algorithmic manipulation. Just yesterday, John McDermott argued that the implications of this algorithmic disparity are considerable, given how heavily millions rely on the site for information. He argued, “The implications of this disconnect are huge for readers and publishers considering Facebook’s recent emergence as a major traffic referrer. Namely, relying too heavily on Facebook’s algorithmic content streams can result in de facto censorship. Readers are deprived a say in what they get to see, whereas anything goes on Twitter” (2014, para. 3).

Censorship isn’t the only issue with Facebook’s algorithms, however. There are also ideological concerns over which political, social, and cultural events, ideas, and information play out in algorithmic culture, especially on Facebook. The Facebook ALS/Twitter Ferguson story illustrates this concern quite well. As long as the social media company continues to use algorithms that hide news stories, events, posts, images, and videos from users, algorithmic manipulation will continue to happen every time someone logs on to the site.

So, what does algorithmic manipulation have to do with protecting privacy and data from Facebook? Well, the more content a user shares with the site, either voluntarily or through web browsing histories, cookies, and/or widget data, the more data the algorithms have to manipulate what the user experiences in the space. It’s kinda tricky, right?

As I continue to think about re-joining Facebook, I know that some first steps will be to use a VPN to access the site, keep a clean browser history, and browse in a private window. But, I also know that I will have to put the basics on the page I create–enough for people to recognize me professionally. And, of course, I won’t be able to “like” anything or share any interests. I am also not sure if this will be enough. So, if anyone out there has any suggestions or resources, please email me or send a comment.

 

Why is Breaking Up with Facebook Hard to Do?

Over a year ago, I left Facebook after a seven-year relationship with the social media space. I wrote about my reasons in an article published by Hybrid Pedagogy titled, “Breaking Up with Facebook: Untethering from the Ideological Freight of Online Surveillance.” Essentially, Facebook tracks and monitors user movements and actions throughout its ecosystem using complex algorithms.

I began noticing the algorithmic movements when I saw personalized advertisements on the sides of the Facebook newsfeed. And, while I ultimately deleted my account because of Facebook’s graph search feature, I also felt uncomfortable with Facebook’s algorithms deciding what content I would experience in my newsfeed by promoting some posts over others. 

About a week after publication, a new scandal erupted on social media networks, in mainstream media, and in academic circles. A Facebook employee, a university faculty member, and a graduate student reported on a study conducted in 2012 focusing on emotional contagion, sharing they were able to manipulate newsfeeds for users to learn if emotional contagion could occur. There were several accounts about this study, from ethics (Albergotti & Dwoskin, 2014; Arthur, 2014; Junco, 2014) to questions about methodology (Albergotti, 2014; Grohol, 2014; Hill, 2014) to commentary about the experiment (Auerbach, 2014; Boyd, 2014; Crawford, 2014). The tools that allowed the researchers to manipulate the newsfeeds were the algorithms Facebook used to control how users experience content on their screens.

Facebook is back in the news this week because of its algorithms, this time for the lack of content displayed about the ongoing political and social events in Ferguson, Missouri. Many users of Facebook and Twitter have reported that while Twitter shows real-time events in their streams, their Facebook newsfeeds are decidedly quiet about the events.

If algorithms control what users experience on Facebook, then what, really, is the benefit of being a Facebook user if users cannot experience what they want to in the space?

I ask this question because recently I was encouraged to rejoin Facebook for professional reasons. The person who brought this up to me is someone whose advice, wisdom, and experience in several areas I respect and trust a great deal. This person is also aware of and a supporter of my research.

And, here’s the rub: I know this person is right–right about re-joining a social media space that can provide professional benefits through online social networking.

But, I also can’t shake the feeling that re-joining this space calls into question my ethos as a researcher and private citizen who is aware of the surveillance and algorithmic practices of Facebook. This isn’t necessarily because I wrote an article about leaving (well, in small part it is), but that to re-join means I am subject to surveillance, to algorithmic manipulation, and that I become a commodity to Facebook again–all in the service of finding benefit from online networking.

I spoke with a dear friend and colleague about this earlier, and she advised me to consider re-joining, but to do so as connected to my research. Perhaps re-joining (if I decide to do so) will foster a new research project. 

In the meantime, I find myself in a dilemma. Even though I have officially cut ties with Facebook, it seems that breaking up is really hard to do. 

Privacy Protection & Tracking the Trackers

While I am by no means someone who deals with sensitive information, I do have a desire to protect my privacy online. Since I began learning about tracking technologies, mass surveillance, and the ways corporations and governments can access data, I’ve been slowly moving away from certain websites and email programs, and integrating tools to protect my privacy online. Here are some recommendations:

Hushmail

Hushmail bills itself as a privacy-oriented communications company, with no third-party ads and built-in encryption. You can get a free account with 25MB of storage. If you need more storage, then there are pay plans.

I started moving away from my personal Gmail account in favor of Hushmail. While the Hushmail interface is rather basic in design, the HTTPS protection and encryption tools far outweigh a sleeker-looking design.

RiseUp

Riseup is another email service that provides privacy protection for its users. Riseup uses encryption, does not log IP addresses, and does not share data with anyone.

DuckDuckGo

A search engine that values customer privacy. Not only does DuckDuckGo not track its users, the company offers an eye-opening PSA about the perils of Google’s tracking capabilities. DuckDuckGo also provides an infographic on the “filter bubble” of search engines (see Eli Pariser, “The Filter Bubble,” 2011, but also the TED Talk for more info).

The PSA is humorously pointed; however, the information provided in the graphic does match Surveillance Studies research on the undesirable results of pairing search data with individual identity, e.g., lower credit ratings, denial of health insurance, loss of employment, etc.

Ghostery

This is a rather popular privacy tool that provides ways to track the trackers, as well as ways to learn more about the types of tracking technologies and the companies that track data online.

Electronic Frontier Foundation (EFF) Guide to Surveillance Self-Defense

This guide provides information about privacy, surveillance, and ways to protect privacy in many forms of communication. I highly recommend reading over the information on this site if you are unfamiliar with mass surveillance.

I also recently installed AVG’s Privacy Fix tool. I am a bit skeptical about some aspects of the tool, e.g. it asked for my login information for LinkedIn (I am rather wary of providing login information to another application, which could *potentially* be hacked). However, the tool does provide a “dashboard” allowing front-end users to see the tracking technologies on websites, and even how data is shared on the site.

I am always looking for privacy tools; if you have any recommendations, please comment!