My favorite part of Billy Madison is the end, when Billy is up against Eric in the quiz section of the bizarre Bruce Jenner-inspired "academic decathlon." Billy gets a question about literature and the Industrial Revolution (the answer was obviously about Charles Dickens, duh!), and then chooses the category for Eric's question. "I choose… Bus'ness… Ethics…"
Eric then goes berserk.
For many, the question of ethics is a no-brainer: do the right thing. Don't lie, don't cheat, don't steal. But most of us also understand that ethics isn't a black-and-white issue. There is always interpretation, shades of gray, and tough choices that may be slightly unsettling.
This past week, several ethical issues involving data collection have arisen, and these issues impact each one of us in different ways.
So what are ethics, exactly? Aristotle believed in ethos, or the "author" as being of the utmost character and competence. In fact, most Communication scholars agree that ethos comprises at least three different parts:
- the character of the person (is s/he trustworthy, sincere, and genuine?)
- the competence of the person (is s/he knowledgeable on the subject, and has s/he done research for support?), and
- common ground (does s/he understand the audience and respond to their thoughts, wants, or desires?)
Today, ethics simply means the moral principles that you abide by as an individual, a society, or a culture: the social norms, rules, or mores that everyone teaches us. These principles help us function productively within society, and separate law-abiding citizens from criminals. In the United States, so much effort and time is focused on explicit messages because we are an individualistic society; we have specific laws and contracts, and we rely heavily on verbal communication. This is why you can't get a cell phone without signing a twenty-page document (and who reads those anyway?): our society dictates it, encourages it, and even wants it. Furthermore, our particular society is rooted in the concept of "free will"; we get to make choices in life, have privacy, and do as we please as long as we don't break the law.
As you can see, ethics is more complex than just a simple dictionary definition!
One of the major ethical issues involves the collection of data for marketing purposes. As any loyal Facebook user knows, whatever you do on the Internet is fodder for advertising. The second you create a post making fun of Star Wars (maybe because you're in the minority and kind of hate it), you get ads on Facebook asking you to purchase Star Wars-themed memorabilia.
Other than Facebook, several key companies are involved in data collection and marketing (commonly known as data-mining), chief among them Acxiom and LexisNexis. Acxiom is an interesting company in itself due to its emerging involvement in politics, beginning with the George W. Bush administration. Acxiom has led the way for other data collectors to begin predicting the behavior of individuals, particularly when it comes to voting. It is such a science that they can determine what type of music an Independent, Democrat, or Republican may listen to, as well as the type of car s/he drives. It's all magical and totally neat. You should be impressed by how accurately these companies can tell you who you really are!
And now the medical community wants in on this data-mining: they want to connect your purchases to your health. Both Acxiom and LexisNexis state that they do not share their data with the medical community and that they use information strictly for marketing purposes, but they do sell to insurance companies. You may be thinking, "Well, these doctors just want to intervene before something tragic happens; it's a preventative measure." And this is true. However, studies show that people lie to their doctors on a consistent basis because they don't want "to get a lecture." Data-mining would ultimately eliminate those lies, or even the inaccuracies that slip in when you try to be as truthful as possible. So far, all the controversy is circulating around hospitals and the benchmarks they need to achieve as part of the Affordable Care Act. However, as we all know, if you give people an inch, they will take a mile, and the concern is that this data-mining will inevitably end with your primary care physician calling you to say, "Put down the double cheeseburger!" This may seem extreme. But if you have high cholesterol, have been obediently following the DASH diet, and then go to McDonald's because you're on business in the middle of a food desert, the last thing you want is a call from your doctor, who saw the charge roll through on your credit card. Part of the ethics of our society is that we honor the code of free will; we allow people to make their own choices (although this is increasingly becoming a controversial concept in itself, but that's a whole other blog).
A study was just released by Facebook which has garnered quite a bit of attention, from academics in particular. In January 2012, Facebook altered the News Feeds of almost 700,000 users to show more negative or positive information, to determine whether those users would then perpetuate the cycle by posting more negative or positive information themselves (a phenomenon academics call contagion). The rub is that these users were not aware that Facebook altered their News Feeds for a whole week. Facebook instead relied on a very obscure passage within its terms and conditions to move forward with this experiment. The passage they reference is in the Data Use Policy under term 1, Privacy. Under "How We Use the Information We Receive," the last bullet point pertains to this study: "for internal operations, including troubleshooting, data analysis, testing, research and service improvement."
Right now, an Oxford comma would be incredibly helpful in interpreting this very small passage amongst a plethora of detailed examples and statements.
Hopefully you can begin to see why this study is being questioned: this little passage, and the fact that people were unknowingly studied, have enormous ethical implications. First, when any academic does a study involving human beings, that study needs to be approved by an Institutional Review Board (IRB). IRBs are in place to preserve human dignity and to allow participants to make choices (they can opt in, or opt out). Of course, IRBs also prevent lawsuits and scarring people for life. The classic prison experiment done by Philip Zimbardo would likely never gain approval from an IRB today because of the potential negative impacts on participants down the road. Even if a researcher in an academic environment simply wants 500 people to fill out a survey about their access to organic food, the researcher needs approval from an IRB at his/her university. And for a study to be published, the IRB approvals need to be in place if human subjects were used.
Clearly, IRB approval was needed here, but the study likely proceeded on the basis of this very small passage about "internal operations." When I think of internal operations, I think of information never leaving the company; whatever information is collected is meant to help the company function more efficiently. People may bring up the argument, "Well, if you don't want your information used, don't post it on the public forum known as the Internet!" But if most people are like me, we're trusting Facebook to make good decisions with our information: we trust them to honor our privacy settings and not make our information "wholly" public if we have things set to "Friends Only." We trust them not to manipulate our News Feeds to mess with our heads. Furthermore, Facebook claims that because a computer sorted through all of the information, and the researchers had no knowledge of or personal connection to the individuals studied, the study itself (and how it was conducted) is ethical.
This whole situation reminds me of the South Park episode "Humancentipad," where Kyle gets tied to two other people and forced to digest their fecal matter simply because he did not thoroughly read the terms and conditions associated with having an iPad. The episode hinges on a key concept, coined by Vernon Jensen, called "rightsabilities": the constant struggle between maintaining our rights to free speech and the social responsibility that comes with such speech. The episode clearly indicates that companies may have certain rights to do things based on a litany of terms and conditions, but is exercising those rights socially responsible?
A prime example of "rightsabilities" is the Westboro Baptist Church: the U.S. Supreme Court gave them the right to protest at the funerals of men and women killed in combat, but is what they are saying or doing during those protests ethically responsible? Going back to Aristotle's concept of ethos: are these companies practicing common ground in such a way as to give the audience what they want, need, or desire?
The answer is no.
These data-mining techniques and the study done by Facebook suggest that profitability wins out over social responsibility. We need these companies to be more socially responsible because we are entrusting them with our information. In order to remove ourselves from these types of data collection, we would have to opt out of all of the conveniences that we rely on so heavily to function within society. If we don't want the possibility of a doctor calling us to tell us to stop buying candy, we need to use cash for every purchase (and no scanning of membership cards to get discounts). If we don't want Facebook altering our News Feeds, we have to revert to phone calls, the archaic email, or (gasp, dare I say it) snail mail. This just isn't plausible in today's society: our reliance on technology has grown so rapidly that opting out of many of these things simply puts those individuals "behind." It is a vicious cycle, but if companies can operate with more integrity and think about their customers first and foremost, rather than profitability or academic accolades, then maybe the question of ethics will become moot.