Chrissy Clary

ponder. conspire. digitize.

The Dilemma of the Digital Strategist

Since 2001, I have been building devices designed to communicate using the Internet. I’m a digital strategist with a portfolio of websites and digital marketing campaigns.

I have grown curious about the effects of my work, and the work of others like me, on people around the world.

In an attempt to scratch this itch, I’ve been asking a lot of questions and digging around in the literature. The bumper sticker on my worn-out Honda CR-V reads, “I’m not lost, I am exploring.” That concept is the lens I used as I explored the idea that the Internet has had, and is still having, a profound influence on how we think, what we know, and how we relate to one another.

One concern I have centers on ethics, or the potential lack thereof, within the community of people working in digital strategy.

Corporations and large organizations now hold astounding amounts of data on all of us. With machine learning, algorithmic targeting, and carefully curated messaging, they are trying to deliver you messages that you will find engaging (Davis and Patterson, 2012).
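To make this concrete, here is a minimal sketch, in Python, of what an engagement-based targeting loop can look like. The profile fields, scoring rule, and messages are all invented for illustration; real systems use learned models over far more data, and this is not any particular company’s method.

```python
# Hypothetical sketch: rank candidate messages by predicted engagement for one
# user profile. All profile fields, weights, and messages are invented for
# illustration; real targeting systems use learned models over far more data.

from dataclasses import dataclass

@dataclass
class UserProfile:
    interests: set        # topics inferred from browsing and search history
    recent_clicks: set    # topics the user engaged with recently

@dataclass
class Message:
    text: str
    topics: set

def engagement_score(profile: UserProfile, message: Message) -> float:
    """Crude stand-in for a learned model: score by topic overlap, with extra
    weight on topics the user clicked recently."""
    overlap = len(profile.interests & message.topics)
    recency_boost = 2 * len(profile.recent_clicks & message.topics)
    return overlap + recency_boost

def pick_message(profile: UserProfile, candidates: list) -> Message:
    # Deliver whichever message the model predicts is most engaging.
    return max(candidates, key=lambda m: engagement_score(profile, m))

profile = UserProfile(interests={"hiking", "cameras"}, recent_clicks={"cameras"})
candidates = [
    Message("New trail maps for your area", {"hiking"}),
    Message("Mirrorless camera sale ends today", {"cameras", "sale"}),
]
print(pick_message(profile, candidates).text)  # -> "Mirrorless camera sale ends today"
```

Even this toy version shows why the results feel so personal: the inputs are your own history.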

Eric Schmidt explains how it works at Google in this interview on YouTube.

Here, Schmidt says that at Google, the idea “is to get right up to the creepy line but not cross it.” He talks about the “creepy line” but does not draw that line for us. How do we know where that line is? In many cases, companies are not clarifying their ethics regarding online communications. Instead, the line is a proprietary idea held, and perhaps only partially understood, within the organization.

Don’t get me wrong, I love googling just as much as the next person. My concern is about the ethics practiced by many organizations regarding data mining and targeting. The dilemma is becoming clear: digital strategists are working to expand the Internet without defined ethical guidelines.

Perhaps a larger concern for me is with respect to my own ethical practices. This exploration has brought me to the realization that my ethics are based on practices of the past even as I help to build the communication machines of the future.

As a digital leader at my organization, I have a responsibility to help establish and communicate our ethics and attitudes on this subject. Having a core understanding of how my decisions affect others will aid me in drawing a creepy line of my own.

I start by attempting to answer the following questions:

  • What is the effect of algorithmically delivered news?
  • What data do they have on us?
  • How do we know what to trust?
  • Who do we trust?
  • Where does the conversation around digital marketing ethics sit today?

This list is incomplete; there are many more questions than I could hope to list, let alone answer. My journey is only beginning.

Now Read: What is the effect of algorithmically delivered news?

 

References:

Davis, K. & Patterson, D. (September 20, 2012). Ethics of Big Data. O’Reilly Media, Inc.

Eric Schmidt discusses how algorithmic targeting works at Google (2011). Retrieved from https://www.youtube.com/watch?v=uB-2n6KSYWk

 

Is there a conversation around ethics and digital marketing happening today?

A dialogue about ethical practices in the field of digital marketing, while not entirely nonexistent, is limited and lacking in substance.

When the machines take over the world, what part will marketers have played in the demise of the human race? Unfortunately, we don’t know and won’t for a while.

In my opinion, normalizing a conversation around ethical practices for the digital strategist should be an essential part of the discussions we are already having about consumer targeting. I have no expectation that we will get it right, or save the human race, but it is worth a healthy conversation.

Notably, the topic is a complex one, but much of what I found readily available on the Internet appears to be lacking in research and depth. Scholarly references, while not plentiful, were available to those with access to academic databases.

In the USA in 2013, full-year Internet advertising revenues totaled $42.78 billion, up 17 percent from the $36.57 billion reported in 2012 (IAB 2014). Search-related revenues accounted for 43 percent, display-related advertising for 30 percent and mobile, which grew by more than 140 percent between 2011 and 2013, reached 17 percent of the Internet advertising revenues (Nill, Aalberts, Li, & Schibrowsky, 2015).

The lack of dialogue focused on ethical practices in digital marketing, compared to the amount of money being spent on digital advertising, appears unbalanced, to me at least. I found quite a few discussions centered on the pros and cons of machine learning and the use of big data, but too little regarding their use in the marketing space.

I believe we are only beginning to understand the effects of our focused, personalized targeting efforts.

Consider this: “the recent advances in the use and potential abuse of ‘big data’ is one of the most pressing issues facing both marketers and public policy decision makers,” according to Alexander Nill, Robert Aalberts, Herman Li, and John Schibrowsky, contributors to the Handbook on Ethics and Marketing. The researchers support “more ethics-based research” focused on data privacy and consumer targeting (2015).

While [Online Behavioral Advertising] potentially provides advantages to online consumers such as ‘free’ access to online sites – the advertising revenues pay for keeping the sites free of charge – the practice has the technological potential to violate consumers’ privacy to a hitherto unmatched extent. OBA is poorly understood by most consumers, often non-transparent and sometimes outright deceptive. Since the practice is relatively new, laws and regulations are still evolving (Nill, Aalberts, Li, & Schibrowsky, 2015).

Evil, much like beauty, is in the eye of the beholder. Ethical decision making works in a similar fashion: each person makes decisions based on the experiences and biases they bring to the table (Nill et al., 2015). Statements like “don’t be evil” (Google, 2014) just won’t cut it; they do not clearly define the ethical stance of a company.

I am not advocating for regulations that stifle creativity or put innovation at risk, but I do think it is time for a real discussion about the ethics of profiling and targeting consumers.

References
Google. (2014). U.S. Public Policy. Retrieved from https://www.google.com/publicpolicy/transparency.html

Nill, A., Aalberts, R. J., Li, H., & Schibrowsky, J. (June 26, 2015). New telecommunication technologies, big data and online behavioral advertising: do we need an ethical analysis? Handbook on Ethics and Marketing. Retrieved from https://www-elgaronline-com.ezp-prod1.hul.harvard.edu/view/9781781003428.00025.xml

Who and what do we trust?

We as a society are still placing trust in large brands and big names. Maybe that trust is wavering or changing a bit, but in a society where alternative facts and echo chambers are real, it is becoming increasingly difficult to recognize authority and know what to trust.

When I googled the phrase “what makes an authority,” I didn’t receive the results I had hoped for. Instead of handily presenting me with a psychological paper, my search returned a list of resources designed to teach me how to become an authority myself.

The Launch Coach wants to help me become an authority, not simply an expert. The Goins Writer can help me get 100K readers in 18 months, and the Authority Blog Starter Kit shows what an Authority website really is.

Before I conducted this search, I thought we had to earn authority. From this results list, it looks as if we can purchase it at a reasonable rate. To understand the difficulty in knowing exactly whom to trust, or whom to consider an expert, think about how many people out there are calling themselves experts. “In 1970, there were about two dozen think tanks. Today, there are over 3,500 worldwide” (Weinberger, 2012).

To dig into this idea, I ran a few studies through Amazon Mechanical Turk, a service that gives anyone access to a large pool of respondents at low cost. I ran two tests. In the first test, I surveyed 200 people, asking them to select the tweet that was most likely to be true.

The tweets that appeared to garner the most trust were a posting by NBC about celebrities raising money for victims in Somalia (31 percent) and a tweet from Embry-Riddle Aeronautical University’s head of athletics with recent stats from a game (27 percent). Coming in a close third was a posting by Business Insider about Ikea’s effort to build shelters for homeless people that can be assembled quickly (21 percent).

The final two tweets identified as trustworthy were retweets by people who carried no brand recognition. One claimed a black man had been killed by white supremacists in New York City (11 percent); the other accused Congressman Devin Nunes of protecting President Trump (10 percent).

I also asked survey participants why they chose as they did.

Here are some of my favorite responses from my survey:

  • “Don’t think anyone would make it up.”
  • “This tweet has most number of hearts and retweets.”
  • “Nunes is corrupt.”
  • “Because it is from a reputed news channel with blue tick, which means it is authentic.”
  • “It is from a reputable Twitter account and the story is non-clickbaitey.”
  • “Ikea often does things like this, it is totally believable. As it is the one most easily proven, I chose it over the others.”
  • “It seems factual enough and people don’t usually lie over scores.”

In the second test, I asked Mechanical Turk workers to review a set of websites and tell me which of the sites presented were most likely to be reputable and which were least likely to be reputable. I had 50 respondents to this test.

The five sites selected for this test originated from a Google search for the phrase “raising children” and are presented in the table below.

Website | % who selected “not reliable” | % who selected “reliable”
Raising Special Kids | 14% | 14%
Wikihow | 35% | 41%
Washington Post | 19% | 21%
American Psychological Association | 14% | 12%
Upworthy | 18% | 12%
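For transparency, here is a minimal sketch, in Python, of how raw responses like these could be tallied into the percentages above. The CSV file name and column names are assumptions for illustration, not the actual Mechanical Turk export format.

```python
# Hypothetical sketch: tally "most reputable" / "least reputable" picks from a
# CSV of survey responses (one row per worker). The file name and column names
# are assumed for illustration only.

import csv
from collections import Counter

def tally(path: str) -> None:
    most, least = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            most[row["most_reputable"]] += 1    # site picked as most reputable
            least[row["least_reputable"]] += 1  # site picked as least reputable
    total = sum(most.values())  # number of respondents
    for site in sorted(set(most) | set(least)):
        print(f"{site}: {100 * least[site] / total:.0f}% not reliable, "
              f"{100 * most[site] / total:.0f}% reliable")

# tally("website_survey_responses.csv")  # hypothetical export of the 50 responses
```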

A few reasons users gave for identifying a website as disreputable:

  • Raising Special Kids: “The site design looks cheap and dated (I’m on a desktop). There’s not really any helpful information on the front page. It looks like I’m going to have to click around to find out what sort of site it is.”
  • Wikihow: “I think the article is good. But I am not sure if the person who wrote the article is trustworthy.”
  • Washington Post: “Too long.”
  • American Psychological Association: “Pretty disorganized format with poor choice of fonts and bad front-end code upon checking the page’s development features.”
  • Upworthy: “It’s a site known for click-bait and low-quality content.”

Reasons participants offered for identifying a website as a reputable source:

  • Raising Special Kids: “The website looks legit and is followed by .org.”
  • Wikihow: “It seems to incorporate the most different sources, rather than just relying on one person’s point of view.”
  • Washington Post: “It is a news article and has a Harvard psychologist offering tips.”
  • American Psychological Association: “APA is a trusted source.”
  • Upworthy: “It seems relevant to what I am looking for but also interesting to read.”

From my anecdotal surveys, I can begin to make a few observations.

First, we still have trust in branded organizations. In many instances, survey participants indicated that they made their selections based on brand recognition.

Second, our personal experiences greatly affect what we find believable. In the end, it appears our confirmation bias, the tendency to look for information that confirms previously held ideas, heavily influences our judgments about what is true and who or what we should believe.

Third, design plays an important role in how we decide if information is true.

Next Read: Is There a Conversation Around Ethics and Digital Marketing Happening Today? 

References:  

Weinberger, D. (2012). Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room. Basic Books. New York.

How does trust work?

Anyone with a little determination can build a website, set up a Twitter account, and position themselves as a trustworthy corporation. We as users may be spending too little time considering the objectivity and sources of the information we receive and believe.

My son (a twelve-year-old gifted student and self-proclaimed gamer) and I were discussing trust on the Internet. He is in middle school and in the early stages of developing his junior researcher skills.

Hoping he had learned a thing or two about researching an idea and trying to find reputable information, I asked him how he knows if something he finds on the Internet should be trusted.

He confidently explained that the best way to know if something is true is to look at the search results page. If there are several entries that appear to support a particular idea, then most likely it is true. He also shared with me that one of his teachers reassured his class that Wikipedia is a trusted resource because they only let people with PhDs become editors on the site. I thought “Oh, crap.”

To set the record straight, a PhD is not in fact a requirement to be a Wikipedia editor. According to the Who Writes Wikipedia? page on the nonprofit’s site, all it really takes to be a Wikipedian is a little bravado: “Yes, anyone can be bold and edit an existing article or create a new one, and volunteers do not need to have any formal training” (Wikipedia, 2017).

When I tried to explain that much of what we see is being presented to us algorithmically based on a code developed by someone at a corporation such as Google, my son assured me that the people at Google can be trusted. I asked him why he would trust people he has never met, and he responded, “Why would they lie to us?”

Michael Patrick Lynch discussed this idea in his 2016 book, The Internet of Us, writing that “an organism’s default attitude toward its receptive capacities—like vision or memory—is trust.” In other words, it is natural for people to trust what they see or what someone tells them; our default is to believe.

Lynch also explains that if we are not knowledgeable on a topic, we may choose to look for answers from someone with expertise in the domain, but he stresses the need for reference checks.

Of concern is how we consume information today. With more than 60 percent of adults getting their news from social media, according to a study by the Pew Research Center, it is important that users understand that not all facts are checked and that there is no governing body for the content published to the Internet.

Lynch makes the point that “digital media gives us more means for self-expression and autonomous opinion-forming than humans have ever had. But it also allows us to find support for any view, no matter how wacky” (2016). This last bit is the concerning part because it suggests that no matter what your particular truth is, “no matter how wacky,” you could find verification for the idea on the Internet.

On November 19, 2016, Mark Zuckerberg, founder and CEO of Facebook, posted a message to Facebook users about “misinformation” on the site and outlined what the corporation plans to do to reduce the prevalence of fake news within the social media network:

The problems here are complex, both technically and philosophically. We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible. We need to be careful not to discourage sharing of opinions or to mistakenly restrict accurate content. We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties.

I don’t believe we have the maturity as a species to reason through all the information being presented to us in a logical, fact-finding way. We used to pay people to do that for us—they were called journalists. Today, we still take in information the way we always have, accepting it at face value and swallowing it whole. Is this ignorance or laziness? Both? Neither?

Next Read: Who and What Do We Trust?

References:

Lynch, M. P. (2016). The Internet of Us: Knowing More and Understanding Less in the Age of Big Data. Liveright Publishing Corporation. New York.

Wikipedia. (May 4, 2017). Who Writes Wikipedia? Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Wikipedia:Who_writes_Wikipedia%3F

Zuckerberg, M. (November 19, 2016). Facebook. Retrieved from https://www.facebook.com/zuck/posts/10103269806149061

What data do they have on us?

Corporations are gathering large amounts of data on us: data about our actions, our interests, what we search for and read, and how we interact with others. Of concern is the idea that interpretations of our right to digital privacy are in the hands of the corporations and government organizations that own the data, leaving us little control over our own digital data points.

Have you ever heard of a data broker? This term refers to companies that gather information on consumers through a variety of channels and then sell that information to corporations. The corporations use the data to build consumer profiles. They then use those profiles to market to those consumers in a targeted, personalized way.
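As a rough illustration of what such a profile could look like, here is a minimal sketch, in Python, that merges partial records from a few sources into one consumer profile. The sources, field names, and values are all invented; this is not taken from any actual broker’s pipeline.

```python
# Hypothetical sketch: merging partial records from several sources into one
# consumer profile, the way a broker's pipeline might. All sources, fields,
# and values are invented for illustration.

def merge_profiles(records):
    """Combine partial records that share an identifier (e.g. a hashed email)
    into one profile, accumulating interests and purchase history."""
    profile = {"interests": set(), "purchases": []}
    for record in records:
        profile["interests"].update(record.get("interests", []))
        profile["purchases"].extend(record.get("purchases", []))
        for key in ("age_range", "home_owner", "zip_code"):
            if key in record:
                profile[key] = record[key]
    return profile

# Partial records as they might arrive from different (hypothetical) sources.
retail_record = {"purchases": ["running shoes"], "zip_code": "32801"}
survey_record = {"age_range": "35-44", "interests": ["fitness", "travel"]}
public_record = {"home_owner": True}

print(merge_profiles([retail_record, survey_record, public_record]))
```

The point of the sketch is that no single source needs to know much about you; the profile emerges from stitching small pieces together.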

According to Brian Naylor’s article on the NPR website, data brokerage is not anything new, “but the Internet upped the ante considerably” (2016).

Naylor explains how it works:

Once these companies collect the information, the data brokers package and sell it—sometimes to other brokers, sometimes to businesses—that then use the information to target ads to consumers. And it’s a lucrative industry. One of the largest brokers, Acxiom, reported over $800 million in revenue last year (2016).

This raises concerns around the right to privacy. We do have that right, right?

The Fourth Amendment reads:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized (National Constitution Center, 2017).

Unfortunately, the Founding Fathers did not anticipate digital media or the Internet, so the Fourth Amendment does not include anything about large corporations selling, buying, storing, or mining your data.

The American Civil Liberties Union expresses its concerns about the privacy of our data on its website:

Technological innovation has outpaced our privacy protections. As a result, our digital footprint can be tracked by the government and corporations in ways that were once unthinkable. This digital footprint is constantly growing, containing more and more data about the most intimate aspects of our lives. This includes our communications, whereabouts, online searches, purchases, and even our bodies. When the government has easy access to this information, we lose more than just privacy and control over our information. Free speech, security, and equality suffer as well (2017).

To compound this concern, we are freely sharing intimate information on websites and applications owned by large corporations. That information is being archived, mined, and sold. Based on data gathered, corporations can use machine learning to understand and make assumptions about who we are and what actions we are likely to take.

Curious about my own data floating around on the Internet, I spent 30 minutes googling myself. I found information on my age, design style, address, career and education history, cities I’ve lived in, homes I’ve owned, things I am interested in, projects I’ve worked on, awards I’ve won, court records, and my relationship and marital history.

This search did not take into account the mounds of search and activity data about me that corporations have stored but not made public. If you are interested, Google will share an archive of your data from the Google apps you use. You can delete those accounts and clear your browser history, but I was unable to find a statement indicating that, if you do, the data will be erased from Google’s files (Google Support, 2017).

In the book The Internet of Us, Michael Patrick Lynch notes, “In some cases—many cases in fact—we trade information in situations where trust has already been established to some degree.” For instance, if you have an affinity for a particular brand you may be more comfortable completing an online form and giving over your personal information than if you had never heard of the company (2016). Can we assume that humans who give their data up freely trust the organizations to which they are giving that data?

We may well be trusting blindly. We have little visibility into what data is being stored on us and even less into how those data are being mined and harvested. Even if we did know what data these corporations hold, most likely we still would not know what assumptions the corporate algorithms are making about us.

Alas, in an article on Backchannel.com, David Weinberger explains how machine learning works and how little we can hope to understand:

Clearly our computers have surpassed us in their power to discriminate, find patterns, and draw conclusions. That’s one reason we use them. Rather than reducing phenomena to fit a relatively simple model, we can now let our computers make models as big as they need to. But this also seems to mean that what we know depends upon the output of machines the functioning of which we cannot follow, explain, or understand (2016).

Next Read: How Does Trust Work?

References: 

ACLU. (2017). Privacy & Technology. Retrieved from https://www.aclu.org/issues/privacy-technology

Google Support. (2017). Retrieved from https://support.google.com/accounts/answer/3024190?hl=en

Lynch, M. P. (2016). The Internet of Us: Knowing More and Understanding Less in the Age of Big Data. Liveright Publishing Corporation. New York.

National Constitution Center. (2017). Amendment IV. Retrieved from https://constitutioncenter.org/interactive-constitution/amendments/amendment-iv

Naylor, B. (July 11, 2016). Firms Are Buying, Sharing Your Online Info. What Can You Do About It? NPR. Retrieved from http://www.npr.org/sections/alltechconsidered/2016/07/11/485571291/firms-are-buying-sharing-your-online-info-what-can-you-do-about-it

Weinberger, D. (April 18, 2016). Alien Knowledge: When Machines Justify Knowledge. Backchannel. Retrieved from https://backchannel.com/our-machines-now-have-knowledge-well-never-understand-857a479dcc0e
