Since the inception of advertising, the goal of a brand has always been to answer one question: how can I get my message in front of the consumers most likely to be influenced? Over the decades, media such as newspapers, radio, and television have made increasing strides in delivering well-defined audiences to places where advertisers can effectively reach them. But despite these successes, advertising has remained a game of statistics, general summaries, and aggregation.
Social media sites, exemplified by Facebook, have changed this equation dramatically by bringing concrete data about individuals into the mix. A brand can now go beyond targeting people on a strategic set of numbers and demographics to targeting them on their interests, their thoughts, and what they have recently searched for online. These networks give users ample opportunities to reveal personal information: their likes and dislikes, who they follow, who they communicate with, the games they play, the music they listen to, and a large subset of their online activity. This data can be used to predict purchase behavior, identify influencers, and derive many other insights that make a marketer's job easier.
However, this comes at the cost of the consumer's privacy, a high cost that is only beginning to come to light. The inevitable backlash is coming: first among a small group of privacy crusaders, then from a large percentage of the internet-using population, and finally in litigation.
Who do you trust?
The issue at hand is that there is a fundamental breach of trust at the heart of the way in which traditional networks like Facebook and Google+ operate. As a person, every day I make trust decisions about the people and entities I interact with. When I use my credit card, I am trusting my bank to know what I buy. When I read a book on my Kindle, I am trusting Amazon to know what I read. When I go to my doctor, I am trusting the healthcare system to know what ailments I have and what medicines I am receiving. We make these trust decisions because we understand how these entities work, and we know that they are responsible (and in some cases liable) for maintaining our trust. These entities are also compartmentalized: Amazon doesn't know what medicines I'm taking, and my doctor doesn't know what I'm reading.
But when I connect those same transactions to a social network, those trust lines are no longer valid. Not only do I end up sharing data with Facebook that I may have thought was private, but Facebook is not bound by the trust relationship I have with the original party. Suddenly Facebook knows both my medical history and my reading history, and I may never have intended for the two to come together. What's more, Facebook will then sell that information to parties I have never met, much less decided are trustworthy. This seems like a valuable product for advertisers: a car salesman, for example, might want to know that a person is a Microsoft executive, is a Republican, makes over $160k/year, and drives a five-year-old car. But that person might never have wanted his employment data and political affiliation correlated. Most people accept this now only because they don't realize it's happening. Once they become aware, I expect consumers and government to put a stop to it.
The issue at hand is that mainstream social networks are breaking the underlying trust relationships that we believe exist.
Where does that leave brands that want to do targeted messaging? Does it mean we have to give up our concrete data and go back to vague targeting, aggregation, and statistical likelihoods? Luckily, no. The problem is not the targeting itself; it is that the current mainstream social networks break the underlying trust relationships.
As a consumer, I appreciate well-targeted advertising. When I am in the market for a camera, a car or even a new show, I welcome suggestions that are relevant to me and that can help inform my opinion, inspire me, educate me, or even incentivize me with a deal. My concern comes from how this information is collected.
The trick will be in devising a way for the original information holders that you trust, be it your bank, your doctor, or an online retailer, to share relevant and verified facts on your behalf without revealing the information behind those facts. For example, if I am writing a review of a hotel, I may want to claim that I frequently stay at five-star hotels so you know that my review is credible. A travel company that I already trust with my travel history could issue a statement on my behalf saying that I stay at many five-star hotels without releasing my actual travel history. Or that company could issue me a statement saying that I work at a tech company without releasing my actual email address or even my company name.
How badges work
Statements of information from one company to another, referred to as badges, can be presented by a person wherever they are needed. So I might use my travel badge on a travel review site and my tech badge on a tech review site. The badges themselves do not contain any of the personally identifying information (PII) that was used to generate them, so there is no breach of the trust barrier. And because the consumer decides whether to use a badge at the time of an interaction, we can have confidence that the consumer is comfortable sharing that verified fact about himself. That is to say, if I post a statement with a tech-worker badge attached, I would not be averse to seeing an ad that targets tech workers. Or if I posted with a stays-in-five-star-hotels badge, I would not be surprised to see high-end hotel ads targeting me. What I do have a problem with is posting on a travel site and seeing ads that target me for being a tech employee. The lack of PII in badges makes that kind of privacy violation impossible.
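The mechanics can be sketched in code. In this hypothetical sketch, the function names and the shared-key HMAC scheme are illustrative assumptions, not the actual Badge Authority protocol; a production system would use public-key signatures so verifiers never hold the issuer's secret. The point is that the badge carries only the claim, a nonce, and a signature, and none of the PII behind the claim:

```python
import hmac
import hashlib
import secrets

# Illustrative assumption: the issuer holds a private signing key.
ISSUER_KEY = secrets.token_bytes(32)

def issue_badge(claim: str) -> dict:
    """Issuer attests to a claim without embedding any PII.

    The badge contains only the claim text, a random nonce, and a
    signature over both; the travel history (or other data) that
    justified the claim never leaves the issuer."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(ISSUER_KEY, f"{claim}|{nonce}".encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "nonce": nonce, "sig": sig}

def verify_badge(badge: dict) -> bool:
    """Check that the issuer really vouched for this exact claim.

    (Here the verifier shares the key for simplicity; a real system
    would verify an asymmetric signature instead.)"""
    expected = hmac.new(ISSUER_KEY,
                        f"{badge['claim']}|{badge['nonce']}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, badge["sig"])

badge = issue_badge("stays-in-five-star-hotels")
print(verify_badge(badge))  # True: the claim checks out, no travel history revealed
```

Because the signature covers the claim text, a badge cannot be repurposed: altering the claim invalidates the signature, so a travel badge cannot be rewritten into a tech-worker badge.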
Badging also solves one of the most egregious PII-sharing issues on social networks: the insistence on "real names". Because these networks are based on aggregation, they need something to hang all of that aggregation on. Any reputation a user has in these networks is based on sharing some subset of the data the network has collected about the user. Because this boundary is fluid, these systems are increasingly plagued by data leaking between contexts in ways the user never intended, such as when a school teacher is fired over something done on their own time that made it onto their Facebook feed.
Badging enables a user to share just the parts of his identity that are relevant at the time. For a hotel review, it does not matter that I am a tech worker or that I am Dave Vronay. All that matters is that I often stay at five-star hotels. Badges give people an easy way to use just the subset of their verified identity that is relevant to the task at hand, while also letting advertisers know how a person wants to be approached.
The solution: Badge Authority
Badge Authority is a secure intermediary that sits between badge issuers (those who have information) and badge consumers (those who want to use badges). Email-domain-based badges are currently in use on the Heard social exchange, and we expect them to become increasingly popular across the internet as PII issues become a mainstream concern. To issue or consume badges on your own site, please contact eweware.
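One way to picture the intermediary role is the following minimal sketch. The class and method names are hypothetical, assumed for illustration only; the document does not specify Badge Authority's actual interface. What it shows is the separation of concerns: issuers register how their badges are verified, consumers ask only "is this badge valid?", and nothing about the user's identity or underlying data passes between the two sides:

```python
class BadgeAuthority:
    """Hypothetical intermediary between badge issuers and badge consumers.

    Issuers register a verification callback under their own identifier;
    consumers ask the authority whether a badge is valid. The consumer
    never contacts the issuer directly and learns nothing beyond the
    claim itself."""

    def __init__(self):
        self._verifiers = {}  # issuer_id -> verification function

    def register_issuer(self, issuer_id, verify_fn):
        """An issuer (e.g. a travel company) registers how to check its badges."""
        self._verifiers[issuer_id] = verify_fn

    def check(self, issuer_id, badge):
        """A consumer (e.g. a review site) asks: did this issuer vouch for this badge?"""
        verify = self._verifiers.get(issuer_id)
        return bool(verify and verify(badge))


# Usage: a travel company vouches for a claim; a review site checks it.
authority = BadgeAuthority()

# Stand-in for a real signature check, for brevity of the sketch.
VALID_CLAIMS = {"stays-in-five-star-hotels"}
authority.register_issuer("travel-co", lambda b: b["claim"] in VALID_CLAIMS)

badge = {"claim": "stays-in-five-star-hotels"}
print(authority.check("travel-co", badge))  # True
```

The design choice to route every check through the authority is what keeps the trust lines intact: the review site trusts the authority, the authority trusts the registered issuer, and the user's travel history stays with the travel company.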