Monthly Archives: November 2014

Opt-out of the involuntary personal data economy

The personal data economy starts off with acquiring data on individuals from census information, public records, web trackers, surveys, and various other methods, some more and some less nefarious.  Data is aggregated in data warehouses and sold by data brokers in large batches to whoever is interested in buying it (read more here).  Most commonly it is used either for marketing and ad-targeting, or for general market research. This happens constantly and mostly without our explicit consent, though often without our explicit dissent either.

Many articles can be found on the net written by people who went to data warehouse and brokerage companies and requested to see what they knew about them (one, two, three, etc.).  More often than not the brokerages complied, and the results were a mix of surprise at some things these companies got right, like address histories, and perplexity at vague inferences which seemed to be off the mark, like race and lifestyle type.  All in all it seems a great deal of accurate data about us is out there, but it’s mixed in with a lot of noisy junk that makes little sense.

However, aside from being willing to tell you what they know (or think they know) about you, data brokerages will also let you ‘opt-out’ of their databases.  Ken Gagne at Computerworld drew up a list of the 10 biggest data warehousing and brokerage companies, and went through the paces to see if he could opt-out of their data pools.  Of the 10 he tried, he was able to effectively opt out of 9 of them with some caveats.  The process was not always straightforward, but he’s documented it well so you can follow in his footsteps if you’d like.  He notes at the end of the article that this is not a ‘fire-and-forget’ process.  Just because you’re out doesn’t mean you won’t sneak back in when they purchase their next big batch of data. There is a service called PrivacyMate that offers to do the repeat work for you, but at $120 a year it doesn’t come cheap.

WhatsApp encrypts messages, and confuses me.

In February of 2014, Facebook acquired WhatsApp for $19 billion.  At the time, WhatsApp had 400 million users, mostly in Europe, and was rolling out its 99-cent annual subscription.  That works out to a potential annual turnover of roughly $400 million, yet the company was somehow worth $19 billion.  While the company was acquiring almost a million users a day, that valuation still seemed excessive, and everyone tried to guess why it was worth so much.

For me, it seemed obvious.  My take was that the payment was not for the value of WhatsApp per se, but rather that Facebook was paying to protect their existing business model.  Facebook’s revenue comes from understanding their users, and pitching them ads that they like, thereby generating higher click-rates than their competition.  They had noticed that a large portion of communication between their current users was now happening outside of the Facebook platform, and had paid that price to stay in the loop on what their users were talking about.

And today (18 Nov ’14), WhatsApp announced end-to-end encryption for WhatsApp messages.  It’s only Android users for now, and does not include group, image or video messages, but all of that is coming soon.  What this means is that WhatsApp, and thereby Facebook as well, does not have access to the content of its users’ messages.   On top of that, they have been planning and implementing this since the Facebook acquisition, so we can assume Facebook knew this was coming.  This raises the question: why on earth did Facebook pay $19 billion for half a billion in annual revenue?  For WhatsApp this is a great move; you just have to look at the effect of Facebook’s acquisition of Moves App on the latter’s app store ratings to see why.  But why would Facebook be OK with this?  Don’t they now own a super expensive but lame racehorse?  I thought I had this one all figured out, and now I’m as confused as everyone else.
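
The rollout was built on Open Whisper Systems’ TextSecure protocol, which is far more sophisticated than anything I can sketch here.  But the core reason the server (and thus Facebook) can’t read the messages is a key exchange in the spirit of Diffie-Hellman: the relay only ever sees public values.  A toy sketch with deliberately simplified, made-up parameters (real deployments use authenticated exchanges and much stronger groups or elliptic curves):

```python
import random

# Toy Diffie-Hellman exchange, for illustration only.
P = 2**127 - 1   # a Mersenne prime; toy-sized modulus, not production-grade
G = 3            # public generator

a = random.randrange(2, P - 2)   # Alice's secret, never leaves her phone
b = random.randrange(2, P - 2)   # Bob's secret, never leaves his phone

A = pow(G, a, P)                 # public values: this is all the relay
B = pow(G, b, P)                 # (i.e. WhatsApp's servers) ever sees

shared_alice = pow(B, a, P)      # both sides derive the same key locally,
shared_bob = pow(A, b, P)        # so the relay cannot decrypt anything
print(shared_alice == shared_bob)  # -> True
```

Knowing only G, A and B, recovering the shared key requires solving a discrete logarithm, which is exactly what the middleman can’t do.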

Verizon Smart Rewards not so Smart

A while back Verizon introduced their Smart Rewards program, which offers bonuses to users.  All of the marketing for this program makes it seem like you just sign up and get stuff, that’s it.  But it’s not till you get down to the very fine print that you see you must enroll in “Verizon Selects” to participate, and you are paying for those bonus points (of questionable value) with your data:

“Participation in Smart Rewards may require enrollment in Verizon Selects, which personalizes marketing customers may receive from Verizon and other companies by using information about customers’ use of Verizon products and services including location, web browsing and app usage data.”

On the Verizon Selects website they say a little about what data they use:

“Simply put, Verizon Selects will use location, web browsing and mobile application usage data, as well as other information including customer demographic and interest data”

So whatever you think of the transparency, it is still an interesting concept to reward interested individuals for their data instead of just taking it.  It appeared to be a step in the right direction from our point of view. That was until it came out that the Verizon Selects opt-out didn’t actually opt you out, and even those who never opted in were still surrendering their data without rewards.  Jacob Hoffman-Andrews blew the lid off this by doing a little snooping in the information that Verizon mobile browsers were putting out there.

Earn “points” for surrendering your personal data and circumventing any privacy you had on your mobile device.

To make matters worse, the really bad part was not that Verizon was jacking your data (they were), but that they were circumventing all of your privacy protections by making you completely trackable to every website you visited.  More or less, every time your phone talks to the outside world, they insert a marker into that conversation (at the network level, mind you) which tells the other party who you are.  It’s like Verizon was trying to shoot itself in the foot from a customer trust standpoint, but had tiny, child-like feet and a bow-and-arrow, so they had to work really, really hard before they managed it.  The New York Times also reported that AT&T has/had a similar program in the works.  I bet the conversation in upper management there went from “whose fault is it that we didn’t do this first?” to “thank Zeus we didn’t do that” in a single heartbeat.
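
The marker Hoffman-Andrews found was an injected HTTP header, X-UIDH, visible to any site you visited over Verizon’s network.  Here is a minimal sketch of how a site operator (or a curious user running their own test page) might flag such a header; the helper name and the sample token are made up for illustration:

```python
# Sketch: flag carrier-injected tracking identifiers in an HTTP request.
# "X-UIDH" is the header Verizon was found to inject; the token value
# below is invented for the example.
TRACKING_HEADERS = {"x-uidh"}

def find_tracking_headers(headers):
    """Return the subset of request headers that look like carrier tracking IDs."""
    return {k: v for k, v in headers.items() if k.lower() in TRACKING_HEADERS}

# Example: headers as a site operator would see them from a phone on the network
request_headers = {
    "Host": "example.com",
    "User-Agent": "Mozilla/5.0 (Linux; Android 4.4)",
    "X-UIDH": "OTgxNTk2NDk0made-up-token",
}
print(find_tracking_headers(request_headers))  # prints the X-UIDH entry
```

The point is that this identifier rides along outside the phone entirely, so clearing cookies or browsing “privately” does nothing to remove it.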

91% of Americans feel they are not in control of their info, and think it’s their own fault.

From the BBC Article on the Pew Research Privacy Report

The Pew Research Center released a study on how Americans feel about their privacy in a post-Snowden era.  Not surprisingly, the results show that 91% of Americans feel they don’t have control over how their data is used, and for good reason.  The information they are most worried about is consistent with other studies: data which can be used to defraud or impersonate them has the highest priority, followed by behavioral information, with social and demographic data at the lowest priority.


However, what is interesting is the extremely high importance of health care information that the new study revealed, which as far as I can tell is not dangerous in terms of fraud or impersonation, but just … well … private.  Similarly, inter-personal communications such as emails, calls and texts are also considered highly private.  The increasing importance of the privacy of communication could very well be attributed to the events surrounding Snowden and the NSA.


How Individuals Prioritize Who They Feel is Responsible for Safeguarding their Privacy Across Different Countries and Continents. 1 means “Most Responsible” – Taken from a Lightspeed GMI Presentation at MRMW’14

But this only becomes really interesting when looking at the Pew survey in context.  A recent study from Lightspeed GMI surveyed where individuals lay responsibility for preserving privacy: “whose job is it to keep this stuff private?”  Germany, Mexico and India all put the mandate on the mobile providers and then government to regulate the use of private data and guard individual privacy.  In the US, however, people believe it is their own job (followed by providers, marketing companies, and only then government) to keep their information private.  Yet most people do not even hold the data they feel responsible for protecting, nor do they understand how analytics can extract this information from seemingly unrelated data (some health information can be decoded from accelerometer data, for example).
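
As a toy illustration of that last point, here is a crude step counter over raw accelerometer samples.  The threshold and the synthetic data are invented for the example, and real gait or health analysis is far more sophisticated, but it shows how “boring” sensor readings become behavioral data:

```python
import math

# Toy sketch: count steps as upward crossings of an acceleration-magnitude
# threshold.  Threshold and data are made up; real pipelines filter and
# model the signal far more carefully.
def count_steps(samples, threshold=11.0):
    """samples: list of (ax, ay, az) in m/s^2; counts upward threshold crossings."""
    steps, above = 0, False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above:
            steps += 1
            above = True
        elif mag <= threshold:
            above = False
    return steps

# Synthetic walk: gravity (~9.8 m/s^2) plus one spike per step
walk = [(0, 0, 9.8), (0, 0, 13.0), (0, 0, 9.8), (0, 0, 12.5), (0, 0, 9.8)]
print(count_steps(walk))  # -> 2
```

From step counts and timing alone, an analyst can start inferring activity levels, routines, even limps, none of which the user ever explicitly shared.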



Beyond Quantified Self: Quantified Community

The next big buzzword in Quantified Self is out, and it didn’t come from Quantified Selfers. “Quantified Community” is the term real estate moguls are using when they refer to the newest high-end real estate projects in New York City.  Real estate companies would like to quantify everything they can about the individuals within a space, as well as environmental parameters of that space.  The goal is to present investors and renters/buyers with a complete documentation of the space and its inhabitants to help them reach a decision.

Hudson Yards Developer’s Concept

The Hudson Yards real estate project on Manhattan’s West Side is the proving grounds for this new concept.  They want to measure everything from air quality sensors in the environment, to step counters and other quantifiers on individuals’ smartphones, all on an opt-out basis, of course. Even quantifying individuals who are not participating seems to be within the scope.  A suggestion to use Google Glass has been made, which will surely prompt outrage; perhaps “Community of Glass-holes” will be trending soon.

The resulting data will be used not only for marketing purposes, but also to improve city planning in the immediate future.   Even reducing the energy footprint of these communities appears possible with accurate usage data. Users are expected to want to contribute their data since their participation will have a positive impact on their daily environment.  As always, the success of the project will live and die with the willingness of the user to share.  And that depends on how well they understand data analytics, how much they trust the entity collecting data, and how well they understand what it will, and will not, be used for.