More Privacy and Better Personalization – a Mission Impossible?

Trust Beats Big Data When it Comes to Personalization

This post was originally published on Reaktor’s blog, and it continues the MyData theme. It’s good to be back in action after a half-year blogging silence. I’m also really excited about the MyData 2016 conference next week, where I’ll be hosting a session titled “Challenges in Big / Small Personal Data Analytics”. Stay tuned for updates after the event!

Personalization is a hot topic in the e-commerce and media businesses. Companies seek to stay ahead of their fierce competition by recommending relevant products for their customers to buy and content for them to consume. The personalization trend is driven by the rapidly growing amount of data on customers’ actions. That data is then crunched with intricate algorithms to make better guesses about what customers want.

The promise of big data is that the more data is collected from users, the better the recommendations become and the more profit the services can generate. But are the current recommendation systems really that good? Some say no, and I agree.

Recommendation systems are designed and built by data scientists. A common misconception is that data scientists are magicians who can turn any pile of data into gold with their algorithmic wands.

The reality is that even the most sophisticated recommendation engines and machine learning models can only take you so far. Far more important than engines and models is the input data that the recommendations are based on. Moreover, quality beats quantity – collecting more data will not improve the recommendations if the data is irrelevant or corrupt.

The data that is easily available to data scientists is collected from users’ historical actions – views, clicks, likes, and purchases. This data is crunched to identify groups of people, as well as items that frequently appear together, and is used to produce familiar recommendations such as “Customers who liked this item also liked…” or “Customers who are similar to you also like…”. This makes sense and can be useful, but it is far from a truly personalized experience that makes me say “Wow, I really want to buy this!”.
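To make that concrete, here is a minimal sketch of this kind of co-occurrence recommendation in Python. The baskets and item names are made up for illustration; the point is simply that the recommendation is driven by what appears together in everyone’s purchase histories, not by anything the individual user has told the service about themselves.

from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: one set of item IDs per customer.
baskets = [
    {"scarf", "gloves", "beanie"},
    {"scarf", "gloves"},
    {"gloves", "beanie", "socks"},
    {"scarf", "socks"},
]

# Count how often each ordered pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def also_liked(item, top_n=3):
    """Items most frequently bought together with `item`."""
    scores = Counter({other: count
                      for (first, other), count in pair_counts.items()
                      if first == item})
    return [other for other, _ in scores.most_common(top_n)]

print(also_liked("scarf"))  # e.g. ['gloves', 'beanie', 'socks']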

How could the recommendations then be improved? What kind of data would be more relevant?

Encourage trust and you’ll never recommend men’s shoes to teenage girls again

Knowing a few simple facts about the user’s background and demographics would be a good start. At the very least, it would help avoid absurd experiences such as consistently offering pregnancy tests to men. However, even basic data such as gender isn’t openly available, so profiling users is often more or less guesswork.

Naturally, the best source for information about a certain user is the user herself. So why not ask the user for the relevant information? Why not let the user fill in the gaps in the profile and correct possible inaccuracies?

“But the users don’t trust us and won’t give us any more of their data!”, you might think. Indeed, according to a recent survey, customers are increasingly distrustful of personal data collection.

This is largely because most data collection and crunching is done in secret, without explicitly telling the user what data is collected and how it is used. It should be no surprise, then, that users resist any further attempts at data collection.

There is hope, however. Taking a more user-centric approach and increasing transparency can help regain the trust of users. Inform the user and, more importantly, give them control over what is being collected and how it is being used.
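What could that control look like in practice? As a purely illustrative sketch – not an implementation of any specific MyData standard – a service could keep a per-user consent record that it must check before using a piece of data for a given purpose, and that the user can revoke at any time:

from dataclasses import dataclass, field

@dataclass
class ConsentProfile:
    """Hypothetical per-user record of which data may be used for which purpose."""
    user_id: str
    # e.g. {"purchase_history": {"recommendations"}, "gender": set()}
    allowed: dict[str, set[str]] = field(default_factory=dict)

    def permits(self, data_field: str, purpose: str) -> bool:
        """The service checks this before using a field for a given purpose."""
        return purpose in self.allowed.get(data_field, set())

    def revoke(self, data_field: str) -> None:
        """The user can withdraw consent for a field at any time."""
        self.allowed.pop(data_field, None)

consent = ConsentProfile(
    user_id="u123",
    allowed={"purchase_history": {"recommendations"}},
)
print(consent.permits("purchase_history", "recommendations"))  # True
print(consent.permits("purchase_history", "ad_targeting"))     # False
consent.revoke("purchase_history")
print(consent.permits("purchase_history", "recommendations"))  # False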

Another thing consumers repeatedly say in surveys is that they want “more personalization – but also more privacy”. At first glance this seems like a contradiction. Yet I claim that it’s not.

Personally, I would be much more willing to share my personal data with a service provider if I understood how a particular piece of information helps improve the service. I might even add relevant data from other services – for example, purchase history from another loyalty program – to make the profile even more useful. That is, provided I also had control over how the data is used and the right to edit or even delete it when necessary.

Naturally, this is also a challenge from a user experience and interaction design point of view: how do we give users control over the collection and use of their data without making it too burdensome for them?

Increasing transparency and trust in personal data use is one of the key questions in the forthcoming MyData 2016 conference.

Reaktor is supporting the conference, and we are excited to explore with our clients how personal data can be handled in a more sustainable manner to create better services. And maybe these discussions will affect e-commerce business practices, so that recommendations finally start to feel like tips from a trusted friend – instead of blind guesses from a machine.