I can’t go on Facebook without seeing magicians.
I can trace it back to when I watched a video of America’s Got Talent. It started with singers, but soon it moved on to other categories, including illusionists. That was enough to tell Facebook’s algorithms that I had to be interested in magic and that it should show me more of what it deduced I wanted to see. Now I have to be careful, because if I click on any of that content, it will reinforce the algorithm’s notion that I must really be interested in card tricks, and pretty soon that’s all Facebook will ever show me. Even if it was all just a passing curiosity.
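The feedback loop I'm describing can be sketched in a few lines of toy code. To be clear, this is an invented illustration of click-reinforcement in general, not Facebook's actual system; the topics, boost factor, and function names are all assumptions:

```python
import random

def recommend(weights, rng):
    """Pick a topic with probability proportional to its current weight."""
    topics = list(weights)
    total = sum(weights.values())
    return rng.choices(topics, [weights[t] / total for t in topics])[0]

def simulate(clicks_on="magic", rounds=50, boost=1.5, seed=0):
    """Simulate a user who clicks only one topic; every click boosts it."""
    rng = random.Random(seed)
    weights = {"singing": 1.0, "magic": 1.0, "cooking": 1.0}
    for _ in range(rounds):
        shown = recommend(weights, rng)
        if shown == clicks_on:       # one click on the shown post...
            weights[shown] *= boost  # ...and that topic's weight grows
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

shares = simulate()
# All three topics started with equal weight, yet after 50 rounds
# "magic" dominates the feed: the more it is shown, the more it is
# clicked, and the more it is clicked, the more it is shown.
```

Run it with different seeds and the end state barely changes: a handful of early clicks is enough to tip the distribution, which is exactly why a passing curiosity can take over a feed.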
My experience is neither new nor unusual — Eli Pariser warned us about social media “filter bubbles” back in 2011 — but it’s a handy illustration of the dark places an algorithm can take you. I may get a bit annoyed when Facebook serves up a David Blaine video, but filter bubbles can be downright dangerous, turning otherwise neutral platforms into breeding grounds for all sorts of ugly ideas.
Where does my data go?
The truth is, most people have little understanding of how AI works — they just know that computers are collecting their data. And that can be scary.
Where does that data go, and who has access to it? Is it being used for my benefit, or is it being harnessed to sell me things and increase corporate profits? If you are offering a product or service with AI built into it, these are the questions your users and customers will ask. If someone is entrusting you with their data, you don’t just owe them answers. You owe them transparency.
When we were first designing Charli — our software that uses AI to help customers automate tasks and keep track of all their content and other “stuff” — we envisioned it as a “fire-and-forget” product. In other words, we were asking people to hand their data over to Charli and let the AI worry about it.