Talking to users and taking their feedback is part of our life as designers, and this is where we can contribute the most. When it comes to building intelligent systems, we can be an integral part of the process by making sure our users' interests and goals are what drive artificial intelligence forward.
Built-in feedback

When designing interfaces with AI, we should keep in mind that the prediction may be wrong, and design for that possibility.
A great way to start is by building interfaces with structured feedback built in: when your model makes a mistake, structured feedback is often better than a yes/no question.
Netflix built a whole system just to try to show you the one artwork they think will make you click. But what if you really dislike that artwork? What if it's offensive?
If there's no structured feedback in place, you'll rely solely on what the machine says is right, and you risk removing the human factor from the loop.
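As a rough sketch of what "structured feedback" might mean in code (the reason list, type names, and `summarize` function here are hypothetical, not taken from any real product):

```typescript
// Hypothetical structured-feedback model: instead of a bare yes/no,
// the user picks a reason why the prediction missed.
type FeedbackReason =
  | "not-interested"
  | "already-seen"
  | "offensive"
  | "wrong-category";

interface Feedback {
  itemId: string;
  reason: FeedbackReason;
}

// Aggregate feedback so the team (and the model) can see *why*
// predictions fail, not just *that* they fail.
function summarize(feedback: Feedback[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const f of feedback) {
    counts[f.reason] = (counts[f.reason] ?? 0) + 1;
  }
  return counts;
}

const log: Feedback[] = [
  { itemId: "artwork-1", reason: "offensive" },
  { itemId: "artwork-2", reason: "offensive" },
  { itemId: "artwork-3", reason: "already-seen" },
];

console.log(summarize(log));
```

A yes/no button would only tell you the recommendation failed; a reason like `"offensive"` tells you it failed in a way that needs urgent attention.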
Don't try to deceive people
The main danger of AI, in my view, is that we might outsource important life decisions to a yes-or-no black box, and if we try to pass that black box off as human, we might make things worse.
Show them why you're recommending something

Be sure to tell people what's happening. YouTube does a great job at this: it recommends new videos based on what other people also watched, and it writes below each video the one that triggered the recommendation: "Kevin Kenson viewers watch this". With a little bit of copy you can give your users more transparency, and maybe earn more of their trust in the future, because now they know why you're recommending things.
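The "little bit of copy" pattern can be sketched as a tiny template function (this helper and its wording are illustrative, not YouTube's actual implementation):

```typescript
// Hypothetical helper: turn the signal behind a recommendation
// into a short, human-readable explanation string.
interface Recommendation {
  videoTitle: string;
  // The channel whose viewers triggered this recommendation.
  sourceChannel: string;
}

function explanationCopy(rec: Recommendation): string {
  // Mirrors the pattern "Kevin Kenson viewers watch this".
  return `${rec.sourceChannel} viewers watch this`;
}

const rec: Recommendation = {
  videoTitle: "Handheld Gaming Setup Tour",
  sourceChannel: "Kevin Kenson",
};

console.log(explanationCopy(rec));
```

The design point is that the explanation is generated from the same signal the recommender actually used, so the copy stays honest.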