This week on Product Love, I sat down with Ben Taylor to continue our conversation. Ben Taylor is the chief data evangelist at DataRobot, an automated machine learning platform that makes it fast and easy to build and deploy accurate predictive models.

Last time, we talked a lot about AI and start-ups. Before jumping into Part 2, get a summary of our previous episode or listen to it now on Apple or Spotify.

During this episode, we talk about trusting the customer, biases in AI, and being obsessed with the problem.

Trusting the customer … or not

Ben left us on a bit of a cliffhanger last time. He ended the episode by saying that he didn’t trust customers that much. That’s a pretty bold statement for anybody in tech, and particularly in product management, to hear. After all, PMs are constantly touted as champions of the customer. But don’t fret — here’s what he means.

Sure, customers can provide some of the best product inspiration. In fact, Ben gets some of his best ideas from customers, especially the less technical ones. However, their feature requests often get so specific that you risk becoming a consultant. Don’t trade long-term product development that can truly make a difference for one-off custom work.

Instead, he suggests measuring based on behavior (a statement I wholeheartedly agree with). If you ask users what they do in the product, there might actually be a huge disparity between what they say and what they do. For example, the sales team might push back on PMs retiring certain features because they believe big accounts depend on them. If you look at the data, that might not be true. You might even find that those users haven’t touched that part of the application in months.
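The "measure behavior, not opinions" idea can be made concrete with a tiny usage-log check. This is a hypothetical sketch (the event log, account names, and 90-day cutoff are all illustrative assumptions, not anything from the episode): find features that no account has actually touched in months, regardless of what anyone claims.

```python
from datetime import datetime, timedelta

# Hypothetical usage log: (account, feature, last-used timestamp).
# In practice this would come from your product analytics pipeline.
events = [
    ("acme-corp", "export_csv", datetime(2019, 1, 5)),
    ("acme-corp", "dashboard", datetime(2019, 6, 1)),
    ("globex", "export_csv", datetime(2018, 11, 20)),
]

def stale_features(events, as_of, threshold_days=90):
    """Return features that no account has used within threshold_days."""
    last_use = {}
    for _, feature, ts in events:
        last_use[feature] = max(last_use.get(feature, ts), ts)
    cutoff = as_of - timedelta(days=threshold_days)
    return sorted(f for f, ts in last_use.items() if ts < cutoff)

print(stale_features(events, as_of=datetime(2019, 7, 1)))
# → ['export_csv']  (nobody has used CSV export since January)
```

If the sales team insists a big account relies on CSV export, a query like this settles the question in seconds.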

Biases in AI

We all know AI isn’t perfect, and plenty of companies have come under fire for producing discriminatory algorithms. Ben understands that humans are biased and that machines need to be trained to find those biases.

He brings up an example from HireVue, where he worked on resume screening. He asked the members of a boardroom to raise their hand if they thought it was bad to predict race from a resume. No one raised their hand, which signaled to him that this lack of awareness was itself a concern. The team would get data sets that reflected racial and gender bias, so they decided to train the algorithm to reject those biases. The model learned which parts of a resume signified race or gender and removed them from the document; for example, no names were displayed.
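The stripping step can be sketched in miniature. This is not HireVue's system: a production pipeline would use a trained model to detect demographic signals, whereas the hand-written token list and `redact` helper below are purely illustrative assumptions to show the shape of the idea (find the signal, mask it before scoring).

```python
import re

# Toy list of tokens that could signal race or gender.
# A real system would learn these signals from data, not hard-code them.
SIGNAL_TOKENS = {"maria", "james", "sorority", "fraternity"}

def redact(text, tokens=SIGNAL_TOKENS):
    """Mask any word flagged as a demographic signal before scoring."""
    def mask(match):
        word = match.group(0)
        return "[REDACTED]" if word.lower() in tokens else word
    return re.sub(r"[A-Za-z]+", mask, text)

print(redact("Maria Lopez, president of her sorority"))
# → [REDACTED] Lopez, president of her [REDACTED]
```

The design point is that redaction happens upstream of the model: the scorer never sees the signal, so it cannot learn to weight it.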

Hiring data scientists

Ben’s previous team was full of physicists with PhDs. Did that just get you down? No worries. The quality he looks for most when building out an AI team is the ability to get through blockers.

If you’re a developer in the AI space, what’s the likelihood you’re going to get blocked? 100%. It’s natural. What he looks for is someone who can get around those challenges. That could mean you’re really good at Googling, following YouTube tutorials, or searching Stack Overflow — or simply that you’re obsessed with problem-solving.

Want to learn what Ben thinks about NPS? Patents? Or the most common mistakes people make in AI startups? Listen to the episode above.

About the Author

Eric Boduch is the chief evangelist for Pendo. Previously, he served as the CEO of Brainstorm SMS Technologies LLC (dba SMaSh, Inc.) and was the co-founder and CEO of several other companies. Eric holds a Bachelor of Science in Electrical and Computer Engineering from Carnegie Mellon University and is a graduate of its Executive Management Program.