Podcast

Full Transcript: Jon Noronha on the Product Love Podcast

This week on Product Love, we revisit a conversation with Jon Noronha, a director of product at Optimizely. Optimizely is a customer experience experimentation platform. Jon describes himself as skeptical, but also optimistic, which is a bit of an oxymoron. But in product management, he makes it work.

His skepticism comes from knowing how easy it is to lie with charts and data, and how easy it is to take a misleading quote out of context, so he digs in to understand the real story behind the numbers. Despite this, he tries to radiate optimism to his team. Product managers often have to be their product's loudest cheerleaders as well as its biggest critics, and balancing skepticism with optimism lets them be both the anchor for their team through emergencies and the visionary that drives the team's motivation.

We discussed how his mindset and product values at Optimizely have changed along the way. Early on, his team valued ease of use: the customer experience journey had to be fairly simple and straightforward. But over time, the market's gotten savvier and so have the users. In response, they changed their strategy to focus more on extensibility so users could configure their product for exact use cases. They found more value in thinking about how different target users should change the way they built their product.

In this episode, Jon shares his insights on experimentation, and why product managers should actively seek what’s important in a product but learn how to let go of things that don’t matter. 

I am now happy to share a lightly edited transcript of our conversation. Whether you prefer audio or love reading, I hope you enjoy it!

You can check out the original post here and stream the audio version here or subscribe on iTunes today.


Eric Boduch: Welcome listeners to another episode of Product Love. Eric Boduch here today with guest Jon Noronha. Jon is the director of product at Optimizely. So to kick this off, Jon, why don't you give us a little overview of your background.

Jon Noronha: Yeah, happy to. So I really lucked into the whole field of product management. It wasn't something I particularly aspired to or sought. I actually fell into it, starting with an internship at Microsoft several years ago, where I didn't even know what the job was, but I found myself doing it and liking it and eventually loving it. And so I actually spent several years as a program manager at Microsoft, working on the Bing team, which was definitely a trip, kind of seeing this company that had been used to building boxed software switch to building an online service. And during my time there, I saw how much Bing was adopting this practice of A/B testing and experimentation and thought wow, this is really powerful. Also really hard. Everyone should be doing this, and it should be easier.

And so about four years ago, I made a jump from Seattle down back to San Francisco, where home is, and joined Optimizely, which at the time was pretty small. It’s grown a fair amount since then, and I’ve been in product management here ever since.

E: Awesome. So let’s chat a little more about your time at Microsoft. I mean of particular interest to me is that you worked on the Bing team, like you just talked about. And you had to have some really cool experiences, because it’s got to be difficult competing with an entrenched competitor, i.e. Google, in a winner-take-all space like search.

J: Oh absolutely. Not just an entrenched competitor, but a truly good one. It wasn’t a case where Google search sucked and everyone was desperate for a competitor. It was quite the opposite. People loved Google, and that included me. Many of us did. But we also felt there had to be some kind of competition, some kind of challenge. We wanted to actually build that. And at Microsoft, it wasn’t just about building a search competitor. It was really about building this muscle of running an online, always-on service that a billion people could use. That was really the landmark for us.

When I joined Bing in 2011, we were losing $400,000 an hour. That was my favorite TechCrunch headline. So it was truly a hole that we started in, having to compete against a company like Google. What I think most people don’t know is that by the time I left in 2014, Bing was actually making about a billion dollars a quarter. So it was amazing to actually see the turnaround there. The turnaround didn’t necessarily come from beating Google on all fronts, but from taking something that was way behind and building something that was actually pretty competitive in a lot of aspects. And, I’d like to think, even ahead in certain areas, like image search, where I was working.

You’re right though, it’s a winner-take-all space. I mean part of what let Bing do that growth was going from on the order of 10% market share to like 25% over that time period. And that meant you could finally compete for advertisers for the first time and actually get them on your side. So it was really quite a trip. And in particular, having seen Microsoft make that cultural change from these three-year development cycles, you know, software in a shrink-wrapped box, to deploying code every week. Every part of how we worked had to change from the ground up.

E: Yeah, it’s interesting. I mean I grew up where Microsoft was kind of cool, and then it became totally uncool, and now they seem to be cool again. And a lot of it seems to be this push into the cloud, new leadership under Satya. Sounds like it’s really kind of, I don’t want to say righted the ship, because it was always a valuable company, but became something that people are super excited about.

J: Yeah, I think so. I mean certainly the share price reflects that, which all of us Microsoft people are happy about. But yeah, absolutely. And I think a lot of that actually started at Bing. Not because it was all search, but because that’s where Microsoft built the muscle of deploying online services for the first time. And the funniest thing was seeing people who’d been at Microsoft for 15, 20 years having to rethink really every part of how they worked. For me as a new hire there, it was easy. It was obvious that we should be doing more frequent code deploys. We should be doing experiments. But I think it was most remarkable seeing how they steered that very large ship in a different direction.

E: Cool. That had to be an awesome experience. And from then Optimizely now. And you once said in a quote, “Most software engineering is a waste of time. This sounds like dramatics, but it’s the essence of why product management exists as a discipline.” Can you explain that?

J: Yeah, I’ve gotten in a little trouble for saying this, but I will stand by it, and maybe I’ll try to expand on what I mean. So I don’t know if you feel this way as a product person or if others do, but my feeling was always that at any given time, I have at least 100 ideas of what our team could be working on. 100 really good, well-validated ideas. Not just pulled out of nowhere, but serious requests that people are asking for that would make things better. But in any given month, I can do maybe two or three of those things. That’s the ratio, right? And so at a minimum, there’s this opportunity cost. Whichever things we choose to do come at the expense of the other 97, 98 things that you might be doing. And so it’s kind of a painful realization to think in those terms. But it’s the God’s honest truth of what you’re trying to deal with here.

I actually saw this first at Microsoft. My favorite story ever from Microsoft is the story of I believe it was Excel 2003, where believe it or not, in the early 2000s, Microsoft had the whole idea behind Google Docs. They wanted to build a collaborative document editing thing where people could get together on the cloud. That term didn’t quite exist yet. But they’d all go online, and they could share and work together on a spreadsheet and all that. So they spent three years building collaborative document editing online. A whole brand new big version of Excel. Meanwhile, they only had two guys across all of their thousands of people working on Excel, just fixing bugs in the old version, as kind of a hedge, just in case. Turns out though, that in like 2001, 2002, the supporting technology wasn’t all there to make an online service of the caliber of Google Docs. And so they realized about six months before launch, this thing’s not gonna work. It ain’t gonna fly. And so they pulled the entire thing. They just pulled the plug. And they had this oh shit moment where they said, “What do we do now? We promised everyone an Excel 2003, and it’s not here.”

They took those two guys who’d been fixing bugs for all three years, added a little bit of visual polish on top of what they’d done, and released that as the new version of Excel. Excel 2003. And that was the most successful Office launch they’d ever done. People loved it. And they loved it because it was stable, nothing had really changed, and it just worked. I take that as an example of how all those big features they could’ve added might have detracted from the experience. And you see that in other Microsoft launches, certainly. Vista, I think, most painfully. I’ve seen the pattern in a different way at Optimizely, having been at this company for four years now. Four years as a product manager is enough time to sort of see the consequences of your actions. You can see the things you’ve built, and see which ones actually paid off, which ones didn’t. And I have to say, it’s never the ones that I guessed or expected. There are tiny features that we did as a hack week project in a week that are loved and used all over the place. And unfortunately there are big things that we spent a year working on, slaving over, that barely got touched. So when I look at all that, plus this opportunity cost, I just realize that our job as product managers is the profoundly important one of seeking out the things that really matter and giving up early on the ones that just aren’t gonna count.

E: Yeah, I think that is important. And experimentation plays into that, right? What you’re doing today.

J: Absolutely, yeah.

E: So let’s talk a little bit about experimentation. It’s obviously something you’re really familiar with. Why is experimentation important to product managers? And is it only important to B2C product managers?

J: Yeah. Great question. I think it’s essential. I mean that’s what I sort of realized firsthand at Microsoft. And now having kind of worked in that way, it’s hard to imagine ever going back. You think back to this idea that we could be working on 100 things in a given time, but we can only do maybe three of them. The core question we have to ask ourselves is which three do we do? Not just that, but how deep do we go? How much do we iterate on the things we did last month versus moving on to something else? And I think the best way to answer that question is through some kind of experimentation. So what experimentation lets you do is actually quantitatively measure the impact of the things you’re building. You can actually take some feature you’ve built, launch it out to half your traffic or 10% or whatever, and actually see how that changes the behavior of your users. Does it drive more retention? Does it drive more conversions? Whatever it is your product is out there to do, your feature should in theory be driving more of that thing, or less of that thing, whatever your goal is. And you want to actually know if that’s true.
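
To make that concrete, the readout from a split like that boils down to comparing two conversion rates and asking whether the gap is bigger than noise. Here is a minimal sketch in TypeScript, with invented traffic numbers and no particular vendor's API:

```typescript
// A two-proportion z-test: did the treatment group convert at a genuinely
// different rate than control? (Illustrative sketch; numbers are made up.)
function twoProportionZTest(
  convA: number, usersA: number, // control: conversions, users
  convB: number, usersB: number  // treatment: conversions, users
): { lift: number; z: number } {
  const pA = convA / usersA;
  const pB = convB / usersB;
  // Pooled rate under the null hypothesis that the feature changed nothing
  const pPool = (convA + convB) / (usersA + usersB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / usersA + 1 / usersB));
  return { lift: pB - pA, z: (pB - pA) / se };
}

const { lift, z } = twoProportionZTest(500, 10_000, 565, 10_000);
// |z| > 1.96 corresponds to p < 0.05 on a two-sided test
console.log(`lift=${(lift * 100).toFixed(2)}pp significant=${Math.abs(z) > 1.96}`);
```

The same arithmetic also explains the B2C/B2B traffic contrast that comes up later in the conversation: the standard error shrinks with the square root of the sample size, so detecting a tiny lift requires a very large audience.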

The other thing it lets you do is it lets you iterate on a product after launch. I’m kind of shocked how often we all, and this includes me, still fall into the same pattern of working really hard on some product launch, spending six, nine, 12 months on it, launching it, and then forgetting about it. Moving on to some other thing and doing something else. Because there’s always a roadmap, there’s the next 97 things to do. But a whole lot of the value of experimentation comes after that launch, not just at the beginning. And I don’t think any of this is really different for B2B versus B2C. I mean I think B2C has the advantage of higher traffic for sure. So they can experiment more and at larger scales. But I have to say that we’ve been seeing a lot of benefits from experimentation ourselves, even as a relatively small B2B company at Optimizely. So I think it’s universal that if you’re doing product management, you should be measuring your work and measuring the impact of your work.

E: Yeah I would 100% agree. It’s always been interesting when you see people think about, okay, success, we’ve launched this new feature. We’ve launched this new set of functionality. And then they stop and they don’t measure engagement or they don’t experiment with it to see how that feature is being accepted. How they can improve that engagement, or how they can even educate users.

J: Yeah I’m sure you guys see this at Pendo too, but I just find that so much of the benefit comes from that last 5% of the work, where you actually make your feature, whatever it was, discoverable and adopted and onboarded and understood. And it’s such a shame that we put so much work into just releasing the thing and so little into everything that comes after.

E: Yeah absolutely. I think launch is one thing, but the most important thing is engagement and actually helping your customers or users get their jobs done more efficiently, more effectively.

J: Totally. 

E: So Pendo recently published, along with Product Collective, a report called “The State of Product Leadership.” It targeted B2B software leadership, and in it they found very few respondents actually used A/B testing and product experimentation techniques. What’s your reaction to that, and does it surprise you?

J: It certainly doesn’t surprise me, but I think it’s changing fast. And I suspect if you were to run that same report in even just two or three years, you’d hear different answers, particularly from the most successful companies. What I’m seeing is that more and more companies are starting to think bigger about what experimentation means. And that’s even true of us at Optimizely. I think what we’re seeing is the shift from the idea of A/B testing as the sort of narrow technique focused on conversion rates on landing pages to experimentation as a broader mindset of how an entire business runs. And in particular, a product team. So even the teams that say they’re not doing A/B testing, I suspect if they’re effective they’re already bringing in some of these techniques. 

So one example is feature flagging, or gradual rollouts of a feature. So you spend your nine months building your thing, and then do you just launch it to 100% and hope for the best? Well that’s how a lot of teams work, but what we’re seeing more and more is that teams are doing a gradual rollout with the opportunity to roll back safely. So take my feature, roll it out to say 5% of users or to beta users, or in Facebook’s case, roll it out to New Zealand, because they’re isolated and separate. See how it performs in that world, measure, whether qualitatively or quantitatively, the reaction to that thing, and adjust your behavior off of that. So we’ll call that a good form of experimentation. And in general, I think teams should think more about what other kinds of experiments they could run. One of the most effective ones that I see is what’s called a painted door experiment. This is where before you spend the nine months building something, you make the simplest, lightest possible mockup of it. And in fact in extreme form, all you do is you stick a button in your UI as if you built the thing, like new widget or whatever it is, and just wait to see who clicks that button. And when they click it, because you haven’t actually built the thing yet, just make a popup that says, “Hey, we’re thinking of building this feature. We’d like to talk to you about it.” And do an interview with them. I think especially in B2B where you can talk to so many of your customers, it’s a super valuable exercise. And it’s been really eye-opening for us.
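
In its extreme form, the painted door Jon describes is only a few lines of code: a button for a feature that doesn't exist yet, a logged click, and an interview prompt. A rough sketch, where the feature name and the /api/events endpoint are hypothetical:

```typescript
// Painted-door sketch: the "feature" is just a button plus a prompt.
// The feature name and the /api/events endpoint are invented for illustration.
const FEATURE = "bulk-export";

function renderPaintedDoor(container: HTMLElement): void {
  const button = document.createElement("button");
  button.textContent = "Bulk export";
  button.addEventListener("click", () => {
    // Log the click so interest can be compared across candidate features
    navigator.sendBeacon("/api/events", JSON.stringify({
      event: "painted_door_click",
      feature: FEATURE,
      ts: Date.now(),
    }));
    alert("We're thinking of building this feature. We'd love to talk to you about it!");
  });
  container.appendChild(button);
}
```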

We had two different features we were considering building at Optimizely, and so we ran painted doors for both of them, over the same time period to the same number of people. And in that time period, one of them got something like 65 people to click on it, and the other got three. So it’s not the be-all and end-all of these things, but that gives us some indicator that if we were to spend all that time building these things, and they would each be year-long projects or more, we might get radically different value out of those two things.

Just another example of this is adoption promotion. The kind of things you guys focus on at Pendo. There’s so many kinds of experimentation that B2B leaders find value in here. And then finally the other place where I think teams are probably underrating experimentation, or maybe just not even thinking of it as A/B testing, is on more of the engineering side. So I see developers constantly coming up with multiple ideas for how to solve a problem. Multiple different services that could play the same role. Different backends or layouts of a backend. Even different algorithms with different parameters. And quite often what we do sort of by habit is that we pick just one of those things and we launch it upfront. But more and more what I think you’ll see is that teams will actually take those ideas, and they’ll take two or three or four of them and actually test them against each other and see how they perform. And be consistently surprised by the results of doing so.

E: And I like the painted door story. I mean that’s a great way to avoid your quote from above, where software engineering can be a waste of time or a lot of it. Right? Because you can deploy a painted door quite easily. And then you can gauge results and use that to prioritize engineering resources, right?

J: Totally.

E: Just another mechanism.

J: I so wish I could go back in time and do painted doors for some of the things we spent six months on. I think of the time it would have saved me, my god.

E: Absolutely. I can think of tons of stuff in my past experience too where it would’ve been great. So let’s talk a little bit about why experimentation is important for B2B companies, not only B2C. I think you touched on some of it, but let’s expound on that a little more.

J: Yeah, happy to. So like I said before, I’ll acknowledge that it can be harder in B2B. In B2C, you have this wealth of traffic, right? Like at Bing, we had over 100 million visitors every month. And so we could do a test on 1% of our population and still get significance on the tiniest of changes really quickly. I would do tests like changing a font size from 13 point to 14 point in our UI. And see that it drove a 0.2% increase in queries per user. And that was valuable to me. That kind of test doesn’t work in B2B, unless you are a truly massive-scale B2B company. But at the same time, the same problems that I brought up for B2C are all still there. There’s this risk of wasting time, and also this sort of downside risk of a bad launch. In B2B it’s even more costly to alienate your users. Just screwing up one account could cost you hundreds of thousands or millions of dollars, depending on your scale. And so this is something that I think B2B companies are still learning and exploring how to do. And I’ll be honest, even at Optimizely, which is a company built around experimentation, we were bad at dogfooding our own product for a long time. For myself as a product manager, I even thought, hey, I can just talk to people. I don’t need to run an experiment. I can just go interview them. And that’s true, but it’s not an either/or. So user research, one-to-one interviews are incredibly valuable. But in the last year or so, I’ve seen a bunch of cases where we’ve run these simple, easy experiments like those painted doors, and actually gotten a surprising amount of value out of them.

I think some of the biggest areas have been in this realm of feature discovery and adoption. So B2B products are often fairly complicated. They have a lot of different features and functionality in them. Some of it’s advanced, some of it’s simple. You sometimes want to hide the advanced stuff early on for a new user and vice versa, you want your existing users to find more so they get more value. And so we’ve been playing around a lot with moving things around in our UI. Drawing more attention to things. Just that simple popup or that new badge that draws attention to a feature in the UI can make all the difference to adoption. And that difference in adoption can lead directly to retention or growth of an account.

The other place where I see this really in B2B is in the whole user acquisition side. So in marketing and growth type uses. I mean I think A/B testing is already very well established for B2B marketing teams. Really every B2B marketing team worth its salt is testing things like landing pages and signup forms and all of that. But just as much in the product side, I think what we’re starting to see is the idea of product-led growth. Or product-qualified leads. This idea that you create some kind of free or self-serve model for your B2B product. Let people come in and explore it and try it out. And then see how many of those you can actually convert into paying users. And that conversion rate is a prime goal to be experimenting on. And again, there’s simple things that you can try. Are there certain features you can highlight? Are there emails you can send them along the way to nurture their discovery? Are there things you can do in your help docs that actually help them understand and get value? If you spend three months building a new onboarding flow, did that work or pay off? Did it actually increase that rate?

And what we see is the companies that can move that rate, even from say 5% to 7%, can transform the whole arc of their business. It changes how much they can afford to spend on acquiring new leads for that thing. They can run more ads. And eventually they can change their entire model. You’re starting to see companies like HubSpot switching from this very sales-led model to more of a freemium, trial-led model because they’re able to prove the model works. I think Dropbox has actually seen the same thing on their business side, where by having this incremental growth approach of getting more and more people into trials, getting that initial value into a sale, they’re able to drive a huge amount of growth over there.
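
To see why a two-point swing is transformative, it helps to run the arithmetic. The $100-per-signup cost below is an invented figure; only the ratio matters:

```latex
\text{cost per customer} = \frac{\text{cost per signup}}{\text{conversion rate}}
\qquad
\frac{\$100}{0.05} = \$2{,}000
\quad\longrightarrow\quad
\frac{\$100}{0.07} \approx \$1{,}429
```

Going from 5% to 7% cuts the effective acquisition cost per paying customer by roughly 29%, which is exactly the headroom that lets a company bid more per lead, run more ads, or fund a freemium motion.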

E: So those are all great cases of experimentation gone right, or goals for experimentation. I’m sure in your time, both as a product manager and more specifically at Optimizely, you’ve seen where experimentation can go wrong. Can you talk to us a little bit about that?

J: Oh yeah. All too many times. The way I like to think about it is experimentation is this very powerful tool that can easily be misused. I almost compare it to an antibiotic or an opioid or something. Something that’s very powerful for a certain case, but very risky or problematic if used in the wrong direction. I think the best example of this that I saw was actually when I was at Bing. Where we were building this culture of experimentation really from scratch. We went over a period of four years from maybe five or ten experiments a week to 400 a week. And at that scale, the entire division is experimenting all the time, and those experiments are driving a huge amount of behavior from the bottom level of the organization to the executives. So when you do that, you have to make sure that certain parts of your experimentation culture are really ticking effectively.

Probably the most fundamental one is what is the metric that you’re actually optimizing for? And is it the right metric for whatever it is you’re trying to achieve? And at Bing, we actually had a lot of challenges setting that metric. Early on, we said, we are trying to compete with Google. We want people to actually stick with Bing. We had a phenomenon where people would download … they’d get a new computer, they’d open it up, they’d see Bing, and their very first search would be “how do I switch over to Google?”

And so we had a north-star metric of getting people to do more of their searches on Bing and not leave us for Google right away. And every experiment we ran was geared towards that metric. We quantified it as “queries per unique user.” So essentially, how many searches were we getting? On the theory that every search we got was a search that Google didn’t. And also on the theory that it would eventually drive ad revenue, because we could show more ads based on those queries.

And so we spent years trying to get people to just do more searching. Come back more often, search more often, refine your search, tweak your search, filter your search. Anything that would get us more queries. And that actually ended up leading us astray a bit. I remember a couple years into working at Bing, I saw a quote from an executive at Google where they said something along the lines of, we’re the only company on Earth that tries to get rid of our users as quickly as possible. And that struck me because, you know, stated as a metric, what they’re trying to do is reduce the number of times a user has to search. Reduce the time on the site. We were doing the exact opposite. And so if you looked at Bing’s search versus Google’s search, these different metrics had actually led us down divergent paths. At Google, it was all about the links and the results and getting you out of there as quick as possible. Whereas Bing was all about the searching itself. We had a bigger search box. We had more ways to change your search. We didn’t necessarily speed you to getting that thing you were looking for.

And so what happened was that a seemingly good-sounding metric had actually led us down the wrong path. And we actually ended up reversing our metric partway through my time there. So we actually switched from trying to increase the number of queries per user to breaking that into two pieces. So we decided we should actually be reducing the number of queries per session. Because the idea is when that number’s low, we’re actually getting you to find what you’re looking for faster. But also we wanted to increase the number of sessions per user. So the idea was to have more frequent, quicker sessions rather than these long, long trees of searches that our old metric had been encouraging.
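
Written out, the change is just a factoring of the old north-star metric into two pieces that get pushed in opposite directions:

```latex
\frac{\text{queries}}{\text{user}}
= \frac{\text{queries}}{\text{session}} \times \frac{\text{sessions}}{\text{user}}
```

The old metric maximized the left-hand side indiscriminately; the new goals minimized the first factor (find it faster) while maximizing the second (come back more often).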

I think that’s just one example of how this well-meaning process can actually lead you down the wrong path. Another way of thinking of it is that experimentation is this very powerful tool for getting more of something. But if what you’re measuring is a bad behavior, you end up inducing more bad behavior and not more good.

E: So let’s talk a little bit about how product managers should think about integrating experimentation into either their websites or their product offerings today.

J: Yeah absolutely. So even despite that caveat that experimentation can lead you astray, it’s also really powerful. And so I would encourage teams to find more ways to bring experimentation into their day-to-day process and really entrench it. Some of the things I’ve seen there that work really well are incorporating experiment ideas into, for example, a PRD. So as you’re defining product requirements, make sure that every product manager asks the question, “what experiments have we already run that justify building this thing, and what experiments could we run as we build this to either validate the idea or validate the technology or iterate on this feature after we launch?”

And similarly, I would encourage setting goals around experimentation. So we’ve actually found that one of the most effective things you can do is just pick a number of experiments you want to run across your entire team. And this number could range a lot. So many teams when they start, they’re running one experiment a month. It’s a sporadic, here-and-there kind of thing. Take the opposite extreme of companies like booking.com or Microsoft. They’re running thousands of experiments a year. In fact, I think both Booking and Microsoft are running over a thousand concurrent experiments at once. So at any given time, there are a thousand different tests that are running, all creating different versions of that site.

I don’t think everybody can afford to run that many tests, from either a traffic or a work perspective. You can think of it in terms of how many developers you have. So think about what would it take for us to run one experiment per developer per quarter. What would that look like? And again, take an expansive definition of experimentation. Another way of looking at it is take the number of experiments that you ran in the last year and try doubling that next year. And ask yourself, what would it take to get to that goal? Sometimes it’s as simple as bringing more teams on. Sometimes it’s about building a cultural practice through things like experiment reviews or idea brainstorms of tests you could run. Sometimes it’s also about being able to run more ambitious kinds of tests.

So one of the things I see kind of in the industry is that there’s these two different approaches to testing that are out there. On the one hand, there’s what’s called client-side experimentation. This is where you essentially have some kind of JavaScript code running on your site that’s modifying the DOM of your webpage or whatever it is in real time. This is Optimizely’s first approach to experimentation. This is how a lot of other tools in the marketing space run. It’s great for a non-technical team to make visual changes.
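
A bare-bones sketch of that client-side pattern: a coin flip kept sticky in a cookie, then a DOM edit for the treatment group. The cookie name and headline copy are invented, and this is not Optimizely's actual snippet:

```typescript
// Remember the visitor's variation in a cookie so it stays sticky
// across page views, then modify the DOM in place for the treatment.
function getVariation(): "control" | "treatment" {
  const match = document.cookie.match(/exp_hero=(control|treatment)/);
  if (match) return match[1] as "control" | "treatment";
  const v = Math.random() < 0.5 ? "control" : "treatment";
  document.cookie = `exp_hero=${v}; path=/; max-age=${60 * 60 * 24 * 90}`;
  return v;
}

if (getVariation() === "treatment") {
  // The kind of visual change a non-technical team might test
  const headline = document.querySelector<HTMLElement>("h1.hero");
  if (headline) headline.textContent = "Start your free trial today";
}
```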

On the flip side, there’s what’s called server-side experimentation. This is where you’ve got some kind of code running in your backend that’s actually running if statements powered by data. So they’re flipping a coin in the backend and saying, “If you’re in variation A, then run some code path. If you’re in variation B, then run some other code path.” And run different logic in each of those cases. And this is something that’s very geared towards development teams. And for deep tests all through the stack. Things like pricing changes or algorithm changes or whatever it might be.
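
In code, the server-side version is deterministic bucketing plus exactly the if statement Jon describes. This sketch hashes the user ID so assignment stays stable without storing state; the experiment name and ranker functions are hypothetical stand-ins, not any real API:

```typescript
import { createHash } from "crypto";

// Hash user ID + experiment name so the same user always lands in the
// same variation, with no assignment state to store anywhere.
function bucket(userId: string, experiment: string, variations: string[]): string {
  const digest = createHash("md5").update(`${experiment}:${userId}`).digest();
  return variations[digest.readUInt32BE(0) % variations.length];
}

// Hypothetical stand-ins for two competing implementations
const legacyRanker = (q: string) => [`legacy results for "${q}"`];
const candidateRanker = (q: string) => [`candidate results for "${q}"`];

// "If you're in variation A, run some code path; variation B, another."
function getSearchResults(userId: string, query: string): string[] {
  return bucket(userId, "ranking-algo-v2", ["A", "B"]) === "A"
    ? legacyRanker(query)
    : candidateRanker(query);
}
```

Because the hash is deterministic, the same user sees the same code path on every request, which is also what makes a gradual percentage rollout or a rollback safe to reason about.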

And what I think is interesting is that most teams will do just one of these approaches or the other. Which means they’ll miss out. So very developer-heavy teams will go the server-side route and then miss the chance to do these simple, lightweight adoption tests. Or their marketing team will be stuck. And vice versa, teams that come from that marketing growth mindset often only do client-side testing, but they’re missing out on the ability to do things like feature flagging and staged rollouts.

And so at Optimizely, we’ve been focused a lot on trying to find the best of both worlds and offer both of these options. And so I would encourage everybody to think of ways to incorporate all these different testing techniques into their process.

E: You’ve been at Optimizely for four years now. What have you learned from seeing a company grow and change over that time span?

J: So much. It’s crazy. When I joined, the company was about 150 people. We’re at 400 now. We’ve had plenty of seesawing around along the way. In some ways, it feels less like four years at one company than like four different companies for a year each. Because things just change so much in that kind of growth. We’ve gone from being this small-business-focused A/B testing tool more for marketers and growth teams to really trying to move upmarket and go after the enterprise, which has very different implications for the kind of product you build and the way you build product. And also trying to move beyond trying to reach marketing teams to find product managers and developers who want to test. So reaching a different kind of customer, a different target user. It’s a lot of change to go through. I kind of feel like a lobster or a snake or something that sheds its skin and changes over time.

Some of the things that we’ve seen along the way are that a lot of our earlier mindset and our product values had to change along the way. So I think early on we really valued ease of use above all else. And when I say ease of use, I think specifically the way we understood that was this idea of everything should be point-and-click easy. Anybody should be able to do it. But I think over time we realized that the market’s gotten savvier, and that savvy users don’t always want ease of use out of the box. They want control and configurability and predictability. So we’ve actually changed a lot of our strategy to favor first-try ease of use less and look more at extensibility. How do we let anybody configure our product to suit their exact use cases? The way one of our product managers put it once was, “What’s easier, driving a car or flying a 747? Well definitely the car. But which one’s more valuable? Definitely the 747.” In particular because you have a pilot who learns to master that thing and take full control of the craft. And so we certainly haven’t gone all the way to the 747 side of the spectrum. But I think what we have seen is there’s a lot of value in thinking through how a different target user should change the way we build our product.

I think the other one that I’ve seen in four years is just a deep appreciation of technical debt. This is something that developers are always talking about: “It’s a lot of tech debt. It’s a lot of tech debt.” But only when you’ve been somewhere for several years can you really feel in your bones what it means to have debt. Not just technical debt, but product debt, design debt. I think the metaphor is actually very apt. Debt is actually a good thing sometimes. It often makes sense early on to borrow against the future when you’ll be bigger and stronger. So it sometimes makes sense to take shortcuts here and there. But I’ve now exquisitely felt what it means to live with those shortcuts and see how a thing that would have taken us a week to change long ago now takes us six months to change, just because of the scale and the sensitivity of where our systems are now.

E: Yeah absolutely. I’ve had a lot of conversations lately about technical debt, its existence, and how much it affects software products. That could be a whole podcast I think.

J: Absolutely.

E: So you talked about trends for the future too. What do you see in the next few years that’ll affect the craft of product management?

J: Yeah it’s a great question. I mean the biggest one that I see, that I feel us just all living with, is just the fact of engineering scarcity. So it kind of points back to the earlier idea. But our whole job as product people is to prioritize. To try to get more done with less. And the “less” side is getting even truer. It’s getting harder and harder to find developers. There’s more and more companies fighting for the same people. And that’s not changing quickly, even with things like boot camps. Demand is just wildly outstripping supply. And so I think the job of the product manager, which is itself fairly new and emerging, will only get harder, will only feel more pressure to pick the right things to work on. And I think in particular we feel more and more pressure to justify our work to the rest of the business. I think more and more often you’ll see companies asking their development teams, what was the payoff of this investment? Or you want to hire five more engineers? Well that’ll cost us a million dollars. What are we getting for that? And that’s where techniques like experimentation I think will go from nice-to-haves to must-haves as people realize that unless you can somehow quantitatively measure this, you’re gonna have some very uncomfortable board meetings.

Another trend that I kind of see is redefining what it means to be agile. So the trend of waterfall to agile is not by any means new. We’ve been talking about it forever. But what I see in practice is a word I just learned recently, which is wagile. So teams that want to be agile and take on some of the cosmetic trappings of agile, like a daily standup and a meeting called scrum or whatever it is. But they kind of miss the soul of agile. What does it really mean to be agile? And to me, the essence of agile is constantly incorporating moments where you can learn and adapt your plans as you go. I think the line from the agile manifesto is “responding to change over following a plan.”

And I think a lot of teams are talking the talk about agile but not really walking the walk. And again, the way to do that is finding ways to really validate ideas early on, and being pretty aggressive about giving up on things that aren’t working or changing plans right away.

Just the last one I see in terms of trends is I’m starting to see just this whole field of product management grow all over the place. And in particular outside of Silicon Valley. I think it’s a field that has its roots on the West Coast, but it’s becoming something in the whole world. I was actually in Australia last week presenting at a conference on product management in Sydney, and it was amazing meeting some people there who had just started in the field of product management. They basically told me, this field didn’t exist even one or two years ago, so where do we even start? How do we do this? And I think that’s actually really exciting for all of us who are in product. I mean for one thing, it’s just an exciting career opportunity. But it also means that we can be the leaders. We can train others and mentor others, because a lot more people are coming into this field of product management.

E: Awesome. I think that was a great summary of some key trends I see coming up too in the next few years. So let’s change the direction of the conversation a little bit and talk about you.

J: Sure.

E: So do you have a favorite software product? And why is it your favorite?

J: Gosh. It’s hard to choose. But I think if I had to pick one favorite, it would be Google Maps. Google Maps, especially on my phone, is the software product I use that feels the most like magic. It’s something I use where I’m just like wow. I can’t believe this works as well as it does. And I generally feel proud to work in technology when I use it. And that’s partly because I have a background in search, and I saw firsthand how hard these search problems are. There’s this massive data-gathering problem, and you take the scale of the entire planet, that’s pretty big. There’s these hard machine learning algorithm problems, like how do you navigate between two points anywhere on Earth with traffic and all of that. And there are also UX nightmares, right? You have to work across all these different devices and use cases, present this map at all different scales. And I just think Google Maps has done an amazing job of solving so many of these things.

The other thing I like about it is I think when we think about these topics like experimentation, there’s this sort of dichotomy that comes up, where it feels like you can either be this data-driven organization that’s all about this relentless micro-optimization, or you can be this bold design-led visionary company that makes big bets. Think of like an Apple as the best example of this.

What I think is interesting about Google Maps is they’ve managed to do both in this case. So on the one hand you’ve got this enormous wealth of data, and it is relentlessly optimized. You know those guys are experimenting all the way through. But they’ve also made radical UX changes and made big bets. I mean the biggest bet of all being, hey, what if we took cars and drove them on every street in the world and took pictures of everything, and somehow later that turned into a useful street view in a map? Oh, and also, by the way, powering self-driving cars. I mean there’s radical thinking behind this thing. There’s planning years out paired with optimization. So it’s experimentation at both this micro level of tuning an algorithm, but also this macro level of taking big bets and validating them in the world. It’s like you can do this incremental optimization. You can also take these huge risks. You can have the data, and you can have the soul. That’s why I love Google Maps.

E: Yeah I think that is a great product and a product a lot of us use on a daily basis.

J: Totally.

E: So if you were gonna go back and impart some words of wisdom to others in product leadership, what would they be? What would be your personal words of wisdom for people starting out in product leadership?

J: Totally. One of the biggest ones that I’ve learned, often painfully, is just the importance of transparency. And by transparency, I mean making decisions and tracking work out in the open where everyone can see it. And wherever possible I mean that literally. Like at Optimizely, what we’ve changed over the last four years is doing a lot more stuff on paper and post-it notes on boards all around our office where anybody can see it. It’s interesting to see how that’s kind of worked for us. It’s something that certainly wasn’t part of our culture at Microsoft in the same way. When I moved to Optimizely, even things like open calendars surprised me. But this kind of ethos of transparency has just created all these interesting opportunities. Right? It’s gotten teams motivated in ways I haven’t seen before by really understanding the root problems.

It’s also helped surface these really prosaic, daily challenges. Like we’ve learned how bad it is to have too much work in progress by making a note card for every single feature you’re working on and sticking it on a board and seeing how they just move or don’t move across that wall. Making these things visual and visceral can make a huge difference in tracking work.

Another one that I’ve learned, particularly working in B2B, is what it really means to have customer empathy. I think when I came from B2C, I thought I knew what that meant. I thought it meant looking at some analytics charts and occasionally doing like a focus group with some users. But I was in for a rude surprise when I worked in B2B and saw what it really meant to be empathetic. There’s kind of a pretend empathy you can have where you can state your customer’s problems. And then there’s really sitting there in their world. Again literally if you can. Actually finding ways to go out and meet your customers and talk to them as much as you can. Wherever possible, I try to leave the office for a week at a time and just go meet customers. See what they do. See what challenges they have. Watch them use Optimizely and see where they get stuck.

I’ll honestly admit that when I first started in B2B, I was kind of afraid of customers. The idea of doing a customer interview or meeting kind of sent a chill down my spine. I was afraid I’d screw something up with them. I was afraid they’d ask a question I didn’t know the answer to. But now that I’ve gotten the habit of it, I really can’t imagine any other way of working than constantly having those in-person interactions.

And actually the last thing just related to that is I’ve also learned that while you have to have that customer contact, you have to be data-driven, you can’t just be driven from the outside. I would tell anybody, don’t lose sight of your vision and your conviction. Especially if you’re a founder, or if you’re just a product manager with a lot of passion, don’t lose sight of what it is that you set out to achieve in the first place. Your customers can give you feedback, your data can guide you in a micro way. It’s easy to get pulled along by that stuff, especially by your data and your metrics. But they should serve you and not the other way around. So always think through what is your big bold vision? What is the thing you’re trying to achieve? And how can you use all these tactics like user interviews and A/B testing to help you get there?

E: So one final question for you today. Question I ask all my guests. Three words to describe yourself.

J: Oh boy. Well one word I would use is just curious. I try to be relentlessly curious about things. Even if I don’t have a particular reason to use that knowledge. So one of the reasons I’ve actually really enjoyed working in B2B is that I just get to see how every part of the company ticks. Especially at a smaller, mid-size company. So I spend a lot of time just hanging out with salespeople, trying to figure out “how do you guys do your job?” Listening in on support calls, like what are people actually calling us about? How does marketing work? What does it mean to do a PPC campaign? These are all things I didn’t know before I started, and I can’t say they specifically drove any particular product question I had, but I found it so valuable to just carve out time to learn and see how everyone works. Particularly through customers.

Another that I think is related is skepticism. So I consider myself very skeptical. I’m always giving people this question-mark face when they tell me things. I try to accept very little at face value. Which isn’t to say that I wouldn’t trust someone, but I’m especially skeptical of things like data and dates. So I’ve found it’s very easy to lie with a chart. It’s very easy to take a misleading quote out of context. And it’s very easy to be overly optimistic when setting a date or a deadline. I kind of assume that everything is too good to be true, and always try to dig deeper and understand the real why behind something. Why did this thing break? What went wrong? What else could go wrong? How do we prevent it?

And then the last one, which I try to counterbalance that first one with, is optimistic. So I really try to be optimistic in my job, and I try to radiate that optimism to everyone I work with. I think we have an interesting job as product leaders where we’re in this tough position between being the cheerleaders of our product, but also its biggest critics. We have to find a way to do both of those roles. And so if you can perfect that balance of skeptical optimism, like it’ll be okay but we have to work harder to get it done, if you can be the rock for your team through emergencies, if you can paint that exciting vision, but also kind of foresee everything that’ll go wrong, and if you can realize that optimism on its own won’t get you there, but optimism plus really, really hard work and skeptical digging will, then I think you can achieve a lot of success.

E: Awesome. Well, thank you. This has been a blast. Loved it.

J: I had a great time too. Thanks for having me. This was great.

E: Thanks Jon.