The “Full-Stack Startup” and Jiro’s Dreams of Sushi

Five years ago, Chris Dixon coined the term “Full-Stack Startup” to describe a new wave of companies, like Uber, looking to upend entire industries by building a new, vertically integrated stack.

The basic idea is this. Traditionally, as startup founders we see ourselves as toolmakers, because we build software and that’s what software is best suited for. If we thought the experience of hailing taxis was broken, we’d build better taxi dispatch software and sell it to taxi companies. Software can solve the dispatch problem elegantly because it’s relatively closed-ended. Running a taxi company, on the other hand, seemed like a terrible business, extremely under-leveraged in terms of technology.

However, Uber not only built dispatch software but also hired drivers to offer rides, allowing it to control the entire production function and eat the taxi industry altogether. That’s a much more expansive role for software, but a truly exciting one, because the experience is much more magical when it works.

Since then, venture capital has poured into all kinds of full-stack startups: Opendoor, Compass, WeWork, Shift, Triplebyte, Gigster, Pilot, Honor, Forward, Atrium, just to name a few. At the same time, we’ve seen spectacular failures like Homejoy, Sprig, Munchery, Luxe, HomeHero, and, to a lesser extent, Zenefits and AltSchool. What explains the difference in outcomes?

I think the two most important questions to ask are 1) how variable your customers’ expectations are, and 2) to what extent software can help deliver on those expectations.

A lesson from Jiro

Jiro Ono, the software of sushi-making

Jiro Ono is a three-Michelin-starred sushi chef in Japan and the subject of the documentary Jiro Dreams of Sushi. At 85, Jiro has mastered every facet of his craft. From knowing the perfect length of time to massage an octopus (40 minutes), to developing a technique to preserve sushi rice at its optimal temperature (body temperature), to only serving each ingredient at “its ideal moment of deliciousness,” Jiro has explored, refined, and practiced every painstaking detail to perfection. In a way, Jiro has done to sushi-making what software has done to many human tasks: eliminated variability and refined the quality of the output.

However, here’s the kicker: three-Michelin-starred Sukiyabashi Jiro has only 4 out of 5 stars after 71 reviews on Yelp. The problem? Even though Jiro has the best sushi-making software, he’s in the full-stack restaurant business, where software alone does not provide enough leverage to deliver a consistently positive customer experience.

Some of the two-star reviews on Yelp

First, Jiro’s customers come with a wide variety of expectations beyond great sushi. Some expect a certain level of service for the price while others care more about comfort and ambience. Some may even be looking for the meaning of life in Jiro’s sushi. Obviously, Jiro promises none of these things, but customers expect them nonetheless.

Second, even if Jiro has the most refined process for making sushi, customers are eating the sushi, not the process, and sushi tastes are highly subjective[1]. So in a way, Jiro’s software failed to deliver against even the singular goal of great sushi.

Traditional startups sell sushi-making software. Full-stack startups operate restaurants. Operating a full-stack startup, you live and die by your ability to manage your customers’ expectations while consistently delivering against those expectations with the leverage of software. It sounds basic, but anyone in the service industry will tell you it’s hard to execute on, let alone at scale.

The bane of variable customer expectations and why services offer “Free Consultation”

When I built Crowdbooster, a social marketing software-as-a-service startup, we would often talk about “landing pages,” because our customers knew roughly what they wanted, and a landing page together with a free trial was mostly sufficient to help them figure out whether Crowdbooster was right for them.

Crowdbooster’s landing page. That screenshot of the product was worth a thousand words.

My second startup, Upbeat, was a full-stack, tech-enabled public relations agency. Our product was not something you used, but a service to help you garner media coverage. Our customers did not know how public relations worked nor did they care to. All they knew was that they desired media coverage, and they paid us to help achieve that outcome. However, even when we delivered great media coverage, some of our customers were still dissatisfied.

The problem is that full-stack customers don’t really know what they want beyond the fact that they have a problem; only when the outcome is delivered do they begin to realize what they were looking for. This is why consultants have offered “free consultations” for ages: it’s the service industry’s equivalent of a free trial. The free consultation is an opportunity to explore the nature of the customer’s problem, educate them on what to look for, and clarify what they can and cannot expect from an engagement. Many full-stack startups like Honor, Atrium, and Pilot take this approach and force you to talk with an expert agent during the sign-up process.

However, a free consultation, like any human conversation, is a lossy process at best. To avoid dealing with the fickleness of humans, you can instead choose a more bounded problem by constraining your customer segment to only the customers you know you can deliver for (as long as it doesn’t constrain your market long-term). This is like running a fast-food chain as opposed to Jiro’s restaurant. For example, Opendoor targets only customers who want to sell their home fast (and who fit its many other criteria). If the customer is not in a rush, or they would prefer to be serviced by a real estate agent for the experience or to feel like they got the best price, then they are not a fit for Opendoor.

If you qualify for an Opendoor offer on your house, they still require a “review” with a human because it helps align expectations

How much leverage can you get from software?

Even assuming you’ve figured out how to manage your customers’ expectations and filter for the right segment, a full-stack startup still has to consistently deliver a great customer experience with a production function it doesn’t fully control. Uber, for example, went as far as calling human drivers its existential dependency and the final barrier to a perfectly controlled customer experience, despite having built one of the most successful marketplaces in the history of startups. Traditional marketplace tactics like user ratings, apps to manage workers, offering different levels of service to different customer segments, insurance, etc. will eventually be insufficient for Uber, because when you sell the outcome of a ride, any problem caused by drivers along the way is your fault, so you’d ultimately want to subsume that variable.

For a better framework on how to properly leverage software to tackle full-stack opportunities, I’d send you to Andrew Chen’s brilliant essay, “What’s next for marketplace startups? Reinventing the $10 trillion service economy, that’s what.” Notice in his essay that as you move fuller-stack, the leverage you gain from software begins to diminish. This is something to watch out for, and you can use the strategies in his essay to mitigate it.

From Andrew Chen’s essay

As Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.” To me, full-stack startups are the ultimate magical feat, especially when you can appreciate the complexity of their production functions. As software continues to “eat the world,” full-stack startups will become more of the norm. I’d love to see more discussion from operators about how full-stack startups can improve their odds of success. Let’s continue the discussion in the comments below or with me on Twitter @rickyyean.

[1] The documentary probably did more to align customer expectations and their subjective tastes in Jiro’s favor than anything else he’s done.

What will it take for us to trust Facebook again?

Are we about to see a Facebook bank run?

Building trust is hard because trust has to be built incrementally, and it only takes one (perceived) mistake to undo years of trust-building. Facebook is very publicly dealing with this issue today, but the problem it faces goes beyond locking down developer access to user data and investigating potential offenders. Facebook has to design for consistently trustworthy user experiences and roll back initiatives that have created mistrust.

Let’s look at banks, for example. It’s mind-blowing how we almost never question whether banks are trustworthy, because giving banks our money is just what we do as Americans. Banks have so much of our trust that no one really cares that they reinvest our money instead of keeping it in a vault somewhere.

The way banks built up our trust is by meeting our expectations every single time over the course of decades, but even then, they are always teetering on the edge of losing it. Every time we withdraw money and get it, every time we check our balance and it checks out, and every time we send a payment and it’s received, the banks earn our trust. However, the slightest hint of failure (even a perceived one) will immediately cause us to panic and explore moving our money somewhere else. The last real bank run was almost 90 years ago, but the financial crisis of 2007 was enough to dramatically lower the percentage of Americans with strong trust in banks, from 41% to 27% today (Gallup). This is in spite of government regulation, FDIC insurance, the Federal Reserve, and other trust infrastructure put in place to help. It takes decades to build trust, and you can lose it in a second.

Technology companies typically do not have to deal with bank-level trust problems because they are much less visible than banks. Google and Apple have all of our browsing data through Chrome and Safari, as well as our contacts and text messages through Android and iCloud. AT&T and Comcast are ISPs, so they can monitor our entire Internet activity. These companies have far more information about us than Facebook, yet they’re not being asked to testify in front of Congress like Facebook is. Facebook’s real problem is that it is the most visible technology company, and that visibility breeds mistrust.

We very explicitly and visibly gave Facebook our data, just like we give bank tellers our money

Facebook has a high bar to clear because, from the first time we signed up, we overtly handed over our personal information for the purpose of using it to express ourselves and connect with our social circle. We typed in every piece of personal information and essentially told Facebook, “I am giving you my information now, please treat it with respect.” This is a very explicit act, and it comes with expectations that we don’t place on other technology services. Google, for example, has built a profile of me in the background that is arguably more extensive than Facebook’s, but it has never asked me explicitly for it. Everything Google did happened behind the scenes, with every search, every visit to an AdSense-powered website, and every time I used Chrome.

Facebook as a product has also evolved dramatically since its founding in 2004. When I gave Facebook my personal information, I never thought it would be used to log into apps on my phone. The first time I accepted a friend request, I did not expect my thoughts to be algorithmically delivered to that friend. This is like depositing your money in a bank, only to discover later that it was being used to…uh, bet on sports at a casino? In order to restore our trust, Facebook needs to create more consistency between expectations and reality.

Facebook relies on our data too heavily and too obviously in creating an engaging experience on Facebook

When we use Facebook, we are constantly reminded of the personal information we gave them because the entire user experience is predicated on our social graph and our interests. We see what our friends are liking and sharing. If we interact with a post from someone, Facebook reinforces our “friendship” with that person by showing us more posts from them in our feed. The makeup of our feed changes so readily with our interaction patterns that we all understand at some level that Facebook is tailoring the experience very aggressively. On one hand that leads to a feeling of control, but on the other hand, it makes Facebook’s targeting prowess way too obvious, inspiring fear.

Other tech companies do this too, just less obviously. For example, when we search Google, Google will show us ads based on that search. However, the search results still feel relatively objective (even though they are personalized), and it feels like it’s happening one search at a time, so the targeting doesn’t feel compounded based on all the data Google has accumulated about us over time. Imagine if you searched for “basketball scores” and Google learned from your past search history to show you scores specifically for your favorite team, the Houston Rockets. Unless it’s clearly disclosed as location-specific or based on some other factor, it gets creepy pretty quickly when it becomes obvious that Google is keeping a close record of everything we do in its ecosystem and using it aggressively.

Facebook is omnipresent, making it seem like it is tracking our every move even when we are not using Facebook

Facebook is on the sign-up and login screens of every new app we download because of Facebook Login. Every article and video we consume comes with a Like and Share button. This is not like Google AdWords or a Gmail email address: we barely pay attention to AdWords, and we think of our email as a neutral utility. Facebook Login and the Like/Share buttons are very clearly branded, and they are proactive actions that make us think a lot more about Facebook. Even when we are not using Facebook, we are always using Facebook.

And when we do use Facebook, the rest of the Web also finds its way back into our Facebook feed. Advertisers can retarget us inside of Facebook, making it too obvious that either Facebook follows us around or Facebook is selling our name and information to advertisers so they can track us down inside Facebook. This conflates the different contexts and spaces we operate in. Is Facebook following me around? Why am I seeing ads from that website on Facebook? This makes it harder for users to feel in control, leading to anxiety.

This context-conflation also bleeds into real life. When Facebook crosses our location data with our social graph to figure out who we are hanging out with in real life, in order to show us the posts and ads it showed to the friends we hung out with, it creates the illusion that Facebook is listening in on our real-life conversations through the microphones on our phones, causing even more anxiety.

Conclusion

What Facebook is battling today is the consequence of years of trust erosion, and it’s going to take years for Facebook to restore that trust. To accomplish this, Facebook needs to 1) create more consistency between expectations and reality whenever it asks us to hand over our data, 2) dial back on aggressively using our data to create an ultra-personalized experience, and 3) reduce the cognitive dissonance from context-conflation as users move from Facebook to the non-Facebook Web to real life.