I’ve been taking the Cryptography class on Coursera recently, taught by Dan Boneh. It’s terrific… just difficult enough to be a fulfilling challenge, but easy enough that I haven’t churned (yet).
We recently studied a practical block cipher mode of operation called Cipher Block Chaining (CBC). Here’s a diagram from Quora that articulates how CBC works:
It immediately reminded me of the bitcoin blockchain diagram from Satoshi’s original bitcoin white paper:
The key relation in both images is that the output cipher from each round of encryption is fed into the input of the encryption of the subsequent round, creating a chain. It’s very elegant. I never knew the origin of this structure before… and I’m sure its roots go back beyond CBC.
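To make the chaining concrete, here’s a minimal Python sketch of CBC encryption. The `toy_encrypt` function is just a stand-in for a real block cipher (XOR-with-key is absolutely not secure); the point is the chaining structure, where each ciphertext block is XORed into the next block’s input:

```python
from itertools import cycle

BLOCK_SIZE = 4  # toy block size; real ciphers like AES use 16 bytes


def toy_encrypt(block: bytes, key: bytes) -> bytes:
    # Stand-in for a real block cipher (e.g. AES). NOT secure.
    return bytes(b ^ k for b, k in zip(block, cycle(key)))


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    assert len(plaintext) % BLOCK_SIZE == 0, "pad plaintext to a block multiple"
    ciphertext = b""
    prev = iv  # the IV plays the role of "previous ciphertext" for block 0
    for i in range(0, len(plaintext), BLOCK_SIZE):
        block = plaintext[i:i + BLOCK_SIZE]
        # The chain: each ciphertext block feeds into the next block's input.
        cipher_block = toy_encrypt(xor(block, prev), key)
        ciphertext += cipher_block
        prev = cipher_block
    return ciphertext
```

Because of the chain, two identical plaintext blocks produce different ciphertext blocks, which is exactly the weakness of naive (ECB-style) encryption that CBC fixes.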
I love moments of abstraction connection like this… this is why I take Coursera classes. They’re very academic, which doesn’t seem useful at first, but I find they make me look at my day-to-day interactions through a new lens, which spurs serendipitous moments of creative connections I would otherwise miss.
Project Hieroglyph Close to Release
In 2011 Neal Stephenson penned an essay called Innovation Starvation about our current stagnation in accomplishing big honking technical marvels. He is not alone in this worry, a classic fear often ascribed to pessimistic curmudgeons who pine for the good old days, but unlike that stereotype, Stephenson’s essay is great because it presents a path forward, led by sci-fi.
The primary reason I read sci-fi is to be inspired by what’s possible, to view through a window a compelling and convincing possible future. Stephenson takes my interest one step further by saying that it’s sci-fi authors’ responsibility to create the hieroglyphs for future innovations. Hieroglyphs? Stephenson describes them best:
Good SF supplies a plausible, fully thought-out picture of an alternate reality in which some sort of compelling innovation has taken place. A good SF universe has a coherence and internal logic that makes sense to scientists and engineers. Examples include Isaac Asimov’s robots, Robert Heinlein’s rocket ships, and William Gibson’s cyberspace. As Jim Karkanias of Microsoft Research puts it, such icons serve as hieroglyphs—simple, recognizable symbols on whose significance everyone agrees.
I agreed back in 2011 when I originally read this essay, and the concept of sci-fi as hieroglyphs has been banging around in my brain ever since. I saw a great quote tweeted out by Ian Hogarth today, quoting a blog post by Albert Wenger. He said, “[I]t is almost too easy to write a dystopia these days. The real challenge, it seems to me, is to write a new utopia.” Cue vigorous nodding in agreement; the quote reminded me of Stephenson’s essay.
So, I googled the essay, and in the process of falling down the Internet rabbit hole, I discovered that Arizona State University had partnered with Stephenson to create an organization dedicated to fostering the next generation of moon-shots through sci-fi. It’s called Project Hieroglyph, and their first anthology of fiction is being released in September. This sounds like a terrific read and I can’t wait to check it out.
Venture for America: Three Years Later
Three years ago, I blogged briefly about Venture for America (VfA), a non-profit that places top graduating college talent from across America into CEO apprenticeship roles at small businesses in declining US cities. This simple graphic from the About page really says it all:
This week, the VfA team kindly invited me down to Brown for a panel on Entrepreneurship. Participating in this event gave me an appreciation for just how far VfA has come in three years. They’ve made some big splashes, like Tony Hsieh’s $1MM commitment to VfA to help revitalize businesses in downtown Las Vegas.
But what was most striking to me was the evidence of true, organic growth, directly in line with the organization’s mission. Because the organization is three years old and the fellowship is a two-year program, the data is starting to come in. The original class of fellows are graduating from their two-year apprenticeships, and I had the privilege to hear about some of their journeys. The businesses the fellows joined were not rocketships (zero businesses are ever straight-up-and-to-the-right… despite the “overnight success” stories journalists love to write in retrospect), and the fellows had to deal with the same startup highs and lows I see founders deal with on a daily basis. All the stories I heard, positive or not, ended with lessons learned and new strengths found.
The most inspiring story I heard was of one fellow who was placed in a company in Detroit. Inspired by his experience, he’s forming his own company at the end of his fellowship, a new CPG company selling dried pasta made from chickpeas. He has hired another VfA fellow to help him build the business, and they’ll be living in a Detroit apartment building with six other VfA fellows, some of whom bought the building and will be renovating and renting the property as a business. This is the VfA mission at work, playing out as well as I could possibly imagine. I was so inspired.
So congrats to Andy Yang, Eileen Lee, Mike Tarullo, and the rest of the VfA team on all their success. It’s amazing to see what they’ve built from scratch, and I look forward to their continued success in bringing more jobs into cities where they are most needed.
Spark’s Investment in Cover
When I first used Cover, I found the user experience to be delightfully simple. I walked into the restaurant, sat down, and said I’d like to pay with Cover. Then at the end of the meal, I walked out. That’s it. That’s how simple the experience is.
It reminded me a lot of the first time I used Uber. “What do you mean I don’t pay?” I remember thinking…, “Huh, I hope that tip was included…”, and “That was easy.” Cover has that exact same “A-ha” moment. It’s so easy, easy to the point of being unexpected.
With repeat usage the initial “A-ha” moment (I can just walk out? Whoa…) fades into a second “A-ha” moment: you’re a regular. Anywhere that accepts Cover now feels like a house account. You can just “put it on my tab.” It feels flattering, empowering, and, eventually, normal. It’s in line with the adoption curve of all interesting new technology: you feel joy first, and in the long run it becomes pleasant expectation.
If you’re in SF or NYC, give it a try the next time you go out to eat. Which leads me to the next reason why I’m excited to be an investor in Cover: the restaurant list. The Cover team has done a remarkable job of partnering with an impressive group of restaurants. Adoption by restaurants such as Momofuku Ko, Carbone, and Alder (the homes of world-renowned chefs) provides a level of validation I rarely find in early-stage startups. It’s a testament to both the product experience and the persuasive hustle of Cover’s team.
Having a great product experience is an essential part of how I develop the conviction to make an investment, but it’s not everything. It’s also important that a company fits in Spark’s view of how Internet technology is evolving. In the case of Cover, mobile computing is a decade-defining technology shift, the importance of which cannot be overstated. Every information-based service that was first disrupted by the Internet is now up for grabs again in the move to mobile, plus some additional opportunities that the Internet alone never quite nailed last decade.
Payments is an opportunity that was never thoroughly transformed in the Internet shift because only now, for the first time in history, does everyone have an advanced Unix-based computer humming away in their pockets, ready to run software to make paying for anything more dynamic, and (more importantly) more enjoyable. Only recently, at this point in time, is your “wallet” smart enough to hail your cab (Uber), book your hotel room (HotelTonight), and close out your check (Cover). The transactions in each of these products are vertically organized; meaning: they are single purpose payments. Today, Cover is manically focused on its single purpose: making the restaurant payment experience amazing. From that established beachhead, there will be opportunities to move both vertically and horizontally.
I feel honored to have the opportunity to partner with Andrew Cove, Mark Egerman, and the rest of the Cover team; and I’m glad to be joining Bryce at OATV for another tour of duty (we share investments in a few other companies).
As someone who previously did data-driven product design for a living, I find this tweet perfectly captures my confusion over the public’s reaction to FB testing how emotion can be swayed by content in the newsfeed.
How is this different from what is now roughly two decades of data-driven design in web development? The designer optimizes for a goal using hypothesis testing, the same way a marketer inside Procter and Gamble has been using data-driven approaches to pricing and packaging in geographically isolated regions of the country (old school A/B testing) for nearly a century. No one would fathom asking P&G “where’s your IRB?” In fact, no one would fathom asking Google for an IRB approval for their hundreds of simultaneous live A/B tests in Gmail as recently as two weeks ago. So, why now? What unspoken ethical line did Facebook cross?
I think people are generally uncomfortable with the idea that their emotions can be swayed by social media, and are looking for some way in which this must be a violation of an existing ethical norm. But the two issues are orthogonal. The emotional power of social media is wild and scary. It currently is (and will continue to be) a means of manipulation. But that has nothing to do with the rational product development testing process that has been in place for decades, which provided evidence of social media’s emotional power. The public outcry feels much more like a reaction to the outcome of the study than the methodology, but people are thrashing about in their reaction, and so methodology is getting dragged into the mud in this emotional mess.
The whole uproar feels quite confused to me… a pathos response misusing logos arguments to compensate.
The Bestish Statistical Conclusion
A few months back, the Upworthy folks gave NYMag an all-access tour of the company, and the result was a compelling read.
One quote in particular has been sticking with me for months.
Curators load potential headlines and thumbnail images into a testing system, which shows each option to a small sample of the site’s visitors, tracking their actions—did they click it, did they share it? The system used to return detailed numerical feedback on each option, but it was decided that hard numbers overinfluenced the curators; now it tags options with things like “bestish” and “very likely worse.”
This is a wonderful UI tweak that helps curators draw the correct conclusions from the statistical exhaust in A/B tests. Don’t show p-values or percentage lift; leave that junk on the editing room floor. Instead, just spell out conclusions, and state those conclusions in loose language (“bestish”) that nearly all stats conclusions deserve.
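The translation layer itself is only a few lines. A sketch of the idea in Python, with the caveat that Upworthy hasn’t published its actual thresholds or labels, so the cutoffs below are invented for illustration:

```python
def plain_language_verdict(p_value: float, lift: float) -> str:
    """Translate raw A/B stats into loose editorial language.

    The thresholds and labels here are invented for illustration;
    Upworthy hasn't published the rules behind "bestish".
    """
    if p_value > 0.1:
        return "too close to call"
    if lift > 0:
        return "bestish" if p_value < 0.01 else "probably better"
    return "very likely worse" if p_value < 0.01 else "probably worse"
```

The curator never sees a p-value; the hedged wording does the epistemic work for them.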
People often complain that they never use high school calculus in the real world, and lament it was thus a waste of time. I agree. I could make arguments for how calculus expands your mind and teaches how to learn, but ultimately, I think there are other mathematical subjects that would be far more valuable to people’s day-to-day lives that are overlooked by high school curricula. Statistics tops that list for me. I suspect the average news reader consumes 3-5 statistical analyses per day (a totally wild guess); nearly all of which attempt to draw conclusions, and some of which do so incorrectly.
I wish the UX of statistical presentation had a clear “best practices” guide. When presenting any statistics-based argument to end consumers, this Upworthy-style approach of plain-language abstraction would be a great addition.
A Tour of Input Methods
Web services that derive all their value from user-generated content spend enormous energy designing their input flows. This time is well spent because the input flow is the channel for all new content, the lifeblood of the service. A 3% lift (assuming it’s statistically significant) in content contributed by users can make a meaningful difference in engagement and retention, and growing a submit button by 20px can make or break that 3% difference in A/B testing.
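For intuition on why detecting a 3% lift is hard-won, here’s a rough two-proportion z-test using only the standard library (normal approximation; the sample numbers are hypothetical). With a 10% baseline conversion rate, a 3% relative lift (10.0% → 10.3%) only reaches conventional significance at fairly large sample sizes:

```python
from math import erf, sqrt


def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

At 100,000 users per arm the 10.0% vs 10.3% split is significant at the 0.05 level; at 10,000 per arm, the same relative lift is statistical noise.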
Here’s a tour of the input flows from a variety of popular web services:
Facebook - Facebook’s implementation of input should be considered a baseline best practice, though it’s not entirely intuitive on first glance alone. For example, if you want to add a photo, should you click the photo button on the bottom or the “Add Photos” icon up top?
Facebook’s input really shines when you add a link… it does some nice magic to detect the link, pull a representative photo, title, and description from the destination page, and format it all nicely for you. This complication is hidden from the end-user on first glance, and only emerges when it is relevant.
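Facebook’s actual link scraper is proprietary, but the general technique is to read the destination page’s Open Graph meta tags (a standard Facebook itself introduced) for the title, description, and representative photo. A minimal sketch using only the standard library, with a hypothetical page for input:

```python
from html.parser import HTMLParser


class OpenGraphParser(HTMLParser):
    """Collects og:* meta tags, the metadata a link preview is built from."""

    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        prop = d.get("property", "")
        if prop.startswith("og:") and "content" in d:
            self.meta[prop] = d["content"]


# Hypothetical fetched page, standing in for a real HTTP response body.
page = """
<html><head>
  <meta property="og:title" content="Example Article" />
  <meta property="og:description" content="A short summary." />
  <meta property="og:image" content="https://example.com/photo.jpg" />
</head><body>...</body></html>
"""

parser = OpenGraphParser()
parser.feed(page)
```

From `parser.meta` the composer can pre-fill the preview card, and only surface it once a link is actually detected.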
Twitter - Twitter’s input flow contrasts with Facebook’s in its size because each text field is optimized to receive content of a different length. 140 characters is small, whereas Facebook status updates can be larger. Across all the examples in this post, you’ll notice that the surface area of the input field is deliberately designed to communicate to the end-user roughly how long contributed content should be. This visual indicator gently nudges the user towards contributing content that is more likely to be well-received by consumers of the content.
The character counter next to the submit button counts down from 140 as the user types, and is generally well-programmed to handle pasted text and link shortening intelligently (though that hasn’t always been the case). All the icons are intuitive (camera button adds a photo… map pin adds a location).
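The counter logic is easy to get subtly wrong with pasted links. A simplified sketch of the idea, assuming every URL is replaced by a fixed-length t.co link (the exact wrapped length, 23 here, has varied over time):

```python
import re

TCO_LENGTH = 23  # assumed t.co wrapped-link length; has varied historically
URL_RE = re.compile(r"https?://\S+")


def characters_remaining(draft: str, limit: int = 140) -> int:
    """Count a draft the way Twitter's composer does: every URL costs a
    fixed t.co length, no matter how long the pasted link is."""
    normalized = URL_RE.sub("x" * TCO_LENGTH, draft)
    return limit - len(normalized)
```

The nice property is that a 200-character pasted link and a 12-character one cost the user exactly the same number of characters.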
In a push to increase photo usage, the Twitter mobile app (not depicted here) now uses half the screen to show recent photos to add to the user’s update, which is smart and saves photo uploaders an unnecessary tap.
Tumblr - Feels kind of meta to write about the Tumblr input method inside the Tumblr input method. Remember what I said above how the surface area of an input flow is designed as a cue to tell the user how long successful content should be? I’m *far* exceeding that cue on this post right now. I’ve known I use Tumblr “wrong” for years, but I don’t care.
The Tumblr input flow is really two steps, and I’m admittedly screenshotting only half of it. The first half of the flow is seen at the top of the content consumption feed: various icons representing the choice between posting text, photo, audio, video, etc…
This multi-step flow has two key benefits:
1) It means that content creation and consumption are nicely co-mingled, which I once heard Matt Mullenweg describe as one of the things Tumblr did right that Wordpress missed. Co-mingling creation and consumption lowers the barriers to create a new post, which leads to more content being added more regularly.
2) By splitting the process in half, the input flow can be well-customized to the media type being contributed. By contrast Facebook’s flow is designed to accept all media types seamlessly without prior explicit indication by the user. But the downside to Facebook’s approach is that there are no affordances in the input field that indicate what type of media is acceptable. Tumblr makes everything explicit, which is more intuitive at first glance, even if the cost is an extra click in the flow.
Pinterest - This is a two-step flow on Pinterest. The first part is the insertion of a URL, the second step appears once that URL is processed.
Note that content syndication to other social networks is so important to Pinterest that they give the sharing options material screen real-estate on the input flow. Twitter and Facebook by contrast omit these options. It’s all a question of priorities and business objectives.
Also notice that every input flow we have seen thus far deliberately uses a different color to denote the “OK” or “Complete” button, and it’s always the part of the input dialog that leaps off the screen visually.
Medium - This is definitely the most minimalist approach to date. My screenshot here is a bit of a cheat because really the input flow is the entire browser page. All the white space below the “Write your story” text is available to be filled with the user’s content, and the sea of white space leaves a dramatic impact on the user. It’s a similar experience to looking at a white piece of paper in a notebook. If you ever want to really *grab* the user’s attention, surround your point of focus with copious white space… this will always make the point of focus jump forward.
I love the green accent color in this flow. By using it sparingly, it is far more effective in calling the user’s attention.
This minimalist style only works because users of Medium have been filling out publishing input forms for years on competing platforms (Wordpress, Blogger, Tumblr, etc…). The white space obscures a key affordance: borders. Borders cue users where the edges of content input are. Borders say, “You can click inside me and start typing.” The lack of borders is beautiful, but it only works because users of Medium are experienced enough to know that there are implied borders in these invisible input boxes.
Wikipedia - I thought Wikipedia’s archaic input form would provide a nice contrast to Medium’s form. Since most content contributions to Wikipedia come in the form of updates to existing articles instead of new “from scratch” articles, I screenshotted the input form to edit an existing article. Medium’s white space feels like a breath of fresh air compared to this crowded subway car of Wikicode.
Wikipedia’s input form feels crowded, jargony, and raw. I think there are two reasons for this design choice:
1) Wikipedia doesn’t want every visitor editing all pages. It’s technically open to this option, but the painful design of this page is a deterrent, that I believe is deliberate. It says “if you don’t know Wikicode, you’re in the wrong place.”
2) It’s simply old and hasn’t been updated. In the early 2000s, I’d argue this page was relatively well-designed, especially compared to trying to edit raw HTML in your favorite text editor, which was the “state of the art” when Wikis were invented.
Snapchat - I’m throwing in a mobile-only example to show a technique that is really only possible in mobile device input flows.
Snapchat’s input flow paints edge-to-edge with full bleed photos. There’s no browser window, no scroll bars, no favorite icons or URL bar, no company logo, no settings icon or user profile badge. It’s just the input form… alone… on 100% of the screen. When you’re in content creation mode, the lack of distractions is a joy.
In keeping with painting the input flow edge-to-edge, this is the first example we’ve seen where the input buttons and choices are overlaid directly onto the content. This can be confusing to a first time user, especially if the content and the buttons accidentally blend together due to similar pixel color. But once you’ve experienced full bleed photo input flows like this, you quickly learn where all the buttons are, and there’s no going back. Old non-full-bleed input flows look dated… like trying to create content through a tunnel constraining your view. Like watching SD content on an HD TV.
Yo - I don’t really use Yo, so I feel a bit like a fraud for adding this example, but its simplicity is so seductive that I can’t help but include it in this tour of input flows. In Yo, the only thing you can do is say “Yo” to another user. You do so by clicking the recipient’s name. It is an input flow that is essentially just one button: a “submit” button titled with the recipient’s name. I don’t think it can possibly get any simpler to create content than this. But the simplicity also constrains content creation diversity: no one can compete to be the next Shakespeare inside a Yo input flow.
So there you have it. A tour of input flows and a small sampling of lessons to draw based on some of the most popular web services’ best practices. Please add your own contributions or analysis in comments.