Stellar is fascinating. I’ve spent much of my day today learning everything I can about it.
I’m curious what the total value of the currency is at this beta beginning. There are a couple of answers:
1) Stripe made a $3,000,000 loan to Stellar to fund initial operations. Stellar repaid this loan using 2% of all stellars*. This means $3MM / 2% = $150MM stellar market cap.
2) On reddit people are offering the following conversion rates:
- 4000 stellars for 1 hr of full-stack dev consulting work. 1 hr of a dev’s time =~ $100. So, that’s a 40 stellar : $1 ratio. There are 100 Billion stellars in existence. Which implies a stellar market cap of: $2.5B.
- Paying $2 for 5000 stellar. So that’s a 2500 stellar : $1 ratio. Which implies a stellar market cap of: $40MM.
Both of those are only offered rates. Nothing has actually transacted at those prices as far as I can tell. So, they represent the bid side of the bid/ask spread.
3) Stellar.org is giving away 19% of all stellars to owners of Bitcoin. This ratio implies a value of 1450 stellar : 1 BTC, which is a 2.4 stellar : $1 ratio (using today’s BTC price of $601.97). Which implies a stellar market cap of: $41.5B
This last # is pretty fishy because you don’t have to actually exchange your BTC to get your 1450 stellars… it’s just a gift for being an early supporter of bitcoin. So, it’s not really a conversion rate.
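The arithmetic behind all three valuations above boils down to one formula: total stellars divided by the stellar-per-dollar exchange rate. Here’s a quick sketch of the numbers (my own back-of-the-envelope code, not from any official source):

```python
# Implied market caps from the exchange rates discussed above.
TOTAL_STELLARS = 100e9  # 100 billion stellars in existence

def implied_market_cap(stellars_per_dollar):
    """Market cap in USD implied by a stellar:dollar exchange rate."""
    return TOTAL_STELLARS / stellars_per_dollar

# 1) Stripe loan: $3MM repaid with 2% of all stellars
stripe_cap = 3_000_000 / 0.02                  # $150MM

# 2a) 4000 stellars for a $100 hour of dev work -> 40 stellars : $1
dev_cap = implied_market_cap(4000 / 100)       # $2.5B

# 2b) $2 for 5000 stellars -> 2500 stellars : $1
reddit_cap = implied_market_cap(5000 / 2)      # $40MM

# 3) 1450 stellars per BTC at $601.97 -> ~2.4 stellars : $1
btc_cap = implied_market_cap(1450 / 601.97)    # ~$41.5B
```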
In conclusion, what is a stellar actually worth? Whatever someone will pay for it. Of these valuations, I’d guess the $150MM number holds the most veracity… although the $3MM is really just option value to Stripe, so it’s not perfect. The best valuation metric would be to know the salaries being paid in stellars, compared to market-rate alternatives.
* lowercase = currency unit. Uppercase = the non-profit company.
The key relation in both images is that the output cipher from each round of encryption is fed into the input of the encryption of the subsequent round, creating a chain. It’s very elegant. I never knew the origin of this structure before… and I’m sure its roots go back beyond CBC.
I love moments of abstraction connection like this… this is why I take Coursera classes. They’re very academic, which doesn’t seem useful at first, but I find they make me look at my day-to-day interactions through a new lens, which spurs serendipitous moments of creative connections I would otherwise miss.
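To make the chaining idea concrete, here’s a toy sketch of CBC-style encryption, using XOR as a stand-in for a real block cipher (my own illustration; XOR is obviously not secure):

```python
# Toy illustration of CBC-style chaining: each block's ciphertext
# feeds into the encryption of the next block.
def toy_encrypt_block(block: int, key: int) -> int:
    """Stand-in for a real block cipher. XOR is NOT secure."""
    return block ^ key

def cbc_encrypt(blocks, key, iv):
    ciphertext = []
    prev = iv  # the initialization vector seeds the chain
    for block in blocks:
        # Chain: mix the previous ciphertext into this block, then encrypt
        c = toy_encrypt_block(block ^ prev, key)
        ciphertext.append(c)
        prev = c
    return ciphertext

def cbc_decrypt(cipher_blocks, key, iv):
    plaintext = []
    prev = iv
    for c in cipher_blocks:
        # Undo the cipher, then unmix the previous ciphertext
        plaintext.append(toy_encrypt_block(c, key) ^ prev)
        prev = c
    return plaintext
```

The payoff of the chain: two identical plaintext blocks encrypt to different ciphertext blocks, because each one is mixed with a different predecessor.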
In 2011 Neal Stephenson penned an essay called Innovation Starvation about our current stagnation in accomplishing big honking technical marvels. He is not alone in this worry, a classic fear often ascribed to pessimistic curmudgeons who pine for the good old days, but unlike that stereotype, Stephenson’s essay is great because it presents a path forward, led by sci-fi.
The primary reason I read sci-fi is to be inspired by what’s possible, to view through a window a compelling and convincing possible future. Stephenson takes my interest one step further by saying that it’s sci-fi authors’ responsibility to create the hieroglyphs for future innovations. Hieroglyphs? Stephenson describes it best:
Good SF supplies a plausible, fully thought-out picture of an alternate reality in which some sort of compelling innovation has taken place. A good SF universe has a coherence and internal logic that makes sense to scientists and engineers. Examples include Isaac Asimov’s robots, Robert Heinlein’s rocket ships, and William Gibson’s cyberspace. As Jim Karkanias of Microsoft Research puts it, such icons serve as hieroglyphs—simple, recognizable symbols on whose significance everyone agrees.
I agreed back in 2011 when I originally read this essay, and the concept of sci-fi as hieroglyphs has been banging around in my brain ever since. I saw a great quote tweeted out by Ian Hogarth today, quoting a blog post by Albert Wenger. He said, “[I]t is almost too easy to write a dystopia these days. The real challenge, it seems to me, is to write a new utopia.” Cue vigorous nodding in agreement, and the quote reminded me of Stephenson’s essay.
Three years ago, I blogged briefly about Venture for America (VfA), a non-profit that places America’s top college graduating talent into CEO apprenticeship roles in small business in declining US cities. This simple graphic from the About page really says it all:
This week, the VfA team kindly invited me down to Brown for a panel on Entrepreneurship*. Participating in this event gave me an appreciation for just how far VfA has come in three years. They’ve made some big splashes, like Tony Hsieh’s $1MM commitment to VfA to help revitalize businesses in downtown Las Vegas.
But what was most striking to me was the evidence of true, organic growth, directly in line with the company mission. Because the organization is 3 years old and it’s a two year fellowship program, the data is starting to come in. The original class of fellows are graduating from their two year apprenticeships, and I had the privilege to hear about some of their journeys. The businesses the fellows joined were not rocketships (zero businesses are ever straight-up-and-to-the-right… despite the “overnight success” stories journalists love to write in retrospect), and the fellows had to deal with the same startup highs and lows I see founders deal with on a daily basis. All the stories I heard, positive or not, ended with lessons learned and new strengths found.
The most inspiring story I heard was of one fellow who was placed in a company in Detroit. Inspired by his experience, he’s forming his own company at the end of his fellowship, a new CPG company selling dried pasta made from chickpeas. He has hired another VfA fellow to help him build the business, and they’ll be living in a Detroit apartment building with six other VfA fellows, some of whom bought the building and will be renovating and renting the property as a business. This is the VfA mission at work, playing out as well as I could possibly imagine. I was so inspired.
So congrats to Andy Yang, Eileen Lee, Mike Tarullo, and the rest of the VfA team on all their success. It’s amazing to see what they’ve built from scratch, and I look forward to their continued success in bringing more jobs into cities where they are most needed.
———- *Given that I’ve never been an entrepreneur, I stayed as humble as possible and tried to field only VC-related questions. My co-panelists had built some impressive businesses.
Today Cover is announcing their Series A, led by Spark. Cover is a mobile payment company that allows consumers to dine without waiting for the check. Techcrunch has more coverage.
When I first used Cover, I found the user experience to be delightfully simple. I walked in the restaurant, sat down, and said I’d like to pay with Cover. Then at the end of the meal, I walked out. That’s it. That’s how simple the experience is.
It reminded me a lot of the first time I used Uber. “What do you mean I don’t pay?” I remember thinking, “Huh, I hope that tip was included…” and “That was easy.” Cover has that exact same “A-ha” moment. It’s so easy, easy to the point of being unexpected.
With repeat usage the initial “A-ha” moment (I can just walk out? Whoa…) fades into a second “A-ha” moment: you’re a regular. Anywhere that accepts Cover now feels like a house account. You can just “put it on my tab.” It feels flattering, empowering, and, eventually, normal. In line with the adoption curve of all interesting new technology: you feel joy first, and then in the long run it becomes pleasant expectation.
If you’re in SF or NYC, give it a try the next time you go out to eat. Which leads me to the next reason why I’m excited to be an investor in Cover: the restaurant list. The Cover team has done a remarkable job of partnering with an impressive group of restaurants. Adoption by restaurants such as Momofuku Ko, Carbone, and Alder (the homes of world-renowned chefs) provide a level of validation I rarely find in early stage startups. It’s a testament to both the product experience and the persuasive hustle of Cover’s team.
Having a great product experience is an essential part of how I develop the conviction to make an investment, but it’s not everything. It’s also important that a company fits in Spark’s view of how Internet technology is evolving. In the case of Cover, mobile computing is a decade-defining technology shift, the importance of which cannot be overstated. Every information-based service that was first disrupted by the Internet is now up for grabs again in the move to mobile, plus some additional opportunities that the Internet never quite nailed alone last decade.
Payments is an opportunity that was never thoroughly transformed in the Internet shift because only now, for the first time in history, does everyone have an advanced Unix-based computer humming away in their pockets, ready to run software to make paying for anything more dynamic, and (more importantly) more enjoyable. Only recently, at this point in time, is your “wallet” smart enough to hail your cab (Uber), book your hotel room (HotelTonight), and close out your check (Cover). The transactions in each of these products are vertically organized; meaning: they are single purpose payments. Today, Cover is manically focused on its single purpose: making the restaurant payment experience amazing. From that established beachhead, there will be opportunities to move both vertically and horizontally.
I feel honored to have the opportunity to partner with Andrew Cove, Mark Egerman, and the rest of the Cover team; and I’m glad to be joining Bryce at OATV for another tour of duty (we share investments in a few other companies).
How is this different from what is now roughly two decades of data-driven design in web development? The designer optimizes for a goal using hypothesis testing, the same way a marketer inside Procter and Gamble has been using data-driven approaches to pricing and packaging in geographically isolated regions of the country (old school A/B testing) for nearly a century. No one would fathom asking P&G “where’s your IRB?” In fact, no one would fathom asking Google for an IRB approval for their hundreds of simultaneous live A/B tests in Gmail as recently as two weeks ago. So, why now? What unspoken ethical line did Facebook cross?
I think people are generally uncomfortable with the idea that their emotions can be swayed by social media, and are looking for some way in which this must be a violation of an existing ethical norm. But the two issues are orthogonal. The emotional power of social media is wild and scary. It currently is (and will continue to be) a means of manipulation. But that has nothing to do with the rational product development testing process that has been in place for decades, which provided evidence of social media’s emotional power. The public outcry feels much more like a reaction to the outcome of the study than the methodology, but people are thrashing about in their reaction, and so methodology is getting dragged into the mud in this emotional mess.
The whole uproar feels quite confused to me… a pathos response misusing logos arguments to compensate.
One quote in particular has been sticking with me for months.
Curators load potential headlines and thumbnail images into a testing system, which shows each option to a small sample of the site’s visitors, tracking their actions—did they click it, did they share it? The system used to return detailed numerical feedback on each option, but it was decided that hard numbers overinfluenced the curators; now it tags options with things like “bestish” and “very likely worse.”
This is a wonderful UI tweak that helps curators draw the correct conclusions from the statistical exhaust in A/B tests. Don’t show p-values or percentage lift; leave that junk on the editing room floor. Instead, just spell out conclusions, and state those conclusions in loose language (“bestish”) that nearly all stats conclusions deserve.
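The translation step is simple to implement. Here’s a hypothetical sketch of mapping raw A/B numbers to loose labels like the ones described above; the thresholds and label names are my own inventions, not the actual system’s:

```python
# Map a variant's raw stats to plain-language conclusions,
# hiding p-values and percentage lifts from the curator.
def label_variant(lift: float, p_value: float) -> str:
    """Translate a variant's lift vs. control into loose language.

    lift: relative lift vs. control (e.g. 0.10 = 10% better).
    p_value: significance of the observed difference.
    """
    if p_value > 0.05:
        return "too early to tell"  # not statistically significant yet
    if lift > 0.10:
        return "bestish"
    if lift > 0:
        return "probably better"
    if lift < -0.10:
        return "very likely worse"
    return "probably worse"
```

The design choice worth copying: the function deliberately returns words, not numbers, so hard figures never reach (and never overinfluence) the curator.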
People often complain that they never use high school calculus in the real world, and lament that it was thus a waste of time. I agree. I could make arguments for how calculus expands your mind and teaches you how to learn, but ultimately, I think there are other mathematical subjects that would be far more valuable to people’s day-to-day lives that are overlooked by high school curricula. Statistics tops that list for me. I suspect the average news reader consumes 3-5 statistical analyses per day (a totally wild guess), nearly all of which attempt to draw conclusions, and some of which do so incorrectly.
I wish the UX of statistical presentation had a clear “best practices” guide. When presenting any statistics-based argument to end-consumers, this Upworthiest approach of plain-language abstraction would be a great addition.
Apparently a sole bidder picked up all the BTC that the US Marshals seized, over $18mm worth at market value.
In nearly every market in the world (financial or otherwise) items sold in bulk will sell at a discounted price to the market price for an individual item.
By contrast, I would not at all be surprised if these BTC sold at a *premium* to the current trading market price. Why? Because the daily spot market for BTC is still quite young, with issues of illiquidity and high spreads. But at the same time there are increasingly growing businesses like Coinbase that depend on their ability to regularly acquire Bitcoin at market prices on demand. So, the ability to easily acquire a bunch of BTC in bulk is so convenient compared to the spot market, I could easily see a corporation paying a premium for the opportunity. Which is crazy.
Web services that derive all their value from user-generated content spend enormous energy designing their input flows. This time is well spent because the input flow is the channel for all new content, the lifeblood of the service. A 3% (assuming statistically significant) lift in content contributed from users can make a meaningful difference in engagement and retention, and enlarging a submit button by 20px can make or break that 3% difference in A/B testing.
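How would you check whether a lift like that 3% is statistically significant? One standard approach is a two-proportion z-test comparing contribution rates between the two variants. A minimal sketch (my own, with made-up sample numbers):

```python
import math

# Two-proportion z-test: is variant B's contribution rate
# significantly different from variant A's?
def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Return (z, two-sided p-value) for rates in groups A and B."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    # Pooled rate under the null hypothesis that A and B are identical
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF (expressed with erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. a 3% relative lift: 10.0% -> 10.3% contribution rate,
# with 100k users in each bucket
z, p = two_proportion_z(10_000, 100_000, 10_300, 100_000)
```

With 100k users per bucket, this particular 3% relative lift clears the conventional p < 0.05 bar; with much smaller buckets it would not, which is why the “assuming statistically significant” caveat matters.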
Here’s a tour of the input flows from a variety of popular web services:
Facebook - Facebook’s implementation of input should be considered a baseline best practice, though it’s not entirely intuitive on first glance alone. For example, if you want to add a photo, should you click the photo button on the bottom or the “Add Photos” icon up top?
Facebook’s input really shines when you add a link… it does some nice magic to detect the link, pull a representative photo, title, and description from the destination page, and format it all nicely for you. This complication is hidden from the end-user on first glance, and only emerges when it is relevant.
Twitter - Twitter’s input flow contrasts with Facebook’s in its size because each text field is optimized to receive content of a different length. 140 characters is small, whereas Facebook status updates can be larger. Across all the examples in this post, you’ll notice that the surface area of the input field is deliberately designed to communicate to the end-user roughly how long contributed content should be. This visual indicator gently nudges the user toward contributing content that is more likely to be well-received by consumers of that content.
The 140 counter next to the submit button counts down as the user types, and is generally well-programmed to handle pasted text and link shortening intelligently (though that hasn’t always been the case in the past). All the icons are intuitive (camera button adds a photo… map pin adds a location).
In a push to increase photo usage, the Twitter mobile app (not depicted here) now uses half the screen to show recent photos to add to the user’s update, which is smart and saves photo uploaders an unnecessary tap.
Tumblr - Feels kind of meta to write about the Tumblr input method inside the Tumblr input method. Remember what I said above how the surface area of an input flow is designed as a cue to tell the user how long successful content should be? I’m *far* exceeding that cue on this post right now. I’ve known I use Tumblr “wrong” for years, but I don’t care.
The Tumblr input flow is really two steps, and I’m falsely screenshotting only half of it. The first half of the flow is seen at the top of the content consumption feed: various icons representing the choice between posting text, photo, audio, video, etc…
This multi-step flow has two key benefits:
1) It means that content creation and consumption are nicely co-mingled, which I once heard Matt Mullenweg describe as one of the things Tumblr did right that WordPress missed. Co-mingling creation and consumption lowers the barriers to create a new post, which leads to more content being added more regularly.
2) By splitting the process in half, the input flow can be well-customized to the media type being contributed. By contrast Facebook’s flow is designed to accept all media types seamlessly without prior explicit indication by the user. But the downside to Facebook’s approach is that there are no affordances in the input field that indicate what type of media is acceptable. Tumblr makes everything explicit, which is more intuitive at first glance, even if the cost is an extra click in the flow.
Pinterest - This is a two-step flow on Pinterest. The first part is the insertion of a URL, the second step appears once that URL is processed.
Note that content syndication to other social networks is so important to Pinterest that they give the sharing options material screen real-estate on the input flow. Twitter and Facebook by contrast omit these options. It’s all a question of priorities and business objectives.
Also notice, every input flow we have seen thus far has deliberately used a different color to denote the “OK” or “Complete” button, and it’s always the part of the input dialog that leaps off the screen visually.
Medium - This is definitely the most minimalist approach to date. My screenshot here is a bit of a cheat because the input flow is really the entire browser page. All the white space below the “Write your story” text is available to be filled with the user’s content, and the sea of white space makes a dramatic impact on the user. It’s a similar experience to looking at a blank piece of paper in a notebook. If you ever want to really *grab* the user’s attention, surround your point of focus with copious white space… this will always make the point of focus jump forward.
I love the green accent color in this flow. By using it sparingly, it is far more effective in calling the user’s attention.
This minimalist style only works because users of Medium have been filling out publishing input forms for years on competing platforms (WordPress, Blogger, Tumblr, etc…). The white space obscures a key affordance: borders. Borders cue users to where the edges of content input are. Borders say, “You can click inside me and start typing.” The lack of borders is beautiful, but it only works because users of Medium are experienced enough to know that there are implied borders in these invisible input boxes.
Wikipedia - I thought Wikipedia’s archaic input form would provide a nice contrast to Medium’s form. Since most content contributions to Wikipedia come in the form of updates to existing articles instead of new “from scratch” articles, I screenshotted the input form to edit an existing article. Medium’s white space feels like a breath of fresh air compared to this crowded subway car of Wikicode.
Wikipedia’s input form feels crowded, jargony, and raw. I think there are two reasons for this design choice:
1) Wikipedia doesn’t want every visitor editing all pages. It’s technically open to this option, but the painful design of this page is a deterrent that I believe is deliberate. It says, “if you don’t know Wikicode, you’re in the wrong place.”
2) It’s simply old and hasn’t been updated. In the early 2000s, I’d argue this page was relatively well-designed, especially compared to trying to edit raw HTML in your favorite text editor, which was the “state of the art” when Wikis were invented.
Snapchat - I’m throwing in a mobile-only example to show a technique that is really only possible in mobile device input flows.
Snapchat’s input flow paints edge-to-edge with full bleed photos. There’s no browser window, no scroll bars, no favorite icons or URL bar, no company logo, no settings icon or user profile badge. It’s just the input form… alone… on 100% of the screen. When you’re in content creation mode, the lack of distractions is a joy.
In keeping with painting the input flow edge-to-edge, this is the first example we’ve seen where the input buttons and choices are overlaid directly onto the content. This can be confusing to a first time user, especially if the content and the buttons accidentally blend together due to similar pixel color. But once you’ve experienced full bleed photo input flows like this, you quickly learn where all the buttons are, and there’s no going back. Old non-full-bleed input flows look dated… like trying to create content through a tunnel constraining your view. Like watching SD content on an HD TV.
Yo - I don’t really use Yo, so I feel a bit like a fraud for adding this example, but its simplicity is so seductive that I can’t help but include it in this tour of input flows. In Yo, the only thing you can do is say “Yo” to another user. You do so by clicking the recipient’s name. It is an input flow that is essentially just one button: a “submit” button titled with the recipient’s name. I don’t think it can possibly get any simpler to create content than this. But the simplicity also constrains content creation diversity: no one can compete to be the next Shakespeare inside a Yo input flow.
So there you have it. A tour of input flows and a small sampling of lessons to draw based on some of the most popular web services’ best practices. Please add your own contributions or analysis in comments.
Fred Wilson is going to Washington today and posted on his blog the argument he’ll use to sway conservatives in favor of net neutrality. The gist is: net neutrality is the status quo on the Internet, and if it ain’t broke don’t fix it. Because it ain’t broke, the primary reason politicians oppose net neutrality is because they are funded by 20 years of Cable/Telco donations. So, to be pro-net-neutrality is to be against big business swaying politics.
That is a rational, sensible argument; which Fred is bringing to a city that IMHO has little patience for open, rational debate.
The most effective pro-Internet, pro-openness lobby I have ever seen was the SOPA/PIPA blackout. And that had very little to do with a sensible argument and everything to do with an overwhelming public outcry on a scale lawmakers rarely see (like Million Man March rare). The lawmakers didn’t totally understand the outcry, but clearly felt its scale, which killed both bills immediately.
To make a trip to Washington effective, my gut says don’t rely too much on logic and instead make lawmakers feel like you represent a million voices. For a VC like Fred, that argument goes something like:
"I serve on the Board of Directors of Internet companies that have media influence on a daily basis with 250mm voting Americans, and here’s my opinion on why you should support Net Neutrality."
I’m delighted for Jonathan, Benny and the rest of the Timehop team for all the success they’ve earned lately. This Pando article uses a “curiosity gap” headline to promise the Secret to getting to the top of the App Store. The (unsurprising?) “secret” is a combination of:
Nuanced product iteration in quick cycles
Meticulous attention to details in onboarding
The “it works” feature: making it fast, reliable, and squashing bugs
Like every other 4-years-in-the-making-overnight-success (Tumblr is an excellent case study in this phenomenon), there is no secret formula. Just put your head down and grind out great work.
I was on vacation last week and took some of the time on flights to read Nexus by Ramez Naam. It’s a sci-fi book that explores the intersection of computers and the brain and how they could meld together in the future.
I typically only review books on this blog that I emphatically endorse. I generally don’t think there’s any point in blogging about a book that I didn’t like; it doesn’t help anyone to do so. I’m blogging about Nexus because I definitely recommend it for a particular person/audience, but it’s not for everyone. And I personally enjoyed it quite a bit.
First, the good: it’s a compelling vision of the day-after-tomorrow in bio-hacking. Today, as wearable computing continues to increase in popularity, it’s very intuitive to see how the devices that currently sit on our bodies will eventually start to be embedded into our bodies. Naam takes this thought process to its logical conclusion, as the brain becomes the silicon-replacement medium for computation. Since the brain already has the ability to process information, the key insight that Naam explores is the I/O necessary to read/write info to/from the brain. Once that scientific piece is unlocked, many of Naam’s fictional explorations in this book ooze the essential verisimilitude that makes sci-fi sing with familiarity.
The bad: this book is not well-written. It’s not all bad: the writing is at its best in the scientific accuracy and the action scenes’ detail. I really like the parts where Naam describes what it’s like to research and explore a new frontier, with all the dead ends and promising threads. The problems emerge in things like cliche characters, predictable metaphor usage, and one particularly bad sex scene. Just don’t read this book after reading any of the “greats” of contemporary fiction like Chabon or Franzen. They make this book look like the material of a college fiction seminar.
So, if you’re excited by the ideas of what hacking your own body could become, I highly recommend Nexus.
What’s the difference between sushi and cold dead fish? Positioning.
I am generally pretty bad at using positioning effectively. It’s something I am continually working to improve. I often find myself thinking there is one objective truth, and any form of spin on that truth is incorrect and unethical. I think this stems from my overly logical recovering programmer frame of reference.
Every time I detect spin in some article designed to persuade my opinion, I think of Joe Friday retorting “Just the facts, ma’am” (which apparently never happened).
But that black and white line of thinking is exactly why I’m not great at positioning. “Sushi” and “cold dead fish” are both the objective truth, one is simply positioned more attractively than the other.
“Pull media has quickly been replaced by push media, as the Times report makes clear in so many words. Information—status updates, photos of your friends, videos of Solange, and sometimes even news articles—come at you; they find you. And media that don’t are hardly found at all.”—
For years now, when people ask me, “How do you consume your news?” my answer has consistently been, “Twitter reads the news to me.”
EFF did a great summary of all the major web services and their policies regarding fighting for your digital rights. It’s filled with great “did you know” tidbits. For example, did you know that if the government requests your Amazon purchase history, Amazon will not notify you about the request? (!)
I’d love to see a modern-day SiteAdvisor that helps me navigate my digital rights. SiteAdvisor was a service founded by Chris Dixon that was a browser extension that told you whether or not you were about to visit a dangerous site (this is back before Chrome and FF baked this functionality directly into their browsers… in fact, Chrome didn’t even exist at the time). While I appreciate the protection from malware, I feel like data privacy is an equally pressing problem. I wish that my web browser would warn me before trusting a service with my data, if that service is likely to sell me out down the road.
For now, simply being armed with this knowledge is good… but if I knew this information at the moment that it matters most (when I’m considering signing up for a service) that would be even better.
An ex-politician seeking re-election has asked to have links to an article about his behaviour in office removed.
A man convicted of possessing child abuse images has requested links to pages about his conviction to be wiped.
As a follow-up to my blog post today, here are two examples of the “forget me” requests Google has received thus far. There is no doubt in my mind that Google is being asked to make its product worse for end-consumers.
European citizens now have the right to selectively redact a Google search engine results page (SERP), according to a recent ruling by The European Court of Justice. The best commentary I have read on this ruling comes from Jonathan Zittrain, so, not surprisingly, my own opinions on the subject are influenced by his thoughtful NYT op-ed.
The idea that citizens have the right to redact Google sits poorly with me, despite the fact that I empathize with the problem trying to be solved. Google has one job: surface the most relevant webpages associated with a set of keywords. If there is unflattering content connected to the keywords of your name, but that content is the most relevant possible content, then that’s what should appear. Doing anything other than serving the most relevant content will force Google to make their product worse.
When something unflattering does appear, the most direct solution an end-user should take is to contact the offending webpage’s owner to ask them to remove the content. If the offending content is slanderous or otherwise illegal, there are already legal remedies to these problems. Those remedies are messy and broken, hence I empathize with the problem the EU Court is trying to solve. But if a governing body believes that its citizens have the right to selectively censor web content about themselves, then the most direct solution is to legally require action by web hosts, not search engines. Placing responsibility for censorship compliance on Google instead of the underlying web hosts feels like either A) laziness or B) ignorance by the European Court.
Laziness: in that there is only one (or three if you count Bing and Yahoo) entity to regulate, so it’s simply easier and lazier to require Google’s compliance.
Ignorance: in that it’s entirely possible that the European Court does not understand that removing offending content from Google does not make the content disappear. The legal opinion from the court does make key distinctions that make me think the author of the opinion understands the technical difference between removal from Google and removal from the web. But I still think ignorance is a possibility here because there is an assumption that removal from Google is *effectively* just as good as removal from the web, and that’s simply not true. Google is the most popular way to find relevant results on the web today. It’s possible that won’t be true in the future. If Google does start aggressively complying with these censorship requests, it’s very likely that a people-search startup that helps users find all relevant webpages associated with a person’s name will gain popularity in the EU.
Either way, the ruling against Google sets a dangerous precedent and is yet another step down the dangerous path of conceding that the Internet is subject to country-by-country government regulation. Expect Neal Stephenson’s data havens from Cryptonomicon to rise in popularity as this regulatory trend continues.
It’s curious that Internet technology’s primary benefit is flattening the world and shrinking distances between people, and yet there is a huge boom in Bay Area real estate prices because all the people who work on building Internet technology cram themselves into a 40 mile radius to work together IRL.
It reminds me of the Go-Go years of ‘98-‘00 when Jason Calacanis’s Silicon Alley Reporter was at its height in popularity. As an outside observer looking back in hindsight 6 years later when I first arrived in VC in NYC, I couldn’t understand why a journal that was the pinnacle of tech thought leadership in the early boom years would be produced by applying squid secretions onto flattened dead tree pulp. Shouldn’t the leading magazine about the Internet be distributed on the Internet?
By contrast, today all the leading journals covering the Internet are distributed exclusively digitally, so we have made progress on that front. But why does everyone building this humanity-connecting technology need to live in the same city? It seems like a similarly naive failure to eat our own dog food as the Silicon Alley Reporter analogy.
Whenever I see something that seems incongruous, I have to remind myself that things change. They often don’t change as quickly as I’d expect, but when they do start changing, they change more rapidly than I anticipate.
In our first 8 years at Spark, we never invested in a completely distributed team. But within the past year, we’ve made two such investments: Upworthy and Crowdrise.*
In both cases these companies benefit from being able to recruit from anywhere in the world, which is a strong perk in our market where the best talent is in short supply and essential to startup success. The tradeoff for this benefit is that remote collaboration tools are still maturing, so remote team communication still has some friction. Group video calls have OK fidelity, but it’s nothing like an in-person face-to-face meeting. Similarly, interrupting a colleague remotely over IM or group chat can feel more distracting than asking a quick question across a cubicle (or open concept) wall.
I wouldn’t go so far as to say we’ll replace face-to-face interaction and all companies will be distributed. That feels both unrealistic and dystopian. Both Upworthy and Crowdrise make smart use of regular, all-hands offsites to help facilitate IRL connections between employees. My point is the benefits of being distributed will continue to get better as collaboration technology improves, so much so that at some point, it will become the more-attractive, intuitive choice for most tech companies. Just like how digital content publishing and consumption eventually got good enough that the successors to Silicon Alley Reporter were published exclusively online.
The tipping point for me was the first time I used a Cisco Telepresence system. It’s shocking how lifelike the experience felt… not dissimilar from the first time I put on the Oculus Rift. Today, it’s unaffordable to 99.9% of startups, but like all new technology, it will eventually come down the cost curve as startups increasingly build equivalent experiences out of commodity hardware and open source software, such that any team can use it affordably.
And when distributed teams become the new normal, then the next Silicon Valley will be on the Internet.
*Side note: the early days of StackOverflow were also a distributed team, but they ended up consolidating into an NYC-based office around the time we made our initial investment.
It’s Teacher Appreciation Week (my Twitter feed tells me). So who is the best teacher I’ve had?
It’s a tough decision, but I would have to go with my senior year English teacher Doc Fast. I took two English electives with him, one on Faulkner and another on crises in faith in literature.
I had teachers in STEM subjects that taught me more, challenged me more, and generally gave me superpowers, but I think they all come in second to Doc Fast because I suspect it was pretty easy to teach me STEM subject matter. It was already my strength, and I was already engaged. By contrast I always lagged behind in grade performance and level of interest in English.
Before Doc Fast, I approached most English classes with a formula. Papers followed a template that was reliable but boring. Books were work rather than enjoyable and I could churn through them on a schedule but hardly drink them in the way I do now. Doc Fast made English a lot more fun, and taught me how to break the rules to be a better thinker.
I hate picking one teacher because there are a bunch of others that have strongly shaped who I am today. Quick thanks to Mr Morange, Doc Wacht, Mr Sherman, Mr Sweeney, and Doc Stearns.
Multipath TCP Is Intuitive, Awesome, and (Hopefully) Coming Soon
I’ve been reading up on Multipath TCP recently. It’s a backwards-compatible rethinking of the Transmission Control Protocol (TCP), one of the two essential building blocks of the Internet stack (TCP and IP, most commonly written together as TCP/IP).
Your average internet connection works in a surprisingly limited way. Let’s say you’re on your cell phone, and you have both wifi connectivity and a solid LTE connection available. Your phone does not use both connections at the same time to give you greater bandwidth. Because of the way TCP is architected, it can’t mix Internet connection sources in this way.
This is silly, right?… if you lived in a home with two washing machines, wouldn’t you run both of them at the same time when you’re doing laundry, instead of sequentially doing two loads of laundry in just one washer? Despite TCP being invented 40 years ago, we are just now solving this problem.
One of the first real-world implementations of this theoretical protocol design is in iOS7. When running on a network that supports Multipath TCP, your iPhone will use wifi as a primary connection and your cell data plan as a backup path. I imagine Android phones in the very near future will start to be hacked with Multipath TCP support in a full-on inverse-multiplex approach that maximizes the use of both connections at all times.
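The iOS approach above is just a backup-path policy; the full inverse-multiplex idea is easy to sketch. Below is a toy illustration in plain Python (no real networking, and not the actual MPTCP wire protocol, which negotiates subflows via TCP options and handles loss and congestion control): stripe one byte stream across two simulated paths, then reassemble it in order at the receiver.

```python
# Toy sketch of inverse multiplexing: stripe one byte stream across
# two simulated paths, then reassemble it in order at the receiver.
# Illustration only -- real Multipath TCP is far more involved.

def split_across_paths(data: bytes, chunk_size: int = 4):
    """Alternate fixed-size chunks across two paths, tagging each
    chunk with a sequence number so the receiver can reorder."""
    paths = {"wifi": [], "lte": []}
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for seq, chunk in enumerate(chunks):
        path = "wifi" if seq % 2 == 0 else "lte"
        paths[path].append((seq, chunk))
    return paths

def reassemble(paths) -> bytes:
    """Merge the tagged chunks from both paths back into one stream."""
    tagged = paths["wifi"] + paths["lte"]
    return b"".join(chunk for _, chunk in sorted(tagged))

message = b"Multipath TCP uses every link you have."
paths = split_across_paths(message)
assert reassemble(paths) == message
```

The payoff is that each path only has to carry roughly half the chunks, so total throughput approaches the sum of both links.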
From what I have read, it seems like the near term bottleneck is not client support, but instead network support. How do we make sure that all cell backhaul and all public wifi networks support Multipath TCP? We need a public awareness campaign. I hope my blog post here is one drop in that future ocean.
I try to find ways in my life that incentivize my own good behavior. That’s why I use Lift, which I’ve blogged about previously. Pushing the big green button in the app when I’ve accomplished something is a reward that I have convinced myself is meaningful.
For others a big green button might not have quite the same allure. I still think well-designed incentives can work, but the payoff needs to change to be meaningful to a broader audience.
Companies like Earndit and Gym Pact take an interesting approach: incentivize good behavior using the most universally understood reward system possible: financial compensation. Earndit does so in the form of discounts from brand partners, and Gym Pact does so using cash “pacts” from other participants in the network. They’re clever.
Financial compensation has one clear challenge: it’s expensive. The cost of a dollar is 100 cents, by definition. I wonder if there is a way to find a form of compensation that people would universally value as nearly equivalent to a dollar, and yet costs only 5 cents or less.
I feel like virtual currency from gaming can play a role here. During the height of Zynga’s salad days, if you told a FarmVille addict they could unlock a new tractor by doing 50 jumping jacks, I suspect you’d see strong participation. The tractor costs Zynga nearly nothing, though 50 jumping jacks (even when scaled to millions of users) is not exactly a meaningful contributor to a public company’s P&L.
Perhaps there is an opportunity for a non-profit third party business structured similarly to the old SuperRewards or OfferPal model. It would go something like this:
- Game developer integrates FitPowerup (like the name?) into the paywall as an option for earning virtual currency.
- All the virtual currency earned by gamers would be considered a donation of property, valued at the dollar-equivalent cost of the currency.
- Gamers do exercise, monitored by either webcams or fitness tracking devices, in exchange for in-game currency. There would likely be a cap on the amount earned per month, so as to not cannibalize currency purchase too much.
It’s an interesting hypothetical balance of interests. For a profitable developer, tax deductions could impact their P&L, plus it would train their users to desire in-game currency more. The non-profit would have motivations similar to Michelle Obama’s Play60 initiative. And the gamer would get both free in-game currency and fitness benefits.
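The flow above can be sketched in a few lines. Everything here is invented for illustration: the coin rate, the monthly cap, the per-coin cost, and the FitPowerup name itself are all hypothetical.

```python
# Hypothetical sketch of the FitPowerup flow. All names, rates, and
# caps below are invented for illustration.

COINS_PER_REP = 2          # in-game coins earned per verified rep
MONTHLY_CAP = 500          # cap so earned coins don't cannibalize purchases
COST_PER_COIN_USD = 0.01   # developer's dollar-equivalent cost of a coin

class FitPowerup:
    def __init__(self):
        self.earned_this_month = {}   # gamer -> coins earned this month
        self.donation_ledger_usd = 0  # value of donated property to date

    def log_exercise(self, gamer: str, reps: int) -> int:
        """Credit coins for verified reps, respecting the monthly cap.
        Returns the number of coins actually granted."""
        already = self.earned_this_month.get(gamer, 0)
        grant = max(min(reps * COINS_PER_REP, MONTHLY_CAP - already), 0)
        self.earned_this_month[gamer] = already + grant
        # The developer books the coins' cost as a donation of property.
        self.donation_ledger_usd += grant * COST_PER_COIN_USD
        return grant

fp = FitPowerup()
print(fp.log_exercise("alice", 50))   # 50 reps -> prints 100 coins
print(fp.log_exercise("alice", 300))  # capped: prints 400, not 600
```

The cap is the key balancing lever: set it high enough that the exercise reward feels real, low enough that the developer’s paywall revenue survives.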
Just a thought experiment. I’ve been playing a bunch of Hearthstone recently, so free-to-play gaming ideas have been top of mind for me lately.
I read Michael Lewis’s new book Flash Boys over the weekend. It was a highly entertaining read, and I really recommend it as such. I’ve read three pieces by Michael Lewis over the years (The Blind Side, Flash Boys, and his profile on Shane Battier). All of these works (Flash Boys included) follow a similar formula: use a single, well-polished narrative to draw broader conclusions about a market. The rhetoric employed in all of these works is argument-by-example.
As such, I highly recommend the book as a compelling story. As for the larger conclusions about the corruption of Wall Street (or the value of a high plus/minus NBA small forward, or a nimble-yet-gigantic NFL left tackle) I think the fact that the book contains no research footnote documentation is telling. The primary focus is the story; that’s why it’s entertaining. The conclusions presented to the reader make the story feel more powerful and representative. It makes me want to research the market further; the larger value of the book beyond the story is that it generates compelling questions.
The main conclusion of the book is that our financial markets are broken, and HFT is largely used maliciously to fleece investors. I’ve already put my money where my mouth is on this subject by investing in Quantopian. A big piece of the thesis in the Quantopian investment is that HFT is bumping up against theoretical physical limits (such as the speed of light) and will provide poor growth going forward. When speed is no longer a competitive advantage, instead algorithmic diversity will be the next frontier of innovation in investing. And Quantopian is perfectly positioned to help foster this diversity of thought by making it as easy as possible for non-financial-engineers to experiment with ideas.
The TL;DR here is A) Flash Boys is a great read and B) I’m inclined to agree with its conclusions even if the book is not what convinced me.
I was talking with my friend Aaron White last night, and he was summarizing the plot of a sci-fi novel he was recommending. I won’t mention the name of the book, lest I generate spoilers by accident. A core event in the book is that a person is separated from his technology and, as a result, goes through an identity crisis due to the separation anxiety. In this sci-fi world people’s personalities are extended into the technology they carry, so to be without your technology is to be missing a piece of yourself.
I was reminded of this conversation today when I read the following quote from a teacher describing her classroom after Snapchat’s latest product release. The teacher had to take away her students’ phones, and this was her comment:
For quite awhile now, kids have had a real anxiety about being separated from their phone, but today it was near panic.
Since I started carrying a smartphone in 2006, I have felt mild anxiety being temporarily separated from my phone. The longest periods of separation are the best though… when I’ve gone hiking for multiple days and been without service the whole time. At first it’s distressing, but eventually it becomes wonderfully freeing.
This sci-fi novel plot line definitely hit home for me. I can’t wait to read the book.
Previously, John was one of the core team members at StartX, the accelerator program at Stanford, and a co-founder of the Stanford-StartX Fund, which is the investment vehicle for Stanford to invest in StartX companies.
As I got to know John over the course of a few weeks this winter, I was struck by the breadth and interdisciplinary nature of his experience. John has a strong engineering background, with experience in both Physics and Materials Science, and can go deep on technical subjects; at the same time, he has a lens for investing, honed during his experience at StartX. Across a wide range of conversation topics, it’s clear that the balance in John’s prior experience will help him greatly in working with startups in his new role in VC.
John will be working out of our Boston office, which is a bit of a homecoming given that he grew up in the Boston suburbs. John clearly has strong ties to Stanford and the Bay Area through his academic background and StartX experience, so he’ll be spending a bunch of time out west too.
[T]hose who walked instead of sitting or being pushed in a wheelchair consistently gave more creative responses on tests commonly used to measure creative thinking, such as thinking of alternate uses for common objects and coming up with original analogies to capture complex ideas.
Anecdotally, walking definitely boosts my creativity. Even better, I’m more creative when walking and talking with another person. I love walking meetings. The best part is when I’m explaining a complex subject to my co-walker and he or she misinterprets my explanation and says it back to me in a way that is wrong and novel. This misinterpretation will often lead me to new insights that I would not have discovered alone, and then the co-walker and I build on this new path of thought. Many of my favorite conversations flow in this manner.
I’d love to see follow up research on the effects of a shower on creativity. I do all my best thinking in the shower.
I find this fascinating, and I think the success of using a one-time half million dollar cash injection into the campus will hinge entirely on whether vendors around Kendall Sq accept Bitcoin.
The most interesting aspect of this experiment is that, thanks to the open Blockchain, we will be able to see everywhere the half million dollars flows. This assumes of course that students don’t use off-blockchain services like Coinbase much, which perhaps is an unrealistic assumption, but even with Coinbase, there will still be a rough idea of where the currency ends up.
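Following funds on an open ledger is, at its core, a graph traversal. Here is a minimal sketch with invented transactions, ignoring real-world complications like change outputs, address clustering, mixing, and off-chain custodians like Coinbase:

```python
# Toy sketch of following funds on an open ledger. Transactions are
# (sender, receiver, amount) triples; starting from seed addresses
# (the students' wallets), walk the graph and total up the inflows
# to every address reachable from them.

from collections import defaultdict, deque

def trace_flows(transactions, seeds):
    """Breadth-first walk from seed addresses, summing inflows to
    every address reachable from them."""
    outgoing = defaultdict(list)
    for sender, receiver, amount in transactions:
        outgoing[sender].append((receiver, amount))

    received = defaultdict(float)
    queue = deque(seeds)
    visited = set(seeds)
    while queue:
        addr = queue.popleft()
        for receiver, amount in outgoing[addr]:
            received[receiver] += amount
            if receiver not in visited:
                visited.add(receiver)
                queue.append(receiver)
    return dict(received)

# Hypothetical flows: two students spend at shops around Kendall Sq.
txs = [
    ("student1", "coffee_shop", 0.02),
    ("student2", "coffee_shop", 0.01),
    ("student2", "book_store", 0.05),
    ("coffee_shop", "supplier", 0.02),
]
print(trace_flows(txs, {"student1", "student2"}))
```

Run over the real blockchain with the students’ funded addresses as seeds, the same traversal would show which Kendall Sq merchants the half million dollars reached.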
I can’t wait to see the results of both A) academic study this experiment spurs and B) bitcoin adoption around Kendall Sq in the coming months.
When Steve Jobs chose to name the iPad as such, I’m pretty certain he was making a gesture in the direction of Mark Weiser of Xerox PARC fame. He coined the term Ubiquitous Computing, which is the notion that all people would be surrounded by hundreds of small purpose-built computers of various form factors. In this context Bill Gates’s “A PC on every desktop” was wildly under-ambitious. :)
Mark saw a world where these computers surrounding us would be relatively invisible; they would blend into our environment and naturally extend our unconscious. Little active attention should be required.
In the Ubiquitous Computing framework, computers would take three forms:
Tabs - wearable, centimeter-sized devices.
Pads - hand-held, decimeter-sized devices.
Boards - meter-sized interactive display devices.
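The taxonomy is roughly a logarithmic size scale, which makes it easy to mechanize. Here is a toy classifier; the thresholds are my own reading of “centimeter/decimeter/meter scale,” not anything from Weiser’s papers:

```python
# Toy classifier for Weiser's taxonomy. The size thresholds are my own
# reading of "centimeter / decimeter / meter scale."

def weiser_class(longest_dimension_m: float) -> str:
    """Map a device's longest dimension (in meters) to Tab/Pad/Board."""
    if longest_dimension_m < 0.1:
        return "Tab"    # centimeter scale: wearables, badges, phones
    if longest_dimension_m < 1.0:
        return "Pad"    # decimeter scale: tablets, laptops
    return "Board"      # meter scale: TVs, large interactive displays

for device, size in [("smartwatch", 0.04), ("iPad", 0.25), ("80-inch TV", 1.8)]:
    print(device, "->", weiser_class(size))
```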
This page shows the original tabs, pads, and boards prototypes from Mark’s original research. I’ve long thought the iPad is named for the “Pads” in this framework, given that Steve “lovingly borrowed” his best work from Xerox PARC.
I was reminded of the Ubiquitous Computing origins of the iPad recently when I saw a bunch of headlines over the weekend saying that, after a year of no increase in growth, the iPad is failing when compared to its smaller, overachieving sibling the iPhone.
Looking at tablets and smartphones as mobile devices in a new category that competes with PCs may be the wrong comparison - in fact, it may be better to think of tablets, laptops and desktops as one ‘big screen’ segment, all of which compete with smartphones, and for which the opportunity is just smaller than that for smartphones.
This sounds right to me, and it fits in the Ubiquitous Computing framework quite well. Tabs are wearables and phones. Pads are laptops and tablets. Boards are large format displays (TVs, 30” monitors, Out Of Home (OOH) displays). Tabs are the hottest area of growth right now, but that doesn’t mean the iPad is dying. The iPad is a completely different format, the Pad format in this framework. To figure out how the iPad is doing, you have to compare it to laptop sales, as Benedict does a nice job in his post.
In conclusion, Ubiquitous Computing is a remarkably prescient framework describing modern computing, and mapping currently available devices into that framework provides a nice landscape for where devices are competitive vs complementary.
A celebrity in real life (IRL) is one of those “I know it when I see it”-type definitions. When you see a celebrity (if they truly are a celebrity) then you recognize him or her, and can easily recall why, despite never previously meeting the celebrity. Celebrities are popular in this way; their image and reputation precede meeting them.
Online, it’s messier, in a surprisingly inconsistent way.
First, what is “Internet Famous?” Every web community has one core vanity metric that users can hang their hats on. On Tumblr, Pinterest, and Twitter, it’s followers. On YouTube, it’s subscribers. On Reddit, it’s Karma. I’d argue that the most Internet Famous users on any of these services are the ones that rank the highest on this vanity metric. The Internet Famous are the users with the broadest reach, and have the largest opt-in network of followers.
I managed to dig up the vanity metric leaderboard for most major social services online. You can click through on each of them below, but I summarize my informal findings in each subsequent paragraph.
Celebrity on Twitter is the most straightforward: it is a reasonable reflection of IRL celebrity. There isn’t anyone on the list that wouldn’t be recognizable to a large swath of the population IRL.
On the opposite end of the spectrum, you have the “celebrities” of Pinterest and Reddit, which have no correlation to celebrity IRL at all.
A skeptical critic who is external to these communities would say the Jenna Marbles of the world are not famous at all, because they lack recognizability. But their influence in these online networks is more powerful than IRL celebrities within the same community (Jenna captures more attention than Barack Obama on YouTube). I’d argue it’s no different than a Bollywood star’s influence in India compared to a K-Pop star’s influence in Korea; they’re niche-specific. It’s more targeted than broad-reaching celebrity IRL, but within the community where the celebrity emerged, they are just as powerful as any celebrity.
Why does all this matter? A common pitfall I often see early stage startups fall into is assuming that connections in Hollywood will have any impact on the usage and distribution of their community. “If only I could get Justin Bieber to start taking selfies on my service, then I’d be huge!”
Celebrity IRL does not necessarily translate online. Instead, focus on the celebrities that emerge from within your own community. Highlight and celebrate them, so aspirational users can follow the model of what community celebrities do best. And as your platform reaches massive scale, you’ll find that Hollywood will want to get involved, with or without your consent. IRL Celebrities don’t bring their audiences with them particularly well, instead the best know how to go where their audience already is and engage them there.
“While FCC licenses are typically issued for a fixed period of time, renewals of FCC licenses are routine, with no legal, regulatory, competitive, or economic reasons that would limit the useful life of the asset. As a result, for financial reporting purposes, licensees generally treat FCC licenses as indefinitely-lived intangible assets under the provisions of Financial Accounting Standards Board (“FASB”) Accounting Standards Codification (“ASC”) Topic 820, Fair Value Measurements and Disclosures (“FASB ASC 820”).”—
As an addendum to my blog post today, this is an accounting journal’s analysis on how to value spectrum per FASB standards. It’s very dry, but brew a cup of coffee first and then read through if you care about the status quo on how spectrum is licensed and valued today because it’s the best resource I’ve found on this subject.
At risk are the billions of dollars broadcasters receive from cable and satellite companies in the form of retransmission fees, the money paid to networks and local stations for the right to retransmit their programming. The networks have said this revenue is so vital that they would consider removing their signals from the airwaves if the court ruled for Aereo.
This is why the Aereo case is so phenomenally interesting to me. It has nothing to do with streaming TV over the internet for a small fee without paying licensing to the broadcasters. Frankly, the Aereo business model is rather uninteresting arbitrage. The crux of the case is this painful and justified choice the public broadcasters should be forced to make. They can either:
A) Keep their incredibly valuable, free public spectrum over which they broadcast their programming for free. In which case, Aereo is legal and free to continue to operate their business, and cable companies should stop paying retrans fees to public broadcasters immediately (which in theory, should lower the public’s cable bills).
B) The broadcasters can give up their incredibly valuable, free public spectrum over which they broadcast, and then keep their cable retrans fees. Aereo will be sunk unfortunately in this case, but the US people will get their spectrum back, which ideally would be used to the optimal public benefit.
Broadcasters don’t get to have their cake and eat it too. If the SCOTUS rules that Aereo is illegal, then the broadcasters will not be forced to make this choice. As alternative recourse, I hope the FCC steps up and forces broadcasters to either A) pay a market rate for the spectrum they use or B) give up their retrans fees. The Aereo legal loophole is not the only way to force this decision, it’s just the most immediate one.
When I was in middle school, my Social Studies teacher gave us a long homework assignment that many students disliked. We received a sheet of 8 questions, each with 5 multi-choice answers (except the last question). Each question was a mapping quiz. An example question was something like:
- Start at Rome. Travel NNE 50 miles, then travel W 200 miles, then travel N 80 miles, then travel SSW 10 miles. What city do you arrive at? A) Florence B) Athens C) Barcelona D) Monaco E) Venice
The quiz had one final question at the end after the 7 multi-choice questions. It said something like:
"Take the first letter of each city you answered in the last 7 questions and put them together to form one word. This word will be an important part of next week’s lesson."
We were given a week to complete the assignment. Many of my classmates struggled with the assignment for most of the week. We were taught how to use protractors on a world map in order to complete the assignment, but the directions often led students into fuzzy gaps between cities, and it was surprisingly difficult to figure out whether a possible answer was correct or not. It was enough of a slog that a few students simply turned in the assignment only partially finished.
At first I procrastinated. I was annoyed by what seemed like a tedious exercise that wasn’t going to teach me something new. The exercise did foster mapping practice, but I must have lacked the appetite for that skill.
Eventually, I thought to just read ahead in the textbook and look for any bolded 7-letter proper nouns. Ten pages after our last reading assignment ended, I found the name Ptolemy and saw that each of its 7 letters mapped to the first letter of a city in the multi-choice options for each of the questions. I wasn’t certain I was right, but it seemed to fit well enough. I circled the 7 bubbles, wrote in Ptolemy, and turned in the worksheet after 10 minutes of work.
My answer turned out to be right and I received 100%, but I remember feeling guilty about what I had done. It felt like cheating because I wasn’t completing the assignment with the spirit in which it had been designed. If the last question hadn’t been a part of the assignment, I would have slogged my way through the work like everyone else, as I should have.
This happened to me a number of times during my education:
It happened in a physics class when we had a slog of a mechanics problem that was painful using algebra, but was incredibly simple using calculus. My answer was right so the teacher gave me credit, but then told me not to do that again.
It happened in an algebra class where a logic problem could be boiled down to 4 interdependent equations that could be solved very quickly in a matrix, but that was not the desired approach for a solution (we were supposed to solve each equation for a different variable and swap each of them in to reduce down to an answer).
It happened again in an introductory Computer Science class in college when I saw a way to use recursion to cut through a logic problem that was designed to be very difficult when addressed with an imperative methodology. I was later chastised by the TA for taking the wrong approach.
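To make the algebra-class example concrete, here is a hypothetical system of four interdependent linear equations (invented for illustration, not the actual class problem) solved all at once with Gaussian elimination rather than by substituting variables one at a time:

```python
# A hypothetical system of four interdependent equations:
#    x + y +  z +  w = 10
#   2x - y           =  1
#        y + 3z -  w =  8
#    x      -  z + 2w =  3

def solve(matrix, rhs):
    """Solve A x = b via Gaussian elimination with partial pivoting."""
    n = len(matrix)
    aug = [row[:] + [b] for row, b in zip(matrix, rhs)]  # augmented [A | b]
    for col in range(n):
        # Swap in the row with the largest entry in this column (stability).
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Eliminate this column from every row below.
        for r in range(col + 1, n):
            factor = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= factor * aug[col][c]
    # Back-substitute from the bottom row up.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        tail = sum(aug[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (aug[r][n] - tail) / aug[r][r]
    return x

A = [[1, 1, 1, 1],
     [2, -1, 0, 0],
     [0, 1, 3, -1],
     [1, 0, -1, 2]]
b = [10, 1, 8, 3]
print([round(v, 6) for v in solve(A, b)])  # solution [x, y, z, w]
```

For a 4×4 system this is seconds of work, versus the variable-by-variable substitution slog the assignment intended.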
I really hope this post doesn’t ring as me being ungrateful for my education. I had AMAZING teachers and only the top-quality education my whole life. I’m deeply indebted to my parents for providing me this great opportunity.
I also hope this post doesn’t ring as, “Oh, look at me! Aren’t I so smart?” I got my ass handed to me in many classes that battered my intellectual ego plenty of times over. In fact, much of the “boasting” in this post is about logic problems, yet I got a C- in my First Order Logic class. Had it not been for a curve, I would have failed. Go figure.
My point here is simple: looking at a problem from a sideways perspective shouldn’t be discouraged. It’s one of the most valuable skills I’ve learned in my life, and when I stumbled into these moments in real life (not school), it genuinely feels like having superpowers. This should be fostered. In fact, whole classes should be dedicated to trying to teach this skill. I’m rarely so lucky to achieve this state, but it’s wonderful when it happens.
This is the most interesting thing I have read in months. Farmers are starting an open source seed movement in response to the Monsantos of the world turning plant line perpetuation into private property.
The open question in my mind is if phenotypes will be patentable or not. If so, the open source seed movement might hit headwinds against properties like Roundup resistance.
1) Most of the anecdotal stories of trading failure in this book stem from people that thought they were making very safe, relatively small, repeatable profits… and doing so millions of times, unaware of the actual small probability of a catastrophic downside event. This was interesting food for thought in the context of my job. As a VC, the worst I can do in an investment is lose 100% of my investment, and unfortunately (with much emotional pain) this happens with reasonable frequency. In exchange for this risk, I am (hopefully) making investments with uncapped upside. In both upside and downside scenarios, my business of investing is contrary to many of Nassim’s fools.
However, I could still easily be fooled by randomness because I am only describing the possible end states of a given investment (1x loss, unlimited upside) without a probability distribution to map against them. The distribution of these outcomes means *everything* to returns.
2) Nassim is a trader himself, analyzing his trading peers in a world of traders. He doesn’t believe in the value of technical innovation; he said something like (paraphrasing): “for every innovation like the Automobile or Internet, there are thousands of failed technologies that waste our time.” In trading, Nassim is focused on reliably making money over the long run, without embracing underlying innovation or growth in production.
By contrast VC investing is different. VC is a much longer time horizon than most trading, and will only be successful if there is material growth in innovation and productivity in the startups being funded.
I’d love to see Nassim take his (highly skeptical) probabilistic lens and apply it to the world of investing as opposed to trading… perhaps he has already done that in a subsequent book I have not read.
3) Every time you hear mention of an average or expected outcome, it should trigger your Spidey senses that there is an implied probability distribution around this average, and the shape of that distribution is far more informative than the average itself. Oftentimes, the shape of this distribution will be Normal (aka Gaussian)… but when it isn’t, your assumption can bite back.
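A quick simulation makes this point concrete. The two hypothetical strategies below have the same expected value per period (zero), but radically different distribution shapes: one earns a dollar almost every time and occasionally blows up, the other is a fair coin flip. The averages look nearly identical; the worst-case outcomes do not.

```python
import random

# Two hypothetical payoff streams with the SAME expected value (zero)
# but very different shapes. All numbers here are invented.
random.seed(42)

def steady_earner():
    # Wins $1 in 999 of 1000 periods, loses $999 in the other:
    # expected value per period is exactly $0.
    return 1.0 if random.random() < 0.999 else -999.0

def coin_flipper():
    # Fair coin, +/- $1: expected value is also $0.
    return 1.0 if random.random() < 0.5 else -1.0

n = 100_000
steady = [steady_earner() for _ in range(n)]
flips = [coin_flipper() for _ in range(n)]

# The sample averages are both near zero and nearly indistinguishable...
print("avg steady:", sum(steady) / n, " avg flips:", sum(flips) / n)
# ...but the distribution shapes are wildly different.
print("worst steady:", min(steady), " worst flips:", min(flips))
```

The steady earner is Nassim’s fool: judged by the average (or by most of the track record), it looks safe, while the catastrophic tail sits waiting.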
4) Nassim regularly gets up in front of his boutique investment firm and states quite simply (paraphrased): “We are idiots and know nothing. But we are blessed with the self-awareness of our limited knowledge, which makes us better than most other investment shops out there.”
I love this approach of perpetual humility as a “first principle” foundation to intellectual curiosity. I strive to be this humble when speaking of my own positions and ideas (and would not be so brash as to assume I hit my goals of humility all the time… I’m sure overconfidence slips past me on occasion).
5) Lastly, I took the whole book with a grain of salt because it must be exhausting to be a perpetual skeptic. Here’s Nassim on his own weakness in the face of an emotional response to randomness: “My humanity will try to foil me; I have to stay on my guard. I was born to be fooled by randomness.”