
TwoSense Featured on CB Insights

CB Insights put out a blog post recently with a market map of companies using artificial intelligence (AI) to change the future of cybersecurity.  One of the 9 categories is "Behavioral Analytics / Anomaly Detection," where you can find TwoSense alongside BehavioSec and several other companies in the space.  We're very excited to see the uptick in buzz around the behavioral biometrics market.  Have a look at the IBIA white paper on the subject for more.

Market Map of AI in Cybersecurity by CB Insights

Biometric Theft is a Big Deal and Behavioral Biometrics Can Help

TL;DR: Biometric theft is permanent, essentially rendering that biometric (e.g. a thumbprint) useless for the lifetime of that user.  Behavioral biometrics use ephemeral data, meaning theft would only be a temporary setback.

Biometric authentication provides an attractive way of authenticating users into high-risk infrastructure.  Think about the Touch ID on your phone, or face- and eye-scanning technology.  As opposed to usernames, passwords and security questions, the patterns of your thumbprint are so complex that they are almost impossible to guess, they can't be stolen through fake websites, and you never have to remember them. Your thumbprint is unique to you, remains the same over your lifetime, and can't be stolen on the web.  Or can it?


What happens when a fingerprint is stolen? Can we still count on it to uniquely identify ourselves?

In the past, hardware flaws in some phones were exploited to allow attackers to steal fingerprint images directly from the scanner on the device.  Even scarier, hackers in Germany stole the German Defense Minister's fingerprints using only hi-res photos taken at a press conference. Other forms of biometrics are even worse.  Due to the prevalence of social media, pictures and videos of us abound on the internet, allowing attackers to easily spoof face and voice biometrics.  So what happens when a biometric is stolen?  Since a thumbprint is permanent, a stolen thumbprint is essentially rendered useless for authentication purposes: you can no longer use your thumbprint to prove you are who you say you are, ever.  It's not like a credit card number that can be replaced.

Behavioral biometrics is a new form of biometric that lets you verify your identity by the way you behave, as opposed to some aspect of your physical body.  The behavioral cues range from active ones, like a swipe gesture you remember or a routine you perform, to passive aspects of your behavior such as your gait, typing speed, the order of the buttons you usually tap as you interact with an app, the way you travel around, where you spend your time, etc.  One of the biggest challenges in behavioral biometrics is what we call "behavioral drift," where the user's behavior changes over time.  For example, a ski injury makes you walk differently, you change neighborhoods for a new job, or an app update means you interact differently with your phone.  Behavioral drift means that the biometric must be continually updated to account for behavioral changes, potentially limiting accuracy if it is not handled correctly.  Recent advances in deep learning make it possible to build behavioral biometric models that accommodate behavioral drift while maintaining accuracy, but that's a different topic.  However, the drift also has the distinct advantage of making the biometric ephemeral in nature: if it should ever be stolen, the threat to you, the user, is only temporary.
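To make the drift-handling idea concrete, here is a minimal sketch, not our production model, of a classifier that is periodically updated with freshly verified sessions so it tracks the user's changing behavior rather than a stale template. The features, drift values and session counts are all made up for illustration.

```python
# Illustrative sketch: keep a behavioral biometric classifier current as
# the user's behavior drifts, by folding in newly verified sessions.
# Feature names and numbers are assumptions, not real TwoSense features.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def session_features(drift=0.0, user=True, n=20):
    """Toy per-session features, e.g. [typing speed, swipe length, dwell time].
    `drift` shifts the genuine user's behavior over time (ski injury, app update)."""
    center = np.array([1.0 + drift, 0.5 + drift, 2.0]) if user else np.array([2.0, 1.5, 0.5])
    return center + 0.1 * rng.standard_normal((n, 3))

# Initial enrollment: genuine-user sessions vs. a background "impostor" population.
model = SGDClassifier()
X0 = np.vstack([session_features(user=True), session_features(user=False)])
y0 = np.array([1] * 20 + [0] * 20)
model.partial_fit(X0, y0, classes=[0, 1])

# As behavior drifts, keep updating with newly verified sessions so the
# model follows the user instead of locking onto a stale template.
for week, drift in enumerate([0.1, 0.2, 0.3], start=1):
    X_new = session_features(drift=drift, user=True)
    model.partial_fit(X_new, np.ones(len(X_new), dtype=int))
    margin = model.decision_function(session_features(drift=drift, n=5)).mean()
    print(f"week {week}: mean genuine-user score = {margin:.2f}")
```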

While behavioral biometrics as a tool is still in its infancy, the ephemeral nature of behavior itself presents huge potential for low-risk, high-accuracy user authentication.  To be clear, there has never been a known instance of theft of a behavioral biometric.

 

Deal companies are broken because they forgot about their users.

Groupon's stock price is down to 13.5% of its IPO price, LivingSocial did a down round and laid off over 80% of its employees, and Amazon Local shut down its service.  The deal space seems to be dying.  But why?  It's because they forgot about what most people want.  Let me explain.

“[The] daily deals business model was broken and [we] predicted that it would be unsustainable”

This is how a deal works: a vendor drops their price, usually by around 50%.  For every deal used, half of the remaining 50% goes to the deal site, and only 25% stays with the vendor.  The downside for the vendor is obviously the huge cut in margins, or even taking a loss, and vendors often struggle to get deal users to return.  Some argue that this cost is too great to retain vendors as customers, but the vendors I spoke to in NYC say they prefer deal sites over other advertising methods because of the guarantee that spending will convert to foot traffic. Deal sites do make money, they're just not GROWING, which explains the value issues. But why aren't they growing?
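To make the split concrete, here is the arithmetic on a hypothetical $40 menu item, using the typical percentages described above rather than any particular site's actual terms:

```python
# Worked example of the typical deal split (numbers are illustrative).
list_price = 40.00
deal_price = list_price * 0.50        # vendor drops the price ~50%
deal_site_cut = deal_price * 0.50     # deal site keeps half of what's left
vendor_keeps = deal_price - deal_site_cut

print(f"Customer pays:   ${deal_price:.2f}")
print(f"Deal site keeps: ${deal_site_cut:.2f}")
print(f"Vendor keeps:    ${vendor_keeps:.2f} ({vendor_keeps / list_price:.0%} of list price)")
```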

You are bombarded with content, and for every deal you like, you will see 100 that you don’t.

The user-facing side of the deal business is focused on getting consumers to try new products, restaurants and experiences that they normally wouldn't, in return for a massive price cut. Users are presented with available deals and can pick and choose what they want to try. Here is where the problems start.  The volume of deals is MASSIVE!  Groupon and LivingSocial use every available channel to reach their users.  They have a website, an app, daily emails, push notifications, everything!  As a user you are bombarded with content, and for every deal you like, you will see 100 that don't interest you. Many deals are also far away, or only valid at certain times, and you have to "purchase" deals ahead of time. Travel and pre-purchase mean that you have to plan pretty far in advance to use a deal.

Deal sites are offering most people a value proposition that does not resonate.

From interviewing users for a product we're working on, I think I know why this is.  There is a small core group of users that enjoys hunting through reams of deals. For most people, however, this just isn't attractive. It requires planning, which inevitably causes millennial FOMO, and behavioral changes such as adopting search patterns and leaving the comfort zone of known neighborhoods.  Deal sites are offering most people a value proposition that does not resonate.  Of course Groupon's growth didn't continue its hockey-stick.  So why do deal sites still stick to the practice of showing users everything?  Why not make a solution that is more attractive to a larger audience?

The answer comes in two parts.  Part one is the business model.  Clients are paying to get users to try new things.  Their existing users enjoy hunting for deals on new things to do and try, and would be alienated by any targeting. In that sense, maintaining the status quo protects an existing business model.  Part two is the data.  In order to accurately predict whether a user will like a deal or not, you need a lot of data.  And not just a purchase history of deals, but information about the users themselves.  You need to know the user's behavior, such as where they go and what they do there, and you need to know the personal attributes and preferences of the individual, such as their income bracket or what their interests are. This is all information that people may, or may not, be willing to give to a deal site in return for better deal recommendations. Even if they are willing, they don't have access to that data themselves and have no way to bring it to the table for deal targeting.

We're trying to solve these problems in a unique way.  Tuba is a mobile app that works with deal sites to retarget deals based on the personal characteristics, preferences and location of the user.  It is currently in beta, focused on food and drink deals for Android in the Google Play Store (iOS coming soon). The idea is to give the deal site an avenue to reach a wider audience which doesn't want to commit to planning, searching, or big behavior changes for deals. The focus is on building an app that learns to understand users and gives them a way to use deals spontaneously, presenting them with deals they like, that are close enough to use, right when they are deciding where to go. The app uses machine learning to quantify the user's preferences and attributes, and it doesn't take data in return for deals, but rather gives the user ownership of their own data profile.  With no customer acquisition of its own, Tuba does not compete with deal sites, but rather provides a wider audience with a better way to interact with the deal industry. Tuba wants to bring the beleaguered deal industry's focus back to where it should have been all along: on what people want!
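The actual models are more involved, but the basic idea of ranking deals by preference match and proximity at decision time looks roughly like the sketch below. The tags, weights and distance cutoff are illustrative assumptions, not Tuba's real parameters.

```python
# Illustrative sketch only: rank nearby deals by how well they match the
# user's learned preferences and how close they are right now.
from math import exp

def score_deal(deal, user_prefs, distance_km, max_km=2.0):
    # Preference match: overlap between deal tags and learned preferences (0..1).
    match = sum(user_prefs.get(tag, 0.0) for tag in deal["tags"]) / max(len(deal["tags"]), 1)
    # Proximity decays with distance; deals far beyond max_km score near zero.
    proximity = exp(-distance_km / max_km)
    return match * proximity

user_prefs = {"tacos": 0.9, "craft beer": 0.7, "sushi": 0.2}   # learned per user
deals = [
    {"name": "2-for-1 tacos", "tags": ["tacos"], "km": 0.4},
    {"name": "Half-off omakase", "tags": ["sushi"], "km": 3.1},
]
for d in sorted(deals, key=lambda d: -score_deal(d, user_prefs, d["km"])):
    print(d["name"], round(score_deal(d, user_prefs, d["km"]), 2))
```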

WhatsApp Reneges On Their Promise Of True Message Encryption

WhatsApp’s security was recently hacked by white-hat researchers.  After much click-baiting, it turns out they’re not actually collecting any information they shouldn’t be.  They are, however, protecting it poorly, and they still have access to message content with the ability to share it with Facebook.

Security researchers at Brno University of Technology in the Czech Republic (fun fact: Brno is where Mendel discovered modern genetics) were able to reverse-engineer WhatsApp's security mechanisms and published their findings in an academic journal. Instantly there was a frenzy of click-bait articles about how WhatsApp was stealing data from users.  Reading the study itself shows that while they are indeed collecting data, that data is reasonable given the service they are providing.  For example, if you start a call with a friend, your WhatsApp client sends your phone number and that of your friend to the server.  In WhatsApp your number is your username, which is needed for the system to know who to connect you with.

A while back we wrote a post about how WhatsApp announced it would be releasing end-to-end encryption for its mobile service.  They had also announced that they themselves would lose access to user messages, with only the sender and recipient being able to decrypt communication.  This confused me because it came just after their $19Bn acquisition by Facebook, presumably for the content of the user communication coursing through their network.  Why on earth were they worth $19Bn to Facebook if the user generated content within WhatsApp was about to disappear within an encrypted channel?  What the Brno hack revealed is that their implementation fell far short of their claims, and Facebook’s investment in the content of WhatsApp’s users’ communication was safe.

In interviews with journalists, WhatsApp stated that they would use public-key encryption, where only the sender and recipient can decrypt content.  Indeed they did, but they used the same key for every user.  This is what makes the Brno hack possible, meaning anyone on the same network as your phone could gain access to the content of your messages.  It also means that WhatsApp themselves still have access to all message content.  Moreover, their parent corporation Facebook has access as well, along with the ability to target you with advertising based on the content of your WhatsApp messaging.  While this is surprising given WhatsApp's previous PR, it does explain the mysterious $19Bn price tag that Facebook was willing to put on WhatsApp.  In my opinion, fully encrypting all WhatsApp content would make WhatsApp a near-worthless asset to Facebook, especially considering the repeal of the $0.99-a-year subscription model. We should not expect it any time soon, no matter how many posts like this one appear.
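For contrast, here is a rough sketch of what "only the sender and recipient can decrypt" looks like when each user has their own keypair. It uses PyNaCl for brevity and is not WhatsApp's actual protocol, which is considerably more elaborate; the point is simply that with per-user keys, the server relaying the ciphertext cannot read it.

```python
# Sketch of end-to-end encryption with per-user keypairs (PyNaCl).
# A single shared key for every user would defeat this: anyone holding
# it, including the relay server, could decrypt every message.
from nacl.public import PrivateKey, Box

alice_sk = PrivateKey.generate()          # stays on Alice's phone
bob_sk = PrivateKey.generate()            # stays on Bob's phone
alice_pk, bob_pk = alice_sk.public_key, bob_sk.public_key   # safe to publish

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_sk, bob_pk).encrypt(b"meet at 6?")

# Only Bob (holding bob_sk) can decrypt; the relay only ever sees ciphertext.
plaintext = Box(bob_sk, alice_pk).decrypt(ciphertext)
print(plaintext)  # b'meet at 6?'
```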

BofA Shuts Down Mint, Staking Their Claim to Your Data

Bank of America shut down 3rd party access to consumer transaction data through their website.  While it tightens security, it also hammers home the fact that user data does not belong to the user.

Bank of America recently shut down Intuit's access to user data through BofA's online banking system.  Intuit owns Mint, a service which allows users to aggregate their financial information from bank accounts and credit card sites to have an overview of their finances and spending in one place.  Users handed over their BofA usernames and passwords, which allowed Mint and other aggregators to log into the online banking and credit card systems on their behalf and collect their transaction and balance data.

BofA argued that they shut down 3rd-party aggregator access because it weakened security by giving the aggregator access to the user's password.  In fact, many banks changed their terms of service to state that using Mint or another aggregator voided their identity-theft coverage.  While this sounds logical, shutting down access even with user consent drives home the point that users do not own, or even have access rights to, their own financial transaction history. Some surmise that the real reason is that Mint provides users with deeper insight into the fee structures of their accounts, information that banks would prefer stay less explicit.

Almost every web service we use has a terms-and-conditions document that grants that service access to the data it generates.  Most of those also grant the service ownership of, or an "irrevocable lifetime license" to, that data.  That's fine as long as everything works as expected, because there is no perceptible difference between us owning the data and the 3rd-party service providers owning it.  The issue only comes to a head when users want their data back, and that request is denied, or access is granted but made difficult.  What remains to be seen is how hard users will push back against institutional data silos to maintain access.

Aggregators such as Mint give users increased incentive to ask for access to their own data. Should these requests be denied, the issue of ownership of personal data may quickly come to a head. The technology exists to give safe, read-only access to aggregators in the same way that Google and Facebook can give read-only access to your friends list in an app (OAuth), yet BofA chooses not to.  Perhaps a little consumer outrage fueled by Mint's PR machine will make a change.
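As a sketch of what consented, read-only access could look like with plain OAuth 2.0: the bank endpoints, scope name and client credentials below are hypothetical, since no such public BofA API existed at the time, but the flow is the same one Google and Facebook already use for apps.

```python
# Hypothetical OAuth 2.0 authorization-code flow for read-only access to
# transactions. URLs, scope names and credentials are made up for
# illustration; the aggregator never sees the user's banking password.
from requests_oauthlib import OAuth2Session

CLIENT_ID = "mint-demo-client"            # assumption: issued by the bank
CLIENT_SECRET = "..."                     # assumption
AUTH_URL = "https://bank.example.com/oauth/authorize"    # hypothetical
TOKEN_URL = "https://bank.example.com/oauth/token"       # hypothetical

bank = OAuth2Session(
    CLIENT_ID,
    redirect_uri="https://aggregator.example.com/callback",
    scope=["transactions:read"],          # read-only: no transfers, no password sharing
)

# 1. Send the user to the bank to log in and approve read-only access.
authorization_url, state = bank.authorization_url(AUTH_URL)
print("Have the user visit:", authorization_url)

# 2. The bank redirects back with a code; exchange it for a limited token.
redirect_response = input("Paste the full callback URL: ")
bank.fetch_token(TOKEN_URL, client_secret=CLIENT_SECRET,
                 authorization_response=redirect_response)

# 3. The aggregator reads transactions with the scoped token only.
transactions = bank.get("https://bank.example.com/api/v1/transactions").json()
```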

-dawud
@d4wud

SDKs Bring Easy Utility to Apps at the Cost of Privacy and Trust

A recent iOS scandal demonstrated how invasive a malicious SDK can be, and how much damage it can do to the privacy of the user.  This can happen without the user, or even the app developer, knowing or agreeing to it.  We don't use SDKs, and here's why.

When you build an app, 3rd party SDKs are incredibly attractive.  Import a Google library into your project, add a few lines of code, and things just start working. It’s that easy, and the utility is HUGE for the developer.  Install Twitter’s Fabric.io and you get an email every time your app crashes on someone’s device with all the details you need to fix it.  Throw in Yahoo’s Flurry and see how people use the app in real time, which screens they like, where interest drops off, how and when they use the app, etc.  If you’re marketing your app, use Facebook’s developer SDK to be able to track ad clicks all the way through the app store to app install, and even pay only when the app is installed.  All of these SDKs are bundled with SaaS platforms that store all the data, do all the processing, and visualize the data to make it instantly actionable.

But notice a pattern?  Look at who is buying up app analytics startups.  It's all of the huge names in tech, but not the SaaS providers like SAP, IBM or Salesforce.  It's data companies whose value lies in the insight their content provides them.  These ad/analytics/tracking SDKs give them eyeballs into the pockets of the end users: your users, or you.  Those users don't know what you've just put on their phone, and how would they, considering it is probably only stated in the terms of use (hopefully)?  When the developer writes 3 lines of code, the SDK gets all the permissions of the app itself.  And the data it collects sits on the 3rd party's servers, not yours.  For all intents and purposes, it now belongs to them.


Get Facebook’s SDK up and running with 3 lines of code.

Recently, Apple blocked over 150 apps from the App Store at once.  They all had an SDK in common, from a Chinese ad network named Youmi.  That SDK was accessing sensitive information such as user emails and device IDs and reporting it back to Youmi. Apple usually has stringent app checks to catch this type of behavior; however, it appears Youmi was able to fool the examiners.  I think that, similar to the way VW diesels were able to sense that they were in a test environment and reduce emissions, Youmi was able to sense the Apple testing environment and shut down the malicious activities.  However, that's just conjecture.  What is not conjecture is that Youmi stole extremely sensitive user data without users, or even the developers, knowing it was happening.  In general, as a user, the only way to find out which SDKs you've "opted in" to is to read the privacy agreement, terms of use, EULA, etc. of every app you have installed.

While Youmi was obviously not a reputable partner, their actions are bringing the behavior of other, more reputable SDKs into the spotlight.  Since it is now clear that we don't know exactly what they are doing, it is also clear that we shouldn't necessarily trust them.  Avoiding them makes things extremely difficult for developers, and the other options are certainly not as refined. ACRA, for example, allows you to catch crashes and run analytics using your own servers, but can take a good bit of tooling to get running.  We searched for paid SaaS solutions that would allow us the agility and insight of Google, Twitter or Yahoo while keeping the data within our own silo, but came up empty-handed.  If you build a privacy-aware SaaS app analytics platform, we'll be your first customers.  Call us!  Perhaps the Youmi scandal means we won't be the only ones, but it will be the users of those 150 kicked apps that decide what the consequences are.

-dawud
@d4wud

Navigating the Entrepreneurial Legal Landscape

Getting your startup formation and legal structure done right is so important. Failure here will be expensive, make your slim chances at success even worse, and can even be fatal to an otherwise great business.  This post is about what we've learned so far at TwoSense.

The TL;DR advice is to ask startups slightly ahead of you what they did and who they recommend, and to bill by the hour if it's not straightforward work.

When you launch a startup, there are so many decisions that become immediately pressing.  If you're technical or non-legal like we are, these are all decisions that you have probably never considered before, but you have to make them immediately nonetheless. Which form should you choose to create the entity?  There are so many options: DBA, LLC, C Corp, S Corp, B Corp, etc.  Should you form locally or in Delaware (NY offers you the first 10 years tax-free, for example)?   How much equity do you assign to the founders?  How do you structure that equity? How do you deal with IP? There are some great resources out there from experienced entrepreneurs (e.g. Sam Altman's startup class, I highly recommend all of it), but any variation from the standard in your configuration (e.g. a partner who is a foreign national) opens up questions that are difficult to answer definitively without a legal background.

Legal representation can be really, really expensive, sometimes up to $1,200 an hour.  That's a big investment to make before you've tested product-market fit... not exactly in line with the "be lean and fail fast" mantra.  Services like LegalZoom can do all the forms and filing for you, but at the end of the day you're signing documents whose intricacies you don't completely understand.  You also don't understand the repercussions of those details, some of which can be fatal for the future of the startup.  At TwoSense, we decided to go with LegalZoom at the beginning, while doing enough research to ensure that we weren't committing any of the fatal errors, and hoping to be able to iron out the issues later when we were better funded.  We formed a DE LLC that could easily be converted to a C Corp pre-investment.  This has worked out for us so far, but in retrospect it could have gone poorly.

A new option in the legal landscape is à la carte legal service platforms like LegalHero.  You list the legal issue at hand, and lawyers and firms bid in a reverse auction to be the ones to resolve it.   The advantage is that you get to see the span of pricing, and you get a fixed price for the work instead of hourly billing.  Our advice is to remove the min and max outliers (highest and lowest bids) from the selection, but that's entirely up to you.  Friends of ours work with someone they found through an à la carte service and are very happy with the result. However, at TwoSense we found that for complex issues, the legal team on the other end is not motivated to explain things to you, because they are not billing hourly.  If you're founding a startup, you're probably a control freak like me, and signing something that you don't completely understand is out of the question. We worked with a great lawyer, and I think the issue was not with the individual, or the platform per se, but rather with the billing method given the complexity of the issue.  We were essentially trying to get something for nothing.

One alternative is to go with the larger firms that are institutional startup wheelhouses, such as Gunderson, WSGR, Cooley, etc.  For most of us, paying $1,200 an hour isn't an option. Luckily, many of them have a VC-like model of offering reduced rates or payment-deferral plans. They take on some of the risk with the startup: they risk making a loss, or not being able to collect deferred bills should the startup fail, in the hope that some of their crop will make it to a funding round. The lifetime revenue of those few will pay for the losses of the others and then some.  These are great options, but be wary of deferral periods that come due close to when you plan to raise, as any delay means you may have to pay a huge bill out of your own pocket.  Also watch out for deals that involve the legal firm taking equity.  Equity is your most precious resource.  As our current legal team puts it, they don't take equity because "we don't want equity in companies that are willing to give it up, and the ones that won't give us equity are the ones we would want a piece of."  In my experience, a warm intro goes a long way to sweeten the deal, and the better the intro, the sweeter the deal. Intros from existing clients seem to get the best results.  Some of the best deals also come with a vetting process beyond the intro.  One such program which I personally really like is the QuickLaunch program from WilmerHale.  It combines a prix fixe for formation with an hourly rate, a rebate, and a cost-deferral plan after that. I know I said to watch out for flat rates, but while formation is complex for you, they've seen it all before, and if you back out after formation they make a loss, so the motivation to put man-hours into keeping you happy is there.

Don’t underestimate legal costs. 15% of our budget is set aside for lawyers.

Another issue: once you have an intro and get an offer, how do you vet the team in terms of their legal skills?  Their websites are no use, they all say the same thing and claim to be the best in every area, and there is no "Yelp of legal services" for startups (good pain point, maybe there's room for disruption here?).  You're also probably not an expert and not really equipped to judge experts.  What we did was take the most complex issue we had that needed solving, present it to each team, and play dumb.  While you may have no idea what the best solution is, seeing what most firms' proposals have in common tells you a lot about what needs to be done, and the depth of their proposal and recommendation hints at their experience.  You can also bounce recommendations from one firm off of another and gauge the reaction.  Also, get letters of engagement from several firms and tell each of them what the others offered.  Using these offers as social proof can get you movement, and creating a bidding war is always in your favor. Keep in mind that if you're successful, the price you pay now will probably not make a difference, but having a legal team that knows the ropes might.

The biggest factor in our decision was talking to other entrepreneurs and learning from their mistakes.  In the end we went with Dentons through a referral from an advisor and angel who is also a seasoned entrepreneur.  He had been through the works before finding someone he liked, and we followed his advice and are happy with the result.  It isn't necessarily the legal firm that we're happy with, but the partner we interact with that was important to us.  And we found him through another entrepreneur.  As a disclaimer, I'd like to point out that this is a report of our experience, and we are not out of the woods yet.  It has been a tough journey to get to where we are, though, and I wanted to share what we've learned from our mistakes so far.  So, if someone with more experience than us is giving you contrary advice, keep that in mind and go with your gut.

-dg
@d4wud

Tweether is Twitter on Ethereum to break access restrictions

TL;DR: a quick blockchain hack, Tweether, could let anyone, anywhere, post to Twitter without governments limiting access.

This past weekend, ConsenSys held a hackathon with BlockApps for people to create distributed apps (dapps) running on the recently released "Frontier" version of Ethereum.  I will try to explain all these organizations for those who are confused by all the names in the mix.  Ethereum is a venture-backed non-profit that created a platform which uses the blockchain and distributed-consensus mechanisms pioneered by Bitcoin to create a cloud computing environment that can't be hacked, manipulated or taken down.  ConsenSys is a for-profit LLC that looks to invest in and support startups that are built on Ethereum, with the goal of creating a thriving ecosystem.  BlockApps is one such startup that offers dev support to other startups trying to get off the ground with Ethereum.  Joe Lubin is the linchpin of it all.  He is a co-founder of Ethereum and ConsenSys, and his son Kieran runs BlockApps.

The hackathon produced some interesting dapps that highlighted the power of Ethereum.  One team created tools that allow individuals to create their own legally binding documents (crypto-law), create equity disbursement mechanisms for multi-owner entities that can't be cheated, or build distributed registries for things like purebred horses or dogs.  The one dapp that stood out was Tweether, created by Stefan George, a Berlin-based ConsenSys employee. His dapp Tweether is Twitter, but based on Ethereum. Anyone can tweeth, from anywhere.  All one needs is the address of any Ethereum node, and there is no business behind it to intimidate, only permanent code running across the cloud, making a government blockade infeasible. Obviously, what Tweether lacks that Twitter has is a huge user base that is reading content and can potentially make things viral.  My suggestion was to connect Tweether to Twitter and repost content with attribution, which is technically simple, as long as Twitter is on board. And if the press is good, why wouldn't they be?

He built the entire thing in 48 hours.

Ethereum runs the risk of gaining a bad rap.  If the first dapps that are released use Ethereum's resilience to government intervention to do things in a legal gray area, say the next Silk Road, or prediction markets like Augur, that shady reputation could leak over to Ethereum itself.  Tweether is an example of something that can be built quickly and uses those same advantages for social good (although some officials in China might quibble about what "good" is).  Tweether could give everyone a voice, representing real personal data empowerment, and at TwoSense we hope it is weaponized and released quickly to demonstrate the awesome positive potential of Ethereum.  Getting Ethereum branded as "the good guys" paves the way for startups like us to use their platform as an engine for positive social change.

-dg @d4wud

Stop hatin’ on Google and Facebook.

When people talk about personal data privacy, there are two names we all love to be hatin' on that always pop up: Google and Facebook. At TwoSense we are all about personal data empowerment and giving you control over your own data, so you'd think we'd hate them too. But we don't; in fact, we use them and like them to some extent. It's companies like Zeotap and their cell-carrier clients that we should really be hot and bothered about. Here's why.

TL;DR: Google and Facebook give you a fantastic utility for your data and don't sell it. Data brokers give you nothing and make a killing.  Worse still are companies that you pay for a service and which sell off your data anyway.

The internet ecosystem is full of the good, the bad and the really, really ugly when it comes to personal data and privacy.  Some of "the good" candidates we've talked about are companies like Personal, who give you utility from your own data while working really hard to protect your privacy.  We've done our own fair share of Google and Facebook bashing, but in the end we posted those rants on Google and Facebook.  The truth is, they give us a service that we love, and we pay for it by having our attention monetized through targeted ads.  That's primarily what our data is being used for. It's not being sold or leased (we hope). In fact, our data is their secret sauce that they guard jealously. That gets them a score somewhere between good and bad.  But put into the context of what else is happening out there, their rating is far closer to good than bad.

Good Guy Google

The bad are the data brokers and warehousers who collect everything they can without giving you any utility whatsoever.  It's brokers like Acxiom, Epsilon and Experian, to name a few, that really grind our gears.  This is a $200Bn industry that churns away in the background, tracking everyone and everything and selling that info off for whatever they can get for it.  They offer no service to you directly, and you really have no benefit from their existence at all.  They are "the bad" in the personal data ecosystem, but they aren't "the ugly."


The really, really ugly are the businesses where you pay for a service and get monetized anyway. "The ugly" just hit the news as a new startup, Zeotap, announced that it raised $4.6M to help cell carriers monetize all the data they have about their users and generate "much needed" revenue. Cell carriers like AT&T offer the consumer-facing service you pay for to get your iPhone online, and we pay enough to generate a projected $1.5Bn for the top 4 US carriers in 2015 alone, so "much needed" is apparently meant relatively. And Zeotap is not the only one: Verizon failed spectacularly to launch their own version with a broken opt-out.

“That’s like buying a house and then having the previous owner
continue to AirBnB out a bedroom.”

Both companies claim to protect the privacy of the end user, but a) data is intrinsically identifying, and b) why should any business be further monetizing me for a service I pay for? That's like buying a house and then having the previous owner continue to AirBnB out a bedroom. It's your house, so why shouldn't you get the payment for that service?  So why would it be OK when it comes to your data?  The infrastructure to enable you to offer those services yourself doesn't yet exist, but that's only an engineering problem.  Even if it did exist, the value the cell carriers have comes from having data on millions of users, so a lot of people would need to opt in with you to make it happen.

TwoSense wants to bring you, the end user, and others like you together to make you the money that third parties are earning with your data. Join us and help us to empower you.

-dg @D4wud

TwoSense’s Recipe for Equity Distribution

My equation for equity assignment is proportional to "risk" x "skills" x "commitment."  I have discussed this so often with fellow entrepreneurs, and while we all seem to agree on it, I have never found something that quantified it to my satisfaction.  This is the metric I've come up with that fits how I operate.  What everyone wants is an objective function where you put in facts and get out a good split.  Unfortunately, the "facts" are almost always the subjective views of the founders.  This metric is for taking those subjective views and estimating a good distribution.

When you found a startup, there are so many blogs, books and web series on how to assign your equity.  Vesting is a great tool that motivates founders and safeguards against eventualities where founders leave, can't join, or cease to get along.  Vesting is all about rewarding hard work while safeguarding against accruing "dead equity" on the cap table.  Some experts say they want to see all founders with similar equity.  Others advise that equity should be split 2-to-1 for partners working full time vs. part time.  I argue that these are all great guidelines, but they don't do the complexities of the issue justice.  Vesting is great, but how much should you be allowed to vest?  What if one partner is exponentially more valuable than another?  When you distill it down to the most essential components, there are three aspects that need to be taken into account:

Everybody wants a piece of the pie.

Risk: a startup is defined as a business operating in the face of abnormally high risk.  Imagine the total risk that the company needs to overcome is an enormous vat.  Every step the team takes reduces the risk left in the vat asymptotically (it's never empty) until the startup becomes an established business.  But not every step yields an equal risk reduction.  For example, the work to establish initial market viability has a much higher opportunity cost than, say, testing product color schemes later on.  Investing energy at that level of risk is what should be rewarded, and the more risk the individual carries (or removes from the vat), the more they should be rewarded in equity.  However, if the individual is being paid at market rate, the risk they actually bear is the risk of losing their job if all goes south, which I would argue is substantially less than the opportunity cost of a founder.  This issue is usually covered by the standard pay-vs-equity tradeoff that is prevalent among startups.

Skills: what each team member brings to the table should affect how much equity they get.  Is someone the only person in the world who can fulfill a role? Does the company sink without their specific skill set or experience (max skill points)?  Can the company hire someone easily to replace them (min skill points)?  These are all tough questions that have to be addressed earnestly.  I have seen many companies where the person with the most specific and crucial skill set is undervalued (usually the executing technical co-founder), giving privilege to a non-crucial person who “had the idea.”

Commitment: there are many types of commitment: emotional, financial, legal, temporal, etc. For a startup building a team, all of these are necessary. There is plenty of work on rewarding financial commitment; that's what venture capital is all about. Emotional commitment is par for the course: everyone believes in what they are doing or they wouldn't have joined.  Time commitment is the most important and is usually handled using vesting, but it needs to be addressed within the context of the risk taken and the skills brought to the table.

equity ∝ "risk taken" x "skills brought to the table" x "commitment"

The great thing about proportional equations is that absolute values are not needed; you can do all the computation with relative values.  As long as everyone is judged on a uniform scale and the size of the equity pool is fixed, the rest of the math just works itself out (this is also the basis for efficient probabilistic machine learning).  I'm not an expert in all things startup-related, nor do I have all the answers, but this is what I believe, and it's how equity distribution is being modeled at TwoSense. Unfortunately, I have yet to find a dynamic vesting model that accounts for all of these aspects explicitly. Risk, specifically, is the hard aspect to model.  If anyone has any feedback or ideas, please feel free to send them to us through our "info@" address on the contact page.
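To show how the relative scores work out in practice, here is the whole model in a few lines of Python. The founder names and numbers are made up; the point is that only the ratios matter, because the pool is fixed at 100%.

```python
# Relative scores on any consistent scale; only ratios matter because the
# equity pool is fixed at 100%. Founders and numbers are hypothetical.
founders = {
    #            risk  skills  commitment
    "Founder A": (3.0,  2.0,   1.0),
    "Founder B": (1.0,  3.0,   1.0),
    "Founder C": (0.5,  1.0,   0.5),
}

raw = {name: r * s * c for name, (r, s, c) in founders.items()}
total = sum(raw.values())
equity = {name: score / total for name, score in raw.items()}

for name, share in equity.items():
    print(f"{name}: {share:.1%}")
```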

-dg