Sam was one of the co-founders of Ansaro, a SaaS startup that aimed to revolutionize the recruiting industry through technologies like AI. They raised $2.25M from institutional investors and $750K from friends and family, grew the team to 6 members, and earned about $100K in total revenue. But with expenses of $70K/month and no product-market fit, they had to shut down 2 years later.
I was one of the co-founders of Ansaro. Our mission at Ansaro was to use data science to improve hiring. Specifically, we focused on better interviewing.
At Ansaro, I was responsible for product design, sales, and marketing. In our early days, I did a bit of data science too. We started Ansaro in late 2016 and shut down 2 years later, at the end of 2018. After 2 pivots (while staying within the scope of data science to improve hiring), we had failed to achieve product-market fit, and we shut down the business and returned our remaining capital to investors.
We were based in San Francisco, grew to a team of 6, and raised $3M in VC funding. We had a number of large enterprise customers, but never got beyond pilot contracts. We were aiming for a SaaS model, but our actual revenue was nearly entirely from services-based pilots.
Before Ansaro, I worked in consulting (Bain), tech investing (TPG), and software development (SolveBio, then a Series A startup). I studied math as an undergrad at Stanford and did an MBA at Harvard; at both universities, I was interested in applying statistics/math/data science to messy human problems (like hiring!). Today, I’m a product manager at Opendoor, where I work on machine-learning products for the real estate industry.
In my first job out of college (Bain), I was frustrated by how subjective hiring was. Along with a number of other recent Stanford grads, I was assigned to review Stanford student applications for the entry-level consulting position. I was struck by how differently my peers and I assessed candidates. We were aligned on the outcomes that defined a good hire (high performance ratings, retention), but we disagreed on the inputs that generated those outcomes. For example, some of us thought a STEM background was critical; others thought humanities majors did better. Some thought club leadership was a key input; others thought a high GPA was critical. It occurred to me that we could answer these questions analytically, by mapping our performance/retention records back to job applications and looking for correlations.
I started to dig into this problem in my free time in 2011-2012, and I quickly stumbled on 3 challenges. First, the data wasn’t readily available - it was spread out across homegrown systems that lacked APIs or basic file export capabilities. Second, HR leaders weren’t thinking about data science and their eyes tended to glaze over when I started talking about analytics. Third, I realized that the outcomes data (the “ground truth” represented by performance ratings and retention) could themselves be biased if the organization was biased in the way it treated some hires.
So I paused. But I kept watching the HR space, and by 2016, I thought the first 2 barriers were eroding. It seemed to me that most enterprises had moved to cloud-based HR systems with API/export capabilities, and HR leaders were starting to get excited about data science. In late 2016, I started asking companies if they would share anonymized employee data with us, and a few did so. While I was still at Harvard, we confirmed that these companies’ job application data contained real signal about who would perform well and remain in the job - a signal the companies themselves weren’t yet using or aware of. With some proof-of-concept in hand, I went full-time on Ansaro as soon as I graduated from Harvard.
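The shape of that proof-of-concept can be sketched in code. Ansaro’s actual features, data, and model are not public, so everything below is a synthetic stand-in: a toy logistic-regression backtest that fits on historical applicants and checks whether held-out retention is predicted better than chance.

```python
import math
import random

random.seed(0)

# Synthetic applicants: Ansaro's real features/outcomes are unknown,
# so this toy world assumes retention depends weakly on GPA.
def make_applicants(n=500):
    rows = []
    for _ in range(n):
        gpa = random.uniform(2.0, 4.0)
        stem = 1.0 if random.random() < 0.5 else 0.0
        p = 1 / (1 + math.exp(-1.5 * (gpa - 3.0)))
        retained = 1 if random.random() < p else 0
        rows.append(((gpa, stem), retained))
    return rows

def fit_logistic(rows, lr=0.1, epochs=200):
    # Plain stochastic gradient descent on logistic loss.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in rows:
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            g = p - y
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def auc(scored):
    # Probability that a retained hire outranks a non-retained one.
    pos = [s for s, y in scored if y == 1]
    neg = [s for s, y in scored if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

data = make_applicants()
train, test = data[:400], data[400:]
w, b = fit_logistic(train)
scored = [(w[0] * x[0] + w[1] * x[1] + b, y) for x, y in test]
print(f"held-out AUC: {auc(scored):.2f}")
```

An AUC meaningfully above 0.5 on held-out applicants is the kind of evidence that would suggest the application data carries signal the company isn’t using.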
For our first handful of enterprise customers, we built a customized predictive hiring model for each. The advantage of this was that we weren’t beholden to any assumptions and were free to tailor to the customer’s data and business questions. This was great for exploratory models that resulted in presentations and discussions. The disadvantage was that these models were not ready to be put into production. With each new customer, we essentially started over.
After doing this a handful of times, we moved on to build a scalable platform that could do the following across customers: ingest data, fit a model, and return suggestions about live applicants. However, we failed on data ingestion, because none of our customers provided us live access to their HR systems. For some, their systems simply lacked APIs; for others, it was security concerns; for still others, simply bureaucracy.
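The three stages of that platform can be pictured roughly as follows. This is an illustrative sketch, not Ansaro’s actual code; the class names and the placeholder covariance-based scoring rule are invented. It also shows why ingestion was the bottleneck: everything downstream assumes normalized records that, in practice, never arrived.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Applicant:
    features: Dict[str, float]

@dataclass
class Platform:
    history: List[Applicant] = field(default_factory=list)
    outcomes: List[int] = field(default_factory=list)
    weights: Dict[str, float] = field(default_factory=dict)

    def ingest(self, applicant: Applicant, outcome: int) -> None:
        # This is the stage that failed in practice: customers could not
        # or would not expose live access to their HR systems.
        self.history.append(applicant)
        self.outcomes.append(outcome)

    def fit(self) -> None:
        # Placeholder model: weight each feature by its covariance
        # with the hiring outcome.
        n = len(self.history)
        my = sum(self.outcomes) / n
        for name in self.history[0].features:
            xs = [a.features[name] for a in self.history]
            mx = sum(xs) / n
            self.weights[name] = sum(
                (x - mx) * (y - my) for x, y in zip(xs, self.outcomes)) / n

    def score(self, live: Applicant) -> float:
        # Suggestion for a live applicant: higher means stronger.
        return sum(self.weights.get(k, 0.0) * v
                   for k, v in live.features.items())
```

Usage would look like: ingest historical hires with known outcomes, call `fit()`, then `score()` each live applicant as they apply.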
We also realized another fundamental problem: many companies don’t trust their own recruiting data - they know that applicants can game the system, or just straight-up lie.
So we pivoted to focusing on interviews, which most of our customers believed (a) can provide more signal than an application or resume when done well, but (b) are often done poorly today. We built a platform that allowed companies to plan structured interviews, assign an interviewer panel, and record structured feedback.
The (hopefully) headline-grabbing feature was an AI notetaker: for phone interviews, Ansaro provided a conference line, recorded the call (automatically notifying all participants beforehand), and sent the recruiter an AI-generated transcript and summary.
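The summary step of such a notetaker can be approximated very simply. I don’t know what transcription or NLP stack Ansaro actually used, so this is a generic stand-in: a naive frequency-based extractive summarizer that keeps the sentences whose words are most common across the transcript.

```python
import re
from collections import Counter

def summarize(transcript: str, k: int = 2) -> str:
    """Naive extractive summary: keep the k sentences whose words
    are most frequent across the whole transcript, in original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.?!])\s+", transcript)
                 if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", transcript.lower()))

    def score(s: str) -> float:
        toks = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = sorted(sentences, key=score, reverse=True)[:k]
    return " ".join(s for s in sentences if s in top)

transcript = ("We discussed the role. The candidate led a data team for "
              "three years. The candidate shipped two data products. "
              "Thanks for joining.")
print(summarize(transcript))
```

A production notetaker would pair something like this (or a far better abstractive model) with speech-to-text on the recorded call; the pipeline here starts from an already-available transcript.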
Problems with the 2nd version of the product included:
Our sales strategy relied entirely on personalized, outbound pitches from me to prospects.
We did a little bit of marketing:
First, we were too slow to pivot. I attribute that mainly to building a team/culture where people weren’t comfortable disagreeing with the fundamentals of our product plan. We all got along well, and we carved out different areas of responsibility. The downside of this was that my area of responsibility was our core product roadmap, and when our initial idea turned out to be bad, it took too many months for others to question it and for me to abandon it.
Second, we conflated our buyers and our users. CHROs spend a lot of time talking about how they want to hire better, but when the rubber meets the road, recruiters care more about hiring efficiency than new hire quality. We were pitching new hire quality improvement to our buyers (CHROs), but this wasn’t an acute pain point for our users (recruiters).
Third, we were tackling a problem where improvement takes a long time to measure. It requires months, and sometimes years, to see how a cohort of new hires turns out, and thus conclude that Ansaro is ROI positive. That’s way too long for a small startup. We tried to make the case for ROI based on backtesting, but that never resonated with HR buyers. I now believe that problems that require years to measure results are fundamentally better suited to large companies with deep pockets, as opposed to startups.
Immediately before we shut down, we were spending $60-70K per month to support a team of 6, with some minimal marketing and hosting/compute costs. We brought in about $100K in total revenue, almost entirely from non-recurring, customization-heavy pilot projects.
We raised a $3.0M seed round, which included $2.25M from institutional investors (Silicon Valley Data Capital was the lead investor), and $750K from friends and family. We pitched ~30 investors before we got a term sheet from SVDC. In hindsight, I wish we had waited longer and only engaged with institutional investors once we had (more) product-market fit / recurring software usage. In other words, I wish we had raised money only from family and friends.
I've really enjoyed these books:
You can check out this blog post I wrote about Ansaro’s failure.