Why I'm Starting a New AI Company
After two and a half incredible years leading data at Spotify — following Spotify’s acquisition of my last startup — I left a couple of weeks ago to start a new company. We’ll be using data and AI to tackle a world problem that I’ve been circling around for some time — and that now feels critical and solvable. I wanted to share why I’m embarking on this new endeavor and what I hope we’ll accomplish — and to extend an invitation to join us.
I’ve always viewed entrepreneurship as a way to rapidly, independently, and sustainably solve problems you observe in society.
Against the backdrop of a rising, anti-scientific movement in 2005, I started a magazine to counter that cultural force and advance science’s place in society. In 2012, I founded a technology startup to help governments, companies, and international organizations leverage data to be scientific where they might have previously been instinctual — in domains ranging from global development to music discovery.
Today, I observe three macro trends. It’s their concomitant rise that motivates me to start this new company.
The first of these trends is rising fragmentation. I see a world that is perilously moving away from multilateralism and toward our respective corners — a rise in nationalism, most notably here in the United States, against a backdrop of increasingly global challenges. “Climate change carries no passport and knows no national borders,” proclaimed Ban Ki-moon. I believe we must counter fragmentation and champion globalism — and I think data can help.
The second is the rise of complexity. We are either nearly at, or at, the point where every world problem — refugees, terrorism, food security, water scarcity, etc. — is intractable in isolation; a complexity tipping point, of sorts. We are not organized to understand, let alone improve, the world as a system of systems. Together with my colleagues at the World Economic Forum, we have been making this case to political and business leaders for some time — and considering possible modernizations to global governance. I believe we now have the tools and the imperative to equip ourselves for what Stephen Hawking has called the “century of complexity.”
Seen together, fragmentation and complexity are a dangerous pair. In the interest of improving the state of the world, I believe we must move away from nation- and issue-based silos and toward systems that reflect the interconnectedness of our era. Efforts like the newly announced Co-Impact initiative are a positive step forward in the social sector. I believe technology and business can help.
Finally, I observe the rise of opacity in technology. Deep learning is a tremendously exciting research area — and we could well advance our understanding of the mind in the coming decade as a result of this technology. But deep learning comes at a price, and that price is opacity (or, put another way, lack of interpretability). We are designing a world that we may not be able to interpret, let alone govern. Meanwhile, the European Commission has set out a new regulatory framework that will require much more transparency about how our data is used, and the Asilomar AI Principles (modeled after the self-regulation effort in recombinant DNA in the 1970s) contemplate, among various design constraints, “Judicial Transparency” (“Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.”).
Andrej Karpathy recently put it this way: “The [Software] 2.0 stack also has some of its own disadvantages. At the end of the optimization we’re left with large networks that work well, but it’s very hard to tell how. Across many application areas, we’ll be left with a choice of using a 90% accurate model we understand, or 99% accurate model we don’t. The 2.0 stack can fail in unintuitive and embarrassing ways, or worse, they can ‘silently fail,’ e.g., by silently adopting biases in their training data, which are very difficult to properly analyze and examine...”
Against the backdrop of fragmentation, opacity is especially troubling. I believe we must actively promote interpretability in AI.
We are building a different kind of AI company. Our values will be ardently pro-globalism and pro-interpretability. Our products will help data scientists — the linchpins of any modern organization, I believe — and will hopefully encourage a new type of infrastructure for machine learning. Our culture will be pro-open source.
We will be in stealth mode for a little while as we translate our raw ideas into functioning code — at which point we’ll officially launch the company. As soon as possible, we will form a community of data scientists and data engineers to explore and test our early products. If you’re interested in joining that community, please drop us a line at email@example.com.
The thread that links my entrepreneurial efforts is a tested belief in the potential of science — its methodology, philosophy, culture, and output — to improve the state of the world. At the most fundamental level, I love working with people who share this conviction and have remarkable technical skill to put it into action.
I believe in the power of small, autonomous (thanks Spotify), interdisciplinary, and highly diverse teams; in agile development; and in leading with values, first principles, and clear high-level priorities backed by data.
We are building a founding team with extraordinary abilities in:
- Machine Learning
- Data Architecture
- Data Engineering
- Data Science
- Complexity Science
- Software Engineering
- Product Management
- UX & UI Design
- Information Architecture & Design
We are based in New York and are hiring now.
If you find the trends and problems I’ve laid out to be thought-provoking, can imagine yourself working tirelessly to help tackle them, have some of the technical skills noted above, and are ready to be part of something messy, urgent, and ambitious — we’d love to chat with you.
Drop us a line at firstname.lastname@example.org.
(Originally published on LinkedIn on December 5)