Fighting (For) The Future

By Bassey Etim on March 06, 2019

You asked me the question only after I’d posed it to myself a few hundred times.

You’re a young journalist running a newsroom desk at The New York Times. Why would you leave to work at a 13-person startup?

The answer is pretty simple: For all of the same reasons I became a journalist in the first place.

Sometimes, I think about all of the ways I was taught to think — as a writer, in journalism school, at The Badger Herald and at the New York Times.

The shell of skepticism around a well of openness, the cascading impact of new information rearranging our values, the raw practical use of sensible language, the burden of social responsibility in even our smallest actions — the way good journalists train their minds is the raw stuff of ethical technology.

The weather, the terrain and the very air we breathe on the internet seem defined by those who believe in fictions because they have to believe them. I believe Arthur Koestler called them “applicable fictions.”

How many times have well-informed observers sat through tech presentations in which the speaker makes declarative statements about the underlying reasoning for their architecture, citing ‘facts’ about the behavior of online communities that have been contradicted by research and happen to provide convenient justification for a large revenue stream?

Thus far, the use of machine learning by major platforms has inspired little optimism. You’d be forgiven for believing it primarily exists to insulate executives whose competitive fire for profit has led them to erect online networks that threaten the stability of human societies.

The nature of the technology we’re using to order our society is not a fait accompli. When I first spoke with Brian and Annika about online social policy, I couldn’t help but recall an old X-Files tagline: Fight the future.

If I believe there is a way to build an online infrastructure that is profitable, yet declines to package our most private thoughts into commodities to be bought and sold on the open market, then I should get involved.

At the New York Times, part of my mission was to reimagine the online comments section. Most comments sections have been notoriously awful for decades. But it didn’t have to be that way.

After all, what self-respecting publication would allow contributors to post whatever they wanted, whenever they wanted, about any topic at all? Who would visit that site?

But that’s exactly the comments strategy for most outlets — an entire content section on their carefully crafted website, with little editorial oversight. Madness. All for the sake of a few empty clicks.

We had an idea that couldn’t qualify as brilliant, but it was revolutionary all the same: Treat your comments the way you treat the rest of the content on your site — with love and care. Years later, we’d built what is widely considered the best online comments section in the industry.

Here at Canopy, our idea might not qualify as brilliant, but it’s revolutionary all the same: There is something fundamentally wrong with the internet. We’re looking to build a platform that fixes the power imbalance between readers and powerful interests that feed our addictions and consume our private data.

Our mission isn’t just audacious. It’s one of the hardest jobs I can imagine.

So how could I resist?

Next comes the hard part: Figuring out how to actually do the things.

Our first product, a personalized content recommendation app, is meant to plant a flag for our values. Broadly, we are looking to optimize for reader delight. So I figured that first, we should define ‘delight.’ I examined delight along eight axes and asked my colleagues to help me collect answers:

- When are we delighted?
- Why are we delighted?
- What is not delightful about the things we do on our phones?
- What does delight inspire us to do?
- When we are angry, how can it feel productive? When does it feel useless?
- Does delight come from hard work, or serendipity, or both? How can delight come from each, if it can?
- Are things ever delightful when there is no context? If so, how?

Then, I embarked on a bit of an intellectual journey to study the articles we might recommend and the ways that our machine learning model sorts them. After that, I assigned human language to each machine-sorted category of content.

For example, one category is “Factoids for travelers, deadly brushes between nature and man.” Another is “Emergent modes of morality, economic and sexual consent.”
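Canopy hasn’t published how its model sorts content, so here is only a rough illustration of the generic pattern in Python: cluster articles by their text, then surface each cluster’s most characteristic terms so a human can write the kind of plain-language label described above. The library choice, cluster count and sample headlines are all assumptions for the sketch.

```python
# Illustrative only: a tiny clustering pipeline, not Canopy's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

articles = [
    "A hiker survives three nights after a grizzly encounter in Montana",
    "What travelers should know about altitude sickness in the Andes",
    "How gig-economy contracts are rewriting the ethics of consent at work",
    "New research on how couples negotiate financial consent",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(articles)
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for cluster_id in range(model.n_clusters):
    # The highest-weighted terms in each centroid hint at what the machine-made
    # category "is": the raw material for a human-written label.
    top = model.cluster_centers_[cluster_id].argsort()[::-1][:5]
    print(cluster_id, [terms[i] for i in top])
```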

Once I had an understanding of what delight means in a modern online context and a solid grasp of how our model “thinks” about human editorial work, I repackaged those insights to create our curation guidelines, which define the kinds of articles, podcasts, videos, photos and art that we’ll be recommending in the app.

I hope those guidelines reflect the essence of who we are at Canopy.

“We recommend content that helps us find joy in human connections.”

Oh, and by the way, we are hiring.


Should We Model Ethics?

By Sarah Rich on February 08, 2019

(Note: This post is based on a conversation between Sarah Rich and Erica Greene.)

As mentioned in our previous post, Canopy recently attended the FAT* conference in Atlanta. Because FAT* is about fairness, accountability, and transparency in computer systems, there were moments at the conference that made me, as a woman in computer science, feel like the president of the Hair Club For Men; I’m not just an implementer of systems with implications for fairness, but I’m also on the receiving end of those same systems.

In this year’s opening keynote, Fairness, Rankings, and Behavioral Biases, Jon Kleinberg discussed his joint work with Manish Raghavan in which they mathematically model implicit bias as well as the use of the Rooney Rule to counteract it. The Rooney Rule’s name comes from a practice in the NFL that requires league teams to interview minority candidates for head coaching and senior football operations jobs. Since its adoption in the NFL in 2003, the Rooney Rule has spread to other industries, and the moniker is now used to refer more broadly to the strategy of requiring hiring committees to interview candidates from underrepresented groups with the goal of improving either the ethnic or gender composition of an organization.

Of course, this practice of intentionally diversifying candidate pools has not been without its detractors. In particular, in the tech world, the notorious notion of “lowering the bar,” rather well-characterized by this popular blog post, continues to rear its head. The argument goes: there just aren’t very many qualified candidates and so to consider diverse ones you must lower your standards. Since structural barriers to entry often prevent women and underrepresented minorities from appearing to be as qualified according to traditional standards, advocating for their inclusion in the interview process can be perceived as advocating for workforce diversity at the expense of workforce quality.

Kleinberg’s work addresses this point by first formalizing the opposition’s argument. In mathematical optimization, when you are trying to find the best value of something, it makes sense to pick from among as many choices as you can. For instance, if I am looking for the best deal on a used Toyota Prius, it makes sense to not only look at the dealership down the street from me, but also to consider all dealers within, say, a 50 mile radius. The more options I have, the better the deal I am likely to find. This seems very intuitively clear and correct, and so many people get hung up on this point when they think about changing the way we consider candidate qualifications in interviews. Indeed, although I am a proponent of hiring candidates from underrepresented groups, I often faltered when trying to rebut this specific formulation of the argument against “lowering the bar”. You pretty much have to reject the mathematical formulation and say something like “People are not Toyota Priuses!” Which is unsatisfying.

So it was refreshing to see Kleinberg, rather than appeal to all the empirical research showing that diverse teams actually do perform better, instead say: Okay, let’s accept this optimization formulation. How do we build a reasonable model of the world so that the empirical result, that the Rooney Rule has favorable outcomes, is a natural consequence of the model? Yes! An itch I’d not even fully been aware of, in the part of my brain that chafed at the “bar lowerers” and their cool rationality, was finally being scratched! I would never again be caught without a mathematically rigorous rejoinder when arguing about why the tech sector should work much harder to include more people from underrepresented backgrounds.

As the talk unfolded, Kleinberg proved that in the presence of bias against a group, using the Rooney Rule can lead to the hiring of better talent. The model conceives of a distribution of talent in which bias scales down how much of an applicant’s talent biased decision-makers can perceive. I could picture this abstract notion of talent mapping concretely onto citation counts, GitHub contributions, or number of referrals. I was into it. Everything about this seemed totally awesome.
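To make that concrete, here is a minimal Monte Carlo sketch of my reading of the model (not the authors’ code, and every parameter below is made up): true talent is heavy-tailed, a biased committee perceives one group’s talent scaled down by a factor beta, it shortlists the top k candidates by perceived score, and the interview then reveals true talent. With bias present, reserving one shortlist slot for the disadvantaged group raises the expected talent of the eventual hire.

```python
# A toy simulation of the Rooney Rule under multiplicative bias (my reading of the
# keynote's model, not the authors' code; all parameters are invented).
import numpy as np

rng = np.random.default_rng(0)

def hire(n_majority=40, n_minority=10, beta=2.0, k=4, rooney=False):
    talent = rng.pareto(2.0, n_majority + n_minority) + 1.0   # heavy-tailed true talent
    minority = np.arange(n_majority + n_minority) >= n_majority
    perceived = np.where(minority, talent / beta, talent)     # bias shrinks perceived talent

    shortlist = list(np.argsort(perceived)[::-1][:k])         # top-k by perceived score
    if rooney and not any(minority[i] for i in shortlist):
        best_minority = np.argmax(np.where(minority, perceived, -np.inf))
        shortlist[-1] = best_minority                         # reserve one slot
    return max(talent[i] for i in shortlist)                  # interview reveals true talent

trials = 20_000
print("no rule:    ", np.mean([hire(rooney=False) for _ in range(trials)]))
print("rooney rule:", np.mean([hire(rooney=True) for _ in range(trials)]))
```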

And then, as Liz Lemon would say, “Twist!”

My colleague and fellow machine learning engineer, Erica Greene, had typed this into our company’s FAT* Slack channel:

“...the keynote today was an hour long talk about a mathematical justification for the Rooney Rule that contained no mention of 1) any ethical/moral justifications or 2) any mention of the many studies that show that diverse teams are more productive, effective and profitable even if the individual members are less qualified or experienced… I thought it was phenomenally out of touch.”

NO!!! How could this be?! How was it that this piece of mathematics that I was so excited about was being perceived so differently by one of my colleagues? Once we got back to Boston we decided to take some time out to discuss our differing viewpoints. Below is a distilled version of our conversation, edited for brevity, clarity, and profanity.

Erica: I think algorithmic fairness has become a hot topic, and as a result, many people have joined the discussion without being sensitive to the ethical, social justice, and civil rights aspect of the work itself. The Kleinberg talk is a prime example of this: great work from a mathematical perspective, but tone deaf from a moral and ethical perspective. He cedes so much ground in the problem definition that the conclusion isn’t only useless, it’s harmful.

Sarah: I don't see it as ceding ground to address the arguments of the opposition. I think it's necessary and valuable, especially if you can prove them wrong. So I'm not saying this should be the only argument. I'm saying it's one I'm glad we have, and it was new.

Erica: My feeling is that if I find myself telling someone that there exists some delta and some alpha such that it is provably true that women should be included in the hiring pool, then I’ve already lost the debate. I don't want mathematical models of ethical issues.

Sarah: I want mathematical models of everything.

Erica: Ah ha. I think we’ve found the root of the issue.

Sarah: I do understand where you are coming from. Any utility-oriented argument for diversity raises my hackles a bit because if you are fundamentally non-racist/sexist, the idea that you have to prove that women and people of color are actually not worse than white men and so should be included does seem like engaging on a topic that should not even merit discussion. But once we're talking about it, I don't see a difference between citing empirical research that diverse teams perform better and a mathematical model of bias that proves the Rooney rule.

Erica: I agree that the primary argument should be a moral one. I would love for that to convince everyone. But if we need a second argument, my preference is to highlight that diverse teams perform better. It’s something I really believe, that I’ve seen in practice, and that many studies back up.

The problem with Kleinberg’s bias argument is that it requires that we accept some metric that women almost certainly do worse on. Instead, we should be talking about people’s "value add". How does this person add a new perspective to our team that will make us more creative and productive? Not, how biased do we think we are at judging their skills on this narrowly defined axis?

Sarah: I believe that math should be capable of expressing and representing anything. Given that math has historically been developed and used primarily by men of privilege, expanding our mathematical understanding of the world to represent my lived experience feels like a really beautiful gesture, albeit one that I can see being perceived as out of touch.

Additionally, I don't like thinking of myself as a "value-add." I want to think of myself as a valuable engineer, apart from any woman traits like soft skills or understanding the product from a “female viewpoint.”

Erica: You’ve totally nailed a fundamental tension when talking about diversity. If we say that it's valuable to have more people with background X because they have A and B skills, then we open ourselves up to the James Damore (UGH) argument that A and B skills are not well suited for software engineering and that people with background X should stick to product management.

But if you say that all populations have basically the same point of view and skill set, then why not simply hire based on quantitative metrics of accomplishment?

Sarah: That's the thing the Kleinberg paper is trying to correct for. The idea is that the system is biased so these metrics are not objective; that's why the argument supports including the best candidates by these metrics but separated by race/gender/sensitive attribute. Because the system is rigged, you can't expect marginalized groups to perform equally well by these metrics even if they are just as talented. So, you control for this by using the Rooney rule. The thing I anticipate you'll object to is that even in the pool of marginalized candidates, we are still using these metrics to compare candidates against each other, and you might fundamentally reject these metrics altogether as the fruit of a poison system.

Erica: Ha, yes. I object to the metrics. Many big tech companies are full of very "smart" men who have exactly the same training and can tell you the Big O complexity of all the algorithms. But these companies keep screwing up when it comes to fairness and they can't seem to figure out why.

I don't think these problems would be fixed if they hired a bunch of women with exactly the same training, background and perspective. The problem is that they have a single notion of what "smart" is. And they're so enamored with themselves that they can't begin to change it.

Sarah: To try to summarize your point: The paper was making this assumption about how we say who is the "best" even constrained to a specific candidate pool and glossed over the fact that using traditional metrics to select between a pool of "diverse" candidates will only reify the values that are entrenched in our existing system. Like, the most valuable voices in marginalized groups generally speaking are not going to be those of people who have fully committed themselves to competing in a rigged environment for bro points.

Erica: Yes, exactly. Another good example is politics. I think there is inherent value in having elected officials who come from a diverse set of backgrounds. Kleinberg-esque reasoning might look like "This woman of color went to a less prestigious college than this white dude, but don’t worry! We know the world is biased and WE HAVE A SOLUTION. We will give her a 20% boost when we’re weighing whether to endorse her and call it a day.”

Not the right framing to get more representative representation.

Sarah: Definitely something to think about for any company trying to remove bias from their hiring practices while also reaping the benefits of a diverse workforce.

In the end, Erica and I agreed to disagree on whether the Kleinberg model is a useful tool when advocating for the importance of diversity and inclusion. But we both came away from this conversation with a deeper understanding of the nuanced, sticky topic that is diversity and fairness in hiring. It was a great reminder that people with seemingly similar backgrounds can have wildly different viewpoints on this complex set of issues. And that we should all keep talking to each other.

If you want to join the conversation at Canopy, we’re hiring.


Canopy Goes to FAT*

By Erica Greene on February 05, 2019

Last week, a group of us from Canopy traveled to Atlanta to attend FAT*, a multi-disciplinary conference on fairness, accountability and transparency in socio-technical systems.

Do you know what that means? We weren’t sure either, but last year’s line-up was a stellar collection of computer scientists trying to prove bias, philosophers trying to define bias and lawyers trying to get judges to make decisions based on p-values. So we decided to face the hordes of football fans who were descending on Atlanta for the Super Bowl and check out this year’s program.

We had a delightful time, learned a lot, and have decided to share with you some of our takeaways.

(And just to get it out of the way early, yes, the conference acronym is terrible. They’re working on it.)

Let’s move from Fairness and Transparency to Justice and Ethics. Or maybe simply Human Rights?

During the town hall discussion on the second day of the conference, someone suggested that the FAT* community change its focus from Fairness and Transparency to Justice and Ethics.

That comment reminded me of a discussion I attended a few months ago where Philip Alston from the NYU School of Law said (jokingly!) that he wanted to “strangle ethics”.

What I always see in the AI literature these days is ‘ethics’. I want to strangle ethics... ethics are completely open ended. You can create your own ethic. But human rights, you can’t. They're in the Constitution, they’re in the Bill of Rights, they’ve been interpreted by courts, there are certain limits. And until we start bringing those in to the AI discussion, there’s no hard anchor.

So what should we focus on?

Most people at FAT* would probably agree that the AI community needs to address the negative, real world consequences of discriminatory algorithmic decision making. But from the wide range of talks, there appears to be substantial disagreement about what addressing these problems looks like.

I agree with Alston that fairness and ethics are fuzzy concepts, and that this community should challenge itself to be accountable for how our work impacts real people. This does not mean that we need to forgo mathematical and algorithmic approaches, but that we should pull on threads until we get down to the human level.

I am not familiar with a lot of work that does this successfully, so I am going to reach outside of this year’s FAT* program to highlight an example. Last year, Virginia Eubanks published a phenomenal book titled Automating Inequality that investigates the impacts of software automation and algorithmic decision making on poor and working-class people in America. What makes Eubanks’ work so compelling is that she combines historical context with data analysis and a tremendous amount of on-the-ground reporting. As computer scientists, we might not have the training to do that, but we should be educating ourselves about the context that our work exists in and reaching out to historians, journalists, case workers and lawyers who engage with the human rights impact of the systems we work on.

Eubanks shows that it is possible to do “full-stack” work on algorithmic human rights. Now we just need more people to give it a try.

In practice, the biggest challenges and biggest wins come down to data.

One thing that stood out to me as the talks progressed was how different the theoretical work was from the applied work. The CS literature on algorithmic fairness largely focuses on metrics, algorithms and what can be proven about the two. We saw talks proposing new metrics, new algorithms, and a very good empirical study of the effectiveness and robustness of combinations of the two -- which sadly concluded that “fairness interventions might be more brittle than previously thought”.

But the real-world applications had a very different flavor. They tended to revolve around data. What type of data would we need to empirically prove bias? What domain-specific metrics should we look at? Can we get around practical hurdles like missing information?

There were two papers addressing these issues that really stood out.

The first was an analysis of the Bayesian Improved Surname Geocoding (BISG) method, titled “Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved”. BISG is a technique that is widely used to impute someone’s racial group given their last name and location. Why would anyone do this? Because some industries are legally required to have fair decision practices, but are at the same time forbidden from collecting information about protected attributes from their customers or applicants. Both regulators and the companies themselves rely on these “proxy models” to guess an unknown protected class based on observed variables in order to audit for fair practices. The paper analyzes the bias introduced by these proxy models and proposes an improvement to the most commonly used models. This area of research is mathematically interesting and has the potential for broad impact.
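For intuition, here is a toy sketch of a BISG-style proxy model in Python. The probability tables are invented (real implementations use Census surname and geography tables); the point is only the combination rule: assuming surname and geography are conditionally independent given race, the posterior is proportional to P(race | surname) × P(race | geography) / P(race).

```python
# Toy BISG-style proxy model: all numbers are made up for illustration.
RACES = ["white", "black", "hispanic", "asian"]

p_race_given_surname = {          # e.g. from a Census surname table (invented here)
    "garcia": [0.05, 0.01, 0.92, 0.02],
    "smith":  [0.73, 0.22, 0.03, 0.02],
}
p_race_given_geo = {              # e.g. from block-group demographics (invented here)
    "11206": [0.30, 0.35, 0.30, 0.05],
    "02139": [0.60, 0.10, 0.10, 0.20],
}
p_race = [0.60, 0.13, 0.18, 0.06]  # population prior (invented here)

def bisg(surname, geo):
    # Posterior is proportional to P(race|surname) * P(race|geo) / P(race); normalize.
    scores = [
        p_race_given_surname[surname][i] * p_race_given_geo[geo][i] / p_race[i]
        for i in range(len(RACES))
    ]
    total = sum(scores)
    return {race: round(s / total, 3) for race, s in zip(RACES, scores)}

print(bisg("garcia", "11206"))
print(bisg("smith", "02139"))
```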

In the following session, Sendhil Mullainathan presented the incredible work he did with Ziad Obermeyer in which they identified “significant racial bias” in an automated selection process for a beneficial medical care program that impacts 70 million people. The bias stemmed from the decision to train the selection algorithm to keep costs consistent across racial groups, but not to keep healthcare needs consistent across racial groups. Black patients tend to have lower healthcare costs, and thus were being enrolled in this beneficial program at half the rate they would have been without the bias. The authors write that while insurers may focus on costs, “from the social perspective, actual health – not just costs – also matters.”
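The mechanism is easy to see in a toy simulation (not the authors’ data or model; every number below is invented): give two groups identical distributions of health need, let one group generate less cost for the same need, and enroll the costliest patients. Ranking by cost under-enrolls that group; ranking by need does not.

```python
# Toy simulation of label choice (cost vs. need); all numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)                    # 0 = group A, 1 = group B
need = rng.gamma(2.0, 1.0, n)                    # same need distribution in both groups
cost_factor = np.where(group == 1, 0.6, 1.0)     # group B incurs less cost for equal need
cost = need * cost_factor * rng.lognormal(0.0, 0.3, n)

def enroll_rate_by_group(score, top_frac=0.03):
    cutoff = np.quantile(score, 1 - top_frac)
    enrolled = score >= cutoff
    return [round(enrolled[group == g].mean(), 4) for g in (0, 1)]

print("ranked by cost:", enroll_rate_by_group(cost))   # group B enrolled far less often
print("ranked by need:", enroll_rate_by_group(need))   # roughly equal enrollment
```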

In addition to identifying this problem, Sendhil and Ziad reached out to the company that designed the algorithm and have worked with them to reduce the bias against black patients by 84%!

These two papers speak to the heart of the questions the FAT* community is trying to address. How do we identify when algorithms are discriminating against historically marginalized groups? What can we do about it in practice? And when the stakes are too high and the potential damage too great, should we toss the algorithm out the window?

These “rubber-hits-the-road” papers were a reminder that it is often access to data and a deep understanding of a domain that leads to real impact.

How we’re contributing to this work at Canopy

Here at Canopy, we are creating a personalized content discovery architecture that keeps your data on device. We know that we are complicating our ability to audit for fairness by not storing any user data on the server, but we are not scared of complicated problems. We believe that people deserve to be in control of their digital identities. And for too long, companies have presented us with a false choice when it comes to our privacy. At Canopy, we are committed to building delightful products that are both private and fair.
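To give a sense of what that could look like (this is not a description of Canopy’s actual method, just a minimal sketch of one possibility): a system that never uploads raw user data can still support aggregate auditing with classic randomized response, where each device perturbs its report locally and the server debiases the average without learning any individual’s true value.

```python
# Minimal randomized-response sketch: the server can estimate an aggregate rate
# without ever seeing any individual's true bit. Not Canopy's published method.
import random

def report(true_bit, p=0.75):
    """Device-side: send the truth with probability p, otherwise a random bit."""
    return true_bit if random.random() < p else random.randint(0, 1)

def debias(noisy_mean, p=0.75):
    """Server-side: E[report] = p * true_mean + (1 - p) * 0.5, so invert that."""
    return (noisy_mean - (1 - p) * 0.5) / p

random.seed(0)
true_bits = [1 if random.random() < 0.30 else 0 for _ in range(200_000)]  # true rate: 0.30
reports = [report(b) for b in true_bits]
print("estimated rate:", round(debias(sum(reports) / len(reports)), 3))
```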

If you want to follow along on that journey, we’re on Twitter. We’ll be sharing more about our approach to fairness in an age of privacy there. And please come to FAT* next year in Barcelona; we’ll be sure to be there.


Bassey Etim and Matthew Ogle Join Canopy

By Annika Goldman on January 15, 2019

Welcome Bassey and Matthew to the Canopy team!

Today we are excited to announce that Bassey Etim and Matthew Ogle have joined the Canopy team. In their previous work lives, Matt and Bassey have both built ethical products and services at scale that allow people to express themselves, learn from each other and discover new worlds. At Canopy they will focus on developing a new technical architecture that protects personal data and delivers an amazing, personalized content experience. They will both be based in our Brooklyn office.

At Canopy, we are working on something entirely new in the world of personalization and discovery. The addition of Bassey and Matt to the team will help us achieve our ambitious goals and show the world that you can do things differently through a combination of technology and human curation.

Matthew Ogle comes to Canopy from Instagram, where he worked across multiple teams and products — most recently Explore and Hashtags — to build features that help people discover and connect within Instagram's global community of over one billion. Prior to Instagram, Matt was a Product Director at Spotify responsible for launching Spotify's next generation of personalization and discovery features including Discover Weekly, Release Radar, and Daily Mix. Matt will be leading the overall consumer product vision for Canopy as we develop a broader private personalization architecture.

Bassey Etim is an artist, journalist, and technology manager. He spent the last 10 years as the Community Desk Editor at The New York Times, where he managed a team that built one of the most innovative community forums in the world. He ran the 16-person community management desk within the Times newsroom and was the editorial head of the Times’ Community development team. Bassey will lead development of the voice of Canopy’s recommendation architecture. His experience managing conversations at scale will help provide a human context and voice to our content recommendations.

Canopy launched in 2018 and is focused on giving users a new way to explore the world — without ever revealing their personal data. We believe privacy and personalization should have been built into the Internet from the beginning: with users in control of what they discover and how. We are developing a new kind of architecture, but also a new kind of company that optimizes for delight and discovery.


Let's Fix The Internet Together

By Annika Goldman on January 09, 2019

Hi! I’m Annika and I run strategy and product. I joined Canopy to reimagine the way the internet works. Like most of you, I have experienced amazing, personalized recommendations from big media platforms. I have seen the magic in creators finding their most devoted fans. I have learned new things from people I’ve never met and may never meet. I have celebrated that the internet can amplify vulnerable and marginalized voices and give people tools to amplify the voices of others. But I am - to be frank - angry that this place of discovery, creativity and connectivity is now optimized for engagement, revenue, and the retention and sale of personal data.

For too long, the tech industry has believed that the only way to grow was through personalization powered by user data. This business model led platforms to force us to give up control of our digital selves so that we could connect with a friend, read an article, plan a trip, listen to a song, or watch a video. This obsession with technical innovation and growth at all costs caused the tech industry to turn a blind eye to the social impacts of its creations. It’s left people vulnerable to misuse of their data, hacking, manipulation, radicalization, hate speech, and violence. It has led to the development of products - facial recognition that perpetuates gender and skin-type biases, resume scanning that compounds biases against women - that further entrench biases and oppressive systems. Corporate decisions to track, retain, sell, and exchange personal data power these abuses, which seems like an unreasonably high price to pay for recommendations.

What if you could have an amazing, personalized experience without being manipulated or deceived into giving up more and more of yourself? What if you could challenge bias and deception by taking control of your digital presence? With Canopy, you can.

For the past several months, we’ve been working hard at this very problem. If we’re going to fix this, and fix it now, what would the architecture of the internet look like? If you were going to commit to building mindfully from the very beginning, how would the user experience change? Our founder, Brian Whitman, has written about what we’re doing technically. At its core, Canopy is a protective layer between you and the internet/cloud/server. Our architecture keeps your data on your devices, and in your hands. This means that instead of third parties taking your data, manipulating it to figure out how you make decisions, and then using that knowledge for their own ends, you stay in control of your digital identity. With Canopy, you have the power to tell the internet how you want to be understood. You dictate what experience you want to have and who has access to your data.

A slightly annotated drawing explaining Canopy from our launch in November.

Since our founding last year, we’ve laid the groundwork of this new way, showing that private personalization can work at scale. And because we want you to experience it for yourself, we’ve built a personalized discovery app. This product is the first application of our architecture; our long-term goal is to change the value proposition of the internet so that you’re in control. You’ll get better experiences and you’ll stop paying for them by giving away who you are.

If you want to test our app and have an impact on how the future of the internet will work, reach out. We believe that the more perspectives the better, and we would love to have yours.

Join me and our Head of Operations, Urcella Di Pietro, as we find the good on the internet.


Building off Gratitude

By Urcella Di Pietro on November 29, 2018

As my fellow CANOpeers and I signed off for the Thanksgiving weekend, we all posted messages on Slack about what we are thankful for this holiday season. There were the typical posts of gratitude about pets, family, friends and general health. My heart always acknowledges the importance these things hold in my life, but what was different this year for me was the recognition of the gratitude I felt for the place where I work and what I get to do here. Seeing this new item on my thankful list inspired me to examine further what exactly is different about Canopy compared to other jobs I’ve had in the past (both great and terrible).

Hiring for Potential

Prior to ever setting foot in the Canopy office (we didn’t even have a company name yet), I was told things were going to be different. This was pretty much the first thing our founder, Brian, said during my interview at a bar in Greenpoint, Brooklyn. He wanted people who were committed to making the internet a safer place. “Great, done, I’m in,” I said before he was finished with his spiel about how we would be building a new kind of technology and product that would give consumers an amazing personalized content experience while at the same time protecting their personal data. He could have said just about anything as I was pretty excited at the chance to help build a new company. Despite having my jazzed-up résumé in tow, I was told I was being hired for my potential, which was the first thing I felt was going to be truly different about Canopy. My past was the past, a prelude to now, but the real show was about to begin, and it was indeed going to be unlike anything I had encountered before.

Optimizing for Delight

Being part of a team that builds for privacy and optimizes for delight has been profoundly rewarding. In the past, you had to consent to unmitigated access to the data on your phone to be able to use apps that could end up making you miserable. Never before has getting better recommendations without sacrificing privacy seemed more achievable. Getting to build a product that you feel good about using just feels better — and it’s easy to get evangelical about.

Doing the “Hard Work” a.k.a. Commitment to Mission

Laying the groundwork for an ethical tech company has not come easy. At every turn, my team and I have had to ask ourselves tough questions: Can this product be used for malicious reasons? Is this product needed? Can the algorithms be reverse-engineered? Are we promising something we can’t deliver? Can we delight everyone (is anyone delighted)? Are we doing something beneficial to society or just making a lot of noise? The questions never go away, and the demand for answers grows stronger every day we read stories about Facebook, Google and others misusing user data and the disastrous consequences that brings. It’s a marathon and we need to pace ourselves accordingly — we never want to skip around or cut corners. It’s part of why we do this: We’re getting it right this time and that has profound meaning to everyone who works here.

When I think about all the work we have ahead of us, I don’t feel worried or weary. It really feels like a great gift that I get to come here and do the big-deal work with like-minded, passionate folks at the top of their fields. People who left cushier positions and better-paid gigs at larger, more established companies for the opportunity to make a difference. And when I get a gift of that size I feel enormously grateful, and that gratitude is what I plan to build an entire company on.


Welcome to Canopy!

By Brian Whitman on November 15, 2018

Today I am introducing Canopy, a new company focused on giving you a new way to explore the world—without ever revealing your personal data. We're building Canopy the way we believe personalization should have been built from the beginning: with you in control of what you discover and how. For too long, the accepted standard for how we learn about people on the internet has benefited the wrong side. Over the past year, a great group of people have come together to work on a new way to do personalization and discovery on the internet. We are building a new kind of technology and product, but also a new kind of company that optimizes for delight and discovery.

More taste decisions than ever are being mediated by an algorithm—and those algorithms are running on more and more personal data than ever before. The data you give away to services can be used against you, or sold, or lead to results that you don’t understand. We’ve all seen the “creepy” side of personalization at work, aligned with revenue or time spent rather than improving your experience or happiness. And in the worst cases, we saw people being radicalized by aggressive personalization, or saw state actors steal or manipulate personal data for their own needs.

Canopy Brooklyn in October 2017. How did we fit two people in here?

After I left Spotify in 2016 I began thinking of focused ways we could change the way things worked. I started talking with my friends David Blei and Ben Recht, two well-known machine learning and recommendations professors who have worked on or invented what are now the core parts of your personalized internet experience in 2018. They immediately signed on to help and we spent some time wondering what a fully private and transparent discovery system would look like. Could there be a music recommender that never knew what songs you've listened to, or a personalized browsing experience that never knew what websites you've visited? Could we build a transparent discovery tool that gave control back to the person doing the discovery and power back to the creators?

We’ve found a new way. We’ve shown already that a company having full access to personal data is not a prerequisite for great discovery. We are carefully applying a combination of on-device machine learning and differential privacy while taking advantage of the enormous step change in the power of our mobile devices. We're now moving most of the math we used to do on big clusters of servers against your personal data directly to the device in your hands. We guarantee we'll never see your personal interactions with the world, and we guarantee we'll give you an even better discovery experience.
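As a minimal sketch of what “moving the math to the device” can mean (Canopy hasn’t published its ranking model, so everything here, from the embedding size to the update rule, is an assumption): the server only ships candidate items with generic embeddings, the device maintains a taste vector from local interactions, and scoring happens entirely on the phone, so no interaction history ever leaves it.

```python
# On-device ranking sketch: an assumed, simplified design, not Canopy's actual system.
import numpy as np

DIM = 8
rng = np.random.default_rng(42)

# Server side: candidate items and their content embeddings (no user data involved).
candidates = {f"article_{i}": rng.normal(size=DIM) for i in range(50)}

# Device side: a taste vector that lives only on the phone.
taste = np.zeros(DIM)

def record_interaction(item_embedding, liked, lr=0.2):
    """Update the local taste vector from a local interaction; nothing is uploaded."""
    global taste
    taste = (1 - lr) * taste + lr * (item_embedding if liked else -item_embedding)

def rank_locally(items, top_k=5):
    """Score candidates against the local taste vector, entirely on the device."""
    scores = {name: float(emb @ taste) for name, emb in items.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

record_interaction(candidates["article_3"], liked=True)
record_interaction(candidates["article_7"], liked=False)
print(rank_locally(candidates))
```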

The science behind what we're doing at Canopy is just a small piece of the puzzle. We are plotting an incredibly pragmatic approach to discovery, with a large focus on editorial, finding viewpoints, understanding root causes, uncovering and programming niches. We’ve already built a different sort of company. Most of us have done this before. The diversity of our team is our first priority, as our product needs to reach the entire world of ideas. We’ve brought on scientists, lawyers, marketers, engineers, editors, professors and operations experts who’ve all helped build great discovery experiences you use every day at Spotify, Instagram, Google, and The New York Times. Our early investors include founders and executives from Spotify, Keybase, Splice, WeWork and MIT, along with investors focused on the decentralized internet, machine learning and data.

Much improved space today with the same chairs. We're up to 15: 4 in Brooklyn, 5 in Boston and a few remote.

Today, we’re introducing Canopy to the world to share what we’re building. The first way you’ll interact with us is our mobile app, coming early next year. It’ll be the first time you can explore and discover new things without the services and companies on the other end watching your every move. You don’t even tell us your email address. You’ll find it a refreshing new experience that you can believe in. This app is just a first step on our way to a better kind of system that gets you, a better kind of company, and eventually a better internet. We hope you'll join us for what's next.