A better way to discover things

By Brian Whitman on May 15, 2019

by Brian Whitman [Canopy founder] & Ben Recht [UC Berkeley, Canopy machine learning advisor]

Do you remember the first time you saw something personalized to you on the internet? It was probably a long time ago. Things have changed a bit, to say the least. You now have ads following you around, feeds of content sorted for maximum engagement, brand new creators surfacing every day, and people and web searches personalized to your location and preferences. Very little of your day-to-day digital experience is anonymous anymore.

Here’s a chart of how personalization and recommendations work today:

It all starts with platforms collecting as much personal and behavioral data as possible. This can be a combination of the things you’ve done -- songs you’ve listened to, or ads you’ve seen and clicked on or not clicked on, or the amount of time you’ve spent on a web site -- and personal data like your email, demographics, and location. All of this is sent from your device to someone else. In many cases your data is sent to multiple places. For example, a video streaming service may want their own copy of you, but a partner of theirs may want a copy for their own purposes.

If you collect enough behavioral data on a person, and send it to your server cluster, you’re able to build up a model of that person. That model is a computable summary that you can use to predict new things about someone. If you line up all the world’s behavioral data, you’ll see that there are obvious patterns: someone really into fashion also likes a certain kind of website, or people who like this YouTube video also watched that one. Or, more insidiously, people who see this type of content tend to stay longer on the app or site. This model uses the collected data of the rest of the world to get a better understanding of you. It can use simple counting, matrix factorization, or, more recently, statistical machine learning approaches. The model is like a clone of you that can guess what you’d do, what you’d like, and what you’d buy, even for things you haven’t yet seen.
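
To make that idea concrete, here’s a minimal sketch of the collaborative-filtering math this kind of model often boils down to -- a toy illustration with made-up data, not a description of any particular platform’s system.

```python
# Minimal collaborative-filtering sketch: factor a user x item interaction
# matrix into low-dimensional user and item vectors, then predict scores
# for items a user has never seen. The data here is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Rows are people, columns are items (songs, videos, articles...).
# A 1 means "interacted with", 0 means "never seen". Toy data only.
interactions = rng.integers(0, 2, size=(100, 50)).astype(float)

# Factor the matrix: each user and each item becomes a small dense vector.
k = 8  # number of latent dimensions, chosen arbitrarily here
U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
user_vectors = U[:, :k] * s[:k]      # one k-dim vector per person
item_vectors = Vt[:k, :].T           # one k-dim vector per item

# "Predict" how much user 0 would like every item, including unseen ones,
# then rank the items they haven't interacted with yet.
scores = user_vectors[0] @ item_vectors.T
unseen = np.where(interactions[0] == 0)[0]
recommended = unseen[np.argsort(scores[unseen])[::-1][:5]]
print("Top unseen items for user 0:", recommended)
```

The user and item vectors it produces are exactly the kind of “tightly packed numbers” we describe below: great for ranking, hard to explain.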

One of the many downsides of this approach is that the models are almost always a “black-box”: they are a series of tightly packed numbers that a computer can ask questions of, and get an ordered list of results back from, but rarely is it easy to explain why the results are in that order, or what bits of your behavioral data informed the answers.

Every stack we know of at today’s companies keeps both the raw data and the model data on its servers. Even if new regulations, privacy policies, or deletion features let people remove their data from a service, the models will likely stay around, and they can still predict that person’s interactions and behavior with very high accuracy.


That’s the old way. Here’s the Canopy way. Your personal devices are more powerful than ever. Your phone can likely do more than your computer, if you even use one, and it’s also more powerful than the servers we did all that model-building on just a few years ago.

Our expectations of what our personal data is worth and what these algorithms are doing to us are shifting. At Canopy, we’ve been working on a new architecture for the past 18 months. It’s real now, and you can try it out if you join the beta for our first consumer experience. We want to run you through how it works, and show how it can put people back in the center of this process, while giving them even better discoveries than before.

Here’s how we do it:

Many of us had built massively scaled recommendation, ad tech, or analytics systems in past lives, and we all knew there were a few things we wished we had been able to accomplish. Three clear design goals shaped the Canopy stack from early on:

  • We wanted to keep interaction data private.
  • We wanted our mediated discoveries to be explainable, and to give the person receiving a discovery recourse to change what they see.
  • We wanted a voice behind our discoveries -- a strong editorial stance across all types of media, one that can give you context for your choices and also ensure we don’t blindly trust a purely algorithmic system that may work against you.

The first clear difference in our tech is that raw interaction and behavioral data stays on your device. In this model, Canopy never sees it, and neither would any content provider or partner we would work with. What we instead send over an encrypted connection to our server is a differentially private version of your personal interaction and behavior model. The local model of you that goes to Canopy never has a direct connection to the things you’ve interacted with, but instead represents an aggregate set of preferences of people like you. It’s a crucial difference for our approach: even in the worst case of the encryption failing, or our servers being hacked, no one could ever do anything with the private models because they do not represent any individual.
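
To give a rough sense of what “a differentially private version of your model” can mean, here’s a simplified sketch of the classic Laplace-noise approach applied to an on-device preference vector. It’s our own illustration of the general idea -- the parameters, the epsilon value, and the toy model are invented for the example, and this is not the exact protocol we run in production.

```python
# Sketch of local differential privacy for an on-device preference vector.
# This is an illustration of the general idea, not Canopy's actual method.
import numpy as np

rng = np.random.default_rng(42)

def privatize(preferences: np.ndarray, epsilon: float) -> np.ndarray:
    """Return a noisy copy of a preference vector, suitable for sending
    off-device. Each entry is assumed to lie in [0, 1].

    With the Laplace mechanism, hiding the entire vector requires noise
    scaled to its L1 sensitivity, which for d entries in [0, 1] is d.
    """
    d = preferences.shape[0]
    noise = rng.laplace(loc=0.0, scale=d / epsilon, size=d)
    return preferences + noise

# Toy on-device model: affinity scores for a handful of topic clusters,
# computed locally from raw interaction history that never leaves the phone.
local_model = np.array([0.9, 0.1, 0.4, 0.0, 0.7])

# Only the noisy version is transmitted.
payload = privatize(local_model, epsilon=1.0)
print(payload)
```

Any individual payload is dominated by noise; it’s only when many noisy payloads are aggregated that the per-topic signal becomes useful.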

Apple has done a great job educating their users on this sort of technology -- but they are in the business of protecting your messages, photos, and keyboard data, not the scarier interaction and behavioral data that informs so much of your internet experience. While some of the Canopy stack is inspired by and similar to some great work done by friends of ours at Apple and Google, our method focuses more on discovery than on predicting variables, and has the added benefit of doing more with less data, allowing us to work across many types of devices and network connections around the world.

But your privacy in this model is just one core feature of the Canopy stack. We care just as deeply about recourse -- your ability to see why you’re seeing what you’re seeing, and change it. You should be in control of what you share and what you don’t share -- even if that data and information stays on your device.

At Canopy, we believe strongly that learning about content is more important than trying to amplify engagement. As an example of how context-less engagement systems can go wrong, a video recommender may find latent patterns in the idle history of the service’s users -- an average person might watch a sports clip and then, due to further personalization via search, “up next,” or social, find something unrelated, which feeds back into the recommender model as a correlation. What’s missing in this sort of “collaborative filtering” approach is a notion of what the underlying media is about. Just like the last company a few Canopy employees were involved in, we believe strongly in learning as much about the underlying content as we do about matching people’s interactions against each other. We have already built a suite of content analysis approaches that can find the inherent similarities between many types of content. This lets us unpack the model with more underlying understanding of the why: “you enjoyed something about whales, and we figured you may want to watch this documentary about nature.” It also helps us prioritize niche content: finding underserved types of discoveries.
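
As a toy illustration of what “learning about the underlying content” can look like, here’s a sketch that uses TF-IDF and cosine similarity as a stand-in for our richer content analysis. The example texts are invented, and the real pipeline does far more than this.

```python
# Sketch of content-based similarity: compare what items are *about*
# rather than who clicked on them. TF-IDF here stands in for richer
# content analysis; the example texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = [
    "a documentary about whales and ocean ecosystems",
    "migration patterns of humpback whales in the north atlantic",
    "highlights from last night's basketball game",
]

vectors = TfidfVectorizer().fit_transform(items)
similarity = cosine_similarity(vectors)

# Item 0 and item 1 score as similar because they share subject matter,
# even if no user has ever interacted with both -- which is exactly the
# signal a pure collaborative filter cannot see.
print(similarity.round(2))
```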

Our strong belief in content, explainability, and recourse comes to a fine point with the announcement of one of our first hires -- Bassey Etim, who ran the NY Times’ community desk for many years. Bassey’s role is to build a team to be the voice of Canopy, help the various systems get better at learning how different stories interrelate, help guide our content analysis systems, and give crucial context to any sort of recommendation. A future in which Canopy is mediating the world’s discovery starts with us ensuring our results reflect humanity.

We’re really excited to show off how we’ve re-thought this crucial part of your daily experience, to put you back in the center and get the world re-focused on discovering great things. There are so many exciting new experiences that can happen when you’ve got a great discovery platform that you can trust. We’re just getting started.


How We Review Code at Canopy

By Joe Delfino on May 01, 2019

Over a year ago, we sat down at various Central Square establishments to talk about all the ways people screw up engineering culture and how we were going to get it right this time (this is now our official unofficial engineering motto). We talked and talked, we wrote it all down, and then we let that doc sit, because getting it right is not something you can do by fiat.

A year later we’re most proud of the way we review each other’s code. That probably sounds mundane but it’s a crucial first step in addressing how we work together as a team. Taking the time to read each other’s work and respectfully ask questions has set the tone for how we build things here. Having walked the walk, we deemed it safe to write down and share our review philosophy. It’s helped us a ton, and we hope you find it useful.

Q: Why do we do code reviews?

Lots of reasons!

  • To let a reviewer learn about the change, so that more than one person knows about it.
  • To help both the coder and reviewer learn how to write better code - language features, design patterns, testing strategies, conventions, etc.
  • To help catch bugs or oversights.
  • To have a second person think about test coverage for the change, and suggest ways to improve it.
  • To identify confusing parts of the change that should be documented or refactored.

Explicit non-reasons:

  • To let the author and/or reviewer feel smart.
  • To let someone else catch/be on the hook for bugs you wrote.
  • To nitpick style.

Q: Should code reviews be supportive?

We all know this is a gimme question, and the answer is a loud YES! Code reviews are not a time to look smart, or to make another person feel bad about a mistake or oversight. Remember that there is a human on the other end of the code review, and they worked hard on whatever change you are looking at. In return, we work hard to be thorough, thoughtful, kind, and professional when reviewing our colleagues’ code.

So be nice, supportive, and encouraging in your comments. When making style suggestions, offer them as opinions, not facts. Don’t focus on style to the exclusion of understanding what the change does.

Q: Wait, first you said no style nitpicks, then you talked about making style suggestions. Do style comments belong in code reviews, or not?

There are style comments, and then there are style comments. Style can be very important to the readability and simplicity of code. Small things like a confusing name or a deviation from convention can make new code hard to read.

If you, as a reviewer, are confused by something in a PR - even if you can figure it out yourself by poking around in the change - you should feel totally comfortable making suggestions to improve readability. Chances are, someone else reading the code later will have the same confusion. These suggestions might include asking for more comments, asking for variable renames, or suggesting alternative factorings or simplifications.

As an author, you might find yourself thinking “that’s silly, it’s obvious,” or “I like my way better.” You’re probably even right sometimes, and should push back on the comment if you feel strongly. But, keep in mind that code is read many more times than it is written, and you should give your reviewer’s opinion extra weight because it represents the majority who will read the code but did not write it.

On the other hand, as a reviewer, if an author writes code in a style that is not your favorite, but it doesn’t noticeably impact the readability of the code, then you can just grumble to yourself and move on with the rest of the review.

Q: Can a reviewer suggest big changes?

Yes! Up to and including “I don’t think we should make this change.” However, suggestions for big changes should happen as conversations, rather than statements of fact.

For example, “I have this idea for a big change that I think would make this better, let me explain…how does that sound? Do you think it makes sense to do that now, or later?” is better than “Here is a better way to do it: …, let me know when it’s ready to look at again.”

The timing of those big changes should also be a discussion. See the next question...

Q: Are code reviews a gate?

Yes and no. Every change to master should be code reviewed. Beyond that, it is the responsibility of the author and reviewer to agree on what feedback needs to be addressed before the PR is merged.

As a reviewer, it’s helpful to be explicit about what changes you think should be addressed before the PR goes in, as an immediate follow on, or some time in the future.

As an author, if you are inclined to skip or defer a piece of feedback, it’s expected that you’ll reply to the feedback with a reason for deferring or skipping it.

Q: I’m reviewing a change that is blocking a release or some other time-sensitive thing. Should I do anything differently?

You should give the change the same level of scrutiny, and leave the same comments you would otherwise. However, you may want to be more flexible in terms of which pieces of feedback you consider to be blocking vs. which can be addressed in a follow on PR. For example, you might explicitly note that style can be addressed in a follow on PR so that the functionality can land sooner.

A looming deadline is never an excuse to ignore quality, readability, or test coverage. It may be an excuse to defer addressing some of it in a particular PR though.

Q: When should I request a review?

For larger changes, getting an initial review early is often really helpful - if the reviewer suggests big structural changes, making them before polishing the rest of the change can save a ton of time. On the other hand, reviews do take time, so you don’t want to ask for one every time you push a single commit.

If you do ask for an early review, make your expectations and the state of the change clear to the reviewer. For example, saying “I don’t have tests yet, and I need to do a naming pass, but can you take a look at the class factoring I have here?” can save the reviewer a lot of time.

Q: What if something is really trivial - does it need a review?

Then it should be quick to code review! Even a PR that fixes a comment can teach the reviewer something.

Q: What if something is on fire and I can’t wait for a review?

We’re all adults, use your judgement! If you do have to submit something without a review, get a post-submit review as soon as you can.


A Few Of Our Favorite Things

By Bassey Etim on April 11, 2019


We’ve been testing the new Canopy app for the past couple of weeks, and wanted to share a few of our favorite articles and podcasts that we’ve recommended to our testers. Every day, we want to give people things to read and listen to that will delight them. We’re not building a typical news feed or serving up clickbait hate-reads — we’re simply focused on giving people great personalized recommendations that inform, entertain, challenge, and make you want to learn more.

Here is a taste of what our beta testers are reading and listening to right now:

Confiscation
By Anne Noonan
Blackbird

In Confiscation, Anne Noonan recalls an encounter with Charles Manson’s red X, carved onto the forehead of a neighbor in Springfield, Massachusetts in 1977. Her haunting essay explores the sense of self and all of the ways it becomes warped by others’ perceptions of it.

Soylent Beige: The Middle Gray of Taste
By Travis Diehl
e-flux

This is one of our editor’s favorite essays (that’s me, I’m writing this right now). It is equal parts:
- About the convergence between modern art and food
- A profile on Soylent and its creators
- A meditation on molecular gastronomy
- An argument that we should only eat art
- A product design review
- A reflection on modern corporations, which flaunt the corpses of their humanity
- An examination of faith in brands replacing faith in art
- A review of technology that creates a future without history
- An exploration of artists who become the food that brands consume

On Tinder
By Elizabeth Wolfe
n+1

In this episodic essay on the nature of desire, Elizabeth Wolfe explores the inherent danger of carnal need: We risk losing what we’ve built ourselves into to what we truly are.

Her form is deceptively simple: The evocative recounting of a Tinder conversation.

How the Enlightenment Sold Us a Twisted View of Human Nature
By Sean Illing
Vox

In this Vox interview with British historian David Wootton, we learn about how unfounded ideas about our nature have become human canon, poisoning our cultural discourse. It’s a common belief that humans are hardwired to pursue “power, pleasure and profit,” but evidence points in the opposite direction.

The strike isn’t just for wages anymore. It’s for ‘the common good.’
By Steven Greenhouse
The Washington Post

This Washington Post column from Steven Greenhouse captures an overlooked political moment: A merger of American collectivist movements. Labor is moving beyond advocating for workers, per se, and increasingly looks to protect society at large from the absolute dominance of capital. Widespread U.S. teachers strikes have come to embody the intersectionality of the labor movement.

The Remote Control Brain
NPR Invisibilia podcast

In this fascinating podcast, the NPR Invisibilia team interviews a woman who had the unprecedented opportunity to control her moods by remote control as part of a medical trial. The device is meant to help control the symptoms of severe Obsessive Compulsive Disorder.

Like what you see? Then sign up to join our iOS beta program and help us build a better internet. You’ll help us build out new features, catch bugs, and shape the future of private discovery.


Fighting (For) The Future

By Bassey Etim on March 06, 2019

You asked me the question only after I’d posed it to myself a few hundred times.

You’re a young journalist running a newsroom desk at The New York Times. Why would you leave to work at a 13-person startup?

The answer is pretty simple: For all of the same reasons I became a journalist in the first place.

Sometimes, I think about all of the ways I was taught to think — as a writer, in journalism school, at The Badger Herald and at the New York Times.

The shell of skepticism around a well of openness, the cascading impact of new information rearranging our values, the raw practical use of sensible language, the burden of social responsibility in even our smallest actions — the way good journalists train their minds is the raw stuff of ethical technology.

The weather, the terrain and the very air we breathe on the internet seems defined by those who believe in fictions because they have to believe them. I believe Arthur Koestler called them “applicable fictions.”

How many times have well-informed observers sat through tech presentations in which the speaker makes declarative statements about the underlying reasoning for their architecture, citing ‘facts’ about the behavior of online communities that have been contradicted by research and happen to provide convenient justification for a large revenue stream?

Thus far, the use of machine learning by major platforms has inspired little optimism. You’d be forgiven for believing it primarily exists to insulate executives whose competitive fire for profit has led them to erect online networks that threaten the stability of human societies.

The nature of the technology we’re using to order our society is not a fait accompli. When I first spoke with Brian and Annika about online social policy, I couldn’t help but recall an old X-Files tagline: Fight the future.

If I believe there is a way to build an online infrastructure that is profitable, yet declines to package our most private thoughts into commodities to be bought and sold on the open market, then I should get involved.

At the New York Times, part of my mission was to reimagine the online comments section. Most comments sections have been notoriously awful for decades. But it didn’t have to be that way.

After all, what self-respecting publication would allow contributors to post whatever they wanted, whenever they wanted, about any topic at all? Who would visit that site?

But that’s exactly the comments strategy for most outlets — an entire content section on their carefully-crafted website, with little editorial oversight. Madness. All for the sake of a few empty clicks.

We had an idea that couldn’t at all qualify as brilliant, but it was revolutionary all the same: Treat your comments the way you treat the rest of the content on your site -- with love and care. Years later, we’d built what is widely considered the best online comments section in the industry.

Here at Canopy, our idea might not qualify as brilliant, but it’s revolutionary all the same: There is something fundamentally wrong with the internet. We’re looking to build a platform that fixes the power imbalance between readers and powerful interests that feed our addictions and consume our private data.

Our mission isn’t just audacious. It’s one of the hardest jobs I can imagine.

So how could I resist?

Next comes the hard part: Figuring out how to actually do the things.

Our first product, a personalized content recommendation app, is meant to plant a flag for our values. Broadly, we are looking to optimize for reader delight. So I figured that first, we should define ‘delight.’ I examined delight against 8 axes, and asked my colleagues to help me collect answers:

  • When are we delighted?
  • Why are we delighted?
  • What is not delightful about the things we do on our phones?
  • What does delight inspire us to do?
  • When we are angry, how can it feel productive? When does it feel useless?
  • Does delight come from hard work, or serendipity, or both? How can delight come from each, if it can?
  • Are things ever delightful when there is no context? If so, how?

Then, I embarked on a bit of an intellectual journey to study the articles we might recommend and the ways that our machine learning model sorts them. After that, I assigned human language to each machine-sorted category of content.

For example, one category is: Factoids for travelers, deadly brushes between nature and man. Another is: Emergent modes of morality, economic and sexual consent.
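
For the curious, here’s a rough sketch of that workflow: let a model group articles, then have a human attach language to each machine-made group. The corpus, the clustering method, and the cluster count here are stand-ins chosen for illustration, not our actual model.

```python
# Toy sketch of the workflow described above: a model groups articles,
# then a human attaches language to each machine-made group. Everything
# below is an invented stand-in, not the real pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

articles = [
    "surviving a grizzly encounter on the appalachian trail",
    "what to know before hiking in lightning country",
    "the new ethics of workplace consent",
    "how gig platforms are rewriting economic consent",
]

# Turn each article into a vector, then group similar vectors together.
vectors = TfidfVectorizer().fit_transform(articles)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# The machine only produces anonymous group ids; the editorial step is reading
# each group and giving it a human name, like "deadly brushes between nature
# and man" -- that naming is human work, not something the model outputs.
for cluster_id in sorted(set(clusters)):
    members = [a for a, c in zip(articles, clusters) if c == cluster_id]
    print(f"machine-made group {cluster_id}: {members}")
```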

Once I had an understanding of what delight means in a modern online context and a solid grasp of how our model “thinks” about human editorial work, I repackaged those insights to create our curation guidelines, which define the kinds of articles, podcasts, videos, photos and art that we’ll be recommending in the app.

I hope those guidelines reflect the essence of who we are at Canopy.

“We recommend content that helps us find joy in human connections.”

Oh, and by the way, we are hiring.


Should We Model Ethics?

By Sarah Rich on February 08, 2019

(Note: This post is based off a conversation between Sarah Rich and Erica Greene.)

As mentioned in our previous post, Canopy recently attended the FAT* conference in Atlanta. Because FAT* is about fairness, accountability, and transparency in computer systems, there were moments at the conference that made me, as a woman in computer science, feel like the president of the Hair Club For Men; I’m not just an implementer of systems with implications for fairness, but I’m also on the receiving end of those same systems.

In this year’s opening keynote, Fairness, Rankings, and Behavioral Biases, Jon Kleinberg discussed his joint work with Manish Raghavan in which they mathematically model implicit bias as well as the use of the Rooney Rule to counteract it. The Rooney Rule’s name comes from a practice in the NFL that requires league teams to interview minority candidates for head coaching and senior football operations jobs. Since its adoption in the NFL in 2003, the Rooney Rule has spread to other industries, and the moniker is now used to refer more broadly to the strategy of requiring hiring committees to interview candidates from underrepresented groups with the goal of improving either the ethnic or gender composition of an organization.

Of course, this practice of intentionally diversifying candidate pools has not been without its detractors. In particular, in the tech world, the notorious notion of “lowering the bar,” rather well-characterized by this popular blog post, continues to rear its head. The argument goes: there just aren’t very many qualified candidates and so to consider diverse ones you must lower your standards. Since structural barriers to entry often prevent women and underrepresented minorities from appearing to be as qualified according to traditional standards, advocating for their inclusion in the interview process can be perceived as advocating for workforce diversity at the expense of workforce quality.

Kleinberg’s work addresses this point by first formalizing the opposition’s argument. In mathematical optimization, when you are trying to find the best value of something, it makes sense to pick from among as many choices as you can. For instance, if I am looking for the best deal on a used Toyota Prius, it makes sense to not only look at the dealership down the street from me, but also to consider all dealers within, say, a 50 mile radius. The more options I have, the better the deal I am likely to find. This seems very intuitively clear and correct, and so many people get hung up on this point when they think about changing the way we consider candidate qualifications in interviews. Indeed, although I am a proponent of hiring candidates from underrepresented groups, I often faltered when trying to rebut this specific formulation of the argument against “lowering the bar”. You pretty much have to reject the mathematical formulation and say something like “People are not Toyota Priuses!” Which is unsatisfying.

So it was refreshing to see Kleinberg, rather than appealing to all the empirical research that shows that diverse teams actually do perform better, instead say: Okay, let’s accept this optimization formulation. How do we build a reasonable model of the world so that the empirical result, that the Rooney Rule has favorable outcomes, is a natural consequence of the model? Yes! An itch I’d not even fully been aware of in the part of my brain that chafed at the “bar lowerers” and their cool rationality was finally being scratched! I would never again be caught without a mathematically rigorous rejoinder when arguing about why the tech sector should work much harder to include more people from underrepresented backgrounds.

As the talk unfolded, Kleinberg proved that in the presence of bias against a group, using the Rooney Rule can lead to the hiring of better talent. The model conceives of a distribution of talent in which bias scales back how much of an applicant’s talent a biased decision-maker can perceive. I could picture this abstract notion of talent mapping concretely onto citation counts, GitHub contributions, or number of referrals. I was into it. Everything about this seemed totally awesome.
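
For readers who want the shape of the argument, here is a stripped-down version of the setup as I understood it from the talk. The notation is mine, and it leaves out most of the paper’s machinery.

```latex
% Candidate $i$ has true talent $X_i$ drawn from some distribution.
% A biased evaluator perceives candidates from the affected group at a discount:
\[
\tilde{X}_i =
\begin{cases}
X_i & \text{if $i$ is in the majority group,} \\
X_i / \beta & \text{if $i$ is in the affected group, with bias factor } \beta > 1.
\end{cases}
\]
% Ranking finalists by perceived talent $\tilde{X}_i$ passes over affected-group
% candidates whose true $X_i$ exceeds that of the majority candidates selected.
% Requiring at least one affected-group finalist (the Rooney Rule) can therefore
% raise the expected true talent of the eventual hire, under the model's assumptions.
```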

And then, as Liz Lemon would say, “Twist!”

My colleague and fellow machine learning engineer, Erica Greene, had typed this into our company’s FAT* Slack channel:

“...the keynote today was an hour long talk about a mathematical justification for the Rooney Rule that contained no mention of 1) any ethical/moral justifications or 2) any mention of the many studies that show that diverse teams are more productive, effective and profitable even if the individual members are less qualified or experienced… I thought it was phenomenally out of touch.”

NO!!! How could this be?! How was it that this piece of mathematics that I was so excited about was being perceived so differently by one of my colleagues? Once we got back to Boston we decided to take some time out to discuss our differing viewpoints. Below is a distilled version of our conversation, edited for brevity, clarity, and profanity.

Erica: I think algorithmic fairness has become a hot topic, and as a result, many people have joined the discussion without being sensitive to the ethical, social justice, and civil rights aspect of the work itself. The Kleinberg talk is a prime example of this: great work from a mathematical perspective, but tone deaf from a moral and ethical perspective. He cedes so much ground in the problem definition that the conclusion isn’t only useless, it’s harmful.

Sarah: I don't see it as ceding ground to address the arguments of the opposition. I think it's necessary and valuable, especially if you can prove them wrong. So I'm not saying this should be the only argument. I'm saying it's one I'm glad we have, and it was new.

Erica: My feeling is that if I find myself telling someone that there exists some delta and some alpha such that it is provably true that women should be included in the hiring pool, then I’ve already lost the debate. I don't want mathematical models of ethical issues.

Sarah: I want mathematical models of everything.

Erica: Ah ha. I think we’ve found the root of the issue.

Sarah: I do understand where you are coming from. Any utility-oriented argument for diversity raises my hackles a bit because if you are fundamentally non-racist/sexist, the idea that you have to prove that women and people of color are actually not worse than white men and so should be included does seem like engaging on a topic that should not even merit discussion. But once we're talking about it, I don't see a difference between citing empirical research that diverse teams perform better and a mathematical model of bias that proves the Rooney rule.

Erica: I agree that the primary argument should be a moral one. I would love for that to convince everyone. But if we need a second argument, my preference is to highlight that diverse teams perform better. It’s something I really believe, that I’ve seen in practice and there are many studies to back up.

The problem with Kleinberg’s bias argument is that it requires that we accept some metric that women almost certainly do worse on. Instead, we should be talking about people’s "value add". How does this person add a new perspective to our team that will make us more creative and productive? Not, how biased do we think we are at judging their skills on this narrowly defined axis?

Sarah: I believe that math should be capable of expressing and representing anything. Given that math has historically been developed and used primarily by men of privilege, expanding our mathematical understanding of the world to represent my lived experience feels like a really beautiful gesture, albeit one that I can see being perceived as out of touch.

Additionally, I don't like thinking of myself as a "value-add." I want to think of myself as a valuable engineer, apart from any woman traits like soft skills or understanding the product from a “female viewpoint.”

Erica: You’ve totally nailed a fundamental tension when talking about diversity. If we say that it’s valuable to have more people with background X because they have A and B skills, then we open ourselves up to the James Damore (UGH) argument that A and B skills are not well suited for software engineering and that people with background X should stick to product management.

But if you say that all populations have basically the same point of view and skill set, then why not simply hire based on quantitative metrics of accomplishment?

Sarah: That's the thing the Kleinberg paper is trying to correct for. The idea is that the system is biased so these metrics are not objective; that's why the argument supports including the best candidates by these metrics but separated by race/gender/sensitive attribute. Because the system is rigged, you can't expect marginalized groups to perform equally well by these metrics even if they are just as talented. So, you control for this by using the Rooney rule. The thing I anticipate you'll object to is that even in the pool of marginalized candidates, we are still using these metrics to compare candidates against each other, and you might fundamentally reject these metrics altogether as the fruit of a poison system.

Erica: Ha, yes. I object to the metrics. Many big tech companies are full of very "smart" men who have exactly the same training and can tell you the Big O complexity of all the algorithms. But these companies keep screwing up when it comes to fairness and they can't seem to figure out why.

I don't think these problems would be fixed if they hired a bunch of women with exactly the same training, background and perspective. The problem is that they have a single notion of what "smart" is. And they're so enamored with themselves, that they can't begin to change it.

Sarah: To try to summarize your point: The paper was making this assumption about how we say who is the "best" even constrained to a specific candidate pool and glossed over the fact that using traditional metrics to select between a pool of "diverse" candidates will only reify the values that are entrenched in our existing system. Like, the most valuable voices in marginalized groups generally speaking are not going to be those of people who have fully committed themselves to competing in a rigged environment for bro points.

Erica: Yes, exactly. Another good example is politics. I think there is inherent value in having elected officials who come from a diverse set of backgrounds. Kleinberg-esque reasoning might look like "This woman of color went to a less prestigious college than this white dude, but don’t worry! We know the world is biased and WE HAVE A SOLUTION. We will give her a 20% boost when we’re weighing whether to endorse her and call it a day.”

Not the right framing to get more representative representation.

Sarah: Definitely something to think about for any company trying to remove bias from their hiring practices while also reaping the benefits of a diverse workforce.

In the end, Erica and I agreed to disagree on whether the Kleinberg model is a useful tool when advocating for the importance of diversity and inclusion. But we both came away from this conversation with a deeper understanding of the nuanced, sticky topic that is diversity and fairness in hiring. It was a great reminder that people with seemingly similar backgrounds can have wildly different viewpoints on this complex set of issues. And that we should all keep talking to each other.

If you want to join the conversation at Canopy, we’re hiring.


Canopy Goes to FAT*

By Erica Greene on February 05, 2019

Last week, a group of us from Canopy traveled to Atlanta to attend FAT*, a multi-disciplinary conference on fairness, accountability and transparency in socio-technical systems.

Do you know what that means? We weren’t sure either, but last year’s line-up was a stellar collection of computer scientists trying to prove bias, philosophers trying to define bias and lawyers trying to get judges to make decisions based on p-values. So we decided to face the hordes of football fans who were descending on Atlanta for the Super Bowl and check out this year’s program.

We had a delightful time, learned a lot, and have decided to share with you some of our takeaways.

(And just to get it out of the way early, yes, the conference acronym is terrible. They’re working on it.)

Let’s move from Fairness and Transparency to Justice and Ethics. Or maybe simply Human Rights?

During the town hall discussion on the second day of the conference, someone suggested that the FAT* community change its focus from Fairness and Transparency to Justice and Ethics.

That comment reminded me of a discussion I attended a few months ago where Philip Alston from the NYU School of Law said (jokingly!) that he wanted to “strangle ethics”.

What I always see in the AI literature these days is ‘ethics’. I want to strangle ethics... ethics are completely open ended. You can create your own ethic. But human rights, you can’t. They're in the Constitution, they’re in the Bill of Rights, they’ve been interpreted by courts, there are certain limits. And until we start bringing those in to the AI discussion, there’s no hard anchor.

So what should we focus on?

Most people at FAT* would probably agree that the AI community needs to address the negative, real world consequences of discriminatory algorithmic decision making. But from the wide range of talks, there appears to be substantial disagreement about what addressing these problems looks like.

I agree with Alston that fairness and ethics are fuzzy concepts, and that this community should challenge itself to be accountable for how our work impacts real people. This does not mean that we need to forgo mathematical and algorithmic approaches, but that we should pull on threads until we get down to the human level.

I am not familiar with a lot of work that does this successfully, so I am going to reach outside of this year’s FAT* program to highlight an example. Last year, Virginia Eubanks published a phenomenal book titled Automating Inequality that investigates the impacts of software automation and algorithmic decision making on poor and working-class people in America. What makes Eubanks’ work so compelling is that she combines historical context with data analysis and a tremendous amount of on-the-ground reporting. As computer scientists, we might not have the training to do that, but we should be educating ourselves about the context that our work exists in and reaching out to historians, journalists, case workers and lawyers who engage with the human rights impact of the systems we work on.

Eubanks shows that it is possible to do “full-stack” work on algorithmic human rights. Now we just need more people to give it a try.

In practice, the biggest challenges and biggest wins come down to data.

One thing that stood out to me as the talks progressed was how different the theoretical work was from the applied work. The CS literature on algorithmic fairness largely focuses on metrics, algorithms and what can be proven about the two. We saw talks proposing new metrics, new algorithms, and a very good empirical study of the effectiveness and robustness of combinations of the two -- which sadly concluded that “fairness interventions might be more brittle than previously thought”.

But the real-world applications had a very different flavor. They tended to revolve around data. What type of data would we need to empirically prove bias? What domain-specific metrics should we look at? Can we get around practical hurdles like missing information?

There were two papers addressing these issues that really stood out.

The first was an analysis of the Bayesian Improved Surname Geocoding (BISG) method, titled “Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved”. BISG is a technique that is widely used to impute someone’s racial group given their last name and location. Why would anyone do this? Because some industries are legally required to have fair decision practices, but are at the same time forbidden from collecting information about protected attributes from their customers or applicants. Both regulators and the companies themselves rely on these “proxy models” to guess an unknown protected class based on observed variables in order to audit for fair practices. The paper analyzes the bias introduced by these proxy models and proposes an improvement to the most commonly used models. This area of research is both mathematically interesting and has the potential for broad impact.
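
For context, BISG is essentially one application of Bayes’ rule under a conditional-independence assumption. Here’s a toy sketch of the calculation with invented probability tables -- real implementations use calibrated census data and far more surnames and geographies.

```python
# Toy sketch of the BISG idea: combine surname-based race probabilities
# with the racial composition of a geography via Bayes' rule. The numbers
# below are invented; real implementations use census tables.
P_race_given_surname = {   # P(race | surname), e.g. from surname lists
    "garcia": {"white": 0.05, "black": 0.01, "hispanic": 0.92, "asian": 0.02},
}
P_race_given_geo = {       # racial composition of a census block, P(race | geo)
    "block_123": {"white": 0.60, "black": 0.20, "hispanic": 0.15, "asian": 0.05},
}
P_race_overall = {"white": 0.62, "black": 0.13, "hispanic": 0.18, "asian": 0.07}

def bisg(surname: str, geo: str) -> dict:
    """P(race | surname, geo), assuming surname and geography are
    conditionally independent given race (the standard BISG assumption)."""
    unnormalized = {
        r: P_race_given_surname[surname][r]
           * P_race_given_geo[geo][r]
           / P_race_overall[r]
        for r in P_race_overall
    }
    total = sum(unnormalized.values())
    return {r: p / total for r, p in unnormalized.items()}

print(bisg("garcia", "block_123"))
```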

In the following session, Sendhil Mullainathan presented the incredible work he did with Ziad Obermeyer in which they identified “significant racial bias” in an automated selection process for a beneficial medical care program that impacts 70 million people. The bias stemmed from the decision to train the selection algorithm to keep costs consistent across racial groups, but not to keep healthcare needs consistent across racial groups. Black patients tend to have lower healthcare costs, and thus were being enrolled in this beneficial program at half the rate they would have been without the bias. The authors write that while insurers may focus on costs, “from the social perspective, actual health – not just costs – also matters.”
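
To see why the choice of training label matters so much, here’s a toy simulation of the effect described above. The data, the group split, and the 30% spending gap are synthetic numbers made up for illustration -- this is not the paper’s data or analysis.

```python
# Toy illustration of the label-choice problem: if spending is lower for one
# group at the same level of health, a model trained to predict cost will
# under-select that group for extra care. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
group = rng.integers(0, 2, n)                            # 0 = group A, 1 = group B
health_need = rng.gamma(shape=2.0, scale=1.0, size=n)    # true underlying need

# Synthetic assumption: equal need, but group B generates ~30% lower cost
# (less access to care, less money spent on them).
cost = health_need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.1, n)

# Screen the top 10% by each label and compare group B's share of those flagged.
def top_decile_share(score):
    flagged = score >= np.quantile(score, 0.9)
    return (group[flagged] == 1).mean()

print("Group B share if screened by true need:", round(top_decile_share(health_need), 2))
print("Group B share if screened by cost:     ", round(top_decile_share(cost), 2))
```

Screening by cost flags far fewer people from the group whose care is underfunded, even though the two groups were constructed to have identical needs.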

In addition to identifying this problem, Sendhil and Ziad reached out to the company that designed the algorithm and have worked with them to reduce the bias against black patients by 84%!

These two papers speak to the heart of the questions the FAT* community is trying to address. How do we identify when algorithms are discriminating against historically marginalized groups? What can we do about it in practice? And when are the stakes too high, the potential damage too much, should we toss the algorithm out the window?

These “rubber-hits-the-road” papers were a reminder that it is often access to data and a deep understanding of a domain that leads to real impact.

How we’re contributing to this work at Canopy

Here at Canopy, we are creating a personalized content discovery architecture that keeps your data on device. We know that we are complicating our ability to audit for fairness by not storing any user data on the server, but we are not scared of complicated problems. We believe that people deserve to be in control of their digital identities. And for too long, companies have presented us with a false choice when it comes to our privacy. At Canopy, we are committed to building delightful products that are both private and fair.

If you want to follow along with us on that journey, we’re on Twitter, where we’ll be sharing more about our approach to fairness in an age of privacy. And please come to FAT* next year in Barcelona -- we’ll be sure to be there.


Bassey Etim and Matthew Ogle Join Canopy

By Annika Goldman on January 15, 2019

Welcome Bassey and Matthew to the Canopy team!

Today we are excited to announce that Bassey Etim and Matthew Ogle have joined the Canopy team. In their previous work lives, Matt and Bassey have both built ethical products and services at scale that allow people to express themselves, learn from each other and discover new worlds. At Canopy they will focus on developing a new technical architecture that protects personal data and delivers an amazing, personalized content experience. They will both be based in our Brooklyn office.

At Canopy, we are working on something entirely new in the world of personalization and discovery. The addition of Bassey and Matt to the team will help us achieve our ambitious goals and show the world that you can do things differently through a combination of technology and human curation.

Matthew Ogle comes to Canopy from Instagram, where he worked across multiple teams and products — most recently Explore and Hashtags — to build features that help people discover and connect within Instagram's global community of over one billion. Prior to Instagram, Matt was a Product Director at Spotify responsible for launching Spotify's next generation of personalization and discovery features including Discover Weekly, Release Radar, and Daily Mix. Matt will be leading the overall consumer product vision for Canopy as we develop a broader private personalization architecture.

Bassey Etim is an artist, journalist, and technology manager. He spent the last 10 years as the Community Desk Editor at The New York Times, where he managed a team that built one of the most innovative community forums in the world. He ran the 16-person community management desk within the Times newsroom and was the editorial head of the Times’ Community development team. Bassey will lead development of the voice of Canopy’s recommendation architecture. His experience managing conversations at scale will help provide a human context and voice to our content recommendations.

Canopy launched in 2018 and is focused on giving users a new way to explore the world — without ever revealing their personal data. We believe privacy and personalization should have been built into the Internet from the beginning: with users in control of what they discover and how. We are developing a new kind of architecture, but also a new kind of company that optimizes for delight and discovery.


Let's Fix The Internet Together

By Annika Goldman on January 09, 2019

Hi! I’m Annika and I run strategy and product. I joined Canopy to reimagine the way the internet works. Like most of you, I have experienced amazing, personalized recommendations from big media platforms. I have seen the magic in creators finding their most devoted fans. I have learned new things from people I’ve never met and may never meet. I have celebrated that the internet can amplify vulnerable and marginalized voices and give people tools to amplify the voices of others. But I am - to be frank - angry that this place of discovery, creativity and connectivity is now optimized for engagement, revenue, and the retention and sale of personal data.

For too long, the tech industry has believed that the only way to grow was through personalization powered by user data. This business model led platforms to force us to give up control of our digital selves so that we could connect with a friend, read an article, plan a trip, listen to a song, or watch a video. This obsession with technical innovation and growth at all costs caused the tech industry to turn a blind eye to the social impacts of its creations. It’s left people vulnerable to misuse of their data, hacking, manipulation, radicalization, hate speech, and violence. It has led to the development of products - facial recognition that perpetuates gender and skin-type biases, resume scanning that compounds biases against women - that further entrench biases and oppressive systems. Corporate decisions to track, retain, sell, and exchange personal data power these abuses, which seems like an unreasonably high price to pay for recommendations.

What if you could have an amazing, personalized experience without being manipulated or deceived into giving up more and more of yourself? What if you could challenge bias and deception by taking control of your digital presence? With Canopy, you can.

For the past several months, we’ve been working hard at this very problem. If we’re going to fix this, and fix it now, what would the architecture of the internet look like? If you were going to commit to building mindfully from the very beginning, how would the user experience change? Our founder, Brian Whitman, has written about what we’re doing technically. At its core, Canopy is a protective layer between you and the internet/cloud/server. Our architecture keeps your data on your devices, and in your hands. This means that instead of third parties taking your data, manipulating it to figure out how you make decisions, and then using that knowledge for their own ends, you stay in control of your digital identity. With Canopy, you have the power to tell the internet how you want to be understood. You dictate what experience you want to have and who has access to your data.

A slightly annotated drawing explaining Canopy from our launch in November.

Since our founding last year, we’ve laid the groundwork of this new way, showing that private personalization can work at scale. And because we want you to experience it for yourself, we’ve built a personalized discovery app. This product is the first application of our architecture; our long-term goal is to change the value proposition of the internet so that you’re in control. You’ll get better experiences and you’ll stop paying for them by giving away who you are.

If you want to test our app and have an impact on how the future of the internet will work, reach out. We believe that the more perspectives the better, and we would love to have yours.

Join me and our Head of Operations, Urcella Di Pietro, as we find the good on the internet.


Building off Gratitude

By Urcella Di Pietro on November 29, 2018

As my fellow CANOpeers and I signed off for the Thanksgiving weekend, we all posted messages on Slack about what we are thankful for this holiday season. There were the typical posts of gratitude about pets, family, friends and general health. My heart always acknowledges the importance these things hold in my life, but what was different this year for me was the recognition of the gratitude I felt for the place where I work and what I get to do here. Seeing this new item on my thankful list inspired me to examine further what exactly is different about Canopy compared to other jobs I’ve had in the past (both great and terrible).

Hiring for Potential

Prior to ever setting foot in the Canopy office (we didn’t even have a company name yet), I was told things were going to be different. This was pretty much the first thing our founder, Brian, said during my interview at a bar in Greenpoint, Brooklyn. He wanted people who were committed to making the internet a safer place. “Great, done, I’m in,” I said before he was finished with his spiel about how we would be building a new kind of technology and product that would give consumers an amazing personalized content experience while at the same time protecting their personal data. He could have said just about anything, as I was pretty excited at the chance to help build a new company. Despite having my jazzed-up résumé in tow, I was told I was being hired for my potential, which was the first thing I felt was going to be truly different about Canopy. My past was the past, a prelude to now, but the real show was about to begin, and it was indeed going to be unlike anything I had encountered before.

Optimizing for Delight

Being part of a team that builds for privacy and optimizes for delight has been profoundly rewarding. In the past, you had to consent to unmitigated access to the data on your phone to be able to use apps that could end up making you miserable. Never before has getting better recommendations without sacrificing privacy seemed so achievable. Getting to build a product that you feel good about using just feels better — and it’s easy to get evangelical about.

Doing the “Hard Work” a.k.a. Commitment to Mission

Laying the groundwork for an ethical tech company has not come easy. At every turn, my team and I have had to ask ourselves tough questions: Can this product be used for malicious reasons? Is this product needed? Can the algorithms be reverse-engineered? Are we promising something we can’t deliver? Can we delight everyone (is anyone delighted)? Are we doing something beneficial to society or just making a lot of noise? The questions never go away, and the demand for answers grows stronger every day we read stories about Facebook, Google and others misusing user data and the disastrous consequences that brings. It’s a marathon and we need to pace ourselves accordingly — we never want to skip around or cut corners. It’s part of why we do this: We’re getting it right this time, and that has profound meaning for everyone who works here.

When I think about all the work we have ahead of us, I don’t feel worried or weary. It really feels like a great gift that I get to come here and do the big-deal work with like-minded, passionate folks at the top of their fields. People that left cushier positions and better-paid gigs at larger, more established companies for the opportunity to make a difference. And when I get a gift of that size I feel enormously grateful, and that gratitude is what I plan to build an entire company on.


Welcome to Canopy!

By Brian Whitman on November 15, 2018

Today I am introducing Canopy, a new company focused on giving you a new way to explore the world—without ever revealing your personal data. We're building Canopy the way we believe personalization should have been built from the beginning: with you in control of what you discover and how. For too long, the accepted standard for how we learn about people on the internet has benefited the wrong side. Over the past year, a great group of people have come together to work on a new way to do personalization and discovery on the internet. We are building a new kind of technology and product, but also a new kind of company that optimizes for delight and discovery.

More taste decisions than ever are being mediated by an algorithm—and those algorithms are running on more and more personal data than ever before. The data you give away to services can be used against you, or sold, or lead to results that you don’t understand. We’ve all seen the “creepy” side of personalization at work, aligned with revenue or time spent rather than with improving your experience or happiness. And in the worst cases we saw people being radicalized by aggressive personalization, or state actors stealing and manipulating personal data for their own purposes.

Canopy Brooklyn in October 2017. How did we fit two people in here?

After I left Spotify in 2016 I began thinking of focused ways we could change the way things worked. I started talking with my friends David Blei and Ben Recht, two well-known machine learning and recommendations professors who have worked on or invented what are now the core parts of your personalized internet experience in 2018. They immediately signed on to help and we spent some time wondering what a fully private and transparent discovery system would look like. Could there be a music recommender that never knew what songs you've listened to, or a personalized browsing experience that never knew what websites you've visited? Could we build a transparent discovery tool that gave control back to the person doing the discovery and power back to the creators?

We’ve found a new way. We’ve shown already that a company having full access to personal data is not a prerequisite for great discovery. We are carefully applying a combination of on-device machine learning and differential privacy while taking advantage of the enormous step change in power of our mobile devices. We're now moving most of the math we used to do on big clusters of servers against your personal data directly to the device in your hands. We guarantee we'll never see your personal interactions with the world, and we guarantee we'll give you an even better discovery experience.

The science behind what we're doing at Canopy is just a small piece of the puzzle. We are plotting an incredibly pragmatic approach to discovery, with a large focus on editorial: finding viewpoints, understanding root causes, uncovering and programming niches. We’ve already built a different sort of company. Most of us have done this before. Diversity of our team is our first priority, as our product needs to reach the entire world of ideas. We’ve brought on scientists, lawyers, marketers, engineers, editors, professors and operations experts who’ve all helped build great discovery experiences you use every day at Spotify, Instagram, Google, and The New York Times. Our early investors include founders and executives from Spotify, Keybase, Splice, WeWork, and MIT, as well as investors focused on the decentralized internet, machine learning and data.

Much improved space today with the same chairs. We're up to 15: 4 in Brooklyn, 5 in Boston and a few remote.

Today, we’re introducing Canopy to the world to share what we’re building. The first way you’ll interact with us is our mobile app, coming early next year. It’ll be the first time ever you can explore and discover new things without the services and company on the other end watching your every move. You don’t even tell us your email address. You’ll find it a refreshing new experience that you can believe in. This app is just a first step on our way to a better kind of system that gets you, a better kind of company, and eventually a better internet. We hope you'll join us for what's next.