Fighting (for) the future

You asked me the question only after I’d posed it to myself a few hundred times.

You’re a young journalist running a newsroom desk at The New York Times. Why would you leave to work at a 13-person startup?

The answer is pretty simple: For all of the same reasons I became a journalist in the first place.

Sometimes, I think about all of the ways I was taught to think — as a writer, in journalism school, at The Badger Herald and at The New York Times.

The shell of skepticism around a well of openness, the cascading impact of new information rearranging our values, the raw practical use of sensible language, the burden of social responsibility in even our smallest actions — the way good journalists train their minds is the raw stuff of ethical technology.

The weather, the terrain and the very air we breathe on the internet seem defined by those who believe in fictions because they have to believe them. I believe Arthur Koestler called them “applicable fictions.”

How many times have well-informed observers sat through tech presentations in which the speaker makes declarative statements about the underlying reasoning for their architecture, citing ‘facts’ about the behavior of online communities that have been contradicted by research, and that happen to provide convenient justification for a large revenue stream?

Thus far, the use of machine learning by major platforms has inspired little optimism. You’d be forgiven for believing it primarily exists to insulate executives whose competitive fire for profit has led them to erect online networks that threaten the stability of human societies.

The nature of the technology we’re using to order our society is not a fait accompli. When I first spoke with Brian and Annika about online social policy, I couldn’t help but recall an old X-Files tagline: Fight the Future.

If I believe there is a way to build an online infrastructure that is profitable, yet declines to package our most private thoughts into commodities to be bought and sold on the open market, then I should get involved.

At The New York Times, part of my mission was to reimagine the online comments section. Most comments sections have been notoriously awful for decades. But it didn’t have to be that way.

After all, what self-respecting publication would allow contributors to post whatever they wanted, whenever they wanted, about any topic at all? Who would visit that site?

But that’s exactly the comments strategy for most outlets — an entire content section on their carefully crafted website, with little editorial oversight. Madness. All for the sake of a few empty clicks.

We had an idea that couldn’t at all qualify as brilliant, but it was revolutionary all the same: Treat your comments the way you treat the rest of the content on your site — with love and care. Years later, we’d built what is widely considered the best online comments section in the industry.

Here at Canopy, our idea might not qualify as brilliant, but it’s revolutionary all the same: There is something fundamentally wrong with the internet. We’re looking to build a platform that fixes the imbalance of power between readers and the powerful interests that feed our addictions and consume our private data.

Our mission isn’t just audacious. It’s one of the hardest jobs I can imagine.

So how could I resist?

Next comes the hard part: Figuring out how to actually do the things.

Our first product, a personalized content recommendation app, is meant to plant a flag for our values. Broadly, we are looking to optimize for reader delight. So I figured that first, we should define ‘delight.’ I examined delight along eight axes, and asked my colleagues to help me collect answers:

- When are we delighted?
- Why are we delighted?
- What is not delightful about the things we do on our phones?
- What does delight inspire us to do?
- When we are angry, how can it feel productive? When does it feel useless?
- Does delight come from hard work, or serendipity, or both? How can delight come from each, if it can?
- Are things ever delightful when there is no context? If so, how?

Then, I embarked on a bit of an intellectual journey to study the articles we might recommend and the ways that our machine learning model sorts them. After that, I assigned human language to each machine-sorted category of content.

For example, one category is “Factoids for travelers, deadly brushes between nature and man.” Another is “Emergent modes of morality, economic and sexual consent.”
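(For the technically curious, here’s a minimal sketch of what that workflow can look like in practice: a model sorts articles into categories, and a human attaches editorial language to each one. The method, the sample headlines and the labeling step below are illustrative assumptions on my part, not a description of our actual model.)

```python
# Illustrative sketch only: one generic way to let a model sort articles
# into categories, then attach human editorial language to each cluster.
# The approach (TF-IDF + k-means) and the sample headlines are assumptions,
# not Canopy's actual recommendation pipeline.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

articles = [
    "A hiker survives three nights alone in a Utah slot canyon",
    "What every traveler should know about close calls with wildlife",
    "What counts as consent in the gig economy",
    "How couples are renegotiating the ethics of modern relationships",
]

# Turn each article into a vector the model can sort.
vectors = TfidfVectorizer(stop_words="english").fit_transform(articles)

# The machine's half of the work: group the articles into categories.
cluster_ids = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Read what landed together in each machine-sorted category...
for cid in sorted(set(cluster_ids)):
    print(f"cluster {cid}:")
    for article, c in zip(articles, cluster_ids):
        if c == cid:
            print("  -", article)

# ...then do the human half: replace each opaque cluster ID with language
# an editor would actually use, e.g. "Factoids for travelers, deadly
# brushes between nature and man."
labels = {cid: f"(human-written label for cluster {cid})" for cid in set(cluster_ids)}
```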

Once I had an understanding of what delight means in a modern online context and a solid grasp of how our model “thinks” about human editorial work, I repackaged those insights to create our curation guidelines, which define the kinds of articles, podcasts, videos, photos and art that we’ll be recommending in the app.

I hope those guidelines reflect the essence of who we are at Canopy.

“We recommend content that helps us find joy in human connections.”

Oh, and by the way, we are hiring.

Published by: Bassey Etim
Date: March 6, 2019