Should we model ethics?

As mentioned in our previous post, Canopy recently attended the FAT* conference in Atlanta. Because FAT* is about fairness, accountability, and transparency in computer systems, there were moments at the conference that made me, as a woman in computer science, feel like the president of the Hair Club for Men: I’m not just an implementer of systems with implications for fairness, but also on the receiving end of those same systems.

In this year’s opening keynote, “Fairness, Rankings, and Behavioral Biases,” Jon Kleinberg discussed his joint work with Manish Raghavan in which they mathematically model implicit bias as well as the use of the Rooney Rule to counteract it. The Rooney Rule’s name comes from a practice in the NFL that requires league teams to interview minority candidates for head coaching and senior football operations jobs. Since its adoption in the NFL in 2003, the Rooney Rule has spread to other industries, and the moniker now refers more broadly to the strategy of requiring hiring committees to interview candidates from underrepresented groups, with the goal of improving the ethnic or gender composition of an organization.

Of course, this practice of intentionally diversifying candidate pools has not been without its detractors. In particular, in the tech world, the notorious notion of “lowering the bar,” rather well-characterized by this popular blog post, continues to rear its head. The argument goes: there just aren’t very many qualified candidates and so to consider diverse ones you must lower your standards. Since structural barriers to entry often prevent women and underrepresented minorities from appearing to be as qualified according to traditional standards, advocating for their inclusion in the interview process can be perceived as advocating for workforce diversity at the expense of workforce quality.

Kleinberg’s work addresses this point by first formalizing the opposition’s argument. In mathematical optimization, when you are trying to find the best value of something, it makes sense to pick from among as many choices as you can. For instance, if I am looking for the best deal on a used Toyota Prius, it makes sense to not only look at the dealership down the street from me, but also to consider all dealers within, say, a 50-mile radius. The more options I have, the better the deal I am likely to find. This seems intuitively clear and correct, and so many people get hung up on this point when they think about changing the way we consider candidate qualifications in interviews. Indeed, although I am a proponent of hiring candidates from underrepresented groups, I have often faltered when trying to rebut this specific formulation of the argument against “lowering the bar”. You pretty much have to reject the mathematical formulation and say something like “People are not Toyota Priuses!” Which is unsatisfying.
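That “bigger pool, better best pick” intuition really is mathematically sound on its own terms, which is part of what makes it so hard to argue against. Here is a quick numerical sketch; the dollar figures, dealer counts, and distribution are entirely made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical: each dealership offers an i.i.d. random discount on the Prius.
# The expected *best* discount can only grow as the pool of dealerships grows.
for n_dealers in (1, 5, 20, 100):
    # 50,000 simulated shopping trips, n_dealers quotes per trip
    discounts = rng.uniform(0, 3000, size=(50_000, n_dealers))
    best = discounts.max(axis=1).mean()
    print(f"{n_dealers:>3} dealers: expected best discount ≈ ${best:,.0f}")
```

Restricting the pool can never improve the optimum, which is exactly the intuition the “lowering the bar” framing leans on.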

So, it was refreshing to see Kleinberg, rather than appeal to all the empirical research showing that diverse teams actually do perform better, instead say, “Okay, let’s accept this optimization formulation. How do we build a reasonable model of the world so that the empirical result, that the Rooney Rule has favorable outcomes, is a natural consequence of the model?” Yes! An itch I’d not even fully been aware of, in the part of my brain that chafed at the “bar lowerers” and their cool rationality, was finally being scratched! I would never again be caught without a mathematically rigorous rejoinder when arguing about why the tech sector should work much harder to include more people from underrepresented backgrounds.

As the talk unfolded, Kleinberg proved that in the presence of bias against a group, using the Rooney Rule can lead to the hiring of better talent. The model conceives of a distribution of talent in which bias scales down the talent that biased decision-makers are able to perceive in an applicant from the affected group. I could picture this abstract notion of talent mapping concretely onto citation counts, GitHub contributions, or number of referrals. I was into it. Everything about this seemed totally awesome.
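To make the mechanism concrete, here is a minimal simulation sketch in the spirit of the model. Everything below, the distributions, the bias factor, the pool sizes, is my own illustrative choice, not the paper’s actual construction (Kleinberg and Raghavan work with power-law potentials and prove their results analytically):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper).
N_MAJORITY, N_MINORITY = 80, 20  # applicants per group
BETA = 3.0                       # bias: perceived talent = true talent / BETA
SHORTLIST = 4                    # interview slots
TRIALS = 20_000

def hire(rooney_rule: bool) -> float:
    """Simulate one search; return the true talent of the person hired."""
    majority = rng.pareto(2.0, N_MAJORITY)  # heavy-tailed true talent
    minority = rng.pareto(2.0, N_MINORITY)

    if rooney_rule:
        # Reserve one slot for the top minority candidate. (Under a uniform
        # multiplicative bias, the top *perceived* minority candidate is also
        # the truly best one.) The rest go to the top majority candidates.
        shortlist = np.concatenate(
            [[minority.max()], np.sort(majority)[-(SHORTLIST - 1):]]
        )
    else:
        # "Group-blind" screening ranks everyone by perceived talent, so
        # minority candidates are handicapped by the factor BETA.
        perceived = np.concatenate([majority, minority / BETA])
        true_talent = np.concatenate([majority, minority])
        shortlist = true_talent[np.argsort(perceived)[-SHORTLIST:]]

    # Interviews reveal true talent; the best person on the shortlist is hired.
    return shortlist.max()

for rule in (False, True):
    avg = np.mean([hire(rule) for _ in range(TRIALS)])
    print(f"Rooney Rule = {rule}: average true talent of hire = {avg:.3f}")
```

With a strong enough bias factor, the reserved slot raises the average true talent of the hire; with mild bias it can lower it, which mirrors the paper’s careful conditions on when the rule provably helps.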

And then, as Liz Lemon would say, “Twist!”

My colleague and fellow machine learning engineer, Erica Greene, had typed this into our company’s FAT* Slack channel:

“...the keynote today was an hour long talk about a mathematical justification for the Rooney Rule that contained no mention of 1) any ethical/moral justifications or 2) any mention of the many studies that show that diverse teams are more productive, effective and profitable even if the individual members are less qualified or experienced… I thought it was phenomenally out of touch.”

NO!!! How could this be?! How was it that this piece of mathematics that I was so excited about was being perceived so differently by one of my colleagues? Once we got back to Boston, we decided to take some time out to discuss our differing viewpoints. Below is a distilled version of our conversation, edited for brevity, clarity, and profanity.

Erica: I think algorithmic fairness has become a hot topic, and as a result, many people have joined the discussion without being sensitive to the ethical, social justice, and civil rights aspect of the work itself. The Kleinberg talk is a prime example of this: great work from a mathematical perspective, but tone deaf from a moral and ethical perspective. He cedes so much ground in the problem definition that the conclusion isn’t only useless, it’s harmful.

Sarah: I don't see it as ceding ground to address the arguments of the opposition. I think it's necessary and valuable, especially if you can prove them wrong. So I'm not saying this should be the only argument. I'm saying it's one I'm glad we have, and it was new.

Erica: My feeling is that if I find myself telling someone that there exists some delta and some alpha such that it is provably true that women should be included in the hiring pool, then I’ve already lost the debate. I don't want mathematical models of ethical issues.

Sarah: I want mathematical models of everything.

Erica: Ah ha. I think we’ve found the root of the issue.

Sarah: I do understand where you are coming from. Any utility-oriented argument for diversity raises my hackles a bit because if you are fundamentally non-racist/sexist, the idea that you have to prove that women and people of color are actually not worse than white men and so should be included does seem like engaging on a topic that should not even merit discussion. But once we're talking about it, I don't see a difference between citing empirical research that diverse teams perform better and a mathematical model of bias that proves the Rooney Rule helps.

Erica: I agree that the primary argument should be a moral one. I would love for that to convince everyone. But if we need a second argument, my preference is to highlight that diverse teams perform better. It’s something I really believe, that I’ve seen in practice, and that many studies back up.

The problem with Kleinberg’s bias argument is that it requires that we accept some metric on which women almost certainly do worse. Instead, we should be talking about people’s "value add". How does this person add a new perspective to our team that will make us more creative and productive? Not: how biased do we think we are at judging their skills on this narrowly defined axis?

Sarah: I believe that math should be capable of expressing and representing anything. Given that math has historically been developed and used primarily by men of privilege, expanding our mathematical understanding of the world to represent my lived experience feels like a really beautiful gesture, albeit one that I can see being perceived as out of touch.

Additionally, I don't like thinking of myself as a "value-add." I want to think of myself as a valuable engineer, apart from any stereotypically “woman” traits like soft skills or understanding the product from a “female viewpoint.”

Erica: You’ve totally nailed a fundamental tension in talking about diversity. If we say that it's valuable to have more people with background X because they have A and B skills, then we open ourselves up to the James Damore (UGH) argument that A and B skills are not well suited for software engineering and that people with background X should stick to product management.

But if you say that all populations have basically the same point of view and skill set, then why not simply hire based on quantitative metrics of accomplishment?

Sarah: That's the thing the Kleinberg paper is trying to correct for. The idea is that the system is biased, so these metrics are not objective; that's why the argument supports including the best candidates by these metrics, but evaluated separately by race/gender/sensitive attribute. Because the system is rigged, you can't expect marginalized groups to perform equally well by these metrics even if they are just as talented. So, you control for this by using the Rooney Rule. The thing I anticipate you'll object to is that even in the pool of marginalized candidates, we are still using these metrics to compare candidates against each other, and you might fundamentally reject these metrics altogether as the fruit of a poisoned system.

Erica: Ha, yes. I object to the metrics. Many big tech companies are full of very "smart" men who have exactly the same training and can tell you the Big O complexity of all the algorithms. But these companies keep screwing up when it comes to fairness and they can't seem to figure out why.

I don't think these problems would be fixed if they hired a bunch of women with exactly the same training, background, and perspective. The problem is that they have a single notion of what "smart" is. And they're so enamored with themselves that they can't begin to change it.

Sarah: To try to summarize your point: the paper makes an assumption about how we decide who is the "best," even when constrained to a specific candidate pool, and glosses over the fact that using traditional metrics to select among a pool of "diverse" candidates will only reify the values entrenched in our existing system. Like, the most valuable voices in marginalized groups are generally not going to be those of people who have fully committed themselves to competing in a rigged environment for bro points.

Erica: Yes, exactly. Another good example is politics. I think there is inherent value in having elected officials who come from a diverse set of backgrounds. Kleinberg-esque reasoning might look like “This woman of color went to a less prestigious college than this white dude, but don’t worry! We know the world is biased and WE HAVE A SOLUTION. We will give her a 20% boost when we’re weighing whether to endorse her and call it a day.”

Not the right framing to get more representative representation.

Sarah: Definitely something to think about for any company trying to remove bias from their hiring practices while also reaping the benefits of a diverse workforce.

In the end, Erica and I agreed to disagree on whether the Kleinberg model is a useful tool when advocating for the importance of diversity and inclusion. But we both came away from this conversation with a deeper understanding of the nuanced, sticky topic that is diversity and fairness in hiring. It was a great reminder that people with seemingly similar backgrounds can have wildly different viewpoints on this complex set of issues. And that we should all keep talking to each other.

If you want to join the conversation at Canopy, we’re hiring.

published by: Sarah Rich
date: February 8, 2019