Veena Dubal is an unlikely star in the tech world.
A scholar of labor practices in the taxi and ride-hailing industries and an Associate Professor at San Francisco’s U.C. Hastings College of the Law, Dubal has seen her work on the ethics of the gig economy covered by the New York Times, NBC News, New York Magazine, and other publications. She’s been in public dialogue with Naomi Klein and other famous authors, and penned a prominent op-ed on facial recognition tech in San Francisco — all while winning awards for her contributions to legal scholarship in her area of specialization, labor and employment law.
Dubal was a featured speaker at the annual symposium of the AI Now Institute, an interdisciplinary research center at New York University that examines AI’s social implications. The symposium, the institute’s largest annual public gathering, was held at NYU’s largest theater in the heart of Greenwich Village and drew a packed crowd of 800, with hundreds more on the waiting list and several offsite viewing parties. It brought together a relatively young and diverse audience that, as my seatmate pointed out, contained basically zero of the VC vests ubiquitous at other tech gatherings.
AI Now’s symposium represented the emergence of a no-nonsense, women- and people of color-led, charismatic, compassionate, and crazy knowledgeable stream of tech ethics. (As I discussed with New Yorker writer Andrew Marantz recently, not all approaches to tech ethics are created equal.) AI Now co-founders Kate Crawford and Meredith Whittaker have built an institution capable of mobilizing significant resources alongside a large, passionate audience. That may be bad news for companies that design and hawk AI as the all-purpose, all-glamorous solution to seemingly every problem, despite the fact that it’s often not even AI doing the work they tout.
Legal scholar Veena Dubal.
As the institute’s work demonstrates, harmful AI can be found across many segments of society: policing, housing, the justice system, labor practices, and the environmental impacts of some of our largest corporations. AI Now’s diverse and inspiring speaker lineup, however, was a testament to a growing constituency that’s starting to hold reckless tech businesses accountable. The banking class may panic at the thought of a Warren or Sanders presidency, but Big Tech’s irresponsible actors and utopian philosopher bros should be keeping a watchful eye on the ascendance, and the competence, of figures like Crawford, Whittaker, and Dubal.
I won’t attempt a more detailed review of AI Now’s conference here: the organization will put out an annual report summarizing and expanding on it later this year, and if you’re intrigued by this piece, get on their mailing list and go next year.
Below is my conversation with Dubal, where we discuss why the AI Now Institute is different from so many other tech ethics initiatives and how a scholar of taxis became a must-read name in tech. Our conversation ends with the story of one well-off white male software engineer who experienced surprising failure, only to realize his own disillusionment helped him connect to a much greater purpose than he’d ever envisioned.
Epstein: Let’s start by talking about the AI Now Symposium. What does it mean for you to be here as one of the featured speakers?
Dubal: It’s so awesome for a center like this to say that what Uber drivers are doing to organize to better their conditions is actually related to tech. For the last half decade at least, I’ve been doing what is considered tech work, but very much at the periphery. Because we weren’t explicitly doing computer science-related work, I think people didn’t think of the research people like me do as being at all [related to tech]… it was “just” labor. It wasn’t tech, even though it is on [workers’] backs that the whole tech industry exists. So it’s powerful to be included in this conversation.
And for this particular event, they’ve done such a good job of [inviting speakers] whose research is thought of as on the periphery, but should be at the center in terms of what is really important from an ethics perspective. Ruha Benjamin [a Professor of African American Studies at Princeton and founder of Princeton’s JustData Lab]’s work is amazing. And then there are the two people I’m on the panel with. Abdi Muse [Executive Director of the Awood Center in Minneapolis, a community organization focused on advocating for and educating Minnesota’s growing East African communities about their labor rights] organizes warehouse workers in Minnesota, who are the reason Amazon can facilitate the transcontinental flow of goods in the way that it does.
AI Now co-founders Meredith Whittaker and Kate Crawford.
And Bhairavi Desai [Executive Director of the New York Taxi Workers Alliance] — I’ve known her for 10 years and she has, from the very beginning, been fighting this gig nonsense. To have them in the room and centered, to have their voices centered instead of on the periphery, is just so awesome for me.
Epstein: It’s very clear that AI Now is dedicated to doing that, maybe even more so than any other peer organization I can identify. How do you see AI Now, as an organization, positioned among its various peers?
Dubal: It’s a great question. I’ve looked at a couple of other more nonprofity things that do tech and equality, and you are absolutely right; more so than any other organization, [AI Now] centers the people who are often at the periphery. Everything that they do is very deliberative.
They aren’t moving through things really quickly, onto the next project really quickly. Every decision they make is thoughtful, in terms of the people that they hire, for example, or how they do an event, or who they include in an event. It’s just very, very thoughtful, which is not how most things in tech, period, run.
Epstein: They’re not moving fast. They’re not breaking things.
Dubal: Exactly. They’re not breaking things. They’re fixing things. And the other thing is, even with The TechEquity Collaborative, a nonprofit in San Francisco, there’s a tech-utopian imaginary that guides their work. They really have a belief that the technology is going to fix things.
With AI Now, based on all the interactions I’ve had with them, my sense is that their ethos is very much about how people fix things. Tech doesn’t fix things.
So they’re centering the people who can fix things. They’re in a powerful place, and I think because they’re so sophisticated in the work that they do, they have a powerful voice, which is unusual for people who are interested in the subaltern and in the issues that hurt the most marginalized.
Epstein: Yes. What made me want to come all the way here from Cambridge, MA, where we are not exactly suffering from a shortage of tech ethics initiatives, and what made me decide to miss a lot of the Disrupt conference even though I work for TechCrunch, is that it’s rare that you have an organization that is able to combine two things: genuinely fighting for the marginalized, or helping the subaltern speak, and actually achieving a very significant public voice. Usually it’s maybe one or the other but not both.