Emerging Tech Policy

B Cavello is the Director of Emerging Technologies at Aspen Digital.

You can follow them on Twitter, LinkedIn, and Mastodon, or at bcavello.com

Published in November 2023.

Tell us a little about your career journey: How did you come to work in AI policy?

I kinda backed my way into AI policy. I ended up in the tech industry almost by accident, thanks to the recommendation of a friend I met through the crowdfunding world. I had been working at a card game startup called Exploding Kittens at the time. While I knew I wanted to work in a field that felt more “impactful,” I wasn’t very actively searching and, honestly, may not have even realized IBM was still around until I was literally interviewing for the role.

Working in the Watson Group when IBM was first commercializing their AI offerings was an incredible opportunity. I worked in a kinda strange (no longer extant) branch of client engagement, where I was essentially paid to learn about AI and then talk about it with people from all different industries around the world. I met with insurance companies from New Zealand and telcos from Mexico. I met with meteorologists, doctors, and foreign dignitaries. I even started an “AI study group” with a bunch of IBM colleagues to audit free AI courses online and discuss the tech together. It was the master’s degree I never got. It was incredible.

“working in such a far-reaching role also made me even more conscious of the many ways that technology could go wrong or be harnessed to do great harm.”

That said, working in such a far-reaching role also made me even more conscious of the many ways that technology could go wrong or be harnessed to do great harm. The deeper that I got into this space, the more I felt that my future would be in some form of “responsible tech.” (Funnily enough, it was a sort of return to my roots, given that right out of college I was developing toys for kids to learn about the internet so they could participate more actively in its future governance.)

Inspired by Caroline Sinders, whom I met in my first week of work at IBM (and who introduced me to the concept of “fellowships”), I applied to the Assembly program at Harvard’s Berkman Klein Center for Internet & Society. Almost simultaneously, I met Peter Eckersley, who encouraged me to apply to, and ultimately welcomed me onto, the research team at the Partnership on AI. I suppose you could say that I’ve been “in AI policy” ever since!

What are some of the current AI policy challenges you’re working on?

A phrase you’ll hear a lot in AI policy is “coordination problems.” Complex systems are, well, complicated, and making an impact on the big challenges that face our world today requires a lot of people working together to achieve shared goals. That’s at the heart of the work that I do.

My work takes three primary forms:

  • Network-building: helping people who are working on similar problems connect with each other,
  • Knowledge-building: empowering more people from different backgrounds and points of view to engage through shared language and understanding, and
  • Practical guidance: developing targeted tools and resources to guide specific decision makers through AI-related choices.

In many ways, the type of work I do isn’t specific to AI, but because of my background and community of incredible AI peers, I am able to do this work in the AI policy context. I use these approaches to help address some of the coordination problems that AI policy entails.

What advice do you have for those interested in a similar career path?

If there’s one thing I could encourage others to do, it would be to learn out loud.

Many people internalize the idea that you have to be an expert at a thing to participate in it, but that is absolutely not the truth. Sharing what you’re learning, what you’re curious about, and what you find inspiring is a fantastic way to engage in community (and even serve as a resource for those coming up after you).

When I think back to why my friend Jacob recommended me to that role at IBM, it wasn’t because I was an AI expert (I wasn’t!) or because I had done anything like that before (I hadn’t!). I believe it was because I showed passion, interest, and a demonstrable willingness to learn and share my learning with other people.

So many parts of my career path have been thanks to luck, and I can’t take any credit for that, but if you want to “make your own luck” to follow a similar path, I believe that being intellectually humble, enthusiastic about learning new things, and generous and encouraging to those around you who are also on that journey will take you far.

What skills do you think are important for success in AI policy, and how could readers acquire them?

“Truthfully, “AI policy” is a million things…I don’t think there’s any one set of skills that is core to AI policy, but I think it’s always safe to invest in your capacity for communication.”

Truthfully, “AI policy” is a million things. It could be advocacy, legal research, prototyping tools, journalism, and more. I don’t think there’s any one set of skills that is core to AI policy, but I think it’s always safe to invest in your capacity for communication. This might mean formal writing (which I’m actually pretty bad at!), public speaking, negotiation, technical documentation, or making really good analogies. At the end of the day, AI policy is mostly working with other people, so working on those skills will help no matter where in the ecosystem you end up.

Are there any programs, resources, or books you’d especially recommend for those interested in AI policy?

Absolutely! Fellowships are an incredible way to get real practice in a domain and quickly develop a network of friends and allies. I have had the privilege of serving as both an Assembly Ethics & Governance of AI fellow as well as a TechCongress Congressional Innovation Fellow. Both of these were life-changing experiences which introduced me to new friends, new ideas, and new ways of approaching my work.

For folks entering the space with limited technology knowledge, be brave! Try to read beyond what you’re seeing in headlines. There are a lot of other folks like you who want to enter this space, and although certain opportunities may put you into competition, you are allies. Form study groups, watch lectures, write down words you encounter that you don’t know and then define them for each other. There’s a universe of content out there, but it’ll be most useful to you if you don’t just consume it but engage with it, discuss it with peers, and share what you’ve learned with others.

For folks entering the space with limited policy experience, be humble! Policy is tough, messy stuff, and there’s a lot on the line, so often the “obvious solutions” (and several non-obvious ones) have already been tried. This fantastic piece from Joshua Tauberer is a must-read. Policy is people work. Learn who the people are behind the issues you care about. What specific problems do they think are important? What have they tried? How can you narrow in and get more specific? There’s a lot of grandiose thinking when it comes to AI policy, but specificity and focus get the goods.

B’s series of posts on “How to get into AI policy”:


This is part of a series of career profiles, aiming to make career stories and resources more accessible to people without easy access to mentorship and advice. If you have suggestions for what questions you’d like to see answered in these profiles, please fill out our feedback form.
