Emerging Tech Policy

Irene Solaiman is the Head of Global Policy at Hugging Face.

You can follow her on Twitter, LinkedIn, and at irenesolaiman.com

Published in November 2023.

Tell us a little about your career journey: How did you come to work in AI policy?

I took an incredibly non-linear journey, which seems to be the norm among my colleagues. AI policy as a field has only blossomed in the last five-ish years, and even so, skilling up can look vastly different by person.

I originally became interested in AI as many teenagers with awkward social skills do: through sci-fi media. However, I only began my AI career journey during my masters, when I pivoted from a human rights policy background as I saw AI as a way to work on human rights while (somewhat) protecting myself from the incredible mental toll of processing human rights violations. Taking coding bootcamps and then computer science courses in AI gave me the foundation I needed to be effective in the technically-informed policy career I have now. 

“AI policy as a field has only blossomed in the last five-ish years, and even so, skilling up can look vastly different by person.”

My career is heavily steeped in technical research, which I feel is vital to effective policy work. I’m a firm believer in experiential learning and transferable skills; my early projects on automated decision-making tools in government procurement processes helped me build research skills in an area with so many unasked questions. Now, in addition to ongoing research projects, tinkering with models and with Hugging Face’s many tools keeps me up to date on the latest AI progress.

What are some of the current AI policy challenges you’re working on?

All of them. The current AI policy arena is interwoven and requires radical prioritization. I find it helpful to have a guiding personal mission, which for me has always been making systems safer and better for people who are not always able to give input to the development process. Concretely, my main research areas are release methods and the social risk components of AI safety. Concurrently, I’m constantly keeping on top of parallel safety work, since many policy and technical risk mitigation approaches apply across risks. A large discussion in the policy, governance, and technical space is around open source and safeguards, which is now top of mind for me.

The pace of this field requires being adaptable but firm in that personal mission. While reactive work is valuable, I prefer planning safety research long-term, prioritizing quality over quantity. The current social impact evaluation work I’m leading with an incredible community has been in the works for almost two years now.

What advice do you have for those interested in a similar career path?

Find topics that you are excited to read about and tinker with for hours without getting bored. It has to come from genuine interest.

I strongly encourage gaining expertise in a specific part of the AI field, which can then be extrapolated to other systems and other parts of safety. I often hear interest in generalization, which is possible through extrapolation. I know language models best, and within language models I am best at safety research, but I can intuit similar challenges in other generative modalities and apply high-level safeguards, like documentation approaches, to narrow non-generative AI systems.

What skills do you think are important for success in AI policy, and how could readers acquire them?

“While it’s not necessary for AI policy people to be technical, I find it invaluable and often an enormous advantage to be able to gauge what is technically feasible and implementable.”

While it’s not necessary for AI policy people to be technical, I find it invaluable and often an enormous advantage to be able to gauge what is technically feasible and implementable. This does not necessarily mean publishing at a top AI conference, but being able to comfortably understand technical papers is ideal.

Most importantly, the ability to work in the unknown is one of the most exciting parts of my career, but it can be a high barrier for those without experience. Interestingly, the way I gained these skills was entirely outside of AI; for example, I’m fond of going to countries where I don’t speak the language and learning on the ground. Pushing yourself outside of your comfort zone is a transferable skill.

Are there any programs, resources, or books you’d especially recommend for those interested in AI policy?

I did a ton of edX courses back in the day! For those who need more structure, some of the computer science courses have community accountability, so you can take them with friends. I loved working with the Berkman Klein Center when I was in grad school, and there are more research centers popping up for early-career folks to skill up.

Now there are so many resources it’s hard to name specific ones. Almost all I read now are technical papers from a select few conferences: NeurIPS, FAccT, and occasionally ACL, ICML, and ICLR. Stanford HAI has fantastic reports and newsletters on the policy side, and I hugely appreciate the annual State of AI reports.

This is part of a series of career profiles, aiming to make career stories and resources more accessible to people without easy access to mentorship and advice. If you have suggestions for what questions you’d like to see answered in these profiles, please fill out our feedback form.
