Q&A: Khoury College’s Alan Mislove on crafting AI policy in the Biden White House
Mon 10.28.24 / Emily Spatz
For the past 18 months, Alan Mislove, professor and senior associate dean for academic affairs at Khoury College, served in the White House Office of Science and Technology Policy (OSTP). During his time as deputy US chief technology officer for privacy, Mislove worked with other parts of the White House and agencies across the federal government on the Biden administration’s policies and priorities on artificial intelligence (AI) and privacy. In particular, he strove to reduce the risks of discrimination, bias, and disinformation in automated systems, as highlighted in OSTP’s 2022 Blueprint for an AI Bill of Rights.
Khoury News caught up with Mislove to discuss his White House work. The interview has been edited for length and clarity.
How did you get the position and what was the process like?
I knew some folks from my research community who were serving at OSTP before I was there. I didn’t apply directly for the position, but I ended up talking to someone who said this might be a possibility. One thing became another, and I ended up there.
What work did the White House appoint you to do?
In October 2022 — before AI took over the public consciousness with ChatGPT, Claude, and others — the Biden Administration released its Blueprint for an AI Bill of Rights. The Blueprint lays out five key properties that, as citizens, we should expect from automated systems: things like respect for privacy, the ability for people to know what systems are being used, and the ability to opt out. What motivated that document is that, as time goes forward, we are seeing automated systems mediating more and more of our daily lives. There are ways in which that’s obvious, like ChatGPT. But there are also less obvious ways, like AI systems that rank applicants for jobs, or that affect interactions with the criminal justice system.
This trend brings up serious challenges: How should we be thinking about the ways these AI systems impact us? How can we put protections in place to make sure AI is used in ways that live up to our nation’s laws and our highest values? Those are very hard, broad challenges.
The Blueprint essentially says that to address these challenges and mitigate the risks of these systems, we first need to be clear about our values. So, the idea behind the Blueprint was to lay out the core expectations of automated systems and then describe how to operationalize those expectations. It’s about applying those principles to real systems; for example, what each principle means in the context of law enforcement or hiring or adjudication of government benefits.
The reason I went to the White House was that much of my academic research focuses on algorithmic auditing. I look at real systems that we all interact with every day — such as Facebook or Uber or Google — measure them, and try to determine whether they’re making decisions that may disproportionately impact certain populations, or that just may not live up to our values. I was brought in to OSTP to help translate the ideas in the Blueprint into policy.
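To give a flavor of what an algorithmic audit measures, here is a minimal, purely illustrative sketch of one statistic such audits commonly compute: the disparate impact ratio, with the "four-fifths rule" as a rough red-flag threshold. The data, group labels, and function names below are hypothetical; real audits of systems like the ones Mislove studies involve far more careful data collection and statistical testing.

```python
# Minimal sketch of a disparate impact check (four-fifths rule).
# All data and names here are hypothetical, for illustration only.
from collections import Counter

def selection_rates(decisions):
    """Compute the favorable-outcome rate per group.

    decisions: list of (group, outcome) pairs, where outcome is True
    for a favorable decision (e.g., a job ad shown, a loan approved).
    """
    totals = Counter(group for group, _ in decisions)
    positives = Counter(group for group, outcome in decisions if outcome)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's; values below 0.8 are a common red flag (four-fifths rule)."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical sample collected by probing a system from the outside
observed = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 40 + [("group_b", False)] * 60
)

ratio = disparate_impact_ratio(observed, protected="group_b", reference="group_a")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.67, below the 0.8 threshold
```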
I showed up there in January 2023, just after ChatGPT became a thing. It was a fascinating time to be working in technology policy, as it was just when AI became one of the key things that everybody in the building — from the National Security Council to the Domestic Policy Council to the National Economic Council — was talking about. It took over the national conversation and was a significant focus for the administration.
What kind of policies were you working on and influencing?
The Biden administration has priorities, such as ensuring good-paying jobs, protecting the environment, and gaining the benefits of AI while mitigating its risks. We worked with federal agencies to help translate what those priorities mean for each of them.
Our work varied a lot by the day. Some days we would work directly with agencies to help them protect Americans — from housing discrimination, for example. Other days we would host events to highlight key administration actions that move these priorities forward.
We also worked on executive orders, which are binding instructions that the president gives to the federal government. Last year, President Biden issued an executive order on AI, so we worked, along with other policy councils, to determine what the federal government could do to make sure AI is used in a safe, secure, and trustworthy manner. We also reviewed agency reports that were called for in this executive order to make sure they reflected the administration’s thinking and priorities.
We dealt with international issues, like helping to shape the US government’s positions on AI in international forums. For example, members of my team worked closely with our European counterparts to understand what they were doing, and to understand how their activities would impact what we were doing.
Were you in DC for your 18-month appointment or did you have to commute?
I have a family and young kids in school, so moving to DC wasn’t in the cards. Most weeks, I would take the early flight down to DC on Tuesday morning and fly back Thursday afternoon. I worked remotely the other days.
What was your day-to-day like? What were some of the biggest projects you worked on?
One of the most important things was Office of Management and Budget (OMB) Memorandum M-24-10. OMB is part of the Executive Office of the President, so they’re the White House’s budgetary arm and they work with federal agencies on budgets, but they also set federal-government-wide policy.
The AI executive order directed OMB to develop federal policy on the use of AI. In other words, when a federal agency wants to use AI in ways that could impact people’s rights or safety, there are things the agency has to do, both before deploying the AI and on an ongoing basis, to ensure that its use of AI is safe. That includes things like engaging with the community, running tests for discrimination, providing notice to people, and enabling human review. We worked with our colleagues in OMB on this policy, which came out in March of this year.
What’s particularly important about this policy is that the federal government is one of the largest developers and procurers of AI systems. So when the federal government wants to procure a system, the system has to meet these rules, which helps to shape the market for both federal procurements and the private sector.
Another recent project we worked on dealt with the impact of image-based sexual abuse, which has existed for a long time but has been exacerbated by AI. People have probably heard of deepfakes like the Taylor Swift incident; these deepfakes are some of the most direct and acute AI harms we’ve seen. So we worked with our colleagues at the Gender Policy Council to issue a call to action to industry to address these harms while Congress considers legislation. After I left, the White House announced a number of voluntary commitments from industry to reduce the harms of image-based sexual abuse.
Who else did you work with?
A lot of different folks, both across the White House and federal agencies. One was the National Institute of Standards and Technology. Under the AI executive order, they were tasked with several things, including reporting on the potential harms of AI-generated misinformation, creating standards for synthetic content, and understanding the technical methods to counteract those harms.
We worked with the Department of Housing and Urban Development around AI’s impacts on housing. We worked with the Consumer Financial Protection Bureau to document ways in which consumer financial services are impacted by AI. We worked with the Equal Employment Opportunity Commission on what AI means for hiring and people’s civil rights in employment.
One mechanism by which we worked with so many agencies was the Interagency Policy Committee (IPC). If the federal government is thinking about a particular topic, it might put together an IPC, and lots of agencies will send representatives. I led a few of these while at OSTP; you really get a sense of everything the federal government does and how your efforts impact those operations.
How does the work that you did fit into your research interests and teaching career?
Much of my work tries to understand the potential harms of the systems we interact with every day, whether those harms be privacy leaks, security vulnerabilities, impacts on marginalized populations, that sort of thing. My research is naturally interdisciplinary. It sits at the intersection between computer science and policy, or computer science and law, and the ways systems and technologies impact people.
This was my first time serving in government, and I found it really fascinating because you get to see what it’s like inside. I learned to engage effectively with policymaking, talk to policymakers, and frame my research for them.