Accountable AI: senior policy researcher Amba Kak brings new expertise to the Cybersecurity and Privacy Institute
Author: Aditi Peyush
Date: 03.09.22
For all its promise, AI has limitations. Issues such as bias and exclusion arise from how these systems are implemented and used.
Founded in 2016, the Cybersecurity and Privacy Institute at Northeastern University (CPI) is leading the charge to understand the impacts of AI and other emerging technologies. The institute is made up of researchers from Khoury College of Computer Sciences and the School of Law who collaborate with leading universities, tech companies, and defense contractors.
Recently, the CPI welcomed a new member to its team: Amba Kak, senior research fellow at the CPI and currently senior advisor on AI at the Federal Trade Commission (FTC).
Kak joined the CPI from her role as director of global policy at AI Now, a research institute affiliated with New York University. At AI Now, Kak designed, developed, and executed the institute’s globally oriented policy research agenda, which focused on algorithmic accountability. At the FTC, as senior advisor on AI, Kak works with the agency’s chief technology officer and technologists as part of an informal AI strategy group. She also partners with policy experts across the agency to provide insight and advice on emerging technology issues.
Kak’s passion to translate scholarly research to policy action led her to the CPI. She called the CPI “an interdisciplinary community of researchers who are motivated to find policy windows for their research.” Intentionally using the word “windows,” Kak continued, “I really like that metaphor because it gets at those pivotal opportunities for translating bold ideas into action that may not have been visible—or at all possible—before.”
Merging the worlds of law and social technology
How did Kak find herself in tech policy? Back in law school, she was drawn to a course on internet regulation. Reminiscing, Kak said, “In a sea of decade-old statute and settled precedent, it was so motivating to learn about a field where the legal questions were mostly open. In fact, there was no consensus on the pre-legal normative question of ‘what type of futures do we want in the first place?’”
Kak didn’t stop there. She went on to study at the University of Oxford as a Rhodes Scholar, where she pursued advanced degrees in both law and the social science of the internet. On the latter degree, Kak explained, “I think legal training gives you a specific set of skills […], but I also think it can limit your lens and imagination in some ways. At the Oxford Internet Institute, I got to expand the tools of analysis I applied to any particular issue. That kind of interdisciplinary lens is especially valuable in this field.”
At the Oxford Internet Institute, Kak wrote her master’s thesis on zero-rated plans. These plans, like Meta Connectivity’s Free Basics, offer access to a restricted selection of websites for users without a data plan or at reduced rates. They sparked a policy debate about net neutrality and gave Kak a policy window for her research. She explained, “On one hand, people said, ‘Everyone’s going to get stuck in the walled garden, they’re going to think that Facebook is the internet.’ On the other hand, people said, ‘Some internet is better than none.’” The ethnographic research that Kak conducted served as a reminder that “policy debates can often result in pitting abstract theoretical propositions against each other. I learned the value of centering research around the communities that are directly impacted by these developments—and to always leave room to develop and adjust our arguments to those learnings.”
Ensuring accountability in AI
AI is the future, or so we’ve been told. Kak challenges this platitude by drawing attention to the bigger picture. “The futuristic rhetoric on AI can be a bit of a distraction. AI systems are intertwined with systemic and historical inequities, and using these systems in social contexts can disguise or obscure these larger issues.” Continuing, she said, “There’s also a fair amount of tech-solutionism in the field, as if there’s no problem that AI can’t fix—whether it’s poverty or mental health. It makes you wonder if AI is a solution in search of a problem.”
As a policy researcher focused on technology regulation, Kak sees one of her goals as reminding different audiences that these technologies are not immune to scrutiny. “Many interested parties will project that technology trajectories are inevitable, so as an antidote we need to emphasize and remind people that technology must work for people and not the other way around,” she said.
Why does accountability take center stage in the discussion of AI? Kak argues that it’s because the stakes are high. “AI systems—whether they’re used in private or public contexts—are having real, material impacts on people. They’re affecting their access to, and the quality of, basic opportunities, services, and benefits,” she argued.
Joining the CPI and advocating for fair technology
At the CPI, Kak finds herself among distinguished computer scientists who design and analyze complex systems. She’s excited to use their research to inform policy and to understand what safeguards need to be put in place to prevent abuse of these technologies.
As she joins the team, Kak is “personally excited to learn from and grow with this community.” Between the FTC and the CPI, she’s got her hands full—but Kak continues to roll up her sleeves.
The unanswered questions and potential policy solutions drive her research and advocacy efforts. She explained, “I think we’ve moved in the last decade from abstract questions about AI ethics to ‘the moment for action is now.’ How can we practically hold companies and other actors accountable for their use of technology?”
Her advice to technologists? “To have a healthy amount of skepticism and humility about what tech can change on its own—divorced from broader social context and histories.” This means making room for other kinds of expertise and knowledge. She concluded, “I think we need more technologists to cede space for broader forms of expertise and deliberate over the impacts of technologies before—not after—they are developed.”