Collision Conference 2021: Khoury Faculty on Ethical and Responsible AI
Tue 05.04.21 / Aditi Peyush
Dubbed the “Olympics of tech” by Politico, the Collision Conference brings together influential speakers, leading tech companies, and top media to engage in interdisciplinary discussion around technology.
Though usually held live in Toronto, home to Northeastern’s Toronto campus, Collision 2021 took place virtually, with over 450 speakers gathering to discuss a range of topics, from AI cities to the creator economy to creating a new digital ecosystem.
Northeastern had a strong presence at this year’s Collision, held April 20–22. The university sponsored the conference, and Khoury College faculty members Tina Eliassi-Rad, Ricardo Baeza-Yates, and Christo Wilson held a lively panel discussion on ethical and responsible artificial intelligence (AI), moderated by Usama Fayyad, executive director of the Institute for Experiential AI. Having built a research hub that places human skills and intelligence at the forefront of AI development, Fayyad was well positioned to moderate.
A professor and member of the Network Science Institute, Eliassi-Rad kicked off the panel by quoting Safiya Noble: “You have no business designing systems for society when you know nothing about society.” If algorithms are to be used in society, she explained, particularly in high-stakes settings like criminal justice, then technologists need a solid understanding of society, civics, and ethics. “Leaving them to their own devices, or having ethics committees on the sides, is an afterthought that is destined to fail,” she added.
A research professor at the Institute for Experiential AI, Baeza-Yates emphasized that designers need to raise awareness of how their systems work. How? By asking questions like “What are the possible discriminations in my output?” and “What are the possible biases in my input?”
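To make questions like those concrete, here is a minimal sketch of one way a team might probe input bias: checking whether positive labels are spread evenly across demographic groups in a training set. The records, group names, and helper function below are hypothetical, invented for illustration; they are not drawn from the panel.

```python
from collections import defaultdict

# Hypothetical training records as (group, label) pairs; a real audit
# would pull these from the actual dataset.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def positive_rate_by_group(records):
    """Share of positive labels per group: one rough proxy for input bias."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {group: pos / total for group, (pos, total) in counts.items()}

# A large gap between groups is a flag to investigate, not proof of bias.
print(positive_rate_by_group(records))  # approx. {'group_a': 0.67, 'group_b': 0.33}
```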
Christo Wilson, Khoury associate professor and member of Northeastern’s Cybersecurity and Privacy Institute, argued that “if we’re hoping to live in a world where we’re using machine learning and AI ethically, we need independent oversight.” He stressed the need to understand systems in their complicated real-world contexts: “If the tech is affected by the context, it also causes it to affect the context.”
Wilson, who specializes in algorithmic audits, explained the practice: “It refers to assessing whether a system has bias or unfairness, and that being kind of sociotechnical.” He pointed to two related ideas: understanding the unique context in which a system is deployed, and establishing a baseline for the audit, so that companies are aware and deliberate about their audit process and goals.
“If you’re thinking about conducting an algorithmic audit, you need to think about what you expect the outcome to be,” said Wilson. “To me the ideal outcome is something that’s very transparent,” he concluded.
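In that spirit, a minimal, transparent audit might compare per-group selection rates in a model’s decisions and flag large gaps. Everything below, including the group names, decisions, and the four-fifths threshold, is a hypothetical illustration of the general idea, not Wilson’s methodology.

```python
def audit_selection_rates(predictions, min_ratio=0.8):
    """Compare per-group selection rates of binary model decisions.

    `predictions` maps each group name to a list of 0/1 decisions.
    `min_ratio` echoes the common "four-fifths" rule of thumb; a real
    audit would choose a threshold suited to the deployment context.
    """
    rates = {g: sum(d) / len(d) for g, d in predictions.items()}
    best = max(rates.values())
    return {
        g: {
            "selection_rate": round(r, 3),
            "ratio_vs_best": round(r / best, 3),
            "flagged": r / best < min_ratio,  # transparent pass/fail signal
        }
        for g, r in rates.items()
    }

# Hypothetical decisions (1 = favorable outcome) for two groups.
preds = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 0]}
for group, row in audit_selection_rates(preds).items():
    print(group, row)
```

The point of returning a full report rather than a single pass/fail flag is the transparency Wilson calls for: anyone reviewing the audit can see the rates, the comparison, and the threshold that produced each flag.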
When an audience member asked whether fair AI exists, or is even attainable, Eliassi-Rad answered immediately: “The short answer is no; we can never get to fair AI, even in principle.” She did, however, suggest actionable steps: educating the public, diversifying the teams that design systems, and holding technologists accountable.
Another audience member asked whether regulation is necessary for reform and behavior change. Baeza-Yates offered a perspective that hadn’t yet come up: the focus should be on protecting people from the harms of tech while also informing them about its benefits.
“Tech has improved the living conditions of many people, but we only focus on the bad parts,” Baeza-Yates observed.
Finally, the panelists were asked for their thoughts on diversity in the teams that write code. Eliassi-Rad was emphatic: “You want teams that are as diverse as your population—you want to have that lived experience that’s present in society.” Wilson agreed and raised the importance of disciplinary diversity: “You need sociologists, psychologists, and anthropologists to get that larger social context.” He also addressed the cultural context in which code is written: “Another issue is U.S. teams creating systems that are global, often you get context collapse, so you have to be really careful.” Baeza-Yates closed with two recommendations: “Consider having more than one team—different teams see different things,” he said, adding, “The biases of the creators of the code need to be acknowledged.”
Later, at a related Northeastern-sponsored session, Bethany Edmunds, a leader in computer science education and director of computer science at Northeastern’s Vancouver campus, moderated a roundtable with industry experts and added her own insights on bias in AI. Introducing the discussion, she said, “This is a conversation to activate the general public and different industries that maybe don’t see themselves playing a role in responsible and ethical AI.”
In a call to arms, Edmunds told the audience of her roundtable, “The truth is, in order to root out AI in a way that mitigates as much harm as possible, we all need to play a role.”
Northeastern clearly had a strong voice at Collision 2021. Through forward-thinking engagement and dialogue, Khoury College experts from across the Northeastern network raised important points for influential tech leaders to reflect on and carry back to their own institutions and companies. Ultimately, individuals of every role have a part to play in considering diversity, context, and sociotechnical factors as we collectively decide how AI can work for everyone.