Silicon Valley director of data science offers a global perspective on ethics and technology
Thu 10.29.20 / Ysabelle Kempe
Did you know your values are likely a product of the culture you live in? At least that’s how Ricardo Baeza-Yates sees it.
“Your ethics depend on your upbringing, your culture, your religion,” said the director of data science at Northeastern University’s San Jose Campus. “It is not easy to impose one standard of ethics onto other people.”
Baeza-Yates would know. He has lived in six countries — Chile, Canada, Spain, the United States, New Zealand, and Australia — and serves on bodies working to regulate the ethics of algorithms and artificial intelligence in the U.S. and Spain, as well as Latin America and the Caribbean. As technology advances at full tilt on a global scale, Baeza-Yates continues to work with these groups to shape the future of AI ethics in the Americas and Spain.
Baeza-Yates is on the advisory committee of fAIrLAC, a network advocating for the ethical and responsible use of AI in Latin America and the Caribbean, and he was appointed to Spain’s AI Council this summer. (Although Baeza-Yates grew up in Chile, he has Spanish heritage.) He also serves on ACM’s U.S. Technology Policy Committee, specifically a subcommittee focused on AI and algorithms. This subcommittee, he said, has objectives similar to those of Cascad.AI, an initiative out of Northeastern’s Seattle campus that promotes the ethical deployment of AI.
Region to region, ethics around tech vary
As members of the Western world, the countries and regions Baeza-Yates works with are often in ethical alignment, he said. They diverge, however, when you examine the corporate status of tech and AI in each. The U.S., for example, has more established, advanced tech companies than Latin America. While that initially seems like a deficiency for the latter, it could allow policymakers and corporations in Latin America to learn from ethical mistakes made in the U.S., Baeza-Yates said.
“In the U.S., the ethical problems with technology are already here and well beyond what I think is good for this society,” he said. “These are the advantages sometimes in technology — that the people who come late can basically leapfrog and, not get ahead, but get to a similar state.” His message? “Let’s learn about bias, fairness, and accountability before the problems happen.”
Unfortunately, there is no shortage of examples highlighting the potential for bias and invasion of privacy in tech. Baeza-Yates points to a project from Stanford University researchers that used facial recognition software to predict people’s sexual orientation. An article published by the New York Times in 2017 described the backlash the researchers faced from LGBTQ+ advocacy groups and scholars. One academic described the technology as “the algorithmic equivalent of a 13-year-old bully.” Baeza-Yates is in the same camp as these critics, pointing out that “people forget that being able to do something does not mean you should do it.”
“This is an example of phrenology,” he said, referring to a pseudoscience popular in the 19th century that predicted individuals’ mental capabilities based on the shape and size of their skulls. “We do not want to go back to that.”
Responsibility for ethical tech cuts across professions
Baeza-Yates sees the establishment of ethical regulations as a fundamental part of creating a healthy tech ecosystem. Especially in the U.S., he said, there is a more utilitarian view of technology in which the ends justify the means. Baeza-Yates postulates that this attitude could be a byproduct of the high value the U.S. places on freedom and individuality. But when discussing technology, he said, human rights should be the priority, not an afterthought, as they were in the Cambridge Analytica data scandal. This philosophy is in keeping with his ACM committee’s recent recommendation to halt facial recognition technology until more is known about its consequences.
“No matter your profession — lawyer, sociologist, psychologist — you need a basic understanding of technology’s impacts.” — Ricardo Baeza-Yates
The onus to ensure ethical tech doesn’t fall on policymakers alone, Baeza-Yates said. If companies stepped up to the plate and regulated themselves, he explained, governmental regulations on tech ethics might not even be necessary. For this to work, however, companies would need to actually heed the recommendations of their ethical boards.
“Even in the few companies that currently have ethical boards, the company will often still do what it wants regardless of the ethical board’s recommendations,” Baeza-Yates said.
On a more granular level, Baeza-Yates stresses the importance of all types of professionals having an understanding of technology and ethics. No matter your profession — lawyer, sociologist, psychologist — you need a basic understanding of technology’s impacts, he said. How can we establish this knowledge across all fields? A potential solution is mandating that all college students take a course on the intersection of society, technology, and ethics, according to Baeza-Yates.
The beginnings of this sort of mandate are present at Khoury — all computer science undergraduates are required to take a course concerning computing and social issues. Northeastern’s College of Social Sciences and Humanities also offers a variety of technology ethics courses, including Technology and Human Values (PHIL 1145), Information Ethics (PHIL 5005), and AI Ethics (PHIL 5010).
“If everyone took courses like this, it would make a change because at least lots of people would be exposed to these ideas,” he said. “That’s the first step — to make people aware. And how do we make people aware? We need to talk about it.”