Tina Eliassi-Rad adds prestigious university award to her networks and AI honors
Mon 08.22.22 / Madelaine Millar
If Tina Eliassi-Rad is one thing only, it’s curious.
Her time as a young researcher at the Lawrence Livermore National Laboratory felt like being a kid in a candy shop. She learned how scientists use massively parallel computer systems to simulate complex phenomena such as supernovas and worked with an interdisciplinary group of researchers to develop statistical models. She loved figuring out how things fit together, but it wasn’t until a global tragedy struck that she understood how she could focus that skill.
“When 9/11 happened, there was all this talk about ‘if we could have only connected the dots,’” she said. “Our government put a lot of money into network science and machine learning and data mining on graphs and networks — that changed the direction of my research.”
Today, that research takes Eliassi-Rad from the social implications of artificial intelligence to evolution to cybersecurity. She is the director of RADLAB, where she recruits students from the Khoury College of Computer Sciences and the Network Science Institute, and she has ongoing grants and projects related to complex networks, machine learning, and data mining sponsored by the National Science Foundation, Department of Defense, Volkswagen Foundation, and MIT Lincoln Laboratory, among others.
In fact, the diversity, quantity, and significance of Eliassi-Rad’s research recently earned her Northeastern’s Excellence in Research and Creativity Award, making her the fifth Khoury College researcher among the nine all-time winners. The current that carries her there — and through the broader stream of her research — remains machine learning, network science, and the interaction of the two in society.
Two networks and a map
If you’ve tried to drive around both Manhattan and Boston, then you understand the basic difference between a simple grid-like network and a complex network. While simple networks are easy to navigate, complex networks have lots of nodes interacting in many different — and often messy — ways. Complex networks, though, are far more useful for modeling real systems, like how information flows through communities.
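To make the contrast concrete, here is a minimal sketch in Python using the networkx library — an illustration of the idea, not code from Eliassi-Rad’s work. It builds a Manhattan-style grid next to a network grown by preferential attachment, a hallmark of many complex networks:

```python
# A minimal sketch (not from the article) contrasting a grid-like
# "Manhattan" network with a complex network, using networkx.
import networkx as nx

# Simple network: a 20x20 street grid, where every intersection looks alike.
grid = nx.grid_2d_graph(20, 20)

# Complex network: preferential attachment ("the rich get richer") grows
# a few heavily connected hubs and many sparsely connected nodes.
complex_net = nx.barabasi_albert_graph(n=400, m=2, seed=42)

for name, g in [("grid", grid), ("complex", complex_net)]:
    degrees = [deg for _, deg in g.degree()]
    print(f"{name}: nodes={g.number_of_nodes()}, "
          f"max degree={max(degrees)}, "
          f"mean degree={sum(degrees) / len(degrees):.1f}")
# The grid's busiest node touches only 4 others; the complex network's
# biggest hub typically touches dozens -- the "messy" structure above.
```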
Take Eliassi-Rad’s 2022 publication Information Access Equality on Network Generative Models. Previous research established that mechanisms like preferential attachment (“the rich get richer”) and homophily (“birds of a feather flock together”) mean that information like job opportunities or medical resources doesn’t always make it to minority populations in time to be useful. Eliassi-Rad and her co-researchers demonstrated how both the structure of social networks and the way information spreads matter for information access equality. They observed that when more people in the majority group have relationships with people in the minority, information spreads more equally, but less efficiently.
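That dynamic lends itself to a toy simulation. The sketch below is my own illustration, not the paper’s code: it seeds a piece of news in the majority group of a homophilic two-group network and measures how many hops the news needs to reach each group.

```python
# Toy illustration of unequal information access (not the paper's model):
# news seeded in the majority of a homophilic network reaches the
# minority group more slowly. All parameters below are assumptions.
import networkx as nx

# Two groups, 160 majority and 40 minority nodes; ties are ten times
# denser within groups than across them (homophily).
sizes = [160, 40]
probs = [[0.05, 0.005],
         [0.005, 0.05]]
g = nx.stochastic_block_model(sizes, probs, seed=7)
group = {n: ("majority" if n < 160 else "minority") for n in g}

# Information spreads hop by hop from a majority-group seed; shortest-path
# distance stands in for how long each person waits to hear the news.
dist = nx.single_source_shortest_path_length(g, source=0)
for label in ("majority", "minority"):
    hops = [d for n, d in dist.items() if group[n] == label]
    print(f"{label}: mean hops from the seed = {sum(hops) / len(hops):.2f}")
# Under these assumed parameters, minority nodes typically sit farther
# from the seed, so the news arrives later -- less equal access.
```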
It’s a lot more complicated than that, of course. For Eliassi-Rad, modeling the network is what helps her to appreciate that complexity.
“There are folks who like to work on text data, or on images or video. I just like to work on complex networks,” she said.
It’s impossible for an employer or a medical provider to go out into the community and tell every single person about an opportunity. Understanding how information spreads on various networks can help community builders develop complex networks with more equal information access.
Taking algorithms to school
Imagine two people in a complex social network of Facebook friends. They have hundreds of mutual connections but aren’t friends themselves. An algorithm might conclude that the two probably don’t like each other very much. It might also speculate about why they aren’t friends, but online friendship networks are not as complete as the real-world complex network they represent. The algorithm’s predictions are based on learning from incomplete data.
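Predictions like these often start from simple scores such as counting common neighbors. The sketch below shows that standard heuristic on a made-up friendship graph; the names and numbers are illustrative, not drawn from the article.

```python
# Common-neighbors link prediction on a tiny, made-up friendship graph.
# This is a standard textbook heuristic, not a method from the article.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("ana", "carol"), ("ana", "dev"), ("ana", "eli"),
    ("ben", "carol"), ("ben", "dev"), ("ben", "eli"),
    ("carol", "fay"),
])

def common_neighbors_score(graph, u, v):
    """Count mutual connections: a basic link-prediction score."""
    return len(set(graph[u]) & set(graph[v]))

# ana and ben are not friends yet share three mutual friends, so an
# algorithm reading this graph would flag the missing edge as notable.
print(common_neighbors_score(g, "ana", "ben"))  # 3
print(common_neighbors_score(g, "ana", "fay"))  # 1
```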
The data doesn’t have to be randomly incomplete, though. That’s the jumping-off point for the 2021 paper Selective Network Discovery via Deep Reinforcement Learning on Embedded Spaces, in which Eliassi-Rad and her MIT Lincoln Laboratory colleagues present an AI model to discover the missing pieces of a complex network. Their model, called Network Actor Critic, explores an incomplete complex network and finds the relevant missing data by asking a limited number of questions.
For instance, consider a city where some people have a sexually transmitted disease. Health officials aren’t aware of every infected person, so the social network they can map is incomplete. Network Actor Critic learns to ask specific questions about the social network of people who have tested positive in order to learn about the missing people.
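The real model learns where to ask with deep reinforcement learning, which is beyond a short sketch. The greedy baseline below is a much simpler stand-in that only conveys the setting Network Actor Critic operates in: uncovering a hidden network under a fixed budget of queries. The hidden graph and all parameters are assumptions.

```python
# A greatly simplified stand-in for selective network discovery.
# Network Actor Critic learns where to query with deep reinforcement
# learning; this greedy heuristic only illustrates the query-budget
# setting. The hidden graph and parameters are assumptions.
import networkx as nx

hidden = nx.barabasi_albert_graph(n=200, m=2, seed=1)  # the true network

known = {0}            # people we know about (e.g., tested positive)
probed = set()         # people we have already interviewed
observed = set()       # relationships revealed so far
budget = 20            # we can only afford 20 interviews

for _ in range(budget):
    frontier = known - probed
    if not frontier:
        break
    # Heuristic policy: interview the known person with the most
    # relationships seen so far (NAC would learn this choice instead).
    seen_degree = {n: sum(1 for e in observed if n in e) for n in frontier}
    person = max(frontier, key=seen_degree.get)
    probed.add(person)
    for contact in hidden.neighbors(person):  # the interview reveals contacts
        known.add(contact)
        observed.add(frozenset((person, contact)))

print(f"After {budget} interviews: {len(known)} of "
      f"{hidden.number_of_nodes()} people discovered")
```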
Eliassi-Rad stressed that AI researchers can only make an algorithm better, never perfect.
“No machine learning algorithm is 100 percent sure. If you come across one that is, there’s some problem,” she said. “Modeling the world is a complex thing; there’s always some uncertainty.”
An eye on AI
Our society is infused with machine learning algorithms recommending who to date, what to buy, what news to read, and more. These recommendations influence social, economic, and political processes, the data from these processes is fed back into the algorithms, and around and around we go.
In heterogeneous societies such as the United States, not everyone is represented equally in the data these algorithms learn from. Thus, machine learning algorithms don’t perform equally for everyone, which is a real problem when they operate in fields with little room for error like medicine and criminal justice.
“If you think about machine learning algorithms, they’re like prescription drugs. They operate differently on different subpopulations, they have adverse effects,” Eliassi-Rad said, noting that the CS field routinely, and problematically, applies them universally. “Perhaps these algorithms should have warning labels.”
As she has spent more time in the world of AI, Eliassi-Rad’s interest in the ethics of the discipline has grown, and this year — together with her graduate student David Liu and a team of researchers — she conducted her first qualitative study (to appear at the 2022 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society). Their publication, titled Examining Responsibility and Deliberation in AI Impact Statements and Ethics Reviews, is a breakdown of 327 ethics reviews and impact statements from Neural Information Processing Systems 2021, a top-tier AI and machine learning conference.
READ: David Liu’s interdisciplinary machine learning research journey
The team found a stunning lack of accountability. Researchers rarely felt they controlled whether their algorithms were used for the public good, and many felt it wasn’t their responsibility to mitigate the harm their tools could cause in the first place. By that logic, the responsibility falls to the user rather than the industry collectively.
“It makes a lot more sense if the community comes together and says, ‘Okay, what are the professional norms?’ There are certain problems we should not be working on,” Eliassi-Rad said. For instance, it would be wildly unethical for police to predict crimes with an algorithm and arrest would-be perpetrators, but there’s also nothing preventing someone from building such a tool.
In addition to her research, Eliassi-Rad teaches a freshman honors seminar called “Algorithms that Affect Lives,” in which students examine where algorithms are embedded in society, how they function, and what their motivations and shortcomings are. Eliassi-Rad also dispels the misconception that AI is inherently better or more objective than a human being.
“People don’t think about the incentives of algorithms being used in criminal justice, or in healthcare,” Eliassi-Rad said. “Is it to provide you better healthcare, or is it so the doctor can spend only 10 minutes with you, as opposed to 15? Is it really for better patient care, or is it for them to make more money?”
As her recent award implies, Eliassi-Rad isn’t slowing down any time soon. Despite her busy schedule, though, she’s committed to making time to continue teaching about the potential — and potential ethical dangers — of AI.
“The public needs to know how AI works,” she said, “because the population that’s more informed can decide what kind of society they want to live in.”