Ethical AI: Indigenous languages, biased algorithms, and the way forward

Northeastern University in Vancouver hosted the CascadAI summit, a virtual event about the intersections between ethics and artificial intelligence (AI).

Author: Hannah Bernstein
Date: 12.15.20

Top row (L to R): Bethany Edmunds, Krista Richmond, Cristina Pombo
Bottom row (L to R): Meeri Haataja, Natalie Cartwright, Michael Running Wolf

In Salta, Argentina, as many as 1 in 5 girls will become pregnant, and potentially drop out of school, before the age of 19. In 2017, in an attempt to address the problem with technology, the province’s government partnered with Microsoft to develop an algorithm that would predict which girls in the Salta school population had an 85% likelihood of becoming pregnant as teenagers, and then spit out each girl’s full name and address.

Far from seeing this as a success, many ethical AI experts see bias, bad data, and privacy invasion. Cristina Pombo, the head of the social-digital cluster at the Inter-American Development Bank (IADB), is one of them.

“The data that fed the algorithm contained only low-income women, as opposed to the total population,” Pombo says. “And, there was not a policy implication that could improve the situation once the results were out there. So, yes, technology helps, but it does not solve.”

Pombo was sharing this story as part of the CascadAI summit, hosted by Northeastern University in Vancouver, a virtual event held Nov. 19 about the intersections between ethics and artificial intelligence (AI). She was joined on the panel by Meeri Haataja, CEO and co-founder of Saidot, an AI transparency start-up in Finland; Natalie Cartwright, COO and co-founder of Canadian virtual assistant company Finn AI; and Michael Running Wolf, an Indigenous software and AI developer.

Bethany Edmunds, the director of computer science and a teaching professor at Northeastern’s Vancouver campus, hosted the event alongside moderator Krista Richmond, the director of digital innovation and partnerships at Technical Safety BC.

“The purpose of this conference is really to get together both business leaders and technologists and the community to recognize there’s a gap between what we want to do and what we are doing,” Edmunds said. “As AI progresses so fast, we’re leaving people behind.”

She added, “We don’t mean to, but there’s a lack of dialogue between who’s being affected and who’s doing the development.”

Improving AI transparency and information access

Haataja’s company, Saidot, works specifically to help companies, governments, and organizations that use AI improve their transparency. Saidot does this by creating and operating artificial intelligence registers: websites that users can browse to learn about the AI systems in the products and services they use.

Saidot runs AI registers for the city governments of Helsinki and Amsterdam, and facilitates other AI ethics programs for groups like the airline Finnair. The most important part, Haataja says, is transparency and information access.

“We’re building technology platforms to make it as easy as possible, as convenient, for different parties, and facilitating the transparency between the organizations and the different stakeholders,” Haataja said.

In Helsinki, the website features answers to basic questions, like what artificial intelligence means and what the register is used for, as well as listings for each of the AI services used by the city. This includes a check-in service in health centers, a city library chat that recommends books, and a parking bot that acts as the initial customer service line for the city’s parking lots. The information about these services is designed to be accessible and understandable to people at all education levels.

“These AI registers are supposed to be a channel for participation, getting feedback on how we are doing and what we are doing,” Haataja said. “That’s one of the key development areas where we’re working.”

Elsewhere, Pombo’s work in Latin America and the Caribbean aims to keep organizations from repeating what Salta’s government did with its pregnancy algorithm. The IADB has developed several guides, such as an ethical self-assessment, that organizations can use to evaluate their AI systems and make improvements. A big part of that work, she said, is making sure people who have been historically excluded from the AI industry have plenty of educational and career opportunities to join it.

“You have to have diverse populations developing your systems,” Pombo says. “The same white guys doing the code is not good.”

Human-centered AI and ethical practice

In Canada, Finn AI co-founder Cartwright works to unite AI stakeholders across the country and around the world to standardize a clear code of ethics for the industry. She focused her presentation on human-centered artificial intelligence and what that means in practice.

“How do we build AI that serves us, that is built to enhance good, and that is rooted in some really basic things?” Cartwright asked.

Those basic things? Like Pombo, Cartwright names responsibility, diversity, and inclusion among them. Industry stakeholders can make this happen by supporting broad policies like the Montreal Declaration for a Responsible Development of AI, an international agreement from 2017. Ultimately, Cartwright says, learning and iterating with diverse perspectives in the room is how to ensure ethical AI in the future.

Software developer Running Wolf has been working on exactly that. He highlighted racism in the AI industry, such as in automatic speech recognition, where research is severely lacking for Indigenous polysynthetic languages, which are structurally very different from English.

Indigenous languages are sacred to their communities, he explains, and preserving their culture and speech is vitally important. But reservations often lack basic resources like electricity and internet access, and Indigenous people must be given sovereignty over any data collection or AI development process to ensure that the technology truly serves the people.

“You need to make sure that the work that you’re trying to do is relevant and the work you’re trying to do gives back to the community,” Running Wolf said. “Maybe they don’t want automatic speech recognition. Maybe they just want internet.”

He also introduced the Lakota spiritual concept of wakan, the idea that everything contains energy and everything is sacred. If everything you do has meaning and is sacred, he says, you had better do it with intention.

Though his remarks were focused on Indigenous peoples, they could be more broadly applied. “In developing AI, you have to think about what you do now will impact two or three generations from you,” Running Wolf says. “If you treat AI as wakan, then you need to treat it with respect and be incredibly careful and assume that you might be wrong.”

To learn more about the Cascadia Commitment for conscientious adoption and deployment of AI, visit cascad.ai/commitment