Mynatt, Choffnes welcome NYT’s Kashmir Hill for a chat on the end — and the future — of privacy
Fri 11.03.23 / Milton Posner
David Choffnes and Kashmir Hill have spent much of their careers investigating the subtle and sweeping ways in which technological creations surveil us, and how we can fight back.
Choffnes’ latest contribution is an award-winning examination of the ways in which our smart home devices share our data with each other and with outside parties, often without our knowledge and over our objections. The Cybersecurity and Privacy Institute director was also recently announced as co-principal investigator on an $18 million grant to build a platform that will facilitate future cybersecurity and privacy research.
Hill’s latest contribution is an extended look at a striking story she first broke in the New York Times in 2020. It was the tale of a small, secretive company called Clearview AI, whose facial recognition app could take a picture uploaded by a user, scour a database of several billion images scraped from the internet, and display each photo match along with its source link — all without anyone’s permission. It was, she said, the sort of remarkable yet privacy-busting innovation that established tech companies like Facebook and Google had long regarded as an unapproachable third rail. Last month, after more than a decade writing about tech and privacy, Hill published Your Face Belongs to Us, a deeper dive into how Clearview AI came to be, what it represents, and where we go from here.
And it was her debut book tour that brought Hill to Northeastern on Friday, where she, along with frequent collaborator Choffnes and Khoury College Dean Beth Mynatt, took part in a fireside chat at the Fenway Center to discuss the privacy we’ve lost and how we might regain it.
“It can be hard to get people to care about privacy as a value, and many people don’t appreciate it until it’s harmed in some way,” Hill said. “But with these sorts of tools that can identify and track your face in real time, it could take all of the online tracking — and all these dossiers that have been created about us — and attach them to our faces. All of the anxiety we have about online tracking will move into the real world with our faces as tokens to unlock that information.”
Though Clearview AI is not available to the wider public at the moment, it’s already being used for specific applications.
“By the time I found out about Clearview, it was already being used by hundreds of police departments,” Hill said, adding that the Department of Homeland Security has also jumped on board. “There are clearly benefits. Police using it to solve crimes is very appealing to a lot of people.”
But as Hill has noted in her coverage, these tools present problems too. They sometimes lead law enforcement to the wrong person. They sometimes struggle to identify people of color. And they often lead to “surveillance creep,” where technology implemented for supposedly beneficial or altruistic purposes is co-opted for sinister or self-serving ones.
“I think it’s important that we have a choice about how this is deployed,” Hill noted. “Just because we allow it for things like opening our iPhones doesn’t mean that we need to accept it as a ubiquitous technology that completely strips us of our anonymity.”
Mynatt, who made her mark in ubiquitous computing before joining Khoury College as dean, concurred.
“When we talk about our oath for computer scientists, there is a reframing that says, ‘Just because you can build it doesn’t mean you should,’” she said. “I think Clearview AI is an incredible illustration of that. In the past, there’s been this assumption of technical superiority, that only the smartest folks could build the most dangerous technologies. That’s not the case here.”
This isn’t to say that the Clearview AI team lacked technical savvy. But Hill emphasized that the tool was not built by the sort of behind-the-scenes engineering genius she was expecting to find when she began her investigation. The tech had simply developed to the point where a group of outsiders who hadn’t worked with biometric data could pick up the pieces and build a powerful surveillance tool.
Nevertheless, the trio noted, Clearview AI did absorb ideas and know-how from academia, including open-source research publications. The company’s founder, Hoan Ton-That, even pulled some information from a Carnegie Mellon project site that contained a well-intentioned but unenforceable request to respect user privacy.
“As researchers, we’re held to different standards than companies,” Choffnes noted, citing the Belmont Report’s principles of respect for persons, beneficence, and justice. “These things should be taught more broadly … and given that the founder of Clearview AI didn’t attend college, technology and those ethical principles need to be integrated all the way down the curriculum to earlier education. That’s something I’d like to see as we work toward ‘CS for everyone.’”
And as for what we do with the technology we’ve already got?
“I hear from a lot of people who feel resigned, like the technology toothpaste is out of the tube and can’t be stuffed back in. And in the US, we have a real fear of getting in the way of innovation … but we constrain technology all the time,” Hill said, citing anti-wiretapping and anti-eavesdropping laws. “We can do that again with facial recognition and I think we should, because there are so many ways in which we rely on our anonymity to feel comfortable.”
Exactly what form such regulation would take is an open question. A handful of US cities — including Oakland, San Francisco, and Somerville — have restricted police use of facial recognition tech, and Illinois does not allow its residents’ biometric data (e.g., faces) to be used without their consent. But state laws often only allow users to request that their data be removed from databases, which Hill says is less effective than the sweeping European regulations that prevent companies from amassing the data in the first place. Mynatt added that because researchers have aided communities of disabled people using this sort of data scraping, regulating the information’s use is preferable to walling off access entirely.
In the meantime, there’s still a wider conversation to be had, one that Hill is doing her best to feed by unpacking, dissecting, and humanizing the technology for the New York Times’ massive and diverse audience — and for the readers of her book.
“I want people to know the state of the technology and how it got as powerful as it did. I want people to think about what they want the world to look like,” she said. “Because we’re at the tipping point right now where we could all become celebrities with famous faces, and everyone — the government, companies, individuals — could know who we are. And I don’t think that’s the world we want to live in.”