Want to broaden participation in computing education? Change the way it’s measured.
Mon 08.26.24 / Milton Posner
On the cover of the July issue of Communications of the ACM — the monthly journal of the world’s largest computing society — was a plea to the community: If you want to broaden participation in the field, change the way you measure it.
The article was penned by Valerie Barr, distinguished professor at Bard College; Carla Brodley, Khoury College dean of inclusive computing and founding director of the Center for Inclusive Computing (CIC); and Manuel Pérez Quiñones, UNC Charlotte professor and CIC visiting professor. The trio contend that the typical measures of participation in computing education, namely enrollment and graduation rates for different gender and racial groups, show only a portion of the picture. To fully understand and improve the status quo, universities instead must evaluate their data in new ways.
“To improve diversity and [participation] analysis and assessment,” the researchers wrote, “institutions should examine cohort-based data, report intersectional data as a norm, and consider university demographic context.”
Let’s take those one by one.
“Examine cohort-based data”
According to the researchers, the most common data point used to measure computing participation is graduation rate — the number of computing graduates from a given demographic group in a given year. While the approach can pinpoint some trends, Brodley explained that graduation rate misses others and often fails to address the root of the issue at hand.
“If the participation of some demographic group is going up or down over time, you also need to look at who’s going to college,” Brodley says. “You can’t just look at it the way we traditionally have, which is in isolation for one year.”
That’s where the trio’s cohort-based, over-time analysis, which grew out of Barr’s research, takes over. They recommend analyzing each incoming cohort of students with multiple methods, including examining students’ pre-college computing experience. Once students arrive, departments should track what portion of each demographic group is retained from year to year, as well as each group’s drop, fail, and withdraw rates in the introductory sequence of computing classes.
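The cohort tracking described above can be sketched in a few lines. This is a minimal, illustrative example with made-up records, not the authors' actual methodology or data; the field names and toy numbers are assumptions.

```python
# Minimal sketch (hypothetical data): for each demographic group in an
# incoming cohort, compute year-over-year retention and the drop/fail/
# withdraw (DFW) rate in the introductory computing sequence.
from collections import Counter

# Each record: (student_id, group, retained_into_year_2, dfw_in_intro)
cohort = [
    ("s1", "women", True, False),
    ("s2", "women", False, True),
    ("s3", "men", True, False),
    ("s4", "men", True, True),
]

def cohort_metrics(records):
    """Return per-group retention and DFW rates for one incoming cohort."""
    totals, retained, dfw = Counter(), Counter(), Counter()
    for _, group, kept, failed in records:
        totals[group] += 1
        retained[group] += kept   # True counts as 1
        dfw[group] += failed
    return {
        g: {"retention": retained[g] / n, "dfw_rate": dfw[g] / n}
        for g, n in totals.items()
    }

print(cohort_metrics(cohort))
```

Because departments hold this data themselves, the same computation can be rerun each term for each entering cohort, rather than waiting years for federal graduation-rate releases.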
Additionally, while graduation rate analyses often rely on the Department of Education’s IPEDS (Integrated Postsecondary Education Data System) data — which is released on a two-year delay — schools have access to their own data right away.
“We don’t need to wait years to see if a university intervention is working,” Brodley says. “IPEDS is great for graduation rates, but it doesn’t tell you how well you’re doing right now … Departments have that data available, but often don’t collect or analyze it.”
“Report intersectional data as a norm”
Most computing participation research reports on gender or race, but doesn’t combine the two into a slew of intersectional categories. For instance, what would seem at first blush to be a jump in the number of Black degree earners in recent years would more accurately be characterized as a jump in Black male degree earners. The numbers for Black women have not followed suit.
“If you want intersectional demographic data, it’s not something that’s often available as a CS department,” Brodley explains. “You have to get it from wherever it’s centrally held at the university level. And that can be a difficult conversation. A university may also not have the resources for each college or department to analyze their own demographic data.”
The CIC has helped more than 60 computing departments to obtain centrally held intersectional data on their students, which can help those departments and colleges to find and address opportunity gaps.
“Computing departments often have no control over who attends their university,” the researchers elaborated in their article, “but they can influence who can discover computing, feel a sense of belonging, and persist to graduation.”
“Consider university demographic context”
“Every university’s opportunity to diversify computing is different,” Brodley says. “If we’re trying to increase demographic diversity in our universities, we can’t hold everybody to the same set of population-wide metrics. Different states have different demographic distributions.”
For example, a college with 35% female computing graduates, despite being well above the national average, would be falling short if its university was 55% female. Similarly, a state university with 10% Black computing graduates, despite trailing the percentage of Black people in the US, would be succeeding in a state like Maine, which has a much smaller Black population to draw from.
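The comparison in these examples reduces to a simple ratio: a group's share of computing graduates divided by its share of the relevant surrounding population (university or state). The sketch below uses the article's illustrative percentages plus one hypothetical figure; it is not a metric the authors formally define.

```python
# Minimal sketch of context-aware comparison: judge a program's share of
# graduates from a group against its local context, not a single national
# figure. A ratio of 1.0 means proportional representation.
def parity_index(grad_share: float, context_share: float) -> float:
    """Group's share of computing graduates relative to its share of the
    surrounding population (university or state)."""
    return grad_share / context_share

# 35% female computing grads at a 55% female university: below parity
print(round(parity_index(0.35, 0.55), 2))  # 0.64

# 10% Black computing grads in a state where the Black population share
# is small (hypothetical 2% figure for illustration): above parity
print(round(parity_index(0.10, 0.02), 2))  # 5.0
```

The same numerator can thus read as a shortfall or a success depending on the denominator, which is the researchers' core point about demographic context.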
As such, the researchers argue, computing departments should be held to a standard which matches the demographics of their larger universities, and larger universities to the demographics of their state. The latter is especially salient for the public schools that the CIC partners with, which tend to reflect the demographics of their states more closely than private schools do.
By expanding their data analysis to include these comparisons, and by doing so over time to account for demographic changes, computing faculty and administrators can understand student demographics in a wider, more accurate context, and can plan more effective strategies to broaden participation. And at the CIC, which centers such analysis in its consulting work, Brodley will continue to advocate these principles.
“This is relevant to every area, not just computing,” she says. “Physics should be looking at it. English should look at it. Nursing should look at it.”
Brodley published a second article in the same magazine, co-authored with fellow CIC director Catherine Gill, exploring the value of common exams and assignments across multi-section courses in introductory computing classes, and their relevance to broadening computing participation. She also co-authored a piece in the August edition analyzing the math requirements of the 158 US universities that graduate more than 150 CS students each year, and suggesting several ways to ensure that math pre- and co-requisites for CS classes are not a barrier to discovering and persisting in computing.