The use of artificial intelligence in education is a topic of growing interest and concern. Research examining racial bias in AI has led to the establishment of a research collaborative at New Mexico State University’s College of Health, Education and Social Transformation. The initiative brings together faculty from NMSU, the University of Texas at El Paso, and other regional universities to focus on bias in educational uses of generative AI.
Generative AI tools such as ChatGPT produce text, image, and audio content based on patterns learned from their training data. Melissa Warr, an assistant professor in the School of Teacher Preparation, Administration and Leadership at NMSU, leads a study investigating racial bias in ChatGPT and its implications for educational tools built on it.
The study, co-authored by Nicole Jakubczyk Oster of Arizona State University and Roger Isaac of NMSU, found that ChatGPT scored identical student writing samples differently when demographic information was implied rather than explicitly stated.
“I used the same passage every time – the only thing that changed was how I described the student,” Warr explains. “The resulting score patterns show significant but implicit bias.”
Warr noted that explicit mention of race did not reveal clear bias; however, descriptions such as "inner-city public school" or "elite private school" led to lower scores for the inner-city students. Bias also appeared when students were described as preferring rap music versus classical music.
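The published protocol is more involved, but the basic setup Warr describes, scoring one fixed passage while varying only the student description, might look roughly like the sketch below. The prompt wording, descriptors, and scoring scale here are illustrative assumptions, not the study's actual materials.

```python
# A minimal sketch (not the authors' actual protocol) of an audit that scores the same
# essay repeatedly, varying only how the student is described in the prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ESSAY = "..."  # placeholder: the identical student writing sample used in every trial

# Hypothetical descriptors; the study varied implicit cues such as school type or music preference.
DESCRIPTORS = [
    "a student at an inner-city public school",
    "a student at an elite private school",
]

for descriptor in DESCRIPTORS:
    prompt = (
        f"The following essay was written by {descriptor}. "
        "Score it from 1 to 100 and briefly justify the score.\n\n" + ESSAY
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(descriptor, "->", response.choices[0].message.content)
```

In practice, an audit like this would repeat each condition many times and compare the score distributions, since individual model responses vary from run to run.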
Recent findings indicate newer AI models like ChatGPT-4o exhibit even more bias than earlier versions. “When directly labeling race, there is bias – it’s just not in the expected direction,” Warr says. She adds that feedback varies in tone based on race: less confident language is used with White students while a more authoritative tone appears with Black and Hispanic students.
These ongoing findings prompted Warr to help establish the Biased AI in Systems of Education (BAIS) research collaborative. The group includes six NMSU faculty members alongside colleagues from Arizona State University, the University of Texas at El Paso, Universidad Autónoma de Ciudad Juárez, the University of Central Arkansas, Utah Valley University, and a Spencer Foundation representative. Their focus extends beyond identifying bias to educating about critical use.
Nicole Oster comments on her experiences: “Learning about generative AI has been full of surprises as it is an emerging and evolving technology.” She notes that educators should be cautious but acknowledges benefits such as translating texts between languages or adjusting their reading levels.
“While our findings suggest teachers should avoid or be very cautious of using generative AI to grade or provide feedback on student work,” Oster states, “there are many other uses...that can promote equity in education.”
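One of the equity-oriented uses Oster points to, adjusting a text's reading level, can be sketched in a few lines. The prompt and target grade level below are illustrative assumptions rather than a recommendation from the researchers.

```python
# A minimal sketch of asking a model to rewrite a classroom passage at a lower reading level.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

passage = "..."  # placeholder: any classroom text

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Rewrite the following passage at a 4th-grade reading level, "
                   "keeping the key facts intact:\n\n" + passage,
    }],
)
print(response.choices[0].message.content)
```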
Researchers plan a summer workshop for teachers to help them design curricula addressing these issues.
For more details on Warr’s research, visit [Melissa Warr's website](https://melissa-warr.com/). To read more stories like this one, visit [Pinnacle Magazine](https://pinnacle.nmsu.edu/).