When we think about the hallowed halls of academia, we often envision a place where lofty ideas are discussed, philosophies are argued and theories are tested.
Does artificial intelligence have a place in these supposed bastions of independent thought?
As we embark on a deeper search for answers, we should obviously consider the ethics of AI use by students and teachers, but also think about recruitment and retention, as well as the technology disparities between public and private universities.
An ethical gray area for AI?
Marley Stevens is a University of North Georgia student who has been in the news recently over cheating allegations stemming from her use of the AI tool Grammarly while writing an essay for class. She says she only used the software to check for spelling and punctuation errors, but after running her work through AI-detection software, her professor gave her a zero on the paper and reported her to the school’s Office of Student Integrity. It’s been reported that Stevens lost a scholarship and was put on academic probation after a disciplinary hearing.
However, in an unsurprising plot twist, the AI-detection software used in this case has also come under fire for inaccuracies in identifying AI-generated content.
Turnitin’s AI-detection tool has been disabled at major universities including Vanderbilt, Michigan State, and Northwestern. And, although it wasn’t a factor in this particular case, a study conducted by Stanford University has shown that AI-detection tools in general are biased against non-native English speakers.
The question around AI use on college campuses came up in interviews conducted in Philadelphia by Love Now Media. Most of the respondents were college students who readily acknowledged their own use of tools like ChatGPT, but when it came to the question of whether professors should also take advantage of the technology, replies were mixed.
Michael has an instructor who “is already using AI and openly admits to it,” he said. Because the instructor offers insight into how he’s using the AI, Michael sees no problem with it and even goes so far as to call it “beneficial.”
Conversely, Owen felt very strongly that “teachers should know the lessons,” and that AI “doesn’t know how to interact with people in a way that teachers can.”
Is there a chance universities will use AI to replace teachers? Morehouse College recently announced the introduction of AI teaching assistants, rolling out in the fall semester. But the California state legislature passed a bill in June prohibiting AI bots from replacing community college faculty, and similar legislation has been introduced in Minnesota.
AI as the great equalizer?
In May, I had the opportunity to give public health educators at Philly’s La Salle University a high-level overview of how they might start thinking about incorporating artificial intelligence into the school’s programs.
I was surprised by the disconnect between the educators’ enthusiasm for bringing AI into their teaching and their lack of optimism about how, or whether, the school would be able to fund any new initiatives. Something as simple as the school paying for a professional ChatGPT subscription was regarded as doubtful.
That’s a very different approach from the one taken by other, more affluent schools in the city, and I was struck by the disparity.
The University of Pennsylvania has not only added bachelor’s and master’s degree programs in AI, but is also integrating AI across all fields at Wharton, its business school, and has hired a vice dean of AI for its medical school. Drexel University launched a learning group to better understand how AI can be a teaching tool, and Temple University is using a recent $2.5 million donation to advance its AI engineering curriculum.
Amira, another Love Now Media interviewee, was asked if they thought AI was something that could bridge the gap between public and private education. “I don’t think so,” they said, “because the gap between that is wealth inequality and income taxes informing how much schools get access to money and resources. So, I think that’s more to do with classism than access to AI.”
We also have to consider whether AI is screening potential students out of consideration before they ever get a chance to apply.
Some schools now use AI tools like Element451 for predictive analytics, gauging, for example, how likely a prospective student is to apply in order to save money on marketing costs. Artificial intelligence can be an enormous help to overworked admissions teams in college recruitment, and AI chatbots have helped prospective students with applications and connected current students with services shown to improve retention and graduation rates.
However, this all still depends on a school’s resources and its ability to implement these systems. It remains to be seen whether schools with fewer resources will be able to keep up.
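For readers curious about what “predictive analytics” looks like in practice, here is a minimal, hypothetical sketch in Python. It assumes a simple logistic regression over made-up engagement features; it is not Element451’s actual method, only an illustration of how a “likelihood to apply” score could steer a recruitment team’s marketing spending.

```python
# Hypothetical sketch of a "likelihood to apply" score, assuming logistic
# regression over invented engagement features. Not Element451's method;
# just an illustration of predictive analytics in recruitment.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up features per prospect: [campus_visits, emails_opened, inquiry_form (0/1)]
X_train = np.array([
    [0, 1, 0],
    [2, 8, 1],
    [1, 3, 0],
    [3, 12, 1],
])
y_train = np.array([0, 1, 0, 1])  # 1 = student ultimately applied

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new prospect; an admissions office might prioritize outreach
# (or skip costly mailers) based on this estimated probability.
new_prospect = np.array([[1, 5, 1]])
prob_apply = model.predict_proba(new_prospect)[0, 1]
print(f"Estimated probability of applying: {prob_apply:.2f}")
```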
The kids are alright?
On brand for Love Now Media, interviewees were also asked what they loved about learning, and whether AI could encourage or enhance that love.
Responses landed on both sides of the question.
Lauren felt like people could “lose their sense of self because they were so connected to technology,” and Qadasha thought that AI caused people to “have less access to creative things.” However, Harry felt that AI “can encourage you. [I think] It gives you a lot more resources, resources to learn, and opens up a lot more doors.”
Overall, the respondents seemed to have a balanced attitude toward artificial intelligence: they hope it will continue to evolve into a more useful tool, while maintaining enough healthy skepticism not to get swept up in the hype.
If colleges and universities can come up with creative ways to engage everyone on campus with AI while establishing policies that ensure its ethical use, the relationship between higher education and the technology can ultimately benefit both.
This is the second in a series of essays produced by Love Now Media and Technical.ly exploring the impact of artificial intelligence on various aspects of life in Philly. Read the first essay on AI and personal safety.
The series is produced with the support of the Philadelphia Journalism Collaborative, a coalition of more than 25 local newsrooms doing solutions reporting on things that affect daily life where the problem and symptoms are obvious, but what’s driving them isn’t. Follow at @PHLJournoCollab.