
International Partners Discuss Uses of AI

In the near future, artificial intelligence (AI) systems may help people dealing with intimate partner violence, be used to address misinformation, or coach people to be more civil online. Those are just some of the research topics shared at a conference on the sociotechnical consequences of AI at UNC-Chapel Hill last month. The event brought together an international group of interdisciplinary scholars who explored the ethical, organizational, social, and computational dimensions of artificial intelligence.

Conference Evolved from Long-Standing Partnership

The conference came about from conversations between School of Information and Library Science Professor Mohammad Hossein Jarrahi and Universität Tübingen Senior Researcher Laura Schelenz.

“The primary goal was to bring together researchers from the two universities who are interested in different aspects of AI. However, we ended up attracting researchers from other institutions as well, which enriched the conversation,” said Jarrahi.

UNC-Chapel Hill’s partnership with Tübingen began in 1986. The partners were recently recognized for their data science collaboration.

“Carolina’s partnership with Tübingen is as rich as it is vibrant,” said Barbara J. Stephenson, Vice Provost for Global Affairs and Chief Global Officer at UNC-Chapel Hill. “It’s made up of many strategic interactions, inspired by our institutions’ — and nations’ — aligned values, which allow us to explore shared global challenges like generative artificial intelligence and information warfare. I am grateful for Dean Bardzell’s vision and leadership, as the School of Information and Library Science demonstrates just how indispensable this partnership is to the University’s core mission.”

[Photo: Duke University professor Christopher Andrew Bail at the front of a classroom, speaking to the audience]

Researchers Share Current Work in AI

The one-day conference involved short presentations on current research related to a variety of uses of AI. Participants then broke into working groups for vibrant discussions on key themes. The event concluded with a keynote lecture on generative AI from Christopher Andrew Bail from Duke University.

“The event looked at AI from several angles — as a transformative tool for society, as a new technology that isn’t yet addressed in our legal systems, as a tool for studying people and society in new ways that weren’t possible previously, and as a tool for building new architecture, art, and technology,” said SILS doctoral student Aashka Dave.

“I appreciated the breadth of the event; it broadened my understanding of the sociotechnical consequences of AI in a way that isn’t possible when I am looking at one narrow slice of AI’s applications to our lives.”

A team from the Center for Information, Technology, and Public Life at UNC-Chapel Hill discussed their work using AI to identify data voids based on Wikipedia search activity. The group cited the sudden spike in searches for hydroxychloroquine, a medication often used to prevent and treat malaria, during the COVID-19 pandemic. The researchers showed how AI could detect these kinds of sudden spikes in information searches in real time. Journalists could use that signal to help circulate verified information, and Wikipedia editors could use it to identify pages that may need to be edited or protected from those seeking to spread false or misleading information.
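To make the idea concrete, here is a minimal, hypothetical sketch of how real-time spike detection on daily pageview counts might work, using a rolling-mean z-score. The window size, threshold, and sample numbers are illustrative assumptions, not the CITAP team's actual method or data.

```python
import statistics

def detect_spikes(counts, window=7, threshold=3.0):
    """Return indices of days whose count exceeds the trailing
    window's mean by more than `threshold` standard deviations.

    This is an illustrative anomaly check, not the researchers'
    published approach.
    """
    spikes = []
    for i in range(window, len(counts)):
        recent = counts[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent) or 1.0  # guard against zero variance
        if (counts[i] - mean) / stdev > threshold:
            spikes.append(i)
    return spikes

# Steady traffic for a week, then a sudden surge on day 7
views = [120, 130, 125, 118, 122, 127, 124, 900]
print(detect_spikes(views))  # -> [7]
```

In practice a monitoring system would run a check like this continuously against fresh pageview data, alerting editors or journalists when a topic's search interest jumps far outside its recent baseline.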

Conference Outcomes

Jarrahi says his goal is to synthesize the various discussions from the workshop and to explore further opportunities for collaboration between the two institutions.

Many attendees likely left having met new potential collaborators and discovered new ways of approaching their own research topics.

Dave expects AI to be a permanent fixture in her life and career.

“As someone who studies data models and how we use and understand them, I see AI as a progression in that trajectory, opening up new avenues for how we find and understand information and how we build accessible information systems and media ecosystems. There are challenges, of course, that need studying, including whether/how AI can perpetuate biases already present in our society, and how we as a society can work to prevent those.”