Adapting to AI

Newhouse faculty members are immersed in study and research across a host of topics investigating how artificial intelligence touches our lives, from the challenges of rooting out fake news to the opportunities and risks that come with the integration of AI and extended reality. Here, they share insights into how AI is affecting their respective fields.


Makana Chock 

David J. Levidow Endowed Professor of Communications 
Director, Extended Reality Lab 

The integration of AI and extended reality (XR) offers incredible opportunities for creativity and heightened immersion, but it also increases the risks of misinformation and privacy violations. XR technologies collect individualized information about users’ body motions. Generative AI can incorporate that information into personalized interactive experiences for XR users, with great potential for education, job training, therapy, entertainment and gameplay. However, the same data could also be used to manipulate and mislead through tailored persuasive techniques and interactions with personalized virtual avatars.


Joshua P. Darr

Associate Professor, Communications
Senior Researcher, Institute for Democracy, Journalism and Citizenship 

AI forces the dwindling local news industry to answer a difficult question: Is AI-generated local news better than nothing? If we want AI to be a civically useful part of local news’ future, local governments need to provide transparent civic information that will train and supply AI “journalism.” One part of my research into new models of nonprofit news asks whether we can use the efforts of these new sites to create informative, equity-focused civic products (including meeting notes, how-to guides and policy analyses) that could be scaled and delivered to broader audiences using AI.


Jason Davis

Research Professor
Co-Director, Real Chemistry Emerging Insights Lab

While the challenges of fake news and misinformation are not new, the speed, scale and global impact created by digital media channels certainly are. Generative AI systems capable of creating completely synthetic text, images, audio and video have the potential to disrupt this dynamic and further erode public trust. To meet this challenge, our research has had to move beyond developing and evaluating new digital detection tools and start working with these AI systems directly. By pushing AI generative capabilities to the limit, we can help shape digital literacy and ethical frameworks.


Regina Luttrell

Senior Associate Dean
Associate Professor, Public Relations

The integration of AI into mass communication classrooms has effects that will reverberate for years to come. Proper utilization and integration of AI-driven tools are key to the future of our industry. For instance, PR professionals now rely on AI-powered data analysis to make data-driven decisions, a stark change from traditional roles in the field. However, rushing to incorporate AI without understanding its vast potential and limitations can lead to unintended results. The value of artificial intelligence to professionals and its rapid proliferation in strategic communication make it an indispensable addition to the field and the classroom. It is vital to research and comprehend these shifts and their ramifications.


Generative AI, like GPT-4, has revolutionized product development by automating parts of the user interface design and coding processes. With AI, code can be generated at unprecedented speed, reducing human error and enhancing efficiency. In the advanced media management program, we are examining this shift toward automated product development by studying the quality of AI-generated code and design and its impact on developers’ productivity. We are also looking at the implications of this technology for workforce policies, dynamics, job roles and the skills required in the future. We aim to better understand how AI can complement human developers rather than replace them.