The inaugural Newhouse Impact Summit focused on bridging academia with industry by examining the future of artificial intelligence.
The event, titled “Provoking and Prognosticating on Generative AI Futures,” took place July 27-28, 2023, at the Newhouse School.
All events were held at the Joyce Hergenhan Auditorium in Newhouse 3 unless otherwise noted.
Opening remarks
Mark J. Lodato, Dean, S.I. Newhouse School of Public Communications
Moral Panics or Mindful Caution? Moderating Excitement for & Expectation of AI’s Impact
Presenter: Nick Bowman, Associate Professor, Newhouse School
Bowman has contributed to more than 125 peer-reviewed manuscripts, 200 conference presentations, several dozen book chapters, and two textbooks. His research is focused on understanding the cognitive, emotional, physical, and social demands of interactive media content. In addition to Syracuse, he also holds faculty appointments at the Universität Erfurt in Germany and National Chengchi University in Taiwan.
Overview: With increasingly accessible tools bringing AI platforms to the masses, an inevitable cycle of hype, fear, and misunderstanding stands to outpace bona fide developments in the technologies themselves. Yet this surge of interest ignores the decades-long use of AI across media and communication fields. Bowman will discuss the history of several types of AI, including what they can and cannot do (yet); and connect this conversation to prior research and scholarship on historical moral panics and earlier technology hype cycles. The presentation will end with a call for more careful, deliberate, and precise discussion about the perils and pearls of AI, serving as a calibration point for how we frame its role in an increasingly mediated world.
LLMs & Linguistic Competency: How Well Does GPT-3 Understand English Creole?
Presenter: Rhonda McEwen, President and Professor, Victoria University in the University of Toronto
McEwen has worked in digital communications media for 15 years, both for companies providing such services and in a consulting role. Her research and teaching center on information practices involving new media technologies, with an emphasis on mobile and tablet communication, new media, social networks, and sensory information processing.
Overview: GPT-3 has proven to be proficient at understanding and producing English sentences. Less is known about its ability to handle dialectal variation or English spoken by those outside of the “Inner Circle” (Kachru, 1985). This may place World English speakers outside the Inner Circle at a disadvantage, as they may not be able to access the same information as those inside it, thereby perpetuating inequality. This presentation will share early data from a research project that examines GPT-3’s linguistic competency as it is tested on Trinidadian Creole – a derivative of English and other European languages. The aim is to determine whether a problem exists and, if it does, explore ways to rectify it. The core research question is: to what extent does GPT-3 provide equivalent answers to prompts posed in Standard Canadian English and Trinidadian English Creole?
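The comparison at the heart of that research question can be sketched in outline: pose semantically equivalent prompts in the two varieties and score how similar the model’s answers are. Everything below is a hypothetical illustration; the stubbed `query_model`, the sample prompts, and the simple word-overlap score are placeholders, not the project’s actual protocol or data.

```python
# Sketch of a dialect-equivalence comparison. A real study would replace
# query_model with live GPT-3 API calls; here it returns canned answers
# so the scoring logic can be shown end to end.

def query_model(prompt: str) -> str:
    """Placeholder for a GPT-3 call; responses are invented for illustration."""
    canned = {
        "Where can I pay my electricity bill?":
            "You can pay it online or at any utility office.",
        "Weh ah could go an pay meh light bill?":
            "I'm not sure what you mean by 'light bill'.",
    }
    return canned.get(prompt, "")

def jaccard(a: str, b: str) -> float:
    """Crude equivalence score: word overlap between two answers (0 to 1)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

standard = query_model("Where can I pay my electricity bill?")
creole = query_model("Weh ah could go an pay meh light bill?")
print(f"equivalence score: {jaccard(standard, creole):.2f}")
```

A low score between paired prompts would flag exactly the kind of dialect gap the project sets out to measure; in practice, semantic similarity models rather than word overlap would be the more defensible metric.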
Saving Time, Maximizing Impact: AI in Life Science Sales & Training
Presenter: Jim Ferreira, Learning and Development Leader, MilliporeSigma
Ferreira is an L&D lifer. He started by burning training videos to CDs in the 1990s and now leads a global learning program for a billion-euro company. Focused on creating modern learning programs and finding better ways for employees to get information, he is excited about advances in digital training tools and the amazing possibilities they offer for modern corporate learning.
Overview: AI has revolutionized numerous industries, and the life sciences are no exception. This presentation will explore AI-powered lesson planning and its impact on sales teams within the life science industry. Sales professionals in life science are responsible for conveying the value of their organizations’ services to potential clients. But the industry is evolving rapidly, complexity is increasing, and teams must command a vast array of technical knowledge. This is where AI comes into play: by engaging with AI, one can quickly generate draft lesson plans simply by inputting a topic. This integration offers several advantages. First, AI algorithms can analyze vast amounts of data from scientific literature, industry reports, and case studies, providing well-rounded content that keeps pace with the latest advancements in the life science field. Second, AI-powered lesson plans can be customized to the unique needs of different teams: one can input specific requirements into the AI model and receive tailor-made plans that align with distinctive selling propositions. AI has unlocked a world of possibilities. With tools like ChatGPT, the future of employee training shines brightly, and continued advances in AI will pave the way for further innovation in the field.
Relational AI: How Generative AI Reshapes Communications, Stakeholder Relations and Trust
Presenters
Sévigny runs an active management and strategic consulting practice driven by his extensive experience in communications, marketing, politics, data science and analytics. His research focuses on the study of human communication and communication theory applied to the practices of public relations, communications management and reputation management.
Waxman is a digital communications strategist, AI researcher, developer of social media marketing courses for LinkedIn Learning and president of a consulting firm. He teaches digital strategy at McMaster University, Schulich School of Business, Seneca College and the University of Toronto School of Continuing Studies.
Overview: Building relationships with stakeholders has traditionally been done through live events, public relations, and marketing initiatives. While elements such as scheduled social media posts and other automated tasks have been added to the process, the intentionality is still human. Relational AI alters that dynamic by placing a machine between an organization and those it is trying to reach. This presentation will delve into how AI agents alter ethics, persuasion, symmetrical communications, and relationships between an organization and its public.
Lunch and Learn: Editor in the Loop Approaches to AI in Journalism
Presenter: AJ Chavar, Creative Strategist, The New York Times and Newhouse Fellow in Immersive Journalism
Chavar is an award-winning cinematographer, photographer, multimedia journalist and producer. In his current role, he explores and explains emerging technologies in the field of journalism at the R&D Lab of The New York Times. He recently returned to the Newhouse School as a Fellow in Immersive Journalism to teach photojournalism courses.
Overview: Many AI system designs incorporate a “human in the loop” approach. How do we apply that principle in practice for journalism? This session will share how pushing the boundaries of technology is paving the way for innovative solutions in the domains of NLP, LLMs, image processing, and computer vision. The findings and developments from these projects contribute to the advancement of journalism and open new avenues for future research and application. Specifically, the session will address NLP / LLM / AI (“switchboard” tool), ICON (pose extraction), OCR (“paper trail” tool), and NeRF (portrait experiments / photorealistic rendering).
Deconstructing Large Language Models
Presenter: David Gunkel, Professor, Northern Illinois University
Gunkel is an educator, scholar, and author specializing in the ethics of emerging technology. His teaching and research synthesize the hype of technology with the rigor and insight of contemporary critical analysis. He is the author of 12 books and over 80 scholarly journal articles and book chapters; and has lectured and delivered papers throughout North America, South America, and Europe.
Overview: Recent criticism of large language models (LLMs) and generative AI has focused on the way these applications are little more than “stochastic parrots”: technological devices that generate seemingly intelligible statements but cannot understand a word of what they say. If the terms of these evaluations sound familiar, they should. They are rooted in foundational concepts regarding language and technology that have been definitive of Western systems of knowing since the time of Plato. The current crop of critical correctives and LLM hype-reduction efforts reproduces (or, one might be tempted to say, “parrots”) this ancient wisdom. And it works precisely because it sounds like good common sense. But that is the problem. This presentation takes aim at this largely unquestioned theoretical framework; identifies its limitations in understanding the opportunities and challenges of LLMs; and concludes by providing a more robust method for responding to these technological innovations.
GenAI & the Future of SEO: How Automation Can Create & Inspire Content that Ranks
Presenter: Adrienne Wallace, Strategic Operations Director, Black Truck Media and Marketing, and Associate Professor, Grand Valley State University
Wallace is a prolific writer and conference speaker with a passion for student-to-professional development. She has taught at GVSU for more than 14 years and advises the GVSU Public Relations Student Society of America chapter and its student-run PR firm, GrandPR. To keep lessons relevant for learners, she also works for Black Truck Media & Marketing as a Digital Strategist.
Overview: Generative artificial intelligence (GenAI) can be used in search engine optimization (SEO) and content marketing. AI can help with keyword research, content optimization, and data analysis, in addition to aiding content strategy, creation, distribution, and reporting. In the communications industry, we are constantly producing copy for blog posts, website landing pages, product descriptions, ads, social media posts, video descriptions, and emails. AI writing software could help this process; however, it could also lead to lower-quality content that harms an SEO strategy. This session will uncover why we should avoid creating “search engine-first” content, as well as ways in which GenAI can assist production of original, high-quality, people-forward content that demonstrates the qualities of E-E-A-T (experience, expertise, authoritativeness, and trustworthiness).
Analyzing & Exploiting Multimedia Assets Utilizing Generative AI
Presenter: Arslan Basharat, Assistant Director of Computer Vision, Kitware
Basharat has taken on roles in research, software development, project management, business development, and personnel management over the years. Serving as a principal investigator on a variety of software research and development projects sponsored by government organizations, he has investigated computer vision, anomaly detection, activity recognition, dynamical systems in videos, imagery forensics, and imagery retrieval.
Overview: Generative AI can be utilized in a variety of innovative ways for the analysis and exploitation of images, videos, audio, and text, with applications including image / video understanding, object detection, speaker identification, and forensic authentication of multimedia assets. This talk will cover two of these: object detection in satellite imagery and forensic authentication of multimedia assets. Analyzing satellite images to provide situational awareness is crucial for applications ranging from disaster relief to defense. Though object detection is an important technique for this purpose, low-resolution satellite images pose a significant challenge to many algorithms. This limitation is addressed by utilizing generative AI super-resolution methods to enhance the imagery and thereby improve performance; the success of this approach is demonstrated by significantly improved accuracy in detecting airplanes in low-resolution imagery. Generative AI has also catalyzed the threat of disinformation: online articles with AI-generated text and images can now be produced at scale as part of such campaigns, and social media networks can further spread this disinformation through synthetic AI-generated user profiles. Defensive tactics are therefore crucial to mitigating this threat. The talk will discuss various forensic authentication techniques developed under the DARPA SemaFor program, encompassing a variety of AI methods that have demonstrated compelling results in multiple quantified benchmark tests.
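The enhance-then-detect pattern described above can be illustrated generically: upscale a low-resolution image before handing it to a detector that needs a minimum number of pixels on target. This is a toy sketch, not Kitware’s actual pipeline; a nearest-neighbor upsample stands in for a generative super-resolution model, and the detector is a size-threshold stub.

```python
import numpy as np

def super_resolve(image: np.ndarray, scale: int = 4) -> np.ndarray:
    """Placeholder for a generative super-resolution model
    (here: simple nearest-neighbor upsampling)."""
    return image.repeat(scale, axis=0).repeat(scale, axis=1)

def detect_airplanes(image: np.ndarray, min_size: int = 32) -> bool:
    """Stub detector: real models need enough pixels on target to fire,
    modeled here as a minimum image-side threshold."""
    return min(image.shape[:2]) >= min_size

low_res = np.zeros((16, 16), dtype=np.uint8)  # below the detector's threshold
print(detect_airplanes(low_res))                 # False: too few pixels
print(detect_airplanes(super_resolve(low_res)))  # True: 64x64 after 4x upsampling
```

The point of the pattern is that the detector itself is unchanged; only its input is enhanced, which is what makes super-resolution attractive as a drop-in preprocessing step.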
Unveiling the Power of AI: From Imagination to Reality
Presenter: Rafael “RC” Concepción, Assistant Teaching Professor, Newhouse School
Concepción is a photographer, podcast host, educator, and author of 12 books on photography, Photoshop, Lightroom and HDR photography. When he is not teaching digital post-production, he travels around the world consulting on developing best practices in image management, photography techniques and website development.
Overview: Dive into the remarkable world where imagination merges seamlessly with reality. Explore the cutting-edge techniques that allow you to push the boundaries of creativity and solve common problems in the world of content creation. Learn how you can responsibly use AI to unleash a new era of artistic expression and visual storytelling.
The Good, the Bad, & the Ugly: AI Use in Entertainment and Art, Disinformation and Offensive / Obscene Content
Presenter: Greg Archer, Video Producer / Editor and Disinformation Team Lead, PAR
Archer manages video, audio, VFX, and multimedia production for internal and external use at PAR. His work focuses on using machine learning and AI to create deepfakes and synthetic media; developing programs to identify and attribute misinformation and manipulated media; and creating datasets of manipulated / fake media for the DARPA MediFor and SemaFor programs.
Overview: As AI continues to become more widely available and adopted, the use cases for generated media grow as well. This presentation will outline “the good” use cases related to entertainment and art; “the bad” related to disinformation; and “the ugly” related to offensive and obscene content. It will also touch on the ethical use of AI and who decides where the “good, bad and ugly” lines are drawn.
Navigating Human-AI Teams: Mental Models & Stakeholder Perceptions
Presenter: Travis Loof, Associate Professor and Graduate Program Director, University of South Dakota
Loof is dedicated to understanding interactive media processing and human-computer / AI interaction. He has contributed to several peer-reviewed manuscripts and conference presentations on how humans socially and cognitively process media and technology in ways that parallel our interactions with other humans. His research aims to bridge theory and practice by providing valuable insights for researchers, practitioners and students in the field.
Overview: This presentation explores the dynamics within Human-Artificial Intelligence teams (HATs) and their impact on decision-making. Much of the existing research on human-AI interaction has focused on intra-team dynamics, neglecting the outside perceptions of secondary groups. Therefore, this talk will delve into two themes: the role of mental models of AI; and how the mental models of secondary groups may impact adoption of AI-assisted business intelligence. The first theme focuses on the practical and theoretical explanations of applying mental scripts to human-AI interactions. The aim is to discuss the conditions necessary for constructing applicable mental models of AI within teams. Mental models can influence how humans interact with AI and their perceptions of the capabilities, responsibilities, and roles attributed to it within a team setting. The second theme focuses on the dynamics associated with stakeholder interaction with HATs. Analyzing these can yield insight into the influence of mental models on team dynamics, decision-making processes, and overall performance. Furthermore, differences in team composition may have implications for HAT performance and evaluation. The presentation will conclude by considering the challenges of increasing confidence and reducing uncertainty for recipients of insights generated by HATs.
The following events will take place in the I-3 Center in Newhouse 3.
Next Steps and Grant Brainstorming
Presenter: Jason Davis, Research Professor, Newhouse School
Davis is co-director of the Real Chemistry Emerging Insights Lab at the Newhouse School and serves as lead researcher on DARPA’s SemaFor program. His research focuses on the detection of misinformation and disinformation using AI / ML tools, and on falsification methods for multi-modal media. He holds over 10 patents in the areas of diabetes, water purification technology, anti-fouling coatings, and RNA sample stabilization technology.
Overview: This session will focus on developing action-oriented collaborations, opportunities, and mechanisms that sub-groups can identify and pursue together post-summit (including grant opportunities, public-private partnerships, and hackathons). It will also support specific breakout opportunities that groups may have identified earlier during the summit and want to explore face-to-face in more detail.
From Provocation to Publication: Extending Our Notes for Broader Publics
Presenters
Luttrell is an innovative educator, distinguished scholar and experienced academic leader. In addition to successfully securing funding for research initiatives, she has authored more than a dozen books; published in academic and professional journals; and presented at domestic and international conferences. Her research focuses on AI, public relations, data analytics, a multi-generational workforce and the intersection of social media with society.
Overview: From initial abstract submissions to discussion and debate, this session co-led with Bowman will consider avenues in which the reach of the AI Summit can be refined and extended in both the short-term (through white papers published via Newhouse) and the long-term (through an edited volume proposal with Routledge that is already in discussion).