Science Communication in the Age of GenAI: Between Trust, Truth, and Transformation
April 16, 2025 | Isabell May

What does generative artificial intelligence mean for the future of SciComm? Dive deeper in our workshop on May 1.
Read about current trends in science communication and science communication-related activities around the University of Maryland, Baltimore (UMB) in SciComm Spotlight, the monthly column of the University of Maryland School of Graduate Studies’ Science Communication (SciComm) certificate program. To see previous SciComm Spotlight columns, visit the program's website.
What if Bill Nye or Neil deGrasse Tyson turned out to be GenAI-generated avatars? Would we trust them the way we trust the real people? What differentiates information created by experienced and trained science communicators from information produced by large language models (machine learning systems trained on massive datasets of text to generate human-like language) that power apps and tools like ChatGPT and Copilot, to name just two of the many available at the moment? Also, as an educator and practitioner of science communication, I feel a deep commitment to a conscious connection with our natural resources. And if recent reports about the energy demands of GenAI infrastructure are to be believed, the impact of integrating GenAI tools and applications ever more deeply into day-to-day operations across many areas is deeply concerning. How do I balance my commitment to sustainability and environmental justice with the growing demand for, and even interest in, integrating GenAI into my daily workflow?
These questions are part of a wider reckoning with the promise and peril of GenAI in our professional lives. As a science communication educator, I ask myself these and other questions frequently. But here is something I think many of us can agree on: The growth of generative artificial intelligence (or GenAI for short) technology has taken the world by storm. The jury is still out on whether this storm will transform our educational system for the better, just as a hurricane coming up the coast during a hot Maryland summer can leave the air cool and breezy — or whether it will tear down everything in its path. More likely, GenAI, and the many, many apps that everyone in the tech world cannot wait to throw money at, will be around for a while, and neither the savior nor the doomsday metaphor will be our reality.
Just as GenAI has dominated conversations about higher education, it has also caught the attention of those researching and practicing science communication. Science communication (at times abbreviated as SciComm), defined as communication about science with individuals and groups outside of traditional research settings, was first claimed as a discipline in the early 1950s. Today, we understand science communication as engagement with public audiences and key stakeholders, such as politicians, about scientific research and innovations, as well as informal science learning (ISL), predominantly among K-12 populations, to borrow Faith Kearns’ understanding of this (inter)discipline from her 2021 publication “Getting to the Heart of Science Communication: A Guide to Effective Engagement.”
I find myself at times feeling like I have whiplash from being thrown between the two opposing ends of what I like to call the GenAI spectrum: complete refusal at one end, complete adoption at the other. Along this spectrum, I recognize that GenAI can enhance access to and comprehension of scientific research, especially for public audiences with a less in-depth understanding of the scientific process. At the same time, there are legitimate concerns that GenAI-generated content might increase misinformation. In the context of science communication, such information can deepen the growing mistrust of science and scientists. Additionally, GenAI-generated content can be prone to bias and fail to represent diverse voices and experiences, especially those of marginalized communities, whose stories are often absent from the mainstream source material that the large language models powering GenAI platforms rely on.
Some of the central questions occupying SciComm practitioners and researchers are:
- How will GenAI affect the dissemination of scientific research and content?
- Will it broaden the dissemination of science and aid access to scientific content and processes?
- Will it become a tool for hijacking science and scientific research in service of political agendas?
- Will it contribute to greater access for marginalized groups to scientific knowledge and participation in the scientific enterprise?
- Will it continue to empower those already empowered and leave those already marginalized by historic systems of oppression and exclusion even further behind?
More importantly, I am thinking of the implications of GenAI tools replacing human content creators. There is mounting evidence that employers are looking to GenAI to replace human employees, especially in entry-level jobs. An article in The Times cites an exasperated CEO of an advertising company who “has given up employing the young anymore. It’s too much effort […] ChatGPT can do the job more efficiently, it can plan my holiday too and it isn’t always off for hen nights, doctor’s appointments and mental health days” (a hen night is the British version of a bachelorette party).
GenAI does appear to be more effective at some tasks than human writers, as a recent study suggests. The 2024 study, published in PNAS Nexus, asked more than 250 lay readers to compare AI-generated summaries of scientific research with summaries written by the scientists themselves; the GenAI summaries were perceived as more trustworthy and led to better recall of the information among readers. Of course, I wonder whether summaries created by those trained in science communication practices, with an in-depth understanding of writing not just as a product but as a process and a form of interaction and learning, would have performed on par with the GenAI-created summaries or even outperformed them. No disrespect to us researchers, but many of us lack formalized training and professional development in crafting effective messages about our own research for public engagement.
In this context, what does relying on GenAI tools and apps mean for the future of science communication, especially as entry-level positions in science communication start to disappear? These entry-level positions play an important part in creating an ecosystem of professionals who support individual researchers and research institutions in disseminating their findings to broader public audiences. Will we still have this thriving ecosystem if we rely on GenAI for content production instead of doing the work of nurturing young minds and ushering them into the profession?
As our profession grapples with the questions I raise here, as well as many others around GenAI and science communication, it is encouraging to see that the most recent issue of The Journal of Science Communication, published on April 14, 2025, is dedicated to “Science Communication in the Age of Artificial Intelligence.” The articles in this special issue present a range of views on the potential and perils of AI in science communication from diverse geographical areas, including France, Germany, the United States, China, Australia, Denmark, Israel, South Korea, and Taiwan. Adding to this collection of 10 studies, a literature review of articles from three leading science communication journals, conducted by the issue’s editors, reveals that much of the existing research on science communication and AI focuses on public perceptions of AI. What is still absent is an in-depth exploration of how science communicators engage with AI, especially GenAI, and of the impact GenAI is having on science communication ecosystems.
Ultimately, science communicators tell stories about science to different and diverse audiences. If GenAI-generated content about science is increasing, will we still have a diverse representation of voices, considering the compelling research on GenAI-generated content producing misinformation about marginalized groups? As science communicators, we care deeply about public engagement with science and telling engaging stories about scientific research and its impact on humanity’s well-being. How do we continue doing that with the advent of GenAI?
If you are interested in exploring these issues, the Science Communication certificate program at UMB is hosting a one-hour workshop on “Navigating AI in Science Communication: Challenges and Opportunities” from 12-1 p.m. on Thursday, May 1, 2025, on Zoom. Join me, current students, and alums of the program for a research-informed, critical conversation on the capabilities and limitations of GenAI in SciComm. This interactive workshop explores how tools like ChatGPT and Copilot are being used in science journalism, outreach, and academia, and what that means for accuracy, ethics, job security, and audience trust.
Isabell Cserno May, PhD, is an associate professor at the University of Maryland School of Graduate Studies, where she directs and teaches in the Science Communication certificate program. May also directs the UMB Writing Center and is passionate about accessible and engaging pedagogies.