
By Betsy Anderson, APR, Ph.D.
Happy PRSA Ethics Month!
As a new school year begins, the ethical use of AI is taking center stage.
The AI ethics advice I most often hear (and give) is: “Learn to use AI as an important emerging skill, but learn to use it ethically and responsibly.” But what does that really mean?
Is it ethical, for example, to use AI to write an internship or job cover letter for you? Is this a sign of a potential employee who uses time efficiently? Or someone who does as little work as possible? Who lacks writing skills? Or who adapts to change and learns to use new tools quickly? Or, is it okay to use AI to brainstorm general ideas and create an outline if the final content creation reflects an original, creative contribution?
When new technologies are introduced, new questions often arise. It is common for either utopian or dystopian viewpoints to emerge. Some people are evangelists for the positive potential for good, while others raise the alarm about negative implications. The reality often falls somewhere in between these two extremes and depends on the decisions we make about how to use technology.
Our first steps in this experiment with AI raise ethical questions reminiscent of those we considered during the widespread adoption of social media and the rise of influencer marketing.
Although Facebook, YouTube, Twitter and Instagram had already been around for five to ten years, it took until 2015 for the Federal Trade Commission (FTC) to come out with its first policy on native advertising and sponsored content. This is when hashtags such as #ad, #sponsored or #client became standard.
It also coincided with the time when the Facebook algorithm made it significantly more difficult for brands to achieve organic reach. Companies instead needed to invest in paid social for brand-created content to show up in followers’ feeds. One reason many of us lamented this change is that a benefit of public relations has historically been the “earned” and “owned” nature of our content. Brands had earned social media followers who wanted to hear from them. Once this content had to be labeled as an #ad or #sponsored post to reach the intended audience, it arguably lost some of the consumer trust that tends to be associated with organic rather than paid content. For that reason, some brands and influencers opted not to include the “sponsored” label – until the FTC required it.
However, the FTC’s press release quoted the director of the Bureau of Consumer Protection as saying, “People browsing the Web, using social media or watching videos have a right to know if they’re seeing editorial content or an ad.”
This fits with PRSA’s ethical values, such as Honesty and Fairness to our publics. It particularly follows the intent of the PRSA Code of Ethics Disclosure of Information provision: “To build trust with the public by revealing all information needed for responsible decision making.”
As we seek to come to terms with AI and begin to define what it means to use AI ethically and responsibly, I will ask students to consider this question: “If I needed to include a disclosure statement specifying to my audience how I used AI to create this content, would I choose to use AI in this way?” My follow-up will then be to ask students to actually use their disclosure statement, based on their final decision of whether or how to incorporate AI, as part of their work. The purpose of this question is to help students think through potential positive and negative effects of AI use as part of the final decision-making process. This would also be a good starting point to determine whether the planned use of AI meets the expectations or policies of a syllabus, a corporation, a code of ethics or a consumer.
Michele E. Ewing, APR, Fellow PRSA, points out another challenge in defining how to use AI ethically and responsibly. She writes, “Educators should communicate and model responsible AI use, as AI policies may differ across courses and workplaces.”
So, not only are we all trying to figure out how AI is changing our lives and work at lightning speed, and how to use it ethically and responsibly, but we’re having to discern the rules that differ depending on the norms and expectations of individual professors, clients, companies, industries, countries or legal and IT teams.
I asked myself this disclosure question last year when I used ChatGPT to come up with a class activity that included a step-by-step guide for using Google Trends to gather insights about a brand. Although using ChatGPT felt a little like “cheating,” I decided it demonstrated for students how AI can be used well. I included a statement that “AI was used to create a portion of these assignment instructions.”
When I asked the same question about whether I could use AI to answer my emails for me, the unfortunate answer was “no, not yet,” because this could involve entering student information into a non-proprietary, open AI system.
When I applied this disclosure statement litmus test to the question of whether I could use AI to do my grading for me, a recent article in the New York Times, “The Professors Are Using ChatGPT, and Some Students Aren’t Happy About It,” supported my sense that including a disclosure statement when returning an important assignment might make students feel their work was less valued – at least given current norms and expectations. While this may not violate ethical standards specifically, thinking through how disclosing AI use would be perceived also helps determine whether AI would be more efficient and effective. In this case, grading with AI would certainly be more efficient, but it may not lead to an effective outcome for student satisfaction if that is an important objective.
Similarly, in considering to what extent to use AI to write a news release, Cision’s 2025 State of the Media Report revealed the following:
When 3,000+ journalists were asked: “What are some concerns you have with PR professionals using AI to generate press releases or pitches?”
- 72% worry about potential factual errors in AI content
- 58% worry the quantity will increase, but not the quality
- 54% worry the content will lack authenticity or creativity
- 48% worry about copyright infringements and accusations of plagiarism
- 39% worry about potential bias present in AI-generated content
Cision’s report recommended being “transparent when using AI in PR.”
Some public relations agencies are beginning to include descriptions of how they plan to use AI for client work in their contracts.
It may be that including this type of AI use disclosure statement will someday seem outdated, like the equivalent of disclosing at the bottom of a receipt, “I used a calculator to figure out this tip amount.”
But in the meantime, following the PRSA Code of Ethics’ call for honesty and fairness – by asking this question and including AI disclosure statements even before the FTC requires them – could put us at the forefront of ethical AI decision-making as communication professionals.
Presently, there seem to be more questions than answers. How would you define what it means to use AI ethically and responsibly? What other questions would you like to see PRSA address in the coming year?
We hope you’ll join us for our Minnesota PRSA Ethics Practicum and Senior Leader Panel on Sept. 19. We look forward to continuing the conversation with you there!
I’d like to extend a special thank you to Minnesota PRSA Ethics Officer Joel Swanson, APR, MACT, for planning the Ethics Practicum, sponsored by BI Worldwide.
Upcoming Ethics Events and Webinars:
Minnesota PRSA Bridging the Ethics Skills Gap – Practicum and Senior Leader Panel on Sept. 19, 2025 at BI Worldwide
PRSA Video Series: AI Tools for the Modern Communicator: Technology, Ethics and Future Trends
Additional AI Ethics Resources:
The Ethical Use of AI For Public Relations Practitioners: Guidance from the PRSA Board of Ethics and Professional Standards (BEPS)
Federal Trade Commission “Endorsements” Guide (updated in 2023)
Navigating Ethical Implications for AI-Driven PR Practice by Michele E. Ewing, APR, Fellow PRSA
The Importance of Integrating AI Ethics Into the College Curriculum (p. 17) by Michele E. Ewing, APR, Fellow PRSA
Cision’s 2025 State of the Media Report on AI in Journalism: What Reporters Really Think about Artificial Intelligence