AI Ethics | Considerations & Strategies
In a recent IBM video about AI ethics, Phaedra Boinodiris, a global leader in ethical AI and author of AI for the Rest of Us, said that three AI-related ethical issues keep her up at night:
- Climate change
- A lack of awareness that AI-based “decisions” impact people’s real lives
- The misconception that AI is logically, morally, and ethically infallible
Generative AI has only been around for a few years, yet it has profoundly changed professional workflows, student writing, hiring practices, and eligibility for everything from mortgages to college admissions. The technology can also lead to adverse effects, such as cultural and gender biases, privacy violations, and misinformation.
Using generative AI responsibly involves understanding the risks and AI ethics.
What is AI ethics?
AI ethics is the set of principles for developing and using AI tools in ways that minimize negative impacts on humans and the environment. In 2021, the United Nations (UN) adopted the Recommendation on the Ethics of Artificial Intelligence, which includes principles such as the following:
- Fairness and nondiscrimination: Member states should ensure that citizens have equitable access to AI tools and that AI tools do not perpetuate stereotypes or discriminate against historically marginalized groups.
- Right to privacy and data protection: Nations should enforce policies that protect citizens’ personal information and reduce the risk of data breaches.
- Human oversight and determination: AI tools should always have a human in the loop (e.g., for training and quality oversight), and a natural person or legal entity should always be legally accountable for the “decisions” of AI tools.
- Explainability and transparency: Corporations and other entities should always inform consumers when AI algorithms impact decisions, outcomes, products, or services. Corporations should also be transparent about how their algorithms function.
- Awareness and literacy: Governments, schools, and other stakeholders should promote public awareness of AI’s risks and benefits (e.g., as part of digital literacy).
- Sustainability: Governments should continuously monitor the environmental, economic, and social impacts of AI tools.
Individual governments and corporations, such as IBM and OpenAI, have also adopted ethical standards to mitigate adverse effects.
Ethics of AI for writing and content creation
Even with a variety of ethical standards in place, generative AI tools pose ongoing ethical challenges for writers and content creators. Many of these challenges, however, can be mitigated.
Privacy
As with any technology, there will always be a risk of data breaches that give hackers access to private information. This includes user account data (e.g., addresses and credit card numbers) and information that users put in prompts (e.g., names or autobiographical details in writing samples submitted for revision).
For example, ChatGPT saves the data in users’ prompts by default in order to further train the tool. ChatGPT labels this setting “Improve the model for everyone,” and users can disable it at any time. You can also protect your privacy by omitting personal details from generative AI prompts.
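As a rough illustration of that last point, here is a minimal Python sketch that scrubs common personal identifiers from a prompt before it is sent to an AI tool. The patterns and labels are illustrative assumptions, not an exhaustive safeguard, so prompts should still be reviewed manually:

```python
import re

# Illustrative patterns only; real personal data takes many more forms.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

print(redact("Revise this cover letter. Reach me at jane@example.com or 555-123-4567."))
# Output: Revise this cover letter. Reach me at [email removed] or [phone removed].
```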
Intellectual property
In some cases, intellectual property (e.g., photographs, digital art, or student essays) might be used as training data without the creator’s knowledge or consent.
This intellectual property could include photos that social media users post or artworks that digital artists share online. AI writing tools are also trained on written content scraped from the internet, often without professional writers’ knowledge or consent.
Plagiarism
Generative AI tools can lead to plagiarism when writers prompt them to generate entire documents from start to finish. Instructors and recruiters have become increasingly skilled at detecting AI writing and often use AI detectors to determine whether an essay or cover letter is someone’s own work.
To prevent this, use ChatGPT, Gemini, or other writing tools for smaller steps in the writing process (e.g., brainstorming or proofreading). Never prompt AI tools to write an entire draft, and always revise and edit AI outputs (e.g., a thesis statement or introduction paragraph).
Misinformation
Another ethical concern for writers involves misinformation in AI responses. When writers use AI for research, they might get information that isn’t factually correct. This occurs because AI tools are trained on content that developers scrape from the internet, which contains a great deal of misinformation.
AI users can combat this by verifying any details that AI generates against credible sources.
Bias
The internet content that trains AI models also includes stereotypes, which the models reproduce in their outputs via biased word choices and images. For example, in 2024, the Harvard Data Science Review reported that ChatGPT sometimes echoes gender and occupational biases when prompted to write recommendation letters.
To combat this bias, AI ethics experts recommend that training data include more content from historically underrepresented groups. AI users can also apply prompt engineering, which involves specifying which perspectives, cultures, genders, ages, religions, or disabilities the response should represent.
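For instance, here is a minimal sketch of a bias-aware prompt, using the openai Python library. The model name and prompt wording are illustrative assumptions, not a prescribed fix:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# A generic prompt inherits whatever defaults the training data encodes;
# this engineered prompt names the perspectives the response should reflect.
engineered_prompt = (
    "Write a recommendation letter for a software engineer. "
    "Use gender-neutral language, and avoid assumptions about the "
    "candidate's age, culture, religion, or physical abilities."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": engineered_prompt}],
)
print(response.choices[0].message.content)
```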
AI ethics and jobs
Generative AI tools also pose ethical risks for workers and job seekers, specifically through algorithmic bias and worker displacement.
Algorithmic bias
Algorithmic bias occurs when AI algorithms make decisions that systematically discriminate against groups of people. In job recruiting especially, algorithms have demonstrated gender bias both in selecting applicants and in targeting job postings to social media users.
For example, Amazon stopped using an experimental hiring algorithm after discovering that it prioritized resumes from men over those from women.
As a solution, AI providers have advocated for more diverse representation in training data sets and more human-based testing and monitoring.
Worker displacement
With so many AI tools available to write content, communicate with customers, and analyze data, some experts have warned that generative AI could lead to fewer jobs. In 2023, the McKinsey Global Institute predicted that 30 percent of working hours could be automated by AI by 2030.
McKinsey also predicted sharp declines in demand for customer service, retail, and office support roles but growth in occupations that require more education.
In response to AI’s impact on writing jobs, the Writers Guild of America went on strike for 148 days in 2023. The strike resulted in a contract that requires a minimum number of employees per writing team and prohibits film and television studios from requiring writers to use AI.
AI ethical issues and the environment
Generative AI requires vast amounts of natural resources, so the environmental costs are yet another aspect of AI ethics. Two of the major environmental concerns include the following:
- The data centers that run AI systems could significantly increase carbon emissions. For example, the International Energy Agency projects that by 2026, data centers will collectively consume as much electricity annually as Japan (more than 1,000 terawatt-hours per year).
- The MIT Technology Review reports that by 2030, the equipment that trains generative AI may produce almost 5 million tons of e-waste per year. This is a fraction (roughly 8 percent) of the 62 million tons of e-waste that the world produced in 2022, but it would exacerbate a concerning trend: e-waste has increased by 82 percent since 2010.
On the other hand, AI-powered carbon monitoring systems may also compel governments and organizations to take more steps to reduce carbon emissions.
Additionally, the UN’s Global E-waste Monitor 2024 reports that increasing the e-waste recycling rate to 60 percent (versus the current rate of 22 percent) could significantly reduce human exposure to e-waste contaminants.
Frequently asked questions about AI ethics
- How can you check if an image is AI-generated?
There are a few ways you can check if an image is AI-generated.
First, review the image for anything that doesn’t look quite right. AI-generated images often have distorted text, patterns, or human features (especially faces and hands).
Second, check the image’s metadata. Some AI image generators use telltale filenames or embed identifying information or a watermark in their images (see the Python sketch at the end of this answer).
Third, learn how AI image detectors work and use them to estimate the probability that an image was generated by AI.
And if you need help detecting texts generated by AI, QuillBot’s free AI Content Detector is one option.
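As a rough illustration of the metadata check, here is a minimal Python sketch using the Pillow library. The fields present vary by generator, and stripped metadata proves nothing either way, so treat this as one clue among several:

```python
from PIL import Image  # pip install pillow

def inspect_metadata(path: str) -> None:
    """Print metadata fields that may hint at an AI image generator."""
    img = Image.open(path)
    # Some generators (e.g., Stable Diffusion front ends) write their
    # settings into PNG text chunks, which Pillow exposes via img.info.
    for key, value in img.info.items():
        print(f"info: {key} = {value}")
    # EXIF data (common in JPEGs) can carry a software or creator tag.
    for tag_id, value in img.getexif().items():
        print(f"EXIF: {tag_id} = {value}")

inspect_metadata("suspect_image.png")  # hypothetical file path
```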
- What do teachers use to detect AI?
Teachers use a mix of strategies and tools to detect AI writing.
One method teachers use to detect AI is manual analysis. Teachers look at grammar, style, tone of voice, and the themes present in writing to see if it feels human.
Teachers also know how AI detectors work and how to use them to analyze writing. AI detectors check whether the qualities of a piece of writing (an essay, for example) more closely match human samples or AI samples (see the sketch below).
QuillBot’s free AI Content Detector is one of the tools that can help teachers detect AI.
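One signal that some detectors rely on is perplexity: how predictable a text is to a language model, with AI-generated text tending to score lower than human writing. Here is a minimal sketch of that idea using the Hugging Face transformers library and GPT-2. This is a simplified assumption about how detectors work; commercial tools combine many more signals:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable the text is to GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the input as its own labels yields the average
        # cross-entropy loss; exponentiating gives perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

print(perplexity("The mitochondria is the powerhouse of the cell."))
```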
- Is ChatGPT safe to use for school?
ChatGPT is safe to use for school when students follow these precautions:
- Stay informed about potential risks, such as privacy violations and plagiarism.
- Review and follow your instructor’s and/or school’s policies about generative AI (e.g., some schools require students to disclose when they’ve used ChatGPT for assignment help).
- Avoid sensitive personal information in prompts (e.g., your name and contact information).
- Verify facts provided by ChatGPT with credible sources.
- Use ChatGPT for small pieces of an essay (e.g., a hook or thesis statement) or to brainstorm ideas. Don’t prompt ChatGPT to generate entire drafts.
- Revise any content that ChatGPT generates to ensure the writing assignments you submit are in your own writing voice.
Another way to use ChatGPT safely for school is to run AI-assisted writing through QuillBot’s free AI Content Detector before you submit it.
- Is ChatGPT safe to use at work?
ChatGPT is safe to use at work when users follow these precautions:
- Only access ChatGPT through the official app or website (not from any third-party vendor).
- Stay informed about changes to OpenAI’s data and privacy policies.
- Avoid sensitive personal information and confidential company data in prompts (e.g., clients’ real names, contact information, or financial records).
- Verify facts provided by ChatGPT with credible sources.
Another way to use ChatGPT safely at work is to run AI-assisted writing through QuillBot’s free AI Content Detector before you share it.
- What are the ethics of AI art?
The ethics of AI art concern the ethical consequences of generative AI tools that make images and videos. Ethical issues that apply to AI art include the following:
- Bias and stereotyping: AI tools might generate images that echo the stereotypes that are represented in the data sets on which they’re trained.
- Intellectual property: AI tools for art are often trained with the digital works of human artists without their knowledge or consent.
- Deepfakes: AI tools can generate fake images of real humans (e.g., celebrities), which can be used to spread misinformation and harm people’s reputations.
- Job displacement: AI-generated art could lead to fewer jobs or business opportunities for human artists.