Artificial Intelligence and Research: Ethics & Limitations of AI
Considering AI Usage
Before using AI for research or coursework, there are several things to consider, including the ethics and the limitations of these tools. This page outlines some general ethical issues and limitations that users should consider before using AI. PSU has also published guidelines and principles on the use of AI for coursework, research, and teaching.
Misinformation and AI Hallucinations
AI tools are known to produce inaccurate information, sometimes fabricating data, sources, and even complete citations. These fabrications, including full but nonexistent citations, are often referred to as 'AI hallucinations.' When using AI tools for research or information gathering, it is crucial to verify the accuracy of all information, claims, and generated sources. One way researchers can check a citation's validity is to copy the article title into Google Scholar or the library catalog. If no matching results are found, or if elements such as the author or journal title differ, the citation may be an AI hallucination.
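Part of this check can also be scripted. The sketch below, which assumes the Python requests package, searches the public Crossref API for a cited title so the results can be compared against the citation. The example title is made up, and Crossref is used here simply as one freely queryable bibliographic database; because databases have gaps, a missing match is a signal to verify manually rather than a firm conclusion.

```python
# Minimal sketch: look up a cited title in the public Crossref API.
# Assumes the third-party `requests` package; the example title is made up.
import requests

def find_candidate_matches(cited_title, rows=5):
    """Return (title, journal, year) for the closest Crossref matches."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    matches = []
    for item in resp.json()["message"]["items"]:
        title = (item.get("title") or ["(no title)"])[0]
        journal = (item.get("container-title") or ["(no journal)"])[0]
        year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
        matches.append((title, journal, year))
    return matches

if __name__ == "__main__":
    cited = "A purely hypothetical article title produced by a chatbot"
    for title, journal, year in find_candidate_matches(cited):
        print(f"{title} | {journal} | {year}")
    # If none of the results resembles the citation, or the author and journal
    # details differ, the citation may be an AI hallucination.
```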
Bias in AI
Researchers must recognize that AI outputs can exhibit gender, racial, cultural, and availability bias (where models favor readily available information). This bias stems from several sources: AI models are trained on vast datasets that often reflect existing societal biases and the disproportionate availability of certain information online, leading to skewed representation. Additionally, the human developers of these models introduce their own biases, unintentionally influencing the model's behavior. Users should be aware that any AI-generated content may be affected by these biases.
Scope of Training Data in AI
AI tools rely on the scope of their training data to answer questions and generate content. The scope of this data, including source materials and date ranges, can vary significantly. Users must understand the limitations of the data used by a specific tool to compile answers and generate content. For instance, some tools may only utilize open-access content or only content within specific date ranges, which can restrict their ability to fully address certain queries.
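When the documentation is unclear, one quick probe is to ask the tool directly about its own data scope and then cross-check the answer against the vendor's published documentation. The minimal sketch below uses the OpenAI Python client as one example; the package, model name, and questions are assumptions rather than a recommendation, and a model's self-report about its own training data can itself be inaccurate.

```python
# Minimal sketch: ask a chat model about the scope of its training data.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY environment variable;
# the model name is a placeholder, and the answer should be cross-checked
# against the vendor's documentation because self-reports can be wrong.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "What is the cutoff date of your training data, and does it "
                "include paywalled journal articles or only openly available "
                "web content?"
            ),
        }
    ],
)
print(response.choices[0].message.content)
```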
Environmental Impact of AI
The rapid growth of AI-powered technology has raised significant environmental concerns. The combined effects of land use, water consumption, and pollution from AI infrastructure harm biodiversity and habitats and contribute to climate change. Users should consider the following issues when utilizing AI tools:
- Increased Water Usage: Data centers, essential for AI operations, require substantial water for cooling. The sourcing of this water can strain local resources. Furthermore, the processing of raw materials for AI components, such as rare earth minerals, consumes significant amounts of water.
- Land for Data Center Infrastructure: The rapid expansion of AI necessitates the construction of larger data centers, consuming substantial land. This can strain habitats and local communities, impacting both land availability and the resources (power and water) required for data center operation.
- Energy Consumption and Carbon Dioxide Emissions: Running complex AI models and supporting data centers demand significant computational power and high energy consumption. Much of this energy is derived from fossil fuels, contributing to increased carbon dioxide emissions and accelerating climate change.
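To make the link between energy use and emissions concrete, the rough sketch below multiplies an assumed per-query energy cost by an assumed grid carbon intensity. Every figure is an illustrative placeholder, not a measurement; real values vary widely with the model, hardware, data center efficiency, and the local electricity mix.

```python
# Back-of-envelope sketch relating AI energy use to CO2 emissions.
# All numbers below are illustrative placeholders, not measured values.

queries_per_day = 1_000_000          # hypothetical query volume
energy_per_query_wh = 3.0            # assumed energy per query, watt-hours
grid_intensity_kg_per_kwh = 0.4      # assumed grid carbon intensity, kg CO2 per kWh

daily_energy_kwh = queries_per_day * energy_per_query_wh / 1000.0
daily_co2_kg = daily_energy_kwh * grid_intensity_kg_per_kwh

print(f"Estimated energy use: {daily_energy_kwh:,.0f} kWh/day")
print(f"Estimated emissions:  {daily_co2_kg:,.0f} kg CO2/day")
# With these placeholder inputs: 3,000 kWh/day and about 1,200 kg CO2/day.
# Actual figures depend heavily on the model, hardware, data center
# efficiency, and the carbon intensity of the local electricity supply.
```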
AI and Privacy
As with other internet tools, users should be cautious about the information they share with AI tools and should investigate how these tools track and collect user data. What data is collected, and how it is used, is often unclear. Before using any AI tool, it is advisable to review its privacy policy and user data documentation. If such information is not provided, users should consider not engaging with that tool.