Limitations of AI

While AI is a powerful tool, it’s not perfect. You should be aware of its limitations so you can use it effectively and responsibly.

Hallucinations

AI models sometimes generate incorrect or fabricated information, often referred to as "hallucinations." This happens because the model doesn't know the truth—it only predicts what seems likely based on patterns in its training data.

The model may also have been trained on source material that was incorrect or unverified, or it may lack critical context because it cannot access newer data.

Example: You might ask the AI to write a historical overview of your charity's founding year, and it could confidently state inaccurate dates or describe events that never happened.

What to do:
  • Verify information: Always double-check facts, especially for content like donor reports, press releases or grant applications.
  • Be cautious with sensitive tasks: Don’t rely on AI for critical areas like legal or medical advice without expert review.
  • Start with trusted content: Find trusted, verified source material first, share it with the AI, then ask it to complete the task using that material as its starting point (see the sketch below).
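
If someone on your team is comfortable with a little code, the same grounding approach works when calling an AI model programmatically. Here is a minimal sketch in Python, assuming the OpenAI Python SDK (pip install openai) and an API key in the OPENAI_API_KEY environment variable; the model name and the source text are illustrative placeholders only.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Verified source material you gathered yourself, e.g. from your
# charity's annual report. This text is a hypothetical placeholder.
source_material = """
Founded in 1998, the charity has supported over 12,000 families.
Its flagship programme, Meals Together, launched in 2005.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using ONLY the source material supplied by the user. "
                "If the answer is not in the material, say you don't know."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Source material:\n{source_material}\n"
                "Task: Write a short historical overview of our founding."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

Asking the model to admit when the material doesn't contain the answer is what discourages it from papering over gaps with plausible-sounding inventions.
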
Context Loss

In conversations, AI models may "forget" earlier parts of the discussion, especially as the conversation gets longer. This can lead to irrelevant or repetitive answers.

Example: You might explain your charity’s mission early in a session, but later the AI might act as though it doesn’t remember, giving generic advice instead of tailored responses.

What to do:
  • Provide reminders: If the AI seems to lose track, restate the key points or reintroduce the context. Copying and pasting the last response you were happy with back into the chat can help.
  • Break tasks into smaller parts: Instead of asking for a long report in one go, split it into sections. For instance, ask “Can you write the introduction?” and, once you’re happy with it, “Now move on to the conclusion.” (A sketch of both tips follows below.)
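
For those calling a model from code, both tips can be combined by keeping the key context in a running message history that is re-sent with every request. Below is a minimal sketch, again assuming the OpenAI Python SDK; the mission statement and model name are illustrative placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The key context you never want the model to "forget" (placeholder text).
mission = "We provide free after-school tutoring to children in rural areas."

# Re-sending this system message with every call is the programmatic
# equivalent of restating your context in the chat window.
history = [{"role": "system", "content": f"Our charity's mission: {mission}"}]

def ask(question: str) -> str:
    """Send the question with the full history and remember the reply."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Breaking a long report into smaller tasks, one call per section.
print(ask("Write the introduction for our annual report."))
print(ask("Now write the conclusion."))
```
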
Bias in Data

AI models learn from the data they are trained on, which can include biases from the real world. This means the AI might unintentionally reflect or even amplify stereotypes or inequalities.

Example: When generating outreach material, the AI might use language or images that favor one demographic over others if its training data reflects similar biases.

What to do:
  • Be inclusive in prompts: Specify inclusivity when crafting content. For example, “Write a volunteer recruitment ad that appeals to people of all ages, genders and cultural backgrounds.”
  • Audit outputs: Review AI-generated materials for fairness and appropriateness before using them publicly.
  • Stay aware: Understand that biases may exist, and seek input from diverse team members to ensure outputs align with your charity’s values.

Lack of Real-Time Knowledge

Most AI models are trained on data that may not include the most recent events, developments or trends. This can limit their usefulness for time-sensitive tasks.

Example: If your charity wants to reference a recent disaster in a fundraising email, the AI might not know about it.

What to do:
  • Provide the latest context: Share up-to-date information in your prompt. For example, “A recent weather event affected 5,000 families in our region. Can you help us write an appeal to support them?” (see the sketch after this list).
  • Supplement with research: Combine AI-generated content with your own research to ensure accuracy and timeliness.
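
Programmatically, supplying the latest context simply means putting your own verified, up-to-date facts into the prompt. A final minimal sketch follows, with the same assumptions as above; the event details are hypothetical placeholders to replace with facts from your own research.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Up-to-date facts the model cannot know on its own (placeholder text).
recent_facts = (
    "A severe storm this week affected 5,000 families in our region; "
    "emergency shelters are open in two nearby towns."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": (
                f"Current situation, verified today: {recent_facts}\n"
                "Please write a short fundraising appeal to support these families."
            ),
        }
    ],
)
print(response.choices[0].message.content)
```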