Security for AI

With careful attention to security, you can protect your charity’s data, donors, and beneficiaries while still reaping the benefits of AI.

AI tools often require you to input data for them to work effectively. However, sharing sensitive or confidential information with AI tools can pose significant security risks. Some tools store user input, which could later be exposed in a data breach or misused.

As a charity or community organisation, if you store personally identifiable information (PII), it is essential to comply with GDPR and other relevant regulations. Ensuring your organisation meets these standards is your responsibility. Like any other tool, AI carries inherent risks, especially when handling sensitive data. Always verify outputs carefully when sensitive information is involved to maintain compliance and protect privacy.

Example: If you input donor names, contact details, or sensitive financial data into an AI platform, the platform may store that data, and you may lose control over how it is used or who can access it.

What to do:

  • Avoid sharing sensitive information: Never input confidential data into AI tools unless they explicitly guarantee compliance with data protection laws.
  • Use secure platforms: Choose AI tools from trusted providers that have clear data privacy policies and strong security measures.
  • Educate your team: Ensure staff and volunteers understand the risks of sharing sensitive information and know how to use AI responsibly.
  • Review privacy policies: Check how AI tools store and use your data before integrating them into your processes. You can usually find these on their website.
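One practical way to act on the first point is to strip obvious personal details from text before pasting it into an AI tool. The short Python sketch below illustrates the idea by redacting email addresses and UK-style phone numbers with simple patterns; the patterns and placeholder labels are illustrative assumptions, not a complete PII detector, and names or other identifiers would still need a manual check.

```python
import re

# Illustrative patterns only: real PII detection needs a dedicated
# tool and human review before anything is shared with an AI service.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+44\s?\d{4}|0\d{4})\s?\d{6}\b")

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

message = "Donor Jane Smith (jane.smith@example.org, 07700 900123) pledged £500."
print(redact(message))
# The donor's name is NOT caught by these patterns, so a manual
# check is still needed before sharing the text with an AI tool.
```

Even a basic step like this reduces what an AI provider ever sees, which is a simpler safeguard than relying on the provider's storage and privacy practices alone.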