
Save The Data: Keeping Data Safe While Using AI

Created By: Terry Cangelosi
July 17, 2024

As we've worked with our nonprofit partners on responsibly integrating AI into their organizations, data security and privacy have been constant concerns. Training AI on your data can maximize its impact on your organization, yet the more data you train AI on, the more scrutiny is required to minimize risks. We've shared in the past the importance of an AI Usage Policy, AI Governance, and Disclosure of AI Usage, and data is a consideration within each of these. So, how can you ensure that your organization's data is secure while using AI, and what can be done to mitigate the risks?

Every organization has different policies, practices, and requirements when it comes to handling its private data. Depending on the organization's mission, different regulatory obligations, such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), may apply. No two organizations will have exactly the same needs, but every organization can benefit from taking steps to protect its data when it comes to AI. Below are three considerations to help organizations keep their data safe. While not exhaustive or definitive, we hope they provide guidance and inspiration for any organization getting started with securing its data.

Understand How Data is Being Used

Many of us are used to accepting Terms & Conditions pop-ups without reading them, but there is good reason to pause before doing the same with AI tools. It is important to understand how the data entered into AI tools is being used, which means reading and reviewing the AI providers' terms and conditions, privacy policies, and data agreements. Some of the questions the Terms & Conditions can answer are:

  • How will the AI provider store, process, and transfer your data?
  • How will the AI provider use your data for their own purposes? Will they share, sell, or disclose your data to third parties? Will they use your data to improve their AI models or services? Do you have the ability to toggle that off?
  • How long will the AI provider retain your data? Will they delete it after a certain period or upon your request?

Understanding how the AI provider uses data will help an organization assess the risks and benefits of its AI tools or services. In some cases, it is possible to adjust available settings to limit data training, usage, and storage (here's how in ChatGPT), but those settings do not mitigate all risks. Just as you may trust Microsoft or Google to protect your data in their email platforms, you are trusting AI companies to protect your data in their platforms as well; it's up to you to evaluate the risk for each app.

Adhere to Internal Policies

Although AI tools offer new ways to use and distribute data, many organizations already have data policies in place. It is important to review those existing policies to understand how and when specific data can be used, and to update them regularly to fill identified gaps and keep up with the changing AI and data landscape. Policies that may cover data considerations and practices include the data security policy, the data governance framework, and the data classification policy, in addition to the data references in the AI usage policy and AI disclosure.

Organizations with these policies in place are better able to keep their data safe and secure, even as they integrate AI into their systems. If you read the above and don't have any of these in place, the Centre for Information Policy Leadership recently published a robust white paper that explains the importance of a holistic data strategy and maps out how to create one.

Anonymize Data

When all else fails, if data policies are non-existent and the tools' terms & conditions fall short of mitigating concern, don't enter sensitive information into AI tools. With well-crafted prompts, you can get strong results from generative AI without putting organizational data at risk. This takes three steps: identify sensitive information, replace it with anonymized data, then review and double-check before submitting. Some good practices include the following (a brief sketch of what this can look like appears after the list):

  • Substitute Names or Placeholders: Replace real names with pseudonyms or generic terms.
  • Generalize Specific Details: Replace specific details with more general information.
  • Aggregate Data: Instead of discussing individual donations, use aggregated data.
  • Be Aware of Small Data Sets: Even when the data appears anonymous, make sure that simple deduction won't reveal who or what it refers to.
  • Remove Contextual Information: Work with a spreadsheet that only has the numbers you need to analyze, with no labels or other textual information. Keep each set of data in a separate chart.
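To make these practices concrete, here is a minimal sketch in Python of anonymizing and aggregating donor records before anything is pasted into an AI prompt. The field names, tier threshold, and sample data are all hypothetical; adapt them to your own records. The re-identification mapping is kept locally and never sent to the tool.

```python
# Hypothetical sketch: anonymize and aggregate donor records before
# building an AI prompt. Field names and thresholds are illustrative.

from collections import defaultdict

def anonymize_donors(records):
    """Replace real donor names with placeholders (Donor 1, Donor 2, ...).
    The mapping stays on your machine so results can be re-identified offline."""
    mapping = {}      # real name -> placeholder; never included in the prompt
    anonymized = []
    for record in records:
        name = record["name"]
        if name not in mapping:
            mapping[name] = f"Donor {len(mapping) + 1}"
        anonymized.append({**record, "name": mapping[name]})
    return anonymized, mapping

def aggregate_by_tier(records, major_gift_threshold=10_000):
    """Roll individual gifts up into giving tiers so that no single
    donation amount is exposed in the prompt."""
    tiers = defaultdict(lambda: {"count": 0, "total": 0})
    for record in records:
        tier = "major" if record["amount"] >= major_gift_threshold else "general"
        tiers[tier]["count"] += 1
        tiers[tier]["total"] += record["amount"]
    return dict(tiers)

if __name__ == "__main__":
    donors = [  # sample data, not real donors
        {"name": "Jane Smith", "amount": 25_000},
        {"name": "John Doe", "amount": 500},
        {"name": "Jane Smith", "amount": 1_200},
    ]
    safe_records, local_mapping = anonymize_donors(donors)
    summary = aggregate_by_tier(donors)
    # Only safe_records and summary would go into a prompt;
    # local_mapping stays on your machine.
    print(safe_records)
    print(summary)
```

Note that even aggregated figures can expose someone if a tier contains only one donor, so check the counts before sharing, per the small-data-set caution above.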

Even if you have a clear data security policy and have accepted the risks of using the tool, it is best ethical practice to input only the data necessary to achieve the results you need. Data anonymization supports that practice: it helps protect the privacy and rights of data subjects and reduces the risk of data leaks, misuse, or reuse.

Ensuring Data Security and Trust in AI Usage

Keeping data safe while using AI is not only a matter of security, but also a matter of responsibility, accountability, and trust. As with all things surrounding sensitive company and personal data, it is advisable to consult with legal and IT experts before implementing AI solutions or policies. Nonprofit organizations are responsible for maintaining the trust of those they serve, and by keeping their data secure, they can enhance their reputation while advancing their mission, vision, values, and goals. When your staff is confident that they are using AI tools in an approved and secure manner, they can focus on maximizing the benefits of AI in their workflows.

At Orr Group, we’re enthusiastic about the future of AI and hope to share that enthusiasm with our nonprofit partners. We are ready to assist your organization in brainstorming ways to seamlessly and safely integrate AI into your fundraising and other operational efforts. Contact us to learn how we can help elevate your organization to new heights.


Terry Cangelosi is a Senior Director and Head of Operations at Orr Group. Terry brings 10+ years of nonprofit operations experience to ensure the most efficient operations in Orr Group’s workflows, technology, and infrastructure. Terry is a member of Orr Group’s AI Taskforce.
