“I Used AI On This” – How To Clear The Air On AI Usage

Created By: Terry Cangelosi
May 23, 2024

I used AI on this article. I hope you enjoy reading it!

You know what I mean when I say that, right? AI was used, by me, on this article. I went to AI, then used it, and now this article is done. Am I talking in circles and not giving you a straight answer? Yes. But let me ask you: what do you think I meant when I said, “I used AI on this”?

  • Are you assuming I used it the same way that you most recently used it?
  • What percentage of this article was original thought vs. AI-generated?
  • Did I even come up with the idea, or did AI literally write every word and I just copied and pasted the output here?
  • Did I fact-check anything? Did I run any of the material through a plagiarism checker?

The point is that if I don’t define the “how” when I say, “I used AI on this,” you have no idea what steps I took, what considerations I made, or the way I used it. And while I might be able to answer the question when you ask, there are proactive steps that any organization can take right now (for little or no cost) to define the “how” both internally and externally.

We’ve written in the past about the importance of an AI Usage Policy and AI Governance, which can help define “how” your organization mitigates bias and enhances transparency while addressing ethical considerations and privacy concerns in your staff’s AI usage. These practices ensure staff understand the parameters around when they can use AI in their work and strengthen their ability to explain that usage (read more on the essential concept of Explainable AI). Building on those same implemented policies and governance (which should be tailored to each organization’s specific preferences and needs), a third way to define the “how,” this time for those outside the organization, is through a Disclosure of AI Usage.

Why Consider Disclosing AI Usage?

You may have seen some of the high-profile news stories involving disclosure, including Sports Illustrated admitting after the fact that it used AI to generate articles (and writers). There also appears to be public support for this transparency: in one recent study by Zetwork, 68% of respondents said that companies should voluntarily disclose their AI use. Legislation is also in the works across the United States, tracked by the National Conference of State Legislatures (NCSL), that could impact broader AI policies and, in many specific cases, require public disclosure (use Ctrl+F to search for “disclosure” on the NCSL page to see what I mean).

A Disclosure of AI Usage is a way for your organization to take its already-defined internal policies, practices, and ethical considerations and make them publicly available to your donors, members, and visitors. It can show that your organization is AI Fluent and follows an explainable AI framework. As a bonus, when one of your members asks “how” you use AI, you can offer them peace of mind while directing them to the steps your staff takes to keep their AI usage aligned with the values of the organization.

Clearing The Air On AI Usage In Nonprofit Work

Finally, to answer the question of “how I used AI on this article”: perhaps ironically, none of the above is AI-generated. Rather, in alignment with our internal policies and public disclosure, I asked ChatGPT to come up with “nonprofit-specific examples of assumptions people might make about AI usage if the ‘how’ isn’t defined.” Note: the examples below are unedited, but they were human-reviewed to ensure they are appropriate for this article.

  1. AI in Donor Profiling: Without clear internal guidelines and transparent communication about how AI analyzes donor data, stakeholders might suspect invasive or unethical data practices, potentially leading to a loss of trust and donor support.
  2. AI Usage in Reporting: If a nonprofit does not clearly explain how AI contributes to creating impact reports, external parties might doubt the accuracy or integrity of the data presented, questioning the organization’s effectiveness and honesty.
  3. AI in Decision Making: Without defining the “how,” if AI is used in decision-making processes, stakeholders might assume that important decisions are made without human oversight or ethical consideration, leading to concerns over accountability.
  4. AI for Research & Development: Without transparently defining how AI aids in research within the nonprofit, there might be skepticism about the novelty or reliability of the research outcomes, possibly impacting funding and collaboration opportunities.

Nonprofits that take a proactive approach to disclosing their AI usage – and affirm that human oversight was maintained throughout – can position themselves as adaptive, transparent organizations pursuing creative and forward-thinking solutions to further their impact.

At Orr Group, we’re enthusiastic about the future of AI and hope to share that enthusiasm with our nonprofit partners. We are ready to assist your organization in brainstorming ways to seamlessly and safely integrate AI into your fundraising and other operational efforts. Contact us to learn how we can help elevate your organization to new heights.


Terry Cangelosi is a Senior Director and Head of Operations at Orr Group. Terry brings 10+ years of nonprofit operations experience to ensuring efficiency across Orr Group’s workflows, technology, and infrastructure. Terry is a member of Orr Group’s AI Taskforce.
