Created By: Terry Cangelosi | October 30, 2024

As AI adoption in the nonprofit sector grows, so does the need for greater transparency about how it's being used. Recent research, highlighted by The Nonprofit Times in the article "Nonprofits' Use Of AI Exceeds For-Profit Implementation," examined AI usage, transparency, and engagement in communications across 1,440 nonprofits and 1,500 end users. The findings reveal that 92% of nonprofits believe AI will enhance their engagement with end users, and 58% are using AI in their digital communications, surpassing the 47% adoption rate of for-profit B2C businesses.

The data also suggests a "transparency gap": 83% of nonprofit respondents believe they are being transparent about their AI usage, while only 38% of constituents, members, and partners (end users) agree. While there are many misconceptions about AI and data, those misconceptions alone cannot explain the delta. End users want more transparency about nonprofits' AI usage so they can be confident the organization is responsible, ethical, and safe with their data.

Whether your nonprofit is actively integrating AI into its systems or has already done so, below are three ways to help close the transparency gap.

1. Increase Responsible AI Usage

We've spoken at length about the importance of an AI Usage Policy and AI Governance as a starting point for transparent and responsible AI integration. A clear and supportive policy (with matching governance) that is understood, accepted, and followed internally is the foundation upon which transparency can be built, and it will increase responsible usage. These documents should answer any question an end user might ask about the organization's AI usage. Further, by taking time to increase your team's AI fluency, your staff will be better equipped to answer questions about how AI is used across the organization or in their own work.

2. Increase Security in AI Usage

A major concern surrounding AI usage is what happens to the data that users enter into the tools. To mitigate risks and keep that data safe, nonprofits must weigh personally identifiable information, location-specific regulatory obligations, and ethics. By researching how data is used and then aligning AI usage with internal data policies (including the AI Usage Policy), a nonprofit can increase security and better protect its end users' data. As a bonus, taking the time to find AI tools with security features that match your teams' needs will empower staff to maximize their usage.

3. Increase Transparency in AI Usage

Although "increasing transparency" may sound like an obvious solution to closing the transparency gap, there are practical steps nonprofits can take to be more open and clear about their AI usage. Publishing a public-facing Disclosure on AI Usage allows your nonprofit to outline the steps it takes to protect its users by using AI ethically, securely, and responsibly. A recent Bloomberg Law article points to disclosure laws recently passed in some states, provides considerations for creating the disclosure, and offers additional steps organizations can take to increase their AI transparency.

Taking these steps to increase responsible usage, security, and transparency is crucial for building trust with end users and bridging the transparency gap. Ethical and effective AI practices create an environment where users feel secure and confident that AI enhances mission-driven work while protecting their data. Bridging the transparency gap will allow AI technology to enable positive change. Interestingly, while nonprofits face this challenge, they are still ahead of for-profits: 94% of businesses claim they are being transparent with AI usage, while only 37% of customers agree.
This highlights an opportunity for nonprofits to lead the way in setting higher standards for AI transparency across all sectors.

At Orr Group, we're enthusiastic about the future of AI and hope to share that enthusiasm with our nonprofit partners. We are ready to assist your organization in brainstorming ways to seamlessly and safely integrate AI into your fundraising and other operational efforts. Contact us to learn how we can help elevate your organization to new heights.

Terry Cangelosi is a Senior Director and Head of Operations at Orr Group. Terry brings 10+ years of nonprofit operations experience to ensure the most efficient operations in Orr Group's workflows, technology, and infrastructure. Terry is a member of Orr Group's AI Taskforce.