Created By: Bobby Hunter
March 28, 2024

It's no mystery to my colleagues that I am a daily user of AI. On the one hand, I have AI helping me solve complex Excel functions. On the other, I have it generating quotes in Bea Arthur's voice to celebrate a clear to-do list. AI can be both productive and fun – yet I still feel a tinge of guilt writing this publicly. While I feel comfortable with these tools, do my colleagues feel comfortable with how I use them?

In November 2023, I attended the Microsoft Ignite conference, which launched much of the hype surrounding Microsoft Copilot. During one panel, "Generative AI – rise above the hype and build business value," Microsoft offered a comprehensive view of how business value is a product of your organizational readiness for AI tools. That readiness rests on developing governance for responsible usage and building a culture of AI fluency. The business value comes from staff's technical skills with specific tools and from an ethical, sustainable environment that supports AI usage.

In this article, I'll cover some of the steps Orr Group is taking to create a culture of ethics that you can adopt for your nonprofit organization.

Step 1: AI Usage Policy

Whether your AI Task Force is already set up or you are still putting the pieces together, it is essential to take the time to develop a robust and supportive AI Usage Policy for your employees. A good AI Usage Policy needs to be broad enough to flexibly manage this rapidly shifting technology, yet specific enough that your staff can apply it to their work. The policy needs to align with the processes in your IT and Data Security policies to ensure compliance and safety.

The policy helps those engaging with AI regularly feel confident that the organization supports exploring how AI can improve their projects and workflows. Overall, the policy must set the expectation that all employees will learn to use AI appropriately, ethically, and securely, while guiding them to the tools and training they will use.

Step 2: AI Governance

An AI Usage Policy is only as effective as its supporting governance infrastructure. If staff want to create a team ChatGPT account, how do they get access? What kind of data can be put into ChatGPT? How do we ensure only that kind of data is put into ChatGPT? Your staff and leadership will be asking these questions, and the key to deriving value from AI tools is minimizing the time spent asking them while ensuring everyone is still following protocols.

AI Governance is a transparent system describing the various roles employees will take in establishing:

- A strategy team that determines AI policies, expectations, roadmaps, and KPIs.
- A team that tests AI tools and identifies who needs to be involved in approving their implementation.
- A team that trains staff to use the implemented AI tools successfully and appropriately in their work.

Together, the AI Usage Policy and supporting AI Governance create a culture of responsibility and accountability that mitigates risk and maximizes the value of using AI. The scale of the governance is directly proportional to the scale of your rollout – the more influence AI will have in your organization, the tighter the governance will need to be to support it. The data question above is a good example of where governance can be backed by simple tooling, as in the sketch below.
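To make the data question concrete, here is a minimal Python sketch of a pre-submission screen that redacts restricted data before a prompt ever leaves your systems. This is a hypothetical illustration, not Orr Group's actual tooling or any vendor's API; the patterns shown (email, phone, SSN) and the `screen_prompt` name are stand-ins for whatever your own IT and Data Security policies restrict.

```python
import re

# Hypothetical patterns for data a policy might bar from external AI tools.
# A real deployment would derive these rules from your IT and Data Security
# policies, not from this illustrative list.
BLOCKED_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> str:
    """Redact policy-restricted data before a prompt leaves the organization."""
    for label, pattern in BLOCKED_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Follow up with donor Jane Roe at jane.roe@example.org or 212-555-0147."
    print(screen_prompt(raw))
    # Follow up with donor Jane Roe at [REDACTED email] or [REDACTED phone].
```

A screen like this is most useful when it sits in the path staff already use to reach approved AI tools, such as an internal gateway, so that compliance does not depend on everyone remembering the policy mid-task.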
Step 3: AI Fluency

While AI literacy means that your staff knows what a given AI tool is and what it can do, AI fluency means that they can strategize how to maximize its value and minimize its risks. A fluent employee does not have to be a regular user of AI at all – they can contribute effectively by identifying opportunities to use AI in projects, bridging gaps in the organization's governance, or flagging situations where others aren't using AI appropriately. Similarly, your organization cannot simply rely on a handful of frequent users to raise overall staff fluency – the accessibility of these tools will otherwise result in Shadow AI usage, where you cannot control where your business data goes or how AI interacts with it. Fluency is a comprehensive measure of how prepared every staff member is to contribute to organizational AI readiness.

Here are some ideas on how to build AI fluency at your nonprofit organization:

- Start in collaboration with your DEIB initiatives. The principles behind these steps can align with broader Diversity, Equity, Inclusion, and Belonging (DEIB) initiatives your organization is already taking. A good starting point is the values and principles shared in UNESCO's 2021 Recommendation on the Ethics of Artificial Intelligence. While the recommendation is focused on international governance, its emphasis on fairness, transparency, safety, and accountability is concrete enough to implement at the organizational level.

- Bias training is a prerequisite for valuable AI inputs and outputs. Understanding the signs and causes of bias and discrimination tied to AI usage comes from having foundational bias training in the first place. If your staff can't identify biases in their daily interactions, they won't be able to identify them in the data and language an AI tool works with. You cannot rely on an AI tool to do that work for you; giving it a biased input will give you a biased output.

- Transparency about your AI usage isn't enough. While it is everyone's responsibility to disclose their AI usage, you must also be able to explain how AI is supporting your work. Staff need to be able to explain how an AI tool takes data in, how that data is analyzed, and why the tool arrives at a given output. In addition, everyone must understand how the data they work with is secured along the way. All of this ties back to how your governance and training can support your staff – be proactive about providing language and mentorship so staff can effectively disclose, and comprehensively understand, how AI is part of their work process.

The more fluent your staff become with AI, the faster they can navigate these situations and realize the ROI these tools promise.

Elevate Your Nonprofit With Orr Group's AI Consulting

Make no mistake – your nonprofit can accomplish this work. You can address these steps piece by piece, as your organization requires them for the AI tools you wish to use. But we also understand that AI is exciting, and, like us, maybe you want to dive in right away.

At Orr Group, we're enthusiastic about the future of AI and hope to share that enthusiasm with our nonprofit partners. We are ready to assist your organization in brainstorming ways to seamlessly and safely integrate AI into your fundraising and other operational efforts. Contact us to learn how we can help elevate your organization to new heights.

Bobby Hunter is a Senior Associate Director supporting Operations at Orr Group. Bobby is responsible for providing leadership and oversight of the firm's use of technology and internal systems to ensure maximum efficiency and effectiveness. Bobby is a member of Orr Group's AI Taskforce.