Having an AI policy can help shape the development of the technology — thereby increasing the chances it will be the ‘best thing ever to happen to humanity.’
How are you feeling about artificial intelligence?
Maybe the excitement you felt when ChatGPT and other generative AI algorithms burst onto the scene has died down? Perhaps you’ve found that GPT-4 and other AI algorithms are amazing at some tasks, but that AI still makes mistakes, hallucinates, and has bias and data-privacy issues? Often, using AI tools seems like more trouble than it’s worth, right?
Don’t be lulled. It has been just over a year since ChatGPT came out on November 30, 2022. Stephen Hawking was right when, nearly 10 years ago, he said, “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity.”
Advances in AI are happening, and more are coming. New advances will come from where we have been paying attention, such as new generative AI algorithms from OpenAI and other big tech companies, as well as from places less on our radar that could supply the missing pieces to complex puzzles and yield an order-of-magnitude gain: breakthroughs in quantum computing, in other types of AI, in places outside Silicon Valley, in robotics, or in some of the thousands of new AI startups around the world that have raised billions of dollars in growth capital.
This exponential rate of improvement in AI capability will lead to true artificial general intelligence (AGI, or superintelligence), with the ability to rapidly self-improve far beyond human understanding. While the exact future is difficult to predict, the general future in which we will have super-intelligent AI is predictable with near certainty.
We are starting to see how AI could be the best thing ever. In science, AlphaFold has made a millionfold breakthrough in protein structure prediction. In education, personalized tutors are becoming available for almost any type of learner. And in business, productivity gains have been demonstrated in many types of work. In the future, super-intelligent AI will be capable of creating a life of abundance and health for all of humanity.
AI applications will grow as the technology’s capability grows; most employees will be able to use AI to improve their job performance and satisfaction. A well-defined policy guiding AI usage and its integration into an organization’s culture and strategy is increasingly essential for organizational success. Without an AI policy, the use of AI will be uneven, and potential benefits will go unrealized across your organization. You’ve likely heard the saying “Professionals who don’t use AI won’t be replaced by AI; they will be replaced by professionals who do use AI.” The same can be said of organizations: Those not using AI will be replaced by organizations that do, and that have a use policy to ensure they use AI effectively.
Steps to Take
In developing a useful AI policy, a good place to start is with this set of questions:
- How does the use of AI align with our organization’s strategy and vision?
- How can the use of AI benefit our organization?
- How should employees use AI?
- What AI tools should employees use in their work and how do they access or acquire them?
- What training and other resources are needed?
- What are the ethical and legal considerations in our use of AI?
- How can we assess and mitigate the risks of AI?
- How are transparency and accountability issues addressed?
- Who makes decisions about the use of AI?
- How can our use of AI and our policy principles evolve and remain relevant?
The process of creating an AI-use policy is also critical. Most important is getting stakeholder input, especially from employees, as they are the people who must learn and follow the policy. Be transparent and emphasize communication. Keep the policy short so it will be read and internalized. Work to summarize it in a memorable set of AI principles. Finally, think of your AI policy as a flexible, living document that will help your employees and organization successfully adapt to future AI developments and resolve problems that come up related to the use of AI.
Developing an AI policy with these ideas in mind may take more time than a top-down approach, but it will pay dividends in a more meaningful and valuable result. An added benefit is that members of your organization will gain a foundational knowledge of AI and its potential, and be better able to figure out how to use it in ethical and practical ways. This enhanced familiarity with AI also carries a positive externality. To explain, let’s circle back to the other half of Hawking’s prediction: that AI might turn out to be the worst thing ever for humanity.
Of course, a powerful superintelligence whose “thinking” will be as understandable to humans as human intelligence is to chickens might bring severe unintended negative consequences. Or it may be used for the benefit of only a few. This is the “worst thing ever” risk Hawking warned us about. I am convinced that, given the incentives driving the development of AI, the best way for humanity to minimize the catastrophic downside risk of superintelligence is for most people in the world to understand AI and to be thinking and talking about how its development should be shaped to increase the likelihood of the good outcome. This broad knowledge base and engagement is critical to driving regulation of AI that otherwise will be crafted by a small group of tech-company leaders, politicians and technical experts.
An AI policy is not just a means to harness the power of AI for organizational success but also a key step toward a future where the promise of AI is realized for humanity. By working to embed an understanding and awareness of AI across the employees of your organization, you help foster an environment where ethical use, creative exploration and critical questioning of AI become part of the culture. This is vital not just for organizational performance but also for shaping a society where AI is developed and used in ways that are beneficial, equitable, and aligned with human values.
Jon Down is a strategy and entrepreneurship professor at the University of Portland’s Pamplin School of Business.