ATD Blog
August 2, 2023
How do we know whether our shiny AI tools are being used ethically? When someone asks me this question, my response goes something like this: We can’t make those decisions yet, because most of us are “playing catch-up” with the technology. If we don’t wise up soon, other people will make those decisions for us. And we may not like the world they make. In this post, I explore the urgency of ethical AI and how it affects talent development.
Conservationist Aldo Leopold wrote that “ethics is doing the right thing, even when no one else is watching.” His definition is more fitting than ever in the world of AI—because sometimes, no one really is watching.
In science and engineering, the term black box refers to any complex device for which we know the inputs and outputs, but not the inner workings. For example, to many of us, our streaming service is a black box. We push the buttons and select the program we wish to watch. We neither know nor care how it works.
To behaviorists, the human mind is a black box. It is the most complex object we know of, and one we still only partly understand. And yet, being the remarkable creatures we are, we dare to build machines intended to think the way we do. Never mind that we haven't quite figured out how we do it ourselves!
B.F. Skinner put it like this: “The real question is not whether machines think but whether men do. The mystery which surrounds a thinking machine already surrounds a thinking man.”
In Rebooting AI, cognitive scientist Gary Marcus summarizes the problem: “There are known knowns and known unknowns, but what we should be worried about most is the unknown unknowns.”
As talent development professionals, we are responsible for advocating for the ethical deployment of these powerful tools. Here’s how.
Leaders like Marcus discuss the difference between “opaque” AI and “transparent” AI. With transparency, we can understand how the machine is trained to make decisions. This allows us to challenge the underlying assumptions that led to those decisions.
Building transparent AI will not be easy, and we must be the ones to lead the way. If your company is implementing AI to recruit talent, train employees, evaluate performance, or anything else related to talent development, you are the customer. Insist on a thorough explanation of how the AI model will make decisions. The answer, or lack of one, might surprise you.
As we enter the age of AI, we must approach any shiny new AI program with skepticism, even if many other companies already use it. You may be the only one asking the hard questions, so you need to be up to the task. Here are a few ideas to get you started.
This might sound obvious, but many buyers of AI solutions focus only on the desired result, such as finding the best candidate for a job or processing routine customer complaints. Truly defining the problem means describing the decisions the AI will make to identify next steps, the criteria it will use to make those decisions, and the source of the underlying data.
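To make that concrete, here is a minimal sketch of what a written definition might look like, expressed in Python purely for illustration. The field names and example values are hypothetical; substitute whatever your own vendor and use case actually involve.

```python
# Hypothetical sketch of a "definition" checklist, captured as a data structure so it can
# be reviewed alongside the purchase decision. All field names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class AIDecisionDefinition:
    decision: str            # the decision the AI will make (e.g., "shortlist applicants")
    criteria: list[str]      # the criteria it will use to make that decision
    data_sources: list[str]  # where the underlying training and input data come from
    open_questions: list[str] = field(default_factory=list)  # anything the vendor has not explained

definition = AIDecisionDefinition(
    decision="Shortlist applicants for interview",
    criteria=["skills match", "years of experience"],
    data_sources=["past hiring records", "resume text"],
    open_questions=["How were past hiring records audited for bias?"],
)
```

If the vendor cannot help you fill in every field, you have found your first hard question to ask.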
AIs are programmed to make the best decision every time. Human beings seldom have that luxury. Comparing the “best” decision with the “next best” is one way to identify flaws in the AI’s underlying logic or biases that fallible humans have accidentally built into the machines.
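Here is one hypothetical way to operationalize that comparison. The candidate names, the scores, and the 0.05 margin are invented for illustration; in practice you would plug in whatever confidence values your AI tool exposes.

```python
# Hypothetical illustration: compare a model's "best" answer with its "next best."
# The scores stand in for whatever confidence values the vendor's tool reports;
# the candidate names and the 0.05 threshold are made up for this example.

def flag_close_calls(scores: dict[str, float], margin: float = 0.05) -> list[str]:
    """Report the top two options and warn when they are nearly tied."""
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    (best, best_score), (runner_up, runner_score) = ranked[0], ranked[1]
    notes = [f"Best: {best} ({best_score:.2f}), next best: {runner_up} ({runner_score:.2f})"]
    if best_score - runner_score < margin:
        notes.append("Close call: ask the vendor which criteria separated these two.")
    return notes

# Example: screening scores for three job candidates
for line in flag_close_calls({"Candidate A": 0.81, "Candidate B": 0.79, "Candidate C": 0.42}):
    print(line)
```

When the gap between first and second place is razor thin, the interesting question is not which option won, but why.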
The purpose of such testing is not merely to see whether the machine returns a consistent result; it is to determine whether that result remains true to the original intent.
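A minimal sketch of such a consistency check might look like the following, assuming the tool exposes some scoring function you can call repeatedly; the name score_resume, the inputs, and the pass/fail tolerance are placeholders invented for the example.

```python
# Hypothetical illustration: a simple consistency check. "score_resume" stands in for
# whatever scoring function or API the AI tool exposes; the tolerance is arbitrary.

def consistency_check(score_resume, resume_text: str, variants: list[str],
                      tolerance: float = 0.02) -> bool:
    """Re-score lightly reworded versions of the same input and flag drift beyond tolerance."""
    baseline = score_resume(resume_text)
    for variant in variants:
        drift = abs(score_resume(variant) - baseline)
        if drift > tolerance:
            print(f"Inconsistent result (drift {drift:.2f}): "
                  "does this still reflect the original intent?")
            return False
    return True

# Example with a stand-in scorer; a real deployment would call the vendor's tool here.
consistency_check(lambda text: 0.75, "original resume text", ["same resume, reworded"])
```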
While the age of AI presents new challenges, the basics remain the same. Ethical behavior is always up to us—all of us and each of us. So let’s begin the way we humans have always learned about new tools. Ask:
What is this?
How can I use it?
What could go wrong?
What can I do to protect myself and others?
Where do I begin?
This post was adapted from an excerpt of Margie's book, AI in Talent Development. For more information and resources, check out ATD's AI resource page.