HR departments need to be vigilant about ethical use of AI technology

Speaking at the HR Technology Conference & Exposition in Las Vegas, Kerry Wang, co-founder and CEO of Searchlight, said employers should weigh ethical considerations when using artificial intelligence (AI) technology in the workplace.

HR and business leaders are caught in a conflict between the competitive advantage technology can provide and concerns about its negative impacts, such as unintended bias, Wang said. AI is being used to help employers recruit and to measure the quality of hires.

“Imagine implementing an AI tool that prescreens job applicants,” she said. “Recruiters are happy because they spend less time reviewing resumes. Candidates are happy because they get responses faster. Then one day you realize the technology is recommending more men than women for interviews. Do you keep using the technology, or do you shelve it?”

Something similar happened when Amazon built an experimental AI recruiting tool in 2015. Amazon scrapped that particular system after it showed bias against women, but since then there has been a surge of vendors pitching AI to HR functions, from sourcing and screening to predicting turnover and enhancing workforce analytics.

“AI is everywhere, like it or not,” Wang said. “But AI is only as good as the rules that program it, and machine learning is only as good as the data it relies on.”

AI, she explained, includes any computer system that mimics human intelligence to complete a task. For example, a simple chatbot that follows an algorithm (a set of rules or lines of code) qualifies as AI. More complex AI uses machine learning.

[SHRM members-only HR Q&A: What is artificial intelligence and how is it used in the workplace?]

“That’s where modeling comes in,” she said. “Modeling means finding patterns in huge datasets and encoding those patterns as rules. Give an AI 10 million examples of how people respond to a greeting, and it will learn a lot about how to respond to that greeting.”

Wang said that AI is not a “silver bullet” but rather an aid to human decision-making. “AI can make us smarter and more efficient. Research shows that by taking on more tactical tasks, AI can free people to do more strategic work.”

Bringing AI to HR hinges on abundance and scarcity, said Ann Watson, senior vice president of people and culture at Verana Health in San Francisco, also speaking at the conference.

“How can I do more?” she asked. “How can we be more productive? How can we best grow our talent pipeline? How can we be more diverse and more inclusive? Abundance means having more time to do the things you want to do.”

Maisha Gray-Diggs, VP of Global Talent Acquisition at Eventbrite, said her team uses AI for recruitment and onboarding.

“The advantage of AI for me is that it gives me an edge and saves time and resources,” she said at the conference. “We don’t want AI to replace humans; we want to use AI to augment humans. HR needs to do things smarter, not just do more.”

Ethical use of AI

Wang said there are two main concerns regarding the use of AI in employment: privacy and bias.

“I am very uncomfortable with the idea of monitoring employees,” Watson said. “I think AI should be about finding ways to do more, not ways to catch people doing less.”

She gave the example of technology that can predict whether an employee is about to quit based on their behavior at work. Research has shown, however, that such tools only work if employees don’t know they are there.

“For that to work, it has to be kept secret from the employees,” she said.

Wang said that when Searchlight works with a client, the company first sends employees a communication detailing what’s happening, why it’s happening, and what to expect.

“When we do this, 70 to 80 percent of our employees opt in to data collection,” she said. “If you give people a choice and explain the benefits of using AI, the majority will agree to opt-in.”

Another major ethical issue that arises when considering AI in the workplace is its potential to be discriminatory. Bias can be intentionally or unintentionally created in technology.

“There are already biases in human judgment,” Wang said. “The potential for biased technology is there. But if we recognize it and make sure the data we feed into the models we use to make decisions is as holistic as possible, we will be in a better place.”

Wang noted a first-of-its-kind law taking effect in New York City on January 1, 2023. The law prohibits employers from using AI and algorithm-based technology to recruit, hire, or promote without first auditing those tools for bias.

“All of us who use these tools need to commit to asking questions to make sure we’re not discriminating,” Gray-Diggs said. “As we move faster, we need to spend more time and do more research to understand the technology. Think about women and other underrepresented people who haven’t marketed themselves as strongly in the hiring process; an AI tool might weed them out before they even get a chance.”

Choosing an AI vendor

Wang said that before approaching an AI vendor, HR should identify the problems the business really cares about solving. “It’s hard enough to advocate for new technology; it’s even harder to convince leaders about issues they don’t care about,” she said.

When working with vendors, ask how they think about bias in their systems, she explained. “Can you talk to me about how you use the data, how you train your models, and how you validate that you aren’t doing any harm? I like when vendors can answer those questions, because it shows that we are philosophically aligned.”

Watson said HR must ask the hard questions. “Push harder than you feel comfortable pushing. If you need to find someone else in your organization who understands the technology better, bring that person into the conversation.”

Gray-Diggs agreed, saying, “If HR feels uneasy or lacks depth in evaluating new products, bring in data science and IT. Bring in business leaders to get things moving.”

According to Gray-Diggs, the composition of the vendor’s own team can be telling. “When teams present products to me and they are a diverse team, I can see they are already thinking about potential bias and discrimination.”

“The pilot program is your friend,” Watson said. “You can learn from test scenarios and get buy-in in a deliberate way before making a big change across your organization.”

Wang added that pilot programs for AI tools are useful, but only with a sufficient sample size. “Consider a pilot of at least 100 people; otherwise the patterns in the model’s data won’t be as accurate,” she warned.