As AI continues to revolutionize various industries, its application in HR raises crucial ethical questions. Join Kathleen Jinkerson, Vice President of HR & Total Rewards Solutions at The Talent Company, as she explores the transformative potential of AI in HR. She highlights how AI can be leveraged to improve employee experiences and achieve organizational objectives while addressing the ethical challenges it presents. Kathleen also emphasizes transparency, bias mitigation, and change management. Join the conversation to explore how AI can reshape HR strategies, enhance efficiency, and drive meaningful results in your organization.
—
I would love to take some time to introduce our hosts. We have Howard Nizots with us. He's a seasoned Compensation Advisor and Strategic HR Consultant with over 30 years of experience. We also have Char Miller with us. She's a Strategy Skills Consultant at the Strategic Thinking Institute, and we do some work together here at Comp Team. I'm glad to have Char. She has expertise in talent management strategies, as well as career and entrepreneurial coaching, and she guides a lot of people through transformations.
I would like to introduce you to Kathleen Jinkerson. We're going to dive into the topic of the ethical use of AI in HR. As Vice President of HR & Total Rewards Solutions at The Talent Company, Kathleen brings a treasure trove of experience in human resources and talent management. She has been celebrated for her dynamic leadership in enhancing people practices and guiding organizations globally to harness innovative HR solutions.
She's an advocate for the ethical use of AI, and we're going to learn from her on that. We're going to share how to use AI ethically and how it can transform HR practices while ensuring fairness and privacy. She's also going to share ways to mentor all of you out there on using AI effectively. Join us for this enlightening discussion that promises to reshape how we think about AI in the workplace. Welcome, Kathleen.
Thank you for the invitation to speak on this episode.
As we start, it’s traditional for us on this show to dive into your background a little bit as we launch. I would love to hear a little bit about you, how you got into the business of HR, and then what inspired you in your recent views on AI and its ethical use.
The first thing to know about me and the consulting firm that I work for, The Talent Company, is our mission is to help organizations achieve their goals through their people. Whether it’s helping with a specific culture or people’s strategies and practices, we’re big believers in the value of work and in creating meaningful work experiences for your employees as a way of achieving your organization’s goals.
This is important to me. I'm a big believer that we are all holistic beings. We have lots of dimensions, and work is one of the areas where we devote a lot of our time and attention. I'm a big believer in working with organizations to create positive employee experiences and development opportunities in the service of helping the organization and the individuals achieve what they're trying to do.
At a practical level, I started in university with a Business degree and then went into working with a large multinational organization as a Project Manager. One of my client groups was HR. I did a lot of HR work with them. I said, “You know what? I like this element of business management.” As much as we can sometimes feel a bit like we’re at the kiddie table, we do have a significant impact on the organization.
As I was thinking about my own career goals and how I wanted to spend my time, HR was where I wanted to be. When I started to think about what that would look like, I wanted the consulting side because I'm someone who loves the big picture. I love theories and frameworks, and then I like drilling down and making them work practically. Consulting seemed to offer that opportunity for me. That's how I got to where I am now. I end up sitting down with business leaders and HR leaders and helping them figure out what's working, what needs to be improved, and how they can do those things practically.
How long have you been in the consulting business?
Fifteen years. Still new enough that I don’t mind giving the number out, but long enough that I feel I am a bit of a grizzled veteran sometimes.
I love consulting, and I know that everybody on this call does. It's great to have such a huge impact on many different companies and their people. Sharing strategies like this is quite different from operating inside the business. It's a lot of fun for sure. Let's talk a little bit about artificial intelligence and its use in HR. We did a conference on AI and HR two years ago.
It's funny because that was right when ChatGPT launched. We've had AI for a long time, but it didn't take hold or prove widely useful until it advanced about two years ago. At that point, it was clear it was going to be transformational in how it impacts all of us. Around using it effectively, there was a lot of fear, and there still is fear in the use of AI. What is that going to do to all our jobs? How is it going to impact us as a society? Even to the extreme of wondering whether some sci-fi movie scenario is going to come to fruition. All of that is on the minds of people.
One thing I noticed is that AI was being adopted by a lot of different professions. HR was a little slow to the table. At the onset, there was a lot of concern about whether this was going to take away jobs or whether this was a people-centric technology. There was a lot of resistance at first. Did you see that, Kathleen?
Yes. I think you hit the nail on the head there. There's probably about an equal measure of excitement, positive buzz, and fear when it comes to AI. The first thing I would say, to put our discussion in context, is that a lot of people think of AI and they do think of sci-fi. They think of HAL from 2001: A Space Odyssey or some of those other technological villains.
The reality is that AI is here already. It covers everything from predictive analytics to chatbots and virtual assistants. It's integrated into the technology that we use these days. When people think of AI, the first thing that comes to mind is ChatGPT, which is getting some great marketing, buzz, and focus, but there are lots of other machine learning and AI technologies being developed out there.
As a result, you’re seeing excitement. You’re seeing organizations adopting this technology and at the same time, you are seeing great resistance. I compare it to the Industrial Revolution, where you had individuals who wholeheartedly embraced using new technologies in manufacturing environments for weaving machines and all of these things.
At the same time, you had the Luddites, who were going around breaking the machines. If we frame this as business leaders and HR professionals, we want to look at how we create positive people practices, how we nurture the cultures we want, and how we help our organizations achieve their goals. My thesis, or my point of view, is that these tools can be used very effectively and ethically. We just need to provide some leadership on how.
As the three of you know, I am based in the Toronto, Canada area. I work with clients locally. We work with organizations globally as well. Canadians in particular have been very conservative and slow to adopt AI in business processes. According to Statistics Canada, probably only about 9% or 10% of Canadian organizations have integrated AI into their business processes.
I'm going to put a little asterisk on that, because their definition covers organizations that have an AI strategy. They're not counting what Microsoft is doing with its suite of tools and how AI is embedded there, or how Google has embedded AI. Those 9% or 10% are the ones who've actively put in chatbots and the like. I'm going to talk about some specific cases from Canada that have influenced us, but I'll pause and say yes. There's a lot of excitement and a lot of work for us as HR professionals to do.
What do you think are some of the reasons why some organizations are reluctant to put in an AI policy?
There are two factors. Whenever something isn't well understood and you don't have a framework for it, it's natural to ask: if we introduce AI into our recruitment process, does that increase or decrease discrimination in our process? If you don't understand it, the answer is that it could be both. Depending on what tool you apply and how that tool is applied, it could increase discriminatory practices and bias, or it could mitigate them. When you don't understand how it works, the natural inclination is to gravitate towards what you do know.
The second piece is here in Ontario, we have seen legislation that has been a bit of a chiller. Going back to the recruitment here in Ontario, we’ve had a series of legislation called The Working for Workers Act. They’re at part four or part five at this point. It’s very broad-ranging. It covers everything from health and safety, the right to disconnect, remote work, and an employer’s ability to monitor your work. It put in place pay transparency legislation. It covers a very broad range of people and cultural topics.
One of the elements is mandating that you must disclose when you're using AI in your recruitment processes. That has been a bit of a chiller, because organizations aren't quite sure. If they don't have a bulletproof strategy on how they're using it, they don't want to disclose it, rightly so, because it's a Pandora's box. That has reduced adoption. The part that's not being said is that if you're using an ATS, it more than likely has AI embedded in there.
Most people initially, when they talk about AI and recruitment processes, are thinking about using an AI to assess candidates through video interviews before they ever get to interact with someone, or they think about resume screening being done by a bot, not by a human. That’s where people’s interpretations go with it. I think it’s here. We all just need to reckon with how it’s being used and then how we communicate its use.
To your point, I think that it is here. Whether or not an organization has its own policy on how AI is used internally, it's in a lot of the systems they're using every day. That should perhaps be disclosed to meet the regulation. Beyond keeping a strategy in place for how it is to be used ethically and appropriately going forward, training also needs to happen to ensure that our people are upskilled and know how to use these systems effectively.
There's a strategy and a procedural element. Another comparison I often make is the use of psychometric assessments as part of our recruitment process. What assessments are you applying? When are you applying them? How are they driving your decision-making? How transparent are you in that process to candidates and to hiring managers as well? Sometimes they don't even quite understand how those things interact with the experience and the results.
I think that's the first step. As you rightly said, the reality is that your strategies are enacted by your frontline managers, particularly when it comes to recruitment, but also when it comes to compensation and performance management. It's making sure that they have the right mindset, tools, and resources. Whether you've trained them in the appropriate use of those tools and resources matters, because you're going to rise or fall to the level of your managers' capabilities. That is important.
I know that Howard is a systems expert. He’s looking at a lot of different systems in HR and so forth. What are your thoughts, Howard, on AI’s role in systems in the future?
I was laughing a little bit to myself as we were having the conversation because some of these things had been in place before AI anyway. It just wasn’t under the name of AI. The tools in terms of doing resume reviews and looking for keywords and themes, we’ve already been living with some of those things. Now it’s this fear of, “There’s something called artificial intelligence that’s making these decisions.”
That’s where people need to understand. Corporations need to educate the people. It’s not making decisions. It’s just helping you look at data. There still has to be a human element or human judgment involved. AI will never replace that. I think AI is going to be here with us and going to be helpful in terms of being able to look at and assess large quantities of data.
There still has to be that personal interaction in terms of, “What does this all mean? How do we use it? What is not being looked at by AI?” For example, in individual resumes, there are circumstances that AI is not going to pick up. What hardships did some people have to overcome to get to where they are now? I don’t think AI will ever pick that up.
It's true. You are making a good point. It's been here for a long time, and some of that baggage is carried forward. As technology has evolved, maybe people's understanding has not evolved with it. I always hear about the case of Amazon, where they implemented AI technology as part of their recruitment process. They had to cancel that because they found that the algorithm and the tool they were using were biased against women. Anything that had references to women in the resume got downgraded.
That comes down to how this is programmed and how that learning happens. It's all about the programming and the training at this point. AI is not at that singularity point where it is starting to develop on its own. That case was a few years ago, and there are lots of great tools that have taken that learning and quickly, very deliberately, looked at where those biases are and how they can be weeded out of the tools. Now, the research does seem to indicate that the ethical use of technology is going to be less biased than an individual, because we're all biased. We all carry biases. It's how we mitigate that bias.
That's the second piece. For any use of technology, I advocate that there needs to be governance. There needs to be monitoring, and there needs to be some thought about how it's being used. The other example that you see is one of those other chillers or horror stories here in Canada. One of the major airlines implemented a chatbot for customer experience. Rather than waiting in the customer service line, you can interact with the chatbot.
There was a situation where an individual was interacting with the chatbot, and he was given incorrect procedural advice by the chatbot. Essentially, what it boiled down to was that he was looking to get a bereavement fare, which is something the organization did offer. He was told, "Purchase the ticket, then submit it to us and we will discount the fare for you." That's the direction he was given by the chatbot. He did that, and when he subsequently went to get his refund for the difference, the actual live customer service said, "No, that was not our procedure."
That went to the courts, and it was found in favor of the individual. The chatbot is giving directions; it's essentially the same as if an employee gave that direction, and you will now be held to it. We want to be sure of the programming and the advice that's being given. Putting it into an HR perspective, if we've put in chatbots to support a self-serve approach to the employee experience, and those chatbots are giving incorrect information to employees, we're going to be held to the advice those chatbots give.
It’s interesting when we think about this impact on the future. When we talk about training chatbots and training AI all the time, this points out that we need to make sure that we’re training them in alignment with how we’re training our people to ensure that they’re dealing with the same information. Char, I’d love to hear your thoughts on how leaders should think about training their AI and keeping on top of all of this.
I’ve been sitting here and you know me, I have to tell a story. I was doing my informal study with my son, who’s 21 years old. He is getting his Bioengineering degree. I’m so proud of him. We were talking about being in college, in the middle of college, and his experience with AI. We’re talking about the ethical aspects of AI relative to being in college, writing papers, plagiarism, and how the colleges are struggling with keeping on top of that and helping students in that arena.
It's only in the last two years that he's been hit with this. Our youth, young adults, are being hit with AI and how it affects them in their school years. His generation is very familiar with this technology and the ethical aspects of AI in the college setting. Also, my older daughter is in the workforce and is more fully aware than I am of what AI has done in the work setting.
Getting to your specific question here about training, I think it’s all about an overall awareness that each of us, no matter what age we are, have a different experience with AI, and to realize that we need to bring the human aspect to it. Thank you, Kathleen, for bringing up that example of where something got derailed because it was all focused on AI and gave misinformation.
Our up-and-coming generation entering the workforce is fully aware of this and way smart about it. It's not that Gen X-ers or older folks don't understand, but in particular, the generation coming up into the workforce is already past the novelty. They know the technical glitches. They understand how it's impacted their education environment.
I would say from a training perspective, be fully aware of this, and also of the benefits and the disadvantages. As an employee, feel liberated in how you work with these systems, and understand how they're impacting the changes in our work environment, which are happening daily. What do you think, Kathleen?
You raised an interesting point, which is that everyone’s experience is different. That is true of how we interact with AI. That’s why I’m excited about AI because when you think about offering flexible or different experiences to different employees, there’s a complexity that can come with that fairly quickly. AI can help us as HR professionals to reckon through all the work that would be required to give different experiences.
I love learning from other industries. Within the healthcare industry, there was a study that looked at patients having access to AI technology as part of their care plans. What they found was that they got better results and had better experiences, particularly at some of their low-level inquiries, when they were interacting with technology over interacting with a healthcare professional.
Part of that was because they didn't mind asking technology what they thought might be a stupid question. They thought, "I'm not going to waste my doctor's, nurse's, or other healthcare professional's time asking a question that I should probably know the answer to." They have no hesitation about doing that with the technology.
They will take more time with the technology, and they will ask a lot more questions. That creates different experiences and addresses needs differently, but you want to balance that. You don’t want them getting all of their healthcare answers from technology. We know what happens when that’s the approach that people take. It’s that balance of when you go to the technology for the support that you need and when you need a human. When do you need expertise, coaching, or advice?
I appreciate that, Kathleen. I have a long history in healthcare, and I would say from personal experience, it's interesting that through my own healthcare system, the chatbot comes up and I'm in that situation. I think it's a change management aspect too, because for some individuals or demographics, up comes the chatbot asking these medical questions, and the resistance is, "Representative." "Representative." You know what I mean? I've done that, "Representative," yelling at your phone. You then hang up and call a different number, "Representative."
Anyway, I think you're right. It's a change management process, particularly in the healthcare setting. It took forever for our physicians to get used to medical records being electronic versus paper. I've been the HR person working with medical records departments. Converting to an electronic medical record system is a change management process, getting your physicians and clinicians used to the new way of working.
Now you introduce AI quickly, and not only for your medical staff, physicians, and clinicians, but also for your patients, that's a change process. I feel that communication must occur so that people can accept that you will perhaps be talking to a chatbot, but don't be intimidated or feel like you're not going to get the care you need. It might be challenging in those professions right now.
I think that's the key thing. Two things come to mind in response. The first is the change management and, to call back to what we were talking about, the training. Think about something like ChatGPT: a manager might use it to create a job description, or, as is very common with younger generations, they'll use ChatGPT as their coach.
Here's the situation, how do I respond to this? There is training that needs to go into that, because good advice comes from good prompts with technology like that. The change management, building the mindset, and the skillset required to engage with these technologies ethically and effectively are important. That's the first thing that comes to mind. The second thing has slipped my mind. I think we're coming up to the half-hour anyway at this point. I'll yield for a second.
As we get towards our conclusion here, Kathleen, where do you think AI is going? Where should HR practitioners think about implementing AI for the biggest result in the next year or so?
Think about the constant evolution the HR profession has gone through, trying to move beyond being the personnel department of 30 or 40 years ago. We were administratively heavy in what we were doing, and it squeezed out our ability to do anything from an advisory or strategic standpoint. As a productivity hack, AI is going to be fantastic at eliminating some of that administrative burden.
Whether it's what Howard was alluding to, where there is so much analysis and so much data, a lot of HR teams are putting HR analytics and KPIs into place and then don't have the time to think about the insights coming from them. AI can help with that. AI can help with employee sentiment and other two-way interactions with employees in a way that can allow us to focus on the more strategic elements.
There is so much that we have to reckon with. We have to look at our total rewards, how we interact with employees, and how transparent we are in those processes. We need to look at evolving talent acquisition and talent management processes so that they're less compliance- and administration-focused and more results- and experience-focused.
Employees want a greater focus on cultural elements, not just vision and values. They want to know as an organization, where we stand on social justice. Where do we stand on diversity, equity, inclusion, and belonging? Where do we stand on employee development, sponsorship, and mentorship? All of these programs and outcomes require a lot of time, effort, thought, and intention. If we can get out of the administration and get more time on these higher-impact areas, I would be happy for us all.
No kidding, for sure. Kathleen, as our audience is pondering AI and its ethical use, particularly in the Canadian environment, how can they get hold of you to learn more about these topics?
I love talking about these topics, as you can tell by the fact that I went a little bit over. I am always happy for individuals to reach out through social media and then also engage with some of the content that I am publishing in this area. I’m always happy to make connections between individuals. Lastly, at the Talent Company, we do have our monthly Top Of Mind Webinars, which are specifically designed much like your show here to address the topics and the matters that are top of mind for HR professionals. Visiting our website, TheTalentCompany.ca, you can see our upcoming webinars. Register and let’s continue to make connections, share our insights, share our experiences, and share solutions where it’s appropriate.
It’s been a pleasure having you here, Kathleen. Thank you for sharing all your experience and expertise on this topic.
Char and Howard, I appreciate your stories as well and I’m looking forward to continuing to discuss this with you.
I sent you a LinkedIn request already. I think our audience should connect with you at least on that platform, and look at your insights. Thank you very much for joining us on this episode.
Thank you.
Thank you, everyone, for tuning in to the People’s Strategy Forum, and we’ll see you next week.
Kathleen Jinkerson is the Vice President of HR & Total Rewards Solutions at The Talent Company. With a wealth of experience in human resources and talent management, Kathleen is a passionate advocate for elevating talent and people practices within HR and Total Rewards. She works closely with organizations of all sizes globally, helping them leverage proven and trending practices to optimize their teams and refine their HR, talent, and total rewards strategies.
Kathleen is also a sought-after speaker and active participant at numerous HR, leadership, and industry conferences, including the WorldatWork’s Canadian Total Rewards Conference and the HRPA Annual Conference. In 2023 and 2024, she was recognized as one of the Top 50 Women Leaders of Toronto by Women We Admire.
In her role, Kathleen partners with clients to elevate their HR policies, programs, and practices, mentoring and coaching HR professionals at all levels to help them achieve their career goals.