The workforce needs to be prepared for the human-machine partnerships of the future. When the discussion turns to AI, L&D leaders should be speaking up — not stepping out.

BY MARK MARONE

Artificial intelligence is changing the way people live and work, promising massive advances in accuracy, productivity and personalization by replicating human capabilities with information technology systems that can sense, reason, comprehend, learn and act. Organizations are eager to employ it, but business leaders who hope to benefit from the full potential of AI — including those responsible for developing human capital — have a lot to think about.

Expectations are high, but there is an undercurrent of skepticism. Technology has a history of producing unintended consequences. While AI has the capacity to transform work experiences for the better, it can also threaten the trust that underpins a healthy corporate culture and strong employee engagement.

Where Are We Now?

AI use is becoming widespread, including in automated logistics and warehousing systems, routine medical procedures, robotic manufacturing, chatbot technology, investment analysis, reporting and decision-making. Advances continue toward autonomous vehicles, Alexa and Siri have become like family, and music-streaming apps play our favorite new artists before we’ve ever heard of them.

Even areas once considered invulnerable are beginning to yield to AI. Complex human resources activities such as generating performance reviews, gauging potential for promotion and succession planning are all being explored. Already software can assess job applicants via video, comparing their responses — both verbal and nonverbal — with the predetermined ideal for a given position. Personalized learning platforms offer training recommendations at relevant points in the learner’s journey tailored to an employee’s position, experience and the task at hand. The list seems to grow daily.

Should Organizations Adopt AI?

The short answer is “yes.” Over time, the insights and productivity gains that AI promises will likely provide an invaluable edge for organizations that have implemented it effectively.

Experts recommend beginning by evaluating which business functions could most easily benefit from AI and which analytical techniques would be required. Another critical step is ensuring the availability of current, quality data and attending to processes involved in managing it.

At this point in the discussion, many HR and L&D leaders begin to check out. What do they have to do with any of this? In fact, AI experts suggest that how well analytical techniques scale up in reality will depend heavily on the quality of a company’s human skills and capabilities.

There are important aspects of successful AI project planning that many organizations are missing, and HR and L&D professionals are in the perfect position to lend a hand. One of them is evaluating a project’s impact on the corporate culture, employee experience and, ultimately, employee engagement.

In July 2018, Dale Carnegie & Associates conducted an online survey on the impact of AI on the workplace. (Editor’s note: The author is an employee of Dale Carnegie & Associates.) The 752 respondents, all full-time employees, represented a range of leadership levels and industries in both the United States and Japan, two countries on the leading edge of AI adoption.

Seventy-five percent of respondents believe that AI will “fundamentally change the way we work and live in the next 10 years,” and the vast majority expect those changes to be for the better. The survey also found that 58 percent feel positive about AI’s potential to take on routine tasks, freeing them to focus on more meaningful work.

At the same time, nearly a quarter (24 percent) of respondents (and 30 percent of those at director level or above) said they were very or extremely worried about the potential impact of AI on their organization’s culture, with another half of respondents expressing concern to a lesser degree (see Figure 1).

These leaders recognize that the gains from AI could be offset, at least in part, if the resulting changes to corporate culture disengage employees. And there are already warning bells. Global mega-retailer Amazon is on the cutting edge of AI and recently received patents for a wristband that would track warehouse employees’ every move, vibrating when its algorithms judge they are doing something wrong. It would be an upgrade to similar tracking technology already in use. In an article that vividly illustrates the potential impact on employee engagement, one former Amazon warehouse worker told The New York Times, “After a year working on the floor, I felt like I had become a version of the robots I was working with.”

While optimism prevails in Dale Carnegie’s research, people do worry about losing their jobs due to advances in AI. More than half of respondents expressed at least some degree of concern. HR leaders don’t need a lecture on the negative impact that impending job cuts — or even rumors of them — have on a workforce’s performance. As Carnegie observed decades ago, “One of the worst features about worrying is that it destroys our ability to concentrate.”

Most who study the issue predict job creation will largely compensate for losses, yet anecdotes suggest fears of pink slips aren’t unfounded. The World Economic Forum reported a telling example in 2017: A Chinese mobile phone manufacturer cut its workforce of 650 people by 90 percent, replacing them with 60 robotic arms. Only 60 people are still employed there — three to maintain the production lines and the others to monitor AI-controlled systems. The factory’s general manager predicts the number of employees could fall to 20.

But predictable physical activity is one of the easiest forms of work to automate; many occupations involve skills much harder to mechanize with available technologies. That said, McKinsey asserts that a significant proportion of tasks involving collecting and processing data, interfacing with stakeholders, and applying expertise to decision-making are also near-term candidates for automation.

Experts disagree on the timeline and magnitude of the expected impact. What is clear is that companies’ decisions about what to automate and how will have profound effects on the relationship between leadership and the remaining human workforce.

Breaking Down Resistance

Outside of the technological expertise it requires, the biggest obstacle to successfully implementing AI is trust. It’s not surprising given the lack of transparency often involved with machine learning applications, the potential for bias, and fears stemming from AI-induced dystopias in pop culture. Only recently, Amazon made headlines again when it had to shut down its AI-based hiring application because it showed bias against women.

The industry is trying to address some of these fears: IBM recently announced the launch of AI OpenScale, a service that promises to “infuse AI with trust and transparency, explain outcomes and automatically eliminate bias.”

But technological solutions, however sophisticated, won’t resolve the trust problem entirely. Company leaders who hope to retain the goodwill of their remaining human workforce will need to strengthen and protect trust. Doing so hinges on effective communication and honesty but also on behaving in ways that are consistent with the principles and values they espouse. Imagine the resulting cynicism when organizations that claim to “put employees first” are forced to admit to biased AI-generated performance assessments or privacy breaches resulting from AI projects or announce unexpected layoffs as their need for human employees decreases.

Maximizing the Human Contribution
Several years ago, Google’s founders experienced an epiphany that radically altered their hiring philosophy. Believing that only technologists can understand technology, they originally set their applicant screening algorithms to sort for computer science students with top grades from elite universities. In 2013, Google decided to test its hiring hypothesis. Codenamed Project Oxygen, it analyzed the entirety of its hiring, firing and promotion data since the company’s incorporation in 1998. The results were shocking: Among the most important qualities of Google’s best employees, hard skills such as science, technology, engineering and mathematics were rated last.
Instead, the project determined that the top characteristics of successful Google employees were all soft skills: being a good coach and communicator, gaining insight by listening to differing points of view, showing empathy and support for colleagues, thinking critically, solving problems and making connections across complex ideas.

This type of evidence suggests that the human contribution to the human-machine partnership will remain crucial. People sense this, as the Dale Carnegie survey confirmed. While there is no doubt that demand for advanced technical skills will grow, when asked which skills they believe they would need to avoid losing their jobs to AI, 70 percent of respondents chose soft skills over hard skills. (Figure 2 shows which soft skills respondents expect to need most.) And they are looking to their employers to train them.

Looking Ahead

While predictions vary, it’s less likely that entire occupations will be automated than that some activities will be automated across many occupations. Roles for humans will remain as long as there are areas where employees aren’t ready for AI to take over. For instance, fewer than half of survey respondents are willing to accept a performance review conducted by AI — even if they knew the exact criteria being used. The time when AI-generated recognition will suffice hasn’t yet arrived either: Nearly 6 in 10 say it would be less valuable if they knew it wasn’t coming from a real human.

HR and L&D can contribute to the successful implementation of AI by taking the following three steps.

Stay attentive to trust. It forms the foundation of a competitive and healthy corporate culture. As Carnegie said, “Evidence defeats doubt.” Develop confidence in AI among employees by creating awareness of how it is being used successfully elsewhere. Assess, protect and build trust in senior leadership so that stakeholders can rest assured they will do the right thing when dealing with issues that will inevitably arise, such as ethics, security and privacy.

Bring a people-focused perspective to discussions about the implications of AI on the employee experience and engagement. HR and L&D leaders can sound the alarm when they see potential pitfalls, ensuring they are addressed early in the project. Also, they can look for opportunities to redesign the remaining work in ways that will strengthen employees’ connection to the organization’s purpose. Company leaders must clearly outline the value proposition in AI for their workforce.

Lay the foundation for success with the right training strategy. With AI’s implications in mind, now is the time for L&D leaders to assess skills gaps and identify how they can help their entire organization level up in the areas that will complement AI’s proficiencies.

There is unquestionably great promise for what can be achieved when people and machines combine their capabilities. Yet despite AI’s potential, most work — at least in the near-term — will still require a combination of human skills and technology, so when the discussion turns to AI, L&D leaders should be speaking up — not stepping out of the conversation.

Mark Marone is director of research and thought leadership for Dale Carnegie & Associates. He can be reached at editor@CLOmedia.com.