Artificial intelligence (AI) is back on the front page, and the news isn't comforting. According to some reports, researchers at OpenAI -- the company that brought you ChatGPT -- issued dire warnings about a potentially dangerous AI discovery just before Sam Altman, the company's CEO, was briefly ousted by the board. One among many factors in the board's decision was concern about "commercializing advances before understanding the consequences" of a powerful new model that can generalize, learn, and comprehend.
Concerns like this are justified, but, like it or not, AI has already permeated the U.S. work environment. In particular, generative AI (like the OpenAI model) is said to be "super-charging the potential of technology to help, hinder or reorient how we work" in fields ranging from computer science to human resources, recruiting, corporate leadership, and psychology. Beyond making predictions about existing data sets, generative AI machine-learning models create new data. In effect, generative AI systems can learn to generate entirely new entities that resemble the data they were trained on.
It simultaneously amazes and frightens me when I realize how AI has been transforming workplaces in the U.S. Earlier this year, the Wall Street Journal ran an article containing perspectives from a wide range of thought leaders regarding AI's current and potential impact. The following insights and observations caught my attention.
Changing organizational structure versus building talent. As AI automates basic tasks (e.g., accounting, purchase orders, requisitions), there will be less need for entry-level employees in some areas. Similarly, broadening deployment of AI client/patient rating systems for job performance assessment will likely decrease the need for human oversight. To some, these trends suggest cuts in the entry-level tier of employees. Others observe that, without an entry-level tier to develop talent internally, mid-level hiring will have to come from outside the organization, and experienced mid-level talent will be hard to find.
Automating worker performance assessment. While AI is generally accepted as a tool for assessing technical skills and measuring efficiency, it has become increasingly common for corporations to use AI to monitor employee empathy on the job! For example, AI programs can coach and score call center workers on an empathy scale to judge their performance with callers. In effect, these human interactions are being judged by "pretend-empathy machines" that are programmed to "understand" jealousy, competition, insecurity and other human feelings that arise in a workplace. This raises serious questions. Will human empathy eventually be redefined in terms of what machines can understand? Will humans eventually retrain themselves to please machines?
Potential advantages for older workers. The veritable explosion of knowledge and new technology makes training and upskilling essential for all employees. AI applications are being developed to address individual learning and knowledge gaps, and these may help older workers remain current and competitive. In the healthcare sector, AI systems that speed the collection and organization of clinical information will almost certainly become invaluable to physicians of all ages. However, older doctors will be better equipped to apply their experience and knowledge to validate AI diagnoses and treatment recommendations.
Identifying and addressing skill gaps. AI can automatically detect skill gaps on an individual, team, and organization-wide basis; moreover, it can identify ways to address those gaps even before management becomes aware of them. The skills necessary for certain jobs change constantly, making it difficult for organizations to keep track. For instance, nurses and physician assistants must become familiar with an increasing number of tech platforms and data analytic tools to help improve patient outcomes. AI can help track the skills an organization needs now and predict the skills it will need next.
Two cautionary points are worth mentioning. The first has to do with human judgment. AI drives better collaboration and productivity, but human judgment is essential for maximizing its power. AI is known to create "false facts," yet research shows that only about half of employees believe they know when to question the results of AI. It follows that companies should teach employees to bring inquiring minds to AI output rather than accept it at face value.
The second cautionary point relates to ethics. The rising use of digital assistants -- those that speak with a human voice, take on a human appearance, and use social intelligence -- in the clinical environment may pose a serious threat to ethical behavior. In-person working groups build emotional bonds among the participants. Research shows that when collaboration occurs via AI, these social checks on ethical behavior weaken, and interactions may become more transactional and self-interested. Individuals working through an AI assistant are also more likely to instruct it to use deception and emotional manipulation.
Where do we stand vis-à-vis the future of AI in the workplace? The old cliché "tip of the iceberg" comes to mind. How do we deal with the coming wave of generative AI? Another cliché -- "proceed with caution" -- seems apt.