Implications of AI in Talent Acquisition

Written by Sarthak Ahuja

ChatGPT, launched on November 30, 2022, heralded a new era by making disruptive AI accessible to the masses. It changed the way we interact with computers, replacing conventional interfaces with natural conversation. This breakthrough showcased the power of AI to non-tech-savvy individuals and highlighted its potential to shape our future.
As one description of ChatGPT and other NLP (Natural Language Processing)-based chatbots puts it, "by ingesting such large volumes of data, the models learn the complex patterns and structure of language and acquire the ability to interpret the desired outcome of a user's request." It was clearly a breakthrough moment in disruptive technology, but OpenAI soon found itself in murky waters when accusations of bias and prejudice were levelled against the pathbreaking chatbot.
After all, man is the father of the machine, and not just intelligence but prejudice, too, is inherited.
"These robots were trained on AI. They became racist and sexist." This article in The Washington Post detailed a well-studied observation: AI models can exhibit racist, sexist, and other socially prejudiced behaviours. AI is trained on large datasets, and these keep getting larger; the larger the dataset a model is trained on, the wider its capabilities and knowledge base become. In the pursuit of quantity, however, quality may be compromised, leaving cracks for prejudices and stereotypes to creep in. But what harm does a sexist robot do if its only job is to stock shelves? Perhaps none, yet when AI deals with or makes decisions about humans, its biases can reinforce and ossify existing prejudices and stereotypes.
The responsibility that AI will shoulder in the near future is considerable. Talent acquisition is one task that organisations will delegate to AI, and it can be of great help to HR managers who must find the right talent in a sea of applications. Moreover, AI will be used to screen and select candidates for specific roles, trained on datasets of high-performing or "successful" candidates in those roles.

Identified Discrimination:
A significant concern with AI in talent acquisition is decision-making itself. Would AI discriminate between men and women when selecting a CEO, given the current underrepresentation of women in such positions? Among Fortune 500 companies, only 52 CEOs are women, and there are no non-binary or queer CEOs. In this context, it becomes crucial to ask how AI can ensure fair selection and avoid reinforcing the glass ceilings that keep women and other marginalised groups out of leadership positions. The issue of representation extends beyond gender to race: the number of non-white CEOs on the same list is even smaller. Biases can permeate AI systems along many dimensions, not just gender and race. For instance, a paper published in Nature demonstrated how language models disproportionately associate the term "violence" with Muslims, highlighting yet another bias that can emerge in AI systems.
AI talent acquisition and management tools have to be used ethically, and their datasets cleaned of such biases. The social identities of applicants and employees have to be anonymised, and training datasets de-identified.
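As a minimal sketch of what such de-identification could look like in practice (the field names and the salted-hash scheme are illustrative assumptions, not the schema of any real applicant-tracking system):

```python
import hashlib

# Hypothetical set of direct identifiers to strip before a record reaches
# any screening model.
DIRECT_IDENTIFIERS = {"name", "age", "gender", "nationality", "email"}

def deidentify(record):
    """Drop direct identifiers and replace them with a salted hash token."""
    # Assumption: the salt is stored separately and never shipped with the data.
    salt = "replace-with-a-secret-salt"
    token = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:12]
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["candidate_id"] = token  # pseudonymous key for re-linking decisions
    return cleaned

applicant = {
    "name": "A. Candidate",
    "email": "a.candidate@example.com",
    "gender": "F",
    "education": "B.Tech",
    "experience_years": 4,
}
print(deidentify(applicant))
```

Only the non-identifying attributes (education, experience) survive, keyed by a pseudonymous token so that decisions can still be audited without exposing who the applicant is.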

Statistical Discrimination:
But will de-identifying or anonymising datasets suffice? In a world full of social barriers, our social identities significantly predict our curricula vitae. This means that even after de-identification, AI can make prejudiced decisions, because the marginalisation of social groups is also a statistical reality. Say AI has to choose between two candidates for a role whose social identities, such as name, age, gender, and nationality, are absent from the dataset, and the only data available are educational background and work experience. Will AI prefer the private-school candidate over the government-school candidate, given that underprivileged groups are a minority in private schools and are also less successful professionally because they are deprived of quality education and resources? This might sound like a stretch, but statistics show that Scheduled Caste (SC) students make up 11.17% of enrolment in private unaided schools and Scheduled Tribe (ST) students 6.93%, whereas SC and ST students together account for 44.2% of government-school enrolment, reflecting the lack of diversity and inclusion in private unaided schools. Studies have consistently shown that private-school students do significantly better than public-school students and form the majority in top national institutions such as the IITs and IIMs. Given this disparity, even non-social identifiers such as schooling and employment carry latent asymmetries across social groups. AI can treat these asymmetries as 'statistically relevant' and further perpetuate social injustice.
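The argument can be illustrated with a small, entirely hypothetical simulation: even when the protected attribute is removed from the records, a screener that keys on a correlated feature such as school type reproduces the group disparity. The rates below are invented for illustration and are not drawn from any real dataset.

```python
import random

random.seed(0)

def make_candidate(group):
    # Assumed correlation: the privileged group attends private school far
    # more often. The protected attribute itself never reaches the screener.
    private = random.random() < (0.8 if group == "privileged" else 0.2)
    return {"group": group, "private_school": private}

candidates = (
    [make_candidate("privileged") for _ in range(1000)]
    + [make_candidate("marginalised") for _ in range(1000)]
)

def screener(candidate):
    # A naive model trained on past "successful" hires ends up preferring
    # private-school candidates; it never consults group membership.
    return candidate["private_school"]

def selection_rate(group):
    pool = [c for c in candidates if c["group"] == group]
    return sum(screener(c) for c in pool) / len(pool)

print(round(selection_rate("privileged"), 2))   # selection rate, privileged group
print(round(selection_rate("marginalised"), 2))  # selection rate, marginalised group
```

Under these assumed rates, the two groups receive very different selection rates even though the "de-identified" screener sees only school type, which is the statistical-discrimination effect described above.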

Rational rather than rapid adoption:
It can be argued that AI is still in its infancy, yet to be nurtured and groomed before it is fit to integrate into our society. Managers and entrepreneurs have to stay vigilant and take AI's promises with a pinch of salt. The disruptive and promising prospects of AI can woo managers into rapid adoption without due scrutiny of its implications for all stakeholders. Rational rather than rapid adoption of AI, with a clear comprehension of what it entails and how it arrives at a decision, is the wiser choice. AI is only a chisel in the hand of the sculptor that is the manager; let it not sculpt for you.

