
Should you tell an employee if they're talking to a robot?

Robotics may be flourishing in the recruitment industry, but that does not mean we have to mislead candidates. It may be amusing to see just how human a chatbot can be programmed to sound, yet it raises ethical concerns about the recruiting process.

Abhishek Gupta is an AI ethics researcher at McGill University in Montreal, Canada, and the founder of the Montreal AI Ethics Institute.

His research focuses on applying technical and policy methods to address the ethical, safety, and inclusivity concerns raised by the use of AI in various domains. He also brings a strong technical foundation as a Machine Learning Software Engineer at Microsoft in Montreal.

HR Tech News interviewed Abhishek about the evolving ethics of AI-powered recruitment and the importance of keeping a human element in the hiring process.

“We have to be careful not to give an AI system too much power,” Abhishek explained. “These machines are not autonomous; we are responsible for the system’s design. In the end, the person who operates the machine is the one we should trust or distrust.”

“There is this concept of an evolving output distribution, where the algorithm interacts with real-world data and learns from it, which prompts the question of whether it is wiser to trust the human or the system itself. Bias originates in the data itself: the stereotypes present in society are captured and, as a result, reflected in the outputs of machine learning systems. For every AI error that is publicly disclosed, I am certain there are numerous others that remain undetected within organizations.”
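To make that point concrete, here is a minimal sketch (not from the interview; the data, variable names, and scenario are entirely synthetic and hypothetical) of how a model trained on biased historical hiring decisions reproduces that bias in its own outputs:

```python
# Minimal illustrative sketch: a model trained on biased historical
# decisions reflects that bias back. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic candidates: years of experience, plus a sensitive attribute
# (group A vs. group B) that should be irrelevant to hiring.
experience = rng.normal(5, 2, n)
group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B

# Historical "hired" labels encode a societal stereotype: past
# recruiters favored group A independently of experience.
hired = ((experience + 2 * (group == 0) + rng.normal(0, 1, n)) > 6).astype(int)

X = np.column_stack([experience, group])
model = LogisticRegression().fit(X, hired)

# Score two otherwise-identical candidates who differ only in group.
candidate_a = np.array([[6.0, 0]])
candidate_b = np.array([[6.0, 1]])
print("P(hire | group A):", model.predict_proba(candidate_a)[0, 1])
print("P(hire | group B):", model.predict_proba(candidate_b)[0, 1])
# The gap between these two probabilities is the stereotype, learned
# from the training data and reflected in the model's output.
```

Nothing in the model's code mentions the stereotype; it emerges entirely from the labels, which is why, as Abhishek notes, such errors can sit undetected inside an organization.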

Recruitment is one of the industries most transformed by AI and automation. By taking menial tasks off their plates, these tools let HR managers devote their valuable time to more human skills. Nevertheless, we must draw a clear boundary with our digital colleagues.

For example, should a candidate be told that they are applying for a position through a chatbot or an AI-powered recruitment tool?

Abhishek answered with an unequivocal yes. “Look at the Google Duplex example from this summer,” he said. At the Google I/O conference, the team unveiled Google Duplex, an assistant that can schedule appointments on behalf of the user.

The system even imitated human cues, using filler words like “umm” and “err” and the pauses needed to replicate natural speech patterns. While immensely valuable in connecting people and businesses, it also raises concerns about disclosure.

Is it dishonest to let someone believe they are conversing with a human when they are not? Scheduling a hairdresser’s appointment may seem inconsequential, but there is no guarantee that this will remain the technology’s only application.

“Automated telephone systems are fine,” Abhishek continued, “but the disingenuousness is more apparent when you are talking to what sounds like a live human voice.”

When it comes to recruitment, Abhishek believes organizations are obligated to tell candidates when they are interacting with a machine, however human-like it may seem.

“It is crucial to maintain full disclosure and respect transparency,” he told HR Tech News. “It certainly does not bode well for a candidate’s confidence in the company they are applying to if the relationship is founded on deception.”
