
Beware: old tweets may come back to haunt you

You may have long forgotten the tweet you sent after a night of revelry in 2012. It is probably buried under the 45,000 tweets you have published since then.

Safely hidden, then, from the prying eyes of prospective employers. Or so you think.

While it would be tedious for a human to trawl through all of your old social media posts by hand, it is relatively straightforward for a machine. Algorithms can dig up even your most offensive posts.

According to a 2017 survey by CareerBuilder, 70% of employers screen candidates' social media profiles before hiring.

Social media screening

The practice has become so widespread that there is now demand for background-screening services that focus specifically on a candidate's social media accounts.

US-based Fama Technologies, for example, says it uses machine learning and natural language processing to spot "red flags" in a person's social media profile. It also notifies candidates when they are being screened.

According to the company, the AI-powered service is not used to snoop on a person's private life or leisure activities. It does, however, review public posts for signs of hate speech or intolerance.

In 2016, Ben Mones, CEO and co-founder of Fama, told CNBC that employers are on the lookout for people who are careless about what they say.

If a post is flagged as hateful, misogynistic or racist, the AI sends a link to it to the recruiting team. The tool is thus used to identify bullies before they join the organization.
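To make the workflow concrete, here is a toy sketch of that flag-and-forward step. This is purely illustrative: Fama's actual system reportedly uses machine learning and NLP classifiers, whereas this sketch substitutes a simple keyword match, and the term list, URLs, and function names are all hypothetical.

```python
# Illustrative sketch only -- a keyword matcher standing in for the
# ML/NLP classifier the article describes. All names are hypothetical.

# Hypothetical terms a screener might treat as red flags.
FLAGGED_TERMS = {"hateful", "misogynistic", "racist"}

def flag_posts(posts):
    """Given (url, text) pairs of public posts, return the links that
    contain flagged terms, paired with the terms that matched."""
    flagged = []
    for url, text in posts:
        words = set(text.lower().split())
        matches = words & FLAGGED_TERMS
        if matches:
            # Only the flagged post's link would be sent to recruiters.
            flagged.append((url, sorted(matches)))
    return flagged

posts = [
    ("https://example.com/post/1", "Had a great day at the beach"),
    ("https://example.com/post/2", "that was a racist remark"),
]
print(flag_posts(posts))  # [('https://example.com/post/2', ['racist'])]
```

A real system would classify the meaning of a post rather than match words, which is exactly the context problem critics raise below: keyword matches cannot tell a slur from a post condemning one.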

Is it worthwhile to employ you?
The intention seems commendable, but can AI really be relied on to spot "red flags" on social media?

Jay Stanley, a senior policy analyst at the American Civil Liberties Union, has reservations about how AI-enabled social media screening works.

Even with the most sophisticated AI, automated analysis of human speech, including social media posts, is highly unreliable. "Computers simply do not understand context," Stanley said.

"I hate the idea of people being unfairly denied job opportunities because a computer decided they have a 'bad demeanor' or some other red flag."

Patrick Ambron, CEO of online reputation management company BrandYourself, doubts that an algorithm can tell who is worth hiring and who is not.

"While they may save a company time, they are often inaccurate and unfair, penalizing people for things that are not their fault and rewarding those who simply fit a certain mold," Ambron wrote in AdWeek.

"If you are not proactive, you may lose out on opportunities you would otherwise deserve."