February 10, 2022 - 4 minute read

Bias in Hiring, the EEOC, and How RI Can Help

Perspectives

The current conversation around ML model trust, governance, reliability, and fairness has centered on the people affected by the downstream consequences of AI-driven decision making. In particular, companies and organizations deploying AI to recruit, serve, and manage employees are reckoning with the failure modes inherent to ML models.

AI in hiring technology allows recruiters and employers to get more out of their applicant tracking systems (ATS): it helps them hire more efficiently, shortlist more accurately, screen resumes with greater fairness, and boost the number of qualified applicants for open roles. All of these benefits position AI as an essential feature of competitive hiring practices.

While artificial intelligence in hiring confers many benefits, it also raises challenges and ethical questions. Where exactly does the problem lie?

AI is only as good as the data it learns from. Biases appear in model outputs for a variety of reasons, but it is widely acknowledged that models trained on historical data can reproduce historical biases that are incompatible with equitable treatment and carry immense consequences for hiring outcomes.

Additionally, many commonly used metrics, such as performance reviews, are subjective and tend to favor certain sociodemographic and socioeconomic groups over others.

Another challenge for the use of AI in hiring is the limited amount of usable data. Because most businesses and ATS platforms only collect data during the initial stages of the hiring process, AI is frequently required to make determinations about candidates from partial information. Many firms try to compensate for these limitations by drawing on algorithms and data from other sources, including social media platforms such as Instagram or Facebook, which raises privacy concerns.

While employers recognize that they can't or shouldn't ask candidates about criteria such as national origin, sexual orientation, political affiliation, disability status, or mental health status, AI and ML technologies can already infer many of these factors indirectly and without consent. There have been many high-profile examples of systems revealing learned biases, especially relating to race and gender.

The Impact of the EEOC (Equal Employment Opportunity Commission)

On October 28, 2021, the U.S. Equal Employment Opportunity Commission (EEOC) launched a new initiative on artificial intelligence and algorithmic fairness, designed to ensure that AI-enabled technologies used in hiring, firing, and promotion abide by federal civil rights laws.

Given the ethical issues that accompany the use of AI, the EEOC initiative is designed to help prevent some of the inherent risks, such as bias in automation, that pervade AI systems.

In the words of EEOC Chair Charlotte A. Burrows: “Artificial intelligence and algorithmic decision-making tools have great potential to improve our lives, including in the area of employment. At the same time, the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”

Three key equal employment opportunity laws are relevant to the EEOC initiative: Title VII of the Civil Rights Act of 1964, Title I of the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA). Title VII imposes restrictions on how employment tests are scored, permitting them only as long as they are not “designed, intended or used to discriminate because of race, color, religion, sex, or national origin.” The ADA prohibits employers from discriminating against qualified individuals on the basis of disability. Lastly, the ADEA prohibits discrimination based on age (40 and over) with respect to any term, condition, or privilege of employment.

How Robust Intelligence Can Help Address the EEOC’s Regulatory Guidelines

The federal civil rights laws motivating the EEOC’s new initiative on AI and algorithmic fairness are designed to prevent inherent risks from trickling into automated decision-making processes. RIME offers auditable ML deployment that helps eliminate AI failures from ML model pipelines and supports legal compliance:

Stress Testing: 

  • Fairness and bias tests (a minimal sketch follows this list)
  • Model behavior tests
  • Checks that datasets are relevant, representative, and free of errors
  • Enables users to understand and control how a high-risk AI system produces its output
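
To make the idea of a fairness test concrete, the sketch below applies the EEOC’s “four-fifths rule” for adverse impact, under which the selection rate for any protected group should be at least 80% of the highest group’s rate. This is a minimal Python illustration of the kind of check such a test can run; the function names and data are hypothetical, and this is not RIME’s actual test suite or API.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in decisions:
        totals[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_violations(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate, a common proxy for adverse impact."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Hypothetical screening-model outcomes, keyed by a protected attribute.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
print(four_fifths_violations(outcomes))  # {'B': 0.3333...} -> group B is flagged
```

A real test suite would run checks like this across every protected attribute and intersection, on both training data and live predictions.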

AI Firewall: 

  • Allows users to oversee systems in order to minimize potential risks
  • Continuous, in-production testing (see the sketch after this list)
  • Resilience against errors, faults, or inconsistencies
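
To make continuous, in-production testing concrete, here is a minimal sketch of one kind of guard such a firewall can apply: validating each incoming record against the feature ranges observed at training time and flagging outliers before they reach the model. All names here are hypothetical; this is an illustration of the concept, not the actual AI Firewall interface.

```python
def fit_bounds(training_rows):
    """Learn per-feature (min, max) bounds from training data."""
    columns = list(zip(*training_rows))
    return [(min(col), max(col)) for col in columns]

def out_of_range_features(row, bounds, tolerance=0.1):
    """Return indices of features outside the training bounds (with slack)."""
    flagged = []
    for i, (value, (lo, hi)) in enumerate(zip(row, bounds)):
        slack = tolerance * (hi - lo)
        if value < lo - slack or value > hi + slack:
            flagged.append(i)
    return flagged

# Hypothetical two-feature training set and an incoming production record.
bounds = fit_bounds([[1.0, 10.0], [2.0, 12.0], [1.5, 11.0]])
print(out_of_range_features([1.2, 50.0], bounds))  # [1] -> feature 1 is anomalous
```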

Data and Model Registries:

  • Data and model governance and management practices 

Request a demo here if you want to know more about how Robust Intelligence can help!

