By Cynthia Gentile, J.D., SHRM-CP, Faculty Member, Dr. Wallace E. Boston School of Business;
and Ivy Kempf, Attorney, Faculty Member, Peirce College
Employers are now using artificial intelligence (AI) and algorithmic decision-making tools to make hiring decisions. In this episode, APU's Cynthia Gentile talks to Ivy Kempf about how these tools can conflict with the Americans with Disabilities Act.
Read the Transcript:
Cynthia Gentile: Welcome to the podcast. I’m your host, Cynthia Gentile. Today, I’m excited to welcome back Ivy Kempf. Ivy is a professor of legal studies at Peirce College in Philadelphia. And I’m honored to work with you again, Ivy, because I know your background as a litigator and an educator continues to give you unique perspectives on these topics.
Ivy Kempf: Well, thanks, Cyndi. I’m so glad to be here and to work with you, and I’m excited to tackle another topic with you today.
Cynthia Gentile: So, in this episode, we'll continue our discussion of hot-button issues in employment and business law. And I think we should just dive right in today with one of the hottest issues, and that is the use of artificial intelligence (AI) and algorithmic decision-making tools, also known as ADSs, in the hiring process, and how the use of those ADSs often clashes with the Americans with Disabilities Act. So, Ivy, before we get to the Americans with Disabilities Act, can you expand a bit on the kinds of AI or ADSs that are used in an employment setting right now?
Ivy Kempf: Absolutely. So, I think it's commonly known that many employers, particularly large companies, use applicant tracking systems or algorithms that sift through resumes to search for keywords and phrases. What the company is doing is looking to see whether your resume matches the keywords or phrases in the job description, or maybe the company's mission statement, or some other skill sets they're looking for. Then the program scores, or ranks, the resume according to the candidate's fitness for the job, and the algorithm weeds out any resumes with low scores that lack those keywords and phrases. So, I think that's the most common AI that most of us know is currently in use. When we work on our resumes, we're always taught to make sure we're putting in those keywords and phrases.
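To make the resume keyword screen Ivy describes more concrete, here is a minimal, hypothetical sketch of how such a scorer might work. The keywords, weights, and cutoff below are invented for illustration; commercial applicant tracking systems are proprietary and considerably more sophisticated.

```python
# Hypothetical keyword list, weights, and cutoff, invented for illustration only.
JOB_KEYWORDS = {"project management": 2.0, "python": 1.5, "budgeting": 1.0}
CUTOFF = 3.0  # assumed minimum score needed to advance to a human reviewer

def score_resume(resume_text: str) -> float:
    """Sum the weights of the job-description keywords found in the resume."""
    text = resume_text.lower()
    return sum(weight for phrase, weight in JOB_KEYWORDS.items() if phrase in text)

def screen(resumes: dict[str, str]) -> list[str]:
    """Return candidates whose resumes meet the cutoff, ranked by score."""
    scored = {name: score_resume(text) for name, text in resumes.items()}
    passing = [name for name, score in scored.items() if score >= CUTOFF]
    return sorted(passing, key=lambda name: scored[name], reverse=True)

candidates = {
    "Candidate A": "Led project management and budgeting for a Python migration.",
    "Candidate B": "Managed projects and budgets for software rollouts.",
}
print(screen(candidates))  # ['Candidate A'] -- Candidate B is weeded out automatically
```

In this sketch, a candidate who describes the same skills in different words never reaches a human reviewer, which is exactly why applicants are coached to mirror the keywords and phrases in the job description.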
So, I think we all know about that. But in addition to these algorithms, there are some other AI tools currently being used by employers that were, at least, new to me, and they can end up discriminating against people with disabilities. The first one I wanted to talk about is chatbots. I think we've all heard the term, but employers can use these chatbots to invite candidates, through text or email, to engage in a brief text interview with customized screening questions. Those screening questions can be programmed with a simple algorithm that rejects certain applicants. For example, maybe they reject candidates who, during the course of the conversation, indicate that they have significant gaps in their employment history. And while that might seem innocuous, because a gap can be a red flag to any employer, what if that gap was for medical treatment?
Then that person is screened out because of his or her or their disability. Employers can also use interview screening tools through video or audio. In essence, the company asks candidates to record their answers to customized questions, and the software automatically sifts through those recordings to advance candidates who meet the employer's qualifications while rejecting those who do not. But again, this kind of software can also be programmed to analyze certain things. One example might be the applicant's speech patterns: the software can analyze speech patterns to reach conclusions about somebody's ability to problem-solve. And if the applicant has a speech impediment, that will cause him or her or them to score low or unacceptable on the test. And that brings me to the last one I wanted to bring up with you, which is video games.
This one surprised me. Employers are actually customizing video games to measure abilities, personality traits, and other qualifications to assess applicants, and these video games can be programmed to analyze the applicant's memory or logic skills. So, for example, let's assume an applicant who takes one of these video game assessments is visually impaired. They may not be able to play the game at all, or if they can, they're not going to score very well. So that's another way this can be discriminatory. These are just a few examples of how employers are using AI in the onboarding process and how these tools can result in a violation of the Americans with Disabilities Act, which is commonly referred to by its acronym, the ADA. Speaking of which, Cyndi, do you want to tell our listeners a bit more about the ADA and what it requires from employers?
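To make the chatbot screen Ivy mentioned a moment ago more concrete, here is a minimal, hypothetical sketch of a rule that rejects applicants with long employment gaps. The six-month threshold and the sample work history are invented for illustration; real screening tools are proprietary and more complex, but the effect Ivy describes, rejecting someone whose gap was for medical treatment, follows directly from a rule like this.

```python
from datetime import date

# Hypothetical gap threshold, invented for illustration only.
MAX_GAP_MONTHS = 6

def months_between(earlier: date, later: date) -> int:
    """Approximate number of whole months between two dates."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def passes_gap_screen(employment_periods: list[tuple[date, date]]) -> bool:
    """Reject any applicant with a gap between jobs longer than MAX_GAP_MONTHS."""
    periods = sorted(employment_periods)
    for (_, prev_end), (next_start, _) in zip(periods, periods[1:]):
        if months_between(prev_end, next_start) > MAX_GAP_MONTHS:
            return False  # screened out, even if the gap was for medical treatment
    return True

# An applicant who stepped away for roughly a year of medical treatment:
history = [(date(2015, 1, 1), date(2019, 6, 30)), (date(2020, 8, 1), date(2023, 1, 31))]
print(passes_gap_screen(history))  # False -- rejected before any human sees the application
```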
Cynthia Gentile: Sure. So, first I just want to comment that I am surprised to hear about some of those tools and specifically about the ways in which they are being used to make decisions about new hires. So, clarifying the verbiage on those tools is important, and getting a high-level overview of the ADA is also important, because they really do often run afoul of each other. The Americans with Disabilities Act, or the ADA, as you said, is a federal civil rights law. There are many federal civil rights laws, a couple of which we will find come into play with regard to these algorithmic decision-making tools. But the ADA is broken up into five separate sections, and today we're only going to discuss one of those: Title I. Title I of the ADA prohibits employers, employment agencies, labor organizations, and joint labor-management committees with 15 or more employees from discriminating on the basis of a disability.
So, we need to understand how the ADA defines a disability, as that is a critical component of the applicability of the ADA, and it really does come into play as we dig into the AI piece. A disability under the ADA is a physical or mental impairment that substantially limits, or if left untreated would substantially limit, one or more major life activities. It doesn't have to be permanent; it doesn't have to be immediately apparent. And there are a couple of ways that employers can violate the ADA with the use of these algorithmic decision-making tools that I want to highlight. The first is that the employer doesn't provide a reasonable accommodation that's necessary for an applicant or employee to be rated fairly and accurately by the algorithm. I think that one ties back to your comment about those video games: it's pretty obvious that an applicant who is visually impaired is not going to be able to score as highly on a video game.
Another way employers tend to violate the ADA here is when an employer adopts an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA's restrictions on disability-related inquiries and medical examinations. So, this one ties back to your point about gaps in the resume. Of course, there can be many reasons we have a gap in our resume, but if one of those is because an individual was seeking medical treatment, then it's very possible that the algorithmic decision-making tool is violating the ADA's restrictions on employers inquiring about disabilities or requiring medical examinations. And the last one I want to highlight, and probably the one we'll spend the most time with today, is when an employer relies on an algorithmic decision-making tool that, intentionally or unintentionally, screens out an individual with a disability, even though that individual is able to do the job with a reasonable accommodation. So, it's important to note here that employers can violate the ADA even if they're using a third-party software program to perform the screenings.
So, they're not exempt just because they buy a product or contract with a provider instead of making their own tests or video games. Under the Equal Employment Opportunity Commission's guidance, it is ultimately the employer who is responsible for making sure that the systems it uses don't result in discrimination. If an employer is using an algorithmic decision-making tool to assess applicants or its own employees, the ADA requires that the employer provide reasonable accommodations if requested. The applicant or employee doesn't need to reveal the disability to request a reasonable accommodation. Some brief examples of reasonable accommodations might be administering a test orally rather than compelling the use of a keyboard or a mouse, providing a screen reader, or extending time limits for completing assessments. There are other federal civil rights laws that can be violated by the use of these ADSs in the hiring context, and I know we have one case to discuss here, EEOC v. iTutorGroup. Ivy, can you expand a bit on that really interesting case?
Ivy Kempf: Absolutely. In fact, this case was filed only a few months after the EEOC's technical assistance document was released. I was actually able to pull the complaint that was filed by the EEOC in this case, so I'm going to read the allegations made by the EEOC, but it's important to note that these are only allegations. Nothing has been proven yet, as this case is still ongoing. So, let's dive into the complaint a bit. According to the complaint, the defendant, iTutorGroup, Inc., hires tutors from the U.S. and other countries to provide online English-language tutoring to adults and children in China, and it has hired thousands of tutors from the U.S. each year. These tutors teach from remote locations like home, which many of us get to do now, right?
The EEOC alleges that the only qualification applicants needed in order to be hired as a tutor was a bachelor's degree. Paragraph 19 of the complaint states that in March of 2020, the charging party, which is the person who originally filed the complaint with the EEOC, filled out an application using her real birthdate and was immediately rejected. The very next day, she applied again using a more recent birthdate but otherwise identical information. And what happened? She was offered an interview. So, the complaint goes on to allege that iTutorGroup discriminates against older applicants by programming its tutor application software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older. So, there's some more gender discrimination for you, Cyndi.
Cynthia Gentile: So, we have both an age discrimination and a gender discrimination claim.
Ivy Kempf: That's right, a double whammy. Ultimately, the EEOC alleged that iTutorGroup rejected more than 200 other applicants from the U.S. who were aged 55 and over, all of whom had a bachelor's degree or higher. So that's what we know thus far. There is a trial demand, so we will have to wait to see if this case actually goes to trial or if it ends up settling, but that's the latest on the cases related to this topic.
Cynthia Gentile: Let’s take a break. Ivy, thank you for helping me work through this complicated issue.
Ivy Kempf: Now, did you want to get a bit more into some federal law?
Cynthia Gentile: So, I do. I want to talk a bit here about a law that was proposed in April of 2022, entitled the Algorithmic Accountability Act. It has been introduced in both chambers of the U.S. Congress, but it has not passed yet. The proposed legislation directs the FTC, the Federal Trade Commission, to promulgate regulations requiring organizations that use algorithmic decision-making tools to perform impact assessments to ensure equity when critical decisions are being made. Employment decisions are definitely within the realm of critical decisions as envisioned by this act, but there are others as well. Some other critical decisions that have been identified as having a significant effect on consumers' lives are those concerning education, family planning, financial services, healthcare, housing, legal services, and many others. We don't really have a final legislative document here for us to look at, and we don't have any impact studies yet.
But we do know that the law as written would provide at least a first step to ensuring some public reporting and accountability when a company is using algorithms in critical decisions. So, while today we are focused on the use of algorithms in an employment context, I just wanted to bring this up because we are subject to algorithms assisting companies in making key decisions throughout all facets of our lives. It could be reviewing an application for an apartment, reviewing an applicant's financial documents for a loan, or decisions regarding medical care. So at least here, with this law, we potentially have a first step in understanding how those algorithms are used. As we mentioned at the beginning of this conversation, there are so many different tools being employed, and most consumers aren't even aware of what they are, let alone how they are impacting their access to services, employment, education, et cetera. So, this law, if it does pass, really would be a great first step in at least bringing things out into the open and giving us a chance to understand where these algorithms negatively affect access to services.
Ivy Kempf: It would be huge, actually, especially because it's federal, so it would impact so many people. Right now, we're seeing so many statewide and citywide regulations, so having a federal one would be huge.
Cynthia Gentile: Right. And this one really goes beyond the employment context, but it clearly would have an impact there as well. Now, looking specifically at the employment context, there are some really interesting state and local laws that have already been enacted, one of which I would love for you to talk a bit about: what's going on in New York.
Ivy Kempf: Sure. So last year, the New York City Council passed a local law to provide new protections to employees during the hiring and promotion process. This law actually goes into effect very soon, next month, April of 2023*. Employers who use, and this is a key term, automated employment decision tools must now, in New York City, have an independent auditor conduct a bias audit to confirm that these tools are not biased. The employer is required to disclose the data that the AI tool collects by publishing the results of the audit, usually on their website. This new law also requires employers and employment agencies to satisfy two notice requirements, which is particularly novel. The first is that the employer must notify a candidate who resides in New York City that an automated employment decision tool, there's that term again, will be used in assessing the candidate or employee, and what job qualifications and characteristics that tool will use in the assessment.
Now, going back to that phrase, an automated employment decision tool, that's still being more specifically defined by the New York City Council, so we'll find out more about what it actually means come April. So, the first notice requirement, like I said, is that the employer has to notify the candidate that it is going to be using this tool and what job qualifications or characteristics the tool is going to assess. And the employer or employment agency must allow the candidate to request an alternative process or accommodation. So, they do have to allow the candidate to request an accommodation; however, the law is currently silent as to whether they actually have an obligation to provide the accommodation. The second notice requirement is that the employer must disclose on its website, or make available to a candidate within 30 days of receiving a written request, information about the type of data that's collected for the automated employment decision tool, the source of the collected data, and the employer's data retention policy.
So those are the two novel notice requirements in this New York City Council local law. Employers who violate the law will be fined. You might want to know what's going to happen if they don't meet these notice requirements. Well, they will be fined up to $500 for the first violation, and then between $500 and $1,500 for each subsequent violation. The fines are then multiplied by the number of AI tools and the number of days that the employer fails to correct the non-compliance, so it can get pretty pricey for the employer. And of course, there's still civil recourse, like a class action lawsuit for discrimination, available under federal laws, as Cyndi talked about earlier. Now, while this city council law only applies to employers inside New York City, it's still likely to have a pretty broad impact. For one thing, it's hard for larger New York City companies to separate their New York City hiring systems from the hiring systems they use across the country.
So that's something. But we do see other states stepping up to the plate and trying to pass regulations to monitor and regulate AI in the onboarding process. Several states and cities have passed, or are at least considering, similar laws with regard to the use of artificial intelligence and other technology in employment decisions. For example, Illinois has an Artificial Intelligence Video Interview Act, which took effect in January of 2020. It requires employers who use AI interview technology to provide advance notice and an explanation of the technology to applicants, to obtain the applicant's consent to use the technology, and to comply with restrictions on the distribution and retention of those videos. Similarly, Maryland enacted a law that took effect in October of 2020, which requires employers to obtain an applicant's written consent and a waiver prior to using facial recognition technology during pre-employment job interviews.
California and D.C. have also proposed legislation that would address the use of AI in the employment context. And this isn't just happening at the state level; we also need to look globally. Right now, one of the things to keep your eye on and continue to follow is a proposed EU regulation called the AI Act, which is expected to become law late this year or in early 2024 and is likely to impact any organization that operates anywhere in Europe. So, a lot more to come with regard to state regulations and international law on this topic, Cyndi.
Cynthia Gentile: Right, and I think what's really critical here is to tie some of these individual iterations of law and process back to the beginning of this conversation. One of the things that the ADA requires is that an employee or applicant be provided with a reasonable accommodation. However, so many of these assessment tools are secretive, and if we think about it, most places don't generally release their interview questions. Even back in the old way that we interviewed for jobs, you didn't necessarily see those in advance. Well, if you aren't aware that a certain algorithmic assessment tool is being utilized, how can you request an accommodation that might let you be more successful in spite of your disability? So, what we see, and this is also true with that federal law that is still making its way through Congress, is a lot of emphasis on notice and notification. Sometimes those things sound kind of boring and perfunctory, but in practice, they make all the difference. Because if you don't know what to ask for, then how can you ask for what you need?
Ivy Kempf: Absolutely. It’s an excellent point. Nowadays, because of the technology that’s out there, those with disabilities don’t know what kind of an accommodation they even need until they get notice of what’s being used. So, I agree wholeheartedly.
Cynthia Gentile: And to expand just briefly on the point that you made: as each of these states, towns, and cities passes these ordinances, and certainly at the EU level, even though it is a piecemeal approach, it has a broader and more unifying impact, because a company based in New York is not necessarily only hiring people who live in New York. As we know, remote work is incredibly common now. And then also think about the EU: how many of our companies have a parent company based in the EU? So, while federal law in the United States may be lagging a little bit as it makes its way through Congress, we may get a similar outcome from the patchwork of laws in place, or continuing to roll through, at the state, local, and international levels.
Ivy Kempf: A lot more to come, for sure.
Cynthia Gentile: Certainly. And that's a good point for me to note that this really only scratches the surface when it comes to the ways artificial intelligence can run afoul of federal civil rights laws. I hope we can return to this topic as developments allow. I know there will be developments and there's so much more to discuss, so we'll be keeping an eye on it. Ivy, I look forward to tackling our next so-called hot topic in employment law soon.
Ivy Kempf: Me too.
Cynthia Gentile: And to our listeners, thanks for joining. Be well and be safe.
* After the recording of this podcast, the NYC Council delayed the effective date of this ordinance to 7/5/23.