Artificial Intelligence

Pregnancy discrimination could be enabled by AI


In June 2023, the Pregnant Workers Fairness Act (PWFA) went into effect. The federal law requires employers to give pregnant employees reasonable accommodations, such as appropriately sized uniforms or closer parking. Another federal law, included in the December 2022 consolidated appropriations bill, now requires employers to provide break times for parents who need to pump breast milk during work hours. It is hard to believe that such basic yet vital protections were only codified in the 2020s.

Yet the reality is that navigating pregnancy and employment remains a significant challenge. While pregnancy discrimination is illegal under federal and state laws, it remains a pervasive and pernicious force in U.S. workplaces. What’s more, instances of pregnancy discrimination are likely to increase in insidious ways with the rise of artificial intelligence and algorithms in employment.

Employers are increasingly turning to AI in their employment decisions. For example, upward of 83% of employers now use AI or other automated tools in their hiring processes. When AI is used in hiring, it carries a significant risk of producing discrimination.

An algorithm is unlikely to be programmed to explicitly look for indications of pregnancy, since doing so would open employers up to violations of existing discrimination laws, and regulators are already on the lookout for such unlawful uses of AI. The risk is subtler. For example, an AI used in hiring could theoretically identify employees with limited absences as good workers. Without constraints, someone who takes sick days for pregnancy symptoms or medical appointments could therefore be classified as a “bad” employee. The Equal Employment Opportunity Commission (EEOC), the government agency in charge of enforcing anti-discrimination laws in employment, recently issued guidance warning employers of potential liability if an algorithm screens out an employee in such an unlawful manner.
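To make the screening concern concrete, here is a minimal, hypothetical sketch of the kind of adverse-impact check an employer could run on an automated screener’s outcomes. It applies the “four-fifths” rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures (a group selected at less than roughly 80% of the rate of the most-selected group is flagged); every group name and count here is invented for illustration and does not describe any real system or the specific guidance discussed above.

```python
# Hypothetical sketch: a basic adverse-impact check on an automated screener's
# outcomes, using the "four-fifths" rule of thumb from the EEOC's Uniform
# Guidelines on Employee Selection Procedures. All groups and counts below are
# invented for illustration only.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (number_selected, number_of_applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes):
    """Flag any group whose selection rate falls below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate, rate / best >= 0.8) for group, rate in rates.items()}

# Made-up results from a resume screener that penalizes absences and gaps.
results = {
    "no_recent_employment_gap": (300, 1000),  # 30% advanced to interview
    "recent_employment_gap": (90, 500),       # 18% advanced to interview
}
print(four_fifths_check(results))
# -> {'no_recent_employment_gap': (0.3, True), 'recent_employment_gap': (0.18, False)}
# 0.18 / 0.30 = 0.6, well below the 0.8 threshold, so applicants with a recent
# employment gap are flagged as experiencing adverse impact.
```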

Yet identifying and eradicating problematic discrimination by AI is no easy task, most notably because of the pernicious possibility of proxy discrimination. If an employer realizes that its hiring AI is systematically screening out pregnant individuals, it could bar the algorithm from considering pregnancy. However, this will not actually solve the problem: the AI will simply find the next best proxy to aid its prediction. Instead of screening applicants for pregnancy, it can look to other indicators, such as recent gaps in employment — seemingly innocuous data points that happen to strongly correlate with pregnancy and parenthood.
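To illustrate the mechanism, here is a minimal synthetic sketch (using scikit-learn, with every feature name, rate, and outcome invented for illustration): even when the pregnancy variable is withheld from the model entirely, training on biased historical outcomes reproduces the disparity through a correlated proxy such as a recent employment gap.

```python
# Synthetic sketch of proxy discrimination: the model never sees a "pregnant"
# feature, yet it still scores pregnant applicants lower because a correlated
# proxy (a recent employment gap) carries the same signal. Every column name,
# rate, and label below is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hidden attribute: roughly 10% of the synthetic applicant pool is pregnant.
pregnant = rng.random(n) < 0.10

# A seemingly innocuous feature that happens to correlate with pregnancy.
employment_gap = np.where(pregnant,
                          rng.random(n) < 0.6,   # gaps common when pregnant
                          rng.random(n) < 0.1)   # rare otherwise

years_experience = rng.normal(8, 3, n)

# Biased historical outcomes: past hiring penalized gaps and absences.
hired = (years_experience + rng.normal(0, 2, n) - 4 * employment_gap) > 6

# Train WITHOUT the pregnancy column -- only the "neutral" features.
X = np.column_stack([years_experience, employment_gap])
model = LogisticRegression(max_iter=1000).fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print("mean score, pregnant applicants:    ", round(scores[pregnant].mean(), 2))
print("mean score, non-pregnant applicants:", round(scores[~pregnant].mean(), 2))
# The score gap persists even though pregnancy was "barred" from the model:
# the employment-gap proxy reproduces the same disparity.
```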

Depending on the breadth of data available to the algorithm, the sky is the limit on what information could be used as a proxy for pregnancy status. I learned this the hard way. When I became pregnant, I tried to hide this fact from advertisers by not disclosing my pregnancy to companies or social media. I bought prenatal vitamins and pregnancy tests with cash and used a VPN and privacy-protecting software for internet searches. The advertisers still found me.

Given the vast reproductive health data ecosystem, it is not difficult to imagine a myriad of data points potentially available to an employer’s hiring algorithm that could be used to proxy for pregnancy, such as following certain profiles on social media or changing shopping habits.

Currently, the vast amount of data linked to reproductive health remains unprotected by U.S. privacy laws, making proxy pregnancy discrimination essentially inevitable.

The dangers of proxy discrimination are by no means confined to pregnancy, nor to the employment context. Other protected traits, such as health status, disability, or race, are vulnerable, too. Other actors, such as insurers or lenders, may also use AI in their decisions in ways that increase the likelihood of proxy discrimination. However, it is important to shine a light on the particular potential harms of pregnancy discrimination, especially when reproductive rights in general are under attack in this country.

Last summer, in Dobbs v. Jackson Women’s Health Organization, the Supreme Court took away the constitutional right to an abortion. Since then, states across the country have severely restricted reproductive autonomy. In a world where an increasing number of individuals may face forced birth, it is important to think about how our society actually treats those who do become pregnant. In his majority opinion in Dobbs, Justice Alito noted that changes in pregnancy discrimination laws and modern attitudes about pregnancy support restrictions on abortion. This short-sighted stance glosses over the realities on the ground. Sadly, threats of pregnancy discrimination have not gone away in the 50 years since Roe v. Wade was decided. Maybe an interviewer can no longer explicitly reject a candidate for being pregnant, but the rise of AI and the erosion of reproductive health privacy only make pregnancy discrimination harder to detect and block.

Anya E.R. Prince is a professor at the University of Iowa College of Law. Her research focuses on the health privacy implications of big data and algorithms, particularly reproductive and genetic privacy.




