As we usher in 2025, California employers face a range of new legal requirements aimed at enhancing protections against workplace discrimination and improving employee rights. From expanding anti-discrimination protections to refining leave policies, the following legislative updates demand attention. Here’s what employers need to know about Assembly Bills (“AB”) 672, 1815, and 2499, as well as Senate Bills (“SB”) 1100, 1137, and 1340.

1. Expanding Anti-Discrimination Protections

SB 1137: Recognizing Intersectionality
California continues to lead the nation in addressing nuanced forms of discrimination. SB 1137 amends the Fair Employment and Housing Act (“FEHA”) to explicitly recognize “intersectionality” as a basis for discrimination claims. “Intersectionality,” a concept introduced by American civil rights advocate Kimberlé Crenshaw, describes the interconnected nature of various forms of discrimination and inequality, such as race and gender. Rather than viewing them as separate or isolated, the concept emphasizes how they overlap and intersect to create complex and compounded experiences of oppression. This means employers must recognize that employees can experience unique forms of discrimination based on overlapping protected characteristics.

The bill states that the change is declaratory of existing law, but underscores the concept of intersectionality as a concern of the FEHA. It confirms that an individual can experience discrimination under the FEHA based on a combination of protected characteristics (e.g., being of a particular gender and race) where they might not have experienced discrimination based on any single protected characteristic alone.

AB 1815: Hair Discrimination Clarification
Building on the CROWN Act, AB 1815 amends the definition of “race” under FEHA by removing the term “historically,” so that the definition includes traits associated with race, including, but not limited to, hair texture and protective hairstyles such as braids, locs, and twists. This amendment reinforces that grooming or appearance policies targeting such hairstyles can constitute racial discrimination.

SB 1100: Driver’s License Requirements in Job Postings
SB 1100 amends FEHA to make it an unlawful employment practice for an employer to include statements about the need for a driver’s license in job advertisements, postings, applications, and similar employment material unless the following conditions are met:

  1. The employer reasonably expects driving to be one of the job functions for the position.
  2. The employer reasonably believes that using alternate forms of transportation, including but not limited to ride-hailing services, carpooling, taxis, walking, and bicycling, would not be comparable in travel time or cost to the employer. 

2. Strengthening Leave Protections for Employees who are Crime Victims

AB 2499: Safe Time for Crime Victims
Labor Code sections 230 and 230.1 prohibit employers from discharging or discriminating against employees who are victims of crime or abuse. This includes protections for appearing as a witness in court, seeking prescribed relief as a victim, or requesting workplace accommodations for safety. Employers must provide reasonable accommodations to victims of domestic violence, sexual assault, or stalking and cannot retaliate against employees for taking time off to seek medical attention or services related to the crime. Discrimination or retaliation complaints can be filed with the Division of Labor Standards Enforcement (DLSE).

AB 2499 expands and revises these provisions, categorizing them as unlawful employment practices under the Fair Employment and Housing Act (FEHA) and placing enforcement authority with the Civil Rights Department (CRD). Key updates include:

  • Broadened Definition of “Qualifying Act of Violence”: Replaces “crime or abuse” with “qualifying act of violence,” which includes:
    • Domestic violence
    • Sexual assault
    • Stalking
    • Acts causing bodily injury or death
    • Display, use, or threat of a firearm or dangerous weapon
    • Threats or perceived threats to use force causing physical injury or death
  • Expanded Leave Protections: Employees can now use vacation, personal leave, paid sick leave, or compensatory time off to assist family members who are victims of domestic violence, sexual assault, or stalking. Covered family members include parents, children, grandparents, grandchildren, siblings, spouses, domestic partners, or designated persons. Leave can be used to support family members in accessing services, attending court, or addressing other needs arising from these incidents.
  • Employer Notification Requirements: Employers must inform employees of their rights under the law at the following times:
    • Upon hire
    • Annually
    • Upon request
    • When an employee notifies the employer that they or a family member is a victim

AB 2499 strengthens protections for victims and expands employer obligations, ensuring employees can access necessary support without fear of discrimination or retaliation.

3. Promoting Fair Employment Practices

AB 672: Expanding the Role of the Civil Rights Department
Existing law authorizes the Civil Rights Department (“CRD”) to provide assistance to communities in resolving disputes, disagreements, or difficulties relating to discriminatory practices, but only upon the request of an appropriate state or local public body, or upon the request of a person directly affected by a dispute, disagreement, or difficulty.

AB 672 authorizes the CRD to proactively offer its services, rather than having to wait until those services are requested.

SB 1340: Local Enforcement of Discrimination Laws
FEHA makes it unlawful for an employer to engage in specified discriminatory employment practices based on certain protected characteristics. Additionally, existing law expressly authorizes only the CRD to receive, investigate, conciliate, mediate, and prosecute complaints alleging employment discrimination.

SB 1340 allows cities and counties to enforce any local anti-discrimination laws that are as or more stringent than the state’s anti-discrimination laws, subject to certain requirements.

Under SB 1340, any city or county can enforce any local law that prohibits employment discrimination if all four of the following requirements are met:

  1. The local enforcement concerns an employment complaint filed with the CRD;
  2. The local enforcement occurs after the CRD has issued a right-to-sue notice under Government Code section 12965;
  3. The local enforcement commences before the expiration of time to file a civil action specified in the right-to-sue notice;
  4. The local enforcement is pursuant to a local law that is at least as protective as the FEHA.

SB 1340 also tolls the one-year period under existing law to file a civil action when a city or county commences an enforcement action.

Conclusion

California’s 2025 legislative updates reflect the state’s commitment to fostering inclusive and equitable workplaces. By understanding and implementing any necessary changes in response to these laws, employers can not only ensure compliance but also build trust and goodwill with their workforce. For tailored guidance on navigating these legal changes, consult with experienced employment counsel.

Bonuses are a common form of employee compensation that can incentivize and reward performance and help retain high-quality employees. On the flip side, employers must navigate legal risks and challenges to ensure compliance with federal and state laws.

As with any matter that concerns payment of wages, bonuses implicate a plethora of legal subjects. This discussion aims to alert employers of some potential issues, but a full and periodic legal review of an agency’s compensation procedures is always prudent.

Familiar Issues

Employers tend to be familiar with discrimination claims under the Fair Employment and Housing Act and Title VII. Such discrimination may manifest as “disparate treatment” or “disparate impact.”

Disparate Treatment

Disparate treatment is intentional discrimination as to similarly situated individuals based on a protected classification, such as race, sex, gender, religion, national origin, or disability. While it would be difficult to “audit” whether a supervisor is doling out bonuses discriminatorily, the employer can minimize the risk of disparate treatment by establishing clear, objective, and written criteria for an employee’s eligibility for a bonus. The employer should encourage employees to report potential unlawful or perceived unlawful conduct to any member of management. Employers should also require management employees to promptly and appropriately address such complaints or face disciplinary action themselves.

Disparate Impact

Disparate impact discrimination occurs when a facially neutral employment practice disproportionately affects members of a protected class, even with no intent to discriminate. A complainant must show that the practice resulted in a significant adverse effect on a protected class, typically through statistical evidence showing a disproportionate exclusion of members of a protected class.

The EEOC applies a general rule of thumb known as the “four-fifths rule” to determine whether an employment practice has a disproportionate impact. In the context of a bonus, if the average bonus for one group is less than 80% (four-fifths) of the average bonus received by another group, this may indicate a disparate impact.

For example, if a school’s female teachers receive an average bonus of $7,000 and its male teachers receive an average bonus of $10,000, the female teachers are receiving less than 80% of their male counterparts’ bonus compensation:

Ratio = $7,000 / $10,000 = 70%

Employers need not wait for a complaint to determine whether their bonus systems have a disproportionate impact on a protected classification. Agencies can consult with legal counsel to proactively detect potential disparities.
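
For illustration only, the short sketch below shows how such a four-fifths comparison might be scripted across groups; the group labels and bonus figures are hypothetical assumptions mirroring the example above, not data from any actual audit.

```python
# Hypothetical four-fifths ("80%") check on average bonus amounts by group.
# Group labels and bonus figures are illustrative assumptions only.

bonuses = {
    "female teachers": [7000, 6800, 7200],
    "male teachers": [10000, 9800, 10200],
}

# Average bonus per group.
averages = {group: sum(amounts) / len(amounts) for group, amounts in bonuses.items()}
highest = max(averages.values())

for group, avg in averages.items():
    ratio = avg / highest
    # Under the EEOC's rule of thumb, a ratio below 0.8 may warrant review.
    status = "review for disparate impact" if ratio < 0.8 else "within rule of thumb"
    print(f"{group}: average ${avg:,.0f} ({ratio:.0%} of highest) -> {status}")
```

A check like this is only a screening device; it neither establishes nor rules out liability, and counsel should guide any formal analysis.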

The Equal Pay Act (“EPA”)

The federal Equal Pay Act prohibits pay disparity on the basis of sex between employees performing equal work. The California Equal Pay Act similarly mandates equal pay, but with an expanded scope: it applies to substantially similar work and extends beyond sex to race and ethnicity.

Essentially, employees of a protected class cannot be paid at a rate less than the rate paid to employees of a different protected class. Employers may defend against an EPA claim by showing that the disparity is the result of (a) a seniority system; (b) a merit system; (c) a system that measures earnings by quantity or quality of production; or (d) a bona fide factor other than sex, race, or ethnicity, such as education, training, or experience. Notably, employers cannot rely on the employee’s prior salary to excuse the disparity.

How to Spot an EPA Issue

Similar to the disparate impact analysis, employers should also proactively investigate whether their bonus systems have resulted in pay disparity between members of different protected classifications. Generally, members of the same job classification should receive bonuses measured under the same metrics regardless of their protected classification. If an employee within a job classification is performing substantially different work, which thereby results in an apparent pay disparity, the employer should consider moving the employee out of class.

Keep in mind that prior salary is never a defense to an alleged pay disparity. This may become an issue if employees within a job classification receive different bonus rates based on their salary step.

For example, a bonus system could establish that employees at step 1 of the salary scale receive a bonus of 5% for satisfactory performance while employees at step 2 receive a bonus of 7% for the same satisfactory performance; such bonus systems are not inherently unlawful. However, if the system results in pay disparity between protected classifications, the employer cannot defend the disparity by asserting that the employees were simply at different salary steps. The employer must demonstrate different objective criteria, such as a non-discriminatory reason for placing the employees at different salary steps in the first place (e.g., employees with less than 2 years of experience are at step 1 while employees with more than 2 years of experience are at step 2).
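
Purely as an illustrative sketch, that example can be written out with the objective criterion made explicit; the steps, rates, salary figure, and two-year experience threshold are hypothetical assumptions drawn from the example above, not a recommended policy.

```python
# Illustrative step-based bonus calculation with a documented objective criterion.
# Steps, rates, and the 2-year experience threshold are assumptions from the
# example above, not an actual or recommended bonus policy.

BONUS_RATES = {1: 0.05, 2: 0.07}  # step -> bonus rate for satisfactory performance

def salary_step(years_of_experience: float) -> int:
    """Objective, documented criterion: experience alone determines the step."""
    return 1 if years_of_experience < 2 else 2

def satisfactory_bonus(salary: float, years_of_experience: float) -> float:
    step = salary_step(years_of_experience)
    return salary * BONUS_RATES[step]

# Example: two employees with satisfactory performance and the same base salary.
print(satisfactory_bonus(60000, 1.5))  # step 1 -> 3000.0
print(satisfactory_bonus(60000, 3.0))  # step 2 -> 4200.0
```

Writing the criterion down this plainly mirrors the documentation advice that follows: step placement turns on experience, not on prior salary or any protected characteristic.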

To effectively defend potential legitimate, nondiscriminatory disparities in pay between members of protected classifications, employers should endeavor to detail as thoroughly as possible the objective criteria of their bonus system.

Leave a Paper Trail

As noted above, these are just a few of the several issues employers may encounter when providing bonus compensation. Generally speaking, an employer cannot go wrong documenting its legal activities, and this is true with bonuses. Ideally, agencies should document the reason for any employee’s bonus pay based on written procedures for objective evaluations adopted by its governing body. Clear documentation and adherence to established procedures not only ensure compliance but also foster transparency and fairness in bonus compensation practices.

We are excited to continue our video series – Tips from the Table. In these videos, members of LCW’s Labor Relations and Collective Bargaining practice group will provide various tips that can be implemented at your bargaining tables. We hope that you will find these clips informative and helpful in your negotiations.

On November 15, 2024, a federal judge in the Eastern District of Texas blocked the Department of Labor’s (“DOL”) newly issued salary rules for exempt status under the Fair Labor Standards Act (“FLSA”).

The new rules, which took effect July 1, 2024, increased the minimum salary threshold required in order to qualify for overtime-exempt status, thereby increasing the number of employees who are eligible for overtime. The Court’s action on November 15 enjoins, or stops, the DOL rule change nationwide.

Consequently, the salary thresholds for exempt status under the FLSA return to the levels in effect as of June 30, 2024: $684 per week for executive, administrative, and professional (“EAP”) employees or $107,432 per year for highly compensated employees (“HCE”).

What Were the DOL’s 2024 Salary Rules?

The DOL’s 2024 salary rules increased the “standard salary level” minimum for exempt EAP employees from $684 per week ($35,568 per year) to $844 per week ($43,888 per year), effective July 1, 2024. On January 1, 2025, the standard salary level minimum was set to increase again to $1,128 per week ($58,656 per year).

The 2024 salary rules also increased the minimum “total annual compensation” for employees exempt under the HCE exemption from $107,432 per year to $132,964 per year effective July 1, 2024 and to $151,164 per year on January 1, 2025.

The new rule also included an automatic indexing mechanism that would increase the minimum salary levels in the future.

In sum, under the new rules, employers would have to pay higher salaries to employees in order to avoid paying those employees overtime.

November 15, 2024 Eastern District of Texas Decision

Shortly after the DOL finalized the new rules, various business groups as well as the State of Texas sued, arguing that the new rule exceeded the DOL’s authority.

In its November 15 decision, the Eastern District of Texas agreed, holding that the 2024 salary rule was an unlawful exercise of the DOL’s authority because the salary threshold was set so high and would increase so frequently that the test for exemption would turn on salary only, and not account for an employee’s job duties. As the Court stated of the EAP exemptions, citing the U.S. Supreme Court, “Congress elected to exempt employees based on the capacity in which they are employed. It’s their duties and not their dollars that really matter.” (Emphasis added.)

Accordingly, the court nullified the 2024 salary rules.

The court’s ruling returns the salary thresholds for FLSA exemptions to the levels effective on June 30, 2024:

EAP Standard Salary Level: $684 per week ($35,568 per year)
HCE Total Annual Compensation: $107,432 per year, including at least $684 per week

The DOL may appeal the decision to the Fifth Circuit Court of Appeals. However, given the upcoming change in the Executive Branch with the incoming Trump administration, the DOL may elect not to do so. Further, even if the DOL appealed, the Fifth Circuit would likely uphold the decision given that court’s composition and conservative predisposition. Thus, for the foreseeable future, the salary thresholds in effect on June 30, 2024 will likely remain in effect and control employee eligibility for overtime.

What Does This Mean For Your Agency?

Given that changes to employee compensation may be warranted at your agency, you should consult with legal counsel and your agency’s labor relations team to consider FLSA overtime implications. LCW attorneys will continue to closely monitor developments in this area of the law and will provide updates as needed.

The case is State of Texas, Plano Chamber of Commerce, et al. v. United States Department of Labor, et al. (E.D. Tex.), Civil Case No. 4:24-CV-499-SDJ, November 15, 2024.

Liebert Cassidy Whitmore attorneys are closely monitoring developments in relation to this Special Bulletin and are able to advise on the impact this could have on your organization. If you have any questions about this issue, please contact our Los Angeles, San Francisco, Fresno, San Diego, or Sacramento office.

In Ramirez v. City of Indio, the Court of Appeal held that a city manager had the authority to affirm, revoke, or modify the arbitrator’s advisory findings and recommendations under the Memorandum of Understanding’s appeals procedure.  This case confirms that, absent sufficient contractual language to the contrary, when an MOU provides for advisory arbitration, the final decision-maker (e.g., a city manager) has the discretion to evaluate the totality of an employee’s conduct, even when certain allegations are not sustained, and the authority to make final decisions on disciplinary matters, even when an arbitrator recommends a different outcome.

Background

This case arose from the termination of a former police officer at the City of Indio Police Department.

Following an internal affairs investigation and pre-disciplinary conference, the Chief issued a final notice to terminate based on the officer’s admissions that he drove under the influence of alcohol, and the preponderance of evidence that the officer engaged in dishonest behavior and showed poor judgment that embarrassed the City and Indio Police Department. The officer appealed his termination through the appeals procedure set forth in the Memorandum of Understanding.

The appeals procedure included an advisory arbitration, and the city manager was vested with the power to affirm, revoke, or modify the arbitrator’s advisory findings and recommendations.

The arbitrator overturned the City’s decision to terminate, and recommended reinstatement. The arbitrator found that the City failed to prove that the officer drove under the influence of alcohol, the officer presented credible and uncontradicted evidence that officers are routinely instructed to reset their phones when they turn in their department-issued phones, and the officer “credibly testified” in explaining differences in his statements during his internal affairs investigation and his testimony in a trial.  

The city manager, however, issued a final written decision rejecting the arbitrator’s advisory recommendations and upheld the termination. This decision was based on the officer’s poor judgment and conduct unbecoming an officer. Additionally, the city manager disagreed with the arbitrator, and agreed with the Chief’s findings that the City established that the officer drove under the influence, reset his department-issued phone with the intent to prevent discovery of incriminating information, and made dishonest and/or inconsistent statements during the internal affairs investigation and trial.  The city manager also looked to the totality of the conduct, including matters the arbitrator found lacked relevance.

The officer challenged the city manager’s final administrative decision and petitioned the superior court for a writ of mandate directing a reversal of the decision. The officer argued that the city manager failed to defer to the arbitrator’s findings on credibility and weight of evidence. The superior court denied the writ petition.

Court of Appeal

The officer appealed the superior court’s denial, continuing to challenge the procedural fairness of the City’s decision. In a published and precedential decision, however, the Court of Appeal affirmed the superior court’s ruling, focusing on due process and the scope of the city manager’s authority.

1. Due Process

The court found that the disciplinary procedures, including the advisory arbitration, provided the officer with adequate due process. Importantly, the MOU explicitly granted the city manager final decision-making authority. The Court of Appeal emphasized that due process requires a fair hearing and opportunity to respond, not a particular decision-maker.

2. Scope of City Manager’s Authority

The Court rejected the officer’s argument that the city manager was bound by the arbitrator’s factual findings. The Court held that the city manager appropriately based the termination on charged conduct including drunk driving and dishonesty, and the totality of the circumstances. The Court rejected the officer’s argument that the procedures set forth in the City’s particular MOU mandated the city manager pay deference to the arbitrator’s decision.

Case: http://sos.metnews.com/sos.cgi?1024//D082997

Citation: Ramirez v. City of Indio, No. D082997 (Cal. Ct. App. Sep. 13, 2024)

As the landscape of education evolves, colleges are increasingly looking to artificial intelligence (AI) to enhance learning and streamline operations. While the potential benefits of AI are significant, administrators must navigate a range of legal and ethical challenges to ensure that these technologies serve all students effectively and equitably.

Equity and Access: The Digital Divide

One of the foremost concerns surrounding the integration of AI in higher education is equity. Many students, particularly those with fewer resources, may lack access to the necessary technology or reliable internet. Implementing AI tools without addressing these disparities risks deepening the existing educational divide. As student-centered administrators, it is crucial to prioritize strategies that ensure all students can engage with AI-enhanced learning environments. This may involve partnerships with external organizations to provide devices and internet access.

Data Privacy and Ethical Considerations

The use of AI may require extensive data collection on student performance and behavior. This raises significant concerns regarding data privacy. Administrators must navigate the complexities of federal laws like FERPA (the Family Educational Rights and Privacy Act) and state regulations to safeguard student information. Transparency is key; districts should inform students about what data they collect and how they use it. By developing clear privacy practices and adhering to ethical guidelines, districts can foster trust with their students while complying with legal obligations. It is also critical to ensure that contracts with vendors, software providers, and other technology tools protect student data and follow state and federal privacy laws.

Job Displacement and Workforce Dynamics

AI’s potential to automate administrative tasks and even aspects of teaching raises concerns about job displacement within colleges. Faculty and staff may resist the adoption of AI technologies due to fears of losing their positions. It is essential to engage in open dialogue with employees and employee representative groups, highlighting how AI can serve as a tool to enhance rather than replace human roles. By focusing on the collaborative potential of AI—such as using it to free up time for faculty to engage in more meaningful interactions with students—colleges can foster a more positive perception of these technologies.

Maintaining Educational Quality

While AI can offer personalized learning experiences, there is a risk that over-reliance on technology might lead to a diluted educational experience. Colleges and districts must ensure that AI systems align with their mission and the diverse needs of their students. This requires careful evaluation of AI tools and their effectiveness in promoting academic success. Administrators should involve faculty in the decision-making process, ensuring that any AI implementation enhances, rather than detracts from, the quality of education.

Training and Resource Allocation

Successful AI integration requires that faculty and staff receive adequate training and support. This can be a resource-intensive process, but it is essential for the effective and safe use of technology. Administrators should allocate budgetary resources not only for AI tools but also for professional development programs to encourage faculty and staff to implement those tools. By equipping educators and staff with the necessary skills, districts can maximize the benefits of AI while minimizing resistance to its adoption.

Conclusion: A Balanced Approach

The integration of AI into the educational environment presents both exciting opportunities and formidable challenges. The key to successful implementation lies in a balanced approach that prioritizes equity, safeguards data privacy, fosters job security, maintains educational quality, and invests in training. By addressing these concerns proactively, your colleges can harness the power of AI to create a more inclusive, efficient, and enriching educational experience for all students.

As community colleges increasingly embrace technology, artificial intelligence (AI) has emerged as a powerful tool for academic search committees. However, as districts integrate it, they must carefully assess its use to ensure fairness, privacy, transparency, and legal compliance, among other considerations. Below are tips for administrators on the “dos” and “don’ts” of using AI in this essential process.

Dos

1. Do Set Clear Objectives:

Before implementing AI tools, define your objectives clearly. Determine what you hope to achieve by using the tool and in what part of the process you will use it. Having specific goals will help you select the appropriate AI applications.

2. Do Train Committee Members:

Ensure that you have provided all members of the search committee with adequate training on the AI tools you’re using. Understanding how these tools operate, including their limitations, is crucial for interpreting results accurately and making informed decisions.

3. Do Leverage AI to Help Guide Assessment of Candidate Sensitivity to Diverse Student Backgrounds:

Education Code Section 87360 requires that candidates demonstrate a “sensitivity to and understanding of the diverse academic, socioeconomic, cultural, disability, and ethnic backgrounds of community college students.” This requirement applies to academic and administrative positions. We strongly recommend including the requirement for classified hiring as well.

AI tools can help develop questions, demonstration prompts, supplemental questions, model answers, and the like. AI should supplement, not replace, human judgment, allowing hiring committees to make informed decisions that reflect institutional values and the needs of a diverse student population.

4. Do Prioritize Transparency:

Be clear and open about how your district is using AI throughout the hiring process. Communicate clearly with candidates about the role AI plays in the search. This builds trust and mitigates concerns about bias.

5. Do Monitor for Bias:

Regularly assess the AI systems for potential biases in decision-making. Analyze outcomes to identify patterns that may favor or disadvantage specific groups. Addressing bias proactively can help ensure a fair hiring process and compliance with anti-discrimination laws.

6. Do Incorporate Human Oversight and Control:

While AI can enhance efficiency, it cannot replace human judgment in the process. Humans must validate and check the data. Ensure that human evaluators have the final say in candidate progression and selection. Combining AI insights with human expertise can lead to more nuanced and informed decisions.

7. Do Stay Updated on Legal Considerations:

Stay up to date on relevant laws and regulations regarding employment practices and data privacy. Ensure that your use of AI complies with federal and state laws, especially those pertaining to equal employment opportunities. Search committees should consult with HR and IT before utilizing any AI tool or application.

Don’ts

1. Don’t Rely Solely or Primarily on AI:

Human recruiters are essential in providing a human touch and building personal connections. Avoid the temptation to allow AI to dominate the hiring process. While AI can sift through large volumes of applications, human context and judgment are indispensable in evaluating candidates holistically. AI is also prone to hallucinations, making human contribution, oversight, and review essential.

2. Don’t Ignore Data Privacy or IT Direction:

Be cautious about the data you input into AI systems. Work closely with your District’s IT department to ensure approval to use specific AI applications or tools. Protect candidates’ privacy and ensure that any data collected complies with applicable privacy laws. Do not input sensitive information into AI tools and ensure secure handling of all data.

3. Don’t Use Outdated Algorithms and Processes:

Things are moving fast with AI! The effectiveness of AI tools can diminish over time, as algorithms or processes become outdated. Regularly review and update the AI systems you employ to ensure you are obtaining the desired results and that your processes reflect current best practices.

4. Don’t Overlook Candidate Experience:

Be mindful of how the use of AI may affect the candidate experience. Automated processes can feel impersonal; ensure there are adequate opportunities for candidates to engage with the search committee.

5. Don’t Dismiss Candidate Fit:

AI tools may prioritize specific qualifications, but don’t overlook the importance of institutional fit and soft skills. Consider how candidates align with your District’s values and mission, which AI cannot fully capture.

6. Don’t Neglect Feedback Mechanisms:

Implement mechanisms for feedback on the AI tools used in the hiring process. Encourage search committee members and candidates to share their experiences, which can help refine the AI’s application in future searches.

Conclusion

Integrating AI into search committees at a community college can enhance the efficiency and effectiveness of the hiring process. By following these dos and don’ts, administrators can harness the benefits of AI while promoting impartiality, transparency, and legal compliance. Balancing technology with human oversight can lead to better hiring outcomes and a more inclusive academic environment.

Engaging with an Artificial Intelligence (“AI”) service provider entails navigating a complex legal landscape. To develop a successful partnership, your organization must carefully address key legal considerations before signing a contract. This article outlines essential best practices for contracting with AI service providers.

  1. Conduct Due Diligence

AI companies and their services are relatively new in the business world. Your organization must conduct comprehensive due diligence on the AI service provider. Review the company’s status on the California Secretary of State website to confirm good standing, examine publicly available reviews, request references from other customers, and evaluate how long the company has been in business and how long the product has been operational. Investigate the company’s financial health to understand its overall stability and reliability. By assessing these factors, your organization can avoid partnering with an unstable or unreliable provider.

  2. Understand Data Protection and Ownership

Your organization needs to identify the types of data it will share with the AI provider in order for the provider to deliver the offered services. After determining what data will be shared, assess the legal restrictions or protections associated with that data. For example, consider whether the data includes employee information, pupil records, or non-public business information. If your organization shares any confidential information, the contract must clearly define each party’s responsibilities for complying with applicable laws. Additionally, the contract should specify who holds legal responsibility for a breach of confidential information.

The contract should also explicitly state who holds ownership of the data (both input and output). Typically, your organization should retain ownership of its data. However, AI providers may request rights to aggregated anonymized data or data in a form that they can use for their own purposes. Scrutinize these provisions carefully to ensure the AI service provider does not use the data in ways that violate privacy laws or your organization’s policies.

  3. Establish Liability Obligations

Require the AI provider to assume liability in the event of a data breach. Specify their obligations under data breach laws and notification requirements to ensure clear accountability for immediate actions following a breach of confidential information. The contract should clearly state the consequences of any breach of the data obligations or contract violations. Include indemnification provisions that hold the AI provider accountable for any third-party claims arising from their AI services. These provisions will help your organization mitigate potential fiscal risks in the event issues arise with the services.

  4. Anticipate Termination of the Contract

Draft a strong termination provision to give your organization significant leverage throughout the contract term. Ensure the clause allows for termination if the AI provider’s services fall short of expectations, fail to meet your needs, or do not comply with the contract requirements. Protect your organization in the event the AI provider abruptly cancels by including notice and reimbursement requirements.

Clearly define how the AI provider must handle data at the conclusion of the relationship. Specify the return or destruction of data and establish timelines for completing these actions.

Contracting with an AI service provider requires careful attention to multiple factors. By conducting due diligence, understanding data protection and ownership rights, establishing liability obligations, and anticipating contract termination, your organization can set itself up for a successful partnership. Although this list is not exhaustive, it highlights critical considerations for evaluating AI service contracts. Engaging legal counsel to review the agreement will further reduce risks and ensure the contract is tailored to the specific needs of your organization.

OpenAI’s launch of ChatGPT nearly two years ago kicked off the rapid integration of artificial intelligence into society’s daily activities. Today, established tech giants and startups alike are seemingly adding some level of AI to every product. This developing landscape presents employers with the possibility of both increased efficiency and increased liability.

While the technology is new, the potential harms are familiar. In a lawsuit developing in the Northern District of California, Mobley v. Workday, Inc., a plaintiff is suing the HR provider Workday, alleging that its algorithmic decision-making tools screened employment applications in a discriminatory manner. The litigation has focused on whether the plaintiff could bring such a lawsuit in the first place. In this regard, the court recently reasoned, “Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being who is sitting in an office going through resumes manually to decide which to reject. Nothing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one.” At least for this judge, employers must ensure that their AI tools comply with existing employment laws.

The EEOC’s Guidance on AI and Hiring

Absent new laws specifically addressing AI use, regulators aim to address potential AI risks under existing legal frameworks. The Equal Employment Opportunity Commission (“EEOC”) published guidance earlier this year focusing on actions employers may take to monitor their AI tools. The EEOC has taken the position that employers are responsible under Title VII for their use of AI tools even if another entity designed or administered them. The EEOC also noted that employers may be held responsible for the actions of their agents, such as software vendors.

The EEOC specifically focused on employers’ obligations to prevent “disparate impact” or “adverse impact” absent a business necessity. A disparate impact occurs when a selection procedure has the effect of disproportionately screening out a protected classification. As an example, if an existing workforce has a large number of male supervisors, AI software may inappropriately correlate being male with success and favor males for hire and promotion.

As a rule of thumb, the EEOC uses the “four-fifths rule” to determine disproportionate impact. The selection rate of one group is substantially different from the selection rate of another group if their ratio is less than four-fifths, or 80%. For example, if a selection procedure selects 30% of Black applicants but 60% of White applicants, the procedure may have a disparate impact on Black applicants. This is because the ratio between the selection rates (30% ÷ 60% = 1/2) is 50%, which is less than 80%.

Analyzing the potential adverse impact of an AI tool is an easy step since it focuses on the output data of the tool, rather than attempting to determine the technical parameters of an algorithm. However, adverse impact is only one form of discrimination, and the “four-fifths” rule is only a general rule of thumb. Employers should still attempt to form other guardrails over AI use.
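
As a hedged illustration of that output-focused analysis, the sketch below computes selection rates from applicant and selection counts and applies the four-fifths comparison; the counts mirror the percentages in the example above and are assumptions, not real hiring data.

```python
# Hypothetical selection-rate check under the EEOC's four-fifths rule of thumb.
# Applicant and selection counts are illustrative assumptions only.

applicants = {"Black": 40, "White": 80}   # applicants per group
selected = {"Black": 12, "White": 48}     # applicants advanced by the AI tool

# Selection rate per group (selected / applicants).
rates = {group: selected[group] / applicants[group] for group in applicants}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    # A ratio under 0.8 suggests the tool's output warrants closer review.
    status = "possible disparate impact" if ratio < 0.8 else "within rule of thumb"
    print(f"{group}: selection rate {rate:.0%} ({ratio:.0%} of highest) -> {status}")
```

Because this check uses only the tool’s outputs, it can be run without access to the vendor’s algorithm, though it remains a rule of thumb rather than a legal conclusion.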

Indeed, the EEOC’s recent Title VII guidance supplements a 2022 guidance on the potential risks of violating the ADA through AI tools. In that guidance, the EEOC noted areas of concern, such as failing to provide a reasonable accommodation to applicants who cannot be fairly rated by automated application procedures or who may reveal a medical restriction.

California’s Proposed Regulations

Late last year, Governor Gavin Newsom signed Executive Order N-12-23. The Executive Order instructed several California agencies to analyze and report on potential risks of AI on governmental functions. It also directed the agencies to establish guidelines ensuring responsible development of AI systems and to prepare the government for AI use.

Significantly, there may be new AI-focused state regulations on the horizon. On May 17, 2024, the Civil Rights Department’s Civil Rights Council (“Council”) noticed its Proposed Regulations to Protect Against Employment Discrimination in Automated Decision-Making Systems. The initial public comment period for the proposed regulations closed on July 18, 2024.

On October 17, 2024, the Council noticed its first modification of the proposed regulations. The comment period for the proposed modifications closes on November 18, 2024. Significantly, the Council is taking the position that an “agent” that utilizes an automated decision-making tool, directly or indirectly, on behalf of an employer to facilitate decision-making traditionally exercised by an employer is also an “employer.” The Council may be relying on the California Supreme Court’s recent holding in Raines v. U.S. Healthworks Medical Group (2023) 15 Cal.5th 268 for this position.  Raines concluded that an employer’s business entity agents could be directly liable under the Fair Employment and Housing Act (“FEHA”) when they carry out FEHA-regulated activities on behalf of an employer.

The regulations also broadly define automated decision systems to mean a “computational process that makes a decision or facilitates human decision making.” The Council initially tried to carve out basic tools like calculators or Excel spreadsheets, but the amended regulations appear to reverse course if those tools facilitate human decision-making. Thus, employers need to have some level of confidence that any calculation or formula used to make employment-related decisions does not create a disparate impact. The proposed regulations note that proof of anti-bias testing or similar proactive efforts to avoid algorithmic discrimination may be relevant evidence to a claim of employment discrimination. However, the Council recently deleted a previously articulated business necessity defense, leaving it to the courts to determine the appropriate nature and scope of that defense (if any).

The Council maintains that the proposed regulations do not impose any new requirements. Instead, it asserts that they are only clarifying how existing regulations apply to AI tools. Both employers and software vendors are likely to test that assertion in court.

The October 17, 2024 modifications reflect that the Council is receptive to some concerns. Particularly, the original proposals would have defined “medical or psychological examinations” to include “personality-based” questions, which include questions that measure optimism/positive attitudes, personal/emotional stability, extroversion/introversion, and “intensity.” The original proposed regulations did not limit the definition to AI use, nor did they clearly limit the scope of “personality-based” questions. Thus, an employer could potentially have violated the law by asking any pre-offer interview questions that attempt to gauge a candidate’s personality in any way. In the modified draft regulations, the Council more plainly defined medical or psychological examinations to “include a test, question, puzzle, game, or other challenge that leads to the identification of a disability.”

AI at Work

Beyond management’s use of AI tools, employers should also be aware of their employees’ use of AI tools for work. More than likely, at least several of any workplace’s employees have used AI tools. As a result of the increasing integration of AI into existing products, employees may have even used AI without realizing it. For example, Google searches may return an “AI Overview,” which summarizes several webpages into one result.

In the context of employee use of AI tools, the general risks of using AI apply. One primary concern is accuracy. AI systems may “hallucinate” false information. Even Google’s AI Overview is prone to mistakes. Employers should instruct employees not to rely on AI summaries, and instead to confirm the information by visiting the underlying sources.

Also, agencies often interact with sensitive information from members of the public. For example, employees could use AI tools to draft incident reports or personnel documents. Employers should contemplate specifically whether to allow such use, and if so, employees should receive guidance on how to safely use AI without jeopardizing information security.

Further, agencies must be ever mindful of their obligations under the Public Records Act. A member of the public may argue that “communications” between employees and AI tools are public records, which must be disclosed.

Evolving Scene

Unquestionably, the impact of AI on the employment landscape will continue to develop quickly. It is unclear when or if the Council’s regulations will be implemented, or whether the state legislature (which is actively working on AI-related statutes) will beat them to the punch. What is certain, however, is that employers have an opportunity now to take a hard look at the formulas and software being used to assist with their employment decisions, directly or indirectly through a vendor. Employers should actively question whether anti-bias testing or other proactive methods have been implemented and can be cited as a potential defense, as well as the possibility of indemnity provisions in contracts with software or recruitment vendors.

AI will transform our world in the coming years, and its adoption and utilization will become ubiquitous. Employers must be mindful, however, of the risks associated with AI and ensure they are considering the ways it can be a double-edged sword. LCW continues to monitor these issues with specific attention to how AI will affect California’s public employers.