
On November 15, 2024, a federal judge in the Eastern District of Texas blocked the Department of Labor’s (“DOL”) newly issued salary rules for exempt status under the Fair Labor Standards Act (“FLSA”).

The new rules, which took effect July 1, 2024, increased the minimum salary threshold required in order to qualify for overtime-exempt status, thereby increasing the number of employees who are eligible for overtime. The Court’s action on November 15 enjoins, or stops, the DOL rule change nationwide.

Consequently, the salary thresholds for exempt status under the FLSA return to the levels in effect as of June 30, 2024: $684 per week for executive, administrative, and professional (“EAP”) employees or $107,432 per year for highly compensated employees (“HCE”).

What Were the DOL’s 2024 Salary Rules?

The DOL’s 2024 salary rules increased the “standard salary level” minimum for exempt EAP employees from $684 per week ($35,568 per year) to $844 per week ($43,888 per year), effective July 1, 2024. On January 1, 2025, the standard salary level minimum was set to increase again to $1,128 per week ($58,656 per year).

The 2024 salary rules also increased the minimum “total annual compensation” for employees exempt under the HCE exemption from $107,432 per year to $132,964 per year effective July 1, 2024, and to $151,164 per year on January 1, 2025.

The new rule also included an automatic indexing mechanism that would increase the minimum salary levels in the future.

In sum, under the new rules, employers would have to pay higher salaries to employees in order to avoid paying those employees overtime.
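For readers who want to see the comparison arithmetic, below is a minimal Python sketch, for illustration only, of how a weekly salary would stack up against the prior level and the now-enjoined 2024 levels. The figures in the example and the helper function itself are our own hypothetical constructions; this illustrates the salary-level comparison only, and the duties tests and other exemption requirements still apply.

```python
from datetime import date

# Illustrative only: EAP "standard salary level" comparison. The helper and
# figures are hypothetical; the duties tests still apply in all cases.
ENJOINED_2024_LEVELS = [
    (date(2025, 1, 1), 1128.00),  # second step scheduled under the enjoined rule
    (date(2024, 7, 1), 844.00),   # first step under the enjoined rule
]
PRIOR_LEVEL = 684.00              # level in effect as of June 30, 2024

def meets_salary_level(weekly_salary, as_of, apply_enjoined_rule=False):
    """Return True if the weekly salary meets the applicable standard salary level."""
    threshold = PRIOR_LEVEL
    if apply_enjoined_rule:
        for effective, level in ENJOINED_2024_LEVELS:
            if as_of >= effective:
                threshold = level
                break
    return weekly_salary >= threshold

# An $800/week salary satisfies the June 30, 2024 level but not the enjoined rule's level.
print(meets_salary_level(800.00, date(2024, 8, 1)))                            # True
print(meets_salary_level(800.00, date(2024, 8, 1), apply_enjoined_rule=True))  # False
```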

November 15, 2024 Eastern District of Texas Decision

Shortly after the DOL finalized the new rules, various business groups as well as the State of Texas sued, arguing that the new rule exceeded the DOL’s authority.

In its November 15 decision, the Eastern District of Texas agreed, holding that the 2024 salary rule was an unlawful exercise of the DOL’s authority because the salary threshold was set so high and would increase so frequently that the test for exemption would turn on salary only, and not account for an employee’s job duties. As the Court stated of the EAP exemptions, citing the U.S. Supreme Court, “Congress elected to exempt employees based on the capacity in which they are employed. It’s their duties and not their dollars that really matter.” (Emphasis added.)

Accordingly, the court nullified the 2024 salary rules.

The court’s ruling returns the salary thresholds for FLSA exemptions to the levels effective on June 30, 2024:

EAP Standard Salary Level: $684 per week ($35,568 per year)
HCE Total Annual Compensation: $107,432 per year, including at least $684 per week

It is possible that the DOL will appeal the decision to the Fifth Circuit Court of Appeals. However, given the upcoming changes in the Executive Branch with the incoming Trump administration, the DOL may elect not to do so. Further, even if the DOL appealed the decision to the Fifth Circuit, it is likely that the Fifth Circuit would uphold the decision given that court’s composition and conservative predisposition. Thus, for the foreseeable future, the salary thresholds in effect on June 30, 2024 will likely remain in effect and control employee eligibility for overtime.

What Does This Mean For Your Agency?

Given that changes to employee compensation may be warranted at your agency, you should consult with legal counsel and your agency’s labor relations team to consider FLSA overtime implications. LCW attorneys will continue to closely monitor developments in this area of the law and will provide updates as needed.

The case is State of Texas, Plano Chamber of Commerce, et al. v. United States Department of Labor, et al. (E.D. TX) Civil Case No. 4:24-CV-499-SDJ, November 15, 2024.

Liebert Cassidy Whitmore attorneys are closely monitoring developments in relation to this Special Bulletin and are able to advise on the impact this could have on your organization. If you have any questions about this issue, please contact our Los Angeles, San Francisco, Fresno, San Diego, or Sacramento office.

In Ramirez v. City of Indio, the Court of Appeal held that a city manager had the authority to affirm, revoke, or modify the arbitrator’s advisory findings and recommendations under the Memorandum of Understanding’s appeals procedure.  This case confirms that, absent sufficient contractual language to the contrary, when an MOU provides for advisory arbitration, the final decision-maker (e.g., a city manager) has the discretion to evaluate the totality of an employee’s conduct, even when certain allegations are not sustained, and the authority to make final decisions on disciplinary matters, even when an arbitrator recommends a different outcome.

Background

This case arose from the termination of a former police officer at the City of Indio Police Department.

Following an internal affairs investigation and pre-disciplinary conference, the Chief issued a final notice of termination based on the officer’s admission that he drove under the influence of alcohol and a preponderance of the evidence that the officer engaged in dishonest behavior and showed poor judgment that embarrassed the City and the Indio Police Department. The officer appealed his termination through the appeals procedure set forth in the Memorandum of Understanding.

The appeals procedure included advisory arbitration, and the city manager was vested with the power to affirm, revoke, or modify the arbitrator’s advisory findings and recommendations.

The arbitrator overturned the City’s decision to terminate and recommended reinstatement. The arbitrator found that the City failed to prove that the officer drove under the influence of alcohol, that the officer presented credible and uncontradicted evidence that officers are routinely instructed to reset their phones when they turn in their department-issued phones, and that the officer “credibly testified” in explaining differences between his statements during the internal affairs investigation and his testimony at trial.

The city manager, however, issued a final written decision rejecting the arbitrator’s advisory recommendations and upheld the termination. This decision was based on the officer’s poor judgment and conduct unbecoming an officer. Additionally, the city manager disagreed with the arbitrator, and agreed with the Chief’s findings that the City established that the officer drove under the influence, reset his department-issued phone with the intent to prevent discovery of incriminating information, and made dishonest and/or inconsistent statements during the internal affairs investigation and trial.  The city manager also looked to the totality of the conduct, including matters the arbitrator found lacked relevance.

The officer challenged the city manager’s final administrative decision and petitioned the superior court for a writ of mandate directing a reversal of the decision. The officer argued that the city manager failed to defer to the arbitrator’s findings on credibility and weight of evidence. The superior court denied the writ petition.

Court of Appeal

The officer appealed the superior court’s denial, continuing to challenge the procedural fairness of the City’s decision. In a published and precedential decision, however, the Court of Appeal affirmed the superior court’s ruling, focusing on due process and the scope of the city manager’s authority.

1. Due Process

The court found that the disciplinary procedures, including the advisory arbitration, provided the officer with adequate due process. Importantly, the MOU explicitly granted the city manager final decision-making authority. The Court of Appeal emphasized that due process requires a fair hearing and opportunity to respond, not a particular decision-maker.

2. Scope of City Manager’s Authority

The Court rejected the officer’s argument that the city manager was bound by the arbitrator’s factual findings. The Court held that the city manager appropriately based the termination on charged conduct including drunk driving and dishonesty, and the totality of the circumstances. The Court rejected the officer’s argument that the procedures set forth in the City’s particular MOU mandated the city manager pay deference to the arbitrator’s decision.


Case: http://sos.metnews.com/sos.cgi?1024//D082997

Citation: Ramirez v. City of Indio, No. D082997 (Cal. Ct. App. Sep. 13, 2024)

As the landscape of education evolves, colleges are increasingly looking to artificial intelligence (AI) to enhance learning and streamline operations. While the potential benefits of AI are significant, administrators must navigate a range of legal and ethical challenges to ensure that these technologies serve all students effectively and equitably.

Equity and Access: The Digital Divide

One of the foremost concerns surrounding the integration of AI in higher education is equity. Many students, particularly those with fewer resources, may lack access to the necessary technology or reliable internet. Implementing AI tools without addressing these disparities risks deepening the existing educational divide. As student-centered administrators, it is crucial to prioritize strategies that ensure all students can engage with AI-enhanced learning environments. This may involve partnerships with external organizations to provide devices and internet access.

Data Privacy and Ethical Considerations

The use of AI may require extensive data collection on student performance and behavior, which raises significant concerns regarding data privacy. Administrators must navigate the complexities of federal laws like FERPA (the Family Educational Rights and Privacy Act) and state regulations to safeguard student information. Transparency is key; districts should inform students about what data they collect and how they use it. By developing clear privacy practices and adhering to ethical guidelines, districts can foster trust with their students while complying with legal obligations. It is also critical to ensure that your contracts with vendors, software providers, and other technology tools protect student data and comply with state and federal privacy laws.

Job Displacement and Workforce Dynamics

AI’s potential to automate administrative tasks and even aspects of teaching raises concerns about job displacement within colleges. Faculty and staff may resist the adoption of AI technologies due to fears of losing their positions. It is essential to engage in open dialogue with employees and employee representative groups, highlighting how AI can serve as a tool to enhance rather than replace human roles. By focusing on the collaborative potential of AI—such as using it to free up time for faculty to engage in more meaningful interactions with students—colleges can foster a more positive perception of these technologies.

Maintaining Educational Quality

While AI can offer personalized learning experiences, there is a risk that over-reliance on technology might lead to a diluted educational experience. Colleges and districts must ensure that AI systems align with their mission and the diverse needs of their students. This requires careful evaluation of AI tools and their effectiveness in promoting academic success. Administrators should involve faculty in the decision-making process, ensuring that any AI implementation enhances, rather than detracts from, the quality of education.

Training and Resource Allocation

Successful AI integration requires that faculty and staff receive adequate training and support. This can be a resource-intensive process, but it is essential for the effective and safe use of technology. Administrators should allocate budgetary resources not only for AI tools but also for professional development programs to encourage faculty and staff to implement those tools. By equipping educators and staff with the necessary skills, districts can maximize the benefits of AI while minimizing resistance to its adoption.

Conclusion: A Balanced Approach

The integration of AI into the educational environment presents both exciting opportunities and formidable challenges. The key to successful implementation lies in a balanced approach that prioritizes equity, safeguards data privacy, fosters job security, maintains educational quality, and invests in training. By addressing these concerns proactively, your colleges can harness the power of AI to create a more inclusive, efficient, and enriching educational experience for all students.

As community colleges increasingly embrace technology, artificial intelligence (AI) has emerged as a powerful tool for academic search committees. However, as districts integrate AI, they must carefully assess its use to ensure fairness, privacy, transparency, and legal compliance, among other considerations. Below are tips for administrators on the “dos” and “don’ts” of using AI in this essential process.

Dos

1. Do Set Clear Objectives:

Before implementing AI tools, define your objectives clearly. Determine what you hope to achieve by using the tool and in what part of the process you will use it. Having specific goals will help you select the appropriate AI applications.

2. Do Train Committee Members:

Ensure that you have provided all members of the search committee with adequate training on the AI tools you’re using. Understanding how these tools operate, including their limitations, is crucial for interpreting results accurately and making informed decisions.

3. Do Leverage AI to Help Guide Assessment of Candidate Sensitivity to Diverse Student Backgrounds:

Education Code Section 87360 requires that candidates demonstrate a “sensitivity to and understanding of the diverse academic, socioeconomic, cultural, disability, and ethnic backgrounds of community college students.” This requirement applies to academic and administrative positions. We strongly recommend including the requirement for classified hiring as well.

AI tools can help develop questions, demonstration prompts, supplemental questions, model answers, and the like. AI should supplement, not replace, human judgment, allowing hiring committees to make informed decisions that reflect institutional values and the needs of a diverse student population.

4. Do Prioritize Transparency:

Be clear and open about how your district is using AI throughout the hiring process. Communicate clearly with candidates about the role AI plays in the search. This builds trust and mitigates concerns about bias.

5. Do Monitor for Bias:

Regularly assess the AI systems for potential biases in decision-making. Analyze outcomes to identify patterns that may favor or disadvantage specific groups. Addressing bias proactively can help ensure a fair hiring process and compliance with anti-discrimination laws.

6. Do Incorporate Human Oversight and Control:

While AI can enhance efficiency, it cannot replace human judgment in the process. Humans must validate and check the data. Ensure that human evaluators have the final say in candidate progression and selection. Combining AI insights with human expertise can lead to more nuanced and informed decisions.

7. Do Stay Updated on Legal Considerations:

Stay up to date on relevant laws and regulations regarding employment practices and data privacy. Ensure that your use of AI complies with federal and state laws, especially those pertaining to equal employment opportunities. Search committees should consult with HR and IT before utilizing any AI tool or application.

Don’ts

1. Don’t Rely Solely or Primarily on AI:

Human recruiters are essential in providing a human touch and building personal connections. Avoid the temptation to allow AI to dominate the hiring process. While AI can sift through large volumes of applications, human context and judgment are essential in evaluating candidates holistically. AI is also prone to hallucinations, making human contribution, oversight, and review essential.

2. Don’t Ignore Data Privacy or IT Direction:

Be cautious about the data you input into AI systems. Work closely with your District’s IT department to ensure approval to use specific AI applications or tools. Protect candidates’ privacy and ensure that any data collected complies with applicable privacy laws. Do not input sensitive information into AI tools and ensure secure handling of all data.

3. Don’t Use Outdated Algorithms and Processes:

Things are moving fast with AI! The effectiveness of AI tools can diminish over time, as algorithms or processes become outdated. Regularly review and update the AI systems you employ to ensure you are obtaining the desired results and that your processes reflect current best practices.

4. Don’t Overlook Candidate Experience:

Be mindful of how the use of AI may affect the candidate experience. Automated processes can feel impersonal; ensure there are adequate opportunities for candidates to engage with the search committee.

5. Don’t Dismiss Candidate Fit:

AI tools may prioritize specific qualifications, but don’t overlook the importance of institutional fit and soft skills. Consider how candidates align with your District’s values and mission, which AI cannot fully capture.

6. Don’t Neglect Feedback Mechanisms:

Implement mechanisms for feedback on the AI tools used in the hiring process. Encourage search committee members and candidates to share their experiences, which can help refine the AI’s application in future searches.

Conclusion

Integrating AI into search committees in a community college can enhance the efficiency and effectiveness of the hiring process. By following these dos and don’ts, administrators can harness the benefits of AI while promoting impartiality, transparency, and legal compliance. Balancing technology with human oversight can lead to better hiring outcomes and a more inclusive academic environment.

Engaging with an Artificial Intelligence (“AI”) service provider entails navigating a complex legal landscape. To develop a successful partnership, organizations must carefully evaluate legal considerations prior to signing a contract. This article covers the essential best practices for contracting with AI service providers.

1. Conduct Due Diligence

AI companies and the services they provide are relatively new to the business world. Thus, it is imperative to conduct comprehensive due diligence on the AI service provider. This may include reviewing the company’s status on the California Secretary of State website to ensure good standing, reviewing publicly available reviews, asking for references from other customers, determining how long they have been in business and how long the product has been live, and making inquiries into the company’s financial health. Getting a sense of the overall health of the company can be key to avoiding a relationship with an unstable or unreliable entity.

2. Understand Data Protection and Ownership

It is key to understand what type of data will need to be shared with the company in order for it to provide the offered services. Once it is determined what data will be shared, an assessment must be made as to what legal restrictions or protections exist for that data. For example, does the data include employee information, pupil records, or non-public business information? If any confidential information will be shared, the contract must clearly state each party’s obligations regarding compliance with all applicable laws. It should also clearly delineate legal responsibility for a breach of any confidential information.

The contract should also set forth who holds ownership of the data (both input and output). Typically, ownership of data should remain with the entity receiving the AI services. However, AI companies may request to retain aggregated, anonymized data, or data in a form that they can utilize for their own benefit. These provisions should be carefully scrutinized to avoid the AI service provider utilizing the data in a manner that violates any applicable privacy laws or organization policies.

3. Establish Liability Obligations

A contract with an AI provider should require the provider to be liable in the event of a data breach. It should also specify the provider’s specific obligations under data breach laws and notification requirements so it is clear who must take immediate steps in the event of a breach of confidential information. A contract should clearly state the repercussions of any breach of the data obligations or other breach of the contract. It is also prudent to require the company to indemnify your organization against any third-party claims that may arise from its AI services. These provisions are critical to reducing potential fiscal impacts in the event something goes wrong with the services.

4. Anticipate Termination of the Contract

A strong termination provision can provide your organization with significant leverage throughout the term of the services. If drafted correctly, it can allow termination if the AI provider’s services are falling short of expectations, not meeting the desired needs, or if the AI provider is not delivering the services in accordance with the contract. Termination provisions can also protect your organization in the event the AI company abruptly cancels by requiring notice and reimbursement requirements.

Another element to include in a termination provision is how the data will be handled at the conclusion of the relationship. It should articulate any return or destruction of data and the timelines for the action.

Contracting with an AI service provider requires careful consideration of a number of variables. By conducting due diligence, understanding data protection and ownership rights, establishing liability obligations, and anticipating termination, organizations can set themselves up for a successful relationship. This list is not exhaustive, however, and there are other important factors to consider, especially depending on the type of AI services being provided. Engaging legal counsel to assist with reviewing AI service contracts will mitigate risk and ensure the agreement is tailored to the specific needs of your organization.

OpenAI’s launch of ChatGPT nearly two years ago kicked off the rapid integration of artificial intelligence into society’s daily activities. Today, established tech giants and upcoming startups alike are seemingly adding some level of AI to every product. This developing landscape provides employers with possibilities of both increased efficiency and increased liability.

While the technology is new, the potential harms are familiar. In a lawsuit developing in the Northern District of California, Mobley v. Workday, Inc., a plaintiff is suing the HR provider Workday, alleging that its algorithmic decision-making tools screened employment applications in a discriminatory manner. The litigation has focused on whether the plaintiff could bring such a lawsuit in the first place. In this regard, the court recently reasoned, “Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being who is sitting in an office going through resumes manually to decide which to reject. Nothing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one.” At least for this judge, employers must ensure that their AI tools comply with existing employment laws.

The EEOC’s Guidance on AI and Hiring

Absent new laws specifically addressing AI use, regulators aim to address potential AI risks under existing legal frameworks. The Equal Employment Opportunity Commission (“EEOC”) published guidance earlier this year focusing on actions employers may take to monitor their AI tools. The EEOC has taken the position that employers are responsible under Title VII for their use of AI tools even if another entity designed or administered them. The EEOC also noted that employers may be held responsible for the actions of their agents, such as software vendors.

The EEOC specifically focused on employers’ obligations to prevent “disparate impact” or “adverse impact” absent a business necessity. A disparate impact occurs when a selection procedure has the effect of disproportionately screening out a protected classification. As an example, if an existing workforce has a large number of male supervisors, AI software may inappropriately correlate being male with success and favor males for hire and promotion.

As a rule of thumb, the EEOC uses the “four-fifths rule” to determine disproportionate impact. The selection rate of one group is substantially different from the selection rate of another group if the ratio between them is less than four-fifths, or 80%. For example, if a selection procedure selects 30% of Black applicants but 60% of White applicants, the procedure may have a disparate impact on Black applicants. This is because the ratio between the selection rates (30/60 = 1/2) is 50%, which is less than 80%.
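As an illustration only, the short Python sketch below applies the four-fifths rule of thumb to hypothetical screening results. The data, function name, and flagging logic are our own assumptions and are not part of the EEOC guidance; the rule remains a rough screen, not a definitive test of discrimination.

```python
from collections import Counter

def four_fifths_check(outcomes):
    """Apply the EEOC "four-fifths" rule of thumb to (group, selected) outcomes.

    Returns each group's selection rate and whether that rate is less than
    80% of the highest group's rate (a possible indicator of adverse impact)."""
    applicants = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, was_selected in outcomes if was_selected)
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top_rate = max(rates.values())
    return {g: (rate, rate / top_rate < 0.8) for g, rate in rates.items()}

# Hypothetical results: 60% of group A applicants selected vs. 30% of group B.
sample = [("A", True)] * 6 + [("A", False)] * 4 + [("B", True)] * 3 + [("B", False)] * 7
print(four_fifths_check(sample))  # {'A': (0.6, False), 'B': (0.3, True)} -> group B flagged
```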

Analyzing the potential adverse impact of an AI tool is an easy step since it focuses on the output data of the tool, rather than attempting to determine the technical parameters of an algorithm. However, adverse impact is only one form of discrimination, and the “four-fifths” rule is only a general rule of thumb. Employers should still attempt to form other guardrails over AI use.

Indeed, the EEOC’s recent Title VII guidance supplements 2022 guidance on the potential risks of violating the ADA when using AI tools. In that guidance, the EEOC noted areas of concern such as failing to provide a reasonable accommodation to applicants who cannot be fairly rated by automated application procedures or whose responses may reveal a medical restriction.

California’s Proposed Regulations

Late last year, Governor Gavin Newsom signed Executive Order N-12-23. The Executive Order instructed several California agencies to analyze and report on potential risks of AI on governmental functions. It also directed the agencies to establish guidelines ensuring responsible development of AI systems and to prepare the government for AI use.

Significantly, there may be new AI-focused state regulations on the horizon. On May 17, 2024, the Civil Rights Department’s Civil Rights Council (“Council”) noticed its Proposed Regulations to Protect Against Employment Discrimination in Automated Decision-Making Systems. The initial public comment period for the proposed regulations closed on July 18, 2024.

On October 17, 2024, the Council noticed its first modification of the proposed regulations. The comment period for the proposed modifications closes on November 18, 2024. Significantly, the Council is taking the position that an “agent” that utilizes an automated decision-making tool, directly or indirectly, on behalf of an employer to facilitate decision-making traditionally exercised by an employer is also an “employer.” The Council may be relying on the California Supreme Court’s recent holding in Raines v. U.S. Healthworks Medical Group (2023) 15 Cal.5th 268 for this position.  Raines concluded that an employer’s business entity agents could be directly liable under the Fair Employment and Housing Act (“FEHA”) when they carry out FEHA-regulated activities on behalf of an employer.

The regulations also broadly define automated decision systems to mean a “computational process that makes a decision or facilitates human decision making.” The Council initially tried to carve out basic tools like calculators or Excel spreadsheets, but the amended regulations appear to reverse course if those tools facilitate human decision-making. Thus, employers need to have some level of confidence that any calculation or formula used to make employment-related decisions does not create a disparate impact. The proposed regulations note that proof of anti-bias testing or similar proactive efforts to avoid algorithmic discrimination may be relevant evidence to a claim of employment discrimination. However, the Council recently deleted a previously articulated business necessity defense, leaving it to the courts to determine the appropriate nature and scope of that defense (if any).

The Council maintains that the proposed regulations do not impose any new requirements. Instead, it asserts that they are only clarifying how existing regulations apply to AI tools. Both employers and software vendors are likely to test that assertion in court.

The October 17, 2024 modifications reflect that the Council is receptive to some concerns. Particularly, the original proposals would have defined “medical or psychological examinations” to include “personality-based” questions, which include questions that measure optimism/positive attitudes, personal/emotional stability, extroversion/introversion, and “intensity.” The original proposed regulations did not limit the definition to AI-use, nor clearly limit the scope of “personality-based” questions. Thus, an employer could potentially violate the law by asking any pre-offer interview questions that attempt to determine a candidate’s personality in any way. In the modified draft regulations, the Council more plainly defined medical or psychological examinations to “include a test, question, puzzle, game, or other challenge that leads to the identification of a disability.”

AI at Work

Beyond management’s use of AI tools, employers should also be aware of their employees’ use of AI tools for work. More than likely, at least several of any workplace’s employees have used AI tools. As a result of the increasing integration of AI into existing products, employees may have even used AI without realizing it. For example, Google searches may return an “AI Overview” that summarizes several webpages into one result.

In the context of employee use of AI tools, the general risks in using AI apply. One primary concern is accuracy. AI systems may “hallucinate” false information. Even Google’s AI Overview is prone to mistakes. Employers should instruct employees not to rely on AI summaries, and instead to confirm the information by consulting the underlying sources.

Also, agencies often interact with sensitive information from members of the public. For example, employees could use AI tools to draft incident reports or personnel documents. Employers should contemplate specifically whether to allow such use, and if so, employees should receive guidance on how to safely use AI without jeopardizing information security.

Further, agencies must be ever mindful of their obligations under the Public Records Act. A member of the public may argue that “communications” between employees and AI tools are public records, which must be disclosed.

Evolving Scene

Unquestionably, the impact of AI on the employment landscape will continue to develop quickly. It is unclear when or if the Council’s regulations will be implemented, or whether the state legislature (which is actively working on AI-related statutes) will beat them to the punch. What is certain, however, is that employers have an opportunity now to take a hard look at the formulas and software being used to assist with their employment decisions, directly or indirectly through a vendor. Employers should actively question whether anti-bias testing or other proactive methods have been implemented and can be cited as a potential defense, as well as the possibility of indemnity provisions in contracts with software or recruitment vendors.

AI will transform our world in the coming years, and its adoption and utilization will become ubiquitous. Employers must be mindful, however, of the risks associated with AI and ensure they are considering the ways it can be a double-edged sword. LCW continues to monitor these issues with specific attention to how AI will affect California’s public employers.

On October 24, 2024, the U.S. Department of Education (“DOE”) released its AI Toolkit for Safe, Ethical, and Equitable AI Integration for school leaders. This toolkit, primarily targeted toward K-12 school leaders, implements recommendations from the DOE’s Artificial Intelligence (“AI”) Future of Teaching and Learning report (published May 2023). It provides guidance for the effective use and integration of AI in teaching and learning, summarizes key federal laws and considerations for ensuring safe, secure, and non-discriminatory AI use, and promotes the principles of transparency and awareness for schools using AI. The toolkit covers ten key topic areas, or “modules”: 1) opportunities and risks; 2) privacy and data security; 3) civil rights, accessibility, and digital equity; 4) understanding evidence of impact; 5) considering the instructional core; 6) planning an AI strategy; 7) establishing a task force to guide and support AI efforts; 8) building AI literacy for educators; 9) updating AI policies and advocating for responsible use; and 10) developing an organization-wide AI action plan.

The DOE was required to develop this toolkit pursuant to President Biden’s October 30, 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“Executive Order”). The Executive Order was issued to promote the thoughtful, responsible development of AI with appropriate guardrails by developing federal standards for AI use within public institutions and in connection with federal programs. The California Department of Education previously released a resource entitled “Learning With AI, Learning About AI” (https://www.cde.ca.gov/ci/pl/documents/cdeairesourcekit.pdf) in September 2023, providing policy suggestions and guidance for the use of AI in the classroom and the safe use of the technology.

Liebert Cassidy Whitmore attorneys are closely monitoring guidance from the DOE and will provide updates to assist our clients with compliance. We are available to assist our clients in developing policies to promote responsible and ethical use of AI.

Though technological innovation is always happening, in the past year references to a decades-old science fiction term have become almost inescapable: AI, or Artificial Intelligence. Once seen as a watershed moment in research and development, AI is now a buzzword for content creation, software features, and product design. Whether you are on the internet or watching TV, AI seems to be everywhere. But some employers don’t realize that AI is making its way into the workplace, whether it is wanted or not. This means that regulation of this budding technology is inevitable.

On April 29, 2024, the Department of Labor’s Wage and Hour Division joined the fray to provide its analysis of emerging issues, publishing a Field Assistance Bulletin (“Bulletin”) concerning “Artificial Intelligence and Automated Systems in the Workplace under the Fair Labor Standards Act and Other Federal Labor Standards.” The Bulletin discusses how AI can affect the workplace in relation to several federal laws, but dives a bit deeper into how it might interact with the Fair Labor Standards Act (“FLSA”), specifically two FLSA issues: “Hours Worked” and “Calculating Wages Owed under the FLSA.” We think both deserve a little more perspective.

Hours Worked and AI’s Impact on Employee Tracking – The DOL Bulletin

The Bulletin provides an overview of how all “hours worked” must be paid under the FLSA, grouping its discussion into AI’s impact on tracking work time, monitoring break time, waiting time, and employee travel. Each discussion point focuses on how AI now makes it easier and more tempting than ever for employers to track an employee’s work performance.

The FLSA mandates that employers pay covered employees at least the federal minimum wage for every hour worked, as well as overtime pay of one and one-half times their regular rate of pay for each hour worked in excess of 40 hours in a workweek. See 29 U.S.C. §§ 206-207. In all cases, it is the duty of management to exercise control and ensure that work is not being performed when it does not want work to be performed. See 29 C.F.R. § 785.13. That makes enforcing schedules and tracking employee time essential for compliance, and if the employer knows or has reason to believe that work is being performed, it counts as hours worked. See 29 C.F.R. §§ 785.11, 785.12.

With AI-improved employee tracking, it is easier than ever to recognize when employees are working and how much time they are spending on individual tasks. In 2022, Vice published an article exploring Amazon’s detailed warehouse employee tracking system, which used employees’ handheld scanners to discern their actions and which Amazon ultimately used to discipline employees for time spent “off task.” As the Bulletin recognizes, AI and employee monitoring tools are now more widely available and accurate, so that an employer can know precisely when an employee is idle and not performing tasks directly related to their job.

But “hours worked” under the FLSA are not always spent working. Short breaks and waiting time might require payment, even if the employee is not doing any actual work during those periods, depending on the circumstances. See 29 C.F.R. §§ 785.14 (noting that being engaged to wait is compensable time), 785.18 (noting that rest periods of short duration from five to 20 minutes must be counted as hours worked). Though part of the rationale for these rules is that such breaks and waiting time boost productivity and efficiency, the end result is that measurable productivity and efficiency play no role in the calculation of “hours worked.” So though AI and an increasingly digital world are making it easier to track employees and their work performance down to the minute, employers cannot claw back “idle” time.

So what happens if an employer can definitively prove that an employee was not working at a certain time (using advanced AI tracking) and has a written policy prohibiting idle time while working? From an FLSA perspective, not much. With an increase in wearable smart technology, employers now have more tools that collect real-time data that AI could use to classify activities as “work” or “idle” time, regardless of whether employees work desk jobs or are in the field. Whether an activity is excluded from hours worked under the FLSA is a mixed question of fact and law, meaning there is no one-size-fits-all solution and the details matter. But the definition of “hours worked” is not changing to reflect that AI might enable more accurate tracking, so “idle” time remains a performance issue rather than an FLSA one.

One area of the FLSA that AI may significantly impact is the defense of “de minimis” time. The de minimis concept arose because, sometimes, small periods of time spent on work-related tasks could not be accurately recorded for payroll purposes. See 29 C.F.R. § 785.47. De minimis time has been at issue in cases where employees spent time “donning and doffing” protective gear and equipment, uploading data gathered during field work after returning home, and logging into an employer’s time-keeping program. Under the de minimis doctrine, the employer bears the burden of satisfying a three-part test which considers: (1) the regularity of the additional work, (2) the aggregate amount of compensable time, and (3) the practical administrative difficulty of recording the additional time. But with the capability for increasingly accurate employee tracking and monitoring through AI, we may see the erosion of the de minimis doctrine, as situations involving uncertain and indefinite periods of work time become more rare.

So, while the FLSA may eventually change to reflect improved employee tracking abilities, for now the traditional rules on “hours worked” continue to apply.

Payroll and AI

The Bulletin also discusses how AI might impact the calculation of wages under the FLSA (i.e., payroll) due to increased automation. Because AI systems can track and assess in detail the amount or type of work an employee is performing, they can also recalculate and adjust pay rates in real time to reflect when an employee is completing certain tasks. For instance, if an employee earns a premium during work time spent in a certain location, or while working out of class, an AI system might be able to track and log that without relying on an employee or supervisor to manually tag such an adjustment.

Despite the jump in technology, there are a few reasons to be wary about adopting an AI payroll system without close oversight. AI automation can be extremely useful for employers (and employees) in tackling time-heavy and rote tasks, but it can also have unintended consequences when complicated variables are at play. For instance, hiring systems using AI have faced backlash for unintentional discrimination. And during the early days of generative image AI, AI-generated images regularly displayed stereotypical and offensive content that the systems had not yet “learned” was incorrect. Unless you designed it, there is a lot unknown about how a given AI system works – it is a “black box” where inputs go in and outputs come out. AI systems may not properly account for variables, or can have a “bias” built in due to their designers’ own misunderstandings. So there are several reasons why an AI system might fail to address an unforeseen problem that requires a more “human” touch.

AI payroll systems, while hopefully not as prone to discrimination-related mistakes, still pose the danger of oversimplifying or failing to account for legal requirements, especially when their designers do not know what to watch out for. One hallmark of the FLSA is that for non-exempt employees, employers must pay overtime, at one and one-half times their “regular rate of pay,” for any time worked in a workweek in excess of 40 hours. Beyond the difficulties of tracking time, discussed above, the “regular rate of pay” causes multiple complications in the public sector because of the need to comply with local rules, collective bargaining agreements or memoranda of understanding, the FLSA, and more. And because the “regular rate of pay” includes “all remuneration” with certain exclusions, it is not always easy to determine which payments made to employees need to be included. If a payroll vendor is trying to craft a payroll system from the ground up, there are a lot of legal variables that aren’t readily obvious, and quite a few employers are going to have situations where legal advice is needed to ensure proper calculations.
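To make the regular rate arithmetic concrete, the minimal Python sketch below uses hypothetical figures to show how leaving an includable payment, such as a nondiscretionary bonus, out of the regular rate understates the overtime premium. It is an illustration only, not a payroll implementation, and it ignores the FLSA’s statutory exclusions and any MOU-specific rules.

```python
def regular_rate(hours_worked, hourly_wage, includable_pay=0.0):
    """Regular rate = all includable remuneration for the workweek / hours worked."""
    return (hours_worked * hourly_wage + includable_pay) / hours_worked

def weekly_pay(hours_worked, hourly_wage, includable_pay=0.0):
    """Straight-time pay, plus includable pay, plus the 0.5x premium on hours over 40."""
    rate = regular_rate(hours_worked, hourly_wage, includable_pay)
    overtime_hours = max(0.0, hours_worked - 40.0)
    return hours_worked * hourly_wage + includable_pay + 0.5 * rate * overtime_hours

# Hypothetical week: 48 hours at $30/hour with a $120 nondiscretionary bonus.
# Regular rate = (48*30 + 120)/48 = $32.50; premium = 0.5 * 32.50 * 8 = $130.
print(weekly_pay(48, 30.0, includable_pay=120.0))         # 1690.0
# Paying the bonus but omitting it from the regular rate yields only $120 in premium.
print(weekly_pay(48, 30.0, includable_pay=0.0) + 120.0)   # 1680.0 (a $10 underpayment)
```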

So an AI payroll system vendor attempting to build its system without fully understanding or accounting for those changing variables may ultimately produce a payroll system that suffers from the same problems that traditional automated payroll systems face. Even in “man-made” automated payroll systems, we regularly see payroll errors due to human error, such as when a bonus is coded incorrectly and not counted in the regular rate of pay when overtime is due for hours worked, or when certain cash-in-lieu payments are not included in the regular rate of pay. Add to these issues that the designers of AI payroll systems might not know how to properly address both FLSA and MOU problems at the same time, and you can quickly have unintended underpayments (or overpayments) happening with an AI payroll program that provides a quick fix. And even if an AI payroll system were capable of “learning” what variables or documents to consider, and then seeking them out, there is simply no guarantee that the system would be legally accurate without review.

In short, AI is likely to provide additional automation that makes it tempting to adopt quickly, but employers are going to want to double-check the work for accuracy. We heavily recommend, just as we do for all current payroll systems, regular audits and oversight of any payroll system using AI.

Future AI Uses in the FLSA Sphere

AI has the potential to drastically improve the productivity of workers, alter employee monitoring, and quite possibly eliminate the need for certain positions. One possibility is that AI will allow employees to hold multiple jobs, as rote tasks and data analysis become simpler and quicker to perform. If an employee works multiple job classifications for a single employer at different rates of pay, perhaps an AI payroll system could automatically detect and accurately pay the employee the correct rate of pay for the time spent in each classification, as well as calculate the employee’s regular rate of pay to properly pay overtime.
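For example, under a simple weighted-average approach, the regular rate for a week split between two classifications can be computed as total straight-time earnings divided by total hours worked, as in the hypothetical Python sketch below. The numbers are invented, and an applicable MOU or other agreement could call for a different method.

```python
def weighted_average_regular_rate(rate_hours):
    """Weighted-average regular rate: total straight-time earnings / total hours worked."""
    total_hours = sum(h for _, h in rate_hours)
    total_earnings = sum(r * h for r, h in rate_hours)
    return total_earnings / total_hours

# Hypothetical week: 30 hours at $25/hour and 14 hours at $32/hour (44 hours total).
week = [(25.0, 30.0), (32.0, 14.0)]
rate = weighted_average_regular_rate(week)                 # (750 + 448) / 44 ≈ $27.23
premium = 0.5 * rate * (sum(h for _, h in week) - 40.0)    # half-time premium on 4 OT hours
print(round(rate, 2), round(premium, 2))                   # 27.23 54.45
```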

Another possibility is that AI could become widely incorporated into wearable trackers issued to all employees. We already have law enforcement personnel using body-worn cameras; is it a stretch to have all employees wear similar tech that allows constant tracking during the workday? There are already high-tech eyeglasses on the market that allow for video recording without much added bulk, and we anticipate it will not take long for AI to become incorporated. Will the FLSA need to adapt so that there are more protections for “hours worked”? With more advanced AI technology creeping into everything, we might find out the answers to these questions sooner rather than later.