As California public agencies increasingly experiment with generative artificial intelligence (“AI”) tools like ChatGPT, a critical question is no longer theoretical: are AI prompts and outputs subject to disclosure under the California Public Records Act (“CPRA”)?
Recent reporting and public records requests suggest the answer may soon be tested in real time—and that agencies should prepare now. Just as importantly, these developments highlight a parallel concern: how AI use may affect public trust in government communications.
Public Records Requests Are Already Targeting AI Use
A recent California Public Records Act request submitted to a California city sought “generative AI records” and information about how employees interact with AI systems. This request reflects a growing interest by researchers and the public in understanding how government agencies are using AI tools in their day-to-day operations.
Although the request’s outcome remains to be seen, it signals a clear trend: requesters are beginning to treat AI-generated content and usage logs as potential public records.
For California agencies, this raises immediate questions about whether:
- AI prompts entered by employees must be retained.
- AI-generated outputs are disclosable.
- Agencies can locate and produce these records if requested.
Real-World Example: AI Records Already Produced Through Public Records Laws
A recent investigation by Cascade PBS into AI use by Washington state cities demonstrates how quickly this issue is evolving.
In that investigation:
- Journalists obtained ChatGPT conversation logs from city officials through public records requests.
- The records showed employees using AI to draft:
  - emails to constituents
  - mayoral letters
  - policy documents
  - speeches and press releases
In some cases, AI-generated language appeared in final communications sent on behalf of public officials.
This reporting is significant for California agencies for two reasons:
- AI prompts and outputs were treated as disclosable records. The cities produced ChatGPT conversation logs in response to records requests.
- AI is being used in outward-facing communications. AI tools are not just internal aids; they are shaping the tone and substance of messages sent to the public.
AI Content May Fall Within the CPRA’s Broad Definition of “Public Records”
Under the CPRA, a “public record” includes any writing relating to the conduct of the public’s business that is prepared, used, or retained by a public agency.
Applied to AI tools, each element of that definition may be satisfied:
- Prompts are “prepared” by employees.
- Outputs are “used” in agency business (e.g., drafting communications or policies).
- Logs or histories may be “retained,” even if stored by a third-party vendor.
The Washington state example demonstrates that, in practice, these materials can be collected and produced—reinforcing the likelihood that similar records could be subject to disclosure under the CPRA in California.
Beyond Compliance: Public Trust and Authenticity Concerns
The Cascade PBS reporting also raises an important public relations and governance issue: when government communications are generated—wholly or partially—by AI, they may appear less authentic to the public.
For example:
- Constituents may assume communications reflect the personal views or voice of an elected official.
- AI-generated messages may feel generic, overly polished, or disconnected from community context.
- Disclosure of AI use through public records requests could lead to public skepticism or criticism.
Even where AI is used appropriately as a drafting tool, agencies should consider how its use might be perceived if:
- ChatGPT logs are later disclosed.
- Drafts show heavy reliance on AI-generated language.
- There is no transparency about AI involvement.
In short, the issue is not only whether AI-generated content is disclosable—but how it will be received once disclosed.
Key Risk: Where Are AI Records Stored?
Unlike traditional documents, AI-generated content may not reside in agency systems. Instead, it may be:
- Stored on third-party platforms.
- Maintained in vendor-controlled logs.
- Not formally saved by the employee at all.
However, as the Washington records requests show, the location of the data does not necessarily prevent disclosure if the agency can access or retrieve it.
This creates practical challenges:
- Does the agency have access to employee AI usage logs?
- Are there retention or deletion policies in place?
- Can the agency respond to a CPRA request for AI records in a timely manner?
Practical Steps for California Public Agencies
Given these developments, agencies should assume that AI-related records may be requested, may be disclosable, and will be scrutinized if produced.
To prepare, agencies should consider:
- Adopting an AI Use Policy: address what employees can input into AI tools and whether prompts and outputs must be saved.
- Considering Transparency Practices: evaluate whether and when to disclose AI assistance in public-facing communications.
- Evaluating Vendor Capabilities: determine whether AI platforms retain user inputs and whether those records can be retrieved.
- Training Employees: make clear that AI use in the workplace may generate public records and may be publicly reviewed.
- Coordinating with IT and Legal: ensure AI usage aligns with records retention schedules and CPRA compliance.
- Limiting Use for Sensitive or High-Profile Communications: exercise caution when using AI for communications that may impact public trust.
Key Takeaways
- Public records requests are already targeting AI usage, including in California.
- Investigations show that ChatGPT prompts and outputs can be produced as public records.
- AI-generated content used in agency business will likely fall within CPRA’s broad definition of “public records.”
- Disclosure of AI use may raise public trust and authenticity concerns.
- Agencies must be prepared not only for compliance, but for public scrutiny of how AI shapes government communications.