The regulation of artificial intelligence is undergoing a momentous transformation in 2025, with a particular focus on consumer-facing applications. Companies using AI chatbots, automated decision systems, and generative technologies must prepare for an increasingly complex and rigorous regulatory landscape.
The year 2025 marks the end of the "Wild West" era of AI development. The European AI Act entered into force on August 1, 2024, with its main provisions phasing in during 2025: AI literacy obligations (along with the first prohibitions on unacceptable-risk practices) took effect on February 2, 2025, while governance rules and obligations for general-purpose AI (GPAI) models became applicable on August 2, 2025.
Emerging regulations are converging on an approach structured around three risk tiers (a sketch after this list illustrates how the tiers might map to internal compliance checklists):
1. Critical Infrastructure AI Systems: Applications in healthcare, transportation, energy, and financial markets now require pre-deployment certification, continuous monitoring, and meaningful human oversight.
2. Consumer-Facing AI Systems: Applications that interact directly with consumers must provide clear communications about AI use, maintain comprehensive audit trails, and implement bias detection protocols.
3. General-Purpose AI Systems: These systems require registration, basic security testing, and disclosure of training methodologies.
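In operational terms, the tier a system falls into drives the compliance workload. As a rough illustration only (the tier names and obligation lists below simply restate the three categories above; they are not drawn from any statute's text), a compliance team might encode the mapping like this:

```python
from enum import Enum

class RiskTier(Enum):
    CRITICAL_INFRASTRUCTURE = "critical_infrastructure"
    CONSUMER_FACING = "consumer_facing"
    GENERAL_PURPOSE = "general_purpose"

# Obligations paraphrased from the three tiers described above; a real
# compliance matrix must be built from the statutes themselves.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.CRITICAL_INFRASTRUCTURE: [
        "pre-deployment certification",
        "continuous monitoring",
        "meaningful human oversight",
    ],
    RiskTier.CONSUMER_FACING: [
        "clear disclosure of AI use",
        "comprehensive audit trails",
        "bias detection protocols",
    ],
    RiskTier.GENERAL_PURPOSE: [
        "registration",
        "basic security testing",
        "disclosure of training methodologies",
    ],
}

def compliance_checklist(tier: RiskTier) -> list[str]:
    """Return the obligations this article associates with a given tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for item in compliance_checklist(RiskTier.CONSUMER_FACING):
        print("-", item)
```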
California Senate Bill 243, introduced by Senator Steve Padilla, was a response to the death of Sewell Setzer, a 14-year-old Florida boy who took his own life after developing an emotional relationship with a chatbot.
SB 243 Key Requirements:
Among its provisions, the legislation creates a private right of action, with recovery of actual damages or statutory damages of $1,000 per violation, whichever is greater.
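The remedy clause is a simple "greater of" rule. A one-function sketch (a hypothetical helper for illustration, not legal advice):

```python
def sb243_damages(actual_damages: float, violations: int) -> float:
    """Greater of actual damages or $1,000 in statutory damages per violation."""
    return max(actual_damages, 1_000.0 * violations)

# e.g., 3 violations with $500 in provable actual damages -> $3,000 statutory
assert sb243_damages(500.0, 3) == 3_000.0
```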
SB 420 aims to provide a regulatory framework ensuring that AI systems respect human rights and promote fairness, transparency, and accountability. The legislation regulates the development and deployment of "high-risk automated decision-making systems" by requiring impact assessments that evaluate purpose, use of data, and potential for bias.
Consumer Notification Obligations: Under SB 420, individuals subject to automated decision-making must be informed when such a tool is used to make decisions about them, must receive details about the system, and, where technically feasible, must be able to appeal those decisions for human review.
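In practice, these notification and appeal rights translate into product plumbing: every automated decision needs a user-visible notice, a description of the system, and a route to a human reviewer. The sketch below is a minimal illustration of that flow under those assumptions; the AutomatedDecision record and request_human_review hook are hypothetical names, not anything defined in SB 420.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecision:
    """Hypothetical record for an SB 420-style automated decision."""
    subject_id: str
    outcome: str
    system_description: str  # the details the consumer is entitled to receive
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_requested: bool = False

def notify_subject(decision: AutomatedDecision) -> str:
    """Build the disclosure that accompanies the decision."""
    return (
        f"This outcome ('{decision.outcome}') was produced by an automated "
        f"decision-making system. System details: {decision.system_description}. "
        "You may appeal this decision for review by a human."
    )

def request_human_review(decision: AutomatedDecision) -> None:
    """Flag the decision for escalation to a human reviewer (routing omitted)."""
    decision.appeal_requested = True

if __name__ == "__main__":
    d = AutomatedDecision("user-123", "application declined",
                          "credit-scoring model, version 4")
    print(notify_subject(d))
    request_human_review(d)
    assert d.appeal_requested
```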
Alabama, Hawaii, Illinois, Maine, and Massachusetts have all introduced bills in 2025 that would make failure to notify consumers when they are interacting with AI chatbots a violation of state Unfair or Deceptive Acts or Practices (UDAP) statutes, exposing companies to Attorney General investigations and potential private actions.
Hawaii (HB 639): Would classify as unfair or deceptive the use of AI chatbots capable of mimicking human behavior without first disclosing this to consumers in a clear and visible manner. Small businesses that unknowingly use AI chatbots are exempt, provided they add clear notifications once made aware.
Illinois (HB 3021): Would amend the Consumer Fraud and Deceptive Business Practices Act to require clear notification when consumers communicate with chatbots, AI agents, or avatars that might lead them to believe they are communicating with a human.
California enacted the first bot disclosure law in 2018 (Cal. Bus. & Prof. Code §§ 17940–17942), requiring disclosure when bots are used to "knowingly deceive" a person for business transactions or electoral influence.
Utah's Artificial Intelligence Policy Act, effective May 1, 2024, requires consumer-facing bots to disclose "on demand" that consumers are interacting with "generative artificial intelligence and not a human."
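Taken together, these statutes imply two distinct disclosure behaviors: an upfront, clearly visible notice at the start of the conversation (the Hawaii/Illinois approach), and an accurate answer whenever a user asks whether they are talking to a human (Utah's "on demand" approach). Below is a minimal sketch of both behaviors; the function names are hypothetical, and the question detector is deliberately naive.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def start_conversation() -> list[str]:
    """Upfront disclosure (Hawaii/Illinois-style): the notice is the first message."""
    return [AI_DISCLOSURE]

# Deliberately naive detector; a real system needs far more robust,
# multilingual detection of "am I talking to a human?"-type questions.
HUMAN_QUESTION_MARKERS = ("are you a bot", "are you human", "are you real",
                          "am i talking to a person")

def respond(user_message: str) -> str:
    lowered = user_message.lower()
    if any(marker in lowered for marker in HUMAN_QUESTION_MARKERS):
        # On-demand disclosure (Utah-style): answer accurately and directly,
        # never letting the model improvise this reply.
        return "No. I am generative artificial intelligence, not a human."
    return generate_reply(user_message)  # hypothetical model call

def generate_reply(user_message: str) -> str:
    return f"(model reply to: {user_message})"

if __name__ == "__main__":
    print(start_conversation()[0])
    print(respond("Wait, are you a bot?"))
```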
In 2022, customers of the weight-loss app Noom sued the company for allegedly violating California's bot disclosure law, claiming that Noom falsely represented that members would receive personalized plans from human coaches when the "coaches" were actually automated bots. The parties reached a settlement worth $56 million.
The FTC has issued guidance urging companies to "be transparent about the nature of the tool users are interacting with" and warning against using automated tools to deceive people.
Under the EU AI Act, from August 2, 2026, providers must inform users when they are interacting with AI unless this is obvious from the context. AI-generated content must be labeled in a machine-readable manner, with a narrow exception for AI that performs only minor, assistive edits to human-created content.
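The Act requires the label to be machine-readable but does not prescribe a single format; emerging provenance standards such as C2PA are one candidate. As a format-agnostic illustration only, a provider could attach a structured provenance record to each generated artifact. The schema below is an assumption for the sketch, not anything mandated by the Act.

```python
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_id: str) -> str:
    """Wrap AI-generated text in a machine-readable provenance record.

    The field names here are illustrative; the AI Act mandates
    machine-readability but not this particular schema.
    """
    record = {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "generator": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

def is_ai_generated(serialized: str) -> bool:
    """Consumer side: detect the label without human inspection."""
    try:
        return json.loads(serialized)["provenance"]["ai_generated"] is True
    except (json.JSONDecodeError, KeyError, TypeError):
        return False

if __name__ == "__main__":
    labeled = label_generated_content("Draft press release...", "example-model-1")
    print(is_ai_generated(labeled))  # True
```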
Even companies that do not consider themselves AI companies may be using chatbots subject to these rules: chatbots are now pervasive in customer service, healthcare, banking, education, marketing, and entertainment.
Companies must navigate a fragmented regulatory landscape with varying requirements across jurisdictions. The lack of federal preemption means that companies must comply with different requirements in different states.
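One common strategy for this fragmentation is to compute a single internal baseline by taking, for each requirement, the strictest value imposed by any state where the company operates. A toy sketch of that merge follows; the state rules and field names are invented for illustration, not real statutory parameters.

```python
# Invented, highly simplified per-state rules, for illustration only.
STATE_RULES = {
    "CA": {"upfront_disclosure": True,  "statutory_damages_per_violation": 1000},
    "UT": {"upfront_disclosure": False, "statutory_damages_per_violation": 0},
    "IL": {"upfront_disclosure": True,  "statutory_damages_per_violation": 0},
}

def strictest_baseline(states: list[str]) -> dict:
    """Merge per-state rules by keeping the most demanding value per requirement."""
    baseline: dict = {}
    for state in states:
        for key, value in STATE_RULES[state].items():
            current = baseline.get(key)
            if current is None:
                baseline[key] = value
            elif isinstance(value, bool):
                baseline[key] = current or value      # "required" wins
            else:
                baseline[key] = max(current, value)   # highest exposure wins
    return baseline

if __name__ == "__main__":
    print(strictest_baseline(["CA", "UT", "IL"]))
    # {'upfront_disclosure': True, 'statutory_damages_per_violation': 1000}
```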
State legislators are considering a diverse range of AI legislation, with hundreds of bills introduced in 2025, including comprehensive consumer protection laws, sector-specific regulations, and chatbot rules.
Organizations that prioritize AI governance will gain a competitive advantage, as proactive compliance is the key to unlocking the full potential of AI while avoiding legal pitfalls.
The regulatory landscape for consumer-facing AI applications is evolving rapidly, with California leading the way through legislation addressing both companion chatbot safety (SB 243) and transparency for broader AI-driven decisions (SB 420).
This patchwork of state-level regulations creates compliance challenges for companies operating in multiple jurisdictions, while the lack of federal preemption means that companies must navigate varying requirements.
The emphasis on transparency, human oversight rights, and protection of vulnerable populations signals a shift toward more prescriptive AI governance that prioritizes consumer protection over innovation flexibility.
Frequently Asked Questions:
What are consumer-facing AI applications?
Consumer-facing AI applications are artificial intelligence systems that interact directly with consumers, including customer service chatbots, virtual assistants, recommendation systems, and conversational AI used in industries such as e-commerce, healthcare, financial services, and entertainment.
What do the new laws require of companies that deploy consumer-facing AI?
The main requirements include:
- Clear and visible disclosure when consumers are interacting with AI rather than a human
- Comprehensive audit trails and bias detection protocols
- Impact assessments for high-risk automated decision-making systems
- Notification and, where technically feasible, human-review appeal rights for individuals subject to automated decisions
Does SB 243 apply to all chatbots?
No, SB 243 specifically applies to "companion chatbots": AI systems with natural language interfaces that provide adaptive, human-like responses and are capable of meeting users' social needs. Not all customer service chatbots necessarily fall under this definition.
What are the penalties for non-compliance?
Penalties vary by state but may include:
- Attorney General investigations and enforcement under state UDAP statutes
- Private lawsuits, with actual or statutory damages (e.g., $1,000 per violation under SB 243)
How should companies prepare?
Companies should:
- Inventory where chatbots and automated decision systems touch consumers
- Implement clear AI disclosures and maintain audit trails
- Develop multi-state compliance strategies that meet the most stringent applicable requirements
Does the EU AI Act apply to companies based outside the EU?
Yes, the AI Act applies to any AI system that serves users in the EU, regardless of where the company is based. From August 2, 2026, providers must inform users when they are interacting with AI unless it is obvious.
How can companies comply with multiple state laws at once?
Companies must comply with the laws of each state in which they operate. There is currently no federal preemption, so multi-state compliance strategies typically target the most stringent applicable requirements.
Are there exemptions for small businesses?
Some regulations provide exemptions or reduced requirements for small businesses. For example, Hawaii HB 639 exempts small businesses that unknowingly use AI chatbots, as long as they comply after receiving proper notification.