Analysis · 7 min read

Can AI Agents Make Outbound Calls? What the Law Actually Says

AI outbound calls aren't illegal, but they aren't unregulated either. Here's what the FCC, FTC, and state laws actually require if you're using AI voice agents.

Voice agents · Legal · Compliance
By AgentsCrux

If you’re building or buying an AI voice agent for outbound calls, you’ve probably run into conflicting information. Some headlines say the FCC “banned” AI robocalls. Others say AI calling is fine. The truth sits between the two, and the details matter if you’re putting real money behind this.

Here’s where things actually stand, what you need to do to stay compliant, and where the rules are still changing.

The short version

AI outbound calls are legal with consent. They’re illegal without it.

That’s always been the rule for automated calls. What changed in 2024 is that the FCC explicitly confirmed AI-generated voices fall under the same rules. Before that, there was a gray area. Now there isn’t.

What the FCC actually said

On February 8, 2024, the FCC issued a unanimous declaratory ruling on AI-generated voice calls. The ruling didn’t create new law. It clarified that the Telephone Consumer Protection Act (TCPA), which has been on the books since 1991, covers AI voices.

The key line: AI technologies that generate human voices, including voice cloning, are “artificial” under the TCPA. That means any call using an AI-generated voice needs prior express consent from the person receiving the call, unless there’s an emergency or a specific exemption applies.

This was a response to a real problem. Scammers were using AI-cloned voices to impersonate celebrities, politicians, and family members. A fake Biden robocall in New Hampshire during the 2024 primary triggered a lot of the urgency. But the ruling applies to all AI voice calls, not just scam calls.

What counts as consent

This is where it gets specific, and where companies get into trouble.

The TCPA has two levels of consent, depending on what kind of call you’re making:

For non-marketing calls (appointment reminders, order confirmations, service updates): you need “prior express consent.” In practice, this means the person gave you their phone number and a reasonable person would expect to receive calls at that number about the topic. If someone books an appointment and gives you their number, that’s generally enough for a confirmation call, even from an AI.

For telemarketing and sales calls: you need “prior express written consent.” This is a higher bar. The person must have signed (physically or electronically) an agreement that clearly says they’re agreeing to receive marketing calls using automated technology. A terms-of-service checkbox that mentions automated calls usually satisfies this, but the language matters.

One wrinkle: the FCC proposed in August 2024 that consent language should specifically mention AI-generated calls. That rule isn’t finalized yet. But companies that are planning ahead are already updating their consent forms to include that language, because retroactively getting new consent is much harder.
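To make the two tiers concrete, here's a minimal sketch of how a calling system might gate calls by consent level. Everything here is illustrative (the `Contact` record, the `consent_mentions_ai` flag, the `may_call` helper are not from any statute or library); it just encodes the rule that telemarketing needs the higher written tier:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ConsentLevel(Enum):
    NONE = auto()             # no consent on file
    EXPRESS = auto()          # prior express consent (non-marketing calls)
    EXPRESS_WRITTEN = auto()  # prior express written consent (telemarketing)

@dataclass
class Contact:
    phone: str
    consent: ConsentLevel
    consent_mentions_ai: bool  # forward-looking, per the FCC's proposed rule

def may_call(contact: Contact, is_marketing: bool) -> bool:
    """Return True if the consent tier on file permits this call type."""
    required = ConsentLevel.EXPRESS_WRITTEN if is_marketing else ConsentLevel.EXPRESS
    return contact.consent.value >= required.value
```

A contact with only express consent clears a reminder call but not a sales call; written consent clears both.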

The FTC side: Telemarketing Sales Rule

The FCC isn’t the only agency involved. The FTC enforces the Telemarketing Sales Rule (TSR), which covers outbound sales calls.

In March 2024, the FTC updated the TSR and confirmed that AI-enabled calls are treated as “prerecorded messages” under its rules. All TSR provisions apply to AI calls. That means:

  • You must identify yourself and the company at the beginning of the call
  • You must disclose that the call is a sales call
  • You must provide an opt-out mechanism
  • Calling numbers on the Do Not Call Registry without specific consent is a violation
  • B2B calls now also fall under misrepresentation protections

TSR violations carry civil penalties up to $51,744 per violation. That’s separate from TCPA damages.

State laws add another layer

This is the part that catches people off guard. Even if you’re fully compliant with federal law, individual states have their own rules, and they’re moving fast.

California requires AI systems with over one million monthly users to disclose when content is AI-generated (SB 942, effective August 2026). AB 489 specifically prohibits AI from falsely claiming healthcare licenses and requires disclosure when AI communicates with patients.

Utah was the first state to pass a comprehensive AI consumer protection law (UAIPA, May 2024). It originally required disclosure at the start of any AI interaction, but a 2025 amendment narrowed that to cases where a consumer directly asks or during “high-risk” interactions involving health, financial, or biometric data. There’s a safe harbor if you disclose at the outset anyway.

Colorado’s AI Act takes effect June 30, 2026. It requires disclosure when consumers interact with AI chatbots, regardless of whether the AI deployment is “high risk.” It also mandates impact assessments for high-risk AI systems.

Texas enacted the Responsible AI Governance Act (TRAIGA), effective January 1, 2026. It requires healthcare providers to disclose AI use and prohibits AI systems designed to discriminate or incite harm.

Maine passed a Chatbot Disclosure Act in June 2025. If a reasonable consumer can’t tell they’re talking to an AI, the business must disclose it. Enforceable under the Maine Unfair Trade Practices Act.

More states are moving. As of early 2026, Alabama, Hawaii, Virginia, Washington, Arizona, and others have introduced bills requiring AI disclosure in consumer interactions. If you operate across state lines, assume disclosure will be required everywhere within the next year or two.

There’s also a federal wildcard: President Trump signed an executive order in December 2025 attempting to establish a federal AI framework that would preempt conflicting state laws. The Commerce Department was directed to evaluate state AI laws by March 2026 and flag ones that conflict with federal policy. Whether that actually overrides state laws is going to be decided in court, not by executive fiat. For now, state laws remain in force.

The penalty math

TCPA penalties apply per violation, per call. There’s no cap.

  • Standard: $500 per violation
  • Willful violations: up to $1,500 per violation (treble damages)
  • TRACED Act (for intentional robocall violations): additional $10,000 per call
  • FTC TSR violations: up to $51,744 per violation

A campaign of 5,000 unconsented AI calls at the standard rate: $2.5 million. At the willful rate: $7.5 million.
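The arithmetic behind those figures, as a worked example:

```python
# Illustrative TCPA exposure for an unconsented AI-call campaign.
# Per-violation amounts from the statute; no cap on total damages.
calls = 5_000
standard_rate = 500    # $ per violation
willful_rate = 1_500   # $ per violation (treble damages)

standard_exposure = calls * standard_rate  # $2,500,000
willful_exposure = calls * willful_rate    # $7,500,000

print(f"Standard: ${standard_exposure:,}")  # Standard: $2,500,000
print(f"Willful:  ${willful_exposure:,}")   # Willful:  $7,500,000
```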

And about 80% of TCPA lawsuits are class actions. In 2024, TCPA case filings were up 67% year over year, with 2,788 cases filed. One multi-level marketing company lost $925 million in a TCPA class action verdict in 2024, though that included non-AI calls.

These aren’t hypothetical numbers. The TCPA is one of the most litigated consumer protection laws in the US, and plaintiff attorneys actively look for non-compliant campaigns.

What you actually need to do

If you’re deploying AI voice agents for outbound calls, here’s the compliance checklist:

Before you make a single call:

  1. Get proper consent. Written consent for marketing calls. Express consent for non-marketing. Make sure the consent language mentions automated and AI-generated calls.

  2. Check the DNC Registry. Subscribe to the FTC’s Do Not Call Registry and scrub your call lists against it. Subscription costs depend on the number of area codes you need.

  3. Build an opt-out mechanism into every call. The person must be able to say “stop” or press a button to be removed immediately. “Immediately” means within that same call, not a few days later.

  4. Identify yourself and the company at the start of every call. Don’t try to pass the AI off as a human. Even where disclosure isn’t yet legally required, it’s the direction every jurisdiction is moving.
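The pre-call steps above can be sketched as a single gate function. This is an illustrative sketch, not a compliance tool; `dnc_numbers` and `opt_outs` stand in for whatever scrubbed DNC Registry copy and opt-out list your system maintains:

```python
def clear_to_call(phone: str,
                  has_written_consent: bool,
                  is_marketing: bool,
                  dnc_numbers: set[str],
                  opt_outs: set[str]) -> bool:
    """Pre-call gate: honor opt-outs, consent tiers, and the DNC Registry."""
    if phone in opt_outs:
        return False  # prior opt-outs win over everything else
    if is_marketing and not has_written_consent:
        return False  # telemarketing requires prior express written consent
    if is_marketing and phone in dnc_numbers and not has_written_consent:
        return False  # DNC scrub, absent specific consent
    return True
```

In a real system this gate runs before every dial attempt, and the result is logged alongside the consent record.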

On an ongoing basis:

  1. Log everything. Call records, consent records, opt-out records. Keep them for at least four years (the TCPA statute of limitations). The 2024 TSR amendments added new recordkeeping requirements.

  2. Respect time restrictions. Federal law allows calls between 8 AM and 9 PM in the called party’s time zone. Some states are narrower.

  3. Monitor state laws. If you’re calling into California, Utah, Colorado, or Texas, you’re already under state-specific AI disclosure requirements. More states are adding them.

  4. Update consent forms proactively. When the FCC finalizes its proposed AI-specific consent rule, you don’t want to have to re-consent your entire list. Add AI disclosure language now.
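The time-restriction rule (item 2 above) is mechanical enough to sketch, assuming you store each contact's IANA time zone; a stricter state window would simply tighten the bounds:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def within_calling_window(now_utc: datetime, callee_tz: str) -> bool:
    """Check the federal 8 AM-9 PM window in the called party's local time."""
    local = now_utc.astimezone(ZoneInfo(callee_tz))
    return 8 <= local.hour < 21
```

For example, 18:00 UTC on March 1 is 1 PM Eastern (allowed), while 03:00 UTC is 10 PM Eastern the previous evening (blocked).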

Where this is heading

The regulatory direction is clear, even if the timeline isn’t: more disclosure requirements, more consent specificity, and more enforcement.

The FCC’s proposed rules from August 2024 would define “AI-generated call” broadly (any call that uses AI to generate voice or text content) and would require both consent-stage disclosure and in-call disclosure. Comments were due October 2024 and the final rule hasn’t been published yet, but the direction is obvious.

At the state level, chatbot and AI disclosure bills are the fastest-growing category of AI legislation in 2026. Virginia, Washington, Utah, Arizona, Hawaii, and others all have active bills.

The federal preemption question (Trump’s executive order vs. state laws) will play out in courts over the next year. Until it’s resolved, the safe bet is compliance with the strictest applicable state law.

For businesses building AI calling systems: build disclosure into the product from day one. It’s cheaper to add a “This call is powered by AI” disclosure now than to defend a class action later.


This post covers federal and state law as of March 2026. AI calling regulations are changing fast. If you’re deploying AI voice agents at scale, get advice from a telecom attorney who tracks TCPA developments. This is not legal advice.

Frequently Asked Questions

Are AI outbound calls illegal?
No. AI outbound calls are legal if you have prior express consent from the person you're calling. The FCC's February 2024 ruling didn't ban AI calls. It clarified that AI-generated voices count as 'artificial voices' under the TCPA, which means the same consent rules that apply to robocalls also apply to AI calls.
Do I need to tell people they're talking to an AI?
At the federal level, the FCC has proposed (but not yet finalized) a rule requiring in-call disclosure that AI is being used. At the state level, California, Utah, Colorado, and several others already require disclosure when consumers interact with AI. If you operate in multiple states, the safest approach is to always disclose.
What happens if I make AI calls without consent?
TCPA penalties start at $500 per violation and go up to $1,500 for willful violations. There's no cap on total damages. A campaign of 1,000 unconsented calls could cost $500,000 to $1.5 million. About 80% of TCPA cases are class actions, and in 2024 alone, TCPA filings were up 67% year over year.
Can AI agents make outbound sales calls?
Yes, with two conditions: you need prior express written consent from the recipient (not just general consent, but written consent specifically for telemarketing), and you must provide an opt-out mechanism on the call. The Do Not Call Registry rules also apply. If the person is on the DNC list and hasn't given you specific consent, don't call.
What about B2B calls with AI agents?
B2B calls have more flexibility under the TCPA, but the AI voice rules still apply. You still need consent for automated calls to wireless numbers. The FTC's March 2024 TSR amendment also extended misrepresentation protections to B2B calls, so you can't mislead businesses about the nature of the call.
Will AI replace call center agents?
AI handles routine, predictable calls well: appointment confirmations, order status, basic FAQ. But complex calls that need empathy, negotiation, or judgment still go to humans. The trend is augmentation, not replacement. Companies using AI voice agents typically see human agents handling fewer calls overall, but the calls they do handle are higher-value.
