Police Are Using AI Surveillance: What It Means for Your Privacy
A Florida sheriff’s recent expansion of AI-powered surveillance has drawn attention to how law enforcement agencies across the country are quietly adopting these tools. The Sarasota County Sheriff’s Office announced plans to add new artificial intelligence capabilities to its existing camera network, raising questions about what data is collected, how it’s used, and what rights residents have.
Here’s a practical look at what’s happening, why it matters for your privacy, and what you can do about it.
What Happened in Sarasota
In early May 2026, the Sarasota Herald-Tribune reported that Sheriff Kurt Hoffman was expanding the department’s use of AI surveillance technology. The system, built on existing public and private cameras, uses machine learning to analyze video feeds in real time. According to the article, the system can flag suspicious behavior, track vehicles, and run facial recognition on individuals. The sheriff’s office says the goal is to deter crime and respond faster to incidents.
The expansion follows a trend: many law enforcement agencies now contract with vendors like Axon, Motorola Solutions, and surveillance startups that offer AI video analytics. Sarasota’s move is not unique, but it’s a concrete example of how quickly this technology is spreading.
How AI Surveillance Tools Work
Most AI police surveillance systems rely on one or more of the following:
- Automated license plate readers (ALPRs) – Cameras that scan plates and log the time, date, and location. Data can be stored for months or years.
- Facial recognition – Software that matches faces against databases of mugshots, driver’s license photos, or other images. Accuracy varies significantly based on lighting, angle, and the demographics of the person being scanned.
- Predictive policing algorithms – Models that analyze historical crime data to forecast where and when crimes might occur. These have been criticized for reinforcing racial bias.
- Behavioral analytics – AI that flags actions like loitering, running, or sudden movements as suspicious. The definitions of “suspicious” are often proprietary and not publicly audited.
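To make the facial recognition accuracy point concrete, here is a minimal, purely illustrative sketch of how such systems typically work under the hood: a face image is reduced to a numeric "embedding," and a match is declared when a similarity score clears a threshold. The embeddings, names, and threshold below are invented for illustration; real systems use far larger vectors and proprietary models. The key takeaway is that the threshold is a tradeoff: lower it and the system catches more true matches but also flags more innocent lookalikes.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embeddings: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Hypothetical embeddings (real systems use hundreds of dimensions).
probe = [0.9, 0.1, 0.3]                    # face seen on a street camera
database = {
    "mugshot_a": [0.88, 0.12, 0.31],       # genuinely the same person
    "mugshot_b": [0.70, 0.40, 0.20],       # an innocent lookalike
}

THRESHOLD = 0.99  # where this is set decides how many false positives occur
for name, emb in database.items():
    score = cosine_similarity(probe, emb)
    print(name, round(score, 3), "MATCH" if score >= THRESHOLD else "no match")
```

Because vendors rarely publish their thresholds or error rates, the public usually cannot evaluate this tradeoff, which is exactly the auditing gap critics point to.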
Police can deploy these tools on fixed cameras (street poles, buildings) and on mobile units like drones or patrol car cameras. The Sarasota system reportedly integrates feeds from private businesses that voluntarily share access.
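The privacy stakes of ALPR logging become clearer with a toy example. The sketch below (field names and data invented for illustration; real systems vary by vendor) shows how individually mundane plate reads, once pooled in a database, reconstruct a person's daily movements with one query.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    """One ALPR hit: which plate, which camera, and when."""
    plate: str
    camera_id: str
    timestamp: datetime

# A toy log; real ALPR databases hold millions of reads retained for months.
log = [
    PlateRead("ABC1234", "cam-03", datetime(2026, 5, 1, 8, 15)),
    PlateRead("XYZ9876", "cam-03", datetime(2026, 5, 1, 8, 16)),
    PlateRead("ABC1234", "cam-17", datetime(2026, 5, 1, 17, 42)),
]

def location_history(log, plate):
    """Every camera and time a given plate was seen."""
    return [(r.camera_id, r.timestamp) for r in log if r.plate == plate]

# Two ordinary reads already form a morning/evening movement pattern.
for camera, when in location_history(log, "ABC1234"):
    print(camera, when)
```

No single camera is doing anything a passerby couldn't; the aggregation and retention are what turn the log into a location-tracking tool.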
Privacy Risks You Should Know
The main concern is that AI surveillance shifts the balance between public safety and privacy. Unlike a human officer who can only watch one feed at a time, an AI system can monitor thousands of cameras simultaneously and retain data indefinitely.
Here are specific risks:
- Mission creep – Systems justified for serious crimes are later used for minor violations or civil matters. For example, an ALPR database built to find stolen cars could also be used to track political protesters.
- False positives – Facial recognition errors disproportionately affect people of color and women. In one widely reported case, a Detroit man was wrongfully arrested in 2020 after facial recognition software misidentified him.
- Lack of oversight – Many departments adopt AI surveillance without clear public policies on data retention, access, or auditing. Contracts with vendors often keep the algorithm’s logic secret.
- Chilling effect – Knowing you’re being watched can change behavior. People may avoid attending public meetings, exercising free speech, or visiting certain neighborhoods.
The American Civil Liberties Union and the Electronic Frontier Foundation have documented dozens of communities where surveillance systems expanded beyond their original purpose. The Sarasota case is still new, so it’s unclear how oversight will work.
Your Rights and How to Protect Yourself
The legal landscape is uneven. U.S. courts have generally held that there is no reasonable expectation of privacy in public spaces, which means police typically need no warrant to record you on a street corner. However, a few states and cities have passed laws restricting the use of facial recognition or requiring audits.
What you can do:
- Know your local laws. Check if your city or county has a surveillance ordinance. Some require police to publish an annual report on what systems they use and how data is handled.
- Opt out where you can. Some private camera networks (like Amazon’s Ring Neighbors) let you request that your home’s footage not be shared with police. Others do not offer that choice.
- Limit your digital footprint. Use privacy-focused search engines, avoid posting location-tagged photos, and consider using a VPN on public Wi-Fi. This won’t stop a street camera from seeing you, but it reduces the data trail you leave online.
- Support local transparency efforts. If your city council debates a new surveillance contract, contact your representatives and ask for public hearings, data retention limits, and independent audits.
- File a record request. You can submit a Freedom of Information Act request (or state equivalent) to ask what surveillance technologies your local police use. The EFF has guides for this.
No method will make you invisible to a camera on a pole, but knowing what data exists and where it goes gives you a better chance of spotting abuse.
What’s Next for AI Policing
Federal legislation on police AI has stalled, so rules will likely come from state and local governments. Some cities (San Francisco, Portland) have banned facial recognition by police. Others have created civilian oversight boards. The outcome in Sarasota may influence neighboring counties.
The key question is whether the benefits of AI surveillance—faster crime response, lower staffing costs—justify the erosion of privacy. For now, the burden is on citizens to stay informed and involved.
Sources
- Sarasota Herald-Tribune: “Sarasota sheriff expands AI-powered surveillance amid privacy concerns” (May 2026)
- American Civil Liberties Union (ACLU): “Surveillance and Privacy” resources
- Electronic Frontier Foundation (EFF): “Street Level Surveillance” guide