
Supreme Court geofencing case weighs constitutionality of digital dragnets – and how far your rights go in the data Big Tech collects on you

Police got cellphone data for many people who happened to be in this area near the time of a bank robbery. AP Photo/Steve Helber

Google tracks the vast majority of cellphones in the United States, collecting your location, usage and device data through installed software and apps. The tracking occurs through various automated processes that you cannot see or stop, even when you turn off location history, and Google and other companies keep that data for years. Outside of your control and wherever you go, your cellphone continuously creates a durable and revealing digital trail, and law enforcement agencies can get warrants to obtain it.

But some of those warrants aren’t looking for data about a specific person. Instead, police are compelling tech companies to reveal every cellphone in a particular area during certain time periods. Called geofence warrants, their use is at the heart of a case before the U.S. Supreme Court that will determine what the Fourth Amendment’s protections against unreasonable search and seizure mean in the digital age.

The Supreme Court case Chatrie v. United States involves the hunt for a suspect in an armed bank robbery in busy Midlothian, Virginia, in May 2019, and how police settled on a man named Okello Chatrie as the perpetrator.

Detective Joshua Hylton was granted a geofence warrant that compelled Google to search its database and identify every cellphone in a 17½-acre area around the bank, including private residences and a church, for a period of two hours. Working closely with Google, police ultimately narrowed in on Chatrie. When the trial court denied Chatrie’s motion to suppress the geofence-derived evidence, Chatrie appealed.

The Supreme Court will decide if, when and how law enforcement can use geofences. It matters because all cellphone-carrying people can end up in tomorrow’s geofence, like all those who were unknowingly grabbed in the Chatrie search. Nearly all users are unaware of these fences. No one specifically consents to be included in them, but people have no choice. What happened in the Chatrie case would have been impossible but for advances in location tracking technology and advanced AI systems.

As a privacy, electronic surveillance and tech law attorney, author and legal educator, I have spent years researching, writing, educating and advising about these kinds of privacy and legal issues, and my books on electronic surveillance and evidence are routinely cited and relied upon by courts grappling with these issues.

A customer walks out of a credit union in Virginia where a robbery in 2019 set in motion events that led to a Supreme Court case. AP Photo/Steve Helber

How geofences work

Geofences are part of modern life. By carrying your smartphone and other devices, you generate location and other device activity data. That data is collected, stored, analyzed, and bought and sold by multiple companies. The location history data being collected about you is what makes geofences possible, and it is comprehensive and precise.

Location history relies on a variety of data sources, including cell tower location, cellphone data such as connections to Wi-Fi networks and Bluetooth sources, and cellular data sent via cell towers. This means the communications you received and sent and the apps you used can be swept up in a geofence.

Advanced AI technologies analyze that data to discern increasing amounts of personal and behavioral data – insights about people, groups and activities – that can be used for a variety of purposes, including targeted advertising. Your rich location history and device data get snatched up regularly in such fences by private companies; your present and past self travels through them constantly.

A geofence can be in real time, for instance to identify and track who is at a protest, or any period in the past decade or so. It can be dynamically generated, like a circle around a specific location, or it could be a predefined set of boundaries, such as a specific address or area defined by streets or other geographical boundaries. One geofence warrant that Google received covered 2.5 square miles of San Francisco for a period of 2½ days.
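The core query behind a geofence is geometric: from a mass of location-history records, keep only the devices whose fixes fall inside a boundary during a time window. The sketch below is purely illustrative and assumes nothing about Google’s actual systems; the function names, record format and coordinates are all hypothetical.

```python
import math
from datetime import datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def devices_in_geofence(records, center, radius_m, start, end):
    """Return IDs of devices with at least one location fix inside a
    circular fence during the time window -- a dynamically generated fence
    like the circle described above."""
    lat0, lon0 = center
    hits = set()
    for device_id, lat, lon, ts in records:
        if start <= ts <= end and haversine_m(lat0, lon0, lat, lon) <= radius_m:
            hits.add(device_id)
    return hits

# Hypothetical location-history records: (device_id, lat, lon, timestamp)
records = [
    ("a1", 37.5407, -77.6536, datetime(2019, 5, 20, 16, 30)),  # inside fence, in window
    ("b2", 37.5407, -77.6536, datetime(2019, 5, 20, 22, 0)),   # inside fence, outside window
    ("c3", 37.6000, -77.5000, datetime(2019, 5, 20, 16, 45)),  # far outside fence
]
fence = devices_in_geofence(
    records,
    center=(37.5407, -77.6536),
    radius_m=150,
    start=datetime(2019, 5, 20, 16, 0),
    end=datetime(2019, 5, 20, 18, 0),
)
print(fence)  # → {'a1'}
```

Even this toy version shows why breadth is the issue: widen the radius or the time window and the result set grows to include everyone who merely passed by.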

There has been a significant increase in law enforcement’s use of geofence warrants over the past decade. Google revealed in court that geofence requests increased 1,500% from 2017 to 2018 and another 500% from 2018 to 2019, and that by 2020 it was receiving 11,500 geofence warrants a year. Between 2021 and 2023, geofence warrants made up over 25% of all warrants that Google received from law enforcement agencies in the United States.

If you carry a smartphone around with you, Google and other tech companies keep track of where you are and everywhere you’ve been. Dilara Irem Sancar/Anadolu via Getty Images

Search warrants and the Fourth Amendment

The Fourth Amendment is the foundation on which all U.S. electronic privacy laws rest. When government agents want to search or seize a person, place or thing – absent consent or emergency – the Fourth Amendment requires agents to obtain a court-approved warrant based on probable cause. Agents do this by providing a judge with enough evidence to establish probable cause that the person, place or thing to be searched or seized is associated with a crime.

The resulting warrant must describe with “particularity” the specific person, place or thing to be searched or seized. If these requirements are not met, the search is unreasonable and therefore unlawful, and evidence obtained in that search cannot be used in court, barring a good-faith exception.

The Fourth Amendment’s “particularity” requirement strictly forbids general warrants, which British authorities historically used against colonists to conduct overly broad, all-encompassing searches.

Reverse warrants

The only “particularity” that police can specify in applying for a geofence warrant is that a crime occurred at a particular time and place. Hence, geofence warrants are often called reverse warrants because they literally reverse the traditional process of conducting an investigation to identify a suspect and then obtain a warrant to gather information on that suspect. Geofence warrants gather all devices in a time and place, and then, aided by technology, police sift through for potential suspects.

The execution of a geofence warrant is very different from that of a typical warrant. Litigation records reveal a collaborative effort between law enforcement and Google that follows a three-step process. First, law enforcement officials specify in the warrant a time and place to be searched. The data they’re seeking is not merely a list of cellphone devices in the area; it is usually more detailed. For instance, it could include data about whether a device accessed a particular email account or app or sent a text at the time it was in the area of the geofence.

Second, the company provides the officials with an anonymized list of users or devices matching the warrant’s criteria. At this point, things start to become more fluid, and the officials may seek additional information about specific users outside of the initial search parameters.

Third, law enforcement officials then analyze the information and request that the company “unmask” certain users. In complying, Google may tell police the account holder’s name, their address, their email address, and even whether they were communicating or using certain apps during the relevant time. The officials then decide whether any of the users may be connected to the crime.
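The three steps described above can be sketched as a simple data flow: the provider matches devices, returns only opaque tokens, and later “unmasks” the tokens investigators select. This is an illustrative model only; the account data, tokenization scheme and function names are hypothetical, not Google’s actual process.

```python
import hashlib

# Hypothetical provider-side account data (step 3 reveals this).
ACCOUNTS = {
    "user_a@example.com": {"name": "A. Person", "address": "1 Main St"},
    "user_b@example.com": {"name": "B. Person", "address": "2 Oak Ave"},
}

def anonymize(account_id):
    """Step 2: replace an account identity with an opaque token."""
    return hashlib.sha256(account_id.encode()).hexdigest()[:12]

# Step 1: pretend these accounts matched the warrant's time-and-place box.
matches = list(ACCOUNTS)

# Step 2: the provider hands investigators only the opaque tokens.
tokens = {anonymize(a): a for a in matches}
anonymized_list = sorted(tokens)

# Step 3: investigators pick tokens of interest; the provider unmasks them.
def unmask(token):
    account = tokens[token]
    return {"account": account, **ACCOUNTS[account]}

identity = unmask(anonymized_list[0])
print(identity["name"])
```

The key point the sketch makes concrete: nothing in steps 2 and 3 is constrained by the original warrant’s parameters once the back-and-forth begins, which is where the oversight concerns discussed below arise.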

This close work between the private entity – usually Google – and law enforcement throughout the geofence warrant process raises significant privacy and civil liberties concerns. There also appears to be no judicial oversight during this give-and-take between law enforcement officers and Google.

A split among appeals courts

In the Chatrie case, the trial court took issue with the geofence warrant police used, finding that it lacked particularized probable cause. But the trial court also determined that the officers in question had relied on the defective warrant in good faith, and thus it ruled the geofence evidence could be used against the defendant.

On appeal, a divided panel of the 4th U.S. Circuit Court of Appeals affirmed the trial court’s decision, concluding, over a vigorous dissent, that obtaining the geofence data was not a search. On rehearing, the full 4th Circuit also affirmed.

But the 4th Circuit’s 2024 Chatrie decision stands at odds with the 5th Circuit’s 2024 decision in United States v. Smith. In the Smith case, the 5th Circuit ruled that “geofence warrants are modern-day general warrants and are unconstitutional under the Fourth Amendment.” This split among the federal appeals courts should be resolved by the Supreme Court in its Chatrie decision.

Chatrie and the Supreme Court

For decades, the court has grappled with law enforcement’s use of technologies to track the location of people or things, issuing decisions about cell site location information and GPS. It has ruled that the U.S. Constitution requires law enforcement agents to obtain a warrant to track a person using their cellphone location history data or GPS, barring exigent circumstances.

The government is arguing in the Chatrie case that users voluntarily consented to the collection of location history, so they have no reasonable expectation of privacy in the data, and thus there is no violation of the Fourth Amendment.

Some of the amicus briefs filed in support of the defendant assert that electronic location data is protected by the Fourth Amendment’s warrant requirement, and that the geofence warrant fails to satisfy the Fourth Amendment’s particularity requirement. Some also argue that approving this warrant would open the door to a variety of reverse search warrants. And some contend that there is no meaningful consent or voluntariness around the data collection that underpins geofence technology.

Questions from the Supreme Court justices during oral arguments on April 27, 2026, indicate that at least some of them consider geofence warrants to be general warrants and thus unconstitutional. But for now, we wait.

The Conversation

Anne Toomey McKenna serves on the Advisory Board to the Institute for Electrical and Electronics Engineers (IEEE)-USA's Artificial Intelligence Policy Committee (AIPC) and Chairs multiple AIPC subcommittees. The AIPC work involves subject matter and education-related interaction with U.S. Senate and House congressional staffers and the Congressional AI Caucus. McKenna has received funding from the National Security Agency for the development of legal educational materials about cyberlaw (a course which the government still makes available online for the public) and funding from The National Police Foundation together with the U.S. Department of Justice-COPS division for legal analysis regarding the use of drones in domestic policing.

US government ramps up mass surveillance with help of AI tech, data brokers – and your apps and devices

The U.S. government is using AI to speed analysis of government and commercial data about you. Anton Petrus/Moment via Getty Images

On a Saturday morning, you head to the hardware store. Your neighbors’ Ring cameras film your walk to the car. Your car’s sensors, cameras and microphones record your speed, how you drive, where you’re going, who’s with you, what you say, and biological metrics such as facial expression, weight and heart rate. Your car may also collect text messages and contacts from your connected smartphone.

Meanwhile, your phone continuously senses and records your communications, info about your health, what apps you’re using, and tracks your location via cell towers, GPS satellites and Wi-Fi and Bluetooth.

As you enter the store, its surveillance cameras identify your face and track your movements through the aisles. If you then use Apple or Google Pay to make your purchase, your phone tracks what you bought and how much you paid.

All this data quickly becomes commercially available, bought and sold by data brokers. Aggregated and analyzed by artificial intelligence, the data reveals detailed, sensitive information about you that can be used to predict and manipulate your behavior, including what you buy, feel, think and do.

Companies unilaterally collect data from most of your activities. This “surveillance capitalism” is often unrelated to the services device manufacturers, apps and stores are providing you. For example, Tinder is planning to use AI to scan your entire camera roll. And despite their promises, “opting out” doesn’t actually stop companies’ data collection.

While companies can manipulate you, they cannot put you in jail. But the U.S. government can, and it now purchases massive quantities of your information from commercial data brokers. The government is able to purchase Americans’ sensitive data because the information it buys is not subject to the same restrictions as information it collects directly.

The federal government is also ramping up its abilities to directly collect data through partnerships with private tech companies. These surveillance tech partnerships are becoming entrenched, domestically and abroad, as advances in AI take surveillance to unprecedented levels.

As a privacy, electronic surveillance and tech law attorney, author and legal educator, I have spent years researching, writing and advising about privacy and legal issues related to surveillance and data use. To understand the issues, it is critical to know how these technologies function, who collects what data about you, how that data can be used against you, and why the laws you might think are protecting your data do not apply or are ignored.

Big money for AI-driven tech and more data

Congressional funding is supercharging huge government investments in surveillance tech and data analytics driven by AI, which automates analysis of very large amounts of data. The massive 2025 tax-and-spending law netted the Department of Homeland Security an unprecedented US$165 billion in yearly funding. Immigration and Customs Enforcement, part of DHS, got about $86 billion.

Documents allegedly hacked from Homeland Security reveal a massive surveillance web that has all Americans in its scope.

DHS is expanding its AI surveillance capabilities with a surge in contracts to private companies. It is reportedly funding companies that provide more AI-automated surveillance in airports; adapters to convert agents’ phones into biometric scanners; and an AI platform that acquires all 911 call center data to build geospatial heat maps to predict incident trends. Predicting incident trends can be a form of predictive policing, which uses data to anticipate where, when and how crime may occur.

DHS has also spent millions on AI-driven software used to detect sentiment and emotion in users’ online posts. Have you been complaining about Immigration and Customs Enforcement policies online? If so, social media companies including Google, Reddit, Discord, and Facebook and Instagram owner Meta may have sent identifying data, such as your name, email address, phone number and activity, to DHS in response to hundreds of DHS subpoenas served on the companies.

Meanwhile, the Trump administration’s national policy framework for artificial intelligence, released on March 20, 2026, urges Congress to use grants and tax incentives to fund “wider deployment of AI tools across American industry” and to allow industry and academia to use federal datasets to train AI.

Using federal datasets this way raises privacy law concerns because they contain a lifetime of sensitive details about you, including biographical, employment and tax information.

Blurring lines and little oversight

In foreign intelligence work, the funding, development and controlled use of certain AI-driven data gathering makes sense. The CIA’s new acquisition framework to turbocharge collaboration with the private sector may be legal with proper oversight. But the line between collaborating for lawful national security purposes and unlawful domestic spying is becoming dangerously blurred or ignored.

For example, the Pentagon has declared a contractor, Anthropic, a national security risk because Anthropic insisted that its powerful agentic AI model, Claude, not be used for mass domestic surveillance of Americans or fully autonomous weapons.

On March 18, 2026, FBI Director Kash Patel confirmed to Congress that the FBI is buying Americans’ data from data brokers, including location histories, to track American citizens.

As the federal government accelerates its use of and investment in AI-driven spy tech, it is mandating less oversight of AI technology. In addition to the national AI policy framework, which discourages state regulation of AI, the president has issued executive orders to accelerate federal adoption of AI systems, remove state-law barriers to AI and bar the federal government from procuring AI models that attempt to adjust for bias. But using advanced AI systems is risky, given reports of AI agents going rogue, exposing sensitive data and becoming a threat, even during routine tasks.

Your data

The surveillance capitalism system requires people to unwittingly participate in a manipulative cycle of group- and self-surveillance. Neighborhood doorbell cameras, Flock license plate readers and hyperlocal social media sites like Nextdoor create a crowdsourced record of all people’s movements in public spaces.

Sensors in phones and wearable devices, such as earbuds and rings, collect ever more sensitive details. These include health data, including your heart rate and heart rate variability, blood oxygen, sweat and stress levels, behavioral patterns, neurological changes and even brain waves. Smartphones can be used to diagnose, assess and treat Parkinson’s disease. Earbuds could be used to monitor brain health.

This data is not protected under HIPAA, which prohibits health care providers and those working with them from disclosing your health information without your permission, because the law does not consider tech companies to be health care providers nor these wearables to be medical devices.

Legal protections

People have little choice when buying devices, using apps or opening accounts but to agree to lengthy terms that include consent for companies to collect and sell their personal data. This “consent” allows their data to end up in the largely unregulated commercial data market.

The government claims it can lawfully purchase this data from data brokers. But in buying your data in bulk on the commercial market, the government is circumventing the Constitution, Supreme Court decisions and federal laws designed to protect your privacy from unwarranted government overreach.

The Fourth Amendment prohibits unreasonable search and seizure by the government. Supreme Court cases require police to get a warrant to search a phone or use cellular or GPS location information to track someone. The Electronic Communications Privacy Act’s Wiretap Act prohibits unauthorized interception of wire, oral and electronic communications.

Despite some efforts, Congress has failed to enact legislation to protect data privacy, to regulate the use of sensitive data by AI systems or to restore the intent of the Electronic Communications Privacy Act. Courts have allowed the broad electronic privacy protections in the federal Wiretap Act to be eviscerated by companies claiming consent.

In my opinion, the way to begin to address these problems is to restore the Wiretap Act and related laws to their intended purposes of protecting Americans’ privacy in communications, and for Congress to follow through on its promises and efforts by passing legislation that secures Americans’ data privacy and protects them from AI harms.

This article is part of a series on data privacy that explores who collects your data, what and how they collect, who sells and buys your data, what they all do with it, and what you can do about it.

