Who Can Read Your DMs? Your Data, the Cloud, and Government Access

Alison O'Leary


When you send a private message on a social media app, an email, or a workplace chat platform, there’s a natural expectation of privacy. The message is intended for a specific recipient, and the digital walls of the application feel like the walls of a private room.

But who, besides the intended recipient, can actually see that message?

The answer is far more complex than most people realize, involving technology, corporate policies, and a legal framework largely conceived before the invention of the smartphone.

The privacy of a direct message (DM) is not absolute. It depends on where the data is stored, how it’s protected, and who can be legally compelled to grant access. Understanding these factors is crucial for any citizen navigating the digital world.

Where Your Messages Live: The Cloud and Your Device

The first and most fundamental question determining the privacy of a digital message is: where is it stored? The physical location of data profoundly impacts who has control over it and who can access it.

In modern computing, there are two primary homes for user data: their own device or a company’s cloud server. This distinction is the bedrock upon which all subsequent privacy considerations are built.

The Two Homes for Your Data: Local vs. Cloud Storage

When a message is sent, its history can be saved in one of two places.

Local Storage: The data resides directly on a user’s physical device, such as a smartphone or computer. This is analogous to keeping a physical diary in a locked desk drawer at home. The user has direct physical possession of the data.

Messaging services that prioritize this model, such as Signal (and, by default for message content, WhatsApp and iMessage), are designed to keep conversation history on users’ endpoints.

Cloud Storage: This involves storing data not on the user’s device, but on remote servers owned and operated by a third-party company. These servers are accessed via the internet through an Application Programming Interface (API), which allows an app on a phone to retrieve the data.

This is like keeping personal belongings in a commercial storage unit. While the belongings are the user’s, the unit and the facility are owned and managed by someone else.

Most popular messaging platforms, including Facebook Messenger, Google Chat, Slack, Discord, and Telegram’s standard cloud chats, rely on this model. The primary user-facing benefit is convenience: message histories are synced and accessible across multiple devices.
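As a rough sketch, retrieving cloud-stored messages typically looks like an authenticated HTTP request from the app to the provider’s API. The endpoint and token below are invented for illustration; the point is that the message history lives on the provider’s servers, and the same request path is what syncs it to a new device.

```python
# Hypothetical sketch of a messaging app fetching chat history from a
# cloud API. The hostname, path, and token are invented.
import urllib.request

req = urllib.request.Request(
    "https://api.example-messenger.com/v1/conversations/42/messages",
    headers={"Authorization": "Bearer <user-access-token>"},
)
# urllib.request.urlopen(req) would return the stored history as JSON;
# the provider, not the user, controls the server answering this request.

assert req.get_method() == "GET"
assert req.has_header("Authorization")
```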

The Cloud Explained: What It Is and How It Works

The term “cloud” is a marketing triumph that can obscure the physical reality of the infrastructure. The cloud is not an ethereal, abstract entity; it’s a vast, physical network of data centers located around the globe, filled with powerful computer servers owned by companies like Amazon, Google, and Microsoft.

When data is uploaded to the cloud, it’s sent via an internet connection to one of these data centers, where it’s stored on a physical server.

To manage this immense infrastructure efficiently, cloud providers use a technology called virtualization. A single, powerful physical server can be partitioned to run multiple “virtual machines” (VMs), each acting as a separate, independent server. This allows for incredible scalability: if a service needs more storage or processing power, the cloud provider can simply spin up more VMs to handle the load.

To ensure data is always available and to protect against hardware failure or outages, providers also practice redundancy, often storing copies of the same data on multiple machines, sometimes in different geographic locations.

The Inherent Risk of the Cloud: A Matter of Trust

The convenience of cloud storage comes with a fundamental privacy trade-off. When messages are stored on a company’s cloud servers, the user is implicitly placing trust in that company to act as a responsible custodian of their data.

Even if the company has a strong privacy policy and promises not to access the content, the technical capability to do so may still exist. The service provider effectively holds the keys to the digital storage unit.

This reliance on trust has profound legal consequences. Because the company has possession and control of the data, it can be legally compelled by governments to turn over that data. This single fact is the lynchpin of modern digital surveillance.

The very architecture that enables seamless, multi-device access also centralizes vast quantities of personal data, making it an efficient, one-stop shop for law enforcement investigations. The abstraction of the term “cloud” masks this concrete risk.

Users may feel their data is safe in a nebulous digital space, but in reality, it resides on a specific company’s server, subject to that company’s security practices, its terms of service, and, most importantly, the laws of the jurisdiction in which it operates.

Furthermore, this architecture is not just a technical choice but a business one. Centralizing data in the cloud allows companies to offer features like universal access but also enables them to process and analyze that data for other purposes, such as business intelligence and user profiling for targeted advertising.

A 2014 class-action lawsuit, for instance, alleged that Facebook was scanning the contents of supposedly private messages for data mining purposes. This business incentive to centralize data creates the very vulnerability that legal doctrines can exploit, turning user convenience into a vector for surveillance.

The Digital Lock and Key: Understanding Encryption

While the location of data is the first factor in determining its privacy, the second is how that data is protected. The primary technical safeguard for digital communications is encryption. It’s the digital equivalent of a lock and key, designed to prevent unauthorized access to private information.

However, not all encryption is created equal, and the specific type of encryption used by a messaging service is arguably the single most important factor in determining whether anyone other than the sender and recipient can read a message.

What is Encryption?

At its core, encryption is a process that transforms readable data, known as plaintext, into an unreadable, scrambled format called ciphertext. This transformation is performed using a complex mathematical algorithm and a specific “key.”

The process is reversed through decryption, where the correct key is used to turn the ciphertext back into readable plaintext. The entire system is built on a simple premise: without the correct key, the message remains unintelligible.

Encryption is the principal method that instant messaging applications employ to protect the privacy and security of user data.
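The plaintext/ciphertext/key relationship can be illustrated with a deliberately weak toy cipher. The sketch below XORs each byte of a message with a repeating key; real systems use vetted algorithms such as AES, but the principle is the same: without the key, the ciphertext is unintelligible.

```python
# Toy illustration of symmetric encryption (NOT a real cipher): XOR
# each plaintext byte with a repeating key.
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so one function both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"meet me at noon"
key = b"secret"

ciphertext = xor_crypt(plaintext, key)  # scrambled without the key
recovered = xor_crypt(ciphertext, key)  # the same key reverses it

assert recovered == plaintext
assert ciphertext != plaintext
```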

The Crucial Difference: Encryption-in-Transit vs. End-to-End Encryption

The distinction between different types of encryption is not merely technical; it has massive implications for privacy. The two most relevant types for messaging are encryption-in-transit and end-to-end encryption.

Encryption-in-Transit: Often implemented using protocols like Transport Layer Security (TLS), this protects a message only while it’s traveling over the network. It creates a secure tunnel between a user’s device and the company’s server, and another secure tunnel from that server to the recipient’s device.

The analogy is sending a letter through the mail using two separate sealed envelopes. The first envelope goes from the sender to the post office. At the post office, an employee opens the letter, reads it, and places it in a new sealed envelope to send to the recipient.

While the letter is protected from being read by outsiders during its journey, the intermediary, the post office, or in this case, the tech company, has full access to the message content in its plaintext form on its servers.
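In code, encryption-in-transit is usually just a TLS connection. The short sketch below uses Python’s standard ssl module to show the client-side defaults that secure the tunnel; note that nothing here constrains what the server does with the plaintext once the tunnel ends at its door.

```python
# Encryption-in-transit: the client builds a TLS context that protects
# data on the wire between the device and the server.
import ssl

ctx = ssl.create_default_context()

# The defaults verify the server's certificate and hostname, which is
# what protects the tunnel against outside eavesdroppers. The server
# itself still receives the plaintext.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```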

End-to-End Encryption (E2EE): This is widely considered the gold standard for secure communication. In an E2EE system, a message is encrypted on the sender’s device and can only be decrypted on the intended recipient’s device. The company that operates the service, and any other intermediary, cannot access the content of the message because it does not possess the necessary decryption keys.

The analogy here is sending a letter in a locked box. The sender locks the box and sends it through the mail. The post office can handle the box, but it cannot open it. Only the recipient, who has the unique key to that specific box, can unlock it and read the letter.

This is typically achieved through a sophisticated cryptographic process known as public-key cryptography. Each user has a pair of mathematically linked keys: a public key, which can be shared with anyone, and a private key, which is kept secret on the user’s device.

To send an E2EE message, the sender’s app uses the recipient’s public key to encrypt the message. Once encrypted, that message can only be decrypted by the recipient’s corresponding private key. Because the private key never leaves the user’s device, the company in the middle never has the ability to decrypt the communication.
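This asymmetry can be demonstrated with textbook RSA using toy-sized numbers (the standard small worked example; real systems use keys thousands of bits long plus additional protocol machinery, but the division of labor between public and private keys is the same).

```python
# Textbook RSA with tiny numbers, to show the public/private key idea.
# Anyone can encrypt with the public key; only the private key decrypts.

p, q = 61, 53              # two secret primes (toy-sized)
n = p * q                  # 3233, part of the public key
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent: (n, e) is the public key
d = pow(e, -1, phi)        # 2753, the private key, kept on the device

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # sender encrypts with the PUBLIC key
decrypted = pow(ciphertext, d, n)  # recipient decrypts with the PRIVATE key

assert decrypted == message
```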

The choice of encryption model varies significantly across popular messaging platforms:

Default End-to-End Encryption: Services like Signal, WhatsApp, and Apple’s iMessage have implemented E2EE as the default for their core messaging functions. This means that for one-on-one and group chats on these platforms, the service provider cannot read the content of the messages.

In a sign of increasing focus on security, some of these services, including Signal and iMessage, have begun implementing post-quantum cryptography to protect against the threat of future, more powerful computers that could break current encryption standards.

Optional End-to-End Encryption: Some of the world’s largest messaging platforms offer E2EE, but only as an optional mode that users must actively enable. Facebook Messenger offers “Secret Conversations,” and Telegram offers “Secret Chats.”

Standard conversations on these platforms are not end-to-end encrypted and are stored on the company’s cloud servers, where they can be accessed. This approach has drawn criticism, as many users may not be aware that their standard chats are less secure or may not know how to activate the more private mode.

No End-to-End Encryption: Many popular services do not offer E2EE for direct messages at all. These typically include the DM features on platforms like Instagram, X (formerly Twitter), Discord, and Slack. Messages on these services are generally protected by encryption-in-transit, meaning the company can access the content on its servers.

The Strategic Importance of E2EE

The implementation of E2EE is not just a technical feature; it’s a direct technical countermeasure to the legal vulnerabilities that arise from storing data with a third party.

If a company receives a legal order from the government to produce the content of a message that is end-to-end encrypted, it can truthfully respond that it is technically incapable of complying. The company possesses only unintelligible ciphertext, not the decryption keys.

This fundamentally alters the legal dynamic. It forces law enforcement to shift its focus away from compelling the service provider and toward targeting the “ends” of the communication—the devices themselves—which is a far more complex and legally fraught endeavor.

The ongoing public debate over E2EE, often framed by government agencies as a concern about criminals “going dark,” is therefore a proxy war over the future of surveillance. A company’s choice to implement default E2EE is a powerful statement about its position in this conflict, prioritizing technologically guaranteed privacy over the possibility of providing “lawful access” to government entities.

The Other Watchers: Your Employer and the Platform Itself

While government surveillance often captures headlines, there are other, more common entities that may have the ability to read private messages: employers and the platform providers themselves. The expectation of privacy shifts dramatically depending on the context, particularly in the workplace.

At the Office: Can Your Boss Read Your DMs?

When an individual is using company equipment or networks, the legal expectation of privacy is drastically reduced.

The general rule is that there should be no expectation of privacy when using:

  • Company-owned assets, such as a laptop or phone
  • Company-provided software, like Microsoft Teams or Slack
  • The company’s network, including its Wi-Fi

The rationale is straightforward: the company owns the systems and has a legitimate interest in monitoring them for security, productivity, and legal compliance.

Corporate Monitoring Tools

Corporate IT administrators have powerful tools at their disposal to facilitate this monitoring. In environments using platforms like Microsoft 365, an administrator can use built-in features such as “Content search” or “eDiscovery” to search for and retrieve all communications across the organization, including one-to-one direct messages.

These messages are often stored in an employee’s cloud-based company mailbox, making them company records that are fully accessible to authorized personnel.

Even when using a personal device, connecting to the company’s Wi-Fi network creates potential vulnerabilities. While a message sent via an E2EE app like WhatsApp or iMessage would protect the content of the communication, the network administrator could still see the metadata—for example, that a device at a specific IP address connected to WhatsApp’s servers at a particular time.

Advanced Corporate Surveillance

In more sophisticated corporate environments, a company might perform TLS interception, sometimes described as SSL inspection or offloading. This involves installing a company-issued security certificate on an employee’s device, which allows a network appliance to decrypt, inspect, and re-encrypt otherwise secure traffic as it passes through.

On a device that does not have the company’s certificate installed, such interception would trigger a certificate warning. If an employee clicks through that warning, or if IT pre-installed the certificate, their encrypted traffic can be exposed.

The Policy vs. Capability Distinction

It’s important to note that while the technical capability to read messages often exists, it’s usually governed by internal company policy and HR rules. Access to an employee’s private chats is typically restricted to specific, legitimate business purposes, such as:

  • An internal investigation into misconduct
  • A response to a lawsuit
  • A security audit

It is not standard practice for a manager to be able to read an employee’s DMs out of simple curiosity.

The Blurred Lines of Modern Work

The modern workplace, with its blend of remote work and “bring your own device” policies, has created new privacy traps by blurring the lines between personal and professional life.

An employee might use a company laptop for a personal chat on a work platform like Slack, inadvertently bringing that private conversation under the umbrella of corporate surveillance. Conversely, installing a work application like Microsoft Outlook on a personal phone can grant the employer new powers over that device, including, in some cases, the ability to remotely wipe all of its data through Mobile Device Management (MDM) software.

The “consent” given to such monitoring in an employment contract is also fundamentally different from a consumer’s choice. It’s often a condition of employment, making it a form of coerced consent that challenges the notion of a “voluntary” agreement.

On the Platform: How Companies Use Your Non-E2EE Messages

For any messaging service that does not use end-to-end encryption, the platform provider itself is a potential “watcher.” If a message is stored on a company’s servers in a decrypted or decryptable format, the company has the technical ability to access it.

This access is generally used for several purposes:

Policy Enforcement: Companies automatically scan messages for content that violates their terms of service, such as spam, phishing links, or the distribution of illegal material like child sexual abuse imagery. This is a critical part of maintaining a safe platform.

Commercial Purposes: More controversially, companies may analyze the content and metadata of messages to build more detailed profiles of their users. This information can be used to improve ad targeting or for other forms of data mining and user profiling.

Metadata Collection: Even if the content of a message is not analyzed, the metadata is incredibly valuable. Metadata is the data about the communication: who sent it, who received it, when it was sent, and from what location.

News reports in 2013 revealed that the U.S. National Security Agency (NSA) was engaged in the mass collection of this type of metadata, which underscores its immense intelligence value for mapping social networks and patterns of life. For services without E2EE, this metadata is readily available to the platform provider and can be used for its own purposes or turned over to the government.
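As a rough illustration, here is what a hypothetical metadata record for a single message might look like. Every field name below is invented for illustration, but the pattern is typical: even with the content absent, the record maps who talks to whom, when, and from where.

```python
# A hypothetical metadata record for one message. Under E2EE the
# provider cannot read the content, but fields like these may remain
# visible to it.
import json

record = {
    "sender_id": "user_1842",             # who sent it
    "recipient_id": "user_7719",          # who received it
    "timestamp": "2024-05-01T14:32:07Z",  # when it was sent
    "sender_ip": "203.0.113.45",          # from where (rough location)
    "message_size_bytes": 412,            # how much was said, not what
    # "content": ...                      # the only field E2EE hides
}

print(json.dumps(record, indent=2))
```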

The Government’s Passkey: The Fourth Amendment and the Third-Party Doctrine

The most powerful entity that may seek access to private digital messages is the government. The legal framework governing this access is rooted in the U.S. Constitution, specifically the Fourth Amendment, but has been shaped and complicated by a series of court decisions that have struggled to apply 18th-century principles to 21st-century technology.

At the heart of this struggle is a legal concept known as the Third-Party Doctrine.

The Fourth Amendment: Your Right to Be Secure

The Fourth Amendment to the U.S. Constitution forms the foundation of privacy rights against government intrusion. It states:

“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

Originally, the Fourth Amendment was understood to protect against physical intrusions by the government into a person’s tangible property: their home, their physical papers, and their personal belongings. To conduct a “search” of these protected areas, the government generally needs to obtain a search warrant from a neutral judge, which requires a showing of probable cause—a reasonable basis for believing that a crime has been committed and that evidence of the crime will be found in the place to be searched.

The Shift to Privacy: Katz v. United States

For nearly two centuries, the Fourth Amendment’s focus remained on property. This changed dramatically in 1967 with the landmark Supreme Court case Katz v. United States.

In Katz, the FBI had placed a listening device on the outside of a public phone booth to record the calls of a suspected bookmaker. Because there was no physical trespass into the booth itself, the lower courts found no Fourth Amendment violation.

The Supreme Court disagreed, famously declaring that the Fourth Amendment “protects people, not places.”

The Court established a new two-part test for what constitutes a “search”:

  1. Whether a person has an actual, subjective expectation of privacy
  2. Whether that expectation is one that society is prepared to recognize as “reasonable”

The Court found that when Charles Katz entered the phone booth and shut the door, he sought to keep his conversation private, and this was a reasonable expectation. The government’s electronic eavesdropping was therefore an unconstitutional search.

However, the Katz decision contained a critical limitation that would have far-reaching consequences. Justice Potter Stewart wrote for the majority: “What a person knowingly exposes to the public, even in his own home or office, is not a subject of Fourth Amendment protection.”

This single sentence planted the seed for what would become the Third-Party Doctrine.

The Birth of the Third-Party Doctrine: Miller and Smith

Building on the “knowing exposure” idea from Katz and a series of earlier cases involving government informants, where the Court held that a person “assumes the risk” that a confidant might be a government agent, the Supreme Court created the Third-Party Doctrine in two pivotal cases in the 1970s.

United States v. Miller (1976)

In this case, federal agents investigating an illegal whiskey distillery subpoenaed Mitch Miller’s bank records, including checks and deposit slips, directly from his bank without a warrant.

The Supreme Court ruled that this was constitutional. It reasoned that Miller had no reasonable expectation of privacy in his bank records because he had “voluntarily” turned that information over to the bank, a third party, in the ordinary course of business.

The records, the Court concluded, were the business records of the bank, not Miller’s private papers.

Smith v. Maryland (1979)

Police suspected Michael Smith of robbery and, without a warrant, requested that the telephone company install a “pen register” to record all the phone numbers he dialed from his home.

The Supreme Court again found no Fourth Amendment violation. It held that Smith had no reasonable expectation of privacy in the numbers he dialed because he had “voluntarily conveyed” them to the phone company in the process of making a call. He had “assumed the risk” that the company would keep a record of these numbers and could disclose them to the police.

The Doctrine’s Impact

Together, Miller and Smith established a clear and powerful rule: any information a person voluntarily gives to a third party loses its Fourth Amendment protection.

This legal proposition, the Third-Party Doctrine, has become one of the most significant and controversial principles in modern privacy law. It created a legal framework that equated the act of using a necessary service with a conscious forfeiture of privacy rights.

This was a profound intellectual leap, taking the logic of betrayal by a trusted human confidant from the informant cases and applying it to impersonal, automated business transactions. The doctrine’s flawed analogy, that sharing data for a functional purpose is the same as confiding in a person, stripped constitutional protection from the routine activities of modern life.

By creating this massive loophole, the doctrine inadvertently incentivized a “surveillance by proxy” model. Instead of conducting direct surveillance on an individual, which would require a warrant, the government could simply demand the data from the third-party corporations that hold it, using a much lower legal standard.

This effectively deputized private companies as data collectors for the state, allowing the government to sidestep the core protections of the Fourth Amendment.

The Doctrine’s Critics: An Outdated Rule in a Digital World

From its inception, the Third-Party Doctrine has faced fierce criticism, which has only intensified in the digital age. Critics, including dissenting Supreme Court justices and civil liberties advocates, argue that the doctrine is built on a flawed understanding of modern life.

One major critique is that the disclosure of information to third parties is not truly “voluntary.” As Justice Thurgood Marshall argued in his Smith dissent, one cannot realistically participate in modern personal or professional life without using a telephone.

Similarly, as Justice William J. Brennan Jr. noted in his Miller dissent, it is “impossible to participate in the economic life of contemporary society without maintaining a bank account.”

In the 21st century, this argument extends to internet service providers, email hosts, and social media platforms. Forgoing these services is not a realistic option for most people.

Another core criticism is that privacy is not an “all-or-nothing” proposition. Justice Marshall contended that privacy is not a “discrete commodity, possessed absolutely or not at all.” Just because a person shares information with a company for a limited purpose (to connect a call, process a check, or send a message) does not mean they have consented to that information being shared with the government for any reason.

These critiques culminated in Justice Sonia Sotomayor’s influential concurrence in the 2012 case United States v. Jones. She wrote that the Third-Party Doctrine “is ill-suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.”

In an era where our most sensitive personal information (communications, photos, location history, health data, and political views) resides on servers owned by companies like Google, Meta, and Apple, the application of a 1970s legal doctrine threatens to render the Fourth Amendment obsolete.

Congress Responds: The Stored Communications Act

In response to the growing use of computers and the legal uncertainties created by cases like Miller and Smith, Congress took action in 1986 to create a statutory framework for protecting digital data.

The result was the Electronic Communications Privacy Act (ECPA), a complex and now notoriously outdated law that governs how the government can access electronic communications. For direct messages and other stored data, the most important part of ECPA is Title II, known as the Stored Communications Act (SCA).

The Electronic Communications Privacy Act of 1986

ECPA was a landmark piece of legislation passed to update federal wiretap laws for the emerging digital era. Its stated goal was to balance the “privacy expectations of citizens and the legitimate needs of law enforcement.”

The act is divided into three main titles:

Title I (The Wiretap Act): This governs the real-time interception of “wire, oral, and electronic communications” while they are in transit. It generally requires a “super warrant” with a high legal burden.

Title II (The Stored Communications Act): This governs government access to communications that are “at rest” in electronic storage, such as emails, direct messages, and files stored in the cloud.

Title III (The Pen Register Act): This governs the use of pen registers and trap-and-trace devices, which collect non-content metadata like the phone numbers dialed or the IP addresses contacted.

For the question of who can read stored DMs, the Stored Communications Act is the central statute.

Decoding the Stored Communications Act

The SCA creates a set of procedural rules that law enforcement must follow to compel a service provider to disclose user data. It’s important to understand that the SCA did not grant robust new constitutional rights. Instead, it created a statutory framework for accessing data that the Supreme Court, under the Third-Party Doctrine, had already deemed to have little to no constitutional protection.

The SCA is therefore a legal floor, not a ceiling, for privacy.

The law’s structure is notoriously confusing because its rules depend on several technical distinctions that are based on how technology worked in 1986. The two most important distinctions are the type of service provider and the type of data being sought.

Types of Service Providers

The SCA defines two types of providers:

Electronic Communication Service (ECS): Any service that provides users with the ability to send or receive communications. This is typically interpreted as a service holding a communication in temporary, intermediate storage. For example, an unread email sits on a server before it is downloaded by the user.

Remote Computing Service (RCS): A service that provides computer processing or storage to the public. This is typically interpreted as a service holding a communication for long-term storage purposes. For example, an email that has already been opened and is being saved in a folder, or files stored on a cloud drive.

Types of Information

The law also distinguishes between two types of information:

Content: The “substance, purport, or meaning” of the communication. This is the actual body of a DM or email.

Non-Content: Transactional or addressing information, such as server logs, timestamps, and the sender and recipient information on an email (metadata).

The Problem with 1986 Thinking

The entire architecture of the SCA is a product of a technological world that no longer exists. In 1986, a user might connect to a server via dial-up, download their emails to a local computer, and the emails would be deleted from the server. The server acted as a temporary ECS.

Today, with webmail and cloud-based messaging, a single service like Gmail or Facebook Messenger acts as both an ECS (when a message first arrives) and an RCS (for the entire history of stored messages) simultaneously.

This forces courts and companies to apply a legal framework that is fundamentally misaligned with the technology it’s supposed to regulate, leading to decades of legal confusion.

The Tiers of Access: Warrant, Court Order, and Subpoena

The SCA establishes a tiered system for government access, requiring different legal tools depending on the type and age of the data being sought. Each tool comes with a different standard of proof that law enforcement must meet.

Search Warrant: This is the highest standard of protection, mirroring the Fourth Amendment. To obtain a warrant, the government must go to a judge and demonstrate probable cause to believe that a crime has occurred and that the requested data contains evidence of that crime.

§2703(d) Court Order: This is a special type of court order created by the SCA. To obtain one, the government must provide a judge with “specific and articulable facts showing that there are reasonable grounds to believe” that the requested records are “relevant and material to an ongoing criminal investigation.” This is a lower standard than probable cause.

Subpoena: This is the lowest standard. A subpoena is a formal request issued by a government lawyer (like a prosecutor) or a grand jury. It does not require prior approval from a judge. The only legal standard is that the information requested must be relevant to an investigation.

The “180-Day Rule” and the SCA’s Confusing Matrix

The most controversial and outdated provision of the SCA is the “180-day rule.” In 1986, Congress reasoned that an electronic communication left on a third-party server for more than 180 days (about six months) could be considered “abandoned” by the user.

Therefore, the law provides significantly less protection for these older communications. In an era of virtually unlimited cloud storage where users intentionally store years of communications, this distinction is arbitrary and nonsensical, yet it remains part of the law.

The interaction between the service type, data type, storage duration, and legal tool creates a complex matrix of privacy protections under the SCA:

| Type of Data | Storage Condition | Service Type | Legal Tool Required by Government | Standard of Proof |
| --- | --- | --- | --- | --- |
| Content (the message itself) | 180 days or less | ECS | Search warrant | Probable cause |
| Content (the message itself) | More than 180 days | ECS or RCS | Warrant, subpoena, or §2703(d) court order | Varies (probable cause for a warrant; relevance for others) |
| Non-content (metadata, logs) | Any | ECS or RCS | Subpoena or §2703(d) court order | Varies (relevance or “specific and articulable facts”) |
| Basic subscriber info (name, address) | Any | ECS or RCS | Subpoena | Relevance to an investigation |
As the table shows, the content of a new, unopened DM held by a service provider receives the highest level of protection, requiring a warrant. But after 180 days, the government can potentially access that same message with only a subpoena. This bizarre outcome is a direct result of the law’s outdated assumptions.
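The table’s core logic for government access can be condensed into a short, simplified sketch. This deliberately ignores many statutory wrinkles (notice requirements, the ECS/RCS distinction, and the various "varies" cases), so it’s an illustration of the 180-day cliff rather than legal guidance.

```python
# Simplified sketch of the SCA's access matrix, per the table above.
# Real-world application involves many more statutory distinctions.

def minimum_legal_tool(data_type: str, age_days: int = 0) -> str:
    """Return the weakest legal process the SCA may allow."""
    if data_type == "content":
        # The "180-day rule": older content may be reachable with
        # lesser process than a probable-cause warrant.
        if age_days <= 180:
            return "search warrant"
        return "subpoena or 2703(d) order"
    if data_type == "non_content":
        return "subpoena or 2703(d) order"
    if data_type == "subscriber_info":
        return "subpoena"
    raise ValueError(f"unknown data type: {data_type}")

assert minimum_legal_tool("content", age_days=30) == "search warrant"
assert minimum_legal_tool("content", age_days=200) == "subpoena or 2703(d) order"
```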

The CLOUD Act (Clarifying Lawful Overseas Use of Data Act), enacted in 2018, expanded the ability of U.S. law enforcement to obtain data stored by American technology companies, even when that data is located on servers outside the United States. The law requires service providers—including email, messaging, and cloud-storage platforms—to comply with valid warrants or court orders for user data, regardless of where the data is geographically stored. It also authorizes the U.S. to enter into bilateral agreements with other countries, allowing reciprocal cross-border data access under agreed-upon privacy and human-rights standards. While the CLOUD Act was designed to resolve conflicts between U.S. legal demands and foreign data-protection laws, critics argue that it expands government surveillance capabilities and reduces the protective value of storing data abroad.

The Supreme Court Reconsiders: Carpenter v. United States

For decades, the legal landscape of digital privacy was defined by the tension between the weak constitutional protections of the Third-Party Doctrine and the complex, outdated statutory rules of the Stored Communications Act.

This landscape was seismically altered in 2018 by the Supreme Court’s decision in Carpenter v. United States, the most important digital privacy case of the modern era.

The Case: Tracking a Robber Through His Phone

The case arose from an FBI investigation into a string of armed robberies at RadioShack and T-Mobile stores in Michigan and Ohio. After arresting several suspects, the FBI identified Timothy Carpenter as another potential member of the robbery crew.

To place him at the scene of the crimes, the government applied for court orders under the Stored Communications Act to obtain his historical cell phone records from his wireless carriers.

Specifically, the government obtained 127 days of cell-site location information (CSLI). CSLI is the data generated every time a cell phone connects to a nearby cell tower. Because modern smartphones are constantly connecting to the network to make calls, send texts, and use data, they generate a continuous, time-stamped log of their owner’s movements.

Using this data, the FBI created maps showing that Carpenter’s phone had been near several of the robberies when they occurred.

The government obtained the CSLI using a §2703(d) order, which only requires a showing of “specific and articulable facts” that the data is relevant to an investigation—a much lower standard than a warrant’s probable cause requirement.

Carpenter was convicted, and he appealed, arguing that the government’s warrantless acquisition of such a vast trove of his location data was a violation of his Fourth Amendment rights.

The Majority’s Landmark Ruling

In a landmark 5-4 decision authored by Chief Justice John Roberts, the Supreme Court ruled in favor of Carpenter. The Court held that accessing historical CSLI is a Fourth Amendment “search” and that the government must therefore generally obtain a warrant supported by probable cause to acquire it.

The Court’s reasoning marked a significant departure from the traditional Third-Party Doctrine. While the Court did not formally overturn Miller and Smith, it declined to extend their logic to this new, far more invasive form of digital surveillance.

The majority focused on the unique nature of CSLI, arguing it was fundamentally different from the limited financial or phone records at issue in the 1970s cases. CSLI, the Court noted, creates a “detailed, encyclopedic, and effortlessly compiled” record of a person’s physical movements.

The “Voluntary” Sharing Problem

The Court also rejected the idea that sharing this data was truly “voluntary.” A cell phone, Roberts wrote, is not just a convenience but “almost a feature of human anatomy,” and it “faithfully follows its owner” everywhere.

A user cannot opt out of generating CSLI and still use a modern cell phone, making the “choice” to share this data with a provider essentially meaningless.

The Revealing Nature of Location Data

Most importantly, the Court recognized the profoundly revealing nature of this data. A complete historical record of a person’s movements provides an “intimate window into a person’s life, revealing not only his particular movements, but through them his familial, political, professional, religious, and sexual associations.”

Allowing the government warrantless access to this data would be akin to attaching an ankle monitor to every citizen and would grant the state the power of “near perfect surveillance.”

The Mosaic Theory

The Carpenter decision signaled a potential doctrinal shift in how the Court views privacy in the digital age. Instead of focusing on the simple, transactional act of sharing a piece of data, the Court looked at the aggregate nature of the information revealed.

The constitutional problem was not any single location point, but the “mosaic” created by assembling all the points over time. This focus on what is revealed by a dataset, rather than simply whether it was shared, could have massive implications for other vast troves of personal data held by third parties, such as search history, browsing history, and social media activity.

The Strong Dissents: A Fractured Court

The narrow 5-4 vote and the presence of four separate, forceful dissenting opinions revealed a deep and unresolved crisis in Fourth Amendment jurisprudence. The justices were not just disagreeing on the outcome; they were disagreeing on the fundamental legal theory that should be used to analyze digital privacy.

Justice Kennedy’s Dissent: Argued that CSLI was simply another form of business record owned and controlled by the phone company. He accused the majority of drawing an “unprincipled and unworkable” line between CSLI and other third-party records.

Justice Thomas’s Dissent: Attacked the entire “reasonable expectation of privacy” test from Katz as illegitimate and having no basis in the Fourth Amendment’s original text. He argued the focus should be on property rights.

Justice Alito’s Dissent: Argued that the SCA court order used to get the CSLI was the modern equivalent of a subpoena for documents. He contended that such subpoenas were not considered “searches” when the Fourth Amendment was written.

Justice Gorsuch’s Dissent: Also criticized the Katz test and the Third-Party Doctrine, but proposed grounding digital privacy in property law instead. He argued that, under positive law, a person might retain an enforceable legal interest in their digital data even when it is held by a third party.

This profound fracture among the justices means that the future of digital privacy remains highly uncertain. Carpenter was not a final settlement of the issue but rather the opening of a new, unstable chapter in the ongoing struggle to adapt the Constitution to the digital world.

Pulling Back the Curtain: Corporate Transparency Reports

The legal battles over government access to data play out every day in the form of requests sent from law enforcement agencies to technology companies. In an effort to shed light on the scale and scope of this surveillance, many of the world’s largest tech companies have begun publishing regular transparency reports.

These reports provide a crucial, if incomplete, window into the real-world application of the laws and doctrines governing data privacy.

What is a Transparency Report?

A transparency report is a public statement, typically issued semi-annually or annually, in which a company discloses statistics about government requests for its users’ data. These reports detail:

  • The number of requests received
  • The countries they came from
  • The type of legal process used (subpoena, court order, warrant)
  • The number of user accounts affected
  • The percentage of requests with which the company complied

Google became the first company to publish such a report in 2010, and the practice has since been adopted by dozens of technology and communications companies, including Meta (Facebook), Apple, and X (formerly Twitter).

The stated purpose of these reports is to inform the public about the frequency and nature of government surveillance, enabling stakeholders and advocacy groups to push for greater accountability and legal reform.

A Look at the Data: What the Reports Tell Us

While the format and detail vary by company, these reports reveal important trends about government surveillance.

Google: Google’s reports cover a wide range of issues, including government requests for user information, content removal demands, and security efforts. The data consistently shows that government demands for user data have been steadily increasing on a global scale over the last decade.

Meta: Meta’s Transparency Center provides quarterly reports on government requests for user data across its platforms, including Facebook, Instagram, and WhatsApp. The reports also detail content restrictions based on local law and intellectual property takedowns.

Apple: Apple’s reports are notable for their detailed categorization of requests by type (e.g., device, financial identifier, account) and legal process. In a clear example of using a report to signal a pro-privacy stance, Apple explicitly states that it receives “geofence” warrants (requests for data on all users in a specific area) but has no data to provide in response, as its commitment to privacy means it refrains from collecting that type of detailed location information.

X (formerly Twitter): After a hiatus following its acquisition by Elon Musk, X has resumed publishing transparency reports. The data reveals a stark shift in policy. Despite Musk’s public rhetoric as a “free speech absolutist,” the company’s compliance rate with government legal demands has increased dramatically.

Data from the first half of 2024 showed X complied with about 70% of content takedown requests and disclosed information in 52% of data requests. This is a significant increase from the pre-takeover era, where compliance rates were closer to 40-50%.

Global Digital Sovereignty

The data in these reports also reveals a growing global conflict over digital sovereignty. A large and increasing number of requests come from countries outside the U.S., such as Turkey, India, and Germany.

This puts U.S.-based companies in a difficult position, caught between the free speech principles of the U.S. First Amendment and the often more restrictive laws of the countries where they operate.

The public battles between X and the governments of Brazil and India are high-profile examples of this tension. The trend suggests the erosion of a single, global internet, replaced by a fragmented “splinternet” where user rights depend heavily on geographic location.

What the Reports Don’t Tell Us: The Limits of Transparency

While valuable, transparency reports have significant limitations and do not paint a complete picture of government surveillance.

National Security Gaps: One of the largest gaps concerns national security requests. Under U.S. law, companies are severely restricted in how they can report requests made under the Foreign Intelligence Surveillance Act (FISA) or through National Security Letters (NSLs).

They are only permitted to report the number of such requests they receive in broad, time-delayed ranges (e.g., 0-499, 500-999), a practice mandated by the USA FREEDOM Act of 2015. This makes it impossible to know the true scale of national security surveillance.

Gag Orders: Furthermore, the government can frequently obtain gag orders alongside data requests. These orders legally prohibit a company from notifying the user that their information has been searched or turned over to law enforcement.

These gag orders can last for 90 days or, in some cases, indefinitely, meaning a person may never know their private messages were read by the government.

Inconsistent Standards: Finally, there is no single, legally mandated standard for transparency reporting. This leads to inconsistencies in how different companies categorize and present data, making direct comparisons difficult.

The reports are not just raw data—they are also a form of strategic corporate communication. A company’s decisions about what data to highlight, how to frame it, and the narrative it builds around the numbers are all part of a broader effort to manage public perception, signal corporate values, and navigate a complex and contentious legal environment.

Protecting Your Digital Privacy

Understanding who can read your DMs is the first step in protecting your digital privacy. Here are key takeaways and actionable steps:

Choose Your Tools Wisely

For Maximum Privacy: Use messaging apps with default end-to-end encryption like Signal, WhatsApp, or iMessage for sensitive conversations.

Be Aware of Defaults: Many popular platforms like Instagram DMs, Discord, and standard Telegram chats are not end-to-end encrypted. Assume these can be read by the company and potentially law enforcement.

Enable E2EE When Available: Some platforms offer end-to-end encryption only in certain modes, such as Telegram’s “Secret Chats.” Facebook Messenger now encrypts personal chats end-to-end by default, but group chats and some features may not be covered. Confirm that encryption is actually active before sharing anything sensitive.

Understand Your Context

At Work: Assume no privacy when using company devices, networks, or platforms. Use personal devices on personal networks for truly private communications.

Personal vs. Business: Keep work and personal communications separate to avoid inadvertent corporate surveillance of private conversations.

Stay Informed

The legal landscape of digital privacy is constantly evolving. Supreme Court decisions, new legislation, and corporate policy changes can all affect your privacy rights. Understanding these fundamentals helps you make informed decisions about your digital communications in an increasingly surveilled world.

Your DMs are only as private as the weakest link in the chain—whether that’s the storage location, encryption implementation, corporate policy, or legal framework. By understanding these factors, you can better protect your most sensitive communications in the digital age.

Our articles make government information more accessible. Please consult a qualified professional for financial, legal, or health advice specific to your circumstances.

As a former Boston Globe reporter, nonfiction book author, and experienced freelance writer and editor, Alison reviews GovFacts content to ensure it is up-to-date, useful, and nonpartisan as part of the GovFacts article development and editing process.