Written by Abdul Rafay

Spilling the Tea: How a "Hack" Exposed Millions, and Why Your Backend Matters More Than Ever

If you haven’t been following, Tea is an app designed for women to share insights about men they’ve dated. Regardless of your personal feelings about the app’s premise (and trust me, we all have them!), the security snafu that hit them is an absolute masterclass in “what not to do.”

While many are calling it a simple data leak, I’m unapologetically calling it a hack. And no, it wasn’t some fancy zero-day exploit or “vibe coding” gone wrong. It was, quite simply, a symphony of bad design decisions that turned a simple file bucket into a public square of personal data. And spoiler alert: Firebase plays a starring role in this drama.

Let’s uncork this bottle of un-tea-fied chaos.

What Went Down? The “Tea” on the Breach

First, the facts. To ensure only women could sign up for Tea, the app required users to submit identity verification photos or IDs. These highly sensitive files (we’re talking faces, government IDs, and even raw metadata like location embedded by phone cameras!) were stored in a public Firebase bucket. And then, well, someone found all the URLs, saved every file, and poof – hundreds of thousands of people’s private data was exposed to the world.

To add insult to injury, this specific bucket hadn’t even been in active use since mid-2024, meaning it primarily affected older users who thought their data was safe. And just when you thought it couldn’t get worse, a second breach exposed all their in-app message data, forcing them to shut down the messaging feature entirely.

The chilling takeaway? Whether you love or hate this app, no one expects their identity documents or private conversations to be laid bare for the internet to feast upon. This is a fundamental betrayal of user trust.

The Unlocked Door Fallacy: Why It’s Still a Hack

Now, let’s address the elephant in the room: “But the bucket was public! It wasn’t a hack!”

My take? That’s just silly. We use the term “hack” similarly to “steal.” If someone leaves their front door unlocked, and a person walks in, rummages through their belongings, and takes their valuables, did they not steal? Absolutely! They just didn’t have to pick a lock.

The same logic applies here. Yes, the files were on publicly accessible URLs. But those URLs are usually randomly generated (like UUIDs) and virtually impossible to guess en masse. You can’t just “sniff” every public URL on the internet. To get all the files, the perpetrators needed an index—a list of every single URL. And guess what? They got that index by hitting an exposed Firebase API endpoint that conveniently listed all the data in the bucket.

This is the crucial distinction: Exposing individual public URLs isn’t ideal, but it’s often a design choice (like unlisted YouTube videos). Exposing an endpoint that provides a comprehensive list of all those URLs? That, my friends, is a fundamental security flaw that was exploited. They didn’t “break in” through a strong door, but they sure as heck walked through an open garage, found the keys to the house, and took everything. That’s a hack in my book.
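
To make the distinction concrete, here is a rough sketch of what that kind of enumeration looks like. The exact endpoint Tea exposed hasn’t been published in detail, so this uses Firebase Storage’s standard REST listing URL against a hypothetical bucket, and it assumes the storage rules allow unauthenticated list and read, which is exactly the misconfiguration at issue.

// Minimal sketch (TypeScript, Node 18+ for global fetch). The bucket name
// is hypothetical. If rules permit public listing, a single request yields
// an index of every object name, and each name becomes a direct URL.
const BUCKET = "example-app.appspot.com";

async function enumerateBucket(): Promise<string[]> {
  // Firebase Storage's REST listing endpoint (real listings paginate via
  // nextPageToken; omitted here to keep the sketch short).
  const res = await fetch(`https://firebasestorage.googleapis.com/v0/b/${BUCKET}/o`);
  if (!res.ok) throw new Error(`Listing failed: ${res.status}`);
  const body = await res.json();

  // Turn each listed object name into a downloadable URL; no guessing needed.
  return (body.items ?? []).map(
    (item: { name: string }) =>
      `https://firebasestorage.googleapis.com/v0/b/${BUCKET}/o/` +
      `${encodeURIComponent(item.name)}?alt=media`
  );
}

One listing call is the difference between “unguessable” URLs and a complete index of the bucket.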

The Root Cause: When Your Database Becomes a Public Square

So, how did this happen? It all boils down to an architectural philosophy that, frankly, sends shivers down my spine: allowing direct client-to-database access without a proper API layer.

Let’s visualize the difference:

Diagram 1: The Secure API Model

graph TD
    User["User (Client)"] <--> API["API / Server (Auth & Validation)"]
    API <--> DB["Database"]

    subgraph Explanation
        direction LR
        S1["How it works:"] --> S2("1. User sends a specific request (e.g., /getProfile?id=XYZ) to the API.")
        S2 --> S3("2. The API, written by developers, validates the user's identity and permissions.")
        S3 --> S4("3. If authorized, the API constructs a secure query to the database.")
        S4 --> S5("4. The database returns only the requested data to the API.")
        S5 --> S6("5. The API processes/reshapes the data and sends it back to the user.")
        S6 --> S7("Security: Logic is centralized in the API. Developers explicitly write code for authorization.")
    end
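
If that flow feels abstract, here is a minimal sketch of Diagram 1 as a single endpoint. The verifyAuthToken helper and db client are hypothetical stand-ins for your real auth provider and database; the point is that identification, authorization, and response shaping all live in code you explicitly wrote.

import express from "express";

// Hypothetical stand-ins for your real auth provider and database client,
// stubbed out for this sketch.
async function verifyAuthToken(token: string): Promise<{ id: string } | null> {
  // ...verify a session token or JWT with your auth provider.
  return null;
}
const db = {
  profiles: {
    async findById(id: string): Promise<{ id: string; displayName: string } | null> {
      // ...fetch the row/document from your actual database.
      return null;
    },
  },
};

const app = express();

app.get("/getProfile", async (req, res) => {
  // 1. Identify the caller: no valid token, no data.
  const token = req.headers.authorization?.replace("Bearer ", "");
  const user = token ? await verifyAuthToken(token) : null;
  if (!user) return res.status(401).json({ error: "Not signed in" });

  // 2. Authorize: users may only read their own profile.
  const requestedId = String(req.query.id);
  if (requestedId !== user.id) return res.status(403).json({ error: "Forbidden" });

  // 3. Query the database server-side, with parameters the server controls.
  const profile = await db.profiles.findById(requestedId);
  if (!profile) return res.status(404).json({ error: "Not found" });

  // 4. Reshape the response so only the intended fields ever leave the server.
  res.json({ id: profile.id, displayName: profile.displayName });
});

app.listen(3000);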

Diagram 2: The Direct Database Access Model (Firebase / Bad Pattern)

graph TD
    User["User (Client)"] --> DB["Database (Firebase/NoSQL)<br>(Rules / Permissions<br>-- Often Missing/Misconfigured)"]

    subgraph Explanation
        direction LR
        S1["How it works:"] --> S2["1. User's client-side code directly queries the database (e.g., db.collection('profiles').get())."]
        S2 --> S3["2. The database (theoretically) applies rules to determine access."]
        S3 --> S4["3. The database sends data directly back to the user."]
        S4 --> S5["Security: Relies heavily on complex, declarative rules that are easy to misconfigure or overlook, often 'exposed by default.'"]
    end

Firebase: A Convenient Trap?

Firebase, being a “backend as a service” provider, simplifies app development by handling databases, file storage, authentication, and more. It gained popularity, especially among mobile developers, because it offered a seemingly elegant solution to avoid writing server-side code. “Why build an API,” the thinking goes, “when the user can just query the database directly?”

This is where the convenience becomes a trap. In a traditional API, you’re forced to write code that explicitly defines how data is accessed and who can access it. When you’re building an endpoint like /getProfile, it’s glaringly obvious that you need to add authentication and authorization checks. It’s part of the coding process.

With Firebase (and similar services that promote direct client access), you’re often given a database that is “exposed by default.” You then layer on rules or permissions to restrict access. The problem? It’s much easier to forget to restrict something, or to misconfigure a complex rule, than it is to forget to write the code that grants access in the first place. This leads to common misconfigurations where an endpoint that should only show your profile data ends up revealing everyone’s, or worse, listing every single file in a storage bucket.
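
To see why this bites so easily, here is a hedged sketch using the Firebase Web SDK (v9 modular API). It assumes the project’s Firestore rules contain a copy-pasted allow read: if true; rule, and the project config and profiles collection are made up for illustration.

import { initializeApp } from "firebase/app";
import { getFirestore, doc, getDoc, collection, getDocs } from "firebase/firestore";

// Hypothetical project config and collection names, purely for illustration.
const app = initializeApp({ projectId: "example-app" });
const db = getFirestore(app);

// What the developer had in mind: read my own profile document.
const mine = await getDoc(doc(db, "profiles", "my-user-id"));

// What an overly broad rule (e.g., allow read: if true;) also permits:
// reading everyone's profiles. No application code stands in the way;
// only the declarative rule could have stopped this.
const everyone = await getDocs(collection(db, "profiles"));
console.log(`Fetched ${everyone.size} profile documents`);

In the API model, forgetting a check produces an endpoint that returns nothing; here, forgetting a rule produces a database that returns everything.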

Learning from the Spill: How to Brew Secure Apps

The core lesson from the Tea App fiasco is simple, yet often overlooked: All data access should be explicitly defined in your source code, on a server.

If you’re building an application, you should be defining specific API endpoints (functions, mutations, queries) that act as the gatekeepers to your data. This forces you to think about who is making each request, what they’re authorized to see, and exactly which fields ever leave your server.

Platforms like Convex (a personal favorite, full disclosure!) mandate this approach. You can’t directly query the database from the client. Instead, you define explicit queries and mutations (as actual functions in your codebase) that mediate access, often with built-in authentication helpers. This makes security an integral part of development, not an afterthought.
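
For a flavor of what that looks like, here is a rough sketch of a Convex query function. The profiles table and by_user index are assumptions for illustration, but the overall shape, an exported function whose handler performs the auth check, follows Convex’s documented pattern.

// convex/profiles.ts: data access defined as explicit, reviewable code.
import { query } from "./_generated/server";

export const getMyProfile = query({
  args: {},
  handler: async (ctx) => {
    // Authentication lives in the function body, so skipping it is visible
    // in code review rather than buried in a rules file.
    const identity = await ctx.auth.getUserIdentity();
    if (identity === null) {
      throw new Error("Not signed in");
    }

    // Only the caller's own profile ever leaves the server.
    return await ctx.db
      .query("profiles")
      .withIndex("by_user", (q) => q.eq("tokenIdentifier", identity.tokenIdentifier))
      .unique();
  },
});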

Beyond the Mobile Bubble: A Call for Full-Stack Sanity

A big reason issues like this persist, in my opinion, is a divide in the developer world. There’s a tendency, especially among some mobile developers (and I say this as a former mobile dev!), to have an almost phobic aversion to anything server-side. Building “real” backend infrastructure often gets pushed aside for the convenience of services like Firebase, which promise to remove the need to “think about infra.”

But guess what? If you’re not thinking about your infrastructure, you’re leaving a gaping hole for disaster. The “Tea App” isn’t an isolated incident; stories abound of companies, even major fast-food chains, getting “Fireponed” because mobile teams built their backends with publicly exposed databases.

My advice for every developer, regardless of your specialty: Break out of your bubble!

Understanding the entire stack — from client to API to database — makes you a more valuable, more secure, and frankly, a more complete engineer. The average mobile dev can sometimes be indistinguishable from a “vibe coder” when it comes to server knowledge, and that’s a problem we need to fix.

Key Takeaways (The Strong Brew)

Here’s the concentrated essence of what we’ve discussed:

  1. Write Your Data Access Code: Always define server-side functions or API endpoints that control how data is accessed from your database. Don’t rely solely on declarative database rules.
  2. Beware of “Exposed by Default”: If a service encourages direct client-to-database access, or if data is exposed by default, proceed with extreme caution. It’s a disaster waiting to happen.
  3. Broaden Your Horizons: Don’t limit yourself to one part of the stack. A deeper understanding of how front-end, back-end, and databases interact is crucial for building secure and robust applications.

This hack wasn’t fancy. It was a basic failure of architectural understanding and secure defaults. Let this be a bitter lesson that we all take to heart.

What do you think? Was I too harsh on Firebase, or did I hit the nail on the head? Let me know your thoughts in the comments!

Until next time, Nerd… Keep Coding
