Don't Be Your Own Privacy Nemesis: Harnessing Red Team Testing to Identify Internal Software Risks
Imagine this: you've spent months meticulously crafting your organization's privacy policies, ensuring they adhere to the strictest regulations and protect sensitive data with utmost care. Yet, amidst the whirlwind of data collection, processing, and storage, a subtle oversight creeps in, a hidden vulnerability that could expose your company to reputational damage and even legal repercussions.
This is where privacy red teams come in. Simply put, privacy red teams are ethical engineers who test whether a system makes good on its privacy statements to users. In five years of running privacy red team exercises, I’ve seen firsthand that they can uncover subtle bugs and weaknesses in an organization’s privacy posture. But how do they work? Which adversaries should you worry about? I’ll shed a little light on privacy red teams and provide a simple walk-through of an exercise you can run now.
A common question I receive about privacy red teams is how they differ from security red teams. While there are similarities, a significant difference is that privacy concerns extend beyond deliberate malicious adversaries. (Learn more about privacy adversaries here.) We must also address the possibility of self-inflicted privacy breaches.
Companies can inadvertently harm their own privacy posture, just like a soccer (football) player accidentally scoring against their own team, known as an "own goal." This can stem from a series of mistakes, such as when a player passes the ball back to their goalkeeper, who mishandles it, resulting in a goal against their own team. Oops. Own goals can also happen when a single player gets confused about which end of the court or field they are on, as several professional basketball and American football players have done. These kinds of mistakes can happen in privacy too.
Like a bad back-pass to a goalkeeper, privacy mistakes can happen when work is handed off between different teams. Imagine a mismatch between how the data governance or legal team thought a feature should work and how the development team interpreted it. One team thinks some data is being deleted; the other never implements deletion. As a result, the privacy notices or settings misrepresent how data is actually handled and retained.
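To make the deletion mismatch concrete, here is a minimal, hypothetical sketch in Python. The store names, `create_user`, and `delete_user` are all illustrative assumptions, not any real system's API; the point is that a deletion promise must hold in *every* place the data landed, not just the primary store.

```python
# Hypothetical sketch: a deletion promise that only half-holds.
user_db = {}
analytics_copy = {}  # a second copy one team forgot to mention to the other


def create_user(user_id: str, email: str) -> None:
    """Store the user record -- and silently duplicate it for analytics."""
    user_db[user_id] = {"email": email}
    analytics_copy[user_id] = {"email": email}


def delete_user(user_id: str) -> None:
    """Delete the user from the primary store.

    Bug matching the scenario above: the analytics copy is never cleaned up,
    so the privacy notice's deletion promise is only partially implemented.
    """
    user_db.pop(user_id, None)


create_user("u1", "alice@example.com")
delete_user("u1")

print("u1" in user_db)         # the primary record looks deleted...
print("u1" in analytics_copy)  # ...but a copy survives
```

A privacy red team test for this promise would enumerate every data store a record can reach and verify the record is gone from all of them after deletion, not just the one the deletion endpoint touches.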
Professional athletes make mistakes, and so do the humans implementing your privacy program. I’ve seen unintended data collection that manual data discovery missed entirely. All sorts of unexpected data can end up in your logs or your database without the privacy program being aware of it. Because of these kinds of mishaps, privacy red team testing typically uses a broad definition of “adversary,” one that includes user error, company error, and software bugs.
Let me give you an example:
Imagine you have an API with a function that should take just two parameters. A programmer accidentally includes an optional third parameter. Let’s pretend that parameter is something sensitive, like “password”. In practice, your company’s code simply ignores the parameter if it is sent. Since it is never used, this parameter is never covered in documentation or descriptions of the product, and is never covered in manual data discovery (learn more about data discovery in this blog post).
Meanwhile, anyone calling the API can happily send you passwords. All the calls to the API get logged somewhere, and since you don’t expect passwords in the logs, you have inadvertently collected a pile of passwords and stored them in an insecure place. Yikes! Worse, to anyone outside your company (like a malicious adversary or a curious customer), it looks like you are collecting passwords without being transparent about it.
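The accidental-parameter scenario can be sketched in a few lines of Python. The handler name, field names, and log sink below are hypothetical, invented for illustration; the mechanism is the real point: the code never *uses* the stray parameter, but the request logger faithfully records it anyway.

```python
import json

# Stand-in for your real log sink (a file, a log aggregator, etc.).
request_log = []


def handle_request(params: dict) -> dict:
    """Handle an API call that should take only "username" and "action"."""
    # Log the raw request first, as many services do.
    request_log.append(json.dumps(params))

    username = params["username"]
    action = params["action"]
    # Any extra "password" field is silently ignored here --
    # never used, so never documented, so never discovered.
    return {"user": username, "did": action}


# A client that (wrongly) believes the password is required:
handle_request({"username": "alice", "action": "login", "password": "hunter2"})

# The password never influenced the response, but it now sits in the logs:
print(any("hunter2" in entry for entry in request_log))
```

Nothing in the handler's behavior hints at the problem; only inspecting what was actually logged reveals it, which is exactly the kind of check a privacy red team performs.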
Privacy red teams can help look for these types of mistakes in three steps. Here is a framework for doing so. There are more details on privacy red teams in this Privado webinar, so here I just provide an overview of how it could work.
- Review the privacy promise you make to your customers, for example in your privacy notice or privacy settings.
- Find out where you implement that promise. It’s often non-trivial to pinpoint exactly where the promise is implemented. Linking a privacy notice to the actual data-handling code can be difficult and may require a privacy engineer. Automated privacy code scanning can help with this.
- Check whether you uphold that promise through a technical test. With the API example, check what data is actually being sent over the network. You might use a packet-capture tool (such as Wireshark), run a machine-in-the-middle attack, or review the logs. Check what data you are sending and what data is coming to you. Does it match what you told your customers? If not, you have a potential privacy risk, and you should address it.
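The technical test in the last step can be as simple as comparing the fields observed on the wire against the fields your privacy notice discloses. Here is a minimal Python sketch of that comparison; the field names and the "documented" set are assumptions for illustration, and in practice the captured requests would come from a packet-capture tool or your logs.

```python
# What you tell customers you collect (per the privacy notice).
DOCUMENTED_FIELDS = {"username", "action"}


def find_undocumented_fields(captured_requests: list[dict]) -> set[str]:
    """Return every field seen in traffic that the privacy notice never mentions."""
    seen = set()
    for request in captured_requests:
        seen.update(request.keys())
    return seen - DOCUMENTED_FIELDS


# Requests captured from the network or pulled from request logs:
captured = [
    {"username": "alice", "action": "login"},
    {"username": "bob", "action": "login", "password": "hunter2"},  # uh oh
]

print(find_undocumented_fields(captured))  # {'password'}
```

An empty result means observed traffic matches the promise; any field in the output is a potential gap between what you say and what you do, and warrants investigation.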
Privacy red teams are one way to check whether you are upholding your promises to your customer, and doing what you say you will do. What you do and what you say you do need to be consistent. Tools such as privacy code scanning that provide maps of data stores and data flows can make privacy red team exercises more efficient.
In summary, privacy red teams can help test against malicious attacks as well as “own goals.” I described two potential own goals in privacy programs: miscommunication and programming errors, both of which create a mismatch between what you tell customers you are doing and what you are actually doing. You can test for these types of errors with a simple three-step framework: review your privacy promise, identify where you implement it, and run a technical test to see whether that promise holds in practice. You are now equipped to run your own privacy red team test!
Rebecca is a privacy engineer consultant who helps organizations measure and assess their data protection.