A Glossary of
Security and Data Privacy Terminology
What is Static Application Security Testing [SAST]?
Static Application Security Testing (SAST) is a vulnerability scanning technique that focuses on source code, bytecode, or binary code. In general, static program analysis reasons about a program's behavior by analyzing its code without running it.
SAST runs early in the CI pipeline and scans source code, bytecode, or binary code for coding patterns that violate best practices and may cause problems. SAST tools are designed to help developers write more secure code by detecting suspicious constructs, unsafe API usage, and potential runtime errors early on.
SAST tools scan your code for security flaws, such as passwords saved in plain text or data transmitted over an unencrypted connection. They then compare the code against best-practice standards and recommend how to fix the issues.
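As a rough illustration, a SAST rule is often a pattern matched against source text. The sketch below is hypothetical (the rule names and regexes are illustrative, not any real tool's rule set); it flags hardcoded passwords and unencrypted `http://` URLs, the two flaw classes mentioned above:

```python
import re

# Minimal, hypothetical sketch of a SAST-style pattern check: the rule
# names and regexes below are illustrative, not any real tool's rule set.
RULES = {
    "hardcoded-password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    "unencrypted-url": re.compile(r"['\"]http://[^'\"]+['\"]"),
}

def scan_source(source: str):
    """Return (rule_id, line_number, offending_line) for every match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in RULES.items():
            if pattern.search(line):
                findings.append((rule_id, lineno, line.strip()))
    return findings

sample = 'password = "hunter2"\nendpoint = "http://api.example.com/users"\n'
for rule_id, lineno, line in scan_source(sample):
    print(f"line {lineno}: {rule_id}: {line}")
```

Real SAST tools go far beyond regex matching (they build abstract syntax trees and model data flow), but the principle is the same: known bad patterns are matched against code before it ever runs.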
Privacy, Data Security, and SAST
Data protection laws require that data privacy be integrated into a system from beginning to end.
This is known as "Privacy by Design," and it is a legal requirement under the GDPR (GDPR Article 25) to implement appropriate technical and organizational measures that enforce data protection principles and effectively protect individual rights.
The Privacy by Design approach emphasizes proactive rather than reactive measures. This entails foreseeing and preventing breaches of confidentiality before they occur and taking action rather than waiting for privacy threats to manifest.
Fair, transparent, and lawful processing are the cornerstones of data protection laws, and any vulnerabilities in your code during or after production can jeopardize these principles. Static analysis can be a very effective tool for "baking" data protection into your processing activities and business applications from the beginning of the design process, ensuring the system is robust from the start.
This is why Static Application Security Testing can be used not just as a way to test software but also to ensure it is safe and protected by design.
However, static analysis techniques differ; not all test the same aspects of a system. With the right technique, you can create rules that ensure software builds are safe and use static analysis as an effective preventative measure during code analysis.
With Static Application Security Testing (SAST), you can identify problems during the development phase, before anything is checked in. Because SAST runs on source code, it can pinpoint the exact location of a vulnerability, which makes finding and fixing issues more effective.
Since SAST tools apply all of their rules, drawn from a vast database of guidelines, to your codebase, they can detect security vulnerabilities you didn't even know existed.
What is Code Scanning?
Code scanning is one of the tools used to identify potential security issues within an application. Code scanning tools examine the code in your application's current iteration, inspect it for bugs and vulnerabilities, and provide a summary of the findings, which can sometimes be displayed on a dashboard.
Code scanning identifies potential issues developers should address before proceeding with the application development process. This will enable you to address them quickly and increase the security of your application.
Detecting vulnerabilities in an application before it enters the production phase can significantly reduce the risk of security errors and the cost and difficulty of fixing them.
Do you need Code Scanning?
Code scanning is an integral part of an organization's application security program and is essential for regulatory compliance.
According to GDPR, organizations must now determine whether their applications process personal data and take organizational and technical measures to keep this personal data safe. (GDPR Article 32)
For example, if you have multiple central databases accessed by many applications, simply identifying the databases will not be sufficient; a code scan must be performed at the application level.
Applications can process personal data without a database as well. For instance, a piece of source code can read, process, and share data; even if those data are not personal data on their own, they can become personal data when combined with other data. (GDPR Article 4.1)
Is your code privacy compliant?
If your code runs a script that reads personal data and creates various security vulnerabilities, you won't be able to detect it without scanning the code. Code scanning allows you to identify, categorize, and prioritize fixes for existing bugs in your code.
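To give a flavor of the "identify, categorize, and prioritize" step, the hypothetical sketch below takes a list of scan findings and orders them by severity so that issues touching personal data are fixed first. The rule names, severities, and file locations are illustrative, not output from a real scanner:

```python
from dataclasses import dataclass

# Hypothetical sketch: categorize and prioritize scan findings so the
# riskiest personal-data issues lead the fix list. Rule names, severities,
# and locations below are illustrative.
@dataclass
class Finding:
    rule: str
    severity: int  # 1 = low, 2 = medium, 3 = high
    location: str

findings = [
    Finding("verbose-logging-of-user-data", 1, "app.py:88"),
    Finding("plaintext-personal-data-export", 3, "export.py:14"),
    Finding("sql-injection", 3, "db.py:42"),
]

# Sort highest severity first so the riskiest issues are addressed first.
prioritized = sorted(findings, key=lambda f: f.severity, reverse=True)
for f in prioritized:
    print(f"[severity {f.severity}] {f.rule} at {f.location}")
```

A prioritized report like this is also exactly the kind of artifact you would want to produce for the post-breach documentation described below.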
In any investigation following a data breach, submitting code scan results and a report classifying and prioritizing the errors you've identified and the precautions you've taken will demonstrate that you've taken responsibility for securing the data seriously and handled it with care.
This can save you from hefty regulatory penalties and reputation damage.
Code Scanning Approaches
Static Analysis Security Testing (SAST) detects vulnerabilities in the application source code by modeling the application's execution state and applying rules based on common code patterns.
Dynamic Application Security Testing (DAST) uses a library of known attacks on the application to detect vulnerabilities. DAST identifies application vulnerabilities by testing its response to unusual or malicious inputs.
Interactive Analysis Security Testing (IAST) uses instrumentation to view an application's inputs and outputs in the execution state. This runtime visibility enables it to detect unusual behavior that may indicate application vulnerabilities.
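To make the DAST idea concrete, here is a hypothetical, self-contained sketch: a small library of known attack payloads is sent to an application entry point, and responses that reflect the payload verbatim are flagged. The toy `handle_search` function stands in for a running application; real DAST tools probe live HTTP endpoints instead:

```python
# Hypothetical DAST-style sketch: probe an application entry point with a
# small library of known attack payloads and flag suspicious responses.
# The toy handler below stands in for a running application.
ATTACK_PAYLOADS = [
    "' OR '1'='1",                 # SQL injection probe
    "<script>alert(1)</script>",   # cross-site scripting probe
]

def handle_search(query: str) -> str:
    # Toy application under test: it unsafely echoes input back.
    return f"<p>Results for {query}</p>"

def dast_probe(handler):
    findings = []
    for payload in ATTACK_PAYLOADS:
        response = handler(payload)
        # A payload reflected verbatim suggests the input was not sanitized.
        if payload in response:
            findings.append(payload)
    return findings

print(dast_probe(handle_search))
```

Note the contrast with SAST: this check never looks at the source code, only at how the running application responds to malicious input.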
Because creating and distributing software patches after deployment is difficult, fixing vulnerabilities in a deployed application is expensive and time-consuming.
Vulnerabilities that reach production leave your application exposed, meaning your product is not secure and does not meet the expectations of security standards and related regulations. With code scanning, you can take immediate action and fix these vulnerabilities.
What is Data Protection Impact Assessment [DPIA]?
DPIA is a process that helps organizations identify and mitigate privacy risks.
The objective of a DPIA is to identify potential problems in advance, decreasing the likelihood of their occurrence and their associated costs. Organizations can then take appropriate steps to mitigate and manage the identified risks.
GDPR requires a Data Protection Impact Assessment (DPIA) when introducing new data processing processes, systems, or technologies. (GDPR Article 35)
DPIAs are critical for meeting the requirements for "data protection by design" and "data protection by default" as they help demonstrate compliance with data protection principles and the accountability principle. (GDPR Article 5.2, 25)
Conducting DPIAs before implementing or launching a new project involving the processing of personal data can help avoid non-compliance, the potential costs of a claim, and associated reputational damage.
When to do it?
GDPR requires DPIAs to be conducted 'prior to processing.' Therefore, organizations must ensure that no new projects are initiated before a DPIA is considered and, where necessary, conducted. As a result, determining whether a DPIA is required should be done early on as part of project management procedures.
Do I need a DPIA?
Ask yourself: Are you a controller or processor?
The controller is responsible for performing a DPIA.
Processors involved in relevant processing activities are required to assist under their contract with the controller, but they are not required to conduct DPIAs directly.
Ask yourself: What are the nature, scope, context, and purposes of the processing?
A DPIA is mandatory only if there is a high risk to data subjects' rights and freedoms or if otherwise required by law. (GDPR Article 35)
GDPR lists four situations requiring a DPIA:
1) A systematic and extensive evaluation of personal aspects of natural persons based on automated processing, including profiling, that would have legal or other significant effects on the persons.
2) Large-scale processing of special categories of data (Article 9.1) or personal data relating to criminal convictions and offenses (Article 10).
3) Systematic, large-scale public area monitoring.
4) Any processing on a list published by your competent supervisory authority or the European Data Protection Board.
This is a non-exhaustive list, and many other processing activities may require a DPIA; examples are provided in the Guidelines published by the EDPB (formerly the Article 29 Data Protection Working Party) (reference below). Thus, it is essential to determine whether your personal data processing activities fall into one of these categories.
You should consult with a Data Protection Officer (DPO) to identify these activities, as they should have the necessary experience and expertise.
If you do not have access to a DPO, you may contact the supervisory authority instead.
How to conduct DPIA?
Under Article 35(7) of the GDPR and the ICO's Code of Practice, the following steps must be taken:
1-Explain data processing activities and processing purposes
2-Assess the necessity and proportionality of the processing activities in relation to the purposes
3-Evaluate data protection risks
4-Identify measures to address risks
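The four steps above can be tracked per processing activity. As a hypothetical sketch (the step wording mirrors the list above; the tracking structure is an assumption, not anything prescribed by the GDPR or the ICO), a simple checklist makes it easy to see how far each assessment has progressed:

```python
# Hypothetical sketch: tracking the four DPIA steps above as a checklist
# for a processing activity. The tracking structure itself is illustrative.
DPIA_STEPS = [
    "Explain data processing activities and processing purposes",
    "Assess the necessity and proportionality of the processing",
    "Evaluate data protection risks",
    "Identify measures to address risks",
]

def dpia_progress(completed: set) -> float:
    """Fraction of DPIA steps completed so far."""
    return len(completed & set(DPIA_STEPS)) / len(DPIA_STEPS)

done = {
    "Explain data processing activities and processing purposes",
    "Evaluate data protection risks",
}
print(f"{dpia_progress(done):.0%} of DPIA steps complete")
```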
Even if a DPIA is not required for proposed processing activities, organizations must ensure that all proposed activities involving personal data adhere to GDPR principles.
After the DPIA:
Documenting agreed solutions is an essential part of the DPIA process.
Findings will need to be communicated internally, and a plan should be agreed upon for how the proposals will be integrated into the project. Additionally, it is important to follow up with the project team to ensure the agreed-upon changes are implemented and have the desired impact. A completed DPIA can also be used as a post-implementation tool for future data protection audits and updates to the DPIA.
In conclusion, DPIAs should be considered whenever new technologies or processes involving the collection, use, and sharing of personal data emerge or when significant changes are made to existing data processing activities, even if only a portion of these projects are required to conduct a DPIA under the GDPR.
Guidelines on Data Protection Impact Assessment (DPIA) and determining whether the processing is "likely to result in a high risk" for the purposes of Regulation 2016/679-DATA PROTECTION WORKING PARTY- https://ec.europa.eu/newsroom/article29/items/611236
What is Privacy Impact Assessment [PIA]?
Privacy Impact Assessments are used to determine the level of risk that your processing activities pose to individuals' rights and freedoms. Based on the results of this assessment, you evaluate the project's privacy risks and implement appropriate mitigation measures and controls.
In short, PIA is a process that helps organizations identify and minimize the privacy risks of new projects or policies.
How is it different from Data Protection Impact Assessment?
While these terms are frequently used interchangeably, the term DPIA is clearly defined in the GDPR and includes specific elements (specified in article 35) that must be captured when a DPIA is conducted.
While a Data Protection Impact Assessment is a legal requirement (though not mandatory in every case), a Privacy Impact Assessment is not; even so, all organizations that process personal data should integrate privacy impact assessments as a valuable organizational practice.
While a DPIA must be kept in a GDPR-compliant format, a PIA can be kept in a more flexible format. A brief risk analysis or survey, for example, can serve as a privacy impact assessment and can be used to determine whether a DPIA is required.
PIAs are practical tools for identifying privacy risks and accelerating an organization's ability to manage data privacy and privacy processes.
How to do PIA?
PIA can be all-encompassing as it is a flexible process that must consider the balance between the risks and benefits of the processing activity.
Some laws in the United States may require you to conduct a PIA; in that case, each state's laws, practices, and PIA requirements must be addressed separately. PIA is not directly mentioned in the GDPR.
For example, the California Privacy Rights Act (CPRA) establishes a fairly broad threshold for performing a PIA.
Data controllers must balance the risks and benefits of the processing activity and include context, the relationship between the controller and the consumer whose personal data will be processed, reasonable consumer expectations, and anonymized data in their PIAs. It is important that PIAs do not become pointless box-checking exercises.
Using a concise set of screening questions to determine the extent to which a PIA is required can help prioritize projects and maximize the use of limited resources. PIAs can also be used as auxiliary tools in the development of DPIAs.
You can automate this process by requiring project teams to describe their proposed data processing activities at a high level and answer a few key screening questions online.
For those who want to focus on the DPIA requirements of the GDPR, you can limit the screening questions to the high-risk areas defined in the GDPR, ICO lists, and related guidance.
In addition to screening questions, it is advantageous for the project kickoff documents to request fundamental information about the project context and participants. This information can serve as the foundation for descriptions of processing activities, consultations with interested parties, and risk assessments.
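An automated screening flow like the one described above can be sketched in a few lines. This is a hypothetical illustration: the question keys and wording below are assumptions loosely paraphrasing the high-risk triggers of GDPR Article 35, not an official checklist, and a real screening would be reviewed by a DPO:

```python
# Hypothetical screening sketch: a few yes/no questions loosely mapped to
# the high-risk triggers of GDPR Article 35. Question keys and wording are
# illustrative; any "yes" answer suggests a full DPIA is needed.
SCREENING_QUESTIONS = {
    "automated_decisions_with_significant_effects":
        "Does the project make automated decisions, including profiling, "
        "with legal or similarly significant effects on individuals?",
    "large_scale_special_category_data":
        "Does it process special categories of data or criminal-offence "
        "data on a large scale?",
    "systematic_public_monitoring":
        "Does it systematically monitor a publicly accessible area on a "
        "large scale?",
}

def dpia_indicated(answers: dict) -> bool:
    """Return True if any screening answer indicates a high-risk trigger."""
    return any(answers.get(key, False) for key in SCREENING_QUESTIONS)

project_answers = {
    "automated_decisions_with_significant_effects": False,
    "large_scale_special_category_data": True,
    "systematic_public_monitoring": False,
}
print(dpia_indicated(project_answers))
```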
What is Data Flow?
Data flow is the journey data takes through your organization, from the point of collection to any transfers to third parties.
Understanding the data flow allows you to map the data journey and enables businesses to manage and secure their customers' data fairly and securely. Implementing any type of security is difficult without thoroughly understanding the data lifecycle.
Data flow is the tracking of where data flows from source to destination, and it is possible to visualize data flow by asking the following questions about data management processes:
- What data exists?
- Where is it kept?
- Under what conditions is it kept?
- Where is it transferred? (if any)
When you can answer these questions thoroughly, you can safely assume that you have a comprehensive understanding of the data flow within your organization.
Understanding the data flow is a crucial step before performing data mapping and determining the regulations to which you will be subject, particularly when transferring data to third parties (a third country or an international organization).
Components of Data Flows
The data flow has four fundamental components: data items, formats, transfer methods, and locations.
You will be able to build your data map based on those components.
1- Data items are the information itself.
It addresses the question: What information do you have about a data subject? For instance, if the transaction uses only one person's address, that address will be the transaction's data item.
2- Formats are the state in which data items are stored.
You can fully comprehend the data flow by identifying all the actual data storage formats you utilize.
3- Transfer methods explain how physical or electronic data items are moved from one location to another.
E-mail, fax, or cloud storage? At this point, data flow takes on a physical form.
4- Locations are the places where data is stored and processed.
Data servers, cloud servers, portable hard drives, or any other physical location? It is critical to answer this question so that data can be found quickly when needed.
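The four components above lend themselves to a simple record type for building a data map. The sketch below is hypothetical: the `DataFlow` structure and the example flows are illustrative assumptions, not a prescribed data-mapping schema:

```python
from dataclasses import dataclass

# Hypothetical sketch: a record type for the four data flow components
# described above. The example flows are illustrative.
@dataclass
class DataFlow:
    data_items: list      # what information, e.g. a customer address
    data_format: str      # how it is stored, e.g. a database row
    transfer_method: str  # how it moves, e.g. e-mail or an API call
    location: str         # where it is stored and processed

flows = [
    DataFlow(["customer address"], "database row",
             "TLS-encrypted API call", "EU cloud server"),
    DataFlow(["invoice PDF"], "file",
             "e-mail attachment", "on-premises file server"),
]

# A simple data-map view: which items live where, and how they move.
for flow in flows:
    print(f"{', '.join(flow.data_items)} -> {flow.location} "
          f"via {flow.transfer_method}")
```

Even a plain inventory like this answers the four questions listed earlier (what data exists, where it is kept, under what conditions, and where it is transferred), which is the starting point for data mapping.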
In conclusion, when we talk about data flow, we usually mean the movement of data from the point of data collection to third parties throughout the organization. The first step in safeguarding this data is comprehending the term "Data Flow" and visualizing its movement. You can start by visualizing the data flow and tracing its path from the source to the final transfer point.
*Reference: IT GOVERNANCE PRIVACY TEAM. (2020). EU General Data Protection Regulation (GDPR) – An implementation and compliance guide, fourth edition. IT Governance Publishing. https://doi.org/10.2307/j.ctv17f12pc (Data Mapping-Page 191,192)