What Is Data Security? | Palo Alto Networks

Data security is a set of measures designed to protect valuable and sensitive data from unauthorized access, disclosure, alteration and loss. From the physical protection of hardware to the logical security of software and establishment of internal policies, data security covers the entirety of information security.

Data security basics involve:

  • Identification and classification of digital information based on its sensitivity
  • Establishment of access controls and encryption mechanisms
  • Implementation of proper authentication and authorization processes
  • Adoption of secure storage and transmission methods
  • Continuous monitoring and detection of potential security incidents

Video 1: Michael Sieper, Senior Security Engineer at Personio, discusses the overarching goal to protect sensitive information from unauthorized access.

Data Security Explained

Data security is paramount today, where the volume and value of data continue to grow exponentially. It encompasses the strategies, controls and technologies used to safeguard private and personal data. At its core, data security aims to preserve the confidentiality, integrity and availability of data throughout its lifecycle.

  • Confidentiality ensures that data is accessible only to authorized individuals or systems, preventing unauthorized disclosure.
  • Integrity ensures that data remains accurate, complete and unaltered, protecting against unauthorized modifications.
  • Availability ensures that data is accessible to authorized users when needed, preventing disruptions to critical operations.

By mitigating risks and preventing data leaks, the entities responsible for securing data maintain the trust and privacy of the individuals and organizations whose data they hold. It takes but one data breach, however, to lose that trust. Such is the nature of data security.

Evolution of Data Storage

Data security forms the bedrock of the modern technological ecosystem. As businesses, governments and individuals migrate vast amounts of information to digital platforms, the need to secure data from unauthorized access and threats arises in tandem. Over the years, data security has evolved from a focus on securing on-premises systems to safeguarding complex, distributed cloud environments.

Differences Between On-Prem and Cloud Data

The differences between traditional on-premises data security and cloud data security lie primarily in control and visibility. In an on-premises setup, organizations maintain complete control over their infrastructure and data. They can physically secure their servers and directly manage their network security, access controls and data encryption.

In contrast, cloud data security involves handing over partial control to a cloud service provider (CSP). Organizations using cloud services give up a degree of physical and direct control over their infrastructure, which can make visibility into security issues difficult.

The composite nature of multicloud and hybrid cloud settings elevates the complexity of security management. Each cloud platform likely has proprietary security controls and features, the combination of which challenges consistent security enforcement.

Data Considerations for Cloud Deployments

Public Cloud

In the public cloud, a third-party CSP owns and manages the infrastructure. The provider maintains the hardware, software and other infrastructure components. Examples of public cloud providers include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Data security in the public cloud requires understanding the security measures implemented by the provider and ensuring adequate protection of data within their environment.

Private Cloud

A private cloud is dedicated to a single organization. The infrastructure can be located on-premises or hosted by a third-party service provider. In a private cloud, the organization has greater control and visibility over their data but also assumes full responsibility for securing it.

Hybrid Cloud

A hybrid cloud combines elements of both public and private clouds. Data and applications can move between these environments, and organizations must ensure the protection of data as it moves. This requires organizations to apply consistent security policies across both environments.

Multicloud

Multicloud environments offer organizations numerous benefits, but they require organizations to protect data across multiple environments, each with its own, often incompatible, security tools and controls.

The Data Lifecycle

Data, in its various forms, serves as the lifeblood of an organization. It spans user data, customer data, and sensitive data, including personal information and personally identifiable information (PII). Each type of data carries its own sensitivity and value and requires an appropriate level of protection.

Data Discovery and Classification

In the context of cloud computing, the data lifecycle begins with data discovery, which involves identifying data at its inception or as it enters the system. Next, data classification categorizes data based on its sensitivity and the level of protection required. Classification can range from public data requiring minimal protection to highly confidential data requiring the highest level of security.
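Discovery and classification are often automated with pattern scanning. As a minimal sketch, the patterns and labels below are illustrative assumptions, not a production-grade classifier:

```python
import re

# Hypothetical PII detectors; real classifiers add validation and context.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(text: str) -> str:
    """Label text 'confidential' if any PII pattern matches, else 'public'."""
    hits = [label for label, rx in PATTERNS.items() if rx.search(text)]
    return "confidential" if hits else "public"
```

Real discovery tools combine such pattern matching with validation (e.g. checksum tests on card numbers) and machine learning to reduce false positives.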

Data Storage

Data in the cloud can be stored in various forms and locations, such as databases, data lakes or storage services provided by cloud providers. The term 'cloud storage' often refers to scalable, high-availability storage systems like Amazon S3, Google Cloud Storage or Azure Blob Storage. These services store data across multiple devices and locations, providing redundancy and resilience.

Data centers, the physical locations where cloud providers house their servers, play a critical role in cloud data security. These facilities feature rigorous physical security measures, including access controls and surveillance, to protect the hardware that stores and processes data.

Data Governance

Data governance involves a set of processes and technologies that provide a holistic approach to managing, improving and maintaining data. It ensures that data remains accurate, consistent, secure and legally compliant throughout its lifecycle.

During the governance stage, organizations define and enforce data usage policies that outline how data should be accessed, shared and used. This data security policy determines who has permission to access certain data, specifying the purposes for which data can be used and establishing guidelines for data sharing and collaboration. Data usage governance ensures that data is used in a secure, compliant and ethical manner, aligning with regulatory requirements and internal best practices.

Data Erasure

The last step in the data lifecycle involves data erasure, or data deletion. This essential step refers to the process of securely removing data from storage media, ensuring that it can’t be recovered. Data erasure is particularly important when decommissioning hardware or responding to a request to delete personal data.

Threats to Modern Data Security

Cloud data security faces an array of threats. Building proper defenses begins with an understanding of them.

Data Breaches

Data breaches can result from several factors, including weak passwords, misconfigured security settings, vulnerable APIs or backend databases, and phishing attacks. These incidents can involve significant financial loss and damage to an organization's reputation.

Cyberattacks

Cyberattacks come in many forms. SQL injection attacks exploit vulnerabilities in web applications that don’t properly validate user input. Attackers inject malicious SQL commands into input fields, enabling them to access, modify or delete data stored in databases. Ransomware attacks, among the best known, center on critical data: attackers encrypt an organization’s data and demand a ransom for its release, often bringing business operations to a standstill.
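The SQL injection risk described above is conventionally mitigated with parameterized queries. A minimal sketch using Python's built-in sqlite3 module (the table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# UNSAFE pattern: string formatting splices attacker input into the SQL text.
# query = f"SELECT role FROM users WHERE name = '{user_input}'"

# SAFE: a parameterized query treats the input strictly as data, never as SQL.
row = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchone()
print(row)  # None — the payload matches no user instead of dumping the table
```

The same placeholder mechanism (with driver-specific syntax) exists in every major database client library.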

In zero-day exploits, attackers gain unauthorized access, bypass security measures or execute malicious code via a software or system vulnerability that hasn’t been patched because it remains unknown to the vendor. Attackers can also attempt to hijack cloud accounts through credential stuffing attacks. Once inside, they’re free to access sensitive data, alter configurations, move laterally and disrupt service operations.

Malware

Malware attacks can disrupt operations, gain unauthorized access to systems, compromise data integrity, and steal sensitive data. This malicious software includes worms, Trojans, ransomware and spyware. Antimalware measures are essential in a comprehensive security strategy.

Video 2: Customer data in the shared responsibility model is QlikTech’s primary concern. See how they secure container development with AWS and Prisma Cloud.

Cloud Architectures and Data Security

The responsibility for data security shifts depending on the cloud service model. Regardless of the model, though, users retain a level of responsibility, especially with data management and access control. This makes understanding the shared responsibility model essential for securing data in a cloud service model.

Infrastructure as a Service (IaaS) Security

IaaS provides users with the highest level of control over their infrastructure, which also means they carry a significant share of the security responsibilities. Users assume responsibility for securing the operating systems, applications and data they deploy on the IaaS platform.

Ensuring database security is of utmost importance in IaaS environments. Users must properly configure their databases, control access and encrypt all data. Conducting regular security audits and implementing continuous monitoring can aid in detecting unauthorized access or identifying anomalous activity.

Platform as a Service (PaaS) Security

With PaaS, the cloud provider manages the underlying infrastructure and runtime environment, while users are responsible for the applications they deploy and the data those applications handle.

Users must ensure that the applications they develop adhere to best security practices, such as secure coding standards and vulnerability testing. They should encrypt data and implement access controls at the application level. Users should also understand the data backup and recovery processes provided by the PaaS provider.

Software as a Service (SaaS) Security

In a SaaS model, the cloud provider manages the infrastructure, runtime environment and applications. Users primarily manage the data they store and transmit via the SaaS applications.

Users should focus on access control, ensuring only authorized users can access sensitive data. They should also verify the data protection measures the SaaS provider has in place, such as encryption standards and compliance certifications. Understanding the provider's data breach notification policy is also crucial.

Access Control: The Cornerstone of Data Security

Access control serves as the foundational pillar of data security. By determining who’s allowed access to data and under what circumstances, it provides the initial line of defense. It not only prevents unauthorized access but also enforces the principle of least privilege, granting individuals access only to the data and resources necessary for their roles. This principle minimizes the risk of insider threats, accidental data leaks and unauthorized modifications.

Access Control Models

Access control models limit access to sensitive data to reduce the risk of both accidental and deliberate data leaks. Two widely used models are role-based access control (RBAC) and attribute-based access control (ABAC). RBAC grants access rights based on the user's role within the organization. ABAC, by comparison, grants access rights based on attributes such as user location, time of access and type of data accessed.

Depending on the complexity of the access control requirements and the need for fine-grained access control, organizations may choose to combine RBAC and ABAC to capitalize on their respective strengths.
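A combined RBAC-plus-ABAC check can be sketched in a few lines; the roles, permissions and attribute rules below are illustrative assumptions, not a real policy:

```python
from datetime import time

# Hypothetical role-to-permission mapping (the RBAC side).
ROLE_PERMISSIONS = {
    "analyst": {"read"},
    "admin": {"read", "write", "delete"},
}

def rbac_allows(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

def abac_allows(attrs: dict) -> bool:
    # Example attribute rule: office network only, business hours only.
    in_hours = time(9, 0) <= attrs["time"] <= time(17, 0)
    return attrs["location"] == "office" and in_hours

def can_access(role: str, action: str, attrs: dict) -> bool:
    # Combined model: both the role check and the attribute check must pass.
    return rbac_allows(role, action) and abac_allows(attrs)
```

Layering the checks this way lets the coarse role model stay simple while the attribute rules handle context such as location and time.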

Authentication

Users must authenticate their identity before accessing data. The authentication process often relies on username and password but can incorporate additional factors for increased security. Two-factor and multifactor authentication methods ask users to present two or more pieces of evidence to verify their identity, making it harder for unauthorized users to gain access.
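Time-based one-time passwords (TOTP, RFC 6238) are a common second factor. The algorithm fits in a few lines of standard-library Python:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (a common second factor)."""
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

An authenticator app and the server each derive the same code from a shared secret and the current 30-second window, so a stolen password alone is not enough to authenticate.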

Authorization

After authentication, the system determines the user's permissions. Authorization defines what authenticated users can do, such as granting read, write or delete access to specific data or resources.

Data Access Policies

Clear data security policies should dictate data access within organizations, specifying who can access data and for what purpose. Regular reviews and updates should keep these policies in line with organizational changes or regulatory shifts.

Access Management Tools

Numerous tools aid in managing access to cloud data. These tools automate the process of granting, updating and revoking access rights, simplifying management of large numbers of users and resources.

Monitoring and Auditing

Regular monitoring and auditing of access events help maintain security and spot potential issues before they escalate. Cloud providers usually offer tools for logging and monitoring access events, while third-party solutions can offer extended capabilities.
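One simple form of access monitoring is thresholding on failed events. The log records and alert threshold below are illustrative assumptions:

```python
from collections import Counter

# Hypothetical access-log records; field names are assumptions.
events = [
    {"user": "alice", "action": "login", "success": True},
    {"user": "mallory", "action": "login", "success": False},
    {"user": "mallory", "action": "login", "success": False},
    {"user": "mallory", "action": "login", "success": False},
    {"user": "bob", "action": "login", "success": False},
]

THRESHOLD = 3  # assumed alerting threshold for repeated failures

failures = Counter(e["user"] for e in events if not e["success"])
alerts = [user for user, count in failures.items() if count >= THRESHOLD]
print(alerts)  # ['mallory']
```

Production monitoring adds time windows, per-source baselines and correlation across event types, but the principle of alerting on anomalous access patterns is the same.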

In the context of regulations, access control and management become even more critical. Compliance with standards like HIPAA, GDPR and the California Consumer Privacy Act (CCPA) requires vigorous access controls to ensure the adequate protection of sensitive information, such as health data or personally identifiable information.

Compliance Regulations

Ensuring compliance with data protection regulations is no easy feat. But superimpose data protection regulations on a global infrastructure involving cross-border data transfers — and compliance becomes a moving target.

Cloud services often involve data dispersal across servers located throughout the world. And different jurisdictions introduce complexities related to data sovereignty. Organizations need to understand the data protection laws imposed by jurisdictions involved with cross-border data transfers.

General Data Protection Regulation (GDPR)

The GDPR imposes requirements for the protection of personal data of European Union (EU) residents. When transferring personal data outside the EU, organizations must ensure an adequate level of data protection in the destination country, considering factors such as privacy laws, data security practices and the existence of binding corporate rules or standard contractual clauses.

Asia-Pacific Economic Cooperation (APEC) Privacy Framework

The APEC Privacy Framework governs the collection, use and transfer of personal information to promote privacy protections and facilitate cross-border data flows within the Asia-Pacific region. It provides a framework for businesses and organizations to handle personal information responsibly and in compliance with privacy laws.

California Consumer Privacy Act (CCPA)

The CCPA grants consumers in California enhanced privacy rights and imposes obligations on businesses that handle their personal information. When transferring personal information outside of California, organizations must ensure compliance with CCPA requirements, including the obligation to inform consumers about cross-border transfers and potential risks.

Health Insurance Portability and Accountability Act (HIPAA)

HIPAA establishes data privacy and data security standards for protected health information (PHI) in the healthcare industry. Organizations subject to HIPAA must consider implications when transferring PHI across borders, ensuring compliance with the HIPAA Privacy Rule and Security Rule, even if data is stored in cloud environments.

Payment Card Industry Data Security Standard (PCI DSS)

PCI DSS governs the protection of cardholder data to maintain the security of payment card transactions. Organizations that process, store, or transmit cardholder data must adhere to PCI DSS requirements when transferring such data across borders, ensuring that appropriate security measures are in place to protect the data during transit and at rest.

Sarbanes-Oxley Act (SOX)

SOX establishes financial reporting requirements for public companies to protect investors and maintain the integrity of financial systems. Organizations subject to SOX should consider the implications of cross-border data transfers on the confidentiality, integrity and availability of financial data, ensuring compliance with the act's provisions related to data security and control.

Data Security Solutions

Organizations rely on a range of technologies to support data security efforts. Tools designed for cloud and hybrid environments provide capabilities ranging from data access control to threat detection and response.

Data Security Posture Management (DSPM)

Data security posture management (DSPM) solutions provide visibility into data security risks and assist in implementing appropriate controls to protect critical data stored in cloud environments.

By scanning and analyzing structured and unstructured data stored in the cloud, DSPM tools gain valuable insights into the content and context of the information. Potential risks are identified and prioritized through a data classification process and risk analysis, allowing organizations to assess their multicloud environment comprehensively. Organizations can proactively identify and address data loss risks by establishing a security baseline, enhancing their overall data protection strategy and fortifying their security posture.

Data Detection and Response (DDR)

Data detection and response (DDR) describes a technology-enabled solution for dynamically protecting data stored in the cloud. DDR tools look beyond static posture and risk analysis, taking data content and context into account to identify cybersecurity risks in real time. With DDR, organizations are able to track both data in use and data at rest at a granular level, regardless of which cloud datastore, managed or unmanaged, it resides in. DDR can detect threats based on what is done with the data, which makes it useful for preventing insider risk and other misuse of data by authorized personnel.

DDR tools need to operate in an agentless model to monitor infrastructure owned by public cloud providers, without sacrificing speed or accuracy in monitoring data events. Data privacy and compliance with legislation such as GDPR should also be considered, as the solution requires access to sensitive customer data. Both of these requirements can be satisfied by tools that monitor data events using the logs provided by the cloud vendor, within the customer’s cloud account.

An effective solution allows organizations to catch incidents earlier, averting data loss or minimizing its harm. DDR can also be integrated with SIEM/SOAR tools to reduce notification overload and allow security teams to consume all alerts in one place.

DSPM with DDR

DSPM, in tandem with DDR, unifies static and dynamic monitoring. Organizations can equip security teams with real-time detection of unusual patterns in data interaction, enabling swift identification of potential security threats.

Data Loss Prevention (DLP)

Data loss prevention solutions monitor and protect sensitive data in cloud environments, preventing unauthorized disclosure or leakage. These tools employ techniques such as content analysis, encryption and policy enforcement to detect and prevent data loss incidents. DLP solutions assist organizations in maintaining control over their data, ensuring compliance with regulations and mitigating risks associated with data breaches.

Cloud Security Posture Management (CSPM)

CSPM solutions provide continuous monitoring and assessment of cloud infrastructure to ensure compliance with regulatory data protection requirements. These tools offer visibility into the security posture of cloud resources, helping organizations identify misconfigurations, vulnerabilities and potential risks. CSPM solutions enable proactive remediation, ensuring that cloud environments maintain a strong security posture.

Cloud Workload Protection Platforms (CWPP)

CWPP solutions offer comprehensive visibility into cloud-based applications, virtual machines and containers. This visibility enables organizations to identify security risks, anomalous activities and unauthorized access attempts. Supported by behavior analysis and machine learning, CWPPs detect and mitigate threats in real time, safeguarding data from malicious activities.

Identity and Access Management (IAM)

An effective IAM framework helps manage user identities and control access to resources in cloud environments — across multiple cloud services. IAM solutions enable organizations to implement fine-grained access controls and enforce strong authentication. In addition to preventing unauthorized access, IAM equips security teams to streamline user provisioning and ensure compliance with access policies. It includes elements like multifactor authentication, role-based access control and user activity monitoring.

Intrusion Detection and Prevention System (IDS/IPS)

IDS/IPS solutions monitor cloud networks and infrastructure for malicious activities, unauthorized access attempts and potential intrusions. These tools detect and alert IT to suspicious network traffic, with some advanced solutions able to automatically block or mitigate threats. IDS/IPS solutions enhance the security posture of cloud environments by providing immediate threat detection and incident response capabilities.

Web Application and API Protection (WAAP)

WAAP solutions provide security measures designed to protect web applications and APIs from various threats, such as cross-site scripting (XSS), SQL injection, and API abuse. By safeguarding web applications and APIs, these tools help prevent unauthorized access, data breaches and other security incidents. WAAP solutions often include features like web application firewalls, bot mitigation and API security gateways to ensure the integrity and availability of web applications and APIs, reducing the risk of data compromise.

Data Protection Techniques and Industry Standards

When establishing an effective data security strategy, organizations can employ a variety of data-centric security techniques.

Secure Cloud Network Architecture

Cloud networks should be designed with security in mind. This includes segregating networks using firewalls and virtual private clouds (VPCs), limiting exposure of sensitive systems, and securing connections with techniques like IPsec VPN or SSL/TLS for data in transit.
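For data in transit, a client can enforce certificate validation and a modern protocol floor with Python's standard ssl module. A sketch; the host name is a placeholder:

```python
import socket
import ssl

# create_default_context() verifies server certificates and host names
# by default; we additionally refuse legacy TLS versions.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

def fetch_headers(host: str = "example.com") -> bytes:
    """Open a verified TLS connection and request response headers."""
    with socket.create_connection((host, 443)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode()
                        + b"\r\nConnection: close\r\n\r\n")
            return tls.recv(4096)
```

The important properties are that certificate and hostname checks stay enabled and that outdated protocol versions are rejected; disabling either silently downgrades the protection of data in transit.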

End-to-End Encryption

Implementing end-to-end encryption ensures that data is encrypted at the source, remains encrypted in transit and at rest, and is decrypted only at the destination. Adopting this practice safeguards data throughout its lifecycle, making it an integral component of a secure cloud architecture.

Data Classification

By classifying data, organizations can apply appropriate security controls and prioritize resource allocation for protecting sensitive information. This includes identifying personally identifiable information (PII), trade secrets, or confidential data, and implementing specific security measures based on the classification level.

Data Masking and Tokenization

Data masking and tokenization techniques help organizations protect critical data. Data masking ensures that sensitive information isn’t exposed in nonproduction environments, reducing the risk of data breaches. Tokenization replaces sensitive or personal data with nonsensitive tokens, maintaining referential integrity while minimizing the exposure of sensitive information.
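Both techniques can be sketched briefly. The vault below is a plain dict purely for illustration; real tokenization keeps the mapping in a hardened token vault:

```python
import secrets

def mask(value: str, visible: int = 4) -> str:
    """Mask all but the last `visible` characters, e.g. for test data or UIs."""
    return "*" * (len(value) - visible) + value[-visible:]

# Tokenization: replace the sensitive value with a random token and keep
# the mapping in a secure store (a dict here, for illustration only).
_vault: dict = {}

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _vault[token]
```

Masked values are irreversible and suit display or nonproduction use; tokens preserve referential integrity, so downstream systems can join on the token while only the vault can recover the original value.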

Data Backups and Disaster Recovery

Regularly backing up data and storing backups in secure off-site locations or cloud repositories helps protect against data loss due to accidental deletion, system failures or malicious attacks. Additionally, establishing disaster recovery plans and conducting periodic recovery drills prepare organizations to restore data and operations in the event of a major incident.

Secure Data Disposal

Securely disposing of data includes deleting data from storage systems, sanitizing or destroying physical media, and ensuring the proper disposal of end-of-life cloud resources. By adhering to data disposal best practices, organizations can mitigate the risk of data leakage in compliance with regulatory requirements.
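For magnetic media, overwrite-then-delete is a common baseline. A hedged sketch; note the caveat in the docstring about media where in-place overwrites are not reliable:

```python
import os

def shred(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before unlinking it.

    Illustrative only: on SSDs and journaling or copy-on-write filesystems,
    overwriting in place does not guarantee erasure; prefer full-disk
    encryption plus key destruction, or vendor secure-erase commands.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to the device
    os.remove(path)
```

For cloud resources, the equivalent step is destroying the encryption keys (cryptographic erasure) and confirming the provider's deletion guarantees, since customers cannot overwrite the provider's physical media.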

Defense in Depth

Defense-in-depth strategies can establish a multilayered approach to data protection in the cloud. Combining perimeter security, network segmentation, centralized identity and access management, data encryption, IDS/IPS, and continuous monitoring and auditing, organizations can enhance their security posture and mitigate risks associated with data breaches and unauthorized access.

Zero Trust Architecture

Developers and security engineers enhance data security when they ensure that only authenticated and authorized users or devices can access sensitive data. By implementing strict authentication and encryption measures, Zero Trust architecture minimizes the risk of unauthorized access and helps protect data confidentiality and integrity.

Integration with DevSecOps

Ensuring data protection involves incorporating security practices into every stage of the software development lifecycle. By integrating security into DevOps, organizations promote secure coding practices, address vulnerabilities early and establish security controls to safeguard data from coding to deployment and maintenance.

Securing Big Data

Securing big data presents unique challenges due to the sheer volume, variety and velocity of data involved. Protecting the confidentiality, integrity and availability of this vast amount of sensitive information requires specialized security considerations and advanced techniques.

Preserving Privacy in Big Data

Big data analytics often involve processing massive volumes of personal and sensitive information. Preserving privacy becomes paramount. Organizations must navigate data privacy regulations, such as the GDPR, by implementing anonymization and pseudonymization techniques. Data minimization practices help mitigate the risk of unauthorized disclosure and ensure compliance with privacy requirements unique to big data.

Securing the Data Lifecycle

To achieve and maintain big data security throughout its lifecycle, organizations should employ strong encryption algorithms and implement secure management mechanisms to protect data at rest and in transit. Encryption measures should extend to storage systems, network connections and data replication processes, effectively addressing the velocity and distributed nature of big data environments.

Access Control Challenges

Managing access control becomes more complex in big data environments. RBAC and ABAC models can be adapted to handle the unique challenges of big data access control. Fine-grained access control policies should be established to restrict access to sensitive data, considering the scale and diversity of data sources and users in big data ecosystems.

Maintaining Data Integrity

Ensuring the integrity and quality of big data is crucial for accurate analysis and decision-making. Validation techniques, checksums and data lineage tracking mechanisms should be employed to detect and rectify any anomalies or inconsistencies. Data quality checks and audits become more challenging due to the volume and variety of big data, requiring scalable and automated processes.
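Checksums make tampering detectable: store a digest when data is written and recompute it before use. A minimal sketch with SHA-256 (the record is illustrative):

```python
import hashlib

record = b'{"order_id": 42, "amount": 19.99}'

# Computed and stored alongside the record when it is written.
stored_digest = hashlib.sha256(record).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    """Recompute the digest and compare it to the stored value."""
    return hashlib.sha256(data).hexdigest() == expected

print(verify(record, stored_digest))        # True — record unchanged
print(verify(record + b" ", stored_digest)) # False — any change is detected
```

At big data scale the same idea is applied per block or per partition, with digests tracked in metadata so pipelines can verify integrity without rereading whole datasets.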

Real-Time Monitoring and Analysis

Monitoring big data environments in real time is essential for detecting anomalies and potential security breaches. Comprehensive logging and analysis tools are necessary to track system and network activities, enabling prompt response and forensic analysis. Scalable solutions are required to handle the velocity and volume of logs generated by big data systems.

Securing Data Processing at Scale

Securing the data processing phase is critical to prevent attacks and maintain data integrity in big data environments. Secure coding practices, rigorous input validation and vulnerability assessments should be integrated into big data processing frameworks and applications. Unique challenges arise due to the distributed nature of big data processing and the need to handle massive datasets efficiently and securely.

Third-Party and Supply Chain Considerations

Big data environments often rely on third-party services and components. Careful evaluation and due diligence are essential when selecting vendors and partners to ensure they follow security practices. Contracts and service-level agreements (SLAs) should explicitly address security requirements and the responsibilities of involved parties in handling and processing big data.

Human Element in Data Security

Weak link or first line of defense? For better or worse, people impact the security landscape. Effective data protection measures need to factor in the human element.

User Training and Awareness

Organizations should train all system users on security best practices — including developers. Emphasize the importance of secure coding practices, such as properly handling and storing API keys, passwords and cryptographic keys. Educate developers on the risks associated with hardcoding secrets in source code repositories or using insecure storage methods, such as storing secrets in plaintext configuration files. Regularly update training materials to keep developers informed about emerging threats and relevant security practices for cloud development.

Insider Threats

Insider threats can take various forms, ranging from malicious acts like data theft by a disgruntled employee to non-malicious incidents such as accidentally sending sensitive data to the wrong recipient. Monitoring tools can help identify unusual user behavior. Additionally, fostering a security-conscious culture reduces the risk of both types of insider threat.

Human Error

Even highly trained and well-intentioned employees make mistakes. Human errors, such as misconfigured security controls or inadvertently clicking on phishing links, can lead to security incidents. Automated checks and double-check systems serve to catch these errors before they escalate into security incidents.

Social Engineering

Despite the presence of advanced security technologies, people remain susceptible to manipulation and may unintentionally disclose sensitive information. Security training should focus on raising awareness of social engineering tactics, such as phishing or pretexting.

Physical Security

Although cloud data is stored remotely, physical actions can impact its security. A stolen laptop with an active cloud account can result in a data breach, for example. To mitigate risk, implement physical security measures, such as securing devices with strong passwords, encrypting sensitive data and enabling remote wipe or device tracking capabilities.

Incident Response in the Cloud

Incident response involves identifying, managing and mitigating security incidents that occur within cloud environments. Developing an incident response plan can reduce the impact of a cyberattack and help organizations restore normal operations as quickly as possible.

Preparation

The preparation phase of incident response involves establishing an incident response team, defining their roles and responsibilities, and providing training. It also includes preparing tools and resources for investigation and recovery and setting up communication channels for internal and external notification during an incident.

Detection and Analysis

Effective incident response requires timely detection of security incidents. Use monitoring and logging systems, intrusion detection systems and threat intelligence feeds. Once an incident is detected, review cloud access logs and check for unauthorized access to cloud resources. Analyze the nature, scope and potential impact of the incident.
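As a concrete illustration of the log-review step, the sketch below flags principals with an unusual number of denied requests in a batch of access-log events. The event schema, field names, and failure threshold are illustrative assumptions, not any cloud provider's actual log format:

```python
# Minimal sketch: flag suspicious entries in cloud access logs.
# The event schema and threshold are illustrative, not a real provider's format.
from collections import Counter

def find_suspicious_logins(events, max_failures=5):
    """Return principals with an unusual number of denied requests."""
    failures = Counter(
        e["principal"] for e in events if e.get("outcome") == "denied"
    )
    return {who: n for who, n in failures.items() if n >= max_failures}

# Hypothetical event batch: six denied requests from one principal.
events = (
    [{"principal": "intruder", "outcome": "denied"}] * 6
    + [{"principal": "alice", "outcome": "allowed"}]
)
print(find_suspicious_logins(events))  # {'intruder': 6}
```

In practice, this kind of aggregation runs inside a SIEM or monitoring pipeline rather than a standalone script, but the core idea is the same: establish a baseline, then surface outliers for analysis.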

Containment, Eradication and Recovery

Contain the incident to prevent further damage and, when necessary, isolate affected systems or suspend compromised accounts. Once contained, eradicate the threat. This could involve deleting malicious code, eliminating vulnerabilities or changing compromised credentials. Finally, return systems and data to normal operations, possibly by restoring data from backups or deploying patched systems.

Post-Incident Activity

After the incident, conduct a postmortem to identify lessons learned and improve future responses. Aim to determine the root cause, assess the response process, and update incident response plans based on feedback.

The shared responsibility model can pose unique challenges, such as coordinating response efforts with CSPs. This makes it imperative to understand the cloud service provider's role in incident response and to align your incident response plan with the provider's policies and procedures.

Data Security FAQs

The security operations center, or SOC, is a vital component of an organization's cybersecurity infrastructure. The SOC provides a central hub for managing and responding to security incidents, ensuring continuous protection of critical assets and data from cyberthreats.

A risk-based approach to allocating security resources involves identifying the biggest risks and prioritizing resources to address those risks first. Regular risk assessments help keep the security strategy aligned with the evolving threat landscape.

ISO/IEC 27001 is an international standard that outlines best practices for an information security management system (ISMS). It provides a risk-based approach for establishing, implementing, maintaining, and continually improving information security. ISO/IEC 27001 applies to organizations of all types.

Developed by the National Institute of Standards and Technology (NIST), NIST SP 800-53 provides a catalog of security and privacy controls for all U.S. federal information systems except those related to national security. It includes controls specifically related to cloud computing.

The NIST Cybersecurity Framework consists of standards, guidelines and best practices to manage cybersecurity-related risk. It's widely adopted across sectors and includes considerations for cloud environments.

Established by the Cloud Security Alliance, the Cloud Controls Matrix (CCM) framework provides specific security controls designed for cloud providers and cloud customers. The CCM covers fundamental security principles across 16 domains, including data security and information lifecycle management.

Data classification is the process of categorizing data based on its sensitivity, value, and regulatory requirements. This process helps organizations identify and apply appropriate security controls, determine access permissions, and ensure compliance with data protection regulations. Data classification typically involves three main categories: public, internal, and confidential or sensitive. Public data is accessible to anyone, internal data is restricted to authorized employees, and confidential or sensitive data requires strict access controls and security measures due to its critical nature or regulatory constraints.
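The three-tier scheme above can be expressed as a simple lookup from label to handling controls. The specific control fields below are illustrative assumptions, not a standard:

```python
# Illustrative sketch: map classification labels to handling controls.
# The three tiers match the scheme described above; the control values
# are examples, not a standard.
CONTROLS = {
    "public": {"encryption_required": False, "access": "anyone"},
    "internal": {"encryption_required": True, "access": "employees"},
    "confidential": {"encryption_required": True, "access": "need-to-know"},
}

def handling_policy(label: str) -> dict:
    """Look up the controls for a classification label."""
    try:
        return CONTROLS[label.lower()]
    except KeyError:
        # Fail closed: unknown labels get the strictest treatment.
        return CONTROLS["confidential"]

print(handling_policy("Internal")["access"])  # employees
```

Note the fail-closed default: treating unlabeled or unknown data as confidential is a common conservative design choice in classification tooling.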
Data democratization is the process of making data accessible and understandable to all members of an organization. It enables individuals to access, analyze, and interpret data without needing specialized skills or depending on data experts. Implementing this approach requires a combination of user-friendly tools, clear governance policies, and appropriate data security measures to ensure responsible data usage while maintaining compliance and privacy standards. By breaking down technical and organizational barriers, organizations promote a data-driven culture and empower employees across departments to make data-driven decisions.

A data inventory is a comprehensive list of all the data assets that an organization has and where they're located. It helps organizations understand and track:

  • Types of data they collect, store, and process
  • Sources, purposes, and recipients of that data

Data inventories can be managed manually or automatically. The reasons for maintaining a data inventory vary — and could include data governance, data management, data protection, data security, and data compliance.

For example, having a data inventory can help organizations identify and classify sensitive data, assess the risks associated with different types of data, and implement appropriate controls to protect that data. It can also help organizations understand which data they have available to support business objectives, or to generate specific types of analytics reports.

Data mapping is the process of creating visual representations of the relationships and flows of data within an organization's systems and processes. It helps organizations understand how data is collected, stored, processed, and shared across different systems, applications, and third parties. Data mapping is essential for complying with data protection regulations, as it enables organizations to identify potential risks, maintain data accuracy, and respond effectively to data subject rights requests. By creating a data map, organizations can optimize data management processes, implement robust security measures, and enhance data governance.

Privacy policies are legally binding documents that outline how an organization collects, processes, stores, shares, and protects personal data. These policies inform users about the types of data collected, the purpose of data collection, data retention periods, and the rights of data subjects.

Access control models are frameworks that define how permissions are granted and managed within a system, determining who can access specific resources. They guide the development and implementation of access control policies. Common models include:

  • Discretionary access control (DAC), where resource owners decide who can access their resources
  • Mandatory access control (MAC), where a central authority regulates access rights based on clearances and classifications
  • Role-based access control (RBAC), where permissions are granted according to roles within an organization
  • Attribute-based access control (ABAC), where access is granted based on a combination of user attributes, resource attributes, and environmental factors
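To make the RBAC model concrete, the sketch below attaches permissions to roles and roles to users, then answers access requests. Role, permission, and user names are made up for illustration:

```python
# Minimal RBAC sketch: permissions attach to roles, roles attach to users.
# All names here are hypothetical examples.
ROLE_PERMISSIONS = {
    "analyst": {"report:read"},
    "admin": {"report:read", "report:write", "user:manage"},
}
USER_ROLES = {"dana": {"analyst"}, "sam": {"admin"}}

def is_allowed(user: str, permission: str) -> bool:
    """A request is allowed if any of the user's roles grants the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_allowed("dana", "report:write"))  # False
print(is_allowed("sam", "report:write"))   # True
```

Because access decisions flow through roles rather than individual grants, adding or removing a user's access becomes a one-line role assignment, which is the main operational appeal of RBAC.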
Data at rest refers to data that is stored in a persistent state — typically on a hard drive, a server, a database, or in blob storage. It's in contrast to data in motion, which is data that is actively being transmitted over an internal network or the internet.

Data in motion refers to data that is actively being transmitted or transferred over a network or through some other communication channel. This could include data being sent between devices, such as from a computer to a server or from a smartphone to a wireless router. It could also refer to data being transmitted over the internet or other networks, such as from local on-premises storage to a cloud database. Data in motion is distinct from data at rest, which is data that is stored in a persistent state.

Data in use refers to data that is actively stored in computer memory, such as RAM, CPU caches, or CPU registers. It's not passively stored in a stable destination but is moving through various systems, each of which could be vulnerable to attacks. Data in use can be a target for exfiltration attempts as it might contain sensitive information such as PCI or PII data.

To protect data in use, organizations can use encryption techniques such as end-to-end encryption (E2EE) and hardware-based approaches such as confidential computing. On the policy level, organizations should implement user authentication and authorization controls, review user permissions, and monitor file events.

Data loss prevention (DLP) software can identify and alert security teams that data in use is being attacked. In public cloud deployments, this is better achieved through the use of data detection and response tools.

The data lifecycle describes the stages involved in a data project — from generating the data records to interpreting the results. While there are slight variations between definitions, lifecycle stages might include: data generation, collection, processing, storage, management, analysis, visualization, and interpretation.

Managing data throughout its lifecycle helps ensure its accuracy, timeliness, and availability. Understanding the way data is processed, stored, and accessed — by people and by information systems — is also important for security and disaster recovery purposes. Managing data governance, classification, and retention policies can all be seen as part of a broader data lifecycle management effort.

Data sprawl refers to the growing volumes of data produced by organizations, and the difficulties this creates in effectively managing and monitoring this data. As companies collect more data — both internally and through the broader range of enterprise software tools in use today — and increase the amount of storage systems and data formats, it can become difficult to understand which data is stored where. This can lead to increased cloud costs, inefficient data operations, and data security risks as the organization loses track of where sensitive data is stored — and fails to apply adequate security measures as a result.

To mitigate the impact of data sprawl, automated data discovery and classification solutions can be used to scan repositories and classify sensitive data. Establishing policies to deal with data access permissions can also be beneficial. Data loss prevention (DLP) tools can detect and block sensitive data leaving the organizational perimeter, while DDR tools offer similar functionality in public cloud deployments.
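The pattern-matching core of automated discovery and classification tools can be sketched briefly. The regular expressions below are deliberately simplified examples; production scanners use far more robust patterns plus validation and contextual checks:

```python
# Sketch of pattern-based sensitive-data discovery, the core idea behind
# automated classification and DLP scanning. Patterns are simplified examples.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> set[str]:
    """Return the sensitive-data categories detected in a blob of text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(sorted(classify(sample)))  # ['email', 'ssn']
```

A real discovery tool applies this kind of matching across repositories at scale and feeds the results into classification labels, access reviews, and DLP policies.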
Database as a service (DBaaS) is a type of cloud computing service that enables users to work with a managed database without purchasing or configuring infrastructure. DBaaS subscriptions include the necessary components for operating a database in the cloud — physical resources as well as scaling, support, and maintenance. All of these components are provided and managed by the DBaaS provider.

This lack of control and visibility over the underlying infrastructure also creates security challenges. Since the database server is managed by an external vendor, security teams can't use the same toolset that is applied to on-premises deployments, such as monitoring agents. This makes real-time threat detection difficult, and can be addressed by technologies such as data detection and response (DDR).

Shadow data is data that is created, stored, or shared without being formally managed or governed by the relevant IT teams. Shadow data can be found in spreadsheets, local copies of databases, emails, and presentations. It often finds its way onto personal devices, but shadow data assets can also live in cloud storage such as Amazon S3, or as overlooked tables in a database.

Shadow data can pose a security risk to organizations: in most cases, security controls and policies won't be applied to this data. This can make it more difficult to track and monitor, and more vulnerable to unauthorized access.

To mitigate the risks associated with shadow data, it's important for organizations to have policies and procedures in place to manage and govern the creation, storage, and sharing of new datasets. In addition, organizations can use data security tools (such as DSPM) to identify, classify, and secure shadow data.

Dormant data is data that is collected but not analyzed or used to inform decisions. According to some estimates, 80% of all data collected by organizations remains dormant. Dormant data is often unstructured and unmanaged, and can be stored in various locations including cloud and local storage systems. Dormant records or datasets can also be found in business software applications (such as project management tools).

Since dormant data isn't used regularly, it can easily fly under the radar when it comes to data security. However, this data can potentially contain sensitive information such as customer details, and should be covered as part of an organization's broader data protection strategy.