CCSK Domain 5: Information governance

Information governance comprises the management practices we introduce to ensure that data and information comply with organizational policies, standards and strategy, including regulatory, contractual and business objectives.

Several aspects of storing data in the cloud have implications for information governance.

Public cloud deployments are multi-tenant. That means that there will be other organizations also storing their information in the same datacenter, on the same hardware. The security features for account separation will thus be an important part of achieving information compliance in most cases. 

As data is shared across cloud infrastructure, so is the responsibility for securing the data. To define a working governance structure it is important to identify the data owner and the data custodian. The difference between the two is that the former actually owns the data (and is accountable for its governance), while the latter manages the data (and is responsible for ensuring compliance in practice).

When we host data in the cloud, we introduce a third party into the governance model: the cloud provider. Information governance now depends on the provider's management practices and the technologies the provider offers. This complicates the regulatory compliance considerations we need to make and should be taken into account when designing a project's regulatory compliance matrix. First, legal requirements may change because the cloud stores data, or makes it available, in more geographical regions than would otherwise be the case. Compliance and regulations, privacy in particular, should be carefully reviewed with regard to how customer data is governed in the cloud. Further, one should ensure that customer requirements for deletion (destruction) of data can be satisfied given the technical offerings from the cloud provider.

Moving data to the cloud provides a welcome opportunity to review and perhaps redesign information architectures. In many organizations information architectures have evolved over a long time, perhaps with little planning, and may have resulted in a fractured model where it is hard to manage compliance. 

Cloud information governance domains

Cloud computing can affect multiple aspects of data governance. The following list defines the issues the CSA describes as affected by cloud computing:

Information classification. Often tied to storage and handling requirements, which may include limitations on access and location. Storing information in an S3 bucket will require a different method of access control than a file share on the local network.

Information management practices. How data is managed based on classification. This should cover the different cloud service models (the SPI tiers: SaaS, PaaS, IaaS). You need to decide what can be allowed where in the cloud, with which products and services, and with which security requirements.

Location and jurisdiction policies. You need to comply with regulations and contractual obligations with respect to data storage, data access. Make sure you understand how data is processed and stored, and the contractual instruments in place to manage regulatory compliance. One primary example here is personal data under the GDPR, and how data processing agreements with cross-border transfer clauses can be used to manage foreign jurisdictions. 

Authorizations. Cloud computing does not typically require many changes to authorizations, but the data security lifecycle will most likely be impacted. The way authorization controls are implemented may also change (e.g. the cloud vendor's IAM practices for account-level authorization).

Ownership. The organization owns its data and this is not changed when moving to cloud. One should be careful with reviewing the terms and conditions of cloud providers here, in particular SaaS products (especially those targeting the consumer market).

Custodianship. The cloud provider may fully or partially become the custodian, depending on the deployment model. Encrypted data stored in a cloud bucket is still under custody of the cloud provider. 

Privacy. Privacy needs to be handled in accordance with relevant regulations, and the necessary contractual instruments such as data processing agreements must be put in place. 

Contractual controls. The contractual controls available when moving data and workloads to the cloud will be different from the controls you employ in an on-premise infrastructure. There will often be limited access to contract clause negotiations in public cloud environments.

Security controls. Security controls are different in cloud environments than in on-premise environments. Key concepts are security groups and access control lists, as illustrated below.
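To illustrate the last point: in the cloud such controls are typically managed through the provider's APIs rather than on dedicated network hardware. Here is a minimal sketch using boto3 against AWS EC2; the security group ID and CIDR range are made-up values, not a real configuration.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Open HTTPS from a single office network instead of the whole internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "HTTPS from the office network only"}],
    }],
)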

Data Security Lifecycle

A data security lifecycle is typically different from the information lifecycle. The data security lifecycle has six phases:

  • Create: generation of new digital content, or modification of existing content.
  • Store: committing digital data to storage; typically happens in direct sequence with creation.
  • Use: data is viewed, processed or otherwise used in some activity that does not include modification.
  • Share: information is made accessible to others, such as between users, to customers, and to partners or other stakeholders.
  • Archive: data leaves active use and enters long-term storage. This type of storage will typically have much longer retrieval times than active storage.
  • Destroy: data is permanently destroyed by physical or digital means (e.g. cryptoshredding).

The data security lifecycle is a description of phases the data passes through, without regard for location or how it is accessed. The data typically goes through “mini lifecycles” in different environments as part of these phases. Understanding the physical and logical locations of data is an important part of regulatory compliance. 

In addition to where data lives and how it is transferred, it is important to keep control of entitlements; who accesses the data, and how can they access it (device, channels)? Both devices and channels may have different security properties that may need to be taken into account in a data governance plan. 

Functions, actors and controls

The next step in assessing the data security lifecycle is to review which functions can be performed on the data, by a given actor (personal or system account), in a particular location.

There are three primary functions: 

  • Read the data: including creating, copying, transferring.
  • Process: perform transactions or changes to the data, use it for further processing and decision making, etc. 
  • Store: hold the data (database, filestore, blob store, etc)

The different functions are applicable to different degrees in different phases. 

An actor (a person or a system/process – not a device) can perform a function in a location. A control restricts the possible actions to allowed actions. The key question is: 

What function can which actor perform in which location on a given data object?

An example of data modeling connecting actions to data security lifecycle stages.
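Here is a minimal sketch in Python of how that question can be modelled as a simple lookup table. This is my own illustration, not part of the CSA guidance; the phases, actors, locations and rules are hypothetical.

# (phase, actor, location) -> functions a control should allow
ALLOWED = {
    ("use",   "analyst",     "eu-production"): {"read", "process"},
    ("store", "backup-job",  "eu-archive"):    {"store"},
    ("share", "partner-api", "eu-production"): {"read"},
}

def is_allowed(phase, actor, location, function):
    """A control restricts the possible actions to the allowed actions."""
    return function in ALLOWED.get((phase, actor, location), set())

print(is_allowed("use", "analyst", "eu-production", "process"))  # True
print(is_allowed("use", "analyst", "eu-archive", "store"))       # False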

CSA Recommendations

The CSA has created a list of recommendations for information governance in the cloud: 

  • Determine your governance requirements before planning a transition to cloud
  • Ensure information governance policies and practices extend to the cloud. This is done with both contractual and security controls. 
  • When needed, use the data security lifecycle to model data handling and controls. 
  • Do not lift and shift existing information architectures to the cloud. First, review and redesign the information architecture to support the current governance needs, and take anticipated future requirements into account. 

CCSK Domain 4 – Compliance and Audit Management

This section on the CCSK domains is about compliance management and audits. It goes through, in some detail, the aspects one should think about for a compliance program when running services in the cloud. The key issues to pay attention to are:

  • Regulatory implications when selecting a cloud supplier with respect to cross-border legal issues
  • Assignment of compliance responsibilities
  • Provider capabilities for demonstrating compliance

Pay special attention to: 

  • The role of provider audits and how they affect customer audit scope
  • Understand what services are within which compliance scope with the cloud provider. This can be challenging, especially with the pace of innovation. As an example, AWS is adding several new features every day. 

Compliance 

The key change to compliance when moving from an on-premise environment to the cloud is the introduction of a shared responsibility model. Cloud consumers must typically rely more on third-party audit reports to understand compliance arrangements and gaps than they would in a traditional IT governance case.

Many cloud providers certify for a variety of standards and compliance frameworks to satisfy customer demand in various industries. Typical standards and frameworks for which audit reports or attestations may be available include:

  • PCI DSS
  • SOC1, SOC2
  • HIPAA
  • CSA CCM
  • GDPR
  • ISO 27001

Provider audits need to be understood within their limitations: 

  • They certify that the provider is compliant, not any service running on infrastructure provided by that provider. 
  • The provider’s infrastructure and operations are then outside of the customer’s audit scope; the customer has to rely on pass-through audits. 

To prove compliance for a service built on cloud infrastructure, it is necessary that the internal parts of the application/service comply with the regulations, and that no non-compliant cloud services or components are used. This means that paying attention to audit scopes is important when designing cloud architectures.

There are also issues related to jurisdictions involved. A cloud service typically will let you store and process data across a global infrastructure. Where you are allowed to do this depends on the compliance framework, and you as cloud consumer have to make the right choices in the management plane. 
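One practical way to keep an eye on this is to check programmatically where data actually resides and compare it against the regions the compliance framework allows. A minimal sketch, assuming AWS S3 and boto3; the allowed regions are hypothetical values.

import boto3

ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}  # regions permitted by the compliance matrix

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    # get_bucket_location returns None for buckets in us-east-1
    region = s3.get_bucket_location(Bucket=bucket["Name"])["LocationConstraint"] or "us-east-1"
    if region not in ALLOWED_REGIONS:
        print(f"{bucket['Name']} is stored in {region} - outside the allowed regions")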

Audit Management

The scope of audits and audit management for information security is related to the fulfillment of defined information security practices. The goal is to evaluate the effectiveness of security management and controls. This extends to cloud environments. 

Attestations are legal statements from a third party, which can be used as a statement of audit findings. This is a key tool when working with cloud providers. 

Changes to audit management in cloud environments

On-premise audits on multi-tenant environments are seen as a security risk and typically not permitted. Instead consumers will have to rely on attestations and pass-through audits. 

Cloud providers should assist consumers in achieving their compliance goals. Because of this they should publish certifications and attestations to consumers for use in audit management. Providers should also be clear about the scope of the various audit reports and attestations they can share. 

Some types of customer technical assessments, such as vulnerability scans, can be limited in contracts and require up-front approval. This is a change to audit management compared to on-premise infrastructures, although most major cloud providers now seem to allow certain penetration testing activities without prior approval. As an example, AWS has published a vulnerability and penetration testing policy for customers here: https://aws.amazon.com/security/penetration-testing/

In addition to audit reports, artifacts such as logs and documentation are needed as compliance evidence. In most cases the consumer will need to set up the right logging detail themselves in order to collect the right kind of evidence. This typically includes audit logs, activity reporting, system configuration details and change management details.
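A minimal sketch of checking that such evidence collection is actually switched on, assuming AWS with CloudTrail as the audit log source (not part of the CSA guidance):

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="eu-west-1")
for trail in cloudtrail.describe_trails()["trailList"]:
    status = cloudtrail.get_trail_status(Name=trail["TrailARN"])
    scope = "multi-region" if trail["IsMultiRegionTrail"] else "single-region"
    state = "logging" if status["IsLogging"] else "NOT logging"
    print(f"Trail {trail['Name']} ({scope}): {state}")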

CSA Recommendations for compliance and audit management in the cloud

  1. Compliance, audit and assurance should be continuous. They should not be seen as point-in-time activities, but should show that compliance is maintained over time. 
  2. Cloud providers should communicate audit results, certifications and attestations including details on scope, features covered in various locations and jurisdictions, give guidance to customers for how to build compliant services in their cloud, and be clear about specific customer responsibilities. 
  3. Cloud customers should work to understand their own compliance requirements before making choices about cloud providers, services and architectures. They should also make sure they understand the scope of the compliance proof from the cloud vendor, and what artifacts can be produced to support the management of compliance in the cloud. The consumer should also keep a register of cloud providers and services used. CSA recommends using the Cloud Controls Matrix (CCM) to support this activity.

CCSK Domain 3: Legal and contractual issues

This is a relatively long post. Specific areas covered: legal frameworks for data protection and privacy, contracts and provider selection, and electronic discovery.

3.1 Overview

3.1.1 Legal frameworks governing data protection and privacy

There can be conflicting requirements in different jurisdictions, and sometimes within the same jurisdiction. Legal requirements may vary according to:

  • Location of cloud provider
  • Location of cloud consumer
  • Location of data subject
  • Location of servers/datacenters
  • Legal jurisdiction of contract between the parties, which may be different than the locations of those parties
  • Any international treaties between the locations where the parties are located

3.1.1.1 Common themes

Omnibus laws: the same law applies across all sectors.

Sectoral laws: different laws apply to specific sectors or types of data.

3.1.1.2 Required security measures

Legal requirements may include prescriptive or risk-based security measures.

3.1.1.3 Restrictions to cross-border data transfer

Transfer of data across borders can be prohibited. The most common situation is restrictions on transferring personal data to countries that do not have “adequate data protection laws”. This is a common theme in the GDPR. Other examples are data covered by national security legislation.

For personal data, transfers to inadequate locations may require specific legal instruments to be put in place in order for this to be considered compliant with the stricter region’s legal requirements.

3.1.1.4 Regional examples

Australia

  • Privacy act of 1988
  • Australian consumer law (ACL)

The Privacy Act contains 13 Australian Privacy Principles (APPs) that apply to all sectors, including non-profit organizations with an annual turnover of more than 3 million Australian dollars.

In 2017 the Australian privacy act was amended to require companies to notify affected Australian residents and the Australian Information Commissioner of breaches that can cause serious harm. A security breach must be reported if:

  1. There is unauthorized access or disclosure of personal information that can cause serious harm
  2. Personal information is lost in circumstances where disclosure is likely and could cause serious harm

The ACL protects consumers from fraudulent contracts and poor conduct from service providers, such as failed breach notifications. The Australian Privacy Act can apply to Australian customers/consumers even if the cloud provider is based elsewhere or other laws are stated in the service agreement.

China

China has introduced new legislation governing information systems over the last few years.

  • 2017: Cyber security law: applies to critical information infrastructure operators
  • May 2017: Proposed measures on the security of cross-border transfers of personal information and important data. These were under evaluation for implementation at the time the CSA guidance v4 was issued.

The 2017 cybersecurity law puts requirements on infrastructure operators to design systems with security in mind, put in place emergency response plans and give access and assistance to investigating authorities, for both national security purposes and criminal investigations.

The Chinese security law also requires companies to inform users about known security defects, and also report defects to the authorities.

Regarding privacy the cybersecurity law requires that personal information about Chinese citizens is stored inside mainland China.

The draft regulations on cross-border data transfer issued in 2017 go further than the cybersecurity law.

  • New security assessment requirements for companies that want to send data out of China
  • Expanding data localization requirements (the types of data that can only be stored inside China)

Japan

The relevant Japanese legislation is found in the Act on the Protection of Personal Information (APPI). There are also multiple sector-specific laws.

Beginning in 2017, amendments to the APPI require consent of the data subject for transfer of personal data to a third party. Consent is not required if the receiving party operates in a location with data protection laws considered adequate by the Personal Information Protection Commission.

EU: GDPR and e-Privacy

The GDPR came into force on 25 May 2018. The new ePrivacy rules are still not in force. TechRepublic has a short summary of differences between the two regulations (https://www.techrepublic.com/article/gdpr-vs-epPRrivacy-the-3-differences-you-need-to-know/):

  1. ePrivacy specifically covers electronic communications. It evolved from the 2002 ePrivacy directive that focused primarily on email and SMS, whereas the new version will cover electronic communications in general, including data communication with IoT devices and the use of social media platforms. The ePrivacy rules will also cover metadata about private communications.
  2. ePrivacy includes non-personal data. The focus is on confidentiality of communications, that may also contain non-personal data and data related to a legal person.
  3. They have different legal bases. GDPR is based on Article 8 in the European Charter of Human Rights, whereas ePrivacy is based on Article 16 and Article 114 of the Treaty on the Functioning of the European Union – but also Article 7 of the Charter of Fundamental Rights: “Everyone has the right to respect for his or her private and family life, home and communications.”

The CSA guidance gives a summary of GDPR requirements:

  • Data processors must keep records of processing
  • Data subject rights: data subjects have a right to information on how their data is being processed, the right to object to certain uses of their personal data, the right to have data corrected or deleted, to be compensated for damages suffered as a result of unlawful processing, and the right to data portability. These rights significantly affect cloud relationships and contracts.
  • Security breaches: breaches must be reported to authorities within 72 hours and data subjects must be notified if there is a risk of serious harm to the data subjects
  • There are country specific variations in some interpretations. For example, Germany required that an organization has a data protection officer if the company has more than 9 employees.
  • Sanctions: authorities can use fines up to 4% of global annual revenue, or 20 million EUR for serious violations, whichever amount is higher.

EU: Network information security directive

The NIS directive has been in force since May 2018. The directive introduces a framework for ensuring confidentiality, integrity and availability of networks and information systems. The directive applies to critical infrastructure and essential societal and financial functions. The requirements include:

  • Take technical and organizational measures to secure networks and information systems
  • Take measures to prevent and minimize impact of incidents, and to facilitate business continuity during severe incidents
  • Notify without delay relevant authorities
  • Provide information necessary to assess the security of their networks and information systems
  • Provide evidence of effective implementation of security policies, such as a policy audit

The NIS directive requires member states to impose security requirements on online marketplaces, cloud computing service providers and online search engines. Digital service providers based outside the EU but that supply services within the EU are under scope of the directive.  

Note: parts of these requirements, in particular for critical infrastructure, are covered by various national security laws. The scope of the NIS directive is broader than national security and typically requires the introduction of new legislation. This work is not yet complete across the EU/EEA area. Digital Europe has an implementation tracker site set up here: https://www.digitaleurope.org/resources/nis-implementation-tracker/.

Central and South America

Data protection laws are coming into force in Central and South American countries. They include security requirements and the need for a data custodian.

North America: United States

The US has a sectoral approach to legislation, with hundreds of federal, state and local regulations. Organizations doing business in the United States, or that collect or process data on US residents, are often subject to multiple laws, and identifying the regulatory matrix can be challenging for both cloud consumers and providers.

Federal law

  • The Gramm-Leach-Bliley Act (GLBA)
  • The Health Insurance Portability and Accountability Act, 1996 (known as HIPAA)
  • The Children’s Online Privacy Protection Act of 1998 (COPPA)

Most of these laws require companies to take precautions when hiring subcontractors and service providers. They may also hold organizations responsible for the acts of subcontractors.

US State Law

In addition to federal regulations, most US states have laws relating to data privacy and security. These laws apply to any entity that collects or processes information on residents of that state, regardless of where the data is stored (the CSA guidance says regardless of where within the United States, but they would likely apply to international storage as well).

Security breach disclosure requirements

Breach disclosure requirements are found in multiple regulations. Most require informing data subjects.

Knowledge of these laws is important for both cloud consumers and providers, especially to regulate the risk of class action lawsuits.

In addition to the state laws and regulations, there is the “common law of privacy and security”, a nickname given to a body of consent orders published by federal and state government agencies based on investigations into security incidents.

The FTC (Federal Trade Commission) in particular has, for almost 20 years, had the power to conduct enforcement actions against companies whose privacy and security practices are inconsistent with claims made in public disclosures, making their practices “unfair and deceptive”. For cloud computing this means that when a certain way of working changes, the public documentation of the system needs to be updated to make sure actions are not in breach of Section 5 of the FTC Act.

3.1.2 Contracts and Provider Selection

In addition to legal requirements, cloud consumers may have contractual obligations to protect the personal data of their own clients, contacts or employees, such as securing the data and avoiding processing other than what has been agreed. Key documents are typically the Terms and Conditions and Privacy Policy documents posted on company websites.

When data or operations are transferred to a cloud, the responsibility for the data typically remains with the collector. There may be sharing of responsibilities when the cloud provider is performing some of the operations. This also depends on the service model of the cloud provider. In any case a data processing agreement or similar contractual instrument should be put in place to regulate activities, uses and responsibilities.

3.1.2.1 Internal due diligence

Prior to using a cloud service both parties (cloud provider and consumer) should identify legal requirements and compliance barriers.

Cloud consumers should investigate whether they have entered into any confidentiality agreements or data use agreements that could limit the use of a cloud service. In such cases, consent from the client needs to be in place before transferring data to a cloud environment.

3.1.2.3 External due diligence

Before entering into a contract, a review of the other party’s operations should be done. For evaluating a cloud service, this will typically include a look at the applicable service level, end-user and legal agreements, security policies, security disclosures and compliance proof (typically an audit report).

3.1.2.4 Contract negotiations

Cloud contracts are often standardized. An important aspect is the regulation of shared responsibilities. Contracts should be reviewed carefully also when they are presented as “not up for negotiation”. When certain contractual requirements cannot be included the customer should evaluate if other risk mitigation techniques can be used.

3.1.2.5 Reliance on third-party audits and attestations

Audit reports could and should be used in security assessments. The scope of the audit should be considered when used in place of a direct audit.

3.1.3 Electronic discovery

In US law, discovery is the process by which an opposing party obtains private documents for use in litigation. Discovery does not have to be limited to documents known to be admissible as evidence in court from the outset. Discovery applies to all documents reasonably held to be admissible as evidence (relevant and probative). See federal rules on civil procedure: https://www.federalrulesofcivilprocedure.org/frcp/title-v-disclosures-and-discovery/rule-26-duty-to-disclose-general-provisions-governing-discovery/.

There have been many examples of litigants having deleted or lost evidence, causing them to lose the case and be ordered to pay damages to the party that did not cause the data destruction. Because of this it is necessary that cloud providers and consumers plan for how to identify and extract all documents relevant to a case.

3.1.3.1 Possession, custody and control

In most US jurisdictions, the obligation to produce relevant information to court is limited to data within its possession, custody or control. Using a cloud provider for storage does not remove this obligation. Some data may not be under the control of the consumer (disaster recovery, metadata), and such data can be relevant to a litigation. The responsibility of a cloud provider to provide such data remains unclear, especially in cross-border/international cases.

Recent cases of interest:

  • Norwegian police against Tidal regarding streaming fraud
  • FBI against Microsoft (Ireland Onedrive case)

3.1.3.2 Relevant cloud applications and environment

In some cases, a cloud application or environment itself could be relevant to resolving a dispute. In such circumstances the artifact is likely to be outside the control of the client, and a discovery request may have to be served on the cloud provider directly, where such action is enforceable.

3.1.3.3 Searchability and e-discovery tools

Discovery may not be possible using the same tools as in traditional IT environments. Cloud providers sometimes provide search functionality; otherwise such access may have to be secured through a negotiated cloud agreement.

3.1.3.4 Preservation

Preservation is the avoidance of destruction of data relevant to a litigation, or that is likely to be relevant to a litigation in the future. There are similar laws on this in the US, Europe, Japan, South Korea and Singapore.

3.1.3.5 Data retention laws and record keeping obligations

Data retention requirements exist for various types of data. Privacy laws put restrictions on retention. In the case of conflicting requirements on the same data, this should be resolved through guidance and case law. Storage requirements should be weighed against SLA requirements and costs when using cloud storage.

  • Scope of preservation: a requesting party is only entitled to data hosted in the cloud that contains data relevant to the legal issue at hand. Lack of granular identifiability can lead to a requirement to over-preserve and over-share data.
  • Dynamic and shared storage: the burden of preserving data in the cloud can be reasonable if the client has space to hold it in place, the data is static, and access is limited to few people. Because of the elastic nature of cloud environments this is seldom the case in practice, and it may be necessary to work with the cloud provider on a plan for data preservation.
  • Reasonable integrity: when subject to a discovery process, reasonable steps should be taken to secure the integrity of data collection (complete, accurate)
  • Limits to accessibility: a cloud customer may not be able to access all relevant data in the cloud. The cloud consumer and provider may then have to review the relevance of the request before taking further steps to acquire the data.

3.1.3.7 Direct access

Outside cloud environments it is not common to give the requesting party direct access to an IT environment. Direct hardware access in cloud environments is often not possible or desirable.

3.1.3.8 Native production

Cloud providers often store data in proprietary systems that the clients do not control. Evidence is typically expected to be delivered in the form of PDF files, etc. Export from the cloud environment may be the only option, which may be challenging with respect to the chain of custody.

3.1.3.9 Authentication

Authentication here refers to the forensic authentication of data admitted into evidence; the question is whether the document is what it purports to be. Giving guarantees on data authenticity can be hard, and a document should not inherently be considered more or less admissible because it is stored in the cloud.

3.1.3.10 Cooperation between provider and client in e-discovery

e-Discovery cooperation should preferably be regulated in contracts and be taken into account in service level agreements.

3.1.3.11 Response to a subpoena or search warrant

The cloud agreement should include provisions for notification of a subpoena to the client, and give the client time to try to fight the order.

3.2 Recommendations

The CSA guidance makes the following recommendations

  • Cloud customers should understand relevant legal and regulatory frameworks, as well as contractual requirements and restrictions that apply to handling of their data, and the conduct of their operations in the cloud.
  • Cloud providers should clearly disclose policies, requirements and capabilities, including the terms and conditions that apply to the services they provide.
  • Cloud customers should perform due diligence prior to cloud vendor selection
  • Cloud customers should understand the legal implications of the location of physical operations and storage of the cloud provider
  • Cloud customers should select reasonable locations for data storage to make sure they comply with their own legal requirements
  • Cloud customers should evaluate and take e-discovery requests into account
  • Cloud customers should understand that click-through legal agreements to use a cloud service do not negate requirements for a provider to perform due diligence

CCSK Domain 2: Governance and Enterprise Risk Management

Governance and risk management principles remain the same, but there are changes to the risk picture as well as to the controls available in the cloud. In particular, we need to take the following into account:

  • Cloud risk trade-offs and tools
  • Effects of service and deployment models
  • Risk management in the cloud
  • Tools of cloud governance

A key aspect to remember when deploying services or data to the cloud is that even if security controls are delegated to a third-party, the responsibility for corporate governance cannot be delegated; it remains within the cloud consumer organization.

Cloud providers aim to streamline and standardize their offerings as much as possible to achieve economies of scale. This is different from a dedicated third-party provider, where contractual terms can often be negotiated. Governance frameworks should therefore not treat cloud providers the same way as dedicated service providers that allow custom governance structures to be agreed on.

Responsibilities and mechanisms for governance are regulated in the contract. If a governance need is not described in the contract, there is a governance gap. This does not mean that the provider should be excluded outright, but it does mean that the consumer should consider how that governance gap can be closed.

Moving to the cloud transfers a lot of the governance and risk management from technical controls to contractual controls.

Cloud governance tools

The key tools of governance in the cloud are contracts, assessments and reporting.

Contracts are the primary tool for extending governance to a third party such as a cloud provider. For public clouds this typically means the terms and conditions of the provider. Contracts are the guarantee of a given service level, and also describe requirements for governance support through audits.

Supplier assessments are important governance tools, especially during provider selection. Performing regular assessments can reveal whether changes to the cloud provider's offerings have altered the governance situation, in particular with regard to any governance gaps.

Compliance reporting includes audit reports. They may also include automatically generated compliance data in a dashboard, such as patch level status on software, or some other defined KPI. Audit reports may be internal reports but most often these are made by an accredited third party. Common compliance frameworks are provided by ISO 27017, ISO 38500, COBIT.

Risk management

Enterprise risk management (ERM) in the cloud is based on the shared responsibility model. The provider will take responsibility for certain risk controls, whereas the consumer is responsible for others. Where the split is depends on the service model.

The division of responsibilities should be clearly regulated in the contract. Lack of such regulation can lead to hidden implementation gaps, leaving services vulnerable to abuse.

Service models

IaaS mostly resembles traditional IT, as most controls remain under the direct management of the cloud consumer. Thus, policies and controls to a large degree remain under the control of the cloud consumer too. There is one primary change: the orchestration/management plane. Managing the risk of the management plane becomes a core governance and risk management activity, essentially moving responsibilities from on-premise activities to the management plane.

SaaS providers vary greatly in competence and the tools offered for compliance management. It is often possible to negotiate custom contracts with smaller SaaS providers, whereas the more mature or bigger players will have more standardized contracts but also more tools appropriate to governance needs of the enterprise. The SaaS model can be less transparent than desired, and establishing an acceptable contract is important in order to have good control over governance and risk management.

Public cloud providers often allow for less negotiation than private cloud. Hybrid and community governance can easily become complicated because the opinions of several parties will have to be weighed against each other.

Risk trade-offs

Using cloud services will typically result in more trust put in third-parties and less direct access to security controls. Whether this increases or decreases the overall risk level depends on the threat model, as well as political risk.

The key issue is that governance is changed from internal policy and auditing to contracts and audit reports; it is a less hands-on approach and can result in lower transparency and trust in the governance model.

CSA recommendations

  • Identify the shared responsibilities. Use accepted standards to build a cloud governance framework.
  • Understand and manage how contracts affect risk and governance. Consider alternative controls if a contract leaves governance gaps and cannot be changed.
  • Develop a process with criteria for provider selection. Re-assessments should be regular, and preferably automated.
  • Align risks to risk tolerances per asset as different assets may have different tolerance levels.

#2cents

Let us start with the contract side: most cloud deployments will be in a public cloud, and our ability to negotiate custom contracts will be very limited, or non-existent. What we will have to play with are the control options in the management plane.

The first thing we should perhaps take note of is not really cloud related. We need a regulatory compliance matrix in order to make sure our governance framework and risk management processes actually help us achieve compliance and acceptable risk levels. One practical way to set up a regulatory compliance matrix is to map applicable regulations and governance requirements to the governance tools we have at our disposal, to see if the tools can help achieve compliance.

Regulatory sources mapped to governance tools (contractual impact, supplier assessments, audits, configuration management):

  • GDPR: contractual impact – data processing agreement; supplier assessments – security requirements, GDPR compliance; audits – data processing activities; configuration management – data retention, backups, discoverability, encryption
  • Customer SLA: contractual impact – SLA guarantees; audits – uptime reporting
  • ISO 27001: contractual impact – certifications; audits – audit reports for certifications; configuration management – extension of company policies to the management plane

Based on the regulatory compliance matrix, a more detailed governance matrix can be developed based on applicable guidance. Then governance and risk management gaps can be identified, and closing plans created.
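A minimal sketch of keeping such a matrix as structured data so that gaps (requirements with no supporting governance tool) can be listed automatically. This is my own illustration; the entries are hypothetical.

# Hypothetical regulatory compliance matrix: requirement -> governance tools that address it
compliance_matrix = {
    "GDPR: data processing agreement in place": ["contract"],
    "GDPR: data retention and deletion": ["configuration management", "audits"],
    "Customer SLA: uptime guarantees": ["contract", "audits"],
    "ISO 27001: certification maintained": ["supplier assessments", "audits"],
    "Incident response responsibilities": [],  # nothing mapped yet: a governance gap
}

gaps = [requirement for requirement, tools in compliance_matrix.items() if not tools]
print("Governance gaps needing a closing plan:", gaps)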

Traditionally, cloud deployments have been seen as higher risk than on-premise deployments due to less hands-on risk controls. For many organizations, however, the use of cloud services with proper monitoring will lead to better security, because their on-premise environments have insufficient security controls and logging. There are thus situations where a shift from hands-on to contractual controls is a good thing for security. One could probably claim that this is the case for most cloud consumers.

One aspect that is critical to security is planning of incident response. To some degree the ability to do incident response on cloud deployments depends on configurations set in the management plane, especially the use of logging and alerting functionality. It should also be clarified up front where the shared responsibility model puts the responsibility for performing incident response actions throughout all phases (preparation, identification, containment, eradication, recovery and lessons learned).

The best way to take cloud into account in risk management and governance is to make sure policies, procedures and standards cover cloud, and that cloud is not seen as an “add-on” to on-premise services. Only integrated governance systems will achieve transparency and managed regulatory compliance.

How to reduce cybersecurity risks for stores, shops and small businesses

Crime in general is moving online, and with that the digital risks for all businesses are increasing, including for traditional physical stores – as well as eCommerce sites. This blog post is a quick summary of some risks that are growing quickly and what shop owners can do to better control them.

Top 10 Cybersecurity Risks

The following risks are faced by most organizations. For many stores selling physical goods these would be devastating today as they rely more and more on digital services.

How secure is your shop when you include the digital arena? Do you put your customers at risk?
  1. Point of sale malware leading to stolen credit cards
  2. Supply chain disruptions due to cybersecurity incidents
  3. Ransomware on computers used to manage and run stores
  4. Physical system manipulation through sensors and IoT, e.g. an adversary turning off the cooling in a grocery store’s refrigerators
  5. Website hacks
  6. Hacking of customer’s mobile devices due to insecure wireless network
  7. Intrusion into systems via insecure networks
  8. Unavailability of critical digital services due to cyber incidents (e.g. SaaS systems needed to operate the business)
  9. Lack of IT competence to help respond to incidents
  10. Compromised e-mail accounts and social media accounts used to run the business

Securing the shop

Shop owners have long been used to securing their stores against physical theft – using alarms, guards and locks. Here are seven things all shop owners can do to also secure their businesses against cybersecurity events:

1 – Use only up-to-date IT equipment and software.

Outdated software can be exploited by malware. Keeping software up to date drastically reduces the risk of infection. If you have equipment that cannot be upgraded because it is too old, you should get rid of it. The rest should receive updates as quickly as possible when they are made available, preferably automatically.

2 – Create a security awareness program for employees.

No business is stronger than its weakest link – and that is true for security too. By teaching employees good cybersecurity habits the risk of an employee downloading a dangerous attachment or accepting a shady excuse for weird behavior from a criminal will be much lower. A combination of on-site discussions and e-learning that can be consumed on mobile devices can be effective for delivering this.

3 – Use the guest network only for guests.

Many stores, coffee shops and other businesses offer free wifi for their customers. Make sure you avoid connecting critical equipment to this network as vulnerabilities can be exposed. Things I’ve seen on networks like this include thermostats, cash registers and printers. Use a separate network for those important things, and do not let outsiders onto that network.

4 – Secure your website like your front door.

Businesses will usually have a web site, quite often with some form of sales and marketing integration – but even if you don't have anything more than a pretty static web page you should take care of its security. If it is down you lose a few customers; if it is hacked and customers are tricked out of their credit card data, they will blame your shop, not the firm you bought the web design from. Make sure you require web designers to maintain and keep your site up to date, and that they follow best practices for web security. You should also consider running a security test of the web page at regular intervals.

5 – Prepare for times of trouble.

You should prepare for bad things to happen and have a plan in place for dealing with it. The basis for creating an incident response plan is a risk assessment that lists the potential threat scenarios. This will also help you come up with security measures that will make those scenarios less likely to occur.

6 – Create backups and test them!

The best medicine against losing data is having a recent backup and knowing how to restore your system. Make sure all critical data are backed up regularly. If you are using cloud software for critical functions such as customer relationship management (CRM) or accounting, check with your vendor what backup options they have. Ideally your backups should be stored in a location that does not depend on the same infrastructure as the software itself. For example – if Google runs your software, you can store your backups with Microsoft.

7 – Minimize the danger of hacked accounts.

The most common way a company gets hacked is through a compromised account. This very often happens because of phishing or password reuse. Phishing is the use of e-mails to trick users into giving up their passwords – for example by sending them to a fake login page that is controlled by the hacker. Three things you can do that will drastically reduce this risk are:

  • Tell everyone to use a password manager and ask them to use very long and complex passwords. They will no longer need to remember the passwords themselves so this will not be a problem. Examples of such software include 1Password and Lastpass.
  • Enforce two-factor authentication (2FA for short) wherever possible. 2FA is the use of a second factor in addition to your password, such as a code generated on your mobile, in order to log in.
  • Give everyone training on detection of social engineering scams as part of your awareness training program.

All of this may seem like quite a lot of work – but when it becomes a habit it will make your team more efficient, and will significantly reduce the cybersecurity threats for both you and your customers.

If you need tools for awareness training, risk management or just someone to talk to about security – take a look at the offerings from Cybehave – intelligent cloud software for better security.

Running an automated security audit using Burp Professional

Reading about hacking in the news can make it seem like anyone can just point a tool at any website and completely take it over. This is not really the case, as hacking, whether automated or manual, requires vulnerabilities.

A well-known tool for security professionals working with web applications is Burp from Portswigger. This is an excellent tool, and it comes in multiple editions: the free community edition, a nice proxy you can use to study HTTP requests and responses (and some other things); the professional edition aimed at pentesting; and the enterprise edition, which is more for DevOps automation. In this little test we'll take the Burp Professional tool and run it using only default settings against a target application I made last year. The app is a simple one for posting things on the internet, and was just a small project I did to learn how to use some of the AWS tools for deployment and monitoring. You find it in all its glory at https://www.woodscreaming.com.

After entering the URL http://www.woodscreaming.com and launching the attack, Burp first goes through a crawl and audit of the unauthenticated routes it can find (it basically clicks all the links it can find). Burp then registers a user and starts probing the authenticated routes, including posting those weird numerical posts.

Woodscreaming.com: note the weird numerical posts. These are telltale signs of automated security testing with random input generation.

What scanners like Burp are usually good at finding, is obvious misconfigurations such as missing security headers, flags on cookies and so on. It did find some of these things in the woodscreaming.com page – but not many.

Waiting for security scanners can seem like it takes forever. Burp estimated some 25.000 days remaining after a while with the minimal http://www.woodscreaming.com page.

After running for a while, Burp estimated that the remaining scan time was something like 25.000 days. I don't know why this happened (I have not seen it in other applications), but since a user can generate new URL paths simply by posting new content, a linear time estimate may easily diverge – a wild guess at what was going on. Because of this we just stopped the scan after some time, as it was unlikely to discover new vulnerabilities.

The underlying application is a traditional server-driven MVC application running Django. Burp works well with applications like this, and the default setup works better than it typically does for the single-page applications (SPAs) that many web applications are today.

So, what did Burp find? Burp assigns a criticality to the vulnerabilities it finds. There were no “High” criticality vulns, but it reported some “Medium” ones.

Missing “Secure” flag on session cookies?

Burp reports 2 cookies that seem to be session cookies and are missing the Secure flag. This means that these cookies would also be set if the application were accessed over an insecure connection (http instead of https), enabling a man-in-the-middle to steal the session or perform a cross-site request forgery (CSRF) attack. This is a real finding, but the actual exposure is limited because the app is only served over https. It should nevertheless be fixed.

A side note on this: the cookies are set by the Django framework in its default state, with no configuration changes made. Hence, this is likely to be the case on many other Django sites as well.
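For reference, these flags are controlled in the Django settings. A minimal sketch of the relevant hardening settings (standard Django settings; the values are illustrative and also address the strict transport security finding listed below):

# settings.py - illustrative hardening of cookie and transport settings
SESSION_COOKIE_SECURE = True     # only send the session cookie over HTTPS
CSRF_COOKIE_SECURE = True        # same for the CSRF cookie
SESSION_COOKIE_HTTPONLY = True   # session cookie not readable from JavaScript
CSRF_COOKIE_HTTPONLY = True      # addresses the "CSRF cookie without HTTPOnly flag" finding
SECURE_SSL_REDIRECT = True       # redirect http to https
SECURE_HSTS_SECONDS = 31536000   # enforce strict transport security for a year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True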

If we go to the “Low” category, there are several issues reported. These are typically harder to exploit, and will also be less likely to cause major breaches in terms of confidentiality, integrity and availability:

  • Client-side HTTP parameter pollution (reflected)
  • CSRF cookie without HTTPOnly flag set
  • Password field with autocomplete enabled
  • Strict transport security not enforced

The first one is perhaps the most interesting one.

HTTP parameter pollution: dangerous or not?

In this case the URL parameter reflected in an anchor tag's href attribute is not interpreted by the application and thus cannot lead to bad things – but it could have been the case that GET parameters were interpreted in the backend, making it possible to have a person perform an unintended action in a request forgery attack. In our case we say, as the jargon file directs us: "It is not a bug, it is a feature!"

So what about the “password field with autocomplete enabled”? This must be one of the most common alerts from auditing software today. This can lead to unintended disclosure of passwords and should be avoided. You’ll find the same on many well-known web pages – but that does not mean we shouldn’t try to avoid it. We’ll put it on the “fix list”.

Are automated tests useful?

Automated tests are useful but they are not the same as a full penetration test. They are good for:

  1. Basic configuration checks. This can typically be done entirely passively, no attack payloads needed.
  2. Identifying vulnerabilities. You will not find all of them, and you will get some false positives, but this is useful.
  3. Learning about vulnerabilities: Burp has very good documentation and good explanations for the vulnerabilities it finds.

If you add a few manual checks to the automated setup – in particular, give it a site map before starting a scan and test inputs with fuzzing (which can also be done using Burp) – you can get a relatively thorough security test done with a single tool.

Defending against OSINT in reconnaissance?

Hackers, whether they are cyber criminals trying to trick you into clicking a ransomware download link, or whether they are nation state intelligence operatives planning to gain access to your infrastructure, can improve their odds massively through proper target reconnaissance prior to any form of offensive engagement. Learn how you can review your footprint and make your organization harder to hack.


Cybehave has an interesting post on OSINT and footprinting, and what approach companies can take to reduce the risk from this type of attack surface mapping: https://cybehave.no/2019/03/05/digital-footprint-how-can-you-defend-against-osint/ (disclaimer: written by me and I own 25% of this company).

tl;dr – straight to the to-do list

  • Don’t publish information that has no business benefit and will make you more vulnerable
  • Patch your vulnerabilities – both on the people and tech levels
  • Build a friendly environment for your people. Don’t let them struggle with issues alone.
  • Prepare for the worst (you can still hope for the best)

Storing seeds for multifactor authentication tokens

When setting up an application to use two-factor authentication, for example with Google Authenticator, each user will have a unique seed value for the authenticator. The identity server needs to know the seed to verify the token – meaning you will have to store it and retrieve it somehow. This means that if an attacker gets access to the storage solution that links OTP secret seeds to user IDs (e.g. usernames), the protocol is broken. Trying to think up some options for securing the secrets: we cannot hash and salt the seed because that breaks the OTP authentication flow. We are hence left with encrypting the seed before storing it.

The most practical option seems to be a symmetric crypto approach; the question is what to use as the crypto key. Here are some approaches I have seen people discuss:

  • User password: if you can phish the password, then you can also generate the OTP provided you know which algorithm/library is used
  • A static application secret: should be safe provided that secret is never leaked but using a static secret means that if it is compromised, all users are compromised. Still better than the user password, though. 
  • Using non-static user-level metadata to create a unique key for each user that is not vulnerable to phishing or guessing. The catch is that such metadata is typically visible to admins.
The resulting authentication flow would be:

  1. Get username/password
  2. Verify username/password
  3. Get OTP seed (encrypted)
  4. Get metadata and reconstruct the encryption key
  5. Verify OTP
  6. Authenticate the user and store timestamp and other auth metadata
  7. Construct a new encryption key
  8. Encrypt the seed
  9. Store it in the database

The question is what metadata to use. We need the following properties to be true:

  • Not possible to guess for a third party even if we tell what metadata it is
  • Not possible to reconstruct for an administrator with access to the account
  • Not possible to phish or obtain through social engineering or client side attacks

There are many possibilities but here is one possible solution that would satisfy all the above requirements:

Key = Password (Not available to admins) + Timestamp for last login (not guessable/phishable)
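Here is a minimal sketch of what that could look like in Python using the cryptography library. This is my own illustration of the idea, not a vetted design; the salt handling, iteration count and example values are assumptions.

# Derive a per-user key from password + last-login timestamp and use it to
# encrypt/decrypt the OTP seed. Illustration only.
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password: str, last_login_ts: str, salt: bytes) -> bytes:
    """Key = password (unknown to admins) + last-login timestamp (not phishable)."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive((password + last_login_ts).encode()))

# At login: decrypt the stored seed with the old key, verify the OTP, then
# re-encrypt the seed with a new key based on the new login timestamp.
salt = os.urandom(16)                        # stored alongside the user record
old_key = derive_key("correct horse", "2019-06-01T10:00:00Z", salt)
seed_ciphertext = Fernet(old_key).encrypt(b"BASE32OTPSEED")

seed = Fernet(old_key).decrypt(seed_ciphertext)   # ...verify the OTP code with this seed...
new_key = derive_key("correct horse", "2019-06-02T08:30:00Z", salt)
seed_ciphertext = Fernet(new_key).encrypt(seed)   # store the updated ciphertext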

Combining VueJS and Django to build forms with custom widgets

This post is brief and explains a pattern that may be dangerous, but is still very handy for combining VueJS with Django templates for dynamic forms. Here's the case: I need to build a form for sending out some messages. One of the form widgets is a <select> tag where each <option> is a model instance from Django. The widget shows the name of the model instance in the UI, but this does not provide enough context to be useful; we also need some description text. There are basically two options for how to handle this:

  1. Use the form “as-is” but provide the extra context in the UI by pulling some extra information and building an information box in the UI.
  2. Create a custom widget, and bind it to the Django model form using a hidden field.

Both are probably equally good, but I went with the second option. So here’s what I did:

  1. Build a normal Django model form, but change the widget for the field in question to type “HiddenInput” in the forms.py file.
  2. Build a selector widget using VueJS that allows the user to get the desired content and review the various options with full context (including images and videos – things you can't put inside a dropdown list). We bind the selected choice to frontend data using the v-model directive in VueJS.
  3. Set the hidden field's value based on the data stored in the frontend using that binding.
  4. Process the form as you normally would with a Django model form.

The form definition remains very simple. Here’s the relevant class from this example:

from django import forms

class MailForm(forms.ModelForm):

    class Meta:
        model = Campaign  # the project's existing Campaign model
        fields = ('name', 'to', 'elearning',)
        widgets = {
            # Render 'elearning' as a hidden input; ':value' is Vue shorthand for
            # v-bind, so VueJS keeps the field in sync with the selected module.pk.
            'elearning': forms.HiddenInput(attrs={':value': 'module.pk'})
        }

The selector widget can take any form you could desire. The point in this project was to show some more context for the “eLearning” model. The user here gets notification about enrollment in an eLearning module by e-mail. The administrator setting up the program needs to get a bit of context about that e-learning, such as the name of the module, a description of its content, and perhaps a preview of a video or other multimedia. Below is an example of a simple widget of this type. The course administrator can here browse through the various options by clicking next, and the e-mail form is automatically updated.

Of course, to do that binding we need a bit of JavaScript in the Django template. We need to perform the following tasks to make our custom widget work:

  1. Fetch information about all the options from the server. We need to create an API endpoint for this that can deliver JSON data to the frontend (see the sketch after this list).
  2. Set the data item bound to the Django form based on the user's current selection
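As an illustration of step 1, a minimal Django view could expose the options as JSON. This is a hedged sketch: the ELearningModule model, its fields and the module path are assumptions, not the actual project code.

# views.py - hypothetical JSON endpoint feeding the VueJS selector widget
from django.http import JsonResponse
from .models import ELearningModule  # assumed model name

def elearning_options(request):
    # Assumed fields: pk, name, description
    modules = ELearningModule.objects.all()
    data = [{"pk": m.pk, "name": m.name, "description": m.description} for m in modules]
    return JsonResponse({"modules": data})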

Now the form can be submitted and processed using the normal Django framework patterns – but with a much more context-rich selection widget than a simple dropdown list.

Is it safe to do this?

Combining frontend and server-side rendering with different templates for HTML rendering can be dangerous. See this excellent write-up on XSS vulnerabilities that can result from such combinations: https://github.com/dotboris/vuejs-serverside-template-xss.

This is a problem when user input is injected via the server-side template, as the user can then supply the interpolation tags as part of the input. In our case there is no user input in those combinations. However, if you need to take user input and re-render it using the server-side templates of a framework like Django, here are some things you can do to harden against this threat:

  • Use the v-pre directive in VueJS
  • Sanitize the input to discard unsafe characters, including the VueJS delimiters (a small sketch of this follows after the list)
  • Escape output generated from the database so that injected content cannot reach the user’s context as executable JavaScript
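
As a small sketch of the second point, here is one way to discard the default VueJS interpolation delimiters from user input before it is rendered server-side. This assumes the default {{ }} delimiters are in use; it is not a complete sanitizer, and the function name is made up for this example.

import re

# Default VueJS interpolation delimiters: {{ ... }}
MUSTACHE = re.compile(r"\{\{.*?\}\}", re.DOTALL)

def strip_vue_delimiters(value: str) -> str:
    """Remove anything wrapped in {{ }} so that user-supplied text rendered by a
    Django template cannot be picked up as a VueJS expression in the browser."""
    return MUSTACHE.sub("", value)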

Security awareness: the tale of the Minister of Fisheries and his love of an Iranian former beauty queen

An interesting story worthy of inspiring books and TV shows is unfolding in Norway. The Minister of Fisheries, Per Sandberg (born 1960), from the Progress Party (a populist right party), spent his summer holiday in Iran together with his new girlfriend, a 28-year-old former beauty queen who fled to Norway to escape forced marriage when she was 16. The minister brought his smartphone, where he has access to classified information systems. He forgot to inform the prime minister before he left, a breach of security protocol. He ignored security advice from the Norwegian security police, which is responsible for national security issues and counter-intelligence. He is still a member of the cabinet. This post is an attempt at making sense of this, and what the actual risk is. A lot of people in Norway have had their say in the media about this case, both knowledgeable voices and less reasonable ones.

Some context: Norwegian-Iranian relations

Traditionally there has been little trade between Iran and Norway. Recently, following the nuclear agreement between Iran and the US, UK, France, China, Russia and Germany, this has started to change. Norway has seen significant potential for exporting fish and aquaculture technologies to Iran. In the last year or so, Minister Sandberg has been central to this development (see the timeline further down on Sandberg’s known touch points with Iran).

Among the Norwegian public, skepticism of the Iranian regime is high, and there has been vocal criticism of establishing trade relationships given human rights concerns.

Norwegian and Iranian interest spheres also intersect in the Middle East. Iran has established tighter relations with Russia since 2016, when it started to allow Russian bombers to take off from Iranian air force bases for bombing missions inside Syria. Norwegian-Russian relations are strained, following the response of NATO and the EU to Russian operations in Ukraine, interference in western elections and a general intensification of cyber operations against Norwegian targets (see the open threat assessment from Norwegian military intelligence, in Norwegian: https://forsvaret.no/fakta/undersokelser-og-rapporter/fokus2018/rapport). Operations against Norwegian government officials by Iranian services may thus also be driven by other Iranian interests than direct Norwegian-Iranian relations.

Sandberg: who is he and what could make him a viable target for intelligence operations?

This is a presentation of Sandberg taken from the web page of the Ministry of Trade, Industry and Fisheries (https://www.regjeringen.no/no/dep/nfd/organisation/y-per-sandberg/id2467677/). Note that his marital status is listed as “married” – but he separated from his wife in May 2018. Sandberg is a vocal figure in Norwegian politics. He is known to be against immigration and a supporter of strict immigration laws. He has repeatedly been accused of racism, especially by the opposition. He has long held top positions in the Progress Party, which has been part of a coalition cabinet together with the conservatives (Høyre), and more recently also with the moderately liberalist party “Venstre” (meaning left, although it is not a socialist party). Sandberg is known for multiple controversies, summarized on this Wikipedia page: https://en.wikipedia.org/wiki/Per_Sandberg#Controversies. These include addressing the parliament after having had too much to drink, losing his driver’s license due to speeding, and a 1997 conviction for violence against an asylum seeker.

Sandberg was married from 2010 to 2018 to Line Miriam Sandberg, who has been working as a state secretary for the Ministry of Health since 2017. They recently separated.

His new girlfriend

Sandberg’s new girlfriend, Bahareh Letnes, came to Norway when she was 16 (or 13/14 the first time, according to some sources) to flee a forced marriage to a 60-year-old man in Iran. She is now a Norwegian citizen and is 28 years old. She participated in several beauty contests in 2013–2014. After she first came to Norway, she was not granted asylum and was returned to Iran. Iran sent her back to Norway again because she did not have any identification papers when arriving, and she was adopted by a Norwegian family. A summary of known facts about Letnes, and how she gained access to Iran after being returned to Norway without ID papers as a teenager, was written in Norwegian by Mahmoud Farahmand (https://www.nettavisen.no/meninger/farahmand/per-sandbergs-utfordring/3423519653.html). Farahmand is currently a consultant with the auditing and consulting firm BDO and has a background from the Norwegian armed forces. He often writes opinion pieces about security-related topics. To summarize some of Farahmand’s points:

  • Letnes was returned to Norway and was later adopted by her foster family
  • She has been a “go-to-person” for journalists wanting to get in touch with Iranian officials and has been known to have close relationships with the Iranian embassy in Oslo
  • Iran does not allow Iranian-born individuals to enter Iran without an Iranian passport. If they do not have one, they need to get access to their birth certificate or otherwise prove to the Iranian government that they in fact have a right to an Iranian passport. Since Letnes fled Iran to seek protection from the threat of her family, it seems she must have gotten access to this without contacting her family, Farahmand argues.

Letnes had her application for asylum turned down three times before getting it approved. The reason the immigration authorities changed their decision in 2008 is not known (Norw: https://www.nrk.no/trondelag/–jeg-er-kjempeglad-1.6236578). In addition, it has become known in the media in the last few days that Letnes applied for a job with Sandberg’s ministry in 2016, suggesting she could act as a translator and guide for Sandberg’s communications with Iran in matters related to fishery and aquaculture trade; she did not get the job. Sandberg denied any knowledge of this prior to media inquiring about it. She also registered a sole proprietorship in January this year, B & H GENERAL TRADING COMPANY. BAHAREH LETNES, a company for trading with Iran in fish, natural gas, oil and technology (corporate registration information: https://w2.brreg.no/enhet/sok/detalj.jsp?orgnr=920188095). According to media reports, Letnes says the company has had no activity so far.

A honeytrap? Possibly. A security breach? For sure.

The arguments from Farahmand’s article above, together with the fact that Letnes tried to get a job with Sandberg’s ministry in 2016, could easily indicate that Letnes sought to get close to Sandberg. She has sought multiple touchpoints with him since he was appointed Minister of Fisheries in 2015.

This would be a classic honeytrap, although a relatively public one. Sandberg has failed to follow security protocol on many occasions in his dealings with Letnes and Iran. Obvious signs of poor security awareness on the part of the Minister:

  • He brought his government-issued cell phone to Iran and left it unattended for long periods of time at the places where they stayed
  • He did not tell the office of the Prime Minister about his travel to Iran before leaving. This is a breach of security protocol for Norwegian ministers
  • His separation from his wife became known in May this year
  • He has stated that his “original vacation plans got smashed, so the trip to Iran was a last-minute decision”. He was supposed to go on holiday to Turkey, which he had also reported to his Ministry and the office of the Prime Minister, in accordance with security protocol (Norw: https://www.aftenposten.no/norge/politikk/i/e1xzJO/Fiskeriminister-Per-Sandberg-bekrefter-at-han-reiste-til-Iran-uten-a-informere-departementet-eller-statsministeren)
  • The Norwegian government was made aware of Sandberg’s presence in Iran when they received an e-mail from the Iranian embassy in Oslo, requesting official meetings with Minister Sandberg while he was in Iran

Iranian TTP

According to Kjell Grandhagen, former head of Norwegian military intelligence, Iran has a very capable and modern intelligence organization. He considers it highly likely that Sandberg’s government-issued phone, which he left unattended much of the time while in Iran, has been hacked (https://www.digi.no/artikler/tidligere-sjef-for-e-tjenesten-tror-per-sandbergs-mobil-har-blitt-hacket/442930). According to this CSO summary, Iran has serious capabilities within both the HUMINT and cyber domains. Considering Iran’s known cyber capabilities, and the looming sanctions from the Trump administration, getting both information and leverage over a key politician in a NATO country becomes even more interesting, not only to Iran but also to Russia.

Coming back to Iran’s recent, tighter cooperation with Russia, it is not unlikely that the two countries are also developing a closer relationship when it comes to intelligence gathering. The use of honey traps has been a long-standing Russian tactic for information gathering and for getting leverage over decision makers. In 2015, Norwegian police warned against Russian intelligence operations targeting politicians, including the use of honey traps (https://finance.yahoo.com/news/norwegian-police-warning-citizens-against-195510994.html).

A summary: why is he still in office?

The facts and arguments presented above should indicate two things very clearly:

  • Based on publicly known information, it is clearly possible that Iranian intelligence is targeting Per Sandberg. They may have an asset close to him, as well as having had physical access to his smartphone that has direct access to classified information systems.
  • Further, Sandberg has broken established security protocol, and although admitting this, he does not seem to appreciate the potential impact

The effect of a top leader not taking security seriously is very unfortunate. Good security awareness in an organization depends heavily on the visible actions of its people at the top – in business as well as in politics. A breach of security policy at this level, without any personal consequences, sends a very poor message to other politicians and government officials. It also sends a message to adversaries that targeting top-level politicians is likely to work, even if there are numerous indicators of a security breach. There should be no other possible conclusion to this than relieving Mr. Sandberg of his position – which would set him free to further develop his relationship with the Iranian beauty queen.