Localization in Vue 2.0 single file components without using an i18n library

This post is a quick note on localization. Recently I needed to add some localization features to a frontend project running Vue 2.0. Why not just use an i18n package? That would probably have been a good thing to do, but those I found either had confusing documentation, many old unresolved issues on GitHub, or warnings from npm audit. And my needs were very simple: translate a few text blocks.

Localization is not always easy – how hard is it to get it done without extra dependencies?

Starting simple

Let’s start with a simple Vue app in a single HTML file using a CDN to load Vue.

<html>
    <head>
        <!-- Import VueJS in dev version from CDN -->
        <script src="https://cdn.jsdelivr.net/npm/vue/dist/vue.js"></script>
        <title>Vue Translation</title>
    </head>
    <body>
        <div id="app">
            <h1>
                {{ translations.en.hello }}
            </h1>
            
        </div>
        <script>
            var app = new Vue({
                el: '#app',
                data: {
                    message: 'Hello Vue!',
                    translations: {
                        en: {
                            "hello": "Hello World"
                        },
                        no: {
                            "hello": "Hallo Verden"
                        },
                        de: {
                            "hello": "Guten Tag, Welt!"
                        }
                    },
                    locale: 'en'
                }
            })
        </script>
    </body>
</html>

This will render the English version of “Hello World”. Let us first consider how we are going to get the right text for our locale dynamically. We need a function to load that. Let’s create a methods section in our Vue app, and add a get_text() function:

<h1>
  {{ get_text('hello') }}
</h1>
...
<script>
var app = new Vue({
  el: '#app',
  data: {
    ...
  },
  methods: {
    get_text (textbit) {
      return this.translations[this.locale][textbit]
    }
  }
})
</script>

OK – so this works, now we are only missing a way to set the preferred language. A simple way for this example is to add a button that sets it when clicked. To make the choice persistent we can save the choice to localStorage.
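Stripped of the Vue template wiring, the locale-switching logic could look something like this. Note that the set_locale name and the in-memory fallback for environments without localStorage are my own additions, not part of the original example:

```javascript
// Locale switching with persistence. Falls back to a simple in-memory
// store when localStorage is unavailable (e.g. when run outside a browser).
var storage = (typeof localStorage !== 'undefined') ? localStorage : {
  _data: {},
  setItem: function (k, v) { this._data[k] = v },
  getItem: function (k) { return this._data.hasOwnProperty(k) ? this._data[k] : null }
};

var app = {
  // Restore a previously saved choice, defaulting to English.
  locale: storage.getItem('locale') || 'en',
  translations: {
    en: { hello: 'Hello World' },
    no: { hello: 'Hallo Verden' },
    de: { hello: 'Guten Tag, Welt!' }
  },
  set_locale: function (locale) {
    this.locale = locale;
    storage.setItem('locale', locale); // persist across page reloads
  },
  get_text: function (textbit) {
    return this.translations[this.locale][textbit];
  }
};
```

In the template, a button per language would then call the method, for example `<button @click="set_locale('no')">Norsk</button>`.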

If you want the example file, you can download it from github.com. It is also published here so you can see it in action.

Translation – is this a good approach?

The answer to that is probably yes and no – it depends on your needs. Here’s what you will miss out on that a good i18n library would solve for you:

  • Dynamic tagging of translation strings inline in templates, making it easier to generate localization files for translators automatically.
  • Pluralization: getting grammar right can be hard. Going from one to several of something is manageable in English, but other languages can be much harder to translate because words and phrases vary more with context.

The good things are: no dependencies, and you can use the data structure you want. Giving JSON files to translators works, just show them where to insert the translated strings.

Tip: use backtick template literals (`string`) instead of 'string' or "string". Backtick strings can span multiple lines, making multiline strings directly usable in your .vue or .js files.
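For instance, a backtick string keeps its line breaks, so a longer translated paragraph can be written as-is:

```javascript
// A multiline translation written with a backtick template literal.
const translations = {
  en: {
    welcome: `Welcome!
This greeting spans
multiple lines.`
  }
};
console.log(translations.en.welcome);
```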

What about single-file components?

You can use this approach in single-file components without modification. A few notes:

  • Consider using a Vuex store to share state across components instead of relying on localStorage
  • It is probably a good idea to create a global translation file for things that are used in many components, to avoid repeating the same translation many times
  • Finally – if your translation needs are complicated, it is probably better to go with a well maintained i18n library. If you can find one.
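As a sketch of the global-translation-file idea, the shared strings could live in their own module that components (or a Vuex store) import. The file name and the fallback behavior below are my own suggestions, not from the original example:

```javascript
// translations.js — a shared translation table plus a lookup helper,
// usable from any component or from a Vuex store.
const translations = {
  en: { hello: 'Hello World', save: 'Save' },
  no: { hello: 'Hallo Verden', save: 'Lagre' }
};

function get_text(locale, textbit) {
  // Fall back to English for unknown locales, and to the key itself
  // for missing strings, so untranslated text never renders as undefined.
  const table = translations[locale] || translations.en;
  return table[textbit] || textbit;
}

module.exports = { translations, get_text };
```

A component can then do `get_text(this.locale, 'hello')` instead of each component carrying its own copy of the strings.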

CCSK Domain 3: Legal and contractual issues

This is a relatively long post. Specific areas covered:

3.1 Overview

3.1.1 Legal frameworks governing data protection and privacy

Conflicting requirements in different jurisdictions, and sometimes within the same jurisdiction. Legal requirements may vary according to

  • Location of cloud provider
  • Location of cloud consumer
  • Location of data subject
  • Location of servers/datacenters
  • Legal jurisdiction of contract between the parties, which may be different than the locations of those parties
  • Any international treaties between the locations where the parties are located

3.1.1.1 Common themes

Omnibus laws: same law applicable across all sectors

Sectoral laws

3.1.1.2 Required security measures

Legal requirements may include prescriptive or risk-based security measures.

3.1.1.3 Restrictions to cross-border data transfer

Transfer of data across borders can be prohibited. The most common situation involves transferring personal data to countries that do not have “adequate data protection laws”. This is a common theme in the GDPR. Other examples are data covered by national security legislation.

For personal data, transfers to inadequate locations may require specific legal instruments to be put in place in order for this to be considered compliant with the stricter region’s legal requirements.

3.1.1.4 Regional examples

Australia

  • Privacy act of 1988
  • Australian consumer law (ACL)

The Privacy Act contains 13 Australian Privacy Principles (APPs) that apply to all sectors, including non-profit organizations with an annual turnover of more than 3 million Australian dollars.

In 2017 the Australian privacy act was amended to require companies to notify affected Australian residents and the Australian Information Commissioner of breaches that can cause serious harm. A security breach must be reported if:

  1. There is unauthorized access or disclosure of personal information that can cause serious harm
  2. Personal information is lost in circumstances where disclosure is likely and could cause serious harm

The ACL protects consumers from fraudulent contracts and poor conduct from service providers, such as failed breach notifications. The Australian Privacy Act can apply to Australian customers/consumers even if the cloud provider is based elsewhere or other laws are stated in the service agreement.

China

China has introduced new legislation governing information systems over the last few years.

  • 2017: Cyber security law: applies to critical information infrastructure operators
  • May 2017: Proposed measures on the security of cross-border transfers of personal information and important data. Under evaluation for implementation at the time of issue of the CSA guidance v4.

The 2017 cybersecurity law puts requirements on infrastructure operators to design systems with security in mind, put in place emergency response plans and give access and assistance to investigating authorities, for both national security purposes and criminal investigations.

The Chinese security law also requires companies to inform users about known security defects, and also report defects to the authorities.

Regarding privacy the cybersecurity law requires that personal information about Chinese citizens is stored inside mainland China.

The draft regulations on cross-border data transfer issued in 2017 go further than the cybersecurity law.

  • New security assessment requirements for companies that want to send data out of China
  • Expanding data localization requirements (the types of data that can only be stored inside China)

Japan

The relevant Japanese legislation is found in the “Act on the Protection of Personal Information” (APPI). There are also multiple sector-specific laws.

Beginning in 2017, amendments to the APPI require consent of the data subject for transfer of personal data to a third party. Consent is not required if the receiving party operates in a location with data protection laws considered adequate by the Personal Information Protection Commission.

EU: GDPR and e-Privacy

The GDPR came into force on 25 May 2018. The e-Privacy directive is still not in force. TechRepublic has a short summary of differences between the two regulations (https://www.techrepublic.com/article/gdpr-vs-eprivacy-the-3-differences-you-need-to-know/):

  1. ePrivacy specifically covers electronic communications. It evolved from the 2002 ePrivacy directive, which focused primarily on email and SMS; the new version will cover electronic communications in general, including data communication with IoT devices and the use of social media platforms. The ePrivacy directive will also cover metadata about private communications.
  2. ePrivacy includes non-personal data. The focus is on confidentiality of communications, that may also contain non-personal data and data related to a legal person.
  3. They have different legal bases. GDPR is based on Article 8 of the Charter of Fundamental Rights of the European Union, whereas the ePrivacy directive is based on Article 16 and Article 114 of the Treaty on the Functioning of the European Union – but also Article 7 of the Charter of Fundamental Rights: “Everyone has the right to respect for his or her private and family life, home and communications.”

The CSA guidance gives a summary of GDPR requirements:

  • Data processors must keep records of processing
  • Data subject rights: data subjects have a right to information on how their data is being processed, the right to object to certain uses of their personal data, the right to have data corrected or deleted, to be compensated for damages suffered as a result of unlawful processing, and the right to data portability. These rights significantly affect cloud relationships and contracts.
  • Security breaches: breaches must be reported to authorities within 72 hours and data subjects must be notified if there is a risk of serious harm to the data subjects
  • There are country-specific variations in some interpretations. For example, Germany requires an organization to have a data protection officer if it has more than 9 employees.
  • Sanctions: authorities can use fines up to 4% of global annual revenue, or 20 million EUR for serious violations, whichever amount is higher.

EU: Network information security directive

The NIS directive has been in force since May 2018. The directive introduces a framework for ensuring confidentiality, integrity and availability of networks and information systems. It applies to critical infrastructure and essential societal and financial functions. The requirements include:

  • Take technical and organizational measures to secure networks and information systems
  • Take measures to prevent and minimize impact of incidents, and to facilitate business continuity during severe incidents
  • Notify without delay relevant authorities
  • Provide information necessary to assess the security of their networks and information systems
  • Provide evidence of effective implementation of security policies, such as a policy audit

The NIS directive requires member states to impose security requirements on online marketplaces, cloud computing service providers and online search engines. Digital service providers based outside the EU that supply services within the EU are also within the scope of the directive.

Note: parts of these requirements, in particular for critical infrastructure, are covered by various national security laws. The scope of the NIS directive is broader than national security and typically requires the introduction of new legislation. This work is not yet complete across the EU/EEA area. Digital Europe has an implementation tracker site set up here: https://www.digitaleurope.org/resources/nis-implementation-tracker/.

Central and South America

Data protection laws are coming into force in Central and South American countries. They include security requirements and the need for a data custodian.

North America: United States

The US has a sectoral approach to legislation, with hundreds of federal, state and local regulations. Organizations doing business in the United States, or that collect or process data on US residents, are often subject to multiple laws, and identifying the applicable regulatory matrix can be challenging for both cloud consumers and providers.

Federal law

  • The Gramm-Leach-Bliley Act (GLBA)
  • The Health Insurance Portability and Accountability Act, 1996 (known as HIPAA)
  • The Children’s Online Privacy Protection Act of 1998 (COPPA)

Most of these laws require companies to take precautions when hiring subcontractors and service providers. They may also hold organizations responsible for the acts of subcontractors.

US State Law

In addition to federal regulations, most US states have laws relating to data privacy and security. These laws apply to any entity that collects or processes information on residents of that state, regardless of where the data is stored (the CSA guidance says regardless of where within the United States, but it is likely that they would apply to international storage as well in this case).

Security breach disclosure requirements

Breach disclosure requirements are found in multiple regulations. Most require informing data subjects.

Knowledge of these laws is important for both cloud consumers and providers, especially to manage the risk of class action lawsuits.

In addition to the state laws and regulations, there is the “common law of privacy and security”, a nickname given to a body of consent orders published by federal and state government agencies based on investigations into security incidents.

The FTC (Federal Trade Commission) in particular has for almost 20 years had the power to conduct enforcement actions against companies whose privacy and security practices are inconsistent with claims made in public disclosures, making their practices “unfair and deceptive”. For cloud computing this means that when a certain way of working changes, the public documentation of the system needs to be updated to make sure actions are not in breach of Section 5 of the FTC Act.

3.1.2 Contracts and Provider Selection

In addition to legal requirements, cloud consumers may have contractual obligations to protect the personal data of their own clients, contacts or employees, such as securing the data and avoiding processing other than what has been agreed. Key documents are typically the Terms and Conditions and Privacy Policy documents posted on company websites.

When data or operations are transferred to a cloud, the responsibility for the data typically remains with the collector. There may be sharing of responsibilities when the cloud provider is performing some of the operations. This also depends on the service model of the cloud provider. In any case a data processing agreement or similar contractual instrument should be put in place to regulate activities, uses and responsibilities.

3.1.2.1 Internal due diligence

Prior to using a cloud service both parties (cloud provider and consumer) should identify legal requirements and compliance barriers.

Cloud consumers should investigate whether they have entered into any confidentiality agreements or data use agreements that could limit the use of a cloud service. In such cases, consent from the client needs to be in place before transferring data to a cloud environment.

3.1.2.3 External due diligence

Before entering into a contract, a review of the other party’s operations should be done. For evaluating a cloud service, this will typically include a look at the applicable service level, end-user and legal agreements, security policies, security disclosures and compliance proof (typically an audit report).

3.1.2.4 Contract negotiations

Cloud contracts are often standardized. An important aspect is the regulation of shared responsibilities. Contracts should be reviewed carefully even when they are presented as “not up for negotiation”. When certain contractual requirements cannot be included, the customer should evaluate whether other risk mitigation techniques can be used.

3.1.2.5 Reliance on third-party audits and attestations

Audit reports could and should be used in security assessments. The scope of the audit should be considered when used in place of a direct audit.

3.1.3 Electronic discovery

In US law, discovery is the process by which an opposing party obtains private documents for use in litigation. Discovery does not have to be limited to documents known to be admissible as evidence in court from the outset. Discovery applies to all documents reasonably held to be admissible as evidence (relevant and probative). See federal rules on civil procedure: https://www.federalrulesofcivilprocedure.org/frcp/title-v-disclosures-and-discovery/rule-26-duty-to-disclose-general-provisions-governing-discovery/.

There have been many examples of litigants who deleted or lost evidence and consequently lost the case and were sentenced to pay damages to the other party. Because of this, cloud providers and consumers need to plan for how to identify and extract all documents relevant to a case.

3.1.3.1 Possession, custody and control

In most US jurisdictions, the obligation to produce relevant information in court is limited to data within a party's possession, custody or control. Using a cloud provider for storage does not remove this obligation. Some data may not be under the control of the consumer (disaster recovery, metadata), and such data can be relevant to a litigation. The responsibility of a cloud provider to produce such data remains unclear, especially in cross-border/international cases.

Recent cases of interest:

  • Norwegian police against Tidal regarding streaming fraud
  • FBI against Microsoft (Ireland Onedrive case)

3.1.3.2 Relevant cloud applications and environment

In some cases, a cloud application or environment itself could be relevant to resolving a dispute. In such circumstances the artefact is likely to be outside the control of the client and require a discovery process to be served on the cloud provider directly, where such action is enforceable.

3.1.3.3 Searchability and e-discovery tools

Discovery may not be possible using the same tools as in traditional IT environments. Cloud providers sometimes provide search functionality, or such access can be required through a negotiated cloud agreement.

3.1.3.4 Preservation

Preservation is the avoidance of destruction of data relevant to a litigation, or that is likely to be relevant to a litigation in the future. There are similar laws on this in the US, Europe, Japan, South Korea and Singapore.

3.1.3.5 Data retention laws and record keeping obligations

Data retention requirements exist for various types of data. Privacy laws put restrictions on retention. In the case of conflicting requirements on the same data, this should be resolved through guidance and case law. Storage requirements should be weighed against SLA requirements and costs when using cloud storage.

  • Scope of preservation: a requesting party is only entitled to cloud-hosted data that is relevant to the legal issue at hand. Lack of granular identifiability can lead to a requirement to over-preserve and over-share data.
  • Dynamic and shared storage: the burden of preserving data in the cloud can be reasonable if the client has space to hold it in place, the data is static, and the people with access are limited. Because of the elastic nature of cloud environments this is seldom the case in practice, and it may be necessary to work with the cloud provider on a plan for data preservation.
  • Reasonable integrity: when subject to a discovery process, reasonable steps should be taken to secure the integrity of data collection (complete, accurate)
  • Limits to accessibility: a cloud customer may not be able to access all relevant data in the cloud. The cloud consumer and provider may have to review the relevance of the request before taking further steps to acquire the data.

3.1.3.7 Direct access

Outside cloud environments it is not common to give the requesting party direct access to an IT environment. Direct hardware access in cloud environments is often not possible or desirable.

3.1.3.8 Native production

Cloud providers often store data in proprietary systems that the clients do not control. Evidence is typically expected to be delivered in the form of PDF files, etc. Export from the cloud environment may be the only option, which may be challenging with respect to the chain of custody.

3.1.3.9 Authentication

Authentication here refers to forensic authentication of data admitted into evidence; the question is whether the document is what it purports to be. Giving guarantees on data authenticity can be hard, and a document should not inherently be considered more or less admissible because it was stored in the cloud.

3.1.3.10 Cooperation between provider and client in e-discovery

e-Discovery cooperation should preferably be regulated in contracts and be taken into account in service level agreements.

3.1.3.11 Response to a subpoena or search warrant

The cloud agreement should include provisions for notification of a subpoena to the client, and give the client time to try to fight the order.

3.2 Recommendations

The CSA guidance makes the following recommendations:

  • Cloud customers should understand relevant legal and regulatory frameworks, as well as contractual requirements and restrictions that apply to handling of their data, and the conduct of their operations in the cloud.
  • Cloud providers should clearly disclose policies, requirements and capabilities, including the terms and conditions that apply to the services they provide.
  • Cloud customers should perform due diligence prior to cloud vendor selection
  • Cloud customers should understand the legal implications of the location of physical operations and storage of the cloud provider
  • Cloud customers should select reasonable locations for data storage to make sure they comply with their own legal requirements
  • Cloud customers should evaluate and take e-discovery requests into account
  • Cloud customers should understand that click-through legal agreements to use a cloud service do not negate requirements for a provider to perform due diligence

CCSK Domain 2: Governance and Enterprise Risk Management

Governance and risk management principles remain the same, but there are changes to the risk picture as well as the available controls in the cloud. In particular, we need to take into account the following:

  • Cloud risk trade-offs and tools
  • Effects of service and deployment models
  • Risk management in the cloud
  • Tools of cloud governance

A key aspect to remember when deploying services or data to the cloud is that even if security controls are delegated to a third-party, the responsibility for corporate governance cannot be delegated; it remains within the cloud consumer organization.

Cloud providers aim to streamline and standardize their offerings as much as possible to achieve economies of scale. This is different from a dedicated third-party provider, where contractual terms can often be negotiated. Governance frameworks should therefore not treat cloud providers the same way as dedicated service providers that allow custom governance structures to be agreed on.

Responsibilities and mechanisms for governance are regulated in the contract. If a governance need is not described in the contract, there is a governance gap. This does not mean that the provider should be excluded outright, but it does mean that the consumer should consider how that governance gap can be closed.

Moving to the cloud transfers a lot of the governance and risk management from technical controls to contractual controls.

Cloud governance tools

The key tools of governance in the cloud are contracts, assessments and reporting.

Contracts are the primary tools for extending governance into a third party such as a cloud provider. For public clouds this typically means the terms and conditions of the provider. They are the guarantee of a given service level, and also describe requirements for governance support through audits.

Supplier assessments are important governance tools, especially during provider selection. Performing regular assessments can reveal whether changes to the cloud provider's offerings have changed the governance situation, in particular with regard to any governance gaps.

Compliance reporting includes audit reports. They may also include automatically generated compliance data in a dashboard, such as patch level status on software, or some other defined KPI. Audit reports may be internal reports but most often these are made by an accredited third party. Common compliance frameworks are provided by ISO 27017, ISO 38500, COBIT.

Risk management

Enterprise risk management (ERM) in the cloud is based on the shared responsibility model. The provider will take responsibility for certain risk controls, whereas the consumer is responsible for others. Where the split is depends on the service model.

The division of responsibilities should be clearly regulated in the contract. Lack of such regulation can lead to hidden implementation gaps, leaving services vulnerable to abuse.

Service models

IaaS mostly resembles traditional IT, as most controls remain under the direct management of the cloud consumer. Thus, policies and controls to a large degree remain under the control of the cloud consumer too. The primary change is the orchestration/management plane: managing the risk of the management plane becomes a core governance and risk management activity, essentially moving responsibilities from on-prem activities to the management plane.

SaaS providers vary greatly in competence and the tools offered for compliance management. It is often possible to negotiate custom contracts with smaller SaaS providers, whereas the more mature or bigger players will have more standardized contracts but also more tools appropriate to governance needs of the enterprise. The SaaS model can be less transparent than desired, and establishing an acceptable contract is important in order to have good control over governance and risk management.

Public cloud providers often allow for less negotiation than private cloud. Hybrid and community governance can easily become complicated because the opinions of several parties will have to be weighed against each other.

Risk trade-offs

Using cloud services will typically result in more trust put in third-parties and less direct access to security controls. Whether this increases or decreases the overall risk level depends on the threat model, as well as political risk.

The key issue is that governance is changed from internal policy and auditing to contracts and audit reports; it is a less hands-on approach and can result in lower transparency and trust in the governance model.

CSA recommendations

  • Identify the shared responsibilities. Use accepted standards to build a cloud governance framework.
  • Understand and manage how contracts affect risk and governance. Consider alternative controls if a contract leaves governance gaps and cannot be changed.
  • Develop a process with criteria for provider selection. Re-assessments should be regular, and preferably automated.
  • Align risks to risk tolerances per asset as different assets may have different tolerance levels.

#2cents

Let us start with the contract side: most cloud deployments will be in a public cloud, and our ability to negotiate custom contracts will be very limited, or non-existent. What we have to play with is the control options in the management plane.

The first thing we should take note of is not really cloud related: we need a regulatory compliance matrix in order to make sure our governance framework and risk management processes will actually help us achieve compliance and acceptable risk levels. One practical way to set up a regulatory compliance matrix is to map applicable regulations and governance requirements to the governance tools at our disposal, to see if the tools can help achieve compliance.

Regulatory source | Contractual impact        | Supplier assessments                    | Audits                            | Configuration management
GDPR              | Data processing agreement | Security requirements, GDPR compliance | Data processing activities audits | Data retention, backups, discoverability, encryption
Customer SLA      | SLA guarantees            |                                        | Uptime reporting                  |
ISO 27001         | Certifications            |                                        | Audit reports for certifications  | Extension of company policies to management plane

Based on the regulatory compliance matrix, a more detailed governance matrix can be developed based on applicable guidance. Then governance and risk management gaps can be identified, and closing plans created.

Traditionally cloud deployments have been seen as higher risk than on-premise deployments due to less hands-on risk controls. But for many organizations, using cloud services with proper monitoring will lead to better security, because their on-premise tools often lack sufficient security controls and logging. There are thus situations where a shift from hands-on to contractual controls is a good thing for security. One could probably claim that this is the case for most cloud consumers.

One aspect that is critical to security is planning of incident response. To some degree the ability to do incident response on cloud deployments depends on configurations set in the management plane, especially the use of logging and alerting functionality. It should also be clarified up front where the shared responsibility model puts the responsibility for performing incident response actions throughout all phases (preparation, identification, containment, eradication, recovery and lessons learned).

The best way to take cloud into account in risk management and governance is to make sure policies, procedures and standards cover cloud, and that cloud is not seen as an “add-on” to on-premise services. Only integrated governance systems will achieve transparency and managed regulatory compliance.

CCSK Domain 1: Cloud Computing Concepts and Architecture

Recently I participated in a one-day class on the contents required for the “Certificate of Cloud Security Knowledge” held by Peter HJ van Eijk in Trondheim as part of the conference Sikkerhet og Sårbarhet 2019 (translates from Norwegian to: Security and Vulnerability 2019). The one-day workshop was interesting and the instructor was good at creating interactive discussions – making it much better than the typical PowerPoint overdose of commercial professional training sessions. There is a certification exam that I have not yet taken, and I decided I should document my notes on my blog; perhaps others can find some use for them too.

The CCSK exam closely follows a document made by the Cloud Security Alliance (CSA) called “CSA Security Guidance for Critical Areas of Focus in Cloud Computing v4.0” – a document you can download for free from the CSA webpage. They also lean on ENISA’s “Cloud Computing Risk Assessment”, which is also a free download.

Cloud computing isn’t about who owns the compute resources (someone else’s computer) – it is about providing scale and cost benefits through rapid elasticity, self-service, shared resource pools and a shared security responsibility model.

The way I’ll do these blog posts is that I’ll first share my notes, and then give a quick comment on what the whole thing means from my point of view (which may not really be that relevant to the CCSK exam if you came here for a shortcut to that).

Introduction to D1 (Cloud Concepts and Architecture)

Domain 1 contains 4 sections:  

  • Defining cloud computing 
  • The cloud logical model 
  • Cloud conceptual, architectural and reference model 
  • Cloud security and compliance scope, responsibilities and models 

NIST definition of cloud computing: a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. 

A Cloud User is the person or organization requesting computational resources. The Cloud Provider is the person or organization offering the resources. 

Key techniques to create a cloud:  

  • Abstraction: we abstract resources from the underlying infrastructure to create resource pools  
  • Orchestration: coordination of delivering resources out of the pool on demand.  

Clouds are multitenant by nature. Consumers are segregated and isolated but share resource pools.  

Cloud computing models 

The CSA’s foundational model of cloud computing is the NIST model. A more in-depth model used as a reference model is taken from ISO/IEC. The guidance talks mostly about the NIST model and doesn’t dive into the ISO/IEC model, which is probably sufficient for most definition needs.

Cloud computing has 5 characteristics:

  1. Shared resource pool (compute resources in a pool that consumers can pull from)
  2. Rapid elasticity (can scale up and down quickly)
  3. Broad network access
  4. On-demand self-service (management plane, API’s)
  5. Measured service (pay-as-you-go)

Cloud computing has 3 service models

  • Software as a Service (SaaS): like Cybehave or Salesforce
  • Platform as a Service (PaaS): like WordPress or AWS Elastic Beanstalk
  • Infrastructure as a Service (IaaS): like VM’s running in Google Cloud

Cloud computing has 4 deployment models:

  • Public Cloud: pool shared by anyone
  • Private Cloud: pool shared within an organization
  • Hybrid Cloud: connection between two clouds, commonly used when an on-prem datacenter connects to a public cloud
  • Community Cloud: pool shared by a community, for example insurance companies that have formed some form of consortium

Models for discussing cloud security

The CSA document discusses multiple model types in a somewhat incoherent manner. The types of models it mentions can be categorized as follows:

  • Conceptual models: descriptions to explain concepts, such as the logical model from the CSA.  
  • Controls models: like CCM 
  • Reference architectures: templates for implementing security controls 
  • Design patterns: solutions to particular problems 

The document also outlines a simple cloud security process model 

  • Identify security and compliance requirements, and existing controls 
  • Select provider, service and deployment models 
  • Define the architecture 
  • Assess the security controls 
  • Identify control gaps 
  • Design and implement controls to fill gaps 
  • Manage changes over time 

The CSA logical model

This model describes 4 “layers” of a cloud environment and introduces some “funny words”:

  • Infrastructure: the core components in computing infrastructure, such as servers, storage and networks 
  • Metastructure: protocols and mechanisms providing connections between infrastructure and the other layers 
  • Infostructure: The data and information (database records, file storage, etc) 
  • Applistructure: The applications deployed in the cloud and the underlying applications used to build them. 

The key difference between traditional IT and cloud is the metastructure. Cloud metastructure contains the management plane components.  

Another key feature of cloud is that each layer tends to double. For example, infrastructure is managed by the cloud provider, but the cloud consumer will establish a virtual infrastructure that will also need to be managed (at least in the case of IaaS). 

Cloud security scope and responsibilities 

The responsibility for security domains maps to the access the different stakeholders have to each layer in the architecture stack.  

  • SaaS: the cloud provider is responsible for perimeter, logging, and application security, and the consumer may only have access to provision users and manage entitlements 
  • PaaS: the provider is typically responsible for platform security and the consumer is responsible for the security of the solutions deployed on the platform. Configuring the offered security features is often left to the consumer.  
  • IaaS: cloud provider is responsible for hypervisors, host OS, hardware and facilities, consumer for guest OS and up in the stack.  

The shared responsibility model leaves us with two focus areas:  

  • Cloud providers should clearly document internal security management and security controls available to consumers.  
  • Consumers should create a responsibility matrix to make sure controls are followed up by one of the parties 
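As a toy illustration of such a responsibility matrix, here is a minimal Python sketch. The control names and assignments below are illustrative examples only, not taken from the CCM or CAIQ:

```python
# Toy shared-responsibility matrix per service model.
# Control names and assignments are illustrative, not authoritative.
RESPONSIBILITY = {
    "IaaS": {
        "physical security": "provider",
        "hypervisor": "provider",
        "guest OS patching": "consumer",
        "application security": "consumer",
    },
    "PaaS": {
        "physical security": "provider",
        "platform patching": "provider",
        "application security": "consumer",
        "user entitlements": "consumer",
    },
    "SaaS": {
        "physical security": "provider",
        "application security": "provider",
        "user entitlements": "consumer",
    },
}

def responsible_party(service_model: str, control: str) -> str:
    """Look up which party is expected to follow up a given control."""
    return RESPONSIBILITY[service_model][control]

print(responsible_party("IaaS", "guest OS patching"))  # prints: consumer
```

In practice such a matrix would be derived from the contract and the provider’s documentation, and kept alongside the control framework in use.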

Two compliance tools exist from the CSA and are recommended for mapping security controls:  

  • The Consensus Assessment Initiative Questionnaire (CAIQ) 
  • The Cloud Controls Matrix (CCM) 

#2cents

This domain is introductory and provides some terminology for discussing cloud computing. The key aspects from a risk management point of view are:

  • Cloud creates new risks that need to be managed, especially as it introduces more companies involved in maintaining security of the full stack compared to a full in-house managed stack. Requirements, contracts and audits become important tools.
  • The NIST model is more or less universally used in cloud discussions in practice. The service models are known to most IT practitioners, at least on the operations side.
  • The CSA guidance correctly designates the “metastructure” as the new kid on the block. The practical incarnation of this is API’s and console access (e.g. gcloud at API level and Google Cloud Console on “management plane” level). From a security point of view this means that maintaining security of local control libraries becomes very important, as well as identity and access management for the control plane in general.

In addition to the “who does what” problem that can occur with a shared security model, the self-service and fast-scaling properties of cloud computing often lead to “new and shiny” being pushed faster than security is aware of. An often overlooked part of “pushing security left” is that we also need to push both knowledge and accountability together with the ability to access the management plane (or parts of it through API’s or the cloud management console).

How to reduce cybersecurity risks for stores, shops and small businesses

Crime in general is moving online, and with that the digital risks for all businesses are increasing, including for traditional physical stores – as well as eCommerce sites. This blog post is a quick summary of some risks that are growing quickly and what shop owners can do to better control them.

Top 10 Cybersecurity Risks

The following risks are faced by most organizations. For many stores selling physical goods these would be devastating today as they rely more and more on digital services.

How secure is your shop when you include the digital arena? Do you put your customers at risk?
  1. Point of sale malware leading to stolen credit cards
  2. Supply chain disruptions due to cybersecurity incidents
  3. Ransomware on computers used to manage and run stores
  4. Physical system manipulation through sensors and IoT, e.g. an adversary turning off the cooling in a grocery store’s refrigerators
  5. Website hacks
  6. Hacking of customer’s mobile devices due to insecure wireless network
  7. Intrusion into systems via insecure networks
  8. Unavailability of critical digital services due to cyber incidents (e.g. SaaS systems needed to operate the business)
  9. Lack of IT competence to help respond to incidents
  10. Compromised e-mail accounts and social media accounts used to run the business

Securing the shop

Shop owners have long been used to securing their stores against physical theft – using alarms, guards and locks. Here are 7 things all shop owners can do to also secure their businesses against cybersecurity events:

1 – Use only up-to-date IT equipment and software.

Outdated software can be exploited by malware. Keeping software up to date drastically reduces the risk of infection. If you have equipment that cannot be upgraded because it is too old, you should get rid of it. The rest should receive updates as quickly as possible when they are made available, preferably automatically.

2 – Create a security awareness program for employees.

No business is stronger than its weakest link – and that is true for security too. By teaching employees good cybersecurity habits, the risk of an employee downloading a dangerous attachment or accepting a criminal’s shady excuse for strange behavior will be much lower. A combination of on-site discussions and e-learning that can be consumed on mobile devices is an effective way to deliver this.

3 – Use the guest network only for guests.

Many stores, coffee shops and other businesses offer free wifi for their customers. Make sure you avoid connecting critical equipment to this network, as any vulnerabilities it has are exposed to everyone on the network. Things I’ve seen on networks like this include thermostats, cash registers and printers. Use a separate network for those important things, and do not let outsiders onto that network.

4 – Secure your website like your front door.

Businesses will usually have a web site, quite often with some form of sales and marketing integration – but even if you have nothing more than a static web page, you should take care of its security. If it is down you lose a few customers; if it is hacked and customers are tricked out of their credit card data, they will blame your shop, not the firm you bought the web design from. Require your web designers to maintain your site, keep it up to date, and follow best practices for web security. You should also consider running a security test of the web page at regular intervals.

5 – Prepare for times of trouble.

You should prepare for bad things to happen and have a plan in place for dealing with it. The basis for creating an incident response plan is a risk assessment that lists the potential threat scenarios. This will also help you come up with security measures that will make those scenarios less likely to occur.

6 – Create backups and test them!

The best medicine against losing data is having a recent backup and knowing how to restore your system. Make sure all critical data are backed up regularly. If you are using cloud software for critical functions such as customer relationship management (CRM) or accounting, check with your vendor what backup options they have. Ideally your backups should be stored in a location that does not depend on the same infrastructure as the software itself. For example, if Google runs your software, you can store your backups with Microsoft.

7 – Minimize the danger of hacked accounts.

The most common way a company gets hacked is through a compromised account. This very often happens because of phishing or password reuse. Phishing is the use of e-mails to trick users into giving up their passwords – for example by sending them to a fake login page that is controlled by the hacker. Three things you can do that will drastically reduce this risk are:

  • Tell everyone to use a password manager and ask them to use very long and complex passwords. They will no longer need to remember the passwords themselves, so this will not be a problem. Examples of such software include 1Password and LastPass.
  • Enforce two-factor authentication (2FA for short) wherever possible. 2FA is the use of a second factor in addition to your password, such as a code generated on your mobile, in order to log in.
  • Give everyone training on detection of social engineering scams as part of your awareness training program.
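As an aside on how those mobile 2FA codes typically work: most authenticator apps generate time-based one-time passwords (TOTP, RFC 6238), computed from a shared secret and the current time. A minimal Python sketch follows; the secret shown is the RFC 6238 test value, not a real one:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, period: int = 30) -> str:
    """Compute an RFC 6238 TOTP code (HMAC-SHA1 variant)."""
    # The moving factor is the number of elapsed time steps.
    counter = struct.pack(">Q", unix_time // period)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time 59.
print(totp(b"12345678901234567890", 59, digits=8))  # prints: 94287082
```

Because both sides only share a secret and a clock, the server can verify the code without any network round-trip to the phone.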

All of this may seem like quite a lot of work – but when it becomes a habit it will make your team more efficient, and will significantly reduce the cybersecurity threats for both you and your customers.

If you need tools for awareness training, risk management or just someone to talk to about security – take a look at the offerings from Cybehave – intelligent cloud software for better security.

Running an automated security audit using Burp Professional

Reading about hacking in the news can make it seem like anyone can just point a tool at any website and completely take it over. This is not really the case, as hacking, whether automated or manual, requires vulnerabilities.

A well-known tool for security professionals working with web applications is Burp from Portswigger. This is an excellent tool, and it comes in multiple editions: the free community edition, a nice proxy you can use to study HTTP requests and responses (and some other things); the professional edition, aimed at pentesting; and the enterprise edition, which is more for DevOps automation. In this little test we’ll take the Burp Professional tool and run it, using only default settings, against a target application I made last year. This app is a simple app for posting things on the internet, and was just a small project I did to learn how to use some of the AWS tools for deployment and monitoring. You find it in all its glory at https://www.woodscreaming.com.

After entering the URL http://www.woodscreaming.com and launching the attack, Burp first goes through a crawl and audit of the unauthenticated routes it can find (it basically clicks all the links it can find). Burp then registers a user and starts probing the authenticated routes, including posting those weird numerical posts.

Woodscreaming.com: note the weird numerical posts. These are telltale signs of automated security testing with random input generation.

What scanners like Burp are usually good at finding is obvious misconfigurations such as missing security headers, missing flags on cookies and so on. It did find some of these things on the woodscreaming.com page – but not many.
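A passive header check of this kind can be sketched in a few lines of Python. The header list and the sample response below are illustrative choices, not Burp’s actual checks:

```python
# Common security headers a passive scan typically looks for.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Content-Security-Policy",
]

def missing_security_headers(headers: dict) -> list:
    """Return the expected security headers absent from a response
    (header names are matched case-insensitively)."""
    present = {name.lower() for name in headers}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

# Made-up response for illustration.
sample_response = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000",
}
print(missing_security_headers(sample_response))
# prints: ['X-Content-Type-Options', 'X-Frame-Options', 'Content-Security-Policy']
```

Checks like this need no attack payloads at all, which is why scanners can report them from a single ordinary response.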

Waiting for security scanners can seem like it takes forever. Burp estimated some 25.000 days remaining after a while with the minimal http://www.woodscreaming.com page.

After running for a while, Burp estimated that the remaining scan time was something like 25.000 days. I don’t know why this happens (I have not seen it with other applications), but since a user can generate new URL paths simply by posting new content, a linear time estimate may easily diverge; that is just a wild guess at what was going on. Because of this we stopped the scan after some time, as it was unlikely to discover new vulnerabilities.

The underlying application is a traditional server-driven MVC application running Django. Burp works well with applications like this, and the default setup works better than it typically does for the single-page applications (SPA’s) that many web applications are today.

So, what did Burp find? Burp assigns a criticality to the vulnerabilities it finds. There were no “High” criticality vulns, but it reported some “Medium” ones.

Missing “Secure” flag on session cookies?

Burp reports 2 cookies that seem to be session cookies and that are missing the Secure flag. This means that these cookies would also be sent if the application were accessed over an insecure connection (http instead of https), making a man-in-the-middle able to steal the session or perform a cross-site request forgery (CSRF) attack. This is a real find, but the actual exposure is limited because the app is only served over https. It should nevertheless be fixed.

A side note on this: the cookies are set by the Django framework in its default state, with no configuration changes made. Hence, this is likely to be the case on many other Django sites as well.
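For reference, these flags can be turned on in Django’s settings. A hypothetical settings.py excerpt using Django’s standard security settings, assuming the site is served exclusively over HTTPS:

```python
# Hypothetical excerpt from a Django project's settings.py, hardening
# the cookie flags from the report. The setting names are Django's
# standard security settings; the values assume an HTTPS-only site.

SESSION_COOKIE_SECURE = True    # never send the session cookie over plain HTTP
CSRF_COOKIE_SECURE = True       # same for the csrftoken cookie
SESSION_COOKIE_HTTPONLY = True  # keep the session cookie away from JavaScript
SECURE_HSTS_SECONDS = 31536000  # enforce strict transport security for a year
```

With these in place, a scanner performing the same passive checks should no longer flag the session cookies.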

If we go to the “Low” category, there are several issues reported. These are typically harder to exploit, and will also be less likely to cause major breaches in terms of confidentiality, integrity and availability:

  • Client-side HTTP parameter pollution (reflected)
  • CSRF cookie without HTTPOnly flag set
  • Password field with autocomplete enabled
  • Strict transport security not enforced

The first one is perhaps the most interesting one.

HTTP parameter pollution: dangerous or not?

In this case the URL parameter reflected in an anchor tag’s href attribute is not interpreted by the application and thus cannot lead to bad things. It could, however, have been the case that GET parameters were interpreted in the backend, making it possible to trick a person into performing an unintended action in a request forgery attack. In our case we can say, as the jargon file directs us: “It is not a bug, it is a feature!”

So what about the “password field with autocomplete enabled”? This must be one of the most common alerts from auditing software today. This can lead to unintended disclosure of passwords and should be avoided. You’ll find the same on many well-known web pages – but that does not mean we shouldn’t try to avoid it. We’ll put it on the “fix list”.

Are automated tests useful?

Automated tests are useful but they are not the same as a full penetration test. They are good for:

  1. Basic configuration checks. This can typically be done entirely passively, no attack payloads needed.
  2. Identifying vulnerabilities. You will not find all of them, and you will get some false positives, but this is still useful.
  3. Learning about vulnerabilities: Burp has very good documentation and good explanations for the vulnerabilities it finds.

If you add a few manual checks to the automated setup, in particular giving it a site map before starting a scan and testing inputs with fuzzing (which can also be done using Burp), you can get a relatively thorough security test done with a single tool.
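Fuzzing at its simplest is just feeding semi-random input to every parameter and watching for errors. A minimal Python sketch of such input generation; the alphabet, lengths and seed below are arbitrary illustrative choices:

```python
import random
import string

def fuzz_inputs(count: int = 5, max_len: int = 12, seed: int = 42) -> list:
    """Generate short random strings mixing letters, digits and a few
    markup metacharacters that often trip up input handling."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    alphabet = string.ascii_letters + string.digits + "<>'\"&;%="
    return [
        "".join(rng.choice(alphabet) for _ in range(rng.randint(1, max_len)))
        for _ in range(count)
    ]

for payload in fuzz_inputs():
    print(payload)
```

A real fuzzer adds far more than this (mutation of valid inputs, coverage feedback, crash triage), but random payloads like these are enough to explain the weird numerical posts a scanner leaves behind.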

Defending against OSINT in reconnaissance?

Hackers, whether they are cyber criminals trying to trick you into clicking a ransomware download link, or whether they are nation state intelligence operatives planning to gain access to your infrastructure, can improve their odds massively through proper target reconnaissance prior to any form of offensive engagement. Learn how you can review your footprint and make your organization harder to hack.

https://cybehave.no

Cybehave has an interesting post on OSINT and footprinting, and what approach companies can take to reduce the risk from this type of attack surface mapping: https://cybehave.no/2019/03/05/digital-footprint-how-can-you-defend-against-osint/ (disclaimer: written by me and I own 25% of this company).

tl;dr – straight to the to-do list

  • Don’t publish information that has no business benefit and that will make you more vulnerable
  • Patch your vulnerabilities – both on the people and tech levels
  • Build a friendly environment for your people. Don’t let them struggle with issues alone.
  • Prepare for the worst (you can still hope for the best)