CCSK Domain 3: Legal and contractual issues

This is a relatively long post. Specific areas covered:

3.1 Overview

3.1.1 Legal frameworks governing data protection and privacy

Different jurisdictions can impose conflicting requirements, and sometimes requirements conflict even within the same jurisdiction. Legal requirements may vary according to:

  • Location of cloud provider
  • Location of cloud consumer
  • Location of data subject
  • Location of servers/datacenters
  • Legal jurisdiction of the contract between the parties, which may be different from the locations of those parties
  • Any international treaties between the countries where the parties are located

3.1.1.1 Common themes

Omnibus laws: same law applicable across all sectors

Sectoral laws: separate laws for specific sectors, such as health or finance

3.1.1.2 Required security measures

Legal requirements may include prescriptive or risk-based security measures.

3.1.1.3 Restrictions to cross-border data transfer

Transfer of data across borders can be prohibited. The most common situation is a prohibition on transferring personal data to countries that do not have “adequate data protection laws”. This is a common theme in the GDPR. Another example is data covered by national security legislation.

For personal data, transfers to inadequate locations may require specific legal instruments to be put in place in order for this to be considered compliant with the stricter region’s legal requirements.

3.1.1.4 Regional examples

Australia

  • Privacy act of 1988
  • Australian consumer law (ACL)

The Privacy Act contains 13 Australian Privacy Principles (APPs) that apply to all sectors, including non-profit organizations with an annual turnover of more than 3 million Australian dollars.

In 2017 the Australian Privacy Act was amended to require companies to notify affected Australian residents and the Australian Information Commissioner of breaches that can cause serious harm. A security breach must be reported if:

  1. There is unauthorized access or disclosure of personal information that can cause serious harm
  2. Personal information is lost in circumstances where disclosure is likely and could cause serious harm

The ACL protects consumers from fraudulent contracts and poor conduct from service providers, such as failed breach notifications. The Australian Privacy Act can apply to Australian customers/consumers even if the cloud provider is based elsewhere or other laws are stated in the service agreement.

China

China has introduced new legislation governing information systems over the last few years.

  • 2017: Cybersecurity Law, which applies to critical information infrastructure operators
  • May 2017: proposed measures on the security of cross-border transfers of personal information and important data, still under evaluation for implementation at the time CSA guidance v4 was issued

The 2017 cybersecurity law puts requirements on infrastructure operators to design systems with security in mind, put in place emergency response plans and give access and assistance to investigating authorities, for both national security purposes and criminal investigations.

The Chinese cybersecurity law also requires companies to inform users about known security defects and to report such defects to the authorities.

Regarding privacy, the cybersecurity law requires that personal information about Chinese citizens be stored inside mainland China.

The draft regulations on cross-border data transfer issued in 2017 go further than the cybersecurity law.

  • New security assessment requirements for companies that want to send data out of China
  • Expanding data localization requirements (the types of data that can only be stored inside China)

Japan

The relevant Japanese legislation is the Act on the Protection of Personal Information (APPI). There are also multiple sector-specific laws.

Beginning in 2017, amendments to the APPI require consent of the data subject for transfer of personal data to a third party. Consent is not required if the receiving party operates in a location with data protection laws considered adequate by the Personal Information Protection Commission.

EU: GDPR and e-Privacy

The GDPR came into force on 25 May 2018. The new ePrivacy Regulation, intended to replace the current ePrivacy Directive, has not yet come into force. TechRepublic has a short summary of the differences between the two (https://www.techrepublic.com/article/gdpr-vs-epPRrivacy-the-3-differences-you-need-to-know/):

  1. ePrivacy specifically covers electronic communications. It evolved from the 2002 ePrivacy Directive that focused primarily on email and SMS, whereas the new version will cover electronic communications in general, including data communication with IoT devices and the use of social media platforms. The ePrivacy rules will also cover metadata about private communications.
  2. ePrivacy includes non-personal data. The focus is on confidentiality of communications, which may also contain non-personal data and data related to a legal person.
  3. They have different legal bases. GDPR is based on Article 8 of the EU Charter of Fundamental Rights, whereas ePrivacy is based on Article 16 and Article 114 of the Treaty on the Functioning of the European Union, but also on Article 7 of the Charter of Fundamental Rights: “Everyone has the right to respect for his or her private and family life, home and communications.”

The CSA guidance gives a summary of GDPR requirements:

  • Data processors must keep records of processing
  • Data subject rights: data subjects have a right to information on how their data is being processed, the right to object to certain uses of their personal data, the right to have data corrected or deleted, to be compensated for damages suffered as a result of unlawful processing, and the right to data portability. These rights significantly affect cloud relationships and contracts.
  • Security breaches: breaches must be reported to authorities within 72 hours and data subjects must be notified if there is a risk of serious harm to the data subjects
  • There are country-specific variations in some interpretations. For example, Germany requires an organization to appoint a data protection officer if it has more than 9 employees.
  • Sanctions: authorities can impose fines of up to 4% of global annual revenue or 20 million EUR for serious violations, whichever amount is higher.

EU: Network and Information Security (NIS) Directive

The NIS Directive has been enforced since May 2018. The directive introduces a framework for ensuring confidentiality, integrity and availability of networks and information systems. It applies to critical infrastructure and essential societal and financial functions. The requirements include:

  • Take technical and organizational measures to secure networks and information systems
  • Take measures to prevent and minimize impact of incidents, and to facilitate business continuity during severe incidents
  • Notify relevant authorities without delay
  • Provide information necessary to assess the security of their networks and information systems
  • Provide evidence of effective implementation of security policies, such as a policy audit

The NIS Directive requires member states to impose security requirements on online marketplaces, cloud computing service providers and online search engines. Digital service providers based outside the EU but supplying services within the EU are also in scope of the directive.

Note: parts of these requirements, in particular for critical infrastructure, are covered by various national security laws. The scope of the NIS Directive is broader than national security and typically requires the introduction of new legislation. This work is not yet complete across the EU/EEA area. Digital Europe has an implementation tracker site set up here: https://www.digitaleurope.org/resources/nis-implementation-tracker/.

Central and South America

Data protection laws are coming into force in Central and South American countries. They include security requirements and the need for a data custodian.

North America: United States

The US has a sectoral approach to legislation, with hundreds of federal, state and local regulations. Organizations doing business in the United States, or that collect or process data on US residents, are often subject to multiple laws, and identifying the applicable regulatory matrix can be challenging for both cloud consumers and providers.

Federal law

  • The Gramm-Leach-Bliley Act (GLBA)
  • The Health Insurance Portability and Accountability Act, 1996 (known as HIPAA)
  • The Children’s Online Privacy Protection Act of 1998 (COPPA)

Most of these laws require companies to take precautions when hiring subcontractors and service providers. They may also hold organizations responsible for the acts of subcontractors.

US State Law

In addition to federal regulations, most US states have laws relating to data privacy and security. These laws apply to any entity that collects or processes information on residents of that state, regardless of where the data is stored (the CSA guidance says regardless of where within the United States, but it is likely that they would apply to international storage as well in this case).

Security breach disclosure requirements

Breach disclosure requirements are found in multiple regulations. Most require informing data subjects.

Knowledge of these laws is important for both cloud consumers and providers, especially to manage the risk of class action lawsuits.

In addition to the state laws and regulations, there is the “common law of privacy and security”, a nickname given to a body of consent orders published by federal and state government agencies based on investigations into security incidents.

The FTC (Federal Trade Commission) in particular has, for almost 20 years, had the power to conduct enforcement actions against companies whose privacy and security practices are inconsistent with claims made in public disclosures, making those practices “unfair and deceptive”. For cloud computing this means that when a certain way of working changes, the public documentation of the system needs to be updated to make sure actions are not in breach of Section 5 of the FTC Act.

3.1.2 Contracts and Provider Selection

In addition to legal requirements, cloud consumers may have contractual obligations to protect the personal data of their own clients, contacts or employees, such as securing the data and avoiding processing other than what has been agreed. Key documents are typically the Terms and Conditions and Privacy Policy posted on company websites.

When data or operations are transferred to a cloud, the responsibility for the data typically remains with the collector. There may be sharing of responsibilities when the cloud provider is performing some of the operations. This also depends on the service model of the cloud provider. In any case a data processing agreement or similar contractual instrument should be put in place to regulate activities, uses and responsibilities.

3.1.2.1 Internal due diligence

Prior to using a cloud service both parties (cloud provider and consumer) should identify legal requirements and compliance barriers.

Cloud consumers should investigate whether they have entered into any confidentiality agreements or data use agreements that could limit the use of a cloud service. In such cases, consent from the client needs to be in place before transferring data to a cloud environment.

3.1.2.3 External due diligence

Before entering into a contract, a review of the other party’s operations should be done. For evaluating a cloud service, this will typically include a look at the applicable service level, end-user and legal agreements, security policies, security disclosures and compliance proof (typically an audit report).

3.1.2.4 Contract negotiations

Cloud contracts are often standardized. An important aspect is the regulation of shared responsibilities. Contracts should be reviewed carefully even when they are presented as “not up for negotiation”. When certain contractual requirements cannot be included, the customer should evaluate whether other risk mitigation techniques can be used.

3.1.2.5 Reliance on third-party audits and attestations

Audit reports could and should be used in security assessments. The scope of the audit should be considered when used in place of a direct audit.

3.1.3 Electronic discovery

In US law, discovery is the process by which an opposing party obtains private documents for use in litigation. Discovery does not have to be limited to documents known to be admissible as evidence in court from the outset. Discovery applies to all documents reasonably held to be admissible as evidence (relevant and probative). See federal rules on civil procedure: https://www.federalrulesofcivilprocedure.org/frcp/title-v-disclosures-and-discovery/rule-26-duty-to-disclose-general-provisions-governing-discovery/.

There have been many examples of litigants who deleted or lost evidence and as a result lost the case and were ordered to pay damages to the opposing party. Because of this it is necessary that cloud providers and consumers plan for how to identify and extract all documents relevant to a case.

3.1.3.1 Possession, custody and control

In most US jurisdictions, the obligation to produce relevant information in court is limited to data within a party’s possession, custody or control. Using a cloud provider for storage does not remove this obligation. Some data may not be under the control of the consumer (disaster recovery, metadata), and such data can be relevant to litigation. The responsibility of a cloud provider to produce such data remains unclear, especially in cross-border/international cases.

Recent cases of interest:

  • Norwegian police against Tidal regarding streaming fraud
  • FBI against Microsoft (the Microsoft Ireland case, concerning data stored in an Irish datacenter)

3.1.3.2 Relevant cloud applications and environment

In some cases, a cloud application or environment itself could be relevant to resolving a dispute. In such circumstances the artefact is likely to be outside the control of the client and require a discovery process to be served on the cloud provider directly, where such action is enforceable.

3.1.3.3 Searchability and e-discovery tools

Discovery may not be possible using the same tools as in traditional IT environments. Cloud providers sometimes provide search functionality; otherwise such access may need to be secured through a negotiated cloud agreement.

3.1.3.4 Preservation

Preservation is the avoidance of destruction of data relevant to a litigation, or that is likely to be relevant to a litigation in the future. There are similar laws on this in the US, Europe, Japan, South Korea and Singapore.

3.1.3.5 Data retention laws and record keeping obligations

Data retention requirements exist for various types of data. Privacy laws put restrictions on retention. In the case of conflicting requirements on the same data, this should be resolved through guidance and case law. Storage requirements should be weighed against SLA requirements and costs when using cloud storage.

  • Scope of preservation: a requesting party is only entitled to cloud-hosted data that is relevant to the legal issue at hand. Lack of granular identifiability can lead to a requirement to over-preserve and over-share data.
  • Dynamic and shared storage: the burden of preserving data in the cloud can be manageable if the client has space to hold it in place, the data is static, and the number of people with access is limited. Because of the elastic nature of cloud environments this is seldom the case in practice, and it may be necessary to work with the cloud provider on a plan for data preservation.
  • Reasonable integrity: when subject to a discovery process, reasonable steps should be taken to secure the integrity of data collection (completeness, accuracy)
  • Limits to accessibility: a cloud customer may not be able to access all relevant data in the cloud. The cloud consumer and provider may have to review the relevance of the request before taking further steps to acquire the data.

3.1.3.7 Direct access

Outside cloud environments it is not common to give the requesting party direct access to an IT environment. Direct hardware access in cloud environments is often not possible or desirable.

3.1.3.8 Native production

Cloud providers often store data in proprietary systems that the clients do not control. Evidence is typically expected to be delivered in the form of PDF files, etc. Export from the cloud environment may be the only option, which may be challenging with respect to the chain of custody.

3.1.3.9 Authentication

Authentication here refers to forensic authentication of data admitted into evidence; the question is whether the document is what it purports to be. Giving guarantees on data authenticity can be hard, and a document should not inherently be considered more or less admissible because it is stored in the cloud.

3.1.3.10 Cooperation between provider and client in e-discovery

e-Discovery cooperation should preferably be regulated in contracts and be taken into account in service level agreements.

3.1.3.11 Response to a subpoena or search warrant

The cloud agreement should include provisions for notification of a subpoena to the client, and give the client time to try to fight the order.

3.2 Recommendations

The CSA guidance makes the following recommendations:

  • Cloud customers should understand relevant legal and regulatory frameworks, as well as contractual requirements and restrictions that apply to handling of their data, and the conduct of their operations in the cloud.
  • Cloud providers should clearly disclose policies, requirements and capabilities, including its terms and conditions that apply to the services they provide.
  • Cloud customers should perform due diligence prior to cloud vendor selection
  • Cloud customers should understand the legal implications of the location of physical operations and storage of the cloud provider
  • Cloud customers should select reasonable locations for data storage to make sure they comply with their own legal requirements
  • Cloud customers should evaluate and take e-discovery requests into account
  • Cloud customers should understand that click-through legal agreements to use a cloud service do not negate requirements for a provider to perform due diligence

CCSK Domain 2: Governance and Enterprise Risk Management

Governance and risk management principles remain the same, but there are changes to the risk picture as well as to the available controls in the cloud. We in particular need to take the following into account:

  • Cloud risk trade-offs and tools
  • Effects of service and deployment models
  • Risk management in the cloud
  • Tools of cloud governance

A key aspect to remember when deploying services or data to the cloud is that even if security controls are delegated to a third-party, the responsibility for corporate governance cannot be delegated; it remains within the cloud consumer organization.

Cloud providers aim to streamline and standardize their offerings as much as possible to achieve economies of scale. This is different from a dedicated third-party provider, where contractual terms can often be negotiated. This means that governance frameworks should not treat cloud providers with the same approach as dedicated service providers, which allow custom governance structures to be agreed on.

Responsibilities and mechanisms for governance are regulated in the contract. If a governance need is not described in the contract, there is a governance gap. This does not mean that the provider should be excluded outright, but it does mean that the consumer should consider how that governance gap can be closed.

Moving to the cloud transfers a lot of the governance and risk management from technical controls to contractual controls.

Cloud governance tools

The key tools of governance in the cloud are contracts, assessments and reporting.

Contracts are the primary tool for extending governance to a third party such as a cloud provider. For public clouds this would typically mean the terms and conditions of the provider. They are the guarantee of a given service level, and also describe requirements for governance support through audits.

Supplier assessments are important governance tools, especially during provider selection. Performing regular assessments can reveal whether changes to the cloud provider’s offerings have altered the governance situation, in particular with regard to any governance gaps.

Compliance reporting includes audit reports. They may also include automatically generated compliance data in a dashboard, such as patch level status on software, or some other defined KPI. Audit reports may be internal reports but most often these are made by an accredited third party. Common compliance frameworks are provided by ISO 27017, ISO 38500, COBIT.

Risk management

Enterprise risk management (ERM) in the cloud is based on the shared responsibility model. The provider will take responsibility for certain risk controls, whereas the consumer is responsible for others. Where the split is depends on the service model.

The division of responsibilities should be clearly regulated in the contract. Lack of such regulation can lead to hidden implementation gaps, leaving services vulnerable to abuse.

Service models

IaaS mostly resembles traditional IT as most controls remain under direct management of the cloud consumer. Thus, policies and controls do to a large degree remain under control of the cloud consumer too. There is one primary change and that is the orchestration/management plane. Managing the risk of the management plane becomes a core governance and risk management activity – basically moving responsibilities from on-prem activities to the management plane.

SaaS providers vary greatly in competence and the tools offered for compliance management. It is often possible to negotiate custom contracts with smaller SaaS providers, whereas the more mature or bigger players will have more standardized contracts but also more tools appropriate to governance needs of the enterprise. The SaaS model can be less transparent than desired, and establishing an acceptable contract is important in order to have good control over governance and risk management.

Public cloud providers often allow for less negotiation than private cloud. Hybrid and community governance can easily become complicated because the opinions of several parties will have to be weighed against each other.

Risk trade-offs

Using cloud services will typically result in more trust put in third-parties and less direct access to security controls. Whether this increases or decreases the overall risk level depends on the threat model, as well as political risk.

The key issue is that governance is changed from internal policy and auditing to contracts and audit reports; it is a less hands-on approach and can result in lower transparency and trust in the governance model.

CSA recommendations

  • Identify the shared responsibilities. Use accepted standards to build a cloud governance framework.
  • Understand and manage how contracts affect risk and governance. Consider alternative controls if a contract leaves governance gaps and cannot be changed.
  • Develop a process with criteria for provider selection. Re-assessments should be regular, and preferably automated.
  • Align risks to risk tolerances per asset as different assets may have different tolerance levels.

#2cents

Let us start with the contract side: most cloud deployments will be in a public cloud, and our ability to negotiate custom contracts will be very limited, or non-existent. What we have to play with are the control options in the management plane.

The first thing we should take note of is not really cloud-specific. We need a regulatory compliance matrix in order to make sure our governance framework and risk management processes actually help us achieve compliance and acceptable risk levels. One practical way to set up a regulatory compliance matrix is to map applicable regulations and governance requirements to the governance tools we have at our disposal, to see if those tools can help achieve compliance.

Regulatory source: GDPR
  • Contractual impact: Data processing agreement
  • Supplier assessments: Security requirements, GDPR compliance
  • Audits: Data processing activities audits
  • Configuration management: Data retention, backups, discoverability, encryption

Regulatory source: Customer SLA
  • Contractual impact: SLA guarantees
  • Audits: Uptime reporting

Regulatory source: ISO 27001
  • Contractual impact: Certifications
  • Audits: Audit reports for certifications
  • Configuration management: Extension of company policies to the management plane

Based on the regulatory compliance matrix, a more detailed governance matrix can be developed based on applicable guidance. Then governance and risk management gaps can be identified, and closing plans created.
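
To make the gap analysis concrete, here is a minimal sketch of how such a compliance matrix could be captured as data and checked programmatically. The tool categories and mappings mirror the matrix above; all names are illustrative and the structure is just one possible choice.

# Sketch: represent the regulatory compliance matrix as data and flag
# governance gaps where a regulation has no mapping for a governance tool.
GOVERNANCE_TOOLS = [
    "contractual impact",
    "supplier assessments",
    "audits",
    "configuration management",
]

compliance_matrix = {
    "GDPR": {
        "contractual impact": "Data processing agreement",
        "supplier assessments": "Security requirements, GDPR compliance",
        "audits": "Data processing activities audits",
        "configuration management": "Data retention, backups, discoverability, encryption",
    },
    "Customer SLA": {
        "contractual impact": "SLA guarantees",
        "audits": "Uptime reporting",
    },
    "ISO 27001": {
        "contractual impact": "Certifications",
        "audits": "Audit reports for certifications",
        "configuration management": "Extension of company policies to the management plane",
    },
}


def governance_gaps(matrix):
    """Return (regulation, tool) pairs with no governance mechanism mapped."""
    return [
        (regulation, tool)
        for regulation, coverage in matrix.items()
        for tool in GOVERNANCE_TOOLS
        if tool not in coverage
    ]


for regulation, tool in governance_gaps(compliance_matrix):
    print(f"Governance gap: {regulation} has no mapping for {tool}")

Each gap found this way is a candidate for a closing plan, or for an alternative risk mitigation if the underlying contract cannot be changed.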

Traditionally, cloud deployments have been seen as higher risk than on-premise deployments due to less hands-on risk controls. For many organizations, however, the use of cloud services with proper monitoring will lead to better security, because many organizations have insufficient security controls and logging in their on-premise environments. There are thus situations where a shift from hands-on to contractual controls is a good thing for security. One could probably claim that this is the case for most cloud consumers.

One aspect that is critical to security is planning of incident response. To some degree the ability to do incident response on cloud deployments depends on configurations set in the management plane, especially the use of logging and alerting functionality. It should also be clarified up front where the shared responsibility model puts the responsibility for performing incident response actions throughout all phases (preparation, identification, containment, eradication, recovery and lessons learned).
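
As an illustration of what management-plane configuration for incident response can mean in practice, here is a hedged sketch assuming an AWS deployment and the boto3 SDK: it turns on CloudTrail logging of management-plane API activity, which responders typically rely on during identification and containment. The trail and bucket names are made up.

import boto3

# CloudTrail records API calls made against the cloud management plane.
cloudtrail = boto3.client("cloudtrail")

# Create a trail covering all regions, delivering logs to an existing S3 bucket
# (trail and bucket names are illustrative).
cloudtrail.create_trail(
    Name="org-management-plane-trail",
    S3BucketName="org-cloudtrail-logs",
    IsMultiRegionTrail=True,
)

# Start delivering log files for the trail.
cloudtrail.start_logging(Name="org-management-plane-trail")

Equivalent logging and alerting options exist in the other major clouds; the point is that they have to be switched on, and owned by someone, before an incident happens.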

The best way to take cloud into account in risk management and governance is to make sure policies, procedures and standards cover cloud, and that cloud is not seen as an “add-on” to on-premise services. Only integrated governance systems will achieve transparency and managed regulatory compliance.

How to reduce cybersecurity risks for stores, shops and small businesses

Crime in general is moving online, and with that the digital risks for all businesses are increasing, including for traditional physical stores – as well as eCommerce sites. This blog post is a quick summary of some risks that are growing quickly and what shop owners can do to better control them.

Top 10 Cybersecurity Risks

The following risks are faced by most organizations. For many stores selling physical goods these would be devastating today as they rely more and more on digital services.

How secure is your shop when you include the digital arena? Do you put your customers at risk?
  1. Point of sale malware leading to stolen credit cards
  2. Supply chain disruptions due to cybersecurity incidents
  3. Ransomware on computers used to manage and run stores
  4. Physical system manipulation through sensors and IoT, e.g. an adversary turning off the cooling in a grocery store’s refrigerators
  5. Website hacks
  6. Hacking of customers’ mobile devices due to an insecure wireless network
  7. Intrusion into systems via insecure networks
  8. Unavailability of critical digital services due to cyber incidents (e.g. SaaS systems needed to operate the business)
  9. Lack of IT competence to help respond to incidents
  10. Compromised e-mail accounts and social media accounts used to run the business

Securing the shop

Shop owners have long been used to securing their stores against physical theft – using alarms, guards and locks. Here are 7 things all shop owners can do to also secure their businesses against cybersecurity events:

1 – Use only up-to-date IT equipment and software.

Outdated software can be exploited by malware. Keeping software up to date drastically reduces the risk of infection. If you have equipment that cannot be upgraded because it is too old, you should get rid of it. The rest should receive updates as quickly as possible when they are made available, preferably automatically.

2 – Create a security awareness program for employees.

No business is stronger than its weakest link – and that is true for security too. By teaching employees good cybersecurity habits the risk of an employee downloading a dangerous attachment or accepting a shady excuse for weird behavior from a criminal will be much lower. A combination of on-site discussions and e-learning that can be consumed on mobile devices can be effective for delivering this.

3 – Use the guest network only for guests.

Many stores, coffee shops and other businesses offer free wifi for their customers. Make sure you avoid connecting critical equipment to this network as vulnerabilities can be exposed. Things I’ve seen on networks like this include thermostats, cash registers and printers. Use a separate network for those important things, and do not let outsiders onto that network.

4 – Secure your website like your front door.

Businesses will usually have a website, quite often with some form of sales and marketing integration – but even if you have nothing more than a fairly static web page, you should take care of its security. If it is down you lose a few customers; if it is hacked and customers are tricked out of their credit card data, they will blame your shop, not the firm you bought the web design from. Make sure you require web designers to maintain and keep your site up to date, and that they follow best practices for web security. You should also consider running a security test of the web page at regular intervals.

5 – Prepare for times of trouble.

You should prepare for bad things to happen and have a plan in place for dealing with it. The basis for creating an incident response plan is a risk assessment that lists the potential threat scenarios. This will also help you come up with security measures that will make those scenarios less likely to occur.

6 – Create backups and test them!

The best medicine against losing data is having a recent backup and knowing how to restore your system. Make sure all critical data are backed up regularly. If you are using cloud software for critical functions such as customer relationship management (CRM) or accounting, check with your vendor what backup options they have. Ideally your backups should be stored in a location that does not depend on the same infrastructure as the software itself. For example, if Google runs your software you can store your backups with Microsoft.

7 – Minimize the danger of hacked accounts.

The most common way a company gets hacked is through a compromised account. This very often happens because of phishing or password reuse. Phishing is the use of e-mails to trick users into giving up their passwords – for example by sending them to a fake login page controlled by the hacker. Three things you can do that will reduce this risk by 99% are:

  • Tell everyone to use a password manager and ask them to use very long and complex passwords. They will no longer need to remember the passwords themselves, so this will not be a problem. Examples of such software include 1Password and LastPass.
  • Enforce two-factor authentication (2FA for short) wherever possible. 2FA is the use of a second factor in addition to your password, such as a code generated on your mobile, in order to log in.
  • Give everyone training on detection of social engineering scams as part of your awareness training program.

All of this may seem like quite a lot of work – but when it becomes a habit it will make your team more efficient, and will significantly reduce the cybersecurity threats for both you and your customers.

If you need tools for awareness training, risk management or just someone to talk to about security – take a look at the offerings from Cybehave – intelligent cloud software for better security.

Running an automated security audit using Burp Professional

Reading about hacking in the news can make it seem like anyone can just point a tool at any website and completely take it over. This is not really the case, as hacking, whether automated or manual, requires vulnerabilities.

A well-known tool for security professionals working with web applications is Burp from Portswigger. This is an excellent tool, and it comes in multiple editions: the free community edition, which is a nice proxy you can use to study HTTP requests and responses (and some other things), the professional edition aimed at pentesting, and the enterprise edition which is more for DevOps automation. In this little test we’ll take the Burp Professional tool and run it, using only default settings, against a target application I made last year. The app is a simple app for posting things on the internet, and was just a small project I did to learn how to use some of the AWS tools for deployment and monitoring. You find it in all its glory at https://www.woodscreaming.com.

After entering the URL http://www.woodscreaming.com and launching the attack, Burp first goes through a crawl and audit of the unauthenticated routes it can find (it basically clicks all the links it can find). Burp then registers a user and starts probing the authenticated routes, including posting those weird numerical posts.

Woodscreaming.com: note the weird numerical posts. These are telltale signs of automated security testing with random input generation.

What scanners like Burp are usually good at finding, is obvious misconfigurations such as missing security headers, flags on cookies and so on. It did find some of these things in the woodscreaming.com page – but not many.

Waiting for security scanners can seem like it takes forever. Burp estimated some 25,000 days remaining after a while with the minimal http://www.woodscreaming.com page.

After running for a while, Burp estimated that the remaining scan time was something like 25,000 days. I don’t know why this is the case (I have not seen this in other applications), but since a user can generate new URL paths simply by posting new content, a linear time estimate may easily diverge; that is a wild guess at what was going on. Because of this we just stopped the scan after some time, as it was unlikely to discover new vulnerabilities.

The underlying application is a traditional server-driven MVC application running Django. Burp works well with applications like this, and the default setup works better than it typically does for the single page applications (SPAs) that many web applications are today.

So, what did Burp find? Burp assigns a criticality to the vulnerabilities it finds. There were no “High” criticality vulns, but it reported some “Medium” ones.

Missing “Secure” flag on session cookies?

Burp reports 2 cookies that seem to be session cookies and are missing the Secure flag. This means that these cookies would also be set if the application were accessed over an insecure connection (http instead of https), making a man-in-the-middle able to steal the session or perform a cross-site request forgery (CSRF) attack. This is a real find, but the actual exposure is limited because the app is only served over https. It should nevertheless be fixed.

A side note on this: the cookies are set by the Django framework in its default state, with no configuration changes made. Hence, this is likely to be the case on many other Django sites as well.
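
If we put those cookies on the fix list, a minimal sketch of the relevant Django settings (assuming the site is served exclusively over HTTPS) looks like this; both flags default to False in Django:

# settings.py – only send session and CSRF cookies over HTTPS
SESSION_COOKIE_SECURE = True  # session cookie gets the Secure flag
CSRF_COOKIE_SECURE = True     # CSRF cookie gets the Secure flag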

If we go to the “Low” category, there are several issues reported. These are typically harder to exploit, and will also be less likely to cause major breaches in terms of confidentiality, integrity and availability:

  • Client-side HTTP parameter pollution (reflected)
  • CSRF cookie without HTTPOnly flag set
  • Password field with autocomplete enabled
  • Strict transport security not enforced

The first one is perhaps the most interesting one.

HTTP parameter pollution: dangerous or not?

In this case the URL parameter reflected in an anchor tag’s href attribute is not interpreted by the application and thus cannot lead to bad things – but it could have been the case that GET parameters were interpreted in the backend, making it possible to have a person perform an unintended action in a request forgery attack. But in our case we say, as the jargon file directs us: “It is not a bug, it is a feature”!

So what about the “password field with autocomplete enabled”? This must be one of the most common alerts from auditing software today. This can lead to unintended disclosure of passwords and should be avoided. You’ll find the same on many well-known web pages – but that does not mean we shouldn’t try to avoid it. We’ll put it on the “fix list”.
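
A hedged sketch of that fix in a Django form (the form and field names are made up for illustration): the widget attribute asks browsers not to autocomplete the password field.

from django import forms


class LoginForm(forms.Form):
    username = forms.CharField()
    # Ask the browser not to store/autofill this field
    password = forms.CharField(
        widget=forms.PasswordInput(attrs={"autocomplete": "off"})
    )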

Are automated tests useful?

Automated tests are useful but they are not the same as a full penetration test. They are good for:

  1. Basic configuration checks. This can typically be done entirely passively, no attack payloads needed.
  2. Identifying vulnerabilities. You will not find all, and you will get some false positives but this is useful.
  3. Learning about vulnerabilities: Burp has very good documentation and good explanations for the vulnerabilities it finds.

If you add a few manual checks to the automated setup, in particular giving it a site map before starting a scan and testing inputs with fuzzing (which can also be done using Burp), you can get a relatively thorough security test done with a single tool.

Defending against OSINT in reconnaissance?

Hackers, whether they are cyber criminals trying to trick you into clicking a ransomware download link, or whether they are nation state intelligence operatives planning to gain access to your infrastructure, can improve their odds massively through proper target reconnaissance prior to any form of offensive engagement. Learn how you can review your footprint and make your organization harder to hack.

https://cybehave.no

Cybehave has an interesting post on OSINT and footprinting, and what approach companies can take to reduce the risk from this type of attack surface mapping: https://cybehave.no/2019/03/05/digital-footprint-how-can-you-defend-against-osint/ (disclaimer: written by me and I own 25% of this company).

tl;dr – straight to the to-do list

  • Don’t publish information with no business benefit and that will make you more vulnerable
  • Patch your vulnerabilities – both on the people and tech levels
  • Build a friendly environment for your people. Don’t let them struggle with issues alone.
  • Prepare for the worst (you can still hope for the best)

Storing seeds for multifactor authentication tokens

When setting up an application to use two-factor authentication, for example with Google Authenticator, each user will have a unique seed value for the authenticator. The identity server requires knowledge of the seed to verify the token, meaning you will have to store it and retrieve it somehow. This means that if an attacker gets access to the storage solution that links OTP secret seeds to user IDs (e.g. usernames), the protocol is broken. So, trying to think up some options for securing the secrets: we cannot hash and salt them, because that breaks the OTP authentication flow. We are hence left with encrypting the seed before storing it.

The most practical approach seems to be symmetric encryption; the question is what to use as the key. Here are some approaches I’ve seen people discuss that all seem bad:

  • User password: if you can phish the password, then you can also generate the OTP, provided you know which algorithm/library is used
  • A static application secret: should be safe provided that secret is never leaked, but using a static secret means that if it is compromised, all users are compromised. Still better than the user password, though.
  • Using non-static user-level metadata to create a unique key for each user that is not vulnerable to phishing or guessing; such metadata is, however, typically visible to admins.

The authentication flow would then look something like this:

  1. Get username/password
  2. Verify username/password
  3. Get OTP seed (encrypted)
  4. Get metadata and reconstruct encryption key
  5. Verify OTP
  6. Authenticate user and store timestamp and other auth metadata
  7. Construct new encryption key
  8. Encrypt seed
  9. Store in database

The question is what metadata to use. We need the following properties to be true:

  • Not possible for a third party to guess, even if we disclose what metadata is used
  • Not possible to reconstruct for an administrator with access to the account
  • Not possible to phish or obtain through social engineering or client side attacks

There are many possibilities but here is one possible solution that would satisfy all the above requirements:

Key = Password (Not available to admins) + Timestamp for last login (not guessable/phishable)
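
Here is a minimal sketch of that idea in Python, assuming the cryptography package; all names are illustrative, and a random per-user salt is assumed to be stored alongside the encrypted seed.

import base64

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(password: str, last_login_iso: str, salt: bytes) -> bytes:
    """Derive a Fernet key from the user's password + last login timestamp."""
    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(),
        length=32,
        salt=salt,
        iterations=200_000,
    )
    return base64.urlsafe_b64encode(kdf.derive((password + last_login_iso).encode()))


def encrypt_seed(otp_seed: str, key: bytes) -> bytes:
    """Encrypt the OTP seed before storing it in the database."""
    return Fernet(key).encrypt(otp_seed.encode())


def decrypt_seed(ciphertext: bytes, key: bytes) -> str:
    """Decrypt the stored OTP seed so the token can be verified."""
    return Fernet(key).decrypt(ciphertext).decode()

During login the seed is decrypted with the key derived from the previous login timestamp, the OTP is verified, and the seed is then re-encrypted with a key derived from the new timestamp before being stored again.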

Combining VueJS and Django to build forms with custom widgets

This post is brief and explains a pattern that may be dangerous but still is very handy for combining VueJS with Django templates for dynamic forms. Here’s the case: I need to build a form for sending out some messages. One of the form widgets is a <select> tag where each <option> is a model instance from Django. The widget will then show the name of that model instance in the UI, but this does not provide enough context to be useful; we also need some description text. There are basically two options for how to handle this:

  1. Use the form “as-is” but provide the extra context in the UI by pulling some extra information and building an information box in the UI.
  2. Create a custom widget, and bind it to the Django model form using a hidden field.

Both are probably equally good, but I went with the second option. So here’s what I did:

  1. Build a normal Django model form, but change the widget for the field in question to type “HiddenInput” in the form.py file.
  2. Build a selector widget using VueJS that allows the user to get the desired content and review the various options with full context (including images and videos, things you can’t put inside a dropdown list). We bind the selected choice to frontend data using the v-model directive in VueJS.
  3. Set the hidden field’s value based on the data value stored in the frontend through that binding
  4. Process the form as you normally would with a Django model form.

The form definition remains very simple. Here’s the relevant class from this example:

from django import forms

from .models import Campaign  # assuming the Campaign model lives in the app's models.py


class MailForm(forms.ModelForm):

    class Meta:
        model = Campaign
        fields = ('name', 'to', 'elearning',)
        widgets = {
            # Hidden input whose value is bound to the Vue data item 'module.pk'
            'elearning': forms.HiddenInput(attrs={':value': 'module.pk'})
        }

The selector widget can take any form you could desire. The point in this project was to show some more context for the “eLearning” model. The user here gets notification about enrollment in an eLearning module by e-mail. The administrator setting up the program needs to get a bit of context about that e-learning, such as the name of the module, a description of its content, and perhaps a preview of a video or other multimedia. Below is an example of a simple widget of this type. The course administrator can here browse through the various options by clicking next, and the e-mail form is automatically updated.

Of course, to do that binding we need a bit of JavaScript in the Django template. We need to perform the following tasks to make our custom widget work:

  1. Fetch information about all the options from the server. We need to create an API endpoint for this that can deliver JSON data to the frontend (see the sketch after this list).
  2. Set the data item bound to the Django form based on the user’s current selection
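
A hypothetical sketch of such an endpoint, assuming the eLearning options live in an ELearning model (model and field names are made up for illustration):

from django.http import JsonResponse

from .models import ELearning  # illustrative model name


def elearning_options(request):
    """Return the available eLearning modules as JSON for the Vue widget."""
    modules = [
        {"pk": m.pk, "name": m.name, "description": m.description}
        for m in ELearning.objects.all()
    ]
    return JsonResponse({"modules": modules})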

Now the form can be submitted and processed using the normal Django framework patterns – but with a much more context-rich selection widget than a simple dropdown list.

Is it safe to do this?

Combining frontend and server-side rendering with different templates for HTML rendering can be dangerous. See this excellent write-up on XSS vulnerabilities that can be the result from such combinations: https://github.com/dotboris/vuejs-serverside-template-xss.

This is a problem when user input is injected via the server-side template, as the user can then supply the interpolation tags as part of the input. In our case there is no user input in those combinations. However, if you need to take user input and re-render it using the server-side templates of a framework like Django, here are some things you can do to harden against this threat:

  • Use the v-pre directive in VueJS
  • Sanitize the input to discard unsafe characters, including VueJS delimiters (see the sketch after this list)
  • Escape generated output from the database to avoid injections reaching the user’s context as executable JavaScript
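
As a hedged illustration of the sanitization bullet above, here is a small Python helper that strips VueJS interpolation delimiters from user input before it is rendered through a server-side template. It is only a sketch of the idea, not a complete XSS defence, and it assumes the default {{ }} delimiters.

import re

# Match complete {{ ... }} interpolations as well as stray delimiter pairs.
VUE_DELIMITERS = re.compile(r"\{\{.*?\}\}|\{\{|\}\}", re.DOTALL)


def strip_vue_delimiters(user_input: str) -> str:
    """Remove Vue interpolation markers so the frontend will not evaluate them."""
    return VUE_DELIMITERS.sub("", user_input)


# Example: the injected expression is stripped before template rendering.
print(strip_vue_delimiters("hello {{ constructor.constructor('alert(1)')() }}"))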