Commercial VPNs: the Twitter security awareness flamewar edition

A lot of people worry about information security, and perhaps rightly so. We are steadily plagued by ransomware, data breaches, phishing attacks and password stealers; being reminded of good security habits regularly is generally a good thing. Normally, this does not result in angry people. Except on the Internet, of course, and perhaps in particular on Twitter, the platform made for online rage.

Being angry on the Internet: does a VPN help?

Here’s a recent Tweet from infosec awareness blogger John Opdenakker (you can read his blog here https://johnopdenakker.com):

If you click this one you will get some responses, including some harsh ones:

And another one. Felt like an attack, perhaps it was an attack?

So far the disagreement is not quite clear, just that some people obviously think VPNs are of little use for privacy and security (and I tend to agree). There are of course nicer ways of stating such opinions. I even tried to meddle, hopefully in a somewhat less tense voice. Or maybe not?

This didn’t really end too well, and I guess this was the end of it (not directed at me but at @desdotdev).

This is not a very good way to discuss something. My 2 cents here, beyond “be nice to each other”, was really just a link to this quite good argument for why commercial VPNs are mostly not very useful (except if you want to bypass geoblocking or hide your IP address from the websites you visit):

A link to a more sound discussion of the VPN debacle

Risks and VPN marketing

For a good writeup on why VPNs don’t make you secure, I suggest you read the gist above. Of course, everything depends on everything, and in particular on your threat model. If you fear that evil hackers are sitting on your open WiFi network and looking at all your web traffic to non-HTTPS sites, sure, a VPN will protect you. But most sites use HTTPS, and if it is a bank or something similar they will also use HSTS (which makes sure the initial connection is safe too). So what are the typical risks for the coffee-shop-visiting, internet-browsing person?

  • Email: malware and phishing emails trying to trick you into sharing too much information or installing malware
  • Magecart infected online shopping venues
  • Shoulder surfers reading your love letters from the chair behind you
  • Someone stealing your phone or laptop while you are trying to fetch that cortado
  • Online bullying threatening your mental health while discussing security awareness on Twitter
  • Secret Chinese agents spying on your dance moves on TikTok

Does a VPN help here? No, it doesn’t. It encrypts the traffic between your computer and a computer controlled by the VPN company. Such companies typically register in countries with little oversight. Usually the argument is “to avoid having to deliver any data to law enforcement”, and besides, “we don’t keep logs of anything”. Just by coincidence, the same countries tend to be tax havens that allow you to hide corporate ownership structures as well. Very handy. So, instead of trusting your ISP, you set up a tunnel to a computer entirely controlled by a company owned by someone you don’t know, in a jurisdiction that allows them to operate without much oversight, where they promise not to log anything. I am not sure this is a win for privacy or security. And it doesn’t help against China watching your TikTok videos or a Magecart gang stealing your credit card information on your favourite online store.

One of the more popular VPN providers is ExpressVPN. They provide a 10-step security test, which asks mostly useful questions about security habits (although telling random web pages your preferred messaging app, search engine and browser may not be the best idea) – and it also asks you “do you use a VPN”. If you answer “no” – here’s their security advice for you:

ExpressVPN marketing: do you use a VPN?

It is true that a VPN will make it hard to snoop on you on an open wireless network. But this is not in most people’s threat models – not really. The big problems are usually those in our bullet point list above. ExpressVPN is perhaps one of the least scare-mongering VPN sellers, and even they try to use security and privacy anxiety to scare you into buying their product. The arguments about getting around geoblocking and hiding your IP address from the websites you visit are OK – if you have a need to do that. Most people don’t.

When VPNs tell you to buy their service to stay safe online, they are addressing a very narrow online risk driver – one that is negligible in most people’s threat models.

So what should I do when browsing at a coffee shop?

If you worry about the network itself, a VPN may be a solution to that, provided you trust the VPN itself. You could run your own VPN with a cloud provider if you want to and like to do technical stuff. Or, you could just use your phone to connect to the internet if you have a reasonable data plan. I would rather trust a regulated cell provider than an unregulated anonymous corporation in the Caribbean.

Email, viruses and such: be careful with links and attachments, run endpoint security and keep your computer fully up to date. This takes you a long way, and a VPN does not help at all!

Magecart: this one can be hard to spot. Use a credit card when shopping online, and check your statements carefully every month. If your bank provides a virtual card with one-time credit card numbers, that is even better. Does a VPN help? No.

Theft of phones, laptops and coffee mugs? Keep an eye on your stuff. Does a VPN help? Nope.

Online bullying? Harder to fight this one but don’t let them get to you. Perhaps John is onto something here? If you feel harassed, use the block button 🙂

Secret Chinese agents on TikTok? No solution there, except not showing your dance moves on TikTok. Don’t overshare. Does a VPN help? Probably not.

Protecting the web with a solid content security policy

We are used to securing web pages with security headers to fend off cross-site scripting attacks, clickjacking attacks and data theft. Many of these headers are now being deprecated, and browsers may no longer respect them. Instead, we should be using Content Security Policies to reduce the risk to our web content and its users.

Protect your web resources and your users with Content Security Policy headers!

CSPs are supported by all modern browsers, and they also allow reporting of policy violations, which can aid in detecting hacking attempts.
Mozilla Developer Network has great documentation on the use of CSPs: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy.

CSP by example

We want to make it even easier to understand how CSPs can be used, so we have made some demonstrations of the most common directives we should be using. Let us start by setting the following header:

Content-Security-Policy: default-src 'self';

We have created a simple Flask application to demonstrate this. Here’s the view function:

A simple view function setting a CSP header.
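Since the original code was shown as a screenshot, here is a minimal sketch of what such a view function can look like (the app structure and template name are assumptions):

# Minimal Flask sketch (assumed reconstruction, not the original screenshot):
# render index.html and attach a restrictive CSP header to the response.
from flask import Flask, make_response, render_template

app = Flask(__name__)

@app.route("/")
def index():
    response = make_response(render_template("index.html"))
    response.headers["Content-Security-Policy"] = "default-src 'self';"
    return response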

Here we are rendering a template “index.html”, and we have set the default-src directive of the CSP to 'self'. This is a “fallback” directive that applies when you do not specify other directives for key resource types. Here’s what this does to JavaScript and other resources, when other directives are missing:

  • Blocks inline JavaScript (that is, anything inside <script> tags, onclick=… handlers on buttons, etc.) and JavaScript coming from other domains.
  • Blocks media resources from other domains, including images
  • Blocks stylesheets from external domains, as well as inline style tags (unless explicitly allowed)

Blocking untrusted scripts: XSS

Of course, you can set the default-src to allow those things, and many sites do, but then the protection provided by the directive will be weaker. A lot of legacy web pages mix HTML and JavaScript in <script> tags or inline event handlers. Such sites often set default-src 'self' 'unsafe-inline'; to allow this behaviour, but then the policy will not help protect against common injection attacks. Consider first the difference between no CSP, and the following CSP:

Content-Security-Policy: default-src 'self';

We have implemented this in a route in our Python web app:

Adding the header will help stop XSS attacks.
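The route in the screenshot is roughly equivalent to the following sketch (an assumed reconstruction; the original used a template file rather than an inline string). The payload from the URL is reflected unescaped, so only the CSP protects the user:

from flask import Flask, make_response, render_template_string

app = Flask(__name__)

# The "safe" filter disables Jinja's autoescaping, so the payload is reflected as-is.
PAGE = "<html><body><p>You said: {{ payload | safe }}</p></body></html>"

@app.route("/xss/safe/<path:payload>")
def xss_safe(payload):
    response = make_response(render_template_string(PAGE, payload=payload))
    response.headers["Content-Security-Policy"] = "default-src 'self';"
    return response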

Let us first try the following URL: /xss/safe/hello. The result is injected into the HTML through the Jinja template. It is using the “safe” filter in the template, so the output is not escaped in any way.

Showing that a URL parameter is reflected on the page. This may be XSS vulnerable (it is).

We see here that the word “hello” is reflected on the page. Trying with a typical cross-site scripting payload (a script tag that calls alert('XSS')) shows us that this page is vulnerable (which we know, since there is no sanitization):

No alert box: the CSP directive blocks it!

We did not get an alert box saying “XSS”. The application itself is vulnerable, but the browser stopped the script from executing due to our Content-Security-Policy, with the default-src directive set to 'self' and no script-src directive allowing unsafe inline scripts. Opening the dev tools in Safari shows us a bunch of error messages in the console:

Error messages in the browser console (open dev tools to find this).

The first message shows that the lack of nonce or unsafe-inline blocked execution. This is done by the web browser (Safari).

Further, we see that Safari activates its internal XSS auditor and detects the payload. This is not related to CSPs; it is Safari-specific behavior: the XSS auditor is active unless an X-XSS-Protection header explicitly disables it, and it should not be assumed as a default. The X-XSS-Protection header is a security header that has been used in Internet Explorer, Chrome and Safari, but it is currently being deprecated. Edge has removed its XSS auditor, and Firefox never implemented the header. Use Content Security Policies instead.

What if I need to allow inline scripts?

The correct way to allow inline JavaScript is to include a nonce (nonce = number used once) or a hash of the inline script in the policy. These values should preferably be placed in the script-src directive rather than in default-src. For more details on how to do this, see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src#Unsafe_inline_script.

Let’s do an example of an unsafe inline script in our template, using a nonce to allow the inline script. Here’s our code:

Example code showing use of nonce.
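A sketch of what this can look like (assumed reconstruction; the template is inlined as a string here for brevity). A fresh nonce is generated for every request and injected both into the header and into the template:

import secrets
from flask import Flask, make_response, render_template_string

app = Flask(__name__)

PAGE = """
<html><body>
  <p id="blocked">Waiting...</p>
  <!-- This script carries the nonce and is allowed to run -->
  <script nonce="{{ csp_nonce }}">alert("I have a nonce");</script>
  <!-- This script has no nonce and will be blocked by the CSP -->
  <script>document.getElementById("blocked").innerHTML = "Hello there";</script>
</body></html>
"""

@app.route("/nonce-demo")
def nonce_demo():
    nonce = secrets.token_urlsafe(32)  # long, random, regenerated on every request
    response = make_response(render_template_string(PAGE, csp_nonce=nonce))
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'nonce-{}';".format(nonce)
    )
    return response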

Remember to make the nonce unguessable by using a long random value, and make sure to regenerate it each time the CSP is sent to the client – if not, you are not providing much security protection.

Nonces are only good if they can’t be guessed, and if they truly are used only once.

Here we have one script with the nonce included, and one without it. The nonce’d script will create an alert box, and the script without the nonce tries to set the inner HTML of the paragraph with id “blocked” to “Hello there”. The alert box will be created, but the update of the “blocked” paragraph will be blocked by the CSP.

Here’s the HTML template:

A template with two inline scripts. One with an inserted nonce value, one without. Which one will run?

The result is as expected:

Only the nonce’d script will run 🙂

Conclusion: Use CSPs to protect against cross-site scripting (XSS) – but keep sanitising as well: defence in depth.

What about clickjacking?

A good explanation of clickjacking and how to defend against it is available from PortSwigger: https://portswigger.net/web-security/clickjacking.

Here’s a demo of how clickjacking can work, using two “hot” domains of today: who.int and zoom.us (the latter is not vulnerable to clickjacking).

Demo of Clickjacking!

Here’s how to stop that from happening: add the frame-ancestors directive, and whitelist the domains that should be allowed to iframe your web page.

Content-Security-Policy: default-src 'self'; frame-ancestors 'self' youtube.com;
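In a Flask app, a policy like this can be attached to every response with an after_request hook – a small sketch (the allowed domains are just an example):

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp(response):
    # frame-ancestors controls who may embed this page in an iframe
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; frame-ancestors 'self' youtube.com;"
    )
    return response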

Summary

Protecting against common client-side attacks such as XSS and clickjacking can be done using the Content Security Policy header. This should be part of a defense in depth strategy but it is an effective addition to your security controls. As with all controls that can block content, make sure you test thoroughly before you push it to production!

Is COVID-19 killing your motivation?

The COVID-19 pandemic is taking its toll on all of us. One thing is staying at home; another is the thought of all the things that can go wrong. The virus is very infectious, and is likely to kill a lot of people over the next year. The actions we take, and need to take, to curb the damage of the spreading illness are taking away freedoms we take for granted. No more travel, no parties, not even a beer with coworkers. For many of us, even work is gone. No wonder motivation is taking a hit! How can we deal with this situation, collectively and individually, to make the best out of a difficult time?

When the news is mostly about counting our dead, it can be easy to lose faith in humanity

The virus is not only a risk to our health; it is also a risk to our financial well-being and the social fabric of our lives. The actions taken to limit the spread of the virus, and the load it puts on our healthcare systems, are taking their toll on our social lives, and perhaps also our mental health. It is probably a good idea to think through how important aspects of life will be affected, what you can do to minimize the risk, and what you should prepare to do if bad consequences do materialize.

Example individual risk assessment for COVID-19 life impact:

  • Finance – Risks: job loss; loss of real estate value. Things to do: minimize expenses and build a buffer of money; ask the bank for deferral of principal payments; plan to negotiate if collateral for the mortgage is no longer accepted due to a real estate market collapse.
  • Physical health – Risks: being infected by COVID-19. Things to do: keep supplies in storage at home in case of isolation; have space to isolate in, to avoid infecting other family members.
  • Mental health – Risks: feeling of isolation; depression. Things to do: avoid “crazy news cycles” and negative feedback on social media; talk to friends regularly, not just coworkers; get fresh air and some exercise every day; have a contact ready for telemedicine, e.g. check if your insurance company offers this.
  • Work – Risks: loss of visibility; degradation of quality; collaboration problems. Things to do: set up daily video calls with the closest team members; make results visible in digital channels; practice active listening.

News, social media and fake news

The news cycle is a negative spiral of death counts, stock market crashes and experts preaching the end of the world. While it is useful, and important, to know what the situation is in order to make reasonable decisions, it is not useful to watch negative news around the clock. It is probably a good idea to batch your news intake during a crisis, for example to the morning and afternoon news.

Social media tend to paint an even worse picture, taking the news cycle and twisting it into something more extreme. My Twitter feed is now full of people arguing we should go for full communism and introduce death penalties for people allowing children to play outside. It is OK to watch stuff like that for a short while as entertainment, but it can easily turn into a source of negative influence, and perhaps it would be better to take a break from it? Use filters to stay away from hashtags that bring you down without bringing anything useful.

DevSecOps: Embedded security in agile development

The way we write, deploy and maintain software has changed greatly over the years, from waterfall to agile, from monoliths to microservices, from the basement server room to the cloud. Yet, many organizations haven’t changed their security engineering practices – leading to vulnerabilities, data breaches and lots of unpleasantness. This blog post is a summary of my thoughts on how security should be integrated from user story through coding and testing and up and away into the cyber clouds. I’ve developed my thinking around this as my work in the area has moved from industrial control systems and safety critical software to cloud native applications in the “internet economy”.

What is the source of a vulnerability?

At the outset of this discussion, let’s clarify two common terms as I use them. In very unacademic terms:

  • Vulnerability: a flaw in the way a system is designed and operated, that allows an adversary to perform actions that are not intended to be available by the system owner.
  • A threat: actions performed on an asset in the system by an adversary in order to achieve an outcome that they are not supposed to be able to achieve.

The primary objective of security engineering is to stop adversaries from being able to achieve their evil deeds. Most often, evilness is possible because of system flaws. How these flaws end up in the system is important to understand when we want to make life harder for the adversary. Vulnerabilities are flaws, but not all flaws are vulnerabilities. Fortunately, quality management helps reduce defects whether they can be exploited by evil hackers or not. Let’s look at three types of vulnerabilities we should work to abolish:

  • Bugs: coding errors, implementation flaws. The design and architecture is sound, but the implementation is not. A typical example of this is a SQL injection vulnerability in a web app.
  • Design flaws: errors in architecture and how the system is planned to work. A flawed plan that is implemented perfectly can be very vulnerable. A typical example of this is a broken authorization scheme.
  • Operational flaws: the system makes it hard for users to do things correctly, making it easier to trick privileged users to perform actions they should not. An example would be a confusing permission system, where an adversary uses social engineering of customer support to gain privilege escalation.

Security touchpoints in a DevOps lifecycle

Traditionally there has been a lot of discussion about a secure development lifecycle. But our concern is removing vulnerabilities from the system as a whole, so we should follow the system from infancy through operations. The following touchpoints do not make up a blueprint; they are an overview of security aspects in different system phases.

  • Dev and test environment:
    • Dev environment helpers
    • Pipeline security automation
    • CI/CD security configuration
    • Metrics and build acceptance
    • Rigor vs agility
  • User roles and stories
    • Rights management
  • Architecture: data flow diagram
    • Threat modeling
    • Mitigation planning
    • Validation requirements
  • Sprint planning
    • User story reviews
    • Threat model refinement
    • Security validation testing
  • Coding
    • Secure coding practices
    • Logging for detection
    • Abuse case injection
  • Pipeline security testing
    • Dependency checks
    • Static analysis
    • Mitigation testing
      • Unit and integration testing
      • Detectability
    • Dynamic analysis
    • Build configuration auditing
  • Security debt management
    • Vulnerability prioritization
    • Workload planning
    • Compatibility blockers
  • Runtime monitoring
    • Feedback from ops
    • Production vulnerability identification
    • Hot fixes are normal
    • Incident response feedback

Dev environment aspects

If an adversary takes control of the development environment, he or she can likely inject malicious code in a project. Securing that environment becomes important. The first principle should be: do not use production data, configurations or servers in development. Make sure those are properly separated.

The developer workstation should also be properly hardened, as should any cloud accounts used during development, such as GitHub or a cloud-based build pipeline: use two-factor authentication, keep systems patched, don’t work from admin accounts, and encrypt network traffic.

The CI/CD pipeline should be configured securely: no hard-coded secrets, limits on who can access secrets, and control over who can change the build configuration.

During the early phases of a project it is tempting to be relaxed about testing, dependency vulnerabilities and so on. This can quickly turn into technical debt – first in one service, then in many, and in the end there is no way to refinance your security debt at lower interest rates. Technical debt compounds like credit card debt – so manage it carefully from the beginning. To help with this, create acceptable build thresholds, and a policy on the lifetime of accepted poor metrics. Take metrics from testing tools and let them guide you: complexity, code coverage, number of vulnerabilities with CVSS above X, etc. Don’t select too many KPIs, but don’t allow the ones you track to slip.

One could argue that strict policies and acceptance criteria will hurt agility and slow a project down. Truth is that lack of rigor will come back to bite us, but at the same time too much will indeed slow us down or even turn our agility into a stale bureaucracy. Finding the right balance is important, and this should be informed by context. A system processing large amounts of sensitive personal information requires more formalism and governance than a system where a breach would have less severe consequences. One size does not fit all.

User roles and stories

Most systems have different types of users with different needs – and different access rights. Hackers love developers who don’t plan in terms of user roles and stories – the things each user would need to do with the system – because lack of planning often leads to much more liberal permissions “just in case”. User roles and stories should thus be a primary security tool. Consider a simple app for approval of travel expenses in a company. This app has two primary user types:

  • Travelling salesmen who need reimbursements
  • Bosses who will approve or reject reimbursement claims

In addition to this, someone must be able to add and remove users, grant a given boss access to the right travelling salesmen, etc. In other words, the system also needs an administrator.

Let’s take the travelling salesman and look at “user stories” that this role would generate:

  • I need to enter my expenses into a report
  • I need to attach documentation such as receipts to this report
  • I need to be able to send the report to the boss for approval
  • I want to see the approval status of my expense report
  • I need to receive a notification if my report is not approved
  • I need to be able to correct any mistakes based on the rejection

Based on this, it is clear that the permissions of the “travelling salesman” role only need to give write access to some operations, for data relating to this specific user, plus read access to the status of the approval. This goes directly into our authorization concept for the app, and already generates testable security annotations:

  • A travelling salesman should not be able to read the expense report of another travelling salesman
  • A travelling salesman should not be able to approve expense reports, including his own

These negative unit tests could already go into the design as “security annotations” for the user stories.
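Here is a hedged sketch of how those two annotations could look as pytest tests against a toy version of the expense API (the routes, role model and header-based “authentication” are made up for illustration):

import flask
import pytest

app = flask.Flask(__name__)
REPORTS = {42: {"owner": "bob", "status": "submitted"}}
ROLES = {"alice": "salesman", "bob": "salesman", "carol": "boss"}

@app.get("/reports/<int:rid>")
def read_report(rid):
    user = flask.request.headers.get("X-User", "")
    report = REPORTS.get(rid)
    # Only the owner or a boss may read a report
    if report is None or (ROLES.get(user) != "boss" and report["owner"] != user):
        return {"error": "forbidden"}, 403
    return report

@app.post("/reports/<int:rid>/approve")
def approve_report(rid):
    # Only the boss role may approve
    if ROLES.get(flask.request.headers.get("X-User", "")) != "boss":
        return {"error": "forbidden"}, 403
    REPORTS[rid]["status"] = "approved"
    return REPORTS[rid]

@pytest.fixture
def client():
    return app.test_client()

def test_salesman_cannot_read_another_salesmans_report(client):
    assert client.get("/reports/42", headers={"X-User": "alice"}).status_code == 403

def test_salesman_cannot_approve_expense_reports(client):
    assert client.post("/reports/42/approve", headers={"X-User": "alice"}).status_code == 403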

In addition to user stories, we have abusers and abuse stories. This is about the types of adversaries and what they would like to do that we don’t want them to achieve. Let’s take as an example a hacker hired by a competitor to perform industrial espionage. We have the adversary role “industrial espionage”. Here are some abuse cases we can define that relate to the motivation of the adversary rather than technical vulnerabilities:

  • I want to access all travel reports to map where the sales personnel of the firm are going to see clients
  • I want to see the approved financial data to gauge the size of their travel budget, which would give me information on the size of their operation
  • I’d like to find names of people from their clients they have taken out to dinner, so we know who they are talking to at potential client companies
  • I’d like to get user names and personal data that allow me to gauge whether some of the employees could be recruited as insiders or poached to come work for us instead

How is this hypothetical information useful for someone designing an app for expense reporting? By knowing the motivations of the adversaries, we can better gauge the likelihood that a certain type of vulnerability will actually be exploited. Remember: vulnerabilities are not the same as threats – and we have limited resources, so the vulnerabilities that would help attackers achieve their goals are more important to remove than those that cannot easily help the adversary.


Architecture and data flow diagrams

Coming back to the sources of vulnerabilities, we want to avoid vulnerabilities of three kinds: software bugs, software design flaws, and flaws in operating procedures. Bugs are implementation errors, and the way we try to avoid them is by managing competence, workload and stress levels, and by use of automated security testing such as static analysis and similar tools. Experience from software reliability engineering shows that about 50% of software flaws are implementation errors – the rest would then be design flaws: designs and architectures that do not implement the intentions of the designer. Static analysis cannot help us here, because there may be no coding errors such as lack of exception handling or lack of input validation – it is the concept itself that is wrong; for example giving a user role too many privileges, or allowing a component to talk to a component it shouldn’t have access to. A good tool for identification of such design flaws is threat modeling based on a data flow diagram. Make a diagram of the software data flow, break it down into components at a reasonable level, and consider how an adversary could attack each component and what the impact could be. By going through an exercise like this, you will likely identify potential vulnerabilities and weaknesses that you need to handle. The mitigations you introduce may be various security controls – such as blocking internet access for a server that only needs to be available on the internal network. The next question then is: how do you validate that your controls work? Do you order a penetration test from a consulting company? That could work, but it doesn’t scale very well; you want this to work in your pipeline. The primary tools to turn to are unit and integration testing.

We will not discuss threat modeling techniques in depth in this post, but there are several that can be applied. Keep it practical, and don’t dive too deep into the details – it is better to start with a higher-level view of things, and rather refine it as the design matures.

Often a STRIDE-like approach is a good start, and for the worst case scenarios it can be worthwhile diving into more detail with attack trees. An attack tree is a fault tree applied to adversarial modeling.
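As an illustration only (the component names and structure below are made up), the data flow diagram can even be kept as a small piece of code next to the project, and a STRIDE pass then becomes a checklist you can walk through in a planning meeting:

from dataclasses import dataclass

STRIDE = ["spoofing", "tampering", "repudiation", "information disclosure",
          "denial of service", "elevation of privilege"]

@dataclass
class Component:
    name: str
    internet_facing: bool

components = [
    Component("Browser SPA", internet_facing=True),
    Component("API server", internet_facing=True),
    Component("Database", internet_facing=False),
]

# Every (component, STRIDE category) pair becomes a question for the threat modeling session
for component in components:
    for category in STRIDE:
        print(f"{component.name}: how could an adversary achieve {category}?")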

After the key threats have been identified, it is time to plan how to deal with the risk. We should apply the defense-in-depth principle, and remember that a single security control is usually not enough to stop all attacks – because we do not know all the possible attack patterns. When we have come up with mitigations for the threats we worry about, we need to validate that they actually work. This validation should happen at the lowest possible level – unit tests and integration tests. It is a good idea for developers to run their own tests, but these validations definitely must live in the build pipeline.

Let’s consider a two-factor authentication flow using SMS-based two-factor authentication. This is the authentication for an application used by politicians, and there are skilled threat actors who would like to gain access to individual accounts.

A simple data flow diagram for a 2FA flow

Here’s how the authentication process works:

  • The user connects to the domain and gets a single-page application loaded in the browser, with a login form asking for username and password
  • The user enters credentials, which are sent as a POST request to the API server; the server validates them against stored credentials (hashed in a safe way) in a database. The API server only accepts requests from the right domain, and the DB server is not internet accessible.
  • When the correct credentials have been entered, the SPA updates with a 2FA challenge, and the API server sends a POST request to a third-party SMS gateway, which sends the token to the user’s cell phone.
  • The user enters the code and, if it is valid, is authenticated. A JWT is returned to the browser and stored in localStorage.

Let’s put on the dark hat and consider how we can take over this process.

  1. SIM card swapping combined with a phishing email to capture the credentials
  2. SIM card swapping combined with keylogger malware for password capture
  3. Phishing capturing both password and the second factor from a spoofed login page, and reusing credentials immediately
  4. Create an evil browser extension and trick the user to install it using social engineering. Use the browser extension to steal the token.
  5. Compromise a dependency used by the application’s frontend, to allow man-in-the-browser attacks that can steal the JWT after login.
  6. Compromise a dependency used in the API to give direct access to the API server and the database
  7. Compromise the 3rd party SMS gateway to capture credentials, use password captured with phishing or some other technique
  8. Exploit a vulnerability in the API to bypass authentication, either in a dependency or in the code itself.

As we see, the threat is the adversary getting access to a user account. There are many attack patterns that could be used, and only one of them involves only the code written in the application. If we are going to start planning mitigations here, we could first get rid of the first two problems by not using SMS for two-factor authentication, but rather relying on an authenticator app, like Google Authenticator. Test: no requests to the SMS gateway.

Phishing: avoid direct POST requests from a phishing domain to the API server by only allowing CORS requests from our own domain. Send a verification email when a login is detected from an unknown machine. Tests: check that CORS requests from other domains fail, and check that an email is sent when a new login occurs.
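As a sketch of the first test (assuming a Flask API using the flask-cors package; the domains are examples), an integration test can assert that a foreign origin never gets a matching Access-Control-Allow-Origin header back:

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app, origins=["https://app.example.com"])  # only our own SPA origin is allowed

@app.post("/login")
def login():
    return {"status": "ok"}

def test_cors_not_allowed_for_unknown_origins():
    client = app.test_client()
    response = client.post("/login", headers={"Origin": "https://evil.example.net"})
    # Without a matching Access-Control-Allow-Origin header, the browser will
    # refuse to hand the response to the phishing page's JavaScript.
    assert response.headers.get("Access-Control-Allow-Origin") != "https://evil.example.net"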

Browser extensions: capture metadata/fingerprint data and detect token reuse across multiple machines. Test: same token in different browsers/machines should lead to detection and logout.

Compromised dependencies are a particularly difficult attack vector to deal with, as the vulnerability is typically unknown – in practice a zero-day. For token theft, the metadata-based mitigation is still valid. In addition, it is good practice to have a process for acceptance of third-party libraries beyond checking for “known vulnerabilities”. Compromise of the third-party SMS gateway is also difficult to deal with in the software project and should be part of a supply chain risk management program – but in this case the problem is solved by removing the third party.

Exploit a vulnerability in the app’s API: perform static analysis and dependency analysis to minimize known vulnerabilities. Test: no high-risk vulnerabilities detected with static analysis or dependency checks.

We see that in spite of having many risk reduction controls in place, we do not cover everything that we know, and there are guaranteed to be attack vectors in use that we do not know about.

Sprint planning – keeping the threat model alive

“Secure development” methodologies sometimes receive criticism for “being slow”: too much analysis, the sprint stops, productivity drops. This is obviously not good, so the question is rather “how can we make security a natural part of the sprint?”. One answer to that, at least a partial one, is to have a threat model based on the overall architecture. When it is time for sprint planning, there are three essential pieces that should be revisited:

  • The user stories or story points we are addressing; do they introduce threats or points of attack not already accounted for?
  • Is the threat model we created still representative for what we are planning to implement? Take a look at the data flow diagram and see if anything has changed – if it has, evaluate if the threat model needs to be updated too.
  • Finally: for the threats relevant to the issues in the sprint backlog, do we have validation for the planned security controls?

Simply discussing these three issues will often be enough to see if there are more “known unknowns” that we need to take care of, and will allow us to update the backlog and test plan with the appropriate annotations and issues.

Coding: the mother of bugs after the design flaws have been agreed upon

The main purpose of the threat modeling discussed above is to uncover “design flaws”. While writing code, it is perfectly possible to implement a flawed plan in a flawless manner. That is why we should invest real effort in creating a plan that makes sense. The other half of vulnerabilities are bugs – coding errors. As long as people, and not some very smart AI, are still writing code, errors in code will be related to human factors – or human error, as it is popularly called. This often points the finger of blame at a single individual (the developer), but since none of us work in a vacuum, there are many factors that influence these bugs. Let us try to classify these errors (leaning heavily on human factors research) – broadly there are three classes of human error:

  • Slips: errors made due to lack of attention, a mishap. Think of this like a typo; you know how to spell a word but you make a small mistake, perhaps because your mind is elsewhere or because the keyboard you are typing on is unfamiliar.
  • Competence gaps: you don’t really know how to do the thing you are trying to do, and this lack of knowledge and practice leads you to make the wrong choice. Think of an inexperienced vehicle driver on a slippery road in the dark of the night.
  • Malicious error injection: an insider writes bad code on purpose to hurt the company – for example because he or she is being blackmailed.

Let’s leave the evil programmer aside and focus on how to minimize bugs that are created due to other factors. Starting with “slips” – which factors would influence us to make such errors? Here are some:

  • Not enough practice to make the action to take “natural”
  • High levels of stress
  • Lack of sleep
  • Task overload: too many things going on at once
  • Outside disturbances (noise, people talking to you about other things)

It is not obvious that the typical open office plan favored by IT firms is the optimal layout for programmers. Workload management, work-life balance and physical working environment are important factors for avoiding such “random bugs” – and therefore also important for the security of your software.

These are mostly “trying to do the right thing but doing it wrong” types of errors. Let’s now turn to the lack-of-competence side of the equation. Developers have often been trained in complex problem solving – but not necessarily in protecting software from abuse. Secure coding practices, such as how to avoid SQL injection, why you need output escaping, and similar practical application security knowledge, are often not gained by studying computer science. It is also likely that a more self-taught individual will have skipped over such challenges, as the natural focus is on “solving the problem at hand”. This is why a secure coding practice must deliberately be created within an organization, and training and resources provided to teams to make it work. A good baseline should include:

  • How to protect against OWASP Top 10 type vulnerabilities
  • Secrets management: how to protect secrets in development and production
  • Detectability of cyber threats: application logging practices

An organization with a plan for this, and appropriate training to make sure everyone’s on the same page, will stand a much better chance of avoiding the “competence gap” type of errors.

Security testing in the build pipeline

OK, so you have planned your software, created a threat model, and committed code. The CI/CD build pipeline triggers. What’s there to stop bad code from reaching your production environment? Let’s consider the potential locations of exploitable bugs in our product:

  • My code
  • The libraries used in that code
  • The environment where my software runs (typically a container in today’s world)

Obviously, if we are trying to push something with known critical errors in any of those locations to production, our pipeline should not accept that. Starting with our own code, a standard test that can uncover many bugs is “static analysis”. Depending on the rules you use, this can be a very good security control, but it has limitations. Typically it will find a hardcoded password written as

var password = 'very_secret_password';

but it may not find the same secret if the variable name doesn’t give it away:

var tempstring = 'something_that_may_be_just_a_string';

and yet it may throw an alert on

var password = getsecret();

just because the word “password” is in there. So using the right rules, and tuning them, is important to make this work. Static analysis should be a minimum test to always include.

The next part is our dependencies. Using libraries with known vulnerabilities is a common problem that makes life easy for the adversary. This is why you should always scan the code for external libraries and check if there are known vulnerabilities in them. Commercial vendors of such tools often refer to this as “software component analysis”. The primary function is to list all dependencies, check them against databases of known vulnerabilities, create alerts accordingly – and break the build based on threshold limits.

The environment we run on should also be secure. When building a container image, make sure it does not contain known vulnerabilities. Using a scanner tool for this is also a good idea.

While static analysis is primarily a build step, testing for known vulnerabilities – whether in code libraries or in the environment – should be done regularly, so that vulnerabilities discovered after the code is deployed do not remain in production over time. Testing the inventory of dependencies against a database of known vulnerabilities on a regular schedule is an effective control for this type of risk.

If a library or a dependency in the environment has been injected with malicious code in the supply chain, a simple scan will not identify it. Supply chain risk management is required to keep this type of threat under control, and there are no known trustworthy methods of automatically identifying maliciously injected code in third-party dependencies in the pipeline. One principle that should be followed with respect to this type of threat, however, is minimization of the attack surface. Avoid very deep dependency trees – like an NPM project with 25,000 dependencies made by 21,000 different contributors. Trusting 21,000 strangers in your project can be a hard sell.

Another test that should preferably be part of the pipeline, is dynamic testing where actual payloads are tested against injection points. This will typically uncover other vulnerabilities than static analysis will and is thus a good addition. Note that active scanning can take down infrastructure or cause unforeseen errors, so it is a good idea to test against a staging/test environment, and not against production infrastructure.

Finally, we have the tests that validate the mitigations identified during threat modeling. Unit tests and integration tests for security controls should be added to the pipeline.

Modern environments are usually defined in YAML files (or other types of config files), not by technicians drawing cables. The benefit of this is that the configuration can easily be tested. It is therefore a good idea to create acceptance tests for your Dockerfiles, Helm charts and other configuration files, to prevent an insider from altering them – or someone setting things up to be vulnerable by mistake.
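A hedged sketch of such an acceptance test (the rules and the file location are examples; encode your own policy):

from pathlib import Path

def test_dockerfile_follows_baseline_policy():
    lines = Path("Dockerfile").read_text().splitlines()
    from_lines = [l for l in lines if l.strip().upper().startswith("FROM")]
    user_lines = [l for l in lines if l.strip().upper().startswith("USER")]

    # Base images should be pinned to a specific tag, not floating on :latest
    assert from_lines and all(
        ":" in l and not l.strip().endswith(":latest") for l in from_lines
    )

    # The image should drop root privileges with an explicit non-root USER instruction
    assert user_lines and user_lines[-1].split()[1] != "root"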

Security debt has a high interest rate

Technical debt is a curious beast: if you fail to address it, it will compound and likely ruin your project. The worst kind is security debt: whereas not fixing performance issues, dead code and so on compounds like a credit card from your bank, leaving vulnerabilities in the code compounds like interest on money you borrowed from Raymond Reddington. Manage your debt, or you will go out of business based on a ransomware campaign followed by a GDPR fine and some interesting media coverage…

You need to plan for time to pay off your technical debt, in particular your security debt.

Say you want to spend a certain percentage of your sprint time on fixing technical debt; how do you choose which issues to take? I suggest you create a simple prioritization system:

  • Exposed before internal
  • Easy to exploit before hard
  • High impact before low impact

But no matter what method you use to prioritize, the most important thing is that you work on getting rid of known vulnerabilities as part of business as usual – to avoid going bankrupt due to overwhelming technical debt. Or being hacked.

Sometimes the action you need to take to get rid of a security hole can create other problems, like installing an update that is not compatible with your code. When this is the case, you may need to spend more resources on it than on a “normal” vulnerability because you need to do code rewrites – and that refactoring may also require you to update your threat model and risk mitigations.

Operations: your code on the battle field

In production your code is exposed to its users, and in part it may also be exposed to the internet as a whole. Dealing with feedback from this jungle should be seen as a key part of your vulnerability management program.

First of all, you will get access to logs and feedback from operations, whether it is performance related, bug detections or security incidents. It is important that you feed this into your issue management system and deal with it throughout sprints. Sometimes you may even have a critical situation requiring you to push a “hotfix” – a change to the code as fast as possible. The good thing about a good pipeline is that your hotfix will still go through basic security testing. Hopefully, your agile security process and your CI/CD pipeline are now working so well in symbiosis that they don’t slow your hotfix down. In other words: the “hotfix” you are pushing is just a code commit like all others – you are pushing to production several times a day, so how would this be any different?

Another aspect is feedback from incident response. There are two levels of incident response feedback that we should consider:

  1. Incident containment/eradication leading to hotfixes.
  2. Security improvements from the lessons learned stage of incident response

The first part we have already considered. The second part could be improvements to detections, better logging, etc. These should go into the product backlog and be handled during the normal sprints. Don’t let lessons learned end up as a PowerPoint given to a manager – a real lesson learned ends up as a change in your code, your environment, your documentation, or in the incident response procedures themselves.

Key takeaways

This was a long post, here are the key practices to take away from it!

  • Remember that vulnerabilities come from poor operational practices, flaws in design/architecture, and from bugs (implementation errors). Linting only helps with bugs.
  • Use threat modeling to identify operational and design weaknesses
  • All errors are human errors. A good working environment helps reduce vulnerabilities (see performance shaping factors).
  • Validate mitigations using unit tests and integration tests.
  • Test your code in your pipeline.
  • Pay off technical debt religiously.

Two-factor auth for your Node project, without tears

  • How 2fa works with TOTP
  • Code
  • Adding bells and whistles
  • How secure is OTP really?

How 2fa works with TOTP

TOTP is short for time-based one-time password. This is a commonly used method for a second factor in two-factor authentication. The normal login flow for the user is then: first log in with username and password. Then you are presented with a second login form, where you have to enter a one-time password. Typically you will have an app like Google Authenticator or Microsoft Authenticator that will provide you with a one-time code that you can enter. If you have the right code, you will be authenticated and you gain access.

How does the web service know what the right one-time code is?

Consider a web application using two-factor authentication. The user has Google Authenticator on his or her phone to provide one-time passwords – but how does the app know what to compare with?

That comes from the setup: there is a pre-shared secret that is used to generate the tokens based on the current time. These tokens are valid for a limited time (typically 30, 60 or 120 seconds). The time here is “Unix time” – the number of seconds since midnight, 1 January 1970. TOTP is a special case of the counter-based HOTP algorithm (HOTP = HMAC-based one-time password). Both are described in lengthy detail in RFC documents; the one for TOTP is RFC 6238. The main point is: both the token generator (the phone) and the validator (the web server) need to know the current Unix time, and they need the pre-shared secret. The token can then be calculated as a function of the time and this secret.

A one-time password used for two-factor authentication is a function of the current time and a pre-shared key.

Details in RFC 6238
The basics of multi-factor authentication: something you know + something you have
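To make the calculation concrete, here is a minimal Python sketch of the TOTP computation described in RFC 6238 – for illustration only; the demo app below uses the speakeasy library in Node instead:

import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_base32, step=30, digits=6):
    key = base64.b32decode(shared_secret_base32, casefold=True)
    counter = int(time.time()) // step                 # Unix time divided into 30-second steps
    message = struct.pack(">Q", counter)               # 8-byte big-endian counter (as in HOTP)
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation from RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the authenticator app and the server run this same calculation on the
# pre-shared secret; if their clocks agree (within a window), the codes match.
print(totp("JBSWY3DPEHPK3PXP"))  # example base32 secret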

How do I get this into my app?

Thanks to open source libraries, creating a 2fa flow for your app is not hard. Here’s an example made on Glitch for a NodeJS app: https://aerial-reward.glitch.me/login.

The source code for the example is available here: https://glitch.com/edit/#!/aerial-reward. We will go through the main steps in the code to make it easy to understand what we are doing.

Step 1: Choose a library for 2FA.

Open source libraries are great – but they also come with a risk. They may contain vulnerabilities or backdoors. Doing some due diligence up front is probably a good idea. In this case we chose speakeasy because it is popular and well-documented, and running npm audit does not show any vulnerabilities for the library although it hasn’t been updated in 4 years.

Step 2: Activate MFA using a QR code for the user

We assume you have created a user database, and that you have implemented username- and password-based login (in a safe way). Now to the MFA part – how can we share the pre-shared secret with the device used to generate the token? This is what we use a QR code for. The user logs in and is directed to a profile page where “Activate MFA” is an option. Clicking this link shows a QR code that can be scanned with an authenticator app, which shares the pre-shared key with the app. Hence, the QR code is sensitive data: it should only be available when setting up MFA, and not be stored permanently. The user should also be authenticated (with username and password) in order to see the QR code.

In our example app, here’s the route for activating MFA after logging in.

app.get('/mfa/activate', (req, res) => {
  if (req.session.isauth) {
    var secret = speakeasy.generateSecret({name: 'Aerial Reward Demo'})
    req.session.mfasecret_temp = secret.base32;
    QRCode.toDataURL(secret.otpauth_url, function(err, data_url) {
      if (err) {
        res.render('profile', {uname: req.session.username, mfa: req.session.mfa, qrcode: '', msg: 'Could not get MFA QR code.', showqr: true})
      } else {
        console.log(data_url);
        // Display this data URL to the user in an <img> tag
        res.render('profile', {uname: req.session.username, mfa: req.session.mfa, qrcode: data_url, msg: 'Please scan with your authenticator app', showqr: true}) 
      }
    })
  } else {
    res.redirect('/login')
  }
})

What this does is the following:

  • Check that the user is authenticated using a session variable (set on login)
  • Create a temporary secret and store as a session variable, using the speakeasy library. This is our pre-shared key. We won’t store it in the user profile before having verified that the setup worked, to avoid locking out the user.
  • Generate a QR code with the secret. To do this, you need a QR code library; we used the qrcode package, which seems to do the job well. The speakeasy library generates an otpauth_url that can be used in the QR code. This otpauth_url contains the pre-shared secret.
  • Finally, we render a template (the profile page for the user) and supply the QR code data URL to it (res.render).

For rendering this to the end user we are using a Pug template.

html
  head
    title Login
    link(rel="stylesheet" href="/style.css")
  body
    a(href="/logout") Log out
    br
    h1 Profile for #{uname}
    p MFA: #{mfa}
    unless mfa
      br
      a(href="/mfa/activate") Activate multi-factor authentication
    if showqr
      p= msg
      img(src=qrcode)
      p  When you have added the code to your app, verify that it works here to activate.
      a(href="/mfa/verify") VERIFY MFA CODE
    if mfa
      img(src="https://media.giphy.com/media/81xwEHX23zhvy/giphy.gif")
      p Security is important. Thank you for using MFA!

The QR code is shown in the profile when the right route is used and the user is not already using MFA. This presents the user with a QR code to scan, and then he or she will need to enter a correct OTP code to verify that the setup works. Then we will save the TOTP secret in the user profile.

How it looks for the user

The profile page for the user “safecontrols” with the QR code embedding the secret.

Scanning the QR code on an authenticator app (many to choose from, FreeOTP from Red Hat is a good alternative), gives you OTP tokens. Now the user needs to verify by entering the OTP. Clicking the link “VERIFY MFA CODE” to do this brings up the challenge. Entering the code verifies that you have your phone. When setting things up, the verification will store the secret “permanently” in your user profile.

How do I verify the token then?

We created a route to verify OTPs. The behavior depends on whether MFA has been set up yet or not.

app.post('/mfa/verify', (req, res) => {
  // Check that the user is authenticated
  var otp = req.body.otp
  if (req.session.isauth && req.session.mfasecret_temp) {
    // OK, move on to verify 2fa activation
    var verified = speakeasy.totp.verifyDelta({
      secret: req.session.mfasecret_temp,
      encoding: 'base32',
      token: otp,
      window: 6
    })
    console.log('verified', verified)
    console.log(req.session.mfasecret_temp)
    console.log(otp)
    if (verified) {
      db.get('users').find({uname: req.session.username}).assign({mfasecret: req.session.mfasecret_temp}).write()
      req.session.mfa = true
      req.session.mfarequired = true
      res.redirect('/profile')
    } else {
      console.log('OTP verification failed during activation')
      res.redirect('/profile')
    }
  } else if (req.session.mfarequired) {
    // OK, normal verification
    console.log('MFA is required for user ', req.session.username)
    var verified = speakeasy.totp.verifyDelta({
      secret: req.session.mfasecret,
      encoding: 'base32',
      token: otp,
      window: 6
    })
    console.log(verified)
    if (verified) {
      req.session.mfa = true
      res.redirect('/profile')  
    } else {
      // we are pretty harsh, thrown out after one try
      req.session.destroy(() => {
        res.redirect('/login')
      })
    }
  } else {
    // Not a valid 2fa challenge situation
    console.log('User is not properly authenticated')
    res.redirect('/')
  }
})

The first path is for the situation where MFA has not yet been set up (this is the activation step). Here we check that the user is authenticated and that there is a temporary secret stored in a session variable. This happens when the user clicks the “VERIFY…” link on the profile page after scanning the QR code, so this session variable will not be available in other cases.

The second path checks if there is a session variable mfarequired set to true. This happens when the user authenticates, if an MFA secret has been stored in the user profile.

The verification itself is done with the speakeasy library functions. Note that you can use speakeasy.totp.verify (returns a Boolean) or speakeasy.totp.verifyDelta (returns a time delta). The former did not work for some reason, whereas the Delta version did, which is the only reason for this choice in this app.

How secure is this then?

Nothing is unhackable, and this is no exception to that rule. The security of the OTP flow depends on your settings, as well as other defense mechanisms. How can hackers bypass this?

  • Stealing tokens (man-in-the-middle or stealing the phone)
  • Phishing with fast use of tokens
  • Brute-forcing codes has been reported as a possible attack on OTPs, but this depends on the configuration

These are real attacks that can happen, so how to protect against them?

  • Always use https. Never transfer tokens over insecure connections. This protects against man-in-the-middle.
  • Phishing: this is more difficult. If someone obtains your password and a valid token and can use them on the real page before the token expires, they will get in. Using metadata to calculate a risk score can help: sign-in from a new device requires confirmation by clicking a link sent by email, force password reset after X failed logins, etc. None of that is implemented here. That being said, OTP-based 2FA protects against most phishing attacks – but if you are a high-value target for organized crime or professional spies, you should probably think about more secure patterns. Alternatives include push notifications or hardware tokens that avoid typing something into a form.
  • Brute force: trying many OTPs until you get one right is possible if the “window” for when a code is considered valid is too long and you are not logged out after trying one or more wrong codes. In the code above the window parameter is set to 6, which is long and potentially insecure, but the user is logged out if the OTP challenge fails, so brute force is still not practical.

Keeping your conversations private in the age of supervised machine learning and government snooping

Most of us would like to keep our conversations with other people private, even when we are not discussing anything secret. That the person behind you on the bus can hear you discussing last night’s football game with a friend is perhaps not something that would make you feel uneasy, but what if employees, or outsourced consultants, from a big tech firm are listening in? Or government agencies are recording your conversations and using data mining techniques to flag them for analyst review if you mention something that triggers a red flag? That would certainly be unpleasant to most of us. The problem is, this is no longer science fiction.

You are being watched.

Tech firms listening in

Tech firms are using machine learning to create good consumer products – like voice messaging with direct translation, or digital assistants that need to understand what you are asking of them. The problem is that such technologies cannot learn entirely by themselves, so your conversations are being recorded. And listened to.

Microsoft: https://www.vice.com/en_us/article/xweqbq/microsoft-contractors-listen-to-skype-calls

Google: https://www.theverge.com/2019/7/11/20690020/google-assistant-home-human-contractors-listening-recordings-vrt-nws

Amazon: https://www.bloomberg.com/news/articles/2019-04-10/is-anyone-listening-to-you-on-alexa-a-global-team-reviews-audio

Apple: https://www.theguardian.com/technology/2019/jul/26/apple-contractors-regularly-hear-confidential-details-on-siri-recordings

All of these systems are being listened in on in order to improve speech recognition, which is hard for machines. They need some help. The problem is that users have generally not been aware that their conversations or bedroom activities may be listened to by contractors in some undisclosed location. It certainly doesn’t feel great.

That is probably not a big security problem for most people: it is unlikely that they can specifically target you as a person and listen in on everything you do. Technically, however, this could be possible. What if adversaries could bribe their way to listen in to the devices of decision makers? We already know that tech workers, especially contractors and those in the lower end of the pay scale, can be talked into taking a bribe (AT&T employee installing malware on company servers allowing unauthorized unlocking of phones (wired.com), Amazon investigating data leaks for bribe payments). If you can bribe employees to game the phone locking systems, you can probably manipulate them into subverting the machine learning QA systems too. Because of this, if you are a target of high-resource adversaries you probably should be skeptical about digital assistants and what you talk about around them.

Governments are snooping too

We kind of knew it already, but not the extent of it. Then Snowden happened – confirming that governments are running massive surveillance programs that capture the communications of everyone and make them searchable. The NSA got heavily criticized for their invasive practices in the US, but that did not stop such programs from being further developed, nor did it stop the rest of the world from following. Governments have the powers to collect massive amounts of data and analyze it. Here’s a good summary of the current US state of phone record collection from Reuters: https://www.reuters.com/article/us-usa-cyber-surveillance/spy-agency-nsa-triples-collection-of-u-s-phone-records-official-report-idUSKBN1I52FR.

The rest of the world is likely not far behind, and governments are using laws to make collection lawful. The stated intent is the protection of democracy, freedom of speech, and the evergreen “stopping terrorists”. The only problem is that mass surveillance seems to be relatively ineffective at stopping terrorist attacks, and it has been found to have a chilling effect on freedom of speech and participation in democracy, and even to stop people from seeking information online because they feel somebody is watching them. Jonathan Shaw wrote an interesting piece on this in Harvard Magazine in 2017: https://harvardmagazine.com/2017/01/the-watchers.

When surveillance makes people think “I feel uneasy researching this topic – what if I end up on some kind of watchlist?” before informing themselves, what happens to the way we engage, discuss and vote? Surveillance has some very obvious downsides for us all.

If an unspoken fear of being watched is stopping us from thinking the thoughts we otherwise would have had, this is a partial victory for extremists and the enemies of democracy, and a loss for the planet as a whole. Putting further bounds on thought and exploration will also likely hurt creativity and our ability to find new solutions to big societal problems such as climate change, poverty, religious extremism and political conflict – the very problems that are used to justify such massive surveillance programs in the first place.

But isn’t GDPR fixing all this?

The GDPR is certainly a good thing for privacy, and it does apply to the big tech firms and the adtech industry, but it has not solved the problem, at least not yet. As documented in this post from Cybehave.no, privacy statements are still too long, too complex and too hidden for people to care. We all just click “OK” and remain subject to the same advertising-driven surveillance as before.

The other issue we have here is that the GDPR does not apply to national security related data collection. And for that sort of collection, the surveillance state is still growing with more advanced programs, more collection, and more sharing between intelligence partners. In 2018 we got the Australian addition with their rather unpleasant “Assistance and Access” act allowing for government-mandated backdoors in software, and now the US wants to backdoor encrypted communications (again).

Blocking the watchers

It is not very difficult to block the watchers, at least not from advertisers, criminals and non-targeted collection (if a government agency really wants to spy on you as an individual, they will probably succeed). Here’s a quick list of things you can do to feel slightly less watched online:

  • Use an ad-blocker to keep tracking cookies and beacons at bay. uBlock Origin is good.
  • Use a VPN service to keep your web traffic away from your ISP and telephone company. Make sure you look closely at the practices of your VPN supplier before choosing one.
  • Use end-to-end encrypted messaging for your communications instead of regular phone conversations and text messages. Signal is a good choice until the US actually does introduce backdoor laws (hopefully that doesn’t happen).
  • Use encrypted email, or encrypt the message you are sending. Protonmail is a Swiss webmail alternative with encryption built in when you send email to other Protonmail users. It also allows you to encrypt messages to other email services with a password.

If you follow these practices it will generally be very hard to snoop on you.

Vacation’s over. The internet is still a dumpster fire.

This has been the first week back at work after 3 weeks of vacation. Vacation was mostly spent playing with the kids, relaxing on the beach and building a garden fence. Then Monday morning came and reality came back, demanding a solid dose of coffee.

  • Wave of phishing attacks. One of those led to a lightweight investigation finding the phishing site set up for credential capture on a hacked WordPress site (as usual). This time the hacked site was a Malaysian site set up to sell testosterone and doping products… and digging around on that site, a colleague of mine found the hackers’ uploaded webshell. A gem with lots of hacking batteries included.
  • Next task: due diligence of a SaaS vendor, testing password reset. Found out they are using Base64 encoded userID’s as “random tokens” for password reset – meaning it is possible to reset the password for any user. The vendor has been notified (they are hopefully working on it).
  • Surfing Facebook, there’s an ad for a productivity tool. Curious as I am, I create an account, and by habit I try to set a very weak password (12345). The app accepts this. Logging in to a fancy app, I can then use forced browsing to look at the data of all the other users. No authorization checks. And btw, there is no way to change your password, or reset it if you forget it. This is a commercial product. Don’t forget to do some due diligence, people.

Phishing for credentials?

Phishing is a hacker’s workhorse, and for compromising an enterprise it is by far the most effective tool, especially if those firms are not using two-factor authentication. Phishing campaigns tend to come in bursts, and these need to be handled by the helpdesk or some other IT team. With all the spam filters in the world, and regular awareness training, you can reduce the number of compromised accounts, but some phishing emails will still get through, and someone will still click, every single time. This is why the right solution is not to think you can stop every malicious email or train every user to always be vigilant – the solution is primarily multifactor authentication. Sure, it is possible to bypass many forms of it, but that is far more difficult than just stealing a username and a password.

Another good idea is to use a password manager. It will not offer to fill in your password on a site whose domain does not match the one the password was saved for – a phishing page gets nothing autofilled.

To secure against phishing, don’t rely on awareness training and spam filters only. Turn on 2FA and use a password manager for all passwords. #infosec

You do have a single sign-on solution, right?

Password reset gone wrong

The password reset thing was interesting. First I registered an account on this app with a Mailinator email address and the password “passw0rd”. Promising… Then I tried the “I forgot” link on login to see if the password recovery flow was broken – and it really was, in a very obvious way. Password reset links are typically sent by email. Here’s how it should work:

You are sent a one-time link to recover your password. The link should contain an unguessable token and should be disabled once clicked. The link should also expire after a certain time, for example one hour.
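As a minimal sketch, issuing such a token in a Node backend could look like the snippet below. The storeResetToken persistence function and the URL are hypothetical placeholders:

```javascript
// Sketch only: issue a random, expiring, single-use password reset token.
const crypto = require('crypto');

async function createResetLink(userId) {
  const token = crypto.randomBytes(32).toString('hex'); // 256 bits of randomness
  const expiresAt = Date.now() + 60 * 60 * 1000;        // valid for one hour

  // Store a hash of the token so a database leak does not expose usable links,
  // and mark it unused so it can be invalidated after the first click.
  const tokenHash = crypto.createHash('sha256').update(token).digest('hex');
  await storeResetToken({ userId, tokenHash, expiresAt, used: false }); // hypothetical helper

  return `https://app.example.com/password/reset/${token}`; // made-up URL
}
```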

This one sent a link that did not expire, and that would work several times in a row. And the unguessable token? It looked something like this: “MTAxMjM0”. Hm… that’s too short to be a random sequence worth anything at all. Trying to identify whether this is a hash or something encoded, the first thing we try is to decode it from Base64 – and behold – we get a 6-digit number (101234 in this case, not the actual userID from this app). Creating a new account and doing the same reveals that we get the next number in sequence (like 101235). In other words, using a reset link of the type /password/iforgot/token/MTAxMjM0, we can simply Base64 encode a sequence of numbers and reset the passwords of every user.
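For illustration, reversing and extending this “token” takes two lines of Node:

```javascript
// The "unguessable" token is just a Base64-encoded counter.
console.log(Buffer.from('MTAxMjM0', 'base64').toString('utf8')); // "101234"

// ...and the next account's reset token is trivially forged:
console.log(Buffer.from(String(101234 + 1)).toString('base64')); // "MTAxMjM1"
```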

Was this a hobbyist app made by a hobbyist developer? No, it is an enterprise app used by big firms. Does it contain personal data? Oh, yes. They have been notified, and I’m waiting for feedback from them on how soon they will have deployed a fix.

Broken access control

The case with the non-random random reset token is an example of broken authentication. But before the week is over we also need an example of broken access control. Another web app, another dumpster fire. This was a post shared on social media that looked like an interesting product. I created an account. Password this time: 12345. It worked. Of course it did…

This time there is no password reset function to test, but I suspect if there had been one it wouldn’t have been better than the one just described above.

This app had a forced browsing vulnerability. It was a project tracking app. Logging in and creating a project, I got a URL of the following kind: /project/52/dashboard. I changed 52 to 25 – and found the project goals of somebody planning an event in Brazil. With budgets and all. The developer has been notified.
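The missing piece is an object-level authorization check on every request, not just a login check. A minimal sketch, assuming an Express route and a hypothetical getProject lookup with owner and member fields:

```javascript
// Sketch only: block forced browsing to /project/:projectId/dashboard.
async function requireProjectAccess(req, res, next) {
  const project = await getProject(req.params.projectId); // hypothetical data access
  if (!project) {
    return res.status(404).end();
  }
  // Knowing the URL is not enough: the logged-in user must own the project
  // or be listed as a member.
  const userId = req.user.id;
  const allowed = project.ownerId === userId || project.members.includes(userId);
  if (!allowed) {
    return res.status(403).end();
  }
  next();
}

// app.get('/project/:projectId/dashboard', requireProjectAccess, renderDashboard);
```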

Always check the security of the apps you would like to use. And always turn on maximum security on authentication (use a password manager, use 2FA everywhere). Don’t get pwnd. #infosec

Securing media stored in cloud storage buckets against unauthorised access

Insecure direct object reference (IDOR) is a common type of vulnerability online. Normally we think of this as a vulnerable parameter in a URL or a form that allows forced browsing, but file downloads can also be an issue here. For a general background on IDOR and how to secure against it, see this cheatsheet from OWASP.

Our case is a bit different. Consider storing files in a cloud storage bucket (Google Cloud Storage, Amazon S3, etc). This may be for a file sharing site for example, where users are allowed to upload documents that are then stored in a bucket. We only want the users with the right authorisation to have access to these files. What are our options?

  1. Use cloud identity management and bucket security rules to manage access. This may be impractical as we don’t necessarily want to give app users IAM users in the cloud environment, but where applicable it is a direct solution to our little security problem.
  2. Allow full access to the bucket from the app and manage user permissions in the app.
  3. Make the object public but use non-descriptive and random filenames so unauthorised users cannot easily guess the right path. Maintain the link to contextual data in the backend code to not expose it publicly.
  4. Same as 3 but with a signed URL – a temporary ‘secret’ URL where permissions can be controlled without creating specific IAM users (a short sketch of this follows the list).
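As a reference point, option 4 could look roughly like this with the Google Cloud Storage client library; the bucket name and expiry time are made up for illustration:

```javascript
// Sketch only: hand out a short-lived signed URL instead of a public object.
const { Storage } = require('@google-cloud/storage');
const storage = new Storage(); // uses the application's service account credentials

async function getTemporaryDownloadLink(objectName) {
  const [url] = await storage
    .bucket('example-private-bucket')
    .file(objectName)
    .getSignedUrl({
      version: 'v4',
      action: 'read',
      expires: Date.now() + 15 * 60 * 1000, // the link stops working after 15 minutes
    });
  return url; // give this to an authorised user only
}
```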

Google has made a list of best practices for cloud storage here. In our use case we want the shared object to have permanent permissions. Let us consider how to achieve acceptable security using option 2.

A simple architecture for sharing files securely

For this set-up there are a few things we need to take care of:

  1. For uploaded files, do not expose the actual bucket metadata or file names to the user in the frontend. Create a reference in the database that maps to the object name in the bucket.
  2. Manage access to objects through the database references, for example by adding a “shared with” key containing the user ID’s of all users who are to have read access to the object.
  3. Do not make the object publicly accessible. Instead, use a service account IAM user for the application and allow only the permissions you need. Download content to the app and relay it to the frontend using the mapping described above, so the actual object name is never exposed (a sketch of this follows the list).
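A minimal sketch of what this could look like with Express and the Google Cloud Storage client. The db.files collection, its fields and the bucket name are assumptions for illustration:

```javascript
// Sketch only: the app, not the user, talks to the bucket.
const { Storage } = require('@google-cloud/storage');
const storage = new Storage(); // service account with read access to the bucket

app.get('/files/:fileId', async (req, res) => {
  const fileRecord = await db.files.findById(req.params.fileId); // hypothetical mapping table
  if (!fileRecord) return res.status(404).end();

  // Access is decided by the database reference ("shared with"), never by the bucket.
  if (!fileRecord.sharedWith.includes(req.user.id)) {
    return res.status(403).end();
  }

  // Relay the object to the client without exposing bucket or object names.
  res.type(fileRecord.contentType);
  storage
    .bucket('example-private-bucket')
    .file(fileRecord.objectName)
    .createReadStream()
    .on('error', () => res.status(500).end())
    .pipe(res);
});
```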

What are the threat vectors to this method for securing shared files?

This is a relatively simple setup that avoids making a bucket, or objects in that bucket, publicly available. It is still possible to exploit to gain unauthorised access but this is no longer as easy as finding an unsecured bucket.

Identity spoofing: a hacker can take on the identity of a user of the application, and thus get access to the files this user has access to. To avoid this, follow good practices for authentication (strong passwords, two-factor authentication). Also make identity secrets on the client side hard to get at by securing the frontend against cross-site scripting (XSS), turning on security headers and setting cookie attributes (HttpOnly, Secure, SameSite) to avoid easy exposure.

Database server: A hacker may try to guess the database credentials directly, either using a connection string or through the management plane of a cloud provider. Make sure to use multiple layers of defence. If using a cloud accessible database, make sure the management plane is sufficiently secured. Use IP whitelisting or cloud security groups to limit access to the database, and use a strong authentication secret.

Bucket security: Hackers will look for publicly available buckets. Make sure the bucket is not publicly accessible from the internet. Limit access to the relevant cloud security group, or to whitelisted IP addresses if it is accessed from outside the cloud.

Monitoring: turn on monitoring of file access in the application, and consider also logging access on database and bucket level. Regularly review logs to look for unauthorised access or unusual behaviour.

CCSK Domain 5: Information governance

Information governance is the set of management practices we introduce to ensure that data and information comply with organizational policies, standards and strategy, including regulatory, contractual and business objectives.

There are several aspects of storing data in the cloud that have implications for information governance.

Public cloud deployments are multi-tenant. That means that there will be other organizations also storing their information in the same datacenter, on the same hardware. The security features for account separation will thus be an important part of achieving information compliance in most cases. 

As data is shared across cloud infrastructure, so is the responsibility for securing the data. To define a working governance structure it is important to define who the data owner and the data custodian are. The difference between the two is that the former actually owns the data (and is accountable for its governance), while the latter manages the data (and is responsible for ensuring compliance in practice).

When we host third-party data in the cloud, we are introducing a third party into the governance model: the cloud provider. Information governance now also depends on the provider’s management practices and the technologies the provider offers. This complicates the regulatory compliance considerations we need to make and should be taken into account when designing a project’s regulatory compliance matrix. First, legal requirements may change because the cloud stores data, or makes data available, in more geographical regions than would otherwise be the case. Compliance, regulations, and in particular privacy, should be carefully reviewed with regard to how governance is managed in the cloud for customer data. Further, one should ensure that customer requirements for deletion (destruction) of data can be satisfied given the technical offerings from the cloud provider.

Moving data to the cloud provides a welcome opportunity to review and perhaps redesign information architectures. In many organizations information architectures have evolved over a long time, perhaps with little planning, and may have resulted in a fractured model where it is hard to manage compliance. 

Cloud information governance domains

Cloud computing can affect multiple aspects of data governance. The following list covers the issues the CSA has described as affected by cloud computing:

Information classification. Often tied to storage and handling requirements, which may include limitations on access and location. Storing information in an S3 bucket will require a different method for access control than using a file share on the local network.

Information management practices. How data is managed based on classification. This should include different cloud deployment models (or SPI tiers: SaaS, PaaS, IaaS). You need to decide what can be allowed where in the cloud, with which products and services and with which security requirements. 

Location and jurisdiction policies. You need to comply with regulations and contractual obligations with respect to data storage, data access. Make sure you understand how data is processed and stored, and the contractual instruments in place to manage regulatory compliance. One primary example here is personal data under the GDPR, and how data processing agreements with cross-border transfer clauses can be used to manage foreign jurisdictions. 

Authorizations. Cloud computing does not typically require many changes to authorizations, but the data security lifecycle will most likely be impacted. The way authorization controls are implemented may also change (e.g. using the cloud vendor’s IAM practices for account-level authorization).

Ownership. The organization owns its data and this is not changed by moving to the cloud. One should review the terms and conditions of cloud providers carefully here, in particular for SaaS products (especially those targeting the consumer market).

Custodianship. The cloud provider may fully or partially become the custodian, depending on the deployment model. Encrypted data stored in a cloud bucket is still under custody of the cloud provider. 

Privacy. Privacy needs to be handled in accordance with relevant regulations, and the necessary contractual instruments such as data processing agreements must be put in place. 

Contractual controls. Contractual controls when moving data and workloads to the cloud will be different from the controls you employ in an on-premise infrastructure. There will often be limited room for negotiating contract clauses in public cloud environments.

Security controls. Security controls are different in cloud environments than in on-premise environments. Main concepts are security groups and access control lists.

Data Security Lifecycle

A data security lifecycle is typically different from an information lifecycle. The data security lifecycle has six phases:

  • Create: generation of new digital content, or modification of existing content
  • Store: committing digital data to storage, typically happens in direct sequence with creation. 
  • Use: data is viewed, processed or otherwise used in some activity that does not include modification. 
  • Share: Information is made accessible to others, such as between users, to customers, and to partners or other stakeholders. 
  • Archive: data leaves active use and enters long-term storage. This type of storage will typically have much longer retrieval times than data in active storage. 
  • Destroy: data is permanently destroyed by physical or digital means (e.g. cryptoshredding).

The data security lifecycle is a description of phases the data passes through, without regard for location or how it is accessed. The data typically goes through “mini lifecycles” in different environments as part of these phases. Understanding the physical and logical locations of data is an important part of regulatory compliance. 

In addition to where data lives and how it is transferred, it is important to keep control of entitlements; who accesses the data, and how can they access it (device, channels)? Both devices and channels may have different security properties that may need to be taken into account in a data governance plan. 

Functions, actors and controls

The next step in assessing the data security lifecycle is to review what functions can be performed with the data, by a given actor (personal or system account) and a particular location. 

There are three primary functions: 

  • Read the data: including creating, copying, transferring.
  • Process: perform transactions or changes to the data, use it for further processing and decision making, etc. 
  • Store: hold the data (database, filestore, blob store, etc)

The different functions are applicable to different degrees in different phases. 

An actor (a person or a system/process – not a device) can perform a function in a location. A control restricts the possible actions to allowed actions. The key question is: 

What function can which actor perform in which location on a given data object?

An example of data modeling connecting actions to data security lifecycle stages.
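One small, made-up way to express the same model in code – which actor may perform which function, in which location, for which lifecycle phases – could look like this (all names and values are illustrative):

```javascript
// Illustrative entitlement table for the question above (made-up values).
const entitlements = [
  { actor: 'billing-service', function: 'process', location: 'eu-cloud', phases: ['use'] },
  { actor: 'analyst-group',   function: 'read',    location: 'on-prem',  phases: ['use', 'share'] },
  { actor: 'backup-job',      function: 'store',   location: 'eu-cloud', phases: ['store', 'archive'] },
];

// A control is then a check against this table before the action is allowed.
function isAllowed(actor, func, location, phase) {
  return entitlements.some(
    (e) => e.actor === actor && e.function === func &&
           e.location === location && e.phases.includes(phase)
  );
}

console.log(isAllowed('analyst-group', 'read', 'on-prem', 'use'));  // true
console.log(isAllowed('analyst-group', 'read', 'eu-cloud', 'use')); // false
```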

CSA Recommendations

The CSA has created a list of recommendations for information governance in the cloud: 

  • Determine your governance requirements before planning a transition to cloud
  • Ensure information governance policies and practices extend to the cloud. This is done with both contractual and security controls. 
  • When needed, use the data security lifecycle to model data handling and controls. 
  • Do not lift and shift existing information architectures to the cloud. First, review and redesign the information architecture to support the current governance needs, and take anticipated future requirements into account. 

CCSK Domain 4 – Compliance and Audit Management

This section on the CCSK domains covers compliance management and audits. It goes through, in some detail, the aspects one should think about for a compliance program when running services in the cloud. The key issues to pay attention to are:

  • Regulatory implications when selecting a cloud supplier with respect to cross-border legal issues
  • Assignment of compliance responsibilities
  • Provider capabilities for demonstrating compliance

Pay special attention to: 

  • The role of provider audits and how they affect customer audit scope
  • Understand what services are within which compliance scope with the cloud provider. This can be challenging, especially with the pace of innovation. As an example, AWS is adding several new features every day. 

Compliance 

The key change to compliance when moving from an on-prem environment to the cloud is the introduction of a shared responsibility model. Cloud consumers must typically rely more on third-party audit reports to understand compliance arrangements and gaps than they would in a traditional IT governance case.

Many cloud providers certify for a variety of standards and compliance frameworks to satisfy customer demand in various industries. Typical audit reports that may be available include: 

  • PCI DSS
  • SOC1, SOC2
  • HIPAA
  • CSA CCM
  • GDPR
  • ISO 27001

Provider audits need to be understood within their limitations: 

  • They certify that the provider is compliant, not any service running on infrastructure provided by that provider. 
  • The provider’s infrastructure and operations are then outside of the customer’s audit scope, and the customer has to rely on pass-through audits instead. 

To prove compliance for a service built on cloud infrastructure, it is necessary that the internal parts of the application/service comply with the relevant regulations, and that no non-compliant cloud services or components are used. This means paying attention to audit scopes is important when designing cloud architectures.

There are also issues related to jurisdictions involved. A cloud service typically will let you store and process data across a global infrastructure. Where you are allowed to do this depends on the compliance framework, and you as cloud consumer have to make the right choices in the management plane. 

Audit Management

The scope of audits and audit management for information security is related to the fulfillment of defined information security practices. The goal is to evaluate the effectiveness of security management and controls. This extends to cloud environments. 

Attestations are legal statements from a third party, which can be used as a statement of audit findings. This is a key tool when working with cloud providers. 

Changes to audit management in cloud environments

On-premise audits on multi-tenant environments are seen as a security risk and typically not permitted. Instead consumers will have to rely on attestations and pass-through audits. 

Cloud providers should assist consumers in achieving their compliance goals. Because of this they should publish certifications and attestations to consumers for use in audit management. Providers should also be clear about the scope of the various audit reports and attestations they can share. 

Some types of customer technical assessments, such as vulnerability scans, can be limited in contracts and require up-front approval. This is a change to audit management compared to on-prem infrastructures, although most major cloud providers now allow certain penetration testing activities without prior approval. As an example, AWS has published a vulnerability and penetration testing policy for customers here: https://aws.amazon.com/security/penetration-testing/

In addition to audit reports, artifacts such as logs and documentation are needed for compliance proof. The consumer will in most cases need to set up the right logging detail herself in order to collect the right kind of evidence. This typically includes audit logs, activity reporting, system configuration details and change management details. 

CSA Recommendations for compliance and audit management in the cloud

  1. Compliance, audit and assurance should be continuous. They should not be seen as point-in-time activities but should show that compliance is maintained over time. 
  2. Cloud providers should communicate audit results, certifications and attestations including details on scope, features covered in various locations and jurisdictions, give guidance to customers for how to build compliant services in their cloud, and be clear about specific customer responsibilities. 
  3. Cloud customers should work to understand their own compliance requirements before making choices about cloud providers, services and architectures. They should also make sure to understand the scope of the compliance proof from the cloud vendor, and what artifacts can be produced to support the management of compliance in the cloud. The consumer should also keep a register of cloud providers and services used; the CSA recommends using the Cloud Controls Matrix (CCM) to support this activity.