Most of us would like to keep our conversations with other people private, even when we are not discussing anything secret. That the person behind you on the bus can hear you discussing last night’s football game with a friend is perhaps not something that would make you feel uneasy, but what if employees, or outsourced consultants, from a big tech firm are listening in? Or government agencies are recording your conversations and using data mining techniques to flag them for analyst review if you mention something that triggers a red flag? That would certainly be unpleasant to most of us. The problem is, this is no longer science fiction.
You are being watched.
Tech firms listening in
Tech firms are using machine learning to create good consumer products – like voice messaging that allows direct translation, or digital assistants that need to understand what you are asking of them. The problem is that such technologies cannot learn entirely by themselves, so your conversations are being recorded. And listened to.
All of these systems have humans listening in on recordings in order to improve speech recognition, which is hard for machines. They need some help. The problem is that users have not generally been aware that their conversations or bedroom activities may be listened in on by contractors in some undisclosed location. It certainly doesn’t feel great.
That is probably not a big security problem for most people: it is unlikely that anyone can specifically target you as a person and listen in on everything you do. Technically, however, it could be possible. What if adversaries could bribe their way into the devices of decision makers? We already know that tech workers, especially contractors and those at the lower end of the pay scale, can be talked into taking a bribe (an AT&T employee installing malware on company servers to allow unauthorized unlocking of phones (wired.com), Amazon investigating data leaks tied to bribe payments). If employees can be bribed to game phone-locking systems, they can probably be manipulated into subverting the machine learning QA systems too. Because of this, if you are a target of high-resource adversaries, you should probably be skeptical about digital assistants and what you talk about around them.
The rest of the world is likely not far behind, and governments are using laws to make collection lawful. The stated intent is the protection of democracy, freedom of speech, and the evergreen “stopping terrorists”. The only problem is that mass surveillance seems to be relatively inefficient at stopping terrorist attacks, and it has been found to have a chilling effect on freedom of speech and participation in democracy; it even stops people from seeking information online because they feel somebody is watching them. Jonathan Shaw wrote an interesting piece on this in Harvard Magazine in 2017: https://harvardmagazine.com/2017/01/the-watchers.
If an unspoken fear of being watched is stopping us from thinking the thoughts we otherwise would have had, this is a partial victory for extremists, for the enemies of democracy and for the planet as a whole. Putting further bounds on thoughts and exploration will also likely have a negative effect on creativity and our ability to find new solutions to big societal problems such as climate change, poverty and even religious extremism and political conflicts, the latter being the reason why we seem to accept such massive surveillance programs in the first place.
But isn’t GDPR fixing all this?
The GDPR is certainly a good thing for privacy, and it does apply to the big tech firms and the adtech industry, but it really hasn’t solved the problem, at least not yet. As documented in this post from Cybehave.no, privacy statements are still too long, too complex, and too hidden for people to care. We all just click “OK” and remain subject to the same advertising-driven surveillance as before.
The other issue is that the GDPR does not apply to data collection for national security purposes. For that sort of collection, the surveillance state is still growing, with more advanced programs, more collection, and more sharing between intelligence partners. In 2018 we got the Australian addition with their rather unpleasant “Assistance and Access” Act, which allows for government-mandated backdoors in software, and now the US wants to backdoor encrypted communications (again).
Blocking the watchers
It is not very difficult to block the watchers, at least not from advertisers, criminals and non-targeted collection (if a government agency really wants to spy on you as an individual, they will probably succeed). Here’s a quick list of things you can do to feel slightly less watched online:
Use an ad-blocker to keep tracking cookies and beacons at bay. uBlock Origin is good.
Use a VPN service to keep your web traffic away from your ISP and telephone company. Make sure you look closely at the practices of a VPN supplier before choosing one.
Use end-to-end encrypted messaging for your communications instead of regular phone calls and text messages. Signal is a good choice, at least unless the US actually does introduce backdoor laws (hopefully that doesn’t happen).
Use encrypted email, or encrypt the message you are sending. Protonmail is a Swiss webmail alternative with encryption built in when you send email to other Protonmail users. It also allows you to encrypt messages to other email services with a password.
If you follow these practices it will generally be very hard to snoop on you.
Are you planning to offer a SaaS product, perhaps combined with a mobile app or two? Many companies operating in this space will outsource development, often because they don’t have the right in-house capacity or competence. In many cases the outsourcing adventure ends in tears. Let’s first look at some common pitfalls before diving into what you can do to steer the outsourced flagship clear of the roughest seas.
Common outsourcing pitfalls
I’ve written about project follow-up before, and whether you are building an oil rig or getting someone to write an app for you, the typical “outsourcing pitfalls” remain the same:
Weak follow-up
Lack of documentation requirements
Testing is informal
No competence to ask the right questions
No planning of the operations phase
Lack of privacy in design
Weak follow-up: without regular follow-up, the service provider’s sense of commitment can get lost. It also increases the chances of misunderstandings by several orders of magnitude. If I write a specification for a product, that specification may be wonderfully clear to me yet be interpreted differently by the service provider. With little communication along the way, there is a good chance the deliverable will not be as expected – even if the supplier claims all requirements have been met.
Another cost of not having a close follow-up process is lost opportunities in the form of improvements or additional features that could be super-useful. If the developer gets a brilliant idea but has no one to approve it, it may never even be presented to you as the project owner. So focus on follow-up – otherwise you are not getting the full return on your outsourcing investment.
Lack of documentation requirements: Many outsourcing projects follow a common pattern: the project owner writes a specification and gets a product made and delivered. The outsourcing supplier is then often out of the picture: work done and paid for – you now own the product. The plan is perhaps to maintain the code yourself, or to hire an IT team with your own developers to do that. But… there is no documentation! How was the architecture set up, and why? What do the different functions do? How does it all work? Getting to grips with all of that without proper documentation is hard. Really hard. Hence, putting documentation requirements into your contracts and specifications is a good investment that helps you avoid future misunderstandings and a lot of time wasted trying to figure out how everything works.
Informal or no testing: No testing plan? No factory acceptance test (FAT)? No testing documentation? Then how do you determine if the product meets its quality goals – in terms of performance, security, user experience? The supplier may have fulfilled all requirements – because testing was basically left up to them, and they chose a very informal approach that only focuses on functional testing, not performance, security, user experience or even accessibility. It is a good idea to include testing as part of the contract and requirements. It does not need to be prescriptive – the requirement may be for the supplier to develop a test plan for approval, and with a rationale for the chosen testing strategy. This is perhaps the best way forward for many buyers.
No competence to ask the right questions: One reason the points mentioned so far get overlooked may be that the buying organization does not have the in-house competence to ask the right questions. The right medicine for this is probably not to send your startup’s CEO to a “coding bootcamp”, or for a company primarily focused on operations to hire an in-house development team – but leaving the supplier with all the know-how puts you in a very vulnerable position, almost irrespective of the legal protections in your contract. It is often money well spent to hire a consultant to help follow up the process – ideally from the start, so you avoid both specification and contract pitfalls, and the most common plague of outsourcing projects: weak follow-up.
No planning of operations: If you are paying someone to create a SaaS product for you – have you thought about how to put this product into operation? Important things are often left out of the discussion with the outsourcing provider – even when their decisions have a very big impact on your future operations. Have you included the following aspects in your discussions with the dev teams:
Application logs: what should be logged, in what format, and where?
How will you deploy the applications? How will you manage redundancy and content delivery?
Security in operations: how will you update the apps when security demands it, for example when security holes become known in the dependencies/libraries used? Do you even know what the dependencies are?
Support: how should your applications be supported? Who picks up the phone or answers that chat message? What information will be available from the app itself for the helpdesk worker to assist the customer?
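To make the first bullet on application logs concrete, here is a minimal sketch (in Python, purely illustrative – your stack and logger names will differ) of the kind of decision it asks for: structured, machine-readable log lines with an explicit destination. The logger name `saas.app` and the field names are my own assumptions, not from any particular product:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, easy to ship and search."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),      # when it happened
            "level": record.levelname,          # severity
            "logger": record.name,              # which component
            "msg": record.getMessage(),         # what happened
        })

# "Where": stdout here; in production this would be a file or a log shipper.
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("saas.app")  # hypothetical application logger
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("user login succeeded")
```

Agreeing with the supplier on something this simple up front – which events, which fields, which destination – is much cheaper than retrofitting it after takeover.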
Lack of privacy in design: The GDPR requires privacy to be built in. This means following principles such as data minimization, using pseudonymization or anonymization where required or sensible, and having means to detect data breaches that may threaten the confidentiality and integrity (and in some cases availability) of personal information. Very often in outsourcing projects, this does not happen. Including privacy in the requirements and follow-up discussions is thus not only a good idea but essential to get privacy by design and by default in place. This also points back to the competence bit – perhaps you need to strengthen not only your tech know-how during project follow-up but also your privacy and legal management?
A simple framework for successful follow-up of outsourcing projects
The good news is that it is easy to give your outsourcing project much better chances of success. And it is all really down to common sense.
First, during preparation you will write a description of the product and the desired outcomes of the outsourcing project. You have a lot to gain from putting in more than the purely functional requirements – think about documentation, security, testing and operations. Include them in your requirements list.
Then, think about the risk in this specification. What can go wrong? Cause delays? Malfunction? Be misunderstood? Review your specification with the risk hat on – and bring in the right competence to help you make that process worthwhile. Find the weaknesses, and then improve.
Decide how you want to follow up with the vendor. Do you want to opt for e-mailed status reports once per week? The number of times that has worked for project follow-up is zero. Make sure you talk regularly. The more often you interact with the supplier, the better the effect on quality, loyalty, and priorities. Stay on top of your supplier’s priority list – otherwise your product will not be the thing they are thinking about when they come to the office in the morning. Things you can do to get better project follow-up:
Regular meetings – in person if you are in the same location, but video also works well.
Use a chat tool such as Slack, Microsoft Teams or similar for daily discussions. Keep it informal. Be approachable. That makes everything much better.
Always focus on being helpful. Avoid getting into power struggles, or a very top-down approach. It kills motivation, and makes people avoid telling you about their best ideas. You want those ideas.
Competence. That is the hardest piece of the puzzle. Take a hard look at your own competence, and the competence you have available, before deciding you are good to go. This determines whether you should get a consultant or hire someone to help follow up the outsourcing project. For outsourced development work, rate your organization’s competence in the following areas:
Security: do you know enough to understand what cyber threats you need to worry about during dev, and during ops? Can you ask the right questions to make sure your dev team follows good practice and makes the attack surface as small as it should be?
Code development: do you understand development, both on the organizational and code level? Can you ask the right questions to make sure good practice is followed, risks are flagged and priorities are set right?
Operations: do you have the skills to follow up on deployment, preparations for production logging, availability planning, etc.?
User experience: do you have the right people to verify designs and user experiences with respect to usability, accessibility?
Privacy: do you understand how to ensure privacy laws are followed, and that the implementation of data protection measures will be seen as acceptable by both data protection authorities and the users?
For areas where you are weak, consider getting a consultant to help. Often you can find a generalist who can help in more than one area, but it may be hard to cover them all. It is also OK to have some weaknesses in the organization, but you are much better off being aware of them than running blind in those areas. The majority of the follow-up would require competence in project management and code development (including basic security), so that needs to be your top priority to cover well.
Now we are going to assume you are well-prepared – having put down good requirements, planned on a follow-up structure and that you more or less have covered the relevant competence areas. Here are some hints for putting things into practice:
Regular follow-up: hold formal follow-up meetings even if you communicate regularly on chat or similar tools. Take minutes of the meetings and share them with everyone. Make sure you write the minutes yourself – don’t empower the supplier to determine priorities; that is your job. Every meeting should be called with an agenda so people can come well prepared. Topics that should be covered in these meetings:
Progress: how does it look with respect to schedule, cost and quality?
Ideas and suggestions: useful suggestions, good ideas? If someone has a great idea, write down the concept and follow-up in a separate meeting.
Problems: any big issues found? Things done to fix problems?
Risks: any foreseeable issues? Delays? Security? Problems? Organizational issues?
Project risk assessment: keep a risk register. Update it after follow-up meetings. If any big things pop up, make plans for correcting them, and ask the supplier to help plan mitigations. This really helps!
Knowledge build-up: you are going to take over an application. There is a lot to be learned from the dev process, and this know-how often vanishes with project delivery. Make sure to write down this knowledge, especially from problems that have been solved. A wiki, blog, and similar formats can work well for this, just make sure it is searchable.
Auditing is important too. It builds quality. I’ve written about good auditing practices before, in the context of safety, but the same points are valid for general projects too: Why functional safety audits are useful.
Make sure to have a factory acceptance test. Make a test plan. The plan should include everything you need to see before you are happy to take the product over:
Functions working as they should
Performance: is it fast enough?
Security: demonstrate that included security functions are working
Usability and accessibility: good standards followed? Design principles adhered to?
Initial support: the initial phase is when you will discover the most problems – or rather, your users will discover them. Having a plan for support from the beginning is therefore essential. Someone needs to pick up the phone or answer that chat message – and when they can’t, there must be somewhere to escalate to, preferably a developer who can check if there is something wrong with the code or the set-up. This is why you should probably pay the outsourcing supplier to provide support in the initial weeks or months before you have everything in place in-house; they know the product best after making it for you.
Knowledge transfer: the developers know the most about your application. Make sure they help you understand how everything works. During the take-over phase, ask all the questions you have, have them demo how things are done, and take advantage of any support contracts to extend your knowledge base.
This is not a guarantee of success – but your odds will be much better if you plan and execute follow-up well. This is one approach that works in practice – for all sorts of buyer-supplier relationships. The context here was software, but you can apply the same thinking to ships, board games or architectural drawings for that matter. Good luck with your outsourcing project!
Comments? They are very welcome, or hit me up on Twitter @sjefersuper!
tl;dr: SaaS apps often have poor security. Before deciding to use one, do a quick security review. Read privacy statements, ask for security docs, and test authentication practices, crypto and console.log information leaks before deciding whether to trust the app. This post gives you a handy checklist to breeze through your SaaS pre-trial security review.
Over the last year I’ve been involved in buying SaaS access to lots of services in a corporate setting. My stake in the evaluation has been security. Of course, we can’t run active attacks to evaluate software we are considering buying, but we can read privacy statements, look for strange things in the HTML, and test the login process if a free or trial account is available (most services offer one). Here are the challenges I’ve generally found:
Big name players know what they are doing, but they have so many options for setup that you need to be careful what you select, especially if you have specific compliance needs.
Smaller firms (including startups) don’t offer a lot of documentation, and quite often when talking to them they don’t really have much in place in terms of security management (they may still make great software; it is just harder to trust it in an enterprise environment).
Authentication blunders are very common. From HR support systems to payroll to IT management tools, we’ve found a lot of bad practices:
Ineffective password policies (5 digits and digits only, anyone?)
A theoretical password policy in place, but validation only performed client-side (meaning it is easy to trick the server into setting a very weak password, or even no password in some cases, should you so desire)
Lack of cookie security for session cookies (missing HTTPOnly and Secure flags, allowing for cookie theft by XSS attacks or man-in-the-middle attacks)
Poor password reset processes. In one case I found the supposedly random string used for a password reset link to be a Base64 encoding of my username… how hard is that to abuse?
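Two of the blunders above are easy to screen for mechanically. As a sketch (Python; the function names are my own, not from any particular tool), one helper flags reset tokens that are just a Base64 encoding of the username, and another reports which security flags a session cookie lacks:

```python
import base64

def token_is_encoded_username(token: str, username: str) -> bool:
    """Flag password-reset tokens that are merely Base64(username)."""
    try:
        decoded = base64.b64decode(token, validate=True).decode("utf-8")
    except Exception:
        return False  # not valid Base64, or not text: at least not this blunder
    return decoded.strip().lower() == username.strip().lower()

def missing_cookie_flags(set_cookie_header: str) -> list[str]:
    """Return which of the Secure/HttpOnly attributes a Set-Cookie header lacks."""
    attrs = {part.strip().lower() for part in set_cookie_header.split(";")}
    return [flag for flag in ("secure", "httponly") if flag not in attrs]
```

For example, `missing_cookie_flags("session=abc; Path=/")` reports both flags missing, while a header ending in `; Secure; HttpOnly` passes clean.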
When you ask about development practices and security testing, many firms lack such processes but try to explain that they implicitly have them because their developers are so smart (even the firms with the authentication blunders mentioned above).
Obviously, at some point, security will be so bad that you have to say “No, we cannot buy this shiny thing, because it is rotten on the inside”. This is not a very popular decision in some cases, because the person requesting the service probably has some reason to request it.
In order to evaluate SaaS security without spending too much time on it I’ve come up with a process I find works pretty well. So, here’s a simple way to sort the terrible from the average!
Read the privacy statement and the terms and conditions. Not the whole thing, that is what lawyers are for (if you have some), but scan it and look for anything security related. Usually they will try to explain how they protect your personal data. If it is a clear and understandable explanation, they usually know what they are doing. If not, they usually don’t.
Look for security or compliance documentation, or request it from their sales/support team. Specific questions to consider are:
Do they offer encryption at rest (if this is reasonable to expect)?
Do they explain how they store passwords?
Do they use trustworthy data centers, and have some form of compliance proof for good practice for their data centers? SOC-2 reports, ISO certificates, etc?
Do they guarantee limited access for insiders?
Do they explain what cryptographic controls they are using for TLS, signatures and at-rest encryption – how they perform key management and exchange, what ciphers they allow, and the key strengths they require?
Do they say anything about vulnerability management and security testing?
Do they say anything about incident handling?
Perform some super-lightweight testing. Don’t break the law but do your own due diligence on the app:
Create a free account if possible, and test if the authentication practices seem sound:
Test the “I forgot my password” functionality to see if the password reset link has an expiry time, and if it is an unguessable link
Try changing your password to something really bad, like “password” or “dog”. If you are stopped by client-side validation, try replaying the POST request (this can be done directly from the dev tools in Firefox; no hacker tools necessary)
Try logging in many times with the wrong password to see what happens (warning, lockout,… or most likely, nothing)
Check their website certificate practices by running sslscan, or by using the online version at ssllabs.com. If they support really weak ciphers, it is an indicator that they are not on the ball. If they support SSL (pre-TLS 1.0), it is probably a completely outdated service.
Check for client-side libraries with known vulnerabilities. If they are using ancient versions of jQuery and other client side libraries, they are likely at risk of being hacked. This goes for WordPress sites as well, including their plugins.
Check for information leaks by opening the browser console (Ctrl + Shift + I or F12 in most browsers). If the console logs a lot of internal information coming from the backend, they probably don’t have the best security practices.
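If you want to turn the sslscan step into something repeatable, a small helper along these lines can convert a list of offered protocols and ciphers into red flags. This is a sketch: the protocol names follow sslscan's usual output conventions, and the weak-cipher list is illustrative, not exhaustive:

```python
# Protocols that should not be enabled on a modern service.
OUTDATED_PROTOCOLS = {"SSLv2", "SSLv3", "TLSv1.0", "TLSv1.1"}
# Substrings that typically indicate a weak or broken cipher suite.
WEAK_CIPHER_HINTS = ("RC4", "DES", "NULL", "EXPORT", "MD5")

def tls_red_flags(protocols, ciphers):
    """Return human-readable red flags for a scanned endpoint."""
    flags = [f"outdated protocol enabled: {p}"
             for p in protocols if p in OUTDATED_PROTOCOLS]
    flags += [f"weak cipher enabled: {c}"
              for c in ciphers
              if any(hint in c.upper() for hint in WEAK_CIPHER_HINTS)]
    return flags
```

A service offering only TLSv1.2+ with modern AEAD suites comes back clean; one still offering SSLv3 or RC4 gets flagged immediately, which matches the “completely outdated service” verdict above.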
So, should you dump a service if any of these tests “fail”? Probably not. Most sites have some weak points, and sometimes that is a tradeoff for some reason that may be perfectly legitimate. But if there are a lot of these “bad signs” in the air and you would be running some critical process or storing your company secrets in the app, you will be better off finding another service to suit your needs – or at least you will be able to sleep better at night.
This has really been the year of marketing and doomsday predictions for companies that need to follow the new European privacy regulations. Everyone from lawyers to consultants and numerous experts of every breed is talking about what a big problem it will be for companies to follow the new regulations, and the only remedy would be to buy their services. Yet, they give very few details on what those services actually are.
Let’s start with the new regulation; not the “what to do” but the “why it is coming”. The privacy conscious search engine duckduckgo.com gives the following definition:
Privacy: The state of being free from unsanctioned intrusion: a person’s right to privacy.
Who hasn’t felt his or her privacy intruded upon by marketing networks and information brokers online? The state of modern business is the practice of tracking people, analyzing their behavior and seeking to influence their decisions, whether it is how they vote in an election or what purchases they make. In the vast majority of cases this surveillance-based mind control occurs without conscious consent from the people being tracked. It seems clear that there is a need to protect the privacy and freedoms of individuals in the age of tech-based surveillance.
This is what the new regulation is about – the purpose is not to make life hard for businesses but to make life safe for individuals, allowing them to make conscious decisions rather than being brainwashed by curated messages in every digital channel.
Still, for businesses, this means that things must change. There may be a need for consultants but there is also a need for practical tools. The way to dealing with the GDPR that all businesses should follow is:
Understand why this is necessary
Learn to see things from the end-user or customer perspective
Learn the key principles of good privacy management
Create an overview of what data is being gathered and why
With these 4 steps in place, it is much easier to sort the good advice from the bad, and the useful from the wasteful.
Most businesses are used to thinking about risk in terms of business impact. What would the consequence to our business be, and how likely is it to happen? That will still be important after May 2018, but it is not the perspective the GDPR takes. If we are going to make decisions about data protection for the sake of protecting privacy, we need to think about the risk that the data collection and processing exposes the data subjects to (“data subject” is GDPR-speak for a person you store data about).
What consequences could the data subjects see from the processing itself? Are you using it for profiling or some sort of automated decision making? This is the usual “privacy concern” – and rightly so. Ever felt it is creepy how marketers track your digital movements?
Another perspective we need to take is: what can the consequences of data abuse be for individuals? This can be a data breach like the Equifax and Uber stories we’ve heard a lot about this fall, or it can be something else: an insider abusing your data, a hacker changing the data so that automated decisions don’t go your way, or the data becoming unavailable and thereby stopping you from using a service or getting access to something. The personal consequences can be financial ruin, a damaged reputation, family troubles or perhaps even divorce.
A key principle in the GDPR is data minimization: you shall only process and store data where you have a good reason and a legal basis for doing so. Practicing data minimization means less data that can be abused, and thereby a lower threat to the freedoms and rights of the persons you process data about. This is perhaps the most important principle of good privacy practice: avoid processing or storing data that can be linked to individuals as far as you can (while still getting things done).
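In practice, pseudonymization – one of the techniques the GDPR encourages alongside minimization – can be as simple as replacing direct identifiers with a keyed hash before data leaves the system of record. A minimal sketch (Python; the key name and its handling are illustrative assumptions, not a complete scheme):

```python
import hashlib
import hmac

# The key must be stored separately from the pseudonymized data set;
# whoever holds it can re-link records, so treat it as a secret.
PSEUDONYM_KEY = b"replace-with-a-secret-key"  # placeholder for illustration

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (email, national ID, ...) with a stable pseudonym."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

The same input always maps to the same pseudonym, so records can still be joined for analysis, while the raw identifier never appears in the analytics data set. Note that this is pseudonymization, not anonymization: with the key, records can be re-linked.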
Surprisingly, many companies have no clue what personal data they are storing and processing, who they share it with, and why they do it. The starting point for achieving good privacy practice, good karma – and even GDPR compliance – is knowing what you have and why.
From scare-speak to tools and practice
We’ve had a year of data breaches, and we’ve had a year of GDPR themed conference talks, often with a focus on potential bankruptcy-inducing fines, cost of organizational changes and legal burdens. Now is the time for a more practical discussion; getting from theory to practice. In doing this we should all remember:
There are no GDPR experts: the regulation has not yet come into force, and both regulatory oversight and practical implementation are still in their infancy
The regulation is risk-based: this means the data controllers and processors must take ownership of the risk and governance processes.
Documenting compliance should be a natural part of performing the thinking and practical work required to safeguard the privacy of customers, users, employees, visitors and whatever category of people you process data related to.
We need practical guidance documents, and we need tools that make it easier to follow good practice, and keep compliance documentation alive. That’s what we should be discussing today – not fines and stories about monsters eating non-compliant companies.
If you are like most people, you don’t read privacy statements. They are boring, often generic, and seem to be created to protect businesses from lawsuits rather than to inform customers about how they protect their privacy. Still, when you know what to look for to make up your mind about “is it OK to use this product”, such statements are helpful.
Even so, there is much to be learned from looking at a privacy statement. If you are like most people, you are not afraid of sharing things on the internet, but you still don’t want the platforms you use to abuse the information you share. In addition, you would like to know what you are sharing. It is obvious that you are sharing a photo when you include it in a Facebook status update – but is it obvious that you are sharing your phone number and location when you are using a browser add-on? The former we are generally OK with (it is our decision to share), the latter not so much – we are typically tricked into sharing such information without even knowing that we do it.
So-called anonymous information: approximate geo-location, hardware specs, browser type and version, date of software installation (their add-on I presume), the date you last used their services, operating system type and version, OS language, registry entries (really??), URL requests, and time stamps.
Personal information: IP address, name, email, screen names, payment info, and other information we may ask for. In addition you can sign up with your Facebook profile, from which they will collect usernames, email, profile picture, birthday, gender, preferences. When anonymous information is linked to personal information it is treated as personal information. (OK….?)
Other information: information that is publicly available as a result of using the service (their so-called VPN network) may be accessed by other users as a cache on your device. This is basically your browser history.
125 million users accept that their personal data is being harvested, analysed and shared at will by a company that provides “VPN” with no encryption and that accepts the use of “password” as password when signing up for their service.
So, here’s the take-away – when reading a privacy statement, look for:
What they collect
How they collect it
What they are using the information for
With whom they share the information
How they secure the information
Think about what this means for the things that are important to your privacy. Do you accept that they do the stuff they do?
What is the worst-case use of that information if the service provider is hacked? Identity theft? Incriminating cases for blackmail? Political profiling? Credibility building for phishing or other scams? The more information they gather, the worse the potential impact.
Finally, never trust someone claiming to sell a security product who obviously does not follow good security practice themselves. No SSL? Accepting weak passwords? Take your business elsewhere – it is not worth the risk.
One of the biggest challenges of our time is climate change. The world is struggling to bring our ongoing path toward environmental destruction under control. Today is Earth Day. For most people this day is about avoiding meat, taking public transport, using reusable shopping bags, drinking wine instead of beer, and turning lights off – but nerds can do more than that. One of our biggest challenges is to reduce the greenhouse gas emissions from transport.
Information technology has a gigantic role to play in the solution to that problem:
Self-driving cars, buses and metros make public transport cheaper. But can they be hacked? Of course they can.
Smart assistants that use AI to help plan your day and your travel, and to optimize your choices with regard to environmental footprint, can do a lot. But can they be hacked, thereby destroying all hope of privacy protection? Sure they can.
Telework can reduce the need to travel to work, and the need for business travel to talk to people in other locations. This brings a whole swath of issues: privacy and reliability. If people don’t trust the solutions for communication and system access, or if those solutions don’t work reliably, people will keep boarding planes to meet clients and driving cars to go to the office.
Cloud services are nice. They make working together over distances a lot easier. Cloud services require data centers. If the reliability of a data center is not quite up to expectations, the standard solution is to replicate everything in another data center – or for the customer to replicate everything in his or her own data center, or to mirror it to another cloud provider. Such duplication may not be seen as necessary if the reliability of the primary provider is very good – particularly its ability to deal with DDoS attacks. Building reliable data centers is therefore part of the climate solution – in addition to powering data centers with green energy and efficient cooling systems.
OK, so DDoS is a climate problem? Yes, it is. And what do cybercriminals need to perform large-scale DDoS attacks? They need botnets. They get botnets by infecting IoT devices, laptops, phones, workstations and so on with malware. Endpoint security is therefore also a climate issue, and practicing sensible security management contributes to protecting the environment. So in addition to choosing the bus over the car today, you can also help Mother Earth by beefing up the security on your private devices:
Make sure to patch everything, including routers, cell phones, laptops, smart home solutions, alarm systems, internet-connected refrigerators and the whole lot.
Stop using cloud services with sketchy security and privacy practices. Force vendors to beef up their security by using your consumer power – and protect your own interests at the same time. This does everyone a favor: it makes AI assistants and the like trustworthy, so more people use them, which favors optimized transport, consumption and communication.
Prioritize efficient, safe and secure telework. Use a VPN when working from coffee shops, and promote the “local work, global impact” way of doing things. When you are able to avoid excessive travel, whether to the office or to a client on the other side of the globe, your decisions have impact – especially if you manage to influence other people to prioritize the same things.
Happy Earth Day 2017. Promote climate action through security practices!
Recently, a group called Shadow Brokers released hundreds of megabytes of tools claimed to stem from the NSA and other intelligence organizations. Ars has written extensively on the subject: https://arstechnica.com/security/2017/04/nsa-leaking-shadow-brokers-just-dumped-its-most-damaging-release-yet/. The leaked code is available on github.com/misterch0c/shadowbroker. The exploits target several Microsoft products still in service (and commonly used), as well as the SWIFT banking network. Adding speculation to the case is the fact that Microsoft silently released patches for vulnerabilities claimed to be zero-days in the leaked code prior to the actual leak. But what does all of this mean for “the rest of us”?
There are two key questions we need to ask and try to answer:
Should threat models include domestic nation state actors, including illegal use of intelligence capabilities against domestic targets?
Does the availability of the leaked tools increase the threat from secondary actors, e.g. organized crime groups?
Taking the first question first: should we include domestic intelligence in threat models for “normal” businesses? Let us examine it from the perspective of the C-I-A triad (confidentiality, integrity, availability).
Confidentiality: are domestic intelligence organizations interested in stealing intellectual property or obtaining intelligence on critical personnel within the firm? This interest tends to be supply-chain driven if you are not yourself the direct target, or data collection may occur due to innocent links to other organizations that are being targeted by the intelligence unit.
Integrity (data manipulation): if your supply chain is involved in activities drawing sufficient attention to require offensive operations, including non-cyber operations, integrity breaches are possible. Activities involving terrorism funding or illegal arms trade would increase the likelihood of such interest from authorities.
Availability: nation state actors are not the typical adversary that will use DoS-type attacks, unless it is to mask other intelligence activities by drawing response capabilities to the wrong front.
The probability of APT activities from domestic intelligence is still low for most firms. The primary sectors where this could be a concern are critical infrastructure and financial institutions. Firms involved in the value chains of illegal arms trade, funding of terrorism or human trafficking are also potential targets, but these firms are often not aware of their role in the illegal business streams of their suppliers and customers.
The second question was whether the leak poses an increased threat from other adversary types, such as organized crime groups. Organized crime groups run structured operations across multiple sectors, both legal and illegal. They tend to be opportunistic, and any new tools that can support their primary cybercrime activities will most likely be put to use quickly. The typical high-risk activities include credit card and payment fraud, document fraud and identity theft, illicit online trade including stolen intellectual property, and extortion schemes through direct blackmail or use of malware. The leaked tools can support several of these activities, including extortion schemes and information theft. This indicates that the risk level does in fact increase with the leak of additional exploit packages.
How should we now apply this knowledge in our security governance?
The tools exploit vulnerabilities in older versions of operating systems. Keeping systems up to date remains crucial. New versions of Windows tend to come with improved security, so migrating before the end-of-life of the previous version should be considered.
In risk assessments, domestic intelligence should be considered together with foreign intelligence and proxy actors. Stakeholder and value chain links remain key drivers for this type of threat.
Organized crime: targeted threats are value-chain driven. Firms with exposure and old infrastructure most likely face increased risk due to the new cyberweapons now available to organized crime groups.
One vulnerability that is really easy to exploit is people leaving super-sensitive information in source code – provided you can get your hands on that source code. In early prototyping, a lot of people will hardcode passwords and certificate keys in their code and remove them later when moving to production. Sometimes they are not even removed from production code. But even when you do remove them, this sensitive information can linger in your version history. What if your app is an open source app and you are sharing the code on GitHub? You probably don’t want to share your passwords…
Getting this sensitive info out of your repository is not as easy as deleting the file from the repo and adding it to the .gitignore file – because this does not touch your version history. What you need to do is this:
Merge any remote changes into your local repo, to make sure you don’t remove the work of your team if they have committed after your own last merge/commit
Remove the file history for your sensitive files from your local repo using the filter-branch command:
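As a sketch of what that looks like, based on GitHub’s documented filter-branch recipe – demonstrated here end-to-end in a throwaway repository, with `secrets.txt` standing in for whatever file holds your sensitive data:

```shell
# Sketch: purge a sensitive file from the entire git history.
# Demonstrated in a throwaway repo; "secrets.txt" is a stand-in file name.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "db_password=hunter2" > secrets.txt
git add secrets.txt
git commit -qm "add secrets"
echo "hello" > readme.txt
git add readme.txt
git commit -qm "add readme"

# Rewrite every branch and tag, dropping secrets.txt from each commit:
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --force --index-filter \
  "git rm --cached --ignore-unmatch secrets.txt" \
  --prune-empty --tag-name-filter cat -- --all

# The file is now absent from the rewritten history:
git log --oneline HEAD -- secrets.txt   # prints nothing
```

Note that this rewrites history, which is exactly why the remaining steps (.gitignore, force push, fresh clones for collaborators) are necessary.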
Although the command above looks somewhat scary, it is not that hard to dig out – you can find it in the GitHub documentation. When that’s done, there are only a few more things to do:
Add the files in question to your .gitignore file
Force push to the remote repo (git push origin --force --all)
Tell all your collaborators to clone the repo as a fresh start, to avoid them merging the sensitive files back in
Also, if you have actually pushed sensitive info to a remote repository – particularly if it is an open source, publicly available one – make sure you change all passwords and certificates that were included; this information should be considered compromised.
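To see why simply deleting the file is not enough, you can grep every revision for the secret. A sketch, again using a throwaway repository and the made-up secret `hunter2`:

```shell
# Sketch: a deleted file is gone from the working tree, but its contents
# remain reachable in history. "hunter2" is a made-up secret string.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "password=hunter2" > config.txt
git add config.txt
git commit -qm "add config"
git rm -q config.txt
git commit -qm "remove config"

# Search every revision for the secret – it is still there:
git grep "hunter2" $(git rev-list --all)
```

If a search like this finds anything in a repository you have already pushed, treat the credentials as compromised and rotate them.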
The EU is ramping up its focus on privacy with a new regulation that will be implemented into local legislation in the EEA from 2018. The changes are huge for some countries, and in particular the sanctions the new law makes available to authorities should be cause for concern for businesses that have not adapted. Shockingly, a Norwegian survey shows that 1 in 3 business leaders have not even heard of the new legislation, and 80% of the respondents have not made any effort to learn about the new requirements and their implications for their business (read the DN article here, in Norwegian: http://www.dn.no/nyheter/2017/02/18/1149/Teknologi/norske-ledere-uvitende-om-ny-personvernlov). The Norwegian Data Protection Authority calls this “shocking”, pointing out that all businesses will face new requirements and that it is the duty of business leaders to orient themselves and act to comply with the new rules.
Here’s a short form of key requirements in the new regulation:
You need to do a risk assessment for privacy and data protection of personal data. The risk assessment should consider the risk to the owner of the data, not only the business. If the potential consequences of a data breach are high for the data owner, the authorities should be involved in discussions on how to mitigate the risk.
All new solutions need to build privacy protections into the design. The highest level of data protection in a software’s settings must be used as default, meaning you can only collect a minimum of data by default unless the user actively changes the settings to allow you to collect more data. This will have large implications for many cloud providers that by default collect a lot of data. See for example here, how Google Maps is collecting location data and tracking the user’s location: https://safecontrols.blog/2017/02/18/physically-tracking-people-using-their-cloud-service-accounts/
All services run by authorities and most services run by private companies will require the organization to assign a data protection officer responsible for compliance with the GDPR and for communicating with the authorities. This applies to all businesses that, in their operations, handle personal data at a certain scale and frequency – meaning in practice that most businesses must have a data protection officer. It is permissible to hire a third party for this role instead of having an employee fill the position.
The new regulation also applies to non-European businesses that offer services to Europe.
The new rules also apply to data processing service providers, and subcontractors. That means that cloud providers must also follow these rules, even if the service is used by their customer, who must also comply.
There will be new rules about communication of data breaches – both to the data protection authorities and to the data subjects being harmed. All breaches that have implications for individuals must be reported to the data protection authorities within 72 hours of becoming aware of the breach.
The data subjects hold the keys to your use of their data. If you store data about a person and this person orders you to delete their personal data, you must do so. You are also required to let the person transfer personal data to another service provider in a commonly used file format if so requested.
The new regulation also provides the authorities with the ability to impose very large fines – up to 20 million Euros or up to 4% of the global annual turnover, whichever is greater. This is, however, a maximum and not likely to be the normal sanction. A warning letter would be the start, then audits from the data protection authorities. Fines can be issued but will most likely be within the common practice for corporate fines in the country in question.
“Personal data should be processed in a manner that ensures appropriate security and confidentiality of the personal data, including for preventing unauthorised access to or use of personal data and the equipment used for the processing.”
This means that you should implement reasonable controls for ensuring the confidentiality, integrity and availability of these data and the processing facilities (software, networks, hardware, and also the people involved in processing the data). It would be a very good idea to implement at least a reasonable information security management system, following good practices such as those described in ISO 27001. If you want a roadmap to an ISO 27001-compliant management system, see this post summarizing the key aspects: https://safecontrols.blog/2017/02/12/getting-started-with-information-management-systems-based-on-iso-27001/.
Nobody likes being tracked. Still, most people store a detailed account of their movements on their phones, often shared with multiple apps. If you can get access to some of these user accounts, you can track a person’s whereabouts down to a relatively detailed level. In real time.
Tracking People – Google Maps Style
I have a 9-year old son, and one night I was watching the news with him. There was one story that got him curious, and almost angry – rightly so. It was about a smartwatch for kids that was becoming popular as a gift from parents. This particular smartwatch allowed surveillance of the kids’ whereabouts via GPS, and parents could also call their kids on the watch instead of giving them a phone. Parents could also turn on a one-way audio channel that allowed them to eavesdrop on their kids’ conversations. My son told me he thinks this is unfair and outrageous – and I completely agree. He felt somewhat better when I told him that he is not going to get a watch like that (he has an analog wristwatch, a thing with no internet). More kids should care about privacy too.
Note to parents and teachers: when we teach kids it is OK to be spied on, they will be less concerned with privacy in adult life too. Surveillance is a key feature of authoritarian regimes, and our world is increasingly moving in the wrong direction here. Don’t teach your kids it is OK for people to spy on them. Protect our democracies by teaching your kids to take care of their own privacy.
Next I asked him if he believed people could be spied on through their cell phones. The answer was “maybe”. I told him that they can, but most parents don’t know how – and hopefully have no wish to do so. He has an Android phone, and I told him that I’ve configured it so it does not store his location data in a Google account, but that most people who use Android never think about this. And the same goes for Apple. He was skeptical, so I told him I would turn on the “timeline” feature on my own Android and show him later what the app stores on Google’s servers. Afterwards I showed him what they track (including where I was, how much time I spent where, the photos I’d taken, and the like).
Think it through if you choose to turn on features like Google’s timeline. Or rather, think it through if you are not taking steps to turn it off – it is on by default.
Talk to your kids about privacy. The habits they learn now are what they bring with them into adulthood. Teaching your kid about privacy is an important contribution to safeguarding democracy and freedom of speech.
The example here used Android phones. The story is the same on other platforms, and with mapping and location-based service providers other than Google.
What do you think about this? Let me know in the comments! Especially – do you think it is OK to track your kids like the watch described above? Are you OK with Google and their competitors storing such detailed location data about you?