Running an automated security audit using Burp Professional

Reading about hacking in the news can make it seem like anyone can just point a tool at any website and completely take it over. This is not really the case, as hacking, whether automated or manual, requires vulnerabilities.

A well-known tool for security professionals working with web applications is Burp from PortSwigger. It is an excellent tool and comes in multiple editions: the free Community edition, a nice proxy you can use to study HTTP requests and responses (and some other things); the Professional edition aimed at pentesting; and the Enterprise edition, which is more for DevOps automation. In this little test we’ll take Burp Professional and run it with only default settings against a target application I made last year. The app is a simple one for posting things on the internet, and was just a small project I did to learn how to use some of the AWS tools for deployment and monitoring. You’ll find it in all its glory at https://www.woodscreaming.com.

After entering the URL http://www.woodscreaming.com and launching the attack, Burp first crawls and audits the unauthenticated routes it can find (it basically clicks every link it discovers). It then registers a user and starts probing the authenticated routes, including posting those weird numerical posts.

Woodscreaming.com: note the weird numerical posts. These are telltale signs of automated security testing with random input generation.

What scanners like Burp are usually good at finding is obvious misconfigurations such as missing security headers, missing flags on cookies and so on. It did find some of these things on the woodscreaming.com page – but not many.

Waiting for security scanners can seem like it takes forever. Burp estimated some 25,000 days remaining after a while with the minimal http://www.woodscreaming.com page.

After running for a while, Burp estimated the remaining scan time at something like 25,000 days. I don’t know why this happens (I have not seen it in other applications), but since a user can generate new URL paths simply by posting new content, a linear time estimate may easily diverge – a wild guess at what was going on. Because of this we stopped the scan after some time, as it was unlikely to discover new vulnerabilities beyond that point.

The underlying application is a traditional server-driven MVC application running Django. Burp works well with applications like this, and the default setup performs better than it typically does against the single-page applications (SPAs) that many web applications are today.

So, what did Burp find? Burp assigns a criticality to the vulnerabilities it finds. There were no “High” criticality vulns, but it reported some “Medium” ones.

Missing “Secure” flag on session cookies?

Burp reports 2 cookies that seem to be session cookies and that are missing the Secure flag. This means these cookies would also be sent if the application were accessed over an insecure connection (HTTP instead of HTTPS), allowing a man-in-the-middle to steal the session or perform a cross-site request forgery (CSRF) attack. This is a real finding, but the actual exposure is limited because the app is only served over HTTPS. It should nevertheless be fixed.

A side note on this: the cookies are set by the Django framework in its default state, with no configuration changes made. Hence, this is likely to be the case on many other Django sites as well.
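
For reference, these flags are controlled from the Django settings module. A minimal sketch, assuming the site is only ever served over HTTPS (these are standard Django settings, but the exact combination is up to your deployment):

# settings.py
SESSION_COOKIE_SECURE = True    # only send the session cookie over HTTPS
CSRF_COOKIE_SECURE = True       # only send the CSRF cookie over HTTPS
SESSION_COOKIE_HTTPONLY = True  # keep the session cookie away from JavaScript (Django default)

With these in place, the Secure-flag finding above should disappear from the scan report.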

If we go to the “Low” category, there are several issues reported. These are typically harder to exploit, and will also be less likely to cause major breaches in terms of confidentiality, integrity and availability:

  • Client-side HTTP parameter pollution (reflected)
  • CSRF cookie without HTTPOnly flag set
  • Password field with autocomplete enabled
  • Strict transport security not enforced

The first one is perhaps the most interesting one.

HTTP parameter pollution: dangerous or not?

In this case the URL parameter reflected in an anchor tag’s href attribute is not interpreted by the application and thus cannot lead to anything bad – but had GET parameters been interpreted in the backend, it could have been possible to make a person perform an unintended action in a request forgery attack. In our case we say, as the Jargon File directs us: “It is not a bug, it is a feature”!

So what about the “password field with autocomplete enabled”? This must be one of the most common alerts from auditing software today. It can lead to unintended disclosure of passwords and should be avoided. You’ll find the same issue on many well-known web pages – but that does not mean we shouldn’t try to avoid it. We’ll put it on the “fix list”.
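
In a Django form, one way to address this is to set the autocomplete attribute on the password widget. A minimal sketch (the LoginForm name is just for illustration, not a form from the actual application):

from django import forms

class LoginForm(forms.Form):
    username = forms.CharField()
    # Ask the browser not to autofill or store this field
    password = forms.CharField(widget=forms.PasswordInput(attrs={'autocomplete': 'off'}))

Note that modern browsers and password managers may ignore this hint, so treat it as hardening and scanner hygiene rather than a guarantee.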

Are automated tests useful?

Automated tests are useful but they are not the same as a full penetration test. They are good for:

  1. Basic configuration checks. This can typically be done entirely passively, no attack payloads needed.
  2. Identifying vulnerabilities. You will not find all of them, and you will get some false positives, but this is still useful.
  3. Learning about vulnerabilities: Burp has very good documentation and good explanations of the vulnerabilities it finds.

If you add a few manual checks to the automated setup – in particular giving it a site map before starting a scan, and testing inputs with fuzzing (which can also be done using Burp) – you can get a relatively thorough security test done with a single tool.

Defending against OSINT in reconnaissance?

Hackers, whether they are cyber criminals trying to trick you into clicking a ransomware download link, or whether they are nation state intelligence operatives planning to gain access to your infrastructure, can improve their odds massively through proper target reconnaissance prior to any form of offensive engagement. Learn how you can review your footprint and make your organization harder to hack.


Cybehave has an interesting post on OSINT and footprinting, and what approach companies can take to reduce the risk from this type of attack surface mapping: https://cybehave.no/2019/03/05/digital-footprint-how-can-you-defend-against-osint/ (disclaimer: written by me and I own 25% of this company).

tl;dr – straight to the to-do list

  • Don’t publish information that has no business benefit and that makes you more vulnerable
  • Patch your vulnerabilities – both on the people and tech levels
  • Build a friendly environment for your people. Don’t let them struggle with issues alone.
  • Prepare for the worst (you can still hope for the best)

Storing seeds for multifactor authentication tokens

When setting up an application to use two-factor authentication, for example with Google Authenticator, each user will have a unique seed value for the authenticator. The identity server requires knowledge of the seed to verify the token – meaning you will have to store it and retrieve it somehow. This means that if an attacker gets access to the storage solution that links OTP secret seeds to user IDs (e.g. usernames), the protocol is broken. So, thinking up some options for securing the secrets: we cannot hash and salt the seed, because the server needs the plaintext seed to compute the expected code, which breaks the OTP authentication flow. We are hence left with encrypting the seed before storing it.

The most practical option seems to be a symmetric crypto approach; the question is what to use as the crypto key. Here are some approaches I’ve seen people discuss, each with its own weaknesses:

  • User password: if you can phish the password, then you can also generate the OTP, provided you know which algorithm/library is used
  • A static application secret: should be safe provided the secret is never leaked, but a static secret means that if it is compromised, all users are compromised. Still better than the user password, though.
  • Non-static, user-level metadata used to create a unique key for each user, not vulnerable to phishing or guessing – though such metadata is typically visible to admins. The authentication flow would then look something like this:
  1. Get username/password
  2. Verify username/password
  3. Get OTP seed (encrypted)
  4. Get metadata and reconstruct encryption key
  5. Verify OTP
  6. Authenticate user and store timestamp and other auth metadata
  7. Construct new encryption key
  8. Encrypt seed
  9. Store in database
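
As a rough sketch of that flow in Python, assuming pyotp for TOTP verification and Fernet from the cryptography package for symmetric encryption. The derive_key helper is hypothetical (sketched further below), and the encrypted_otp_seed and otp_key_salt fields plus record_login() are assumed additions to a Django-style user model:

import pyotp
from cryptography.fernet import Fernet

def login(user, password, otp_code):
    # Steps 1-2: verify the password itself
    if not user.check_password(password):
        return False
    # Steps 3-4: fetch the encrypted seed and reconstruct the per-user key
    key = derive_key(password, user.last_login, user.otp_key_salt)  # hypothetical helper, see below
    seed = Fernet(key).decrypt(user.encrypted_otp_seed).decode()
    # Step 5: verify the one-time password against the decrypted seed
    if not pyotp.TOTP(seed).verify(otp_code):
        return False
    # Steps 6-9: store new login metadata, then re-encrypt the seed under a new key
    user.record_login()  # hypothetical: updates user.last_login to now
    new_key = derive_key(password, user.last_login, user.otp_key_salt)
    user.encrypted_otp_seed = Fernet(new_key).encrypt(seed.encode())
    user.save()
    return True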

The question is what metadata to use. We need the following properties to be true:

  • Not possible for a third party to guess, even if we disclose what metadata it is
  • Not possible for an administrator with access to the account to reconstruct
  • Not possible to phish or obtain through social engineering or client side attacks

There are many possibilities but here is one possible solution that would satisfy all the above requirements:

Key = Password (Not available to admins) + Timestamp for last login (not guessable/phishable)
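
A minimal sketch of that key derivation, assuming the cryptography package (the PBKDF2 parameters and the per-user random salt are illustrative choices, not a vetted recommendation):

import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password, last_login, salt):
    # Mix the password (not available to admins) with the last-login
    # timestamp (not guessable/phishable) into the key material.
    material = (password + last_login.isoformat()).encode()
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600000)
    # Fernet expects a urlsafe base64-encoded 32-byte key
    return base64.urlsafe_b64encode(kdf.derive(material))

One wrinkle to keep in mind: if the last-login timestamp or the password is ever updated without re-encrypting the seed in the same transaction, the key can no longer be reconstructed and the user is locked out of OTP verification.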

Combining VueJS and Django to build forms with custom widgets

This post is brief and explains a pattern that can be dangerous but is still very handy for combining VueJS with Django templates for dynamic forms. Here’s the case: I need to build a form for sending out some messages. One of the form widgets is a <select> tag where each <option> is a model instance from Django. The widget will then show the name of that model instance in the UI, but this does not provide enough context to be useful; we also need some description text. There are basically two options for handling this:

  1. Use the form “as-is” but provide the extra context in the UI by pulling some extra information and building an information box in the UI.
  2. Create a custom widget, and bind it to the Django model form using a hidden field.

Both are probably equally good, but I went with the second option. So here’s what I did:

  1. Build a normal Django model form, but change the widget for the field in question to type “HiddenInput” in the forms.py file.
  2. Build a selector widget using VueJS that allows the user to pick the desired content and review the various options with full context (including images and videos – things you can’t put inside a dropdown list). The selected choice is bound to frontend data using the v-model directive in VueJS.
  3. Set the hidden field’s value based on the data stored in the frontend through that v-model binding
  4. Process the form as you normally would with a Django model form.

The form definition remains very simple. Here’s the relevant class from this example:

from django import forms
from .models import Campaign  # app-local model import assumed


class MailForm(forms.ModelForm):

    class Meta:
        model = Campaign
        fields = ('name', 'to', 'elearning',)
        widgets = {
            # Hidden input; ':value' is Vue's v-bind shorthand, so the field's
            # value follows the 'module.pk' data in the surrounding Vue instance.
            'elearning': forms.HiddenInput(attrs={':value': 'module.pk'})
        }

The selector widget can take any form you could desire. The point in this project was to show some more context for the “eLearning” model. The user here gets a notification about enrollment in an eLearning module by e-mail. The administrator setting up the program needs some context about that eLearning module, such as its name, a description of its content, and perhaps a preview of a video or other multimedia. Below is an example of a simple widget of this type. The course administrator can browse through the various options by clicking next, and the e-mail form is automatically updated.

Of course, to do that binding we need a bit of JavaScript in the Django template. We need to perform the following tasks to make our custom widget work:

  1. Fetch information about all the options from the server. We need to create an API endpoint for this that can deliver JSON data to the frontend (a sketch of such an endpoint follows below this list).
  2. Set the data item bound to the Django form based on the user’s current selection
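
A minimal sketch of such an endpoint, assuming an ELearning model with name and description fields (the model, field and view names are just for illustration):

from django.http import JsonResponse
from .models import ELearning  # illustrative model

def elearning_options(request):
    # Deliver the data the Vue widget needs to render each option with context
    modules = ELearning.objects.values('pk', 'name', 'description')
    return JsonResponse({'modules': list(modules)})

Wire it up through a normal urls.py entry and fetch it from the Vue component when the page loads.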

Now the form can be submitted and processed using the normal Django framework patterns – but with a much more context-rich selection widget than a simple dropdown list.

Is it safe to do this?

Combining frontend and server-side rendering with different templates for HTML rendering can be dangerous. See this excellent write-up on XSS vulnerabilities that can result from such combinations: https://github.com/dotboris/vuejs-serverside-template-xss.

This is a problem when user input is injected via the server-side template, as the user can supply the interpolation tags as part of the input. In our case there is no user input in those combinations. However, if you need to take user input and re-render it using the server-side templates of a framework like Django, here are some things you can do to harden against this threat:

  • Use the v-pre directive in VueJS
  • Sanitize the input to discard unsafe characters, including VueJS delimiters (see the sketch after this list)
  • Escape output generated from the database to avoid injected content reaching the user’s context as executable JavaScript
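
As a small illustration of the second point, a blunt sketch of stripping Vue’s default interpolation delimiters from user input before it is stored or rendered server-side (proper output escaping and v-pre remain the primary defenses):

def strip_vue_delimiters(value):
    # Remove the default Vue interpolation markers so user input cannot
    # smuggle template expressions into a server-rendered page.
    return value.replace('{{', '').replace('}}', '')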

Security awareness: the tale of the Minister of Fisheries and his love of an Iranian former beauty queen

An interesting story worthy of inspiring books and TV shows is unfolding in Norway. The Minister of Fisheries, Per Sandberg (born 1960), from the Progress Party (a populist right party), spent his summer holiday in Iran together with his new girlfriend, a 28-year-old former beauty queen who fled to Norway to escape forced marriage when she was 16. The minister brought his smartphone, on which he has access to classified information systems. He failed to inform the Prime Minister before he left, a breach of security protocol. He ignored security advice from the Norwegian security police, who are responsible for national security issues and counter-intelligence. He is still a member of the cabinet. This post is an attempt at making sense of this, and of what the actual risk is. A lot of people in Norway have had their say in the media about this case, both knowledgeable voices and less reasonable ones.

Some context: Norwegian-Iranian relations

Traditionally there has been little trade between Iran and Norway. Recently, following the nuclear agreement between Iran and the US, UK, France, China, Russia and Germany, this has started to change. Norway has seen significant potential for exports to Iran of fish and aquaculture technologies. In the last year or so, Minister Sandberg has been central to this development (see the timeline further down on Sandberg’s known touch points with Iran).

Among the Norwegian public, skepticism of the Iranian regime is high, and there has been vocal criticism of establishing trade relationships over human rights concerns.

The Norwegian and Iranian interest spheres also intersect in the Middle East. Iran has established tighter relations with Russia since 2016, when it started to allow Russian bombers to take off from Iranian air force bases for bombing missions inside Syria. Norwegian-Russian relations are strained, following the response of NATO and the EU to Russian operations in Ukraine, influence campaigns against Western elections, and a general intensification of cyber operations against Norwegian targets (see the open threat assessment from Norwegian military intelligence: https://forsvaret.no/fakta/undersokelser-og-rapporter/fokus2018/rapport, in Norwegian). Operations against Norwegian government officials by Iranian services may thus also be driven by other Iranian interests than direct Norwegian-Iranian relations.

Sandberg: who is he and what could make him a viable target for intelligence operations?

This is a presentation of Sandberg taken from the web page of the Ministry of Trade, Industry and Fisheries (https://www.regjeringen.no/no/dep/nfd/organisation/y-per-sandberg/id2467677/). Note that his marital status is listed as “married” – but he separated from his wife in May 2018. Sandberg is a vocal figure in Norwegian politics. He has been known to be against immigration and a supporter of strict immigration laws. He has repeatedly been accused of racism, especially by the opposition. He has long held top positions in the Progress Party, which has been part of a coalition cabinet together with the conservatives (Høyre), and more recently also with the moderately liberal party “Venstre” (meaning “left”, although it is not a socialist party). Sandberg is known for multiple controversies, summarized on this Wikipedia page: https://en.wikipedia.org/wiki/Per_Sandberg#Controversies. These include addressing the parliament after having had too much to drink, losing his driver’s license due to speeding, and a 1997 conviction for violence against an asylum seeker.

Sandberg was married from 2010 to 2018 to Line Miriam Sandberg, who has been working as a state secretary for the Ministry of Health since 2017. They recently separated.

His new girlfriend

Sandberg’s new girlfriend came to Norway when she was 16 (or 13/14 the first time, according to some sources) to flee forced marriage to a 60-year-old man in Iran. She is now a Norwegian citizen and is 28 years old. She participated in several beauty contests in 2013-2014. After she first came to Norway, she was not granted asylum and was returned to Iran. Iran sent her back to Norway again because she did not have any identification papers when arriving, and she was adopted by a Norwegian family. A summary of known facts about Letnes, and of how she gained access to Iran after being returned to Norway without ID papers as a teenager, was written in Norwegian by Mahmoud Farahmand (https://www.nettavisen.no/meninger/farahmand/per-sandbergs-utfordring/3423519653.html). Farahmand is currently a consultant with the auditing and consulting firm BDO and has a background from the Norwegian armed forces. He often writes opinion pieces about security-related topics. To summarize some of Farahmand’s points:

  • Letnes was returned to Norway and was later adopted by her foster family
  • She has been a “go-to-person” for journalists wanting to get in touch with Iranian officials and has been known to have close relationships with the Iranian embassy in Oslo
  • Iran does not allow Iranian-born individuals to enter Iran without an Iranian passport. If they do not have one, they will need to get access to their birth certificate or otherwise prove to the Iranian government that they in fact have a right to an Iranian passport. Since Letnes fled Iran to seek protection from the threat of her family, it seems she must have gotten access to this without contacting her family, Farahmand argues.

Letnes had her application for asylum turned down 3 times before getting it approved. The reason the immigration authorities changed their decision in 2008 is not known (Norw: https://www.nrk.no/trondelag/–jeg-er-kjempeglad-1.6236578). In addition, it has become known in the media in the last few days that Letnes applied for a job with Sandberg’s ministry – suggesting she could act as a translator and guide for Sandberg’s communications with Iran in matters related to fishery and aquaculture trade – which she did not get. Sandberg denied any knowledge of this prior to media inquiring about it. The job application was sent in 2016. She also registered a sole proprietorship in January this year, B & H GENERAL TRADING COMPANY. BAHAREH LETNES, a company to trade with Iran in fish, natural gas, oil and technology (corporate registration information: https://w2.brreg.no/enhet/sok/detalj.jsp?orgnr=920188095). According to media reports, Letnes says the company has not had any activity so far.

A honeytrap? Possibly. A security breach? For sure.

The arguments from Farahmand’s article above, together with the fact that Letnes tried to get a job for Sandberg in 2016, could easily indicate that Letnes sought to get close to Sandberg. She has sought multiple touchpoints with him since he was appointed Minister of Fisheries in 2015.

This would be a classical honeytrap, although a relatively public one. Sandberg has failed to follow security protocol on many occasions in his dealings with Letnes and Iran. Obvious signs of poor security awareness on the part of the Minister:

  • He brought his government-issued cell phone to Iran and left it unattended for long periods at the place where they were staying
  • He did not tell the office of the Prime Minister about his travel to Iran before leaving. This is a breach of security protocol for Norwegian ministers
  • His separation from his wife became known in May this year
  • He has announced his “original vacation plans got smashed, so the trip to Iran was a last-minute decision”. He was supposed to go on holiday to Turkey, which he had also reported to his Ministry and the office of the Prime Minister, in accordance with security protocol (Norw: https://www.aftenposten.no/norge/politikk/i/e1xzJO/Fiskeriminister-Per-Sandberg-bekrefter-at-han-reiste-til-Iran-uten-a-informere-departementet-eller-statsministeren )
  • The Norwegian government was made aware of Sandberg’s presence in Iran when they received an e-mail from the Iranian embassy in Oslo, requesting official meetings with Minister Sandberg while he was in Iran

Iranian TTP

According to Kjell Grandhagen, former head of Norwegian military intelligence, Iran has a very capable and modern intelligence organization. He holds it as highly likely that Sandberg’s government issued phone, which he left unattended a lot of the time while in Iran, has been hacked (https://www.digi.no/artikler/tidligere-sjef-for-e-tjenesten-tror-per-sandbergs-mobil-har-blitt-hacket/442930). According to this CSO summary, Iran has serious capabilities within both HUMINT and cyber domains. Considering the known cyber capabilities of Iran, and the looming sanctions from the Trump administration, getting both information and leverage over a key politician in a NATO country becomes even more interesting, not only to Iran but also to Russia.

Coming back to Iran’s more recent tighter cooperation with Russia, it is not unlikely that they are also initiating a closer relationship when it comes to intelligence gathering. The use of honey traps has been a long-standing Russian tactic for information gathering and getting leverage over decision makers. In 2015, Norwegian police warned against Russian intelligence operations targeting politicians, including the use of honey traps (https://finance.yahoo.com/news/norwegian-police-warning-citizens-against-195510994.html).

A summary: why is he still in office?

The facts and arguments presented above should indicate two things very clearly:

  • Based on publicly known information, it is clearly possible that Iranian intelligence is targeting Per Sandberg. They may have an asset close to him, as well as having had physical access to his smartphone that has direct access to classified information systems.
  • Further, Sandberg has broken established security protocol, and although admitting this, he does not seem to appreciate the potential impact

The effect of a top leader not taking security seriously is very unfortunate. Good security awareness in an organization depends heavily on the visible actions of the people at the top – in business as well as in politics. A breach of security policy at this level that carries no personal consequences sends a very poor message to other politicians and government officials. It also sends a message to adversaries that targeting top-level politicians is likely to work, even if there are numerous indicators of a security breach. There should be no other possible conclusion than to relieve Mr. Sandberg of his position – which would set him free to further develop his relationship with the Iranian beauty queen.

How a desire for control can hurt your security performance

Lately we have seen a lot of focus on security in social media – professionals, companies and organizations trying to increase security awareness. A lot of the information out there is about “control” and “compliance”. The downside of a risk management regime based on strict rules, controls and compliance measures has been demonstrated again and again throughout history, and I’ve also written about it before in terms of getting users on board with the security program. My background is from the oil and gas industry – an industry that has seen several horrific accidents. Two of the more well-known and dramatic ones are the Piper Alpha accident, 30 years ago this year, and the Deepwater Horizon blowout in 2010. In both cases, the investigations into “root causes” pointed to a degraded safety culture and a lack of attention to real risks, and partially blamed a prescriptive approach to safety. The same arguments are equally valid for security incidents. The goal should be to find a balance between security and operational flow.

Balancing security against performance is necessary to operate with flow and still be a trusted business partner.

Some examples of potentially unhelpful helpfulness

If you are looking for security advice online, it is easy to find. A lot of it will tell you to “trust nobody, lock down everything”. From a traditional security point of view this makes sense – but it does not take the risk context into account, nor does it balance measures against operational needs (such as keeping your store open, or being able to try new things to innovate and create new products).

Here’s Dr. Eric Cole (he knows a lot about security but sometimes I think his advice is a bit draconian)

Change control is important – but gathering a “change control board” for every change you make may be overkill if you want to stay “agile” and able to respond to changing demands.

Another common “rule” that actually does make a lot of sense is not to give end users admin rights to their work computers. But… needs will vary. If you are trying to develop new technology and your developers have to go through a lot of red tape to try out a new tool, it will certainly have ill effects on your team’s ability to innovate. On the other hand, giving developers free rein in the name of the “sacred innovation gods” is also not a very good idea. The whole thing is about balance.

Risk acceptance and balance

Security controls are often cumbersome for people. Airport security, nightclub bouncers, two-factor authentication, no admin rights. Security leads to limitation of access in a large number of cases. This obviously has a downside when it comes to how fast we can innovate, how quickly we can produce. The benefit is reduced probability and impact of a major incident – and such incidents are very expensive. The amount of security cumbersomeness people are willing to accept and live with will normally depend on how bad it can get if someone hacks you. If your system controls nuclear weapons, a power plant, or perhaps the production at a chemical plant, incidents can cause real disasters leading to financial and environmental ruin, as well as a large number of fatalities. In this case, you will probably accept a lot of security controls to minimize the chance of something like that happening.

On the other hand, if you are selling some new hot service online, you still need people to trust your service, and you need to comply with privacy laws. This means your security must still be good – but you may nevertheless adopt a slightly higher risk acceptance than in the nuclear weapons case.

The trick is to find a good balance between acceptable risk performance and good operational flow. This in itself will contribute to greater security performance overall, as the human-factor side of cyber risk is very large – something that is often undervalued when designing security controls. To do this in a coherent manner, we bring you the mighty tool of the… risk-based threat model.

Threat models for a balanced security strategy

A threat model is best made for a specific system or subsystem. The system can be anything from “the company network” to a specific application or a small software component. The thinking remains the same, but the details in your model will change. The whole purpose is to understand how an Adversary can perform an Action on a Target to achieve an Objective. There are many ways to model this in the literature, but we won’t go into details about them here. If you want the details, you can search for attack trees, STRIDE and the cyber kill chain.

Context. You need to understand 3 things about the risk context:

  • Who are the stakeholders and what is their interest in the system? Owners, employees, users, customers, suppliers, attackers, insiders
  • What does the system itself do?
  • Who are the threat actors? Use threat intelligence to understand how adversaries approach the system and the supply chain you are a part of.

Inventory and data flow. Create a data flow diagram on the architectural level. Include relevant information such as protocols and main technologies. Describe what each asset is used for, and make a list of what data is being processed and transferred. Make trust boundaries visible in your diagram. For the inventory, consider the potential impact of confidentiality, integrity or availability losses.

Abuse cases. Consider how the various processes and data transfer operations can be abused by an adversary with sufficient access. Access can be physical access, stolen credentials, through malware or direct use of software vulnerabilities. The abuse case is your primary tool for understanding how controls can stop the adversary’s actions.

Detection and mitigation. Your system is probably not wide open to attack. List the most important controls you have in place already. The main purpose of this is to check if you are missing something obvious that you probably should be doing to stop attacks.

Evaluate and prioritize. Evaluate the threats according to the estimated risk. Prioritize controls that will help you reduce the risk of unacceptable actions being taken by adversaries to your most important assets and operational capabilities. Make sure you do not over-stretch the organization’s capabilities – focus on what matters the most first.
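
As a toy illustration of the evaluation step, here is a sketch of ranking abuse cases by a simple likelihood-times-impact score (the scales and the example entries are made up, not taken from a real threat model):

# Toy risk register: rank abuse cases by likelihood x impact (1-5 scales)
threats = [
    {'abuse_case': 'Stolen admin credentials used against the CMS', 'likelihood': 3, 'impact': 5},
    {'abuse_case': 'Malware on a developer laptop', 'likelihood': 4, 'impact': 4},
    {'abuse_case': 'DDoS against the public website', 'likelihood': 2, 'impact': 3},
]

for threat in sorted(threats, key=lambda t: t['likelihood'] * t['impact'], reverse=True):
    print(threat['abuse_case'], threat['likelihood'] * threat['impact'])

In practice the scoring lives in a risk register or threat modeling tool, but the prioritization logic is the same: treat the highest-scoring abuse cases against your most important assets first.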

Thinking through your context and what you value takes you a long way on its own, in particular when combined with solid baseline controls. Maintaining a threat model that is regularly updated with new threat intelligence and other context changes also ensures you do not fall behind as the world moves on. Taking risks is fine, but know what risks you can afford to take – when you do that, you can choose the point for balancing security and performance.

How to manage risk and security when outsourcing development

Are you planning to offer a SaaS product, perhaps combined with a mobile app or two? Many companies operating in this space will outsource development, often because they don’t have the right in-house capacity or competence. In many cases the outsourcing adventure ends in tears. Let’s first look at some common pitfalls before diving into what you can do to steer the outsourced flagship clear of the roughest seas.

Common outsourcing pitfalls

I’ve written about project follow-up before, and whether you are building an oil rig or getting someone to write an app for you, the typical “outsourcing pitfalls” remain the same:

  • Weak follow-up
  • Lack of documentation requirements
  • Testing is informal
  • No competence to ask the right questions
  • No planning of the operations phase
  • Lack of privacy in design

Weak follow-up: without regular follow-up, the service provider’s sense of commitment can get lost. It also increases the chances of misunderstandings by several orders of magnitude. Even if I write a specification of a product that is wonderfully clear to me, it may be interpreted differently by the service provider. With little communication along the way, there is a good chance the deliverable will not be as expected – even if the supplier claims all requirements have been met.

Another cost of not having a close follow-up process is lost opportunities in the form of improvements or additional features that could be super useful. If the developer gets a brilliant idea but has no one to approve it, it may not even be presented to you as the project owner. So, focus on follow-up – otherwise you are not getting the full return on your outsourcing investment.

Lack of documentation requirements: Many outsourcing projects follow a common pattern: the project owner writes a specification and gets a product made and delivered. The outsourcing supplier is then often out of the picture: work done and paid for – you now own the product. The plan is perhaps to maintain the code yourself, or to hire an IT team with your own developers to do that. But… there is no documentation! How was the architecture set up, and why? What do the different functions do? How does it all work? Getting to grips with all of that without proper documentation is hard. Really hard. Hence, putting requirements on the level of documentation into your contracts and specifications is a good investment for avoiding future misunderstandings and a lot of wasted time spent figuring out how everything works.

Informal or no testing: No testing plan? No factory acceptance test (FAT)? No testing documentation? Then how do you determine whether the product meets its quality goals – in terms of performance, security and user experience? The supplier may have fulfilled all requirements – because testing was basically left up to them, and they chose a very informal approach that only focuses on functional testing, not performance, security, user experience or even accessibility. It is a good idea to include testing as part of the contract and requirements. It does not need to be prescriptive – the requirement may be for the supplier to develop a test plan for approval, with a rationale for the chosen testing strategy. This is perhaps the best way forward for many buyers.

No competence to ask the right questions: One reason the points mentioned so far get overlooked may be that the buying organization does not have the in-house competence to ask the right questions. The right medicine for this may not be to send your startup’s CEO to a “coding bootcamp”, or for a company primarily focused on operations to hire its own in-house development team – but leaving the supplier with all the know-how leaves you in a very vulnerable position, almost irrespective of the legal protections in your contract. It is often money well spent to hire a consultant to help follow up the process – ideally from the start, so you avoid both specification and contract pitfalls, and the most common plague of outsourcing projects: weak follow-up.

No planning of operations: If you are paying someone to create a SaaS product for you – have you thought about how to put this product into operation? Often important things are left out of the discussion with the outsourcing provider – even if their decisions have a very big impact on your future operations. Have you included the following aspects into your discussions with the dev teams:

  • Application logs: what should be logged, in what format, and where should it be logged?
  • How will you deploy the applications? How will you manage redundancy and content delivery?
  • Security in operations: how will you update the apps when security demands it, for example when security holes become known in the dependencies/libraries you use? Do you even know what the dependencies are?
  • Support: how should your applications be supported? Who picks up the phone or answers that chat message? What information will be available from the app itself for the helpdesk worker to assist the customer?

Lack of privacy in design: The GDPR requires privacy to be built in. This means following principles such as data minimization, using pseudonymization or anonymization where this is required or makes sense, and having means to detect data breaches that may threaten the confidentiality and integrity (and in some cases availability) of personal information. Very often in outsourcing projects, this does not happen. Including privacy in the requirements and follow-up discussions is thus not only a good idea but essential to getting privacy by design and by default in place. This also points back to the competence bit – perhaps you need to strengthen not only your tech know-how during project follow-up, but also privacy and legal management?

A simple framework for successful follow-up of outsourcing projects

The good news is that it is easy to give your outsourcing project much better chances of success. And it is all really down to common sense.

Activities in three phases for improving your outsourcing management skills

Preparation

First, during preparation you will make a description of the product, and the desired outcomes of the outsourcing project. Here you will have a lot to gain from putting in more requirements than the purely functional ones – think about documentation, security, testing and operations related aspects. Include it in your requirements list.

Then, think about the risk in this specification. What can go wrong? Cause delays? Malfunction? Be misunderstood? Review your specification with the risk hat on – and bring in the right competence to help you make that process worthwhile. Find the weaknesses, and then improve.

Decide how you want to follow up the vendor. Do you want to opt for e-mailed status reports once per week? The number of times that has worked for project follow-up is zero. Make sure you talk regularly. The more often you interact with the supplier, the better the effect on quality, loyalty and priorities. Stay on your supplier’s top priority list – otherwise your product will not be the thing they are thinking about when coming to the office in the morning. Things you can do to get better project follow-up:

  • Regular meetings – in person if you are in the same location, but also on video works well.
  • Use a chat tool such as Slack, Microsoft Teams or similar for daily discussions. Keep it informal. Be approachable. That makes everything much better.
  • Always focus on being helpful. Avoid getting into power struggles, or a very top-down approach. It kills motivation, and makes people avoid telling you about their best ideas. You want those ideas.

Competence. That is the hardest piece of the puzzle. Take a hard look at your own competence, and at the competence you have available, before deciding you are good to go. This determines whether you should get a consultant or hire someone to help follow up the outsourcing project. For outsourcing of development work, rate your organization’s competence within the following areas:

  • Project management (budgets, schedule, communications, project risk governance, etc)
  • Security: do you know enough to understand what cyber threats you need to worry about during dev, and during ops? Can you ask the right questions to make sure your dev team follows good practice and makes the attack surface as small as it should be?
  • Code development: do you understand development, both on the organizational and code level? Can you ask the right questions to make sure good practice is followed, risks are flagged and priorities are set right?
  • Operations: Do you have the skills to follow-up deployment, preparations for production logging, availability planning, etc?
  • User experience: do you have the right people to verify designs and user experiences with respect to usability, accessibility?
  • Privacy: do you understand how to ensure privacy laws are followed, and that the implementation of data protection measures will be seen as acceptable by both data protection authorities and the users?

For areas where you are weak, consider getting a consultant to help. Often you can find a generalist who can help in more than one area, but it may be hard to cover them all. It is also OK to have some weaknesses in the organization, but you are much better off being aware of them than running blind in those areas. The majority of the follow-up would require competence in project management and code development (including basic security), so that needs to be your top priority to cover well.

Work follow-up

Now we are going to assume you are well-prepared – having put down good requirements, planned on a follow-up structure and that you more or less have covered the relevant competence areas. Here are some hints for putting things into practice:

  • Regular follow-up: make sure you have formal follow-up meetings even if you communicate regularly on chat or similar tools. Take minutes of the meetings and share them with everyone. Make sure you write the minutes yourself – don’t empower the supplier to determine priorities; that is your job. All meetings should be called with an agenda so people can be well prepared. Here are topics that should be covered in these meetings:
    • Progress: how does it look with respect to schedule, cost and quality
    • Ideas and suggestions: useful suggestions, good ideas? If someone has a great idea, write down the concept and follow-up in a separate meeting.
    • Problems: any big issues found? Things done to fix problems?
    • Risks: any foreseeable issues? Delays? Security? Problems? Organizational issues?
  • Project risk assessment: keep a risk register. Update it after follow-up meetings. If any big issues are popping up, make plans for correcting them, and ask the supplier to help plan mitigations. This really helps!
  • Knowledge build-up: you are going to take over an application. There is a lot to be learned from the dev process, and this know-how often vanishes with project delivery. Make sure to write down this knowledge, especially from problems that have been solved. A wiki, blog, and similar formats can work well for this, just make sure it is searchable.
  • Auditing is important for everyone. It builds quality. I’ve written about good auditing practices before, in the context of safety, but the same points are valid for general projects too: Why functional safety audits are useful.

Take-over

  • Make sure to have a factory acceptance test. Make a test plan. This plan should include everything you need to be satisfied with before saying you will take the product over:
    • Functions working as they should
    • Performance: is it fast enough?
    • Security: demonstrate that included security functions are working
    • Usability and accessibility: good standards followed? Design principles adhered to?
  • Initial support: the initial phase is when you will discover the most problems – or rather, your users will discover them. Having a plan for support from the beginning is therefore essential. Someone needs to pick up the phone or answer that chat message – and when they can’t, there must be somewhere to escalate to, preferably a developer who can check if there is something wrong with the code or the set-up. This is why you should probably pay the outsourcing supplier to provide support in the initial weeks or months before you have everything in place in-house; they know the product best after making it for you.
  • Knowledge transfer: the developers know the most about your application. Make sure they help you understand how everything works. During the take-over phase make sure you ask all questions you have, that you have them demo how things are done, take advantage of any support contracts to extend your knowledge base.

This is not a guarantee for success – but your odds will be much better if you plan and execute follow-up in a good manner. This is one way that works well in practice – for all sorts of buyer-supplier relationship follow-up. Here the context was software – but you may use the same thinking around ships, board games or architectural drawings for that matter. Good luck with your outsourcing project!

Comments? They are very welcome, or hit me up on Twitter @sjefersuper!