6 things everyone can do to avoid being hacked by cyber criminals

Protecting your personal data is important, whether you are a teenager or in retirement. Many people are unsure what they can do to avoid becoming victims of internet fraud. Cyber criminals rely heavily on phishing attacks – email scams that trick you into clicking a link that downloads a virus, or into opening a malicious attachment. This is by far the most common way to steal someone's data and abuse it.

Keep the criminals away from your personal data!
There are two common ways hackers steal your money:

  • They hold your computer hostage using a type of virus known as ransomware, or cryptovirus. It encrypts your files with a key only the attackers hold, and they demand money to unlock them. Sometimes they make you pay several times and never give your files back anyway.
  • They steal your payment data, such as credit card details. Malware that monitors your purchases can send your credit card data to the hacker, who then abuses the card by buying things, or by paying himself through a merchant account set up with a payment processor such as PayPal.

The question is: what can you do to avoid this? The following list contains roughly the same advice that big companies offer their employees as cyber awareness training. Following these 6 rules will greatly reduce your exposure to this type of cyber attack:

  1. Always be critical of e-mails you receive, and don't open links or attachments you are not sure about. Check the actual internet address behind a link and see if it makes sense before opening it – copy and paste it into your browser instead of clicking it. Don't visit the site if it looks suspicious.
  2. Keep all of your software up to date. Software updates often contain security fixes that remove the vulnerabilities hackers need to attack your software. Only use software from reputable sources.
  3. Don't use open public wifi without a virtual private network (VPN). A VPN encrypts your data traffic to and from the internet, so that hackers on the same network cannot read it.
  4. Always have antivirus software running on your computer, and a firewall.
  5. Regularly back up your data. You can use a USB drive for this, and disconnect it when you are done backing up. This way hackers can’t lock your files away from you, because you have a safe backup they cannot reach.
  6. Don't use the same password for many online sites. Sites are hacked quite often, and if you have used the same email and password everywhere, the criminals get access to everything. ID theft is a big problem, partly because people reuse passwords.
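Point 5 on the list can even be automated with a small script. Here is a minimal sketch in Python – the paths are placeholders, so adjust them to where your files live and where your USB drive is mounted:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup(source: str, usb_mount: str) -> Path:
    """Copy a folder to the USB drive under a timestamped name."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    destination = Path(usb_mount) / f"backup_{stamp}"
    shutil.copytree(source, destination)  # fails loudly if destination exists
    return destination

# Example with placeholder paths – adjust to your own system:
# backup("/home/alice/Documents", "/media/usb-drive")
```

Run it, then unplug the drive – a backup the ransomware cannot reach.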

What are the things that need to be considered when doing a risk assessment?


Answer by Håkon Olsen:

The process can be summed up in three layers – the continuous flows of stakeholder communication, risk assessment itself, and the risk treatment.



Risk assessments can be performed at many levels of granularity, but the same general process structure can be used for all of them. The approach described in the ISO 31000 standard is generally recognized as best practice. It involves:

  • Defining the context
  • Identification of risk factors
  • Analysis of risk (likelihood and impact)
  • Evaluation of risk
  • Treatment planning
  • Monitoring of risk and treatment
  • Stakeholder communication and consulting

The context includes the scope of your assessment, who the stakeholders are, what risk levels are considered acceptable and unacceptable, and how the value chain is affected by the risk exposure.

Identification of risk factors can be done in many ways, but the use of “guidewords” is very common – hooks to get the ideas running. This is a form of guided brainstorming that takes past experience into account while avoiding disregarding events that have not yet happened. Typical guidewords for the risks to an office building could be: fire, bomb threat, hurricane, power outage, robbery. The list of guidewords must be tailored to the scope, and to the context in general.

Analysis of risk means assessing how likely each scenario is, and what the potential impact can be. This can be done in a purely qualitative way, or it can be a sophisticated mathematical modeling exercise involving computer simulations and advanced statistics. The point is to arrive at an assessment of how likely something is to happen, and how bad it would be.

In evaluation of the risk you sort out which risks must be reacted to, and which ones you can disregard. You typically prioritize risks that are both likely and have a potentially serious outcome; these risks are usually unacceptable to leave as they are. Then there is an intermediate ground of risks that are somewhat likely, or somewhat bad, or both, that you may want to do something about. In many areas these risks are treated if actions can be found that reduce them without adding excessive cost – often referred to as keeping risk ALARP (as low as reasonably practicable – a UK legal concept).
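One common way to make the evaluation step concrete is a risk matrix with scored likelihood and impact. Here is a minimal Python sketch; the 1–5 scales and the thresholds are illustrative assumptions, not taken from ISO 31000:

```python
# Minimal risk-matrix sketch: likelihood and impact on 1-5 scales.
# The thresholds below are illustrative assumptions, not from any standard.

def evaluate(likelihood: int, impact: int) -> str:
    """Classify a risk as unacceptable, ALARP (reduce if practicable) or acceptable."""
    score = likelihood * impact
    if score >= 15:
        return "unacceptable"   # must be treated
    if score >= 6:
        return "ALARP"          # treat if the cost is reasonable
    return "acceptable"         # monitor only

risks = {
    "fire": (2, 5),
    "power outage": (4, 2),
    "robbery": (1, 3),
}
for name, (lik, imp) in risks.items():
    print(f"{name}: {evaluate(lik, imp)}")
```

The point is not the exact numbers but having an explicit, agreed rule for sorting risks into these three bins.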

Treatment planning is all about what you do about your risks. You can build barriers that reduce the likelihood of the event happening (automatic pressure relief valves on pressure cookers), or that reduce the impact (sprinklers to fight fires); this is called mitigation. In many cases you can transfer the risk to other parties by buying insurance – but this is not always possible. You can also avoid the risk, if you cannot find a reasonable way to deal with it, by stopping the risky activity or redesigning whatever it is you do. Finally, you may choose to accept a high risk because you think the rewards are great enough to justify it.

Over to practice: you need to monitor the risk level and the integrity and quality of the barriers you have built. If risk is building up, you need to take action. This is a continuous activity – something banks, chemical factories and airlines do a lot of.

Finally, and perhaps one of the most overlooked parts of risk assessments, is communication. You have a lot of stakeholders, whom you should have identified in the context description. Keeping them involved and engaged throughout your asset's lifecycle is key to managing risk effectively. You can read more about the people management aspect of stakeholder engagement here: 4 steps to engaging people in risk conversations (my blog – lots of material about risk assessment there, have a look around!)


Are we ready, security-wise, to ditch the office?

Here’s an article I shared on LinkedIn some time ago – it spurred some interesting discussion about how digital transformation is changing the way we work and how we look at attendance. A key question not discussed in that piece is: “are people capable of protecting corporate data and intellectual property when the social fabric of the physical office is dismantled?” I’d love to hear your thoughts about that!

Technically we can work from anywhere – but are people able to maintain the necessary level of information security? 

Excerpt: Telecommuting has been a thing for some years. It works well for some, and not at all for others. Technology has come a long way, and it should now be possible to interact and work remotely in most types of “knowledge work”. In spite of this, we just can’t make it really work. More often than not, when trying to have a video conference at work, we spend 20-25 minutes setting the meeting up and getting everything to work – usually because someone at the other end doesn’t know how to use his or her equipment. Clearly, technology is not enough by itself; people need to learn how to use it. And, unfortunately, “professional” communication equipment has extremely bad UX design. Compare a top-of-the-line conferencing setup with Skype or Google Hangouts – there is a real difference in ease of use, and in the feel of the whole thing.

Read the rest of the article on LinkedIn: Do we need physical attendance at work?

4 steps to engaging people in risk conversations

Risk management is about managing uncertainty; it is the planning, monitoring and handling of the unexpected. All of this happens in a specific context. You have something you want to protect from various risks, and you have the people who depend on that something. Communication with those people is key to all phases of risk management. If you cannot involve your colleagues, your suppliers and your customers in the way you deal with risk, you are going to fail. Let us first look at who the people you most likely need to deal with are.

The boss

The supplier

The workhorse

The consumer

The boss is responsible for the stuff you are trying to protect and must be involved in determining which risks are OK to take. The boss also needs to own the outcome and make sure everyone is on board pulling in the same direction. Getting the boss on your team should be a high priority.

The suppliers are all the people you depend on to do what you do and make what you make. If the suppliers don’t want to play ball, you are going to have a hard time understanding what can hit you, and you may not be able to deal with difficulties without their help. Communication can be difficult here, because the suppliers have their own context and see the world from a different mountain top than you do.

The workhorse is the doer, the expert, your colleague. These are the people you need in order to understand how things work, and the people you need to take action. If they don’t work with you on dealing with risk, you will definitely not succeed. Not all workhorses are going to want to help – this is where you need to engage through others: the boss, other workhorses who are already engaged, and perhaps even the suppliers and consumers (hopefully not).

The consumer is the customer, the client, the user. It is the people who depend on you to provide a service or product. Risks hitting you are hitting the entire supply chain, and the consumer may be the people who have the most to lose from bad risk management. The consumer may also be able to help with dealing with risks, and in resolving difficult situations. Involving the consumer in your risk management should always be a priority.

 

Your communication style must be tailored to the role of the person you are trying to involve, and to the ability of that person to contribute. If you do not think about this in advance, communication is not likely to be successful. This is why you need a plan.

Step 1: Make a communication plan

The different roles need different information to feel engaged. They may also have different interests in the asset you are trying to protect. The key to creating engagement through communication is to tailor your plan to the interests of your stakeholders. That being said, you also need this to be a two-way street: you need feedback, and you need to gather information. Your communication plan shouldn’t be a long and formal document; a simple plan where you think through the key aspects of communication with each stakeholder is enough. The key steps are:

  • Identify the stakeholders and roles: who are they, what are their roles, what interest do they have in your asset, how much time do they have to support you and what do you need each of them to do?
  • Plan what each person needs to be involved in and how
  • Plan how you distribute information in various channels to the stakeholders – a matrix or table is a nice way of doing this in a condensed format. Think face-to-face meetings, town-halls, e-mails, intranet/web spaces, social media, phone calls, whatever channel you are planning to use. Keep in mind that effective communication works best in the channel the receiver prefers
  • Set up a schedule for how often you are going to communicate with each stakeholder – and make sure you don’t make it a “set and forget”

Step 2: Value relationships as much as results

Risk management is people management. Risk managers are often quite technocratic by nature and prefer to focus on results and technical matters. This is of course necessary, but you also need to value the relationships you have with the people you are communicating with. This means spending time with people, thanking them for their contributions and actively listening to what they have to say – even when it is not related to your risk management activities.

Step 3: Don’t give pole position to compliance

Compliance is important but far too often risk management is reduced to a checklist exercise of controls. This mindset is detrimental to good communication and can contribute to increased risk. The most important risk in risk assessments is overlooking the obvious – and the reason people do this is because they are not engaged in the process. Don’t forget compliance but use it as a driver for continuous improvement instead of being the focus in every activity.

Step 4: The 30,000-foot view

At regular intervals, you should take a step back and reflect. Ask yourself open-ended questions and try to find answers based on your experience with the various stakeholders in the project.

  • What did Mr. X contribute?
  • What did Ms. Y not tell me and why not?
  • Do I have what I need?
  • Who is satisfied with their involvement and who is dissatisfied? Why?
  • What do I need to change to get what I need?
  • What do I need to change to make sure every stakeholder feels valued?

If you follow these steps, things may still go wrong. The chances are, however, that you will get much more useful involvement and much more engagement from the people you need to deal with than if you go about communication in an unplanned, ad hoc way.

How do you update failure rates and test intervals based on limited data observations?

There is one post on this blog that consistently receives traffic from search engines; namely this post on the effect of uncertainty on PFD calculations in reliability engineering: https://safecontrols.wordpress.com/2015/07/21/uncertainty-and-effect-of-proof-test-intervals-on-failure-probabilities-of-critical-safety-functions/


It is interesting to see the effect on the dynamic probability of failure on demand from a theoretical perspective. Consider now instead the problem of collecting operational data and adjusting the test intervals to optimize uptime while staying within the PFD constraints given by the SIL requirement. To do this in a robust manner, one must take the uncertainty in the data into account. We are seeking to solve this problem:

maximize the test interval τ, subject to the upper confidence bound on PFDavg(τ) ≈ λDU · τ / 2 being at most a set limit C

In other words: maximize the test interval while keeping the upper confidence bound on the average value of the PFD below the set value C, given the uncertainty (standard deviation) in the rate of dangerous undetected failures. To make things more practical, we consider a simple SIL loop where the PFD value is dominated by the final element, and we make the simplification, for the sake of the calculation, that the loop consists of a single component. Let us then assume we have 20 valves of the same type that have operated for an aggregated 400,000 hours, with a theoretical failure rate of 10⁻⁶ per hour from the design data. We have not had any real demand trips, and the original test frequency was once per year. Testing has revealed that one valve had a dangerous failure in its first year of operation. Can we use this to extend the test interval without increasing the risk to our assets?

A naïve estimate of the failure rate based on our observations alone (one failure in 400,000 aggregate hours) gives 2.5 × 10⁻⁶ per hour – considerably worse than the a priori estimate from the design data. However, the naïve estimate rests on very little data, while the design data come from a much larger data set, and neither should be disregarded if we wish to be reasonably sure about our decisions: a 90% confidence interval based on the observed data alone spans mean times to failure from roughly 10 to 900 years – a significant spread. SINTEF has released a report that gives a simplified approach to updating the failure rate. This approach requires you to define a conservative estimate of the failure rate based on the a priori data – often chosen to be double the original failure rate: λDU_CE = 2 λDU. Interpreting the conservative estimate as the prior mean plus one standard deviation, the uncertainty parameters of a Gamma prior are then calculated as

α = (λDU / (λDU_CE − λDU))² and β = λDU / (λDU_CE − λDU)²
Then the combined (updated) failure rate estimate is given as

λDU,comb = (α + n) / (β + t)

where n is the number of dangerous failures observed, and t is the aggregate operational time. With λDU_CE = 2 λDU the prior parameters reduce to α = 1 and β = 1/λDU = 10⁶ hours, so our example gives

λDU,comb = (1 + 1) / (10⁶ + 4 × 10⁵) ≈ 1.4 × 10⁻⁶ per hour

What is going on here – the combined failure rate is higher than the a priori? The expected number of failures in 400,000 hours with an a priori MTTF of 1 million hours is clearly less than 1 – and we had one failure. So the estimate is sound. SINTEF’s methodology will give you lots more details, including credibility intervals for the Bayesian updates.

So – now to the test intervals. If the new combined failure rate is accepted, we should probably test more often, right? It depends; SINTEF argues that it is important to be conservative when updating test intervals, to make sure insufficient data do not lead us astray. They propose the following simple rule:

If the new failure rate is less than half of the original failure rate, and the upper 90% confidence bound on the new failure rate is lower than the a priori failure rate, the test interval can be doubled.

If the failure rate is more than double the original failure rate, and the lower 90% confidence bound on the new failure rate is higher than the a priori failure rate, the test interval can be halved (e.g. from one year to every 6 months).

This means that in our case – the test interval stays the way it is.
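The whole update fits in a few lines of code. The sketch below follows the formulas as presented above and is meant as an illustration, not a substitute for the SINTEF methodology – in particular, the 90% confidence-bound checks in the rule are omitted for brevity:

```python
# Bayesian update of a failure rate with a Gamma prior, as sketched above.
# Assumption: the conservative estimate lambda_ce = 2 * lambda_du is taken
# as the prior mean plus one standard deviation (see the SINTEF report for
# the exact parameterization).

def gamma_prior(lambda_du: float, lambda_ce: float):
    sd = lambda_ce - lambda_du          # prior standard deviation
    alpha = (lambda_du / sd) ** 2       # shape
    beta = lambda_du / sd ** 2          # rate parameter, in hours
    return alpha, beta

def updated_rate(lambda_du: float, n_failures: int, t_hours: float) -> float:
    alpha, beta = gamma_prior(lambda_du, 2 * lambda_du)
    return (alpha + n_failures) / (beta + t_hours)

lam = updated_rate(1e-6, n_failures=1, t_hours=400_000)
print(f"combined rate: {lam:.2e} per hour")

# Simplified test-interval rule from the text (confidence-bound check omitted):
if lam < 0.5 * 1e-6:
    print("candidate for doubling the test interval")
elif lam > 2 * 1e-6:
    print("candidate for halving the test interval")
else:
    print("keep the current test interval")
```

For our example this prints a combined rate of about 1.4 × 10⁻⁶ per hour, which is neither below half nor above double the original rate – so, as stated, the interval stays put.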

Why two-factor authentication is not foolproof but still good to use

A few days ago, I asked my followers on Twitter if they used two-factor authentication on Twitter, or if they knew what two-factor authentication was in the first place. The result: almost no one is using it. Most accounts that are hacked are hacked because users choose terrible passwords and reuse them on every site where they have an account. This means that whenever some news site with terrible security is hacked, the hacker gains access to more or less all of these users’ accounts, including e-mail, social media and their favorite online stockbroker… This is admittedly bad.

Two-factor authentication comes to the rescue: with it, you cannot log in with just the username and the password. You also need a second factor, like an app on your phone, a confirmation SMS sent to you, or a code generator device. If the hacker does not have access to this second factor, he or she cannot take over your account. That is, at least, the design intent.

So, how can a hacker bypass the need for the second factor, or get hold of your password in the first place? They can use a good old phishing attack: set up a web page that looks like the one you want to log in to, trick you into going to the fake site and entering your login data, and then use these to access the real page. The process works as follows.

First, the attacker tricks the user into visiting the fake login page, typically by sending some form of e-mail asking them to log in and update something. The user submits username and password in the fake view, which the attacker forwards to the real site in order to trigger the second-factor confirmation. The real site, believing it is communicating directly with the legitimate user, sends the confirmation code to the user’s cell phone. The user then submits this code through the fake login view. The attacker now has the confirmation code and can take over the account. Depending on the type of site, the attacker can execute actions, or at least gain insight into the user’s personal data. For very poorly secured financial applications they can even steal the user’s money.
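To make the flow concrete, here is a toy simulation of such a relay in Python. All names, passwords and the site model are made up for illustration – no real service works exactly like this:

```python
import random

# Toy model of a real-time phishing relay against SMS-based 2FA.
# Everything here is fabricated for illustration.

class RealSite:
    def __init__(self):
        self.pending = {}  # username -> one-time code "sent" to the user's phone

    def start_login(self, username, password):
        # Assume the stolen password is correct; the site texts a one-time code.
        code = f"{random.randint(0, 999999):06d}"
        self.pending[username] = code
        return code        # in reality this goes to the victim's phone, not the caller

    def finish_login(self, username, code):
        return self.pending.get(username) == code

real = RealSite()

# 1. Victim enters credentials on the FAKE page; the attacker captures them.
victim_user, victim_pw = "alice", "hunter2"

# 2. Attacker immediately replays them against the real site,
#    which sends the confirmation code to the victim's phone.
sms_code = real.start_login(victim_user, victim_pw)

# 3. Victim, still on the fake page, types the code in; the attacker relays it.
attacker_logged_in = real.finish_login(victim_user, sms_code)
print("attacker session established:", attacker_logged_in)
```

The takeaway: the second factor is only checked by the real site, so anyone who can relay it in real time gets in – which is exactly why you still need to check the address bar.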

Of course, this is much more complicated to pull off than simply stealing a username and password, or brute-forcing a weak password, so two-factor authentication makes a lot of sense. But like all barriers you put in place in risk management, it is not a magic pill that solves every headache. You still need to keep your guard up: don’t fall for phishing scams, don’t use the same password on multiple sites, use strong passwords, and keep up to date on the security features of the sites that are critical to you. Sites like Facebook and Google’s products can send you an e-mail or a text whenever there is a new login, with the location and type of computer/browser. This is a very good extra layer of security.

To sum it up: use two-factor authentication, but also don’t forget to follow other common good security practices.

Major discount grocery store chain (REMA 1000) exposes their whole customer database

REMA1000 did not use any form of authentication on their customer database used by a loyalty program. They claim that this is nothing to worry about. I disagree. Identity theft, blackmail and potential surveillance are threats worth worrying about.

REMA1000, a Norwegian discount store chain, recently released a new customer loyalty program they named ‘Æ’. The letter ‘Æ’ is also the local word for ‘I’ in the Norwegian dialect in the area where Rema1000 is headquartered (Trondheim, the city where I live).

 

The Æ app promising you discounts. And previously it was exposing your data to the world.

 

The way the loyalty system works is that you install an app on your smartphone and register your debit card in the app. Whenever you make a purchase, the chain registers what you have bought, and you are offered a 10% discount on the 10 items you spend the most money on, as well as on all vegetables and fruits. Sounds like a sweet deal, right?

The problem is that the app was launched without requiring any form of authentication between the app and the backend database, as reported by the Norwegian newspaper Aftenposten.no today. The vulnerability would allow anyone to download customer data from the database, down to each item purchased, as well as key customer data such as phone numbers and partial credit card numbers. The vulnerability was discovered by infosec professional Hallvard Nygård, who spoke to Stavanger Aftenblad (another Norwegian newspaper) about the issue.

In a comment to Aftenposten, REMA1000 claims that they “take the situation seriously”, while accusing the security researcher of having obtained access to the information in an illegal way. They say customers have no reason to worry about the security of the data they leave with the stores.

This attitude shows a lack of understanding of security risks on REMA1000’s part. First of all, a lack of authentication between frontend and backend in a web application is close to inexcusable; it would be discovered by any reasonable web app security scanner. Protecting database access through secure authentication is a core concept of web application security and is taught in any introduction to secure development at your nearest university. Even more worrisome is that REMA1000 claims customers have nothing to worry about: identity theft, blackmail and surveillance are pretty serious things to worry about if you ask me. On top of this, REMA1000 is seemingly looking to blame the security researcher for reporting the vulnerability.
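For the developers among you: the missing control is as basic as rejecting API requests that do not carry a valid credential. A minimal sketch using only Python’s standard library – the bearer-token scheme here is a stand-in for real session management, not how any particular backend works:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

VALID_TOKENS = {"secret-token-for-alice"}  # stand-in for real session management

class CustomerAPI(BaseHTTPRequestHandler):
    """Toy customer-data endpoint that refuses unauthenticated requests."""

    def do_GET(self):
        auth = self.headers.get("Authorization", "")
        token = auth.removeprefix("Bearer ").strip()
        if token not in VALID_TOKENS:
            self.send_response(401)  # no valid credential -> no data
            self.end_headers()
            return
        body = json.dumps({"purchases": ["milk", "bread"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging in this demo
        pass

# To serve: HTTPServer(("127.0.0.1", 8000), CustomerAPI).serve_forever()
```

The Æ backend apparently skipped even this check, so the endpoint handed customer data to anyone who asked.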

 

How machine learning can help the spread of fake news

During the American election campaigns in 2016, fake news was the new big thing, with Russia being accused of orchestrating an intelligence campaign to influence the outcome of the presidential election. Regardless of what Russia did or did not do, spreading such a message efficiently requires both that traditional media pick it up to grant it credibility, and that people share it on social media platforms for maximum coverage. Machine learning can play many roles in this; we will look at an obvious use case, which works in much the same way as recommendations on Netflix or Amazon – feature-based labelling.

Any “news” article will have several features. Examples of features are:

  • Language style (using a readability metric)
  • Length of article (word count)
  • Use of celebrities (none, light, medium, heavy)
  • Visual intensity (none, light, medium, heavy)
  • Shock factor (none, light, medium, heavy)

Let us say we consider a news article successful if it receives more than 100k shares on Facebook, or if it is quoted on CNN. So, our news articles can be SUCCESSFUL or NOT SUCCESSFUL depending on these criteria.

One simple but often efficient way to use machine learning to understand what makes an article successful is to train a decision surface on existing data. Say we have a collection of 200 news articles for which we can check whether they were successful or not (they are labelled). This is our training set. Based on it, we can use statistics to find out which features help us predict which label to apply to which data point. If we boil this down to two features (language style and word count), we can plot the articles in a scatter plot and inspect the training data visually. By analyzing the training data, we seek to learn how to exploit the features to make our fake news spread.

What we learn from simply looking at the plot is that the article should be fairly short, and intermediately difficult in readability (seems to be somewhere between 60 and 80 on the Flesch index, corresponding to articles that can be read by high school graduates).

Using a classification algorithm like the Naïve Bayesian classification algorithm, we can generate a decision surface based on our data.

Everything that falls into the red region will be predicted as successful. By giving up the ability to plot the features in a single scatter plot, we can feed the algorithm our full feature set, allowing it to figure out more factors we should care about when creating our fake news campaign.
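Here is a sketch of that classification step using scikit-learn’s GaussianNB. As in the post, the training data are randomly generated stand-ins, so the numbers mean nothing in themselves:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)

# Invented training data: 100 "successful" articles (readability ~70, short)
# and 100 "not successful" ones (readability ~40, long).
successful = np.column_stack([
    rng.normal(70, 5, 100),      # Flesch readability score
    rng.normal(500, 100, 100),   # word count
])
unsuccessful = np.column_stack([
    rng.normal(40, 10, 100),
    rng.normal(1500, 300, 100),
])

X = np.vstack([successful, unsuccessful])
y = np.array([1] * 100 + [0] * 100)  # 1 = SUCCESSFUL, 0 = NOT SUCCESSFUL

model = GaussianNB().fit(X, y)       # the decision surface lives inside the model

# Predict for a new article: readability 72, 450 words.
print(model.predict([[72, 450]]))    # lands deep in the "successful" cluster
```

Adding more features (celebrity use, shock factor, and so on) is just more columns in X; the classifier does not care that you can no longer plot them.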

This shows that the same methods used to drive recommendation engines can also be used to learn how to best influence people – useful both in marketing and in trying to “rig elections”. By the way, this simple labeling of data using classifiers is one arm of machine learning, known as supervised learning. The data set used in this post was randomly generated – so it didn’t really teach you how to create efficient fake news articles – but it did show you how you could find out.

Automation and identity loss

Everybody automates, and everything can be automated. We are giving up human contact to achieve higher efficiency. Companies will need fewer workers, and most senior managers, trained to view this through the “shareholder value” lens, see it as a great development that reduces their companies’ cost base. As an example, Rune Bjerke, CEO of DNB, the biggest Norwegian bank, recently said he is convinced that the bank’s staff will be halved within 5 years.

For the consumer this means that dealing with the bank is a personalized experience based on collected data and machine learning instead of human interaction. This may be efficient but it leaves less room for flexibility and for a more meaningful and real customer relationship. 

If we push hard on automating everything, most of the jobs humans do today will become unnecessary, and interactions with firms will happen by proxy through computers. We need to start thinking about the path we are taking. Today there is a vacuum in regulation, and more generally in the thinking around how people can find purpose in life when jobs are few and our identities can no longer be tied to our professional titles.

40 tracking cookies from 2 news sites: this is why you need VPN

You have probably (hopefully) been told that open wifi is insecure, and that you should use a virtual private network to encrypt and protect your traffic. Most people don’t do this, perhaps because it seems hard to do?

Opera Software now offers a free VPN. It is built into the desktop browser, and available as a standalone app on smartphones. It also comes with the ability to block tracking cookies – cookies that track the pages you look at on the web, for commercial purposes (or so they claim). An old but nice nontechnical write-up on tracking cookies can be found at geek.com. The difference from back then is that big data and AI have amplified trackers’ ability to spy on you and analyze your online life.

How many trackers are you exposed to when visiting high-traffic news sites? Here’s what Opera VPN reported after visiting CNN.com and Bloomberg.com without clicking a single link on those pages.

40 trackers? I have no interest in feeding ad networks with my online habits. I suggest you go ahead and activate the VPN and cookie filters on your mobile in addition to your desktop – also when browsing on secure networks!