Social engineering is the key attack vector in most cyber attacks. In fact, almost every successful compromise starts with an e-mail. In spite of spam filters, password policies and never-ending e-learning on the dangers of links in e-mails, we still keep falling for those malicious credential harvesting campaigns. What more can we do?

Let’s start by looking at why we are so eager to click those links. Is there something in human nature that nudges us towards risky behaviors online? And – if we understand those natural urges – can we do something with the way we design applications and user experience that will reduce the number of people getting phished?

Photo by Simon Abrams on Unsplash

The problem with the “dangerous link” is usually not that we don’t know links can be dangerous. We may even have learned from our local cybersecurity expert to hover over a link to reveal its true target – at Cybehave we cover this in a module in our e-learning system, and it does have its place in the scheme of things. More often, though, the problem is attention. We are busy people who want to get on with our jobs. Our goal when using a computer is not to verify that links are safe (unless you work in cybersecurity, perhaps) – it is to get something done: to book that flight, to read that product specification PDF, or something similar. We have task-related goals we want to reach as fast as we can, and usually lots of them – perhaps more than we can deal with in a normal 8-hour day. To detect that a download link that looks like it is from microsoft.com actually points to scary-domain.biz, we need to stop and think before we click. Whether we actually do that depends on a number of things:

  • How much task overload do we have?
  • Did we get enough sleep last night?
  • Do we have addiction-like behaviors – like internet addiction?
  • Do we feel positive or negative about IT security messages?

Obviously our ability to stop and think before we click can be influenced by a number of things – primarily playing to our feelings. So here’s a question – can we get around that difficulty by creating designs that are less inviting to abuse?

The username and password monster

Most applications still authenticate with only a username and a password – and where two-factor authentication is offered as an optional feature, very few users actually turn it on. With a simple username-and-password login workflow, it is often easy for an attacker to copy the design of the login page and send a convincing email linking to a fake copy of it to steal the username and password. How can we stop this from happening?

Lately there has been a lot of talk online about passwordless authentication. One such possibility is WebAuthn, a very secure standard that you can read about in this Ars Technica article. The only problem? It isn’t really in widespread use yet.

Another solution that is very easy to implement is the “magic link” – like the one offered by Slack. Create a random access token and e-mail a one-time link to the registered user’s email address. When the link is clicked, the user is logged in – no questions asked. This relies on the user having secured his or her email account properly.

The best thing about the magic link pattern is that it removes the “phishing for passwords” attack vector. To abuse the workflow the attacker would need to take over the email account. Preferably any email account used in this manner should be protected by two-factor authentication.
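The magic-link flow described above can be sketched in a few lines. This is a minimal illustration, not Slack's actual implementation: the in-memory token store, the 15-minute lifetime, and the `app.example.com` URL are all assumptions made for the example; a real service would persist tokens in a database or cache and send the link by email.

```python
import secrets
import time

# In-memory token store for illustration only; a real app would use a
# database or cache. Maps token -> (email, expiry timestamp).
_tokens = {}

TOKEN_TTL_SECONDS = 15 * 60  # assumed lifetime: links expire after 15 minutes


def issue_magic_link(email, base_url="https://app.example.com/auth"):
    """Create a single-use login token and return the link to email out."""
    token = secrets.token_urlsafe(32)  # cryptographically strong random token
    _tokens[token] = (email, time.time() + TOKEN_TTL_SECONDS)
    return f"{base_url}?token={token}"


def redeem_magic_link(token):
    """Return the authenticated email if the token is valid, else None."""
    entry = _tokens.pop(token, None)  # pop makes the token single-use
    if entry is None:
        return None
    email, expires_at = entry
    if time.time() > expires_at:
        return None  # expired link
    return email
```

Note the two properties that make the pattern safe: the token is unguessable (`secrets.token_urlsafe`), and it is consumed on first use, so a leaked old link is worthless.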

Other things we can do

In many cases it may be unrealistic to completely remove username and password based authentication. What can we then do to reduce the risk of phishing?

Perhaps we could nudge the user to check that the form is actually on the domain we expect it to be? Here’s an example from a Cybehave web app.

https://app.cybehave.com/login

Here the user is asked to check the URL in the browser – to see that the page is indeed cybehave.com.
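Server-side, such a nudge can be as simple as deriving the expected hostname from the login URL and showing it next to the form. A minimal sketch (the wording of the hint is made up for the example):

```python
from urllib.parse import urlsplit


def login_hint(login_url):
    """Build a short security nudge reminding users to verify the domain
    shown in the address bar before typing their password."""
    host = urlsplit(login_url).hostname
    return (f"Security check: make sure the address bar shows {host} "
            "before entering your password.")
```

Because the hint is generated from the genuine URL, a pixel-perfect phishing copy that reuses the text will tell users to look for a domain the fake page cannot show.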

Another thing we can do is encourage the use of a password manager such as LastPass or 1Password. A password manager will offer to fill in passwords on known domains but will not do so on a phishing page, so someone who is used to relying on one is less likely to be phished.

To further support password managers, we can also redirect the well-known URL /.well-known/change-password to the password reset form – a convention that makes it easier for password managers to take people straight to a password change after a compromise.
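The redirect itself is a one-liner in any web framework. Here is a framework-agnostic sketch as a pure routing function; the W3C "A Well-Known URL for Changing Passwords" draft defines the well-known path, while the `/reset-password` target is a made-up example:

```python
# Map the well-known change-password URL to the real password reset form.
WELL_KNOWN_CHANGE_PASSWORD = "/.well-known/change-password"
PASSWORD_RESET_FORM = "/reset-password"  # hypothetical path for this sketch


def route(path):
    """Return (status, location): a 302 redirect for the well-known URL,
    or (200, None) for any other path."""
    if path == WELL_KNOWN_CHANGE_PASSWORD:
        return 302, PASSWORD_RESET_FORM
    return 200, None
```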

Key takeaways about UX and security

To summarize this post:

  1. Make it easy for users to avoid phishing by thinking about security in user workflows, especially authentication
  2. Avoid the use of passwords where appropriate
  3. Nudge users to build stronger security habits by adding small hints in the UI
