Lancaster hosted the Cyber Security Challenge this year.  I spoke about the value of considering users in cyber security.  Some kind soul transcribed my speech — here it is (sans graphics):

Good afternoon everyone.  I’m Paul Taylor, co-director of the security center here at Lancaster, and a psychologist whose research focuses on developing methods for measuring human behavior and making inferences from that behavior about people’s intent. It might be behavioral signs that a person’s going to concede in a negotiation, signs that they are lying, signs that they are distressed, and so on.  I’m sorry that I can’t be there with you in person today—I hope you’re having a productive few days and enjoying Lancaster.  I’m really pleased, however, to still have the opportunity to share with you a few thoughts about the role of users in cyber security.

Typically the story goes something like this: Users, huh, they’re such a thorn in the system.  Leaving passwords under mouse-mats, clicking links that are quite clearly spam, using Facebook as though it’s only going to be read by nice people.  They are the reason why our technology fails, right? Basically, we have to build bigger and better systems so that the error-prone human can be managed.

Well, although there are elements of truth in all that, I want to flip that idea on its head.  I want to present to you a few examples that encourage you to see humans as an asset.  A component that, when used effectively in the system, can promote cyber security.

I’m not the first to focus on such positives.  There are many small-scale examples of this already in the mainstream.  For example, online banking systems are already taking advantage of human associative memory – the idea that places are associated with sights, smells, memories, and so on, in ways that cannot be guessed or cracked through an algorithm.  In these systems, rather than asking customers to present a password, the bank shows a picture and asks them to recall an associated memory.  Human memory affords an opportunity for good cyber security that other approaches do not.
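A picture-cued scheme of this kind can be sketched in a few lines.  Everything below is an illustrative assumption rather than any real bank’s implementation: the `enroll`/`verify` names, the free-text association, and the choice to store a salted hash of the answer are all mine.

```python
import hashlib
import secrets

# Hypothetical sketch of picture-cued authentication: at enrollment the
# customer sees an image (say, their childhood street) and records a
# free-text association; the system stores only a salted hash of it.

def enroll(answer: str) -> tuple[bytes, bytes]:
    """Store a salted, slow hash of the customer's association."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", answer.lower().encode(), salt, 100_000)
    return salt, digest

def verify(answer: str, salt: bytes, digest: bytes) -> bool:
    """At login, replay the same picture cue and check the recalled answer."""
    candidate = hashlib.pbkdf2_hmac("sha256", answer.lower().encode(), salt, 100_000)
    return secrets.compare_digest(candidate, digest)

# Enrollment against the picture cue; login later replays the same cue.
salt, digest = enroll("grandma's bakery")
```

The point of the sketch is that the secret is an association only the customer plausibly holds, while the server never needs to store it in recoverable form.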

There are actually two ways to argue for a positive view of humans in the cyber world.  The first, which we’ll get out of the way quickly, is that the assertion that humans are the problem in the system rests on flawed logic.  It’s flawed because we are not weighing the negatives against the positives, that is, how many security breaches or other cyber-problems are averted by appropriate human behavior.

Let me clarify this with an example.  I’m about to board a plane [I made this presentation remotely, from Heathrow] and, frankly, I’m rather pleased to see that a dapper set of pilots have boarded in front of me.  How keen would you be to fly if there were no pilots and the whole thing was automated?  Automation would be more accurate than pilots, and its decisions far quicker, so shouldn’t automation be better?  The problem is we have no idea, because nobody has ever determined how many errors and near misses are avoided by pilot behavior.  We know how many crashes are caused by pilots but not how many are averted.

Chances are that it’s better to have pilots because they are excellent sensemakers—they can make good judgments about novel situations that no software could anticipate.  I suggest the same is true in cyber security.  One of your tasks should be identifying ways to promote the users’ role in making sense of what is going on.  Use their sensemaking skills as an asset.

That first opportunity aside, what I really want to talk to you about today is the second possibility of using human behavior as a data point for improving security.  Psychologists have learned to tell quite a lot from user behavior online and in the workplace.  In my research I’m particularly interested in what our language use says about us.  The way in which you communicate reveals psychologically important things about your traits—who you are as a person—and your state—how you are at this present time.  For example, language use provides clues about your personality, your emotional state, the clarity of your thoughts and the extent to which you are focused on the past, the present or the future.  But it also provides other possibilities.  For example, the field of authorship attribution looks to identify a person’s linguistic fingerprint that can subsequently be identified in other pieces of text – you can match a single writer using multiple usernames in a forum, for example.  The language you use also changes depending on who you associate with.  You adopt a common social vocabulary and this can give away who you hang out with.
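To give a flavor of how a linguistic fingerprint can be compared across texts, here is a minimal sketch of the style of analysis authorship attribution builds on.  The tiny function-word list and cosine comparison are deliberate simplifications of my own; real systems use hundreds of learned features.

```python
from collections import Counter
import math

# A handful of function words.  Real stylometric systems use hundreds of
# features: function words, character n-grams, punctuation habits, etc.
FUNCTION_WORDS = ["the", "a", "of", "and", "to", "in", "i", "that", "it", "is"]

def profile(text):
    """Relative frequency of each function word — a crude 'fingerprint'."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a, b):
    """Cosine similarity between two profiles (1.0 = identical style)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

The intuition being exploited is that function-word habits are largely unconscious, so they persist across a writer’s different usernames even when the topic changes.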

So how is all this useful to our efforts to enhance cyber security?  Let me briefly talk through a few examples.

One example that you may have read about already is the work by Awais Rashid and his colleagues on detecting adults trying to lure children in online teenage chat-rooms.  They recognized that the way in which adults communicate is fundamentally different from that of a teenager and that, critically, even an adult trying to pose as a child allows some of his or her adult tendencies to seep through.  These behavioral differences therefore allow the adults’ communication to be differentiated from the children’s in a reliable way.  These distinctions can then be used to drive an early warning system that either alerts the children, or acts discreetly by alerting the police.  The critical lesson from this example, then, is that different behavioral patterns can allow us to act proactively in identifying a threat.
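As a toy illustration of the general idea — emphatically not Rashid and colleagues’ actual system — an early-warning rule might flag messages whose surface features read more adult than teenage.  The single feature and threshold below are invented for the example:

```python
def avg_word_len(message: str) -> float:
    """One crude surface feature; real systems learn many from labelled chat data."""
    words = message.split()
    return sum(len(w) for w in words) / max(len(words), 1)

def looks_adult(message: str, threshold: float = 5.0) -> bool:
    """Hypothetical decision rule: adult register tends toward longer,
    more formal words than teenage chat shorthand."""
    return avg_word_len(message) > threshold
```

In practice such a rule would be one weak signal among many, combined by a trained classifier rather than a hand-set threshold.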

The child chat-room example is static, but what if we monitor a person’s language use over time?  One of our recent projects examined the extent to which it is possible to detect insider threat—somebody acting maliciously to damage an organization or sneak out commercially sensitive material—based on their interaction with co-workers.  Here what is key is the ability to examine language over time to spot changes in behavior.  In our work we did this by running day-long simulations of an organisational environment in which we monitored multiple aspects of worker behavior—the documents they used, who they interacted with, their email content, and so on.  At the beginning of the day everybody was a co-worker.  However, part way through we offered a few people £50 if they’d sneak some information out of the system for us – as you might have guessed, no one refused!

At the point of tasking somebody to be an insider, we found that their emails became more self-focused – they used more singular than plural pronouns; showed greater negative affect – as they became more annoyed by those around them; and showed more cognitive processing as they had to deal with being an insider compared to their co-workers. At the interpersonal level, insiders showed significantly more deterioration in the degree to which their language mimicked other team members over time. Our findings demonstrate how language may provide an indirect way of identifying employees who are undertaking an insider attack.
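The self-focus signal can be illustrated with a simple pronoun ratio tracked across successive emails.  This is a hand-rolled heuristic for exposition, not the analysis pipeline we actually used:

```python
from collections import Counter

SINGULAR = {"i", "me", "my", "mine", "myself"}
PLURAL = {"we", "us", "our", "ours", "ourselves"}

def self_focus(text: str) -> float:
    """Fraction of first-person pronouns that are singular rather than plural.
    A rise in this value over successive emails is the kind of shift toward
    self-focus described above (illustrative measure only)."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    counts = Counter(words)
    sing = sum(counts[w] for w in SINGULAR)
    plur = sum(counts[w] for w in PLURAL)
    total = sing + plur
    return sing / total if total else 0.0

# Invented emails from one worker, ordered in time.
emails = [
    "We should finish our report together.",
    "We can share our notes this afternoon.",
    "I need the files for my own task; I will handle it myself.",
]
scores = [self_focus(e) for e in emails]
```

In a deployed setting the interesting quantity is the change in this score relative to the person’s own baseline, not its absolute level.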

So those are two examples of how behavior can be used as an asset to enhance cyber security.  More importantly, however, I hope you’re now a little more convinced than before that cyber security can be enhanced through the systematic capture and analysis of human behavior.