Unlocking your cyber security awareness program with insights from behavioural science. 

A conversation with Dr Alexander Stein, an expert in human decision-making and behavior, and Founder and Managing Principal of Dolus Advisors, a consultancy that helps CEOs, senior management teams, and boards address psychologically complex enterprise challenges. In conversation with Robert Scanlon (SecureClick).

How did you get interested in human decision making?

My professional background is as a psychoanalyst. I have a master’s in social science and a doctoral degree in psychoanalysis. I’m also a fully trained, licensed, and accredited practitioner with more than two decades of clinical experience.

But psychoanalysis can be used for more than treating patients. It’s one of the most powerful tools for understanding the human mind and decoding the complexities of decision making. It’s unparalleled for examining the choices people make and seeing the influences that give rise to those choices.

Shortly after I started my clinical practice, I decided to expand the scope of my professional life, to take my training and expertise beyond the walls of my consulting office and transition into being a corporate consultant. I founded Dolus Advisors in 2001. The firm focusses on helping senior executives and boards address psychologically complex enterprise challenges beyond the capabilities of conventional business consulting. We advise the people at the top of an organisation holding primary decision-making authority, responsibility, and influence over the organisational ecosystem. Over time, the practice areas have expanded beyond leadership, culture, and ethical governance to include the psychodynamics of fraud, corruption, executive malfeasance, and other abuses of power. We also address human-factor vulnerabilities in cybersecurity and work with leaders in the emerging technology space to mitigate unintended consequences in technologies that assume unsupervised decision-making functions in human affairs.

So, are humans good at making decisions?

Yes and no. It’s an incredibly broad question, of course! Part of the complexity of our decision making is that so much of it occurs off-the-radar—outside of conscious awareness—or is considered irrational in some way. There are many impingements, challenges, and impediments to making clearly thought-through, strategically objective, highly calculated decisions. There’s always a lot of static or noise both from inside and outside ourselves. We have to clarify the signal so that the best possible decision can be made. It’s a thicker project than many people recognise.

Where does this static and noise affecting decision making come from?

It’s highly individual. Everyone has internal turbulence and their own sub-frequencies that cloud the channel. We’re all constituted of our own histories and what goes on in our own psychologies and our own emotional lives. It’s about how we manage our experiences in the world and how we address problems. It’s about our relationships with risk, fear, ambition, and other subterranean influences. We also have to consider what happens in groups and teams, and the role of situational influences that can enhance or impede our decision making.

During the pandemic, many remote-working employees were put in a situation where they were often making their own decisions about IT security. How would a change of context like that affect decision making?

Remote work is a very new thing. The technology that’s enabled us to continue working through the pandemic didn’t exist before. Every organisation needs to cut itself a bit of slack with regard to the unbelievable learning curve that was required, under duress, with very little warning, to prepare their workforce for remote operations and to contend with all sorts of unforeseen consequences.

That said, the only thing that changes with remote work is the contours and the frame of what constitutes “the office.” The people involved are exactly the same. So, will people be different when they’re at home rather than the office? In terms of constructing protocols, systems, guardrails, buffers, and risk mitigants, there are of course certain technological and administrative shifts which have to be made. But at core, most of the fundamentals are the same. Of course, there are exceptions. For example, people with young children have to deal with all kinds of intrusions on their workspace, time, and attention. These sorts of distractions weren’t present in the office.



We saw a huge increase in phishing emails during the pandemic. Many organisations were shocked to discover that their employees, or indeed one of their management team, fell for such a scam. Why do people keep falling for these scams?

The fact that people are shocked is part of the problem. It ought to be as predictable as the forces of gravity. It’s intrinsic to who we are and it’s inevitable. In many regards, fraudsters, scammers, and social engineers who exploit people’s normal vulnerabilities are perfect entrepreneurs… aside, of course, from the fact that what they do is harmful and illegal! Their work as predatory actors is purpose-built to exploit aspects of who we all are to their own advantage. To anticipate otherwise is to be wilfully blind to the high likelihood that such events will inevitably happen.

Much of my work with organisations involves more accurately forecasting and better mitigating these risks to reduce the number of incidents. Part of this begins by reorienting executive thinking to the absolute probability of these sorts of events happening. People will always do stuff that is risky and wrong. People will often repeat the same mistakes and ignore what they’ve just been taught. These are defining features of the negligent insider threat.

In addition, it’s enormously difficult for most people to be vulnerable in front of other people, and even with themselves, about things that are evidence of some kind of shortcoming or failure. Everyone who exploits people understands this, even if only intuitively and not by having studied a psychology textbook. By and large, people will hide. As a result, the likelihood of getting away with malicious acts or having additional time to do something else is pretty high, because the majority of victims are not going to leap up and run to their board or their boss and say, “Look what just happened to me.” There are fears about reprimand or punishment or even dismissal. There are a host of negatives which contribute to retarding or shutting down the things that could be done to resolve the problem quickly and to do a forensic on it so that repairs can be constructed and implemented.

If I’m an IT manager or CEO, how do I create a culture where an employee who, for instance, has inadvertently clicked on a malicious attachment or link, admits that they made a mistake?

You’re absolutely correct to use the word ‘culture’ in your question – that’s a critical element. You actually can’t just do it as the IT manager. Creating an enterprise that has a cyber-safe culture baked into it is a holistic endeavour. It requires psychological safety at scale. It has to be a culture that comes not only from the tone at the top but from modelling at the top and all the way throughout the organisation, with authenticity and without hypocrisy. The messaging should communicate that while no one wants mistakes to happen, it’s understandable that they will. Stepping forward is encouraged and courageous, but the need for courage should be reduced so that reporting incidents becomes normalised and less fraught.

You need to empower and enable the people in your organisation to serve as guardians of their own decision making and the organisation’s policies and functions. This means that attackers’ opportunities and abilities are reduced and potentially thwarted, but not in a technocratic or technical way. It’s critical to focus on culture and human decision making, reducing pressure points and distributing risk vulnerabilities.




What do you mean by “distributing risk vulnerabilities”?

We must move away from a single point of failure. An organisation might already be in a state of heightened risk when it’s possible for one employee to put a USB stick into a port and allow malicious code in, or to fall prey to a phishing email and click a link that allows malicious code or access to internal systems or the crown jewels of the organisation. If your organisation is structured in such a way that one person who otherwise does not occupy a position of significant institutional responsibility, who does not hold that much power, can create that much damage, there are all sorts of underpinning issues at play and you’ve just had a very nasty spotlight shone on that problem. Distributing risk means you understand how to fortify your perimeter and reduce your attack surface—not just technically but in respect of the human ecosystem influencing people’s behaviours and decisions. It should never come down to one person making a mistake and a house of cards tumbling down. There must be a robust degree of hygiene and care. It’s not just about caution.
 
How do you make the distinction between caution and care?

Caution comes from the starting point of approaching everything as a danger. There is nothing wrong with that – caution is extremely important in being safe. Care is not predicated on the presence of any potential danger or risk; it is about attention to micro-details.
 
You mentioned modelling at the top. Can you give us an example of that?

With a former client of mine, a major American company with a workforce of thousands, the CISO has a very innovative mind regarding these challenging problems. One of his mandates as CISO was to ensure that his organisation was as cyber safe as possible within the bounds of all kinds of regulatory confines. It was critical to him to present to his colleagues, not only in the c-suite and to the board but to the entire workforce, that cyber safety and cyber security was and is an institutional issue of paramount importance that goes beyond conventional training and manuals, checklists, watching videos or doing phishing tests. It was predicated on instilling an enterprise-wide cyber safe mindset and culture. That was one of the starting points for the project I had been called in to architect and collaborate on, with one of the large global consulting firms handling the roll-out. That CISO represents a sterling example which ought to be emulated in terms of a broad-minded and visionary approach to looking at the problems and the potential solutions to them.

So, in building a more cyber safe culture, where did you start?

In my training as a scientist of the mind and clinician, and as an experienced consultant, the most important starting point for solving any problem is understanding what the problem is. I always begin from a diagnostic posture, coming into the organisation and trying to assess as clearly as possible, “What actually are the problems here?” and not just, “What are the problems generally?” Every organisation is wrestling in various ways with sets of common problems regarding cyber security – attackers, social engineering, vulnerabilities, remote work, distractions, unintended recklessness – a million things. It’s also unique in every organisation – why X is a vulnerability as opposed to Y, or what it is about this culture that seems to exacerbate certain potential problems which are not potential problems in other organisations. Part of my expertise entails identifying root causes that are exceedingly difficult to discern or might be hiding in plain sight.

Off-the-shelf products can only go so far in mitigating certain risks, because unless they’re customised to the organisation and its culture and its particulars, there are going to be gaps.

The other side of that is that there is a huge distinction between teaching and learning, or instructing and integrating. So merely setting out for people, “This is what needs to be done to make things better,” is only step one. There are many other steps that need to be created and then followed in terms of enabling a workforce or culture to understand what is being presented – how and why it works, to build engagement not just alignment, to foster the capacity to onboard and integrate so that people understand what they should do, not just what they shouldn’t do. People should be able to perform at a reasonably high level with those learnings even under duress, pressure, or threat of embarrassment or any of those structures and predispositions that social engineers will exploit. All of that takes quite a bit of investment from an organisation – it takes time, money, work. It takes a lot of involvement.

There are a lot of organisations, even if they have the bandwidth and budget, that don’t have the appetite or commitment for what’s truly required. They’ll engage in a process of minimisation, denial, and justification. They might say, “We’re going to spend X on that thing, and we’re sure that will be good enough.” Then when it doesn’t work, they’re shocked, blame the consultant, and look for another solution. My point is they can’t be truly surprised at the current disappointing outcome or the probable upcoming one either, because they made a decision a long time ago to do something inadequate but didn’t recognise what the unintended consequences would be.
 
You made a distinction between teaching and learning; can you tell us the difference?

The launch points for the project I referenced earlier—also applicable to many other situations—were to a) think about how people learn, and b) identify some of the impediments to learning. The particular problem set in that organisation is not unique to them. It concerns attention and engagement with programs: getting people to be motivated and invested and to be accountable and responsible in what is being asked and required of them.

So, thinking about these issues not just from a corporate or organisational perspective but through the lens of classroom management can be a good starting point. For example, how do the best teachers reach students who are struggling, disinterested, distracted, or who have things going on in their lives and at home which make it difficult for them to learn or to study or focus? This paradigm is not cross-applicable in absolute terms because we’re not dealing with parents and children. We’re dealing with adults in professional situations where people are compensated and have all sorts of professional responsibilities and credentials. But many of the same principles are still largely at play in respect to how those dynamics are managed and leveraged to best effect.

One of the problems with most conventional cyber security awareness and learning programs is that although they use the language of the classroom, everything’s handled in a rather didactic and two-dimensional way. It doesn’t look at employees as students. It approaches learning as something they just have to do.
 
That would mirror our own findings. Many organisations want to approach a topic like cyber security awareness training without thinking about the intrinsic motivation that employees might have in relation to it. They want to launch into a cyber security awareness program even when there might be little employee motivation to engage with the topic.

From my perspective, that illustrates the enormous gulf between awareness and strategic planning. There will be inevitable misfires; the leaders triggering these decisions don’t understand they are going to miss their target. The expectation that you can achieve a solution that is fundamentally about people, but without involving the people directly and in a robust way, is just bad planning.

There are all sorts of assumptions about what will work or not, and unless it’s data-driven and psychologically sophisticated, you’re just throwing things against the wall and seeing what sticks. That’s not a great allocation of resources for a learning project. You need to begin with as-is metrics—a baseline of where people are in terms of knowledge, capacity, habits, concerns, defaults, what their culture is like to work in, not just how leadership characterizes it, and so on. It’s like going to a doctor for a general physical exam—temperature, weight, blood pressure—but without taking a history or asking any questions. What’s important is an accurate outset assessment from which to establish goal-setting and strategising how you’re going to get there. In the absence of that, how are you actually going to structure something with rigour and discipline that has a likelihood of success at scale?

Another element of this is that the declaration of, “We want this and not that,” already hamstrings downstream features of what can and perhaps should be done. Different people have different learning styles and learning capacities. If you have elements of your workforce that reject self-directed learning, as one example, or are less capable of integrating information that comes in that form, your outcome will be less than optimal. A really sophisticated multi-channel, multi-dimensional, multi-faceted cyber security awareness training program has to have different modalities that are adaptable to different learning preferences. That might sound bulky or overly complicated. While it’s not simple, it is executable and solvable. And if your goal at the end is to have a learning outcome that works, why wouldn’t you do everything possible to set it up for success?

When you talk about different learning styles, a lot of professionals in the e-learning industry will tell you that theories on learning styles have been discredited. What would you say to that?

Are you kidding? First of all, absolutely false. I would want to know by whom, and show me the research. It’s troubling when you encounter that kind of hubris and narrow-mindedness. Unfortunately, there are a lot of professionals out there who might not know what they’re talking about. They mean well but are out of their depth, haven’t actually read or understood the studies, or don’t understand the value of teaching in ways that optimise for learning. That prejudice is its own separate challenge.

To go back to your question about modelling a particular type of culture, there you have an executive decree that is wilfully blind to its own biases. I can guarantee you that mindset is going to play out in lots of other decision making from the c-suite – no matter what it’s about. People will pick up on that. When you have a leader who says, “That doesn’t work,” or, “I know that’s a waste,” you’re communicating so many other things to people all around you. One message is that it’s unacceptable to admit there’s something you don’t know, which is antithetical to developing a speak-up culture that encourages the workforce to feel safe in admitting errors and reporting wrongdoing.







So, in terms of organisational learning, what makes a great teacher?

Well, I don’t consider myself a great teacher, but from my perspective as a student who loves great teachers, I think the answer is that they bring everything to life. They instil in everyone a desire to want to know whatever’s being discussed, to see that there is more to learn. They make learning exciting, dynamic, stimulating, and applicable. Their teaching is filled with life and possibilities. It’s less about, “It will be better for you if you do this,” and more, “It would be so great to know this—let’s explore!” Inspirational teaching has a positivistic component to it which is enlivening and instils a self-generating desire for engagement. Great teaching can be taught and learned, and wonderful teachers will make a world of difference for students in classrooms and in organisations.

Another factor to bear in mind, in the context of cyber security awareness training (and this existed before the pandemic and remote work), is enabling a workforce to understand that cyber security is not confined to the office, and that when you take your devices home or to a coffee shop the same principles apply. How you think, how you act, and what you’re aware of is not just about work. It’s about who you are and how you function in the world. It’s about stimulating and motivating people to understand how and why cyber security is important, and explaining the how and why of its value to them and their organisation. It’s about explaining how they can be cyber security champions and advocates. They need to understand that they are holders of important responsibilities, not just weak links or potential failure points.

Another crucial thing to understand is that awareness is not a behaviour. So, it goes back to thinking about the diagnostic baseline, for example, “What are the problems we’re trying to solve?” One of the most common misconceptions in almost every program I come across is the mistaken idea that there is a direct link between awareness and behaviour. The common thinking is that if people are trained to be more aware of X and Y, then it is a natural, logical conclusion that their behaviour will be different as a result of knowing that. That reflects an enormous psychological illiteracy. What people are aware of can be situational, environmental; it can also be internal. What’s required to create behaviour change is not solely expanding the field of awareness.
 
How do we close this gap between awareness and behaviour?


One way is treating them separately and then building a bridge. That means taking a step back from the conventional construction which assumes that they are pre-integrated. For example, you can create guard rails that create optionality in certain kinds of stressor moments. If your awareness program is simply set up to give the employee a due diligence process (if you get a phone call, for example, here are the things you’re supposed to do and here are the things you’re not supposed to do), you’re not enhancing awareness and you’re not setting them up to change their behaviour, because there are so many other micro-elements—like being distracted or being stressed, or different people’s innate attitudes toward displeasing or disappointing others, or prohibitions against the liberty to be what I call intelligently disobedient. They will not be engaged by virtue of that training. So, there’s a difference between giving people common-sense options for what they can do versus dictating what not to do.

The other thing to understand is that no matter what you tell people or how you instruct them, there are all kinds of pre-established default responses that everyone has that you can’t legislate against or simply override. There are people who, no matter how many times you tell them not to click on a certain type of link, are going to do it. Applying the same sort of learning process to everyone to enhance their awareness of the clear and present danger will not move the needle – especially for those people for whom it does not make sense. So even at scale, there has to be a much more targeted approach. Even if it’s e-learning, there are all sorts of ways of doing it. Gamification. Videos. There are tons of delivery solutions. But the underlying conceptual architecture is what matters most. You have to take into account ways of leveraging the power of behavioural science as a proactive defence, using educational programs that are structured and informed by psychologically sophisticated tools and not just seemingly more efficient technocratic solutions.


 
The studies of Kahneman and Tversky on human decision making are very applicable when it comes to cyber security awareness training. However, many learning approaches in this area are predicated on the learners using their System 2 brain, when in fact cyber criminals exploit people’s System 1 brain. Is this a major weakness?

Yes, the hackers know what you and I are talking about right now. This is a well-established, industry-wide vulnerability that is rarely addressed or resolved by most of what is out there. Do not underestimate the potency of malevolent creativity.

Another thing I would mention with regard to bridging the gap between awareness and behaviour is creating multidisciplinary teams that harmonise all of these different components – security, technology, management, and behavioural science. Any info-sec program that only looks at cyber security as a cyber or a technical issue, and doesn’t bring in experts – and I’m not just talking about HR, legal, or risk and compliance personnel – is already hampering the likelihood of success. And it’s not just about reporting to the board or getting buy-in or alignment from your CEO, it’s about having a team-based approach – a multidisciplinary, cross-business function approach to it.






Many cyber security awareness programs do not get their participants to reflect on their own self-awareness. Why is self-awareness important?

One of the most common tools of social engineering is pressure – some form of stressor or ploy to agitate and intensify urgency: “I need this right now,” or, “You’ve got to click on this immediately if you want to unlock the system and avoid disaster.” Or some ruse that appeals to people’s pro-social impulses to help others in distress—“I’ll be stuck,” the scammer will passionately claim, “if you don’t get me back into my account now!” It’s using a manipulative technique that’s turning the heat up on somebody, grabbing them by the collar or duping them into thinking they’re being a good citizen: “This is what you need to do right now. Don’t stop. Don’t think.” This stressor event is designed to override your cognitive processes with more primitive emotionally-driven decision-making. So, if you have people who have absolutely no idea how they respond under pressure, or what their baseline reflexes are in a situation like that, anything else that you try to get them to do is meaningless. This is the basic flaw in most phishing tests. It determines only that a mistake was or wasn’t made but doesn’t probe what people were thinking or feeling when the decision was made. Self-awareness in that regard can be exponentially useful in building knowledge a person has about, for example, “When this happens – this is what happens to me.”
 
Can this sort of self-awareness be taught?
 
Well, you have to start shining a light on it. It has to be illustrated in a way that the person is not embarrassed, or becomes defensive, or wants to reject it because it makes them uncomfortable. There are a lot of elements that have to be contended with sensitively. You have to use a lot of social and emotional intelligence in generating self-awareness. But once self-awareness is reflected back, once someone is receptive – “Oh, I see what happens,” – then you actually have a solid platform and beginning basis for getting people to understand their own limitations.
 
So, finally, what would your key takeaways be for somebody building a cyber security awareness program?

  • You need to engage all the stakeholders.

  • You need a psychologically and psychosocially sophisticated approach to understanding what the actual problems are and to designing solutions that accurately address them.

  • Culture is paramount. You need to create a cyber-safe culture, not just build and implement technocratic processes.

  • Cybersecurity is fundamentally a human issue, not a technological one. Cyber security is just an advance on the Trojan Horse or poison pen letter. It’s one person doing something against another person, or the equivalent at scale.

  • Any organisation that thinks these issues are solvable without actually understanding the human dimensions is wrong.


Alexander - thank you very much for those erudite and actionable insights!  

  
Alexander Stein PhD is Founder of Dolus Advisors. He is an expert in human decision-making and behavior, and serves as a trusted advisor to CEOs, senior management teams and boards. Trained and licensed as a psychoanalyst, he advises executives, founders, and directors across a broad array of industries on issues involving leadership, culture, governance, ethics, risk, and other organizational matters with complex psychological underpinnings. 
 
Dolus Advisors is a bespoke psychodynamic management consultancy that helps business leaders address psychologically complex enterprise challenges beyond the capabilities of conventional business consulting. In addition to a core practice in leadership, culture, and ethical governance, Dolus is frequently engaged to provide specialist assistance in the psychodynamics of fraud, corruption, and abuses of power, and to address human-factor vulnerabilities in cybersecurity and mitigate unintended consequences in technologies that assume decision-making functions in human affairs.

 

www.dolusadvisors.com
https://www.forbes.com/sites/alexanderstein/#1c06a3246220
www.linkedin.com/in/alexandersteinphd
https://www.dolusadvisors.com/subscribe

