There’s a common (if dismissive) joke among security professionals: “the biggest vulnerability in any system is between the chair and the computer.” People are just as easily tricked as computers, perhaps more easily. In fact, IBM’s 2018 Cyber Security Intelligence Index reported that human error was responsible for two-thirds of compromised records, including a historic 424 per cent jump in breaches caused by misconfigured cloud infrastructure.
To complicate matters further, over half of all security attacks are carried out by people with insider access to an organisation’s IT systems. Companies can be responsible for hundreds or thousands of employees, each with their own unique set of behaviours, motives and working practices. Detection technology and security packages, no matter how sophisticated, will always be limited by this human factor, which is precisely what social engineering techniques are designed to exploit.
An old-fashioned con: how we are duped
Social engineering is, at its core, the art of lies and manipulation: the oldest tactics in the book of deception. In its typical online form today, attackers psychologically manipulate victims by exploiting cognitive biases and mental schemas in order to steal information.
There are several ways that cybercriminals manipulate victims via social engineering. One of the most prevalent is phishing, in which cybercriminals seek to obtain private information or credentials through seemingly legitimate means. For example, a victim might receive an email that appears to be from a co-worker, vendor or other business associate, asking them to share login details, passwords or financial information.
Similarly, spear phishing targets specific individuals and organisations by masquerading as a legitimate entity. Russian hacking group Fancy Bear recently attempted a spear phishing attack in which victims were nearly tricked into visiting spoofed US midterm election campaign domains, which would have allowed the group to capture users’ login credentials.
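Spoofed domains of this kind often differ from the genuine site by only a character or two. As a minimal illustration of one defensive heuristic (not a description of any specific vendor’s tooling), the sketch below flags sender domains that closely resemble a trusted list; the trusted domains and similarity threshold here are illustrative assumptions.

```python
# Hedged sketch: flag lookalike (typosquatted) sender domains.
# TRUSTED and the 0.8 threshold are illustrative assumptions, not
# values drawn from any real mail filter.
from difflib import SequenceMatcher

TRUSTED = ["example.com", "examplebank.com"]

def looks_like_spoof(domain: str, threshold: float = 0.8) -> bool:
    """Return True if `domain` is suspiciously similar to, but not
    exactly, one of the trusted domains."""
    domain = domain.lower()
    if domain in TRUSTED:
        return False  # exact match to a trusted domain is fine
    return any(
        SequenceMatcher(None, domain, good).ratio() >= threshold
        for good in TRUSTED
    )
```

A real filter would combine this with checks on homoglyphs (e.g. Cyrillic lookalike characters), newly registered domains and sender authentication records (SPF/DKIM/DMARC); string similarity alone is only a first pass.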
Lesser known tactics: archaic trickery
Another common confidence trick for information gathering is the ‘watering hole’ attack, so named because attackers compromise websites their targets frequently visit. The compromised site drops malware, such as a remote access trojan, onto visitors’ machines, after which the attacker can begin exfiltrating data.
Even more archaically, the ‘baiting’ method sees an attacker luring victims into executing code, usually by piquing their curiosity or otherwise convincing them to run hardware or software with hidden malware. For example, innocent-looking USB sticks handed out as conference giveaways could actually contain malware; a person risks infection any time they accept and use a USB stick given to them.
Pretexting is when an attacker invents a plausible scenario and tricks the victim into playing along with it in order to steal their information. It relies on fostering a false sense of trust: the victim is persuaded to give the attacker the benefit of the doubt.
How security training can mitigate social engineering hacks
Raising awareness of social engineering scams and methods across an organisation through ongoing training, alerts and testing is an effective way to improve both security and behaviour. Attackers probe for weaknesses not only in software code and networks but in individuals as well. Social engineering hacks have been so successful because they require no knowledge of code: an attack can be as simple as tricking a user into clicking on an ad, a video or an email. With attacks coming from nation states, hacktivists and financially motivated threat actors, organisations need to invest in training and in technology upgrades such as two-factor authentication to make gaining access more difficult. Every device and every individual represents a potential attack vector.
Paul Farrington, Director, EMEA Solution Architects at Veracode