Even the Experts Can Be Fooled

When even experts in social engineering can be fooled, it is important to build a defense-in-depth strategy for your business's information security. KnowBe4, one of the country's largest providers of cybersecurity and social engineering training, was fooled by a North Korean IT worker intent on loading their network with malware.

KnowBe4 had a job opening. They were looking for someone for their internal Artificial Intelligence (AI) team.  What they received instead was a valuable training lesson in advanced social engineering. They were fooled. But unlike many companies, they disclosed the failure. Their experience might save others from a similar fate. 

Fortunately, they caught the imposter early enough that there was no breach or unauthorized access to the company's systems. They stopped him before he could do any damage. Here is how it happened, how they stopped it, and some lessons learned.

The human resources team did its job. Background checks came back clean because the imposter was using a valid but stolen US-based identity. The team conducted four video-conference interviews to confirm that the person matched the photo on the application. The imposter had taken a stock photo and used AI to merge his own features onto it. HR even verified his references.

Once hired, the imposter asked to have his laptop sent to a farm. Not the kind you're thinking of. It was an "IT mule laptop farm": an office filled with company-issued laptops and computers that hackers control remotely, in this case from North Korea. It was a good thing KnowBe4 restricted new-employee access and did not allow access to production systems.

Once the imposter had been hired and his laptop delivered, it was time for him to plant his malware on the company network. He downloaded malware and attempted to execute it, then used some technical trickery to cover his tracks.

The good news is that the company's security operations center (SOC) was alerted to the potentially dangerous behavior and called the imposter. He claimed criminals must have compromised his router. The SOC team quickly isolated his computer from the rest of the network, cutting off his access to valuable systems and data. Once the imposter realized he had been caught, he went unresponsive.

Here are some lessons learned. When a company issues computers to remote workers, it should have a way to scan each device to ensure there are no unauthorized remote connections on it. When hiring, don't rely on email references alone. Do not ship laptops to locations that don't match the applicant's address. Make sure applicants are not using Voice over IP (VoIP) phone numbers. Lastly, watch for discrepancies in address and date of birth.
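To make that screening advice concrete, here is a minimal, hypothetical sketch of how a few of these red-flag checks might be automated. The applicant record, its field names, and the idea that a carrier lookup has already tagged the phone line type are illustrative assumptions, not any real HR system's workflow.

```python
# Hypothetical sketch of automating a few hiring red-flag checks.
# The applicant record, field names, and "voip" value are illustrative
# assumptions, not a real HR system's schema.

def hiring_red_flags(applicant: dict) -> list[str]:
    flags = []
    # Laptops should only ship to the address on the application.
    if applicant["shipping_address"] != applicant["application_address"]:
        flags.append("laptop shipping address does not match application address")
    # The date of birth should be identical on every document submitted.
    if len(set(applicant["dates_of_birth_seen"])) > 1:
        flags.append("conflicting dates of birth across documents")
    # A phone number a carrier-lookup service reports as VoIP is a warning sign.
    if applicant["phone_line_type"] == "voip":
        flags.append("applicant phone number is VoIP")
    return flags

candidate = {
    "shipping_address": "123 Warehouse Rd",
    "application_address": "456 Main St",
    "dates_of_birth_seen": ["1990-01-01", "1991-01-01"],
    "phone_line_type": "voip",
}
print(hiring_red_flags(candidate))
```

None of this replaces human judgment; it simply flags the discrepancies a recruiter should investigate before an offer goes out.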

Despite the process failures, KnowBe4 did not suffer a breach. They understood defense in depth: multiple lines of defense in case one (the employee screening process) failed. All their laptops had endpoint detection and response (EDR) software loaded, and a SOC watched over their network. The EDR stopped the malware from executing and alerted the SOC. The SOC team isolated the computer right away and escalated the issue.

When it comes to protecting your business, you cannot rely on minimal protections. Firewalls and antivirus are useful, but they do not stop a hacker from entering through your email or your browser. Technology such as EDR and a SOC may save the day, but it must be backed by tried-and-true policies and training. Although KnowBe4 is an expert in social engineering, they were scammed because of lax hiring policies, which they have since updated. Remember, a fool may learn from his own mistakes, but a wise man learns from the mistakes of others. Be the wise man.

Beware: Phishing Attacks Enter the Deepfake Era 

Bob's boss was asking for something really weird. A wire transfer this big had never been requested before. In all the years Bob had worked for Alice, she had never asked for a transfer of this magnitude. But there she was in the Zoom meeting, in the flesh (well, digital flesh anyway). How was Bob to know that wasn't really Alice?

In the digital dimension, threats to our lives aren't always the mortal kind. They also lurk behind screens, ready to exploit our human weaknesses, and those are the ones we too often overlook. While phishing attacks are nothing new, they have evolved. Welcome to the deepfake world. Oh, is that word new to you? Well, buckle up. You need to learn it... and fast. A deepfake is a video or audio clip of you or someone you know, created by artificial intelligence (AI) from parts and pieces of other audio and video. With deepfake voice and video capabilities, cybercriminals can now mimic your trusted contacts (like your boss) and authority figures (like your spouse) with alarming accuracy, aiming to deceive and manipulate you. If you use the internet for banking or email, you are a target. You need to understand the risks and take precautions to safeguard your online identity and personal information.

Deepfake technology uses AI to combine audio and video recordings, seamlessly grafting one person's likeness onto another's voice or image. This tool, once the stuff of Mission: Impossible movies, is real. And it has been weaponized by cybercriminals seeking to exploit your trust in familiar voices and faces.

Imagine receiving a phone call in which someone demands that you confirm sensitive account information. The voice on the other end sounds EXACTLY like your boss, complete with the cadence and intonation you've come to recognize. Or perhaps you receive an email from your biggest client requesting an urgent wire transfer, accompanied by a convincing video message imploring immediate action. In both scenarios, the other person isn't a person at all. It's an AI impostor, leveraging deepfake technology to deceive and manipulate you.

The consequences of falling victim to a deepfake phishing attack can be dire: financial fraud, identity theft, reputational damage, and compromised personal data. The ramifications run deep. Being deceived by someone you trust, even a fake someone, creates a psychological fissure that erodes your confidence in digital communications and exacerbates feelings of vulnerability and distrust.

The threat posed by deepfake phishing attacks is unsettling. But there are proactive steps you can take to mitigate risks and bolster your defenses. 

Verify Identities: Before responding to any requests for sensitive information or financial transactions, independently verify the identity of the sender through alternative channels. Contact your bank or employer directly by phone using a number you know to be good to confirm the legitimacy of any requests. 

Exercise Caution: Treat unsolicited emails, phone calls, and messages with profound skepticism, especially if they contain urgent or unusual requests. Scrutinize the content for inconsistencies or irregularities; they may indicate a phishing attempt.

Stay Informed: Find someone you trust to keep you informed about emerging cybersecurity threats and trends, including advancements in deepfake technology. Educate yourself and your loved ones about the risks posed by phishing attacks.  

Use Multi-Factor Authentication: Implement multi-factor authentication wherever possible to add an extra layer of security to your online accounts. This additional step can help thwart unauthorized access, even if your credentials are compromised. 
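To illustrate why this extra step helps, here is a minimal sketch of a time-based one-time password (TOTP), the mechanism behind many authenticator apps. It uses the third-party pyotp Python library purely as an example; the article does not endorse any particular tool, and your accounts' MFA may instead use push prompts, hardware keys, or SMS codes.

```python
# Minimal sketch of a time-based one-time password (TOTP), the mechanism behind
# many authenticator apps. Requires the third-party "pyotp" package (pip install pyotp).
import pyotp

secret = pyotp.random_base32()   # shared once between the service and your authenticator
totp = pyotp.TOTP(secret)

code = totp.now()                # the six-digit code your authenticator app would display
print("current code:", code)

# The service checks the code against the shared secret and the current time,
# so a stolen password alone is not enough to log in.
print("verified:", totp.verify(code))
```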

Report Suspicious Activity: If you encounter a suspected deepfake phishing attempt, report it to the relevant authorities, such as your IT department, cybersecurity agency, or the Federal Trade Commission. 

The emergence of deepfake technology underscores the evolving nature of cyber threats and the importance of proactive cybersecurity measures. By remaining vigilant, verifying identities, and staying informed, you can safeguard yourself against the perils of deepfake phishing attacks. Together, we can navigate the digital landscape with resilience and confidence, thwarting cybercriminals at every turn. 

The original article was published in the Sierra Vista Herald and can be found here.

The Cyber Guys: Swatting customers, cyber hackers’ new extortion method

What you are about to read is fiction, but the scenario is feasible and, in a few months, may be likely.

Bob was sitting on the couch watching the Chiefs play the Bills. The Bills had just scored a touchdown, bringing the score to Bills 17, Chiefs 10. Suddenly the front door burst open and a heavily armed group of people poured into his home. In moments Bob was face down on the floor, arms zip-tied behind him. Bob was under arrest.

Bob wasn't guilty of a crime. He was the victim of a horrible, extreme prank called "swatting." Someone had accused Bob of posting extreme anti-government threats on social media. Bob's social media account had been compromised, then filled with anti-government rants. It was enough evidence to justify the temporary chaos you just witnessed.

Why was Bob targeted? Unfortunately, he was a client of a medical center that had recently fallen victim to a cyber-extortion group. Patient information, including Bob's, was stolen, and the group promised that if the ransom wasn't paid, it would make life a living hell for the patients.

Because Bob had the bad habit of reusing his passwords, it was trivial for the threat group to take over his social media account using his stolen credentials and make those false posts. Bob became the first of many to endure such humiliation.

The story is fictitious. But the threat is real. Swatting as a service is the latest tactic threat actors are using to coerce businesses into paying cyber ransom. You are truly just a pawn. Because cyberattack reports are so common today, we’ve become overwhelmed and desensitized to the implications of the threat. But now the implications are physical. Visits from actual police to your home. So far, the police visits have resulted in only momentary inconvenience for the victim and a waste of police resources. But it is conceivable this will escalate.

You are probably thinking, “There’s no way this could happen. Who would ever go to such an extent just to get money?”

The reason you think this is that you are not evil. But there are truly evil people who absolutely don't care about the pain this causes innocent people. The effort required to run a campaign like the one described above is minimal for the threat actor, especially in the age of artificial intelligence.

An AI bot can easily craft the content for social media posts at scale. The level of effort on the part of the human is then as little as copying and pasting the content into a compromised social media account.

But you can do something to make sure it isn't you who suffers. First, if you don't absolutely need social media, cancel your accounts. One principle of cybersecurity is "if you don't need it, remove it." If you do use social media, use a password manager such as Bitwarden to create a unique password for every account and store those passwords securely.
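For the curious, here is an illustrative sketch of the kind of long, random password a manager generates for you. It uses Python's standard-library secrets module purely as an example; it is not Bitwarden's actual generator, and in practice you would let the password manager do this for you.

```python
# Illustrative sketch of generating a long, random password using Python's
# standard-library "secrets" module. A real password manager also stores the
# result in an encrypted vault so you never have to remember or reuse it.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different, never-reused password for each account
```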

Lastly, you do have a right to ensure your data is secure. The tactic described above has been used against medical centers. Your protected health information is governed by the Health Insurance Portability and Accountability Act (HIPAA). You have the right to ensure your medical provider is protecting you. Ask your provider for evidence that it is doing more than the bare minimum. If it refuses to show you, you may want to consider changing doctors.

I know this sounds extreme, but so is “swatting.”

The original article was featured in the Sierra Vista Herald and can be found here.