Automated Red Teaming: The Pros and Cons


Phil Robinson, Principal Security Consultant and Founder of Prism Infosec

Much has been made of the potential benefits of Continuous Automated Red Teaming (CART) over recent years, and there have been a number of start-ups in this space. The concept sees software used to scan and probe the network for vulnerabilities on a continuous basis, making red teaming viable for smaller companies. But is this realistic, and can it deliver in the same way?

Unlike penetration testing, which is limited in scope with specific parameters and objectives, red teaming is the closest you can get if you're looking to emulate a malicious attack. It is also incredibly varied: as well as probing for entry points on the network, the red team will seek to gather intelligence using social engineering techniques. When the red team runs up against resistance, they will pivot the attack to explore other options, using their knowledge of new threats and vulnerabilities as well as existing attacker tactics, techniques, and procedures (TTPs).

CART solutions claim to automate many of these processes by searching the network and systems for possible entry points to exploit (such as exposed databases, open ports, and credentials), taking the attack to its logical conclusion, evaluating risk, and recommending remediation actions. They are often compared to vulnerability scanning software but are more closely related to Breach and Attack Simulation (BAS); the difference is that while BAS requires agents on the network, CART is now increasingly offered as SaaS.
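To make the entry-point discovery step concrete, here is a minimal sketch of the kind of probe a CART tool automates continuously: checking a host for open ports that commonly expose services such as databases. The host and port list are illustrative assumptions, not taken from any particular product.

```python
# Sketch of automated entry-point discovery: probe a host for open
# ports that commonly expose services. Port/service mapping is a
# hypothetical example; real tools scan far wider ranges.
import socket

COMMON_PORTS = {
    22: "ssh",
    80: "http",
    443: "https",
    3306: "mysql",
    5432: "postgresql",
    6379: "redis",
    27017: "mongodb",
}

def scan_host(host: str, timeout: float = 0.5) -> dict:
    """Return a map of service name -> reachable for common ports."""
    results = {}
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            results[service] = sock.connect_ex((host, port)) == 0
    return results

if __name__ == "__main__":
    for service, is_open in scan_host("127.0.0.1").items():
        print(f"{service}: {'open' if is_open else 'closed'}")
```

A real CART product would chain findings like these into exploitation and risk-scoring steps; this stub only covers the first, simplest link in that chain.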


What some may not realize is that a great deal of automation already takes place in red teaming. Testers will customize their own labs to develop and scale TTPs, and only once they have found an entry point will they step in to escalate the attack, drawing on their knowledge of the system at hand and its source code. An experienced tester will then react as new data comes in, improvising and pivoting the attack just as a real attacker would. While they will have runbooks for specific exploits, they will often social engineer on the fly to gain leverage.

Malicious attacks these days are primarily aimed at exfiltrating data or seeding ransomware, and they all tend to start in a similar way: an email phishing campaign is used to gain initial access, endpoints are compromised, and privileges are escalated towards admin and the domain controllers. It's often only when the attack needs to pivot that the attacker steps in personally. So, whether the assault is carried out by a red teamer or a genuine attacker, the secret sauce is human ingenuity and guile.

Automated red teaming is nowhere near achieving this today. Current solutions tend to offer regimented scenarios or require customization, which is resource-intensive. While they are able to simulate specific attacks, such as a phishing campaign or harvesting data from endpoints, and can even move laterally and escalate privileges, their attempts tend to be noisy or generate errors that trigger blue team alerts. Most important of all, they can't innovate.


Good red teaming, on the other hand, is stealthy and relentless. It has a far greater array of TTPs in its armory, such as voice-based phishing attacks, which can be used to persuade or extract information and break through defenses that would otherwise have held fast.

The current sell for automation is that it commoditizes red teaming and makes it a scalable mass proposition. While it can't compare to true red teaming, it does offer the ability to test more frequently using common-or-garden techniques such as sniffing, spoofing, or brute-force attacks. This means that when new applications are added or policies are updated, a test can be run as a matter of course, and it's for these reasons that we can expect some form of hybridized red teaming to finally emerge. But the reality is that automated offerings will always be auxiliary to true red teaming, because there is simply no substitute for the human brain.
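The "test as a matter of course" idea above can be sketched as a simple recurring runner: re-run an automated probe on a schedule so that new applications or policy changes are checked without manual effort. Here `probe()` is a hypothetical stand-in for whatever checks (sniffing, spoofing, brute force) a given tool performs.

```python
# Sketch of scheduled, recurring automated testing. probe() is a
# hypothetical placeholder; a real tool would scan and report here.
import sched
import time

def probe() -> list:
    """Hypothetical automated check; returns a list of findings."""
    return []  # stub: no findings

def run_scheduled(interval_s: float, rounds: int) -> list:
    """Run probe() `rounds` times, `interval_s` seconds apart,
    collecting all findings into one list."""
    scheduler = sched.scheduler(time.monotonic, time.sleep)
    findings = []
    for i in range(rounds):
        # queue each run at its offset; priority 1 is arbitrary
        scheduler.enter(i * interval_s, 1,
                        lambda: findings.extend(probe()))
    scheduler.run()  # blocks until all queued runs complete
    return findings
```

The point of the sketch is the cadence, not the checks themselves: the same regimented probes repeat on every run, which is exactly why this style of testing complements rather than replaces a human red team.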
