Written by: Nick Palmer, Technical Director, Europe – For as long as I’ve been in security, the conversation has never been far from the need for automation and the suggestion that only automation can truly save us. Given that criminals and nation-states are increasingly using machine learning and automation to attack their targets, there is real sense in this view. With an estimated 7 million new malware samples detected in 2019, and cybercrime set to cost businesses $2 trillion this year, new strategies are clearly needed. Add to this the fact that while dwell time does seem to be coming down, the time attackers take to escalate privileges on networks is coming down too. The figure I always quoted to customers was three days to admin on a well-protected network. However, I’ve spoken to Red Teams who can go from the internet to Domain Admin in seventeen hours. Carbanak managed it in two.
The automation story in security isn’t new; it tracks back to the earliest days of Intrusion Detection Systems (IDS), which were ultimately enabled to execute automation rules on the basis of their convictions and dubbed Intrusion Prevention Systems (IPS). For reasons that will become apparent, IPS never really caught on as a discipline, primarily because false positives inevitably resulted in some form of interruption to normal service. With the security skills shortage beginning to bite, and an emerging cynicism among business leaders that incremental spending year on year is just breeding better attackers, the need for such technologies has never been greater.
Another compounding issue for security automation is the lack of effectiveness and interest in producing good correlation rules at the SIEM level and enabling automation when they are breached. Take the classic example of a credential used from different geographies simultaneously: the rule is easily tripped when a local VPN server goes down, ultimately resulting in a call to the helpdesk when the disconnected user can’t log in. Clearly, the concept is sound, but its execution in a production setting feels almost naïve once you realise how fragile such rules are. Further to this, attackers are aware of network anomaly detection solutions and fully aware of how to circumvent them: an attacker lands on an endpoint, turns off the local AV, scrapes memory for credentials and pivots to a server he finds in the RDP cache. No anomaly detection solution is interested in that, because it represents usual administrative behaviour.
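To make that fragility concrete, here is a minimal sketch of an impossible-travel correlation rule of the kind described above. The event fields and the 30-minute window are entirely illustrative assumptions, not any particular SIEM’s syntax; the point is that any incident shifting a population of users to a new egress geography, such as a VPN outage, fires the rule for every affected user at once.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Login:
    user: str
    geo: str            # country code from a GeoIP lookup (assumed field)
    timestamp: datetime

def impossible_travel(events: list[Login],
                      window: timedelta = timedelta(minutes=30)) -> list[tuple[Login, Login]]:
    """Flag pairs of logins by the same user from different geographies
    inside the window. Naive by design: a VPN outage that moves users'
    egress IPs to another country trips this for every affected user."""
    hits: list[tuple[Login, Login]] = []
    by_user: dict[str, list[Login]] = {}
    for e in sorted(events, key=lambda e: e.timestamp):
        prev = by_user.setdefault(e.user, [])
        for p in prev:
            if p.geo != e.geo and e.timestamp - p.timestamp <= window:
                hits.append((p, e))
        prev.append(e)
    return hits
```

A legitimate pair of logins ten minutes apart from two countries is indistinguishable, to this rule, from a stolen credential in active use; that ambiguity is exactly why automating on its output is risky.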
As a result, the quality of the feeds into automation rules has to be scrutinised, and tough questions asked about how to improve them. If this is not a key focus, then already beleaguered security teams will be fighting friendly-fire DoS events internally and have even more trouble keeping pace with their day jobs. I’ve spent the last three years of my career advocating the need for high-fidelity alerts, and I think there is an excellent source available in Deception technologies. When you put a dense mesh of fake assets onto a network that no legitimate system or user should interact with, you have created a trustworthy feed of events that should go straight to the top of your workflow.
Not to oversimplify (legitimate whitelisting should always be conducted as part of onboarding any new event source), but once the asset scanners, inventory scanners and intermittent scripts that poll the environment for new and interesting things have been exempted, a deception solution should be absolutely quiet in peacetime. Nothing should talk to something deceptive. Indeed, a good deception solution can obscure itself from all but the most determined and inquisitive users, leaving only insider threats, ‘bona fide’ bad guys and malware as the triggers for events on these platforms. Arguably, only with an alert of this quality can you reasonably trust an automation rule to trigger a forensics capture, take the source IP of interest off the network, and mobilise the investigative teams to move on the target.
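As a sketch of what such an automation rule might look like, the fragment below shows the shape: exempted scanners pass silently, and anything else touching a decoy triggers capture, containment and escalation. The allowlist addresses and the action labels are hypothetical placeholders for whatever NAC/EDR and orchestration integrations are actually in place, not a real product API.

```python
# Hypothetical playbook for a deception alert. The allowlist is built
# during onboarding: every address on it is a known asset scanner,
# inventory scanner or polling script that is expected to touch decoys.
ALLOWLIST = {"10.0.0.5", "10.0.0.6"}

def handle_deception_alert(source_ip: str) -> list[str]:
    """Return the ordered response actions for a touch on a decoy."""
    if source_ip in ALLOWLIST:
        return []  # expected peacetime noise from exempted scanners
    # Anything else interacting with a decoy warrants full credibility:
    # capture evidence first, then contain, then mobilise the humans.
    return [
        f"forensic_capture:{source_ip}",
        f"isolate_host:{source_ip}",
        f"notify_ir_team:{source_ip}",
    ]
```

Note the ordering: the forensic capture fires before isolation, so evidence is preserved before the host’s network state changes.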
The vast majority of successful entries to the modern network use well-crafted spear-phishing to encourage users to click on links and compromise their own machines. Once the attacker has a point of presence on the network and is performing his early-phase reconnaissance of both the entry point and the surrounding assets, he will generate telemetry that many security monitoring solutions are not interested in: stealthy network scans using TCP half-open scans, WMI and RDP service enumeration on endpoints of interest, and attempts to move laterally using credentials harvested on the network. A deception solution treats these events with the highest severity, because the anomaly for a fake system on the network is that nothing should talk to it. If something DOES talk to it, an alert worthy of full credibility and severity is generated. These are the types of alerts that can be fed into automation platforms with increased confidence, and this is how a discipline that has been discussed for decades can actually take wing.
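That severity inversion can be sketched in a few lines. The decoy addresses and technique labels here are illustrative assumptions: the same recon techniques that are near-invisible against production assets become critical the moment the destination is a decoy.

```python
DECOYS = {"10.0.9.10", "10.0.9.11"}  # hypothetical decoy addresses

# Recon techniques named above: individually unremarkable against real
# hosts, since they mirror routine administrative behaviour.
RECON = {"tcp_half_open_scan", "wmi_enum", "rdp_enum", "credential_reuse"}

def alert_severity(dst_ip: str, technique: str) -> str:
    """Nothing legitimate talks to a decoy, so any touch is critical;
    the same telemetry against a real host stays informational."""
    if dst_ip in DECOYS:
        return "CRITICAL"
    if technique in RECON:
        return "INFO"  # indistinguishable from normal admin activity
    return "NONE"
```

The rule needs no behavioural baseline or tuning: the fidelity comes from the decoy itself, which is what makes its output safe to hand to an automation platform.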