Deception Technology: Authenticity and Why It Matters
This article is the second in a five-part series being developed by Dr. Edward Amoroso in conjunction with the deception technology team from Attivo Networks. The article provides an overview of the central role that authenticity plays in the establishment of deception as a practical defense and cyber risk reduction measure.
Requirements for authenticity in deception
The overarching goal for any cyber deception system is to create target computing and networking systems and infrastructure that an adversary cannot distinguish from actual assets – including both live production and test environments. While this might seem an obvious consideration, building such deception turns out to be quite challenging technically in practice. Most solutions, with the exception of Attivo Networks, attempt to achieve this through emulation.
The system attribute that best achieves this goal is authenticity, because once a human or automated malicious actor gains access to a planted deceptive system – whether purposefully or incidentally – no evidence should exist that a decoy or trap has been reached. It is also insufficient to suppress only obvious forms of evidence. Subtle indicators of inauthenticity often found in low-interaction, emulated environments are also unacceptable, especially in the presence of a capable adversary.
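As an illustration of how easily subtle indicators surface, the sketch below shows the kind of trivial probe an adversary might run against a suspected decoy: connect to a service, capture its greeting banner, and time the response. This is a hypothetical example for illustration only; the function name and parameters are assumptions, not part of any product.

```python
import socket
import time


def probe_banner(host: str, port: int, timeout: float = 3.0):
    """Connect to a TCP service and record its greeting banner and
    the time the greeting takes to arrive.

    Hypothetical adversary-side probe: a decoy whose banner or timing
    deviates from the production baseline reveals itself here.
    """
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            banner = s.recv(1024)  # e.g., b"SSH-2.0-OpenSSH_8.9\r\n"
        except socket.timeout:
            banner = b""  # silence itself is an indicator
    elapsed = time.monotonic() - start
    return banner, elapsed
```

A low-interaction emulation that returns a generic, mis-versioned, or mistimed banner would stand out under even this crude a check, which is why suppressing only obvious evidence is insufficient.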
The primary functional computing requirements for achieving authenticity in deployed deception can be listed as follows:
- Interface – A decoy must project the interface that every accessing entity would expect. A deceptive system, for example, should run the same operating systems, application software, and services seen in production. It should also match the network attributes of the surrounding environment.
- Performance – The temporal characteristics of a deceptive system must also fall within the parameters an accessing entity would expect. Unusually slow response times or an inability to authenticate against services such as Active Directory, for example, might hint that a badly designed decoy has been put in place.
- Content – The information accessible on a decoy must match the expectations of the adversary. This might include breadcrumb information, but it also includes configuration and administrative data, as well as the data files visible to the accessing entity.
- Access – The access parameters – including identification, authentication, and authorization – must match the expectations of the adversary. Decoy systems that are readily accessible, lax in their access security, or riddled with exposed vulnerabilities will hint that deception is in place.
- Behavior – The behavior exhibited during any interaction with a decoy must match the adversary's expectations for that specific system, including the ability to support high-interaction engagement and to continue responding as the attacker delivers new commands or instructions.
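The five requirements above can be sketched as a simple validation checklist. The following Python code is a hypothetical illustration – every field, function name, and threshold is an assumption for the sake of the example, not part of any product – that flags the requirement areas where a decoy's observed attributes deviate from production expectations.

```python
from dataclasses import dataclass


@dataclass
class DecoyProfile:
    """Observed attributes of a deployed decoy (hypothetical fields)."""
    os_banner: str        # interface: what the decoy presents
    response_ms: float    # performance: typical response latency
    has_breadcrumbs: bool # content: planted data present and plausible
    requires_auth: bool   # access: credentials actually enforced
    interactive: bool     # behavior: sustains high-interaction sessions


def authenticity_gaps(decoy: DecoyProfile, production_banner: str,
                      max_response_ms: float = 200.0) -> list:
    """Return the requirement areas where the decoy deviates from what
    an adversary would expect, mirroring the five requirements above."""
    gaps = []
    if decoy.os_banner != production_banner:
        gaps.append("interface")
    if decoy.response_ms > max_response_ms:
        gaps.append("performance")
    if not decoy.has_breadcrumbs:
        gaps.append("content")
    if not decoy.requires_auth:
        gaps.append("access")
    if not decoy.interactive:
        gaps.append("behavior")
    return gaps
```

An empty result means the decoy passes this (deliberately simplified) checklist; any named gap marks a dimension on which a capable adversary could unmask it.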
Depending on the specifics of the deception being deployed, there might be additional authenticity-related functional requirements, especially where a decoy is put in place to mimic a domain-specific capability. This can include decoy systems that support a sector-specific service (e.g., a banking service) or ones designed for a specialized environment (e.g., IoT).