A “complete Security” approach – in the sense I introduced in the previous article (http://carlos-trigoso.com/2014/04/28/security-lost-and-recovered-2/ ) – applies a modal logic to grasp the fundamental aspects of any Security arrangement. This is a “deontic logic,” i.e. a logic of obligation, prohibition, interdiction and permission, which is able to represent the various moments of a Security model.
In current organisational contexts we frequently find that there is a techno-centric focus which limits Security strategy. Under these conditions, organisations lack data quality policies and processes, and any information controls are isolated within technical systems. Management teams address problems reactively, and the organisation suffers because of deficient availability of correct information.
As in other areas of Information Technology, for example in data management, Security is affected by the lack of standardisation and validation. When data improvement programs are limited to single applications, the organisation as a whole is unable to operate efficiently, and when this happens in the Security space, the organisation is unable to trace a security boundary and verify if its policies are working or not.
Marek Sergot proposed a modal structure to study the enforcement of access policies in distributed environments (M. Sergot, “Contractual Access Control”, 2002). Very appropriately, Sergot noticed that existing models of Security emphasise the modalities of Permission and Prohibition, while neglecting other modalities, in particular those related to “obligation.” (See: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.19.6298)
To remedy this lack, Sergot proposed an extension of the deontic approach, introducing the notions of “entitlements” and “obligations” to grant access. This supported Sergot’s idea that “controlling the normative state of a community” and “monitoring the contractual performance” of a Security model had to be part of the complete structure.
Moving beyond Sergot’s suggestions, and adopting a different terminology, I have been looking into the four modalities to determine how these map to a common language of access controls. A summary of this approach is represented by the following diagram. (See also: http://carlos-trigoso.com/fundamental-conceptions-of-information/persistence-of-techno-centrism/ )
Deontic Obligation (“Must Give Access”) represents here an “obligation to read/write data,” which should be understood as the position of the actor who commits resources. The logical “contrary” to this is Prohibition (“Must Not Give Access”), which maps to a “prohibition from reading/writing” and, in terms of access controls, can be understood as “selection” or the “allocation of trust” (and distrust). The other two “corners” of the deontic square must then be Interdiction (“May Not Give Access”) and Permission (“May Give Access”).
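The deontic square just described can be captured as a small data structure. The following is a minimal sketch in Python (the enum and relation names are mine, not part of any established library or of the original model):

```python
from enum import Enum

class Deontic(Enum):
    """The four corners of the deontic square, read as access-control stances."""
    OBLIGATION   = "Must Give Access"      # obligation to read/write data
    PROHIBITION  = "Must Not Give Access"  # prohibition from reading/writing
    PERMISSION   = "May Give Access"
    INTERDICTION = "May Not Give Access"

# Obligation and Prohibition are contraries (they cannot both hold);
# each modality's contradictory sits diagonally opposite it on the square.
CONTRARY = {Deontic.OBLIGATION: Deontic.PROHIBITION,
            Deontic.PROHIBITION: Deontic.OBLIGATION}

CONTRADICTORY = {Deontic.OBLIGATION: Deontic.INTERDICTION,
                 Deontic.INTERDICTION: Deontic.OBLIGATION,
                 Deontic.PROHIBITION: Deontic.PERMISSION,
                 Deontic.PERMISSION: Deontic.PROHIBITION}

print(CONTRADICTORY[Deontic.OBLIGATION].value)  # -> May Not Give Access
```

The point of the sketch is only that the four modalities form a closed structure: each one is defined by its relations to the others, not in isolation.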
As a very useful reference I usually show the logic modalities developed by Anselm of Canterbury in the XIIth Century – and brilliantly studied by Sarah Uckelman and Douglas Walton. (See: http://carlos-trigoso.com/2010/04/05/st-anselm-of-canterbury-logic-of-action-2/ )
The Anselmian modal terms are:
– Facere esse
– Facere non esse
– Non facere esse
– Non facere non esse
Which can be interpreted very clearly as a “logic of action” as follows:
– To make something happen
– To make something not happen
– Not to make something happen
– Not to make something not happen
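The four Anselmian terms arise from independently negating the act (“facere”) and the outcome (“esse”). A small sketch (my own encoding, for illustration only) generates all four from that observation:

```python
from itertools import product

# Generate Anselm's four action modalities by independently negating
# the act ("facere") and the outcome ("esse").
def anselm_terms():
    for neg_act, neg_outcome in product(["", "non "], repeat=2):
        yield f"{neg_act}facere {neg_outcome}esse"

print(list(anselm_terms()))
# -> ['facere esse', 'facere non esse', 'non facere esse', 'non facere non esse']
```

This makes visible why the terms form a complete square of action rather than an arbitrary list: they exhaust the combinations of doing/not doing and bringing about/preventing.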
This is useful, because only when we interpret access controls as a mesh of actions and interactions can we see how pertinent it is to apply a modal logic in the realm of Security. In turn, a modal mapping also allows us to relate Security principles to the fundamental legal conceptions formulated by one of the most important legal theorists of the last century, Wesley Newcomb Hohfeld (1879 – 1918).
Hohfeld’s correlated legal conceptions of Power, Immunity, Disability and Liability, together with those of Right, Privilege, No Right and Duty constitute a powerful framework for the formulation of a truly complete Security theory.
For example, it can be said that Power and Liability are correlated in the same way as Obligation is to Permission, and that Immunity and Disability are correlated in the same way as Prohibition and Interdiction are related to each other. (As happened when Hohfeld introduced his terms for the first time, the specific meaning of the Security concepts has to arise from the conceptual mesh itself and not from partial or separate definitions of each term. See: http://carlos-trigoso.com/fundamental-conceptions-of-information/section-1/ )
Using this approach I propose to go even further and say that the Security Perspectives of Trust Definition and Trust Verification (upper left and lower left of the logical structure presented in the previous article) are linked in the same way as Power and Liability, while Trust Allocation and Trust Enforcement (the upper right and lower right corners) are correlated in the same way as Immunity and Disability are linked to each other.
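The claimed correlations can be summarised in a lookup table. The following sketch is my own encoding of the two paragraphs above (the dictionary layout and names are mine, not Hohfeld’s or the article’s formal apparatus):

```python
# Each Hohfeldian pair of legal conceptions is said to correlate, in the
# same way, a deontic pair and a pair of Security Perspectives.
hohfeld_correlates = {
    ("Power", "Liability"): [
        ("Obligation", "Permission"),
        ("Trust Definition", "Trust Verification"),
    ],
    ("Immunity", "Disability"): [
        ("Prohibition", "Interdiction"),
        ("Trust Allocation", "Trust Enforcement"),
    ],
}

for legal_pair, analogues in hohfeld_correlates.items():
    print(f"{legal_pair[0]}/{legal_pair[1]} correlates: {analogues}")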
We can reach this model by starting from an expanded concept of Information as well (after all, we cannot address Security without such a framework). There are four modalities of Information underlying the four Security Perspectives. (See: http://carlos-trigoso.com/fundamental-conceptions-of-information/ )
– Information as subject (corresponding to information as intangible and subjective-abstract)
– Information as relationship (corresponding to information as intangible and subjective-concrete)
– Information as an object (corresponding to information as a tangible and objective entity, or “data”)
– Information as a process (corresponding to information as tangible and objective-abstract, or “event”)
For each of these modalities we can assign a core access-control modality. For example, for Information as a Subject we can establish the obligations of the sharing party (in most cases the data custodian): here Obligation represents the definition of read and write controls. Similarly, for Information as an Object, the modality of Interdiction represents the enforcement of these controls, normally associated with Security Operations in the IT department.
For Information as Relationship, Prohibition represents the selective permissions applied to various actors. I tend to think here of the organisational structures and the roles associated with business functions. And for Information as Process, Permission refers to compliance with the established permissions, not the permissions themselves.
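The four pairings just described can be collected into a single mapping. This is a sketch of my reading of the text (the dictionary keys and the short glosses are mine):

```python
# Each information modality paired with its core deontic access-control
# modality and a short gloss of what that modality governs.
core_modality = {
    "information as subject":      ("Obligation",   "definition of read/write controls"),
    "information as relationship": ("Prohibition",  "selective permissions for actors and roles"),
    "information as object":       ("Interdiction", "enforcement of controls (Security Operations)"),
    "information as process":      ("Permission",   "compliance with established permissions"),
}

for info, (modality, gloss) in core_modality.items():
    print(f"{info}: {modality} - {gloss}")
```

Read as a table, the mapping makes the completeness claim concrete: every information modality has exactly one governing deontic modality, and all four deontic modalities are used.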
We have been looking at a multi-faceted logical structure, but this can also be presented more directly as a duality of Risk and Trust. To do so we must define risk and trust as intertwined concepts, so as to set them as a basis for a Security strategy or architecture. The term “trust” is frequently rejected by technologists who are limited by their mechanistic approach. Confusion arises when people compare the idea of “trust” with other terms which seem equivalent or closely related: belief, confidence, assurance, etc. All of these seem too “subjective” to the techno-centric mind-set, which yearns for something “measurable.”
To avoid this confusion, we must agree that “trust” cannot be defined in isolation, as a metaphysical principle. My view is that “trust” is only meaningful as a correlate of “risk,” in the same way as an obligation is only comprehensible as a correlate of a duty. In other words, risk appears only in the context of trust, and vice versa. For trust to exist there must be risk takers.
Sadly, in the “modern” technological approach, even when people try to avoid the perceived ambiguity of the concept, it is easy to assume that trust is a unilateral affair, where one actor holds a “belief” about another actor, or “relies” on a second actor being “compliant.” This, in turn, requires some assertions about the second actor’s intentions and capabilities, as well as control capabilities, i.e. the ability of the first actor to detect and verify compliance by applying some measurements. In this way the definition of trust again dissolves into a series of related terms and clarity is lost.
When “trust” is defined in this way, the problem is that it ends up as an “internal” affair of the trusting party, and a series of trade-offs arise immediately: control versus cost, control versus complexity, and risk versus opportunity. Hence, any clarity achieved in the process is completely lost. Ultimately the observer becomes unable to see the correlation of risk and trust and – with some reason – feels inclined to “discard” the idea of Trust altogether, as if it were useless.
The usual scheme also falls apart once there are more than two parties at play: does the higher “confidence” level of an intermediary party affect the “confidence” level of the first party? Do changes in the state of belief of the second party affect the level of trust of the first party? These problems multiply when the mediating parties are technological or physical components. Although these components cannot in principle hold any beliefs or states of “trust,” the techno-centric view tends to attribute these states to inanimate machines and systems, while human interaction recedes into obscurity. Trust in the “platform” replaces human, cultural, social and political trust.
This is definitely not a good foundation for a Security strategy!
The main consequence of this involution is that – in the absence of a proper definition of Trust as a network of meaning, and due to undue reliance on “trust in the technical platform” – the correlated concept of Risk is also degraded.
In the absence of proper notions of Trust definition and allocation (i.e. a wider mesh of meaning), techno-centric “trust” devolves into an idea of “risk” that is exclusively focused on the machines and devoid of organisational meaning. When this happens, instead of an overarching framework and structure supporting the definition and establishment of Trust, we have an overwhelming tendency to “avoid risk” and a damaging emphasis on “confidentiality” by means of encryption only.
The lack of articulation of Risk and Trust runs parallel to the blurring of the deontic model of Obligation, Prohibition, Interdiction and Permission that should be the basis of Security policies. In this sense, the most damaging consequence is the disappearance of the modality of Obligation and the organisational tasks that are associated with it. Through oblivion, the techno-centric organisation arrives at a world with prohibitions and permissions but without obligations.
It is now time to overcome these limitations and look ahead into the post-techno-centric and post-cryptographic Security context, a world where cryptography and confidentiality are repositioned taking into account persistent adversaries with unlimited computational power and able to watch all information exchanges of the organisation. Adi Shamir recommended such a shift in the Security professions during his intervention at the RSA Conference in 2012.
This new level of complexity requires new definitions of Risk and Trust and a vision of Security that relies on an expanded logic of modal and legal concepts as described above. The following points summarise the general structure of the proposed approach.
• Trust Definition + Trust Establishment = Security Centred on Trust Management
• In this space, Security is articulated from the perspectives of Definition and Selection (Trust Definition and Allocation)
• This “half” of the structure corresponds to the Subjective position: the position of the business leader, the owner, the strategist, but also that of the group, the organisation, Society in general
• Trust Enforcement + Trust Monitoring = Security Centred on Risk Management
• In this space, Security is articulated from the perspectives of Protection and Verification (Trust Enforcement and Verification)
• The lower “half” of the structure corresponds to the Objective position: the position of the implementer, the controller, the auditor, but also that of the engineer, the technologist, the IT organisations in general.
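The two “halves” summarised in the bullet points can be sketched as a single structure. This is my own illustrative encoding of the summary (the field names and actor lists simply restate the bullets; nothing here is a formal specification):

```python
# The proposed two-sided structure: a Subjective half centred on Trust
# Management and an Objective half centred on Risk Management.
structure = {
    "Security Centred on Trust Management": {
        "position": "Subjective",
        "perspectives": ["Trust Definition", "Trust Allocation"],
        "actors": ["business leader", "owner", "strategist",
                   "the group", "the organisation", "Society in general"],
    },
    "Security Centred on Risk Management": {
        "position": "Objective",
        "perspectives": ["Trust Enforcement", "Trust Verification"],
        "actors": ["implementer", "controller", "auditor",
                   "engineer", "technologist", "IT organisations"],
    },
}

for centre, half in structure.items():
    print(f"{centre} ({half['position']}): {', '.join(half['perspectives'])}")
```

The symmetry of the two halves is the point: a complete Security approach keeps both sides in view instead of collapsing everything into the Objective, techno-centric half.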