This is a draft of the book of the same title, now available through Amazon:
Dependency of Information Security
In our study of Information Security and Identity Management the most important idea should be “information,” not “security.” Security is the predicate of information: we say that information is “secure,” not that we practice some security “for” information. Now, what is “information”? In this period of dominant techno-centrism, the notion of information seems self-evident and beyond doubt, and Security professionals never discuss what is meant by it. We concentrate on what security could mean, but not on what we are “securing.” Because of that presupposition, our entire worldview becomes weak and degenerates into a series of “keywords” and simplifications.
Here is a text from the standard CISSP CBK book: “Information security practices protect the assets of the organisation through carrying out managerial, technical, and operational controls. Information assets must be managed correctly to lessen the risk of financial loss — just as financial assets are managed through finance departments and human assets (people) are managed and cared for by the human resources department and associated code of conduct and employment policies and practices. Failure to protect the information assets from loss, destruction, or unexpected alteration can result in significant losses of productivity, reputation, or finances. Information is an asset that must be protected, as well as the software and hardware, which support the storage and retrieval of the information.” – This condenses everything I need to comment on about traditional approaches to Security.
We seem content with elementary definitions like these, focused on the idea of “safeguarding” an organisation’s data from unauthorised access or change; and we also take as a definitive truth that organisations have “informational assets,” presuming that we know exactly what these are. Once the definition is given (and this happens in almost all manuals, textbooks and documents about Information Security) the terms are rarely discussed in depth.
The most important task in this period is therefore to examine and fully determine what information is, and what role it has in society, economics, business, and professional practices. We then need to have an understanding of what informational assets are, and finally what it means to “secure” them. Only then can we start speaking about IT Management and Security on a solid foundation.
Any direction or proposal for IT and Security management needs to consider that these domains are enclosed within a wider sphere. IT and Information Security are dependent areas of business operations. Business management itself is dependent on the surrounding social and economic structures and practices. Therefore, any attempt to resolve problems in the limited context of IT will fail if it does not reflect the real business, economic and social context.
IT and Security management need a philosophical stance, a search for principles and truth that may anchor better future solutions. Considering this need, though, it is difficult to be optimistic in the face of the permanent anti-philosophical stance of the technical professions. The IT specialities and Security are not accustomed to this, despite the permanent flow of sub-optimal and failed goals and technologies that we see every day. This anti-philosophical stance reveals itself in the illusion of technical progress and the assumption that all technical progress is good. It is also visible in the pervasive lack of knowledge of the problematic of Information Theory and the role of information in society.
To do philosophy is, largely, to think with abstractions. Technical professionals, or specialists close to the technical areas, dislike “abstractions” and seem to prefer “concrete” or practical matters. There is a great confusion in this rejection of the abstract. In fact, thought is impossible without abstractions, and the mere act of assuming or accepting some principles is already a major abstraction underlying what we commonly think of as Information Security.
As mentioned above, Security professionals take for granted what information is and treat it more or less as a given object or “value” existing “inside” organisations, an object of value that needs to be “protected” against unauthorised access. Here we can already see abstractions at play: the terms around information, for example the “value” of information, are abstract presuppositions, not concrete or practical at all, but instead inherited, socially conditioned abstractions.
This presupposition sits like a background or a curtain behind everything we discuss when we talk about security. A background so opaque and extended that we do not even see that it is there. It is an abstraction in the most common sense of the word.
So, involuntarily, when we refuse to discuss Information Theory or other areas that might clarify the context of our profession, and when we express a preference for “practical things,” we are giving in to inherited, given, ideological abstractions, and we do so uncritically and passively.
My work follows a different path, a philosophical path, starting with the analysis of presuppositions and worldviews. Ideas are produced by analysis and synthesis, aggregation and inference, analogy and selection of explanations that fit the evidence. Philosophy has a bad name in the techno-centric professions, but it is nothing other than the science of thinking, and it has a rigorous practice. Philosophy needs a desire to search deeper, beyond consensual knowledge, because, after all, accepted truths are what determine our current predicaments, what has brought us to where we are now.
Four Classes of Access Control
A good starting point for this journey is analysing the notion of Access Management, an everyday idea used by Security experts. We think we know what this means, what we are talking about, when we present our solutions for “user access control.” We are so used to this idea that we don’t stop to consider how it is constructed, what parts it has, what variations it admits; but, if we had an idea of the complex ramifications of access management, we could start building or rebuilding a Security theory around this foundation.
A first analysis shows there are four classes of access controls:
- Granting access to resources: by allowing users or their tools to access data sets or data stores.
- Limiting access to resources, to ensure a user (or his/her tools) does not do more than they are authorised to do.
- Preventing access to resources, to ensure that users do not damage these or access information intended for others.
- Terminating access to resources: by removing access rights from users or their tools when these are not authorised or lose authorisation to access data sets or data stores.
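These four classes can be sketched, purely as an illustration, as the lifecycle operations of a simple access-control list; all class and method names below are my own and not drawn from any standard or product:

```python
# Illustrative sketch only: the four classes of access control as
# lifecycle operations on a simple access-control list (ACL).
# All names here are hypothetical, not from any standard or product.

class AccessControlList:
    def __init__(self):
        # resource -> {user: set of permitted operations}
        self.entries = {}

    def grant(self, user, resource, operations):
        """Granting access: allow a user a set of operations on a resource."""
        self.entries.setdefault(resource, {}).setdefault(user, set()).update(operations)

    def limit(self, user, resource, operations):
        """Limiting access: reduce the user's rights to the given subset."""
        current = self.entries.get(resource, {}).get(user, set())
        self.entries.setdefault(resource, {})[user] = current & set(operations)

    def is_prevented(self, user, resource, operation):
        """Preventing access: deny any operation not explicitly granted."""
        return operation not in self.entries.get(resource, {}).get(user, set())

    def terminate(self, user, resource):
        """Terminating access: remove all of the user's rights on a resource."""
        self.entries.get(resource, {}).pop(user, None)

acl = AccessControlList()
acl.grant("alice", "payroll", {"read", "write"})
acl.limit("alice", "payroll", {"read"})        # the write right is withdrawn
assert acl.is_prevented("alice", "payroll", "write")
acl.terminate("alice", "payroll")
assert acl.is_prevented("alice", "payroll", "read")
```

Even in this toy form, the four operations are clearly distinct activities, which already hints at the four perspectives discussed below.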
When considering these forms, we immediately see an application of the four elementary perspectives mentioned in previous sections:
- Granting access corresponds to “direction” disciplines, i.e. activities around trust definition.
- Limiting access corresponds to “selection” disciplines, i.e. activities around trust allocation.
- Preventing access corresponds to “protection” disciplines, i.e. activities around trust enforcement.
- Terminating access corresponds to “verification” disciplines (compliance), i.e. activities around detection and validation of user access.
So here, we have a first analysis of Access Management. I believe that this analysis cannot be disputed and is self-evident for all Security specialists. Nevertheless, this approach, simple as it seems, immediately takes us beyond the traditional or standard “Protection” approach. Seen from the side of the modalities of access control, we find a range of activities far more complete and complex than the reduction of security to access, and of access to mere protection.
The second level of abstraction should be to identify and reveal the object of access controls. Although we in the Security arena talk every day about access controls, we rarely discuss what we are protecting. Some colleagues dismiss any discussion, saying it is obvious we are “protecting information.” With that, the discussion is declared finished. In my experience, when the Security practitioner reaches that point, he or she feels self-justified as being “pragmatic.” Everybody seems to “know” the object of our protection efforts. Information, within the techno-centric view, appears as an object, a material substance that can be encased, covered, stored, and manipulated as a thing.
This ideological thinking and presupposition is especially visible among technology vendors, where the ideology also becomes part of the commercial positioning effort. Some companies make this plain when identifying as “suppliers of information security” or even “specialists in information.” These discourses take for granted that the audience, perhaps company executives or advisers, will automatically agree with shared notions of what Information is, and what this means for Information Security.
Not all technologists, consultants, or vendors are impervious to theoretical thinking, and some will remember research or academic results when pressed for a deeper view of these matters. For example, people will remember the ISO 27001 or the CISSP Common Body of Knowledge training, and maintain there is after all a Security Theory based on “security models.”
That is true: a small set of security models exists, none of which is a theory, but rather a more or less complete set of logical statements focused on user access controls, for example Bell-La Padula, Biba, Discretionary Access Control, Clark-Wilson and Non-Interference. All these models are based on ideas about the nature of information and the notion of “information flow.” For example, these models abstract human actions for reading or writing data from or into information stores. A presupposition of all these models is a concept of information in which it is reduced to a material form or storage of written signals. One of these models, Nondeducibility theory, is particularly interesting, because it is clearly based on information theory and the notion of “information flow.” This model assumes that information flows in a system between “high-level objects” and “low-level objects”; in this framework, several interpretations are possible, as analysed by John McLean; for example, a Security policy could allow for information flow from low-level objects towards high-level objects. This is not so relevant here, as I want to focus on the use of the “flow of information” idea. Whether in the earlier notions of writing and reading, or in the ideas of information sharing or flowing, what is clearly at play is a notion of information as a substance that somehow moves from one entity to another. This conception is very much at the base of all our Security models.
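As an illustration of how such models reduce security to rules over reading and writing, the core of Bell-La Padula can be sketched in a few lines; the level names and function names are my own simplification of the model’s “simple security” and “star” properties:

```python
# A minimal sketch of the Bell-La Padula rules, illustrating how these
# models formalise "information flow" via read/write actions.
# Levels are ordered integers here purely for illustration.
UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET = range(4)

def may_read(subject_level, object_level):
    # Simple security property: no "read up".
    return subject_level >= object_level

def may_write(subject_level, object_level):
    # Star property: no "write down" (prevents leaks to lower levels).
    return subject_level <= object_level

# A SECRET-cleared user may read CONFIDENTIAL data but not write to it,
# since writing would let high-level information flow downwards.
assert may_read(SECRET, CONFIDENTIAL)
assert not may_write(SECRET, CONFIDENTIAL)
assert not may_read(CONFIDENTIAL, SECRET)
assert may_write(CONFIDENTIAL, SECRET)
```

Note how the entire model fits in two comparisons over a linear ordering: this economy is exactly why it covers only a small fragment of real organisational practice.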
Intuitively and in common discourse, we assume the “reality” of information as a substance with properties similar to water or air, that is, a substance that moves and flows between containers or other objects, “passing” from one to the other. When information “flows” for example from a protected envelope to a lesser protected one, we interpret this as a “data leak.”
This is realistic enough, and perhaps such a definition of data leak may be enough to ground a “security policy.” Nevertheless, in any organisation, Security practices are in fact impossible to reduce to the models we have inherited and all the “security principles” we know. In reality, the cleverest logical access model (write, read, deduce, etc.) can only represent a small fragment of an organisation’s Security practices, as we always have a combination of levels; sometimes dozens of variations of these models are needed to manage documents and data in all possible situations.
As organisations evolve, even military ones have to adopt and combine several Security models and leave behind uniform mechanisms for access control. This happens even more in the private sector, where it is normal to find different “models” for different divisions, and even within divisions and application types. The trend goes even beyond that, as we now see different user types covered by the same “model,” as well as different models for the same user type, i.e. for the same security classification for users.
This situation shows that, when looking into the well-known “security models” cited above, we are not in front of Security Theories, but only fragmentary formulations for very specific cases, some of which were perhaps suitable when computer usage was rare and limited to bureaucratic organisations. So where do we stand now? What is Access Management in this new situation?
Access and Indirection, Secrecy and Authenticity
Another aspect these models have in common, besides a similar notion of information as an object and security as encasement or “protection,” is that all of them equate Security with “secrecy.” It is not strange they do so, as many of the models were developed during the introduction of electronic computing into military and public agencies.
Under those circumstances, “user access” had to be considered as an event that potentially represented a threat to secrecy and control. In other words, user access is conceived in terms of enforcement of access permissions or “access control lists.” This reductionist approach was enough in a period where users had what can be termed “direct” access to a resource, for example, directly typing their user names into a login screen and reading information off the screen itself; although even in those cases information access was never direct. A notion of access inherited from printed or written materials, held in folders or cabinets, behind doors, was expeditiously imposed on a different environment consisting of electronic computers and binary media.
Up to a point, this new abstraction (seeing electronic devices and media as paper and cabinets) does work when routes of access are direct, as indicated above. This maintains the illusion of a person either being authorised or not to “see” some information or to “write” data into the repository he or she is accessing. In reality, since the beginning of electronic computing, access has been indirect, and the whole evolution of these technologies has led to increasing indirection, remoteness, and mediation of access. We could summarise what has happened by saying that access is always indirect and mediated by computing tools, including software processes or other users, which in turn initiate other processes and/or use other tools.
The chain of mediation becomes longer and more complex as we move from the period of the mainframe, to the client-server world, to the internet era. In this last context, all actions are mediated actions, actions executed by processes or tools, which we can properly call “agents.” Another consequence of this transformation is that agents and persons, or users and processes, are at many points indistinguishable from one another: just as a person accesses a process and a process accesses data, a process can also “access” a user in a reversed chain of events.
Given this, both processes and individuals appear in the system as agents, and both have some form of identification data attached to them. At least they have a name (a process name!). It is in the nature of things, though, that names are not intrinsically attached to processes or users, just as users are not attached to processes or vice-versa. In simpler terms, we can say the information technology world is essentially discontinuous, as the connections between the parts are instrumental, temporary, and external.
In our life as consumers, citizens, or workers, we see this in action when we have different names in different contexts and use different tools to read or write similar sets of information. User access thus becomes more and more complex, given that there is no unambiguous attachment of the user to his or her tools, or to the processes and names under which these tools and processes execute.
Immediately we see the whole notion of Access Management has to evolve to consider not only the immediate act of “seeing” some data in a folder or a folder in a cabinet, but also the relationships between the agents, the names of the agents (i.e. how the agents are known in a system) and the tools or processes initiated by the agents.
We also need to consider delayed actions, meaning those processes that are long running or that, being mediated, execute over time and access data indirectly. Multiple users will also launch the same tool but read or write different data sets, thereby underlining the growing complexity of access controls. Similarly, it is normal in all types of organisations that users have different authorisation levels for the same tool and the same data set, an added level of control that we term in the trade “fine-grained access controls,” an extra layer of access management which is itself mediated by the first layer of user authentication (i.e. name validation).
This picture can be completed by saying that for each of the transitions depicted above, moving from the individual user to stored data, passing from process to process, there will be one or more instances of “user authentication.” People and processes are authenticated by some attributes they present or have. This will become relevant in a later discussion around user authentication (name validation) and authorisation.
What needs to be highlighted here, in our analysis of Access Management, is that all user or process actions are ultimately “reading” or “writing” operations. Strictly speaking, in technical, material terms there are only two types of action: read and write. These two possible actions lie at the bottom of the two primary Security principles of “confidentiality” (associated with reading) and “integrity” (associated with writing). Out of these two, we can develop the entire logic of Access Management, and each of the Security models pointed out earlier in this chapter.
Now, in light of the idea of indirection and agent action, we can say that each of those modalities will have a number of instances, for example, direct and indirect granting of access rights, as well as direct or indirect prevention of access. It is often the case that we grant access to an individual, but not to a process or tool, in which case the mediating tool also needs a mechanism for authentication and authorisation. It is a common experience too that users lose access rights (for example their user names become invalid), but their tools continue to have permissions to read and write data.
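The point about mediated access and orphaned tool permissions can be sketched as follows; all agent names and the chain structure are hypothetical illustrations, not a real system:

```python
# Sketch: access as a chain of agents (user -> tool -> process -> store),
# where every link in the chain must hold a valid authorisation.
# Checking only the last link misses the case of a revoked user whose
# tools still retain their rights. All names are illustrative.

authorised = {
    ("tool_A", "process_1"): True,
    ("process_1", "datastore"): True,
    ("alice", "tool_A"): False,   # alice's user name has been invalidated
}

def chain_access_allowed(chain):
    """Allow access only if every adjacent pair in the chain is authorised."""
    return all(authorised.get((a, b), False)
               for a, b in zip(chain, chain[1:]))

# The tool and process still hold permissions on the data store...
assert chain_access_allowed(["tool_A", "process_1", "datastore"])
# ...but the full chain starting at the revoked user must be denied.
assert not chain_access_allowed(["alice", "tool_A", "process_1", "datastore"])
```

The design point is simply that the control is evaluated over the whole chain rather than at the target object, which is the shift in perspective argued in this section.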
Therefore, security becomes a far more nuanced and complex task than that of protecting a specific object. In fact, even if we retain the abstraction of “protection” and information secrecy, we clearly see that Security needs to be applied not only to the “final” object or “target,” perhaps a data store, but also to each of the agents in the chain of access.
If action becomes indirect, a chain of action along a series of agents and objects, Security also becomes indirect and “distributed” along that chain. This puts the goal of Security in a different light, showing that access controls are not set on the object or target (the usual “pot of gold at the end of the rainbow”) but instead on the entire chain that leads to the object. I prefer to think that Security does not protect information itself, but the actions that either write or read data. Alternatively, we could use a much better concept of information, by understanding it not as an object or thing, but as a relationship between the user and data, or between the agent and the object.
Security in these expanded terms is Security (not only protection!) of the “act of information,” which can be reading or writing data, as well as transmitting data (we see the action from the side of the object or the origin of the data).
The four modalities of access (granting, limiting, preventing, and terminating) are moments of the information chain, parallel to other correlated conditions which have to exist in all information acts: for example, granting access corresponds to an act of trusting the user, while limiting access corresponds to an act of selectively allocating trust. This is another level of expansion, if we think of it as another step away from the idea of Security as a discipline of secrecy and protection. So there are four moments in this analysis: first we see how secrecy and integrity correspond to elementary reading and writing operations. Then we reveal how Security becomes indirect and mediated as the access chain expands. Then we understand that we are not practicing Security around some material object called “information” but effectively working on the act of information itself (which implies a relational concept of information). And finally we return to the Access Management modalities, where we note that ours is simultaneously a profession of trust and risk management. John Arnold has developed similar ideas in his Collaboration-Oriented Architecture papers.
Modalities of Risk and Trust
To go beyond the risk-based, protection-centric Security stance we need to adopt a worldview that naturally combines multiple perspectives. This is similar to the change proposed by Magoroh Maruyama in his theory of transcultural epistemological types. Just as a unilateral, techno-centric perspective produces a Security practice focused on protection technologies, so a balanced, multi-ocular worldview leads to better and more complete Security strategies and programmes. In the model I propose, there are four aspects which are integral to Security as a whole: Direction, Selection, Protection, and Verification. These in turn can be linked to the four Access areas of “granting, limiting, preventing, and terminating” which we considered above.
The dominant trend in the Security disciplines and the market is the perspective of Protection. The Verification view comes second. Distant third and fourth places are occupied by the disciplines of Direction and Selection. This sequence largely reflects the historical evolution of Security models and technologies. It is clear, for example, how the protection disciplines correspond to the initial periods of IT implementation in the military, industry, and academia, while the verification disciplines prospered with the increase of legal and regulatory compliance in recent decades. The disciplines around trust definition and allocation (Direction and Selection) are less developed and are often confused with the others.
- Trust Definition, in this new Security Management approach, is a question of Direction. In this context, user identity is a matter of “Distinction” of the user among other users.
- Trust Establishment is a question of Selection (Trust Allocation); here user identity is a matter of “Membership” of the user in some group or category.
- Trust Enforcement is a question of Protection; identity is then seen as an “Object,” or better said, as the data objects that stand in for the user (user name, credentials, attributes).
- Trust Validation is a question of Verification, and identity is a matter of “Context” (meaning that an identity is valid within a context and invalid outside of it).
Trust Definition and Trust Establishment are reflected in a view of Security “in” the organisation, and they answer questions around the benefit of utilising IT technologies, trust management and user enablement. Complementarily, Trust Enforcement and Trust Validation materialise in a view of Security “for” the organisation, aiming at assurances and actions in terms of Data Control, Compliance, Protection, and Privacy.
In other words, Security “in” the organisation is the “subjective” position, the position of the business leader, the owner, the strategist, but also that of the group, the organisation, Society in general. Security “for” the organisation is the “objective” position, the position of the implementer, the controller, the auditor, but also that of the engineer, the technologist. That is the position of IT organisations in general.
It is clear the subjective and the objective positions have to arrive at different ideas of Access Management, but it is also clear these two positions are interdependent and cannot exist separately.
Security “for” the organisation revolves around processes of Trust Enforcement and Trust Validation. Overall, it can be described as Security centred on risk management. At this level, Identity Management deals with individual identity as an object and as a context. More precisely, it works on a complex combination of objects (user data) and contexts (for example infrastructures and services).
Security “in” the Organisation, in turn, moves around processes for Trust Definition and Trust Establishment, and it can be described as Security centred on trust management. This aspect of Security Management deals with individual identity as distinction and membership. This effectively means that Security here defines and allocates trust levels, depending on the identity of the individual and his or her membership in groups or roles.
Therefore, we have here four modalities of risk and trust and two major groupings. The modalities are Trust Definition, Allocation, Enforcement and Verification, and the groupings are Trust-focused and Risk-focused Security. These modalities and groupings are conceptual, but condense the basic principles of our profession. In fact, we have arrived at this model through a process that can be called “unfolding” or opening up of ideas that are present in embryonic form in reality. The complete picture of these concepts can be seen in my previous work, for example in my article “What Security Shall Be.”
When opening up the concept of Information Security, as we have done in the previous sections, we arrived at four perspectives on identity: as distinction, as membership, as object and as context. This is our starting point for addressing the present and the future of information theory.
Information theory needs by itself a detailed analysis that is beyond the remit of this book, so here I will point to information as it appears in the Security discipline. We have progressed towards a more correct idea of information when noting that IT Management and Security disciplines are not about an information object, but an information chain. The four Security perspectives operate on this chain from different angles and covering different but complementary concerns.
What is the origin of the “object” and “flow” theory of information that Security disciplines have taken up uncritically? I believe the objective or material concept of information is a construct that moves across many disciplines, beyond and around Information Technologies and Security. It is perhaps rooted in the physical sciences, but we need to note these, as well as mathematics and other sciences, do not have a single concept of information, and the debate is still open about the nature, the measure, and the implications of it.
I believe the current, dominant idea of information as an object is both inherited from some natural science branches, and from some particular interpretations, but it is not a general scientific concept. As it was accepted uncritically, it appeared as a “fundamental idea,” supported by science, when in fact scientific literature has not produced a definitive, universal theory about this.
An all-encompassing definition of information as an object and material flow was given by Harold Borko: “Information science is that discipline that investigates the properties and behaviour of information, the forces governing the flow of information, and the means of processing information for optimum accessibility and usability. It is concerned with that body of knowledge relating to the origination, collection, organisation, storage, retrieval, interpretation, transmission, transformation, and utilisation of information.” I quote this here because it reflects very closely what the techno-centric Security practitioners have in mind when they speak about information.
Perhaps the most extended and widely used theory of information is that of Claude Shannon, even though the author himself acknowledged that his definition of information covers only a minor aspect of it: the transmission of signals in noisy channels (or media). Shannon explicitly excluded any meaning or content of information. Several authors have pointed out this limitation.
For example, Ernst von Weizsäcker writes: “The reason for the ‘uselessness’ of Shannon’s theory in the different sciences is frankly that no science can limit itself to its syntactic level.”
Similarly, J. Peil writes: “Information is neither a physical nor a chemical principle like energy and matter, even though the latter are required as carriers.”
More significant, for our purposes—in the realm of Information Security—is the opinion of the “father” of cybernetics, Norbert Wiener (1894–1964), who asserted: “Information is information, neither matter nor energy. Any materialism which disregards this, will not survive one day.”
Wiener’s suggestion especially should call the attention of the IT professional and lead him or her to study in more depth what the nature of information is. On the one hand, as I pointed out above, there is no established or definitive theory of information; on the other hand, major contributors to this domain agree that information is not a material object. I would also add that if it is not a material object, it does not flow.
To consolidate this idea we need to look briefly into the work by Jon Barwise and Jerry Seligman with the confusing title of “Information Flow”. The authors present a theory based on the notion of “information flow” but, simultaneously, suggest that information is not an object and not a material flow: “Our primary interest is not so much in the ways information is processed but in the very possibility of one thing carrying information about another. The metaphor of information flow is a slippery one, suggesting the movement of a substance when what occurs does not necessarily involve either motion or a substance. The value of the metaphor lies largely in the question it raises. How do remote objects, situations and events carry information about one another without any substance moving between them?”
Therefore, the authors deny a material or flowing nature of information, and even posit some forms of “remote,” indirect or mediated action, yet dedicate the entire book to a “metaphor” of flow. It would have been clearer to state plainly that information does not flow and to explain the inadequacy of the metaphor. I think that, at least in the Security domain, this “metaphor” reveals a fundamental problem and leads to other mistaken ideas and ineffective practices.
From this it is not hard to derive the need to abandon the notion that information is composed of “bits,” “data,” or “information stores”: in one way or another, a set of objects in need of protection by Security professionals.
If information is not an object and it does not flow, what is it? How did it become what it is in the current world and why has it become a matter of confusion?
Definitions of Information
Once we leave the one-sided, techno-centric approach, information ceases to be seen only as a thing or an object, and we can conceive it as something that causes a change in our conscious perceptions. In this sense, information is a perceived difference or, as Bateson said, “A difference which makes a difference,” i.e. a difference in the world which causes a mental difference. This moves the emphasis from the object to the subject, from information as a thing to information as the result of interaction.
On the other hand, information may also be defined as a change in the “mental system” itself, one that is only triggered, not caused, by the external world. This leads to definitions of information focused on mental qualities and the ability of the mind to form ideas and images. Finally, information can also be conceived as something independent of the external world, more or less in an idealistic fashion, something living in the mind.
Humberto Maturana and Francisco Varela also deny that information is a substance: “Notions such as coding and transmission of information do not enter in the realisation of a concrete autopoietic system because they do not refer to actual processes in it. […] The notion of coding is a cognitive notion which represents the interactions of the observer, not a phenomenon operative in the observed domain.”
In this sense, it is possible to speak about information and assume at the same time that the systems involved are autopoietic, “closed” systems that do not exchange information with the environment.
Maturana and Varela write: “Organism A does not and cannot determine the conduct of organism B; due to the nature of the autopoietic organisation itself, every change that an organism undergoes is necessarily and unavoidably determined by its own organisation.”
According to this theory, interaction and communication are possible but not via “information flows,” as autopoietic entities do not have “inputs” or “outputs.” This line of thought is taken further by the German sociologist Niklas Luhmann in his theory of organisation: “In the context of the autopoietical reproduction the environment exists as irritation, disturbance, noise, and it only becomes meaningful when it can be related to the system’s decision-making connections. This is only the case when the system can understand which difference it makes for its decision-activity when the environment changes or does not change in one or the other respect.”
These thoughts are not isolated, and come from a philosophical tradition that does not reduce information to statistical or syntactic elements. Some philosophers go so far as to emphasise the “mental” pole of information, making it independent of the world, while others take the middle path and, like Luhmann, maintain that while there is no information flow, there is systemic coupling, i.e. interaction. Looking at all these interpretations alongside the traditional thing-like nature of information, we can begin to reshape our idea of Security and expand our understanding.
While for Shannon information is inversely proportional to probability, for Wiener it is directly proportional to probability. For Shannon, information and order are opposite concepts, while for Wiener information and order are co-dependent. In fact, for Wiener the entropy of a system (i.e. its randomness) is a measure of disorganisation, whereas for Shannon entropy is a measure of information (positive entropy, or the number of potential choices of the sender).
These differences reveal that Shannon approached the theory of information as an electrical engineer, while Wiener was a proponent of cybernetics, the theory of control, which asserts the principles of feedback and systems coupling. For Shannon the flow of information is unidirectional; for Wiener it forms a circle between the participating systems.
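To make the contrast concrete, here is a short Python sketch of Shannon’s measures; the function names are mine, and the closing comparison only illustrates the sign convention attributed to Wiener (information as negative entropy), not a formula of his.

```python
import math

def surprisal(p):
    """Shannon's information content of a single event: the rarer the
    event, the more information it carries (inverse relation to p)."""
    return -math.log2(p)

def shannon_entropy(probs):
    """Shannon entropy: expected surprisal over a distribution. For
    Shannon, higher entropy means more potential choices of the sender,
    i.e. more information."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin maximises uncertainty; a biased coin is more "ordered".
fair = [0.5, 0.5]
biased = [0.9, 0.1]

h_fair = shannon_entropy(fair)      # 1.0 bit
h_biased = shannon_entropy(biased)  # ~0.469 bits

# Shannon: the fair source carries more information (higher entropy).
# Wiener's convention: information is negative entropy, so the more
# ordered (biased) source ranks higher: -h_biased > -h_fair.
print(h_fair, h_biased)
```

The same numbers thus support two opposite readings: Shannon ranks the fair source higher, the negative-entropy reading ranks the biased one higher.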
In the Security domain, when we speak of “risk” we usually mean information risk. For historical and cultural reasons we assume as self-evident that information is an object (a “resource”), that it has an intrinsic value – like a piece of gold has value – that it can be stored, that it moves between information sources and consumers, and that it needs to be protected. As I said before, these assumptions are rarely questioned, and Security professionals are averse to such a discussion. In light of the critical remarks documented in previous sections, I hope that this resistance or lack of interest may decrease.
The importance of this arises from the fact that all the problems around the idea of risk, and all Security investment decisions, are linked to the underlying conceptions of information. In this chapter I have summarily shown where those differences point, and in the following sections I will explore how these perspectives on Security and information align with four stable “world hypotheses,” to use the terminology of Stephen C. Pepper.
We will see that beyond the idea of information as an object, a substance that flows and can be stored, Security architectures and programmes can be designed around other paradigms that do not take information as a substance. Speaking about the four world hypotheses he observed in philosophical thinking, Pepper used the term “mechanism” to refer to those tendencies that relied on mechanical metaphors around simple or complex machines. The mechanistic metaphor lies indeed at the foundations of computer science, and we in Security have only inherited that position.
Different approaches to Information Security will arise if we change our notion of information, from the mechanistic hypothesis to a different one. For example, the sub-disciplines of compliance are based on metaphors of context and history–Pepper called these “contextualism”–but moving from one metaphor to another is not what I am proposing here. Equally, we could adopt other metaphors but still have unilateral views of our work and our goals.
Real change will come when we aim for what could be called a “fifth hypothesis” which would articulate the other four in a non-eclectic manner. Starting from there, we should analyse and disentangle the “established truths” of our profession, as we have done with the idea of information. When doing that we immediately notice that our philosophical approach is also valid for other accepted ideas, for example that of the “value of information.”
Many assumptions and ideas hinge on this notion. After all, everything that we predicate for the disciplines of Access Management depends on the idea of “value” – more precisely, on the idea that we are protecting or managing access to valuable resources. Without the associated idea of value, there would be no reason for the Security practices and for the methods of “risk assessment.” As with the theory of information, Security professionals do not talk about the value of information. The “value” of information is an absolute given – another presupposition – that remains hidden from critical thought.
Sometimes we refuse analysis and questioning because even the thought of them leaves an uncomfortable vacuum and threatens to undermine our professional identity and our work environment. I have seen talented professionals, highly educated and articulate, leave this kind of debate abjectly declaring that they would not discuss anything that put their jobs into question. I am sorry to report that this does not sound like a good basis for professional ethics.
As we have seen, risk analysis is part of a line of thinking that depends on the notion of the value of information, and information itself is articulated around a mechanistic ideology. From this we may infer that the concept of value is also a mechanistic and techno-centric idea, far from the economic and sociological guidelines that should illuminate our work.
In a different chapter I will address how the four perspectives interpret value, and how we can surpass the techno-centric stance; but now let us look into the effects the new approach will have on the Access Management disciplines.
Here I want to introduce the concept of assurance, which is relevant to completing our analysis of Access Management. In this I follow the Arca Systems approach, which distinguishes Security Assurance from risk-based concepts.
Now let us take as a tool Kan Zhang’s “secure system” model, in which a completely secure machine or system is described as one that allows no information exchanges (still within the mechanistic metaphor of information flows). This imaginary system was proposed to explain how a Security policy effectively reduces the “security” of a system by granting access to it. Granting access, in this model, increases the Security risk. This thought experiment shows the entire conceptual framework of traditional Security at play: Security expressed in terms of object and flow, Access Management formulated as null access, i.e. as absolute “protection” of the object, and risk expressed in terms of increased access and reduced protection.
If we now return to the suggested definition of assurance, we can immediately see that such a scenario is incompatible with it: a high assurance rating cannot coexist with the highest Security rating. Indeed, without any information flow (i.e. “highest” protection), there is no way of measuring assurance, because assurance depends on user needs and the actual use of the machines or networks of the IT system. This means that Security defined in terms of the object may not express and manage all the possible instances of Access Management, leading to contradictory and paradoxical results. Assurance can be defined as confidence in the controls, for example in the fact that these are granting, limiting, preventing, and terminating access at a suitable level and for accepted individuals or groups; but it is more useful to define assurance as confidence in the result of applying the controls, i.e. in the information they generate about the system under observation.
If the system is isolated, then no controls, whatever their definition and scope, will produce useful information about the system or its isolated users, and therefore there cannot be any assurance. Paradoxically, we would still be able to say that the system is “secure” in the sense of being encased, protected, and beyond most conceivable threats.
Reducing uncertainty and increasing assurance in a system under Access Management requires setting up known policies for granting, limiting, preventing, and terminating access, as well as fool-proof methods of access data aggregation, including data on the security components themselves. This is a different conception of Access Management, one that is not focused on “protection” in itself.
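The idea that controls both enforce policy and generate the information on which assurance rests can be sketched in a toy model. All names here are hypothetical illustrations, not a real access-control API:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class AccessEvent:
    subject: str
    resource: str
    action: str
    granted: bool

@dataclass
class AccessManager:
    """Toy model: every access decision both enforces policy and
    produces a record, and those records are the raw material of
    assurance."""
    policy: Dict[Tuple[str, str], Set[str]]  # (subject, resource) -> allowed actions
    log: List[AccessEvent] = field(default_factory=list)

    def request(self, subject: str, resource: str, action: str) -> bool:
        granted = action in self.policy.get((subject, resource), set())
        # Grants AND denials are recorded: the log is the information
        # about the system on which assurance can be built.
        self.log.append(AccessEvent(subject, resource, action, granted))
        return granted

    def assurance_evidence(self) -> int:
        """Amount of recorded evidence. An isolated, 'fully secure'
        system produces an empty log and hence no basis for assurance."""
        return len(self.log)

mgr = AccessManager(policy={("alice", "ledger"): {"read"}})
mgr.request("alice", "ledger", "read")   # granted
mgr.request("alice", "ledger", "write")  # denied, but still recorded
print(mgr.assurance_evidence())          # 2 events of evidence

isolated = AccessManager(policy={})
print(isolated.assurance_evidence())     # 0: "secure", yet no assurance
```

The isolated instance at the end mirrors the Zhang paradox discussed above: maximal “protection” coincides with zero evidence and therefore zero assurance.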
This is the most important conclusion so far: Information Security shall not hinge on “protecting information,” but on generating high assurance levels, which in turn demand good-quality information about the organisation or system under consideration. The system – from this point of view – is a chain of actors and objects linked in interaction, and the task of security is to direct, select, protect, and verify those interactions. Only at this level can we speak of having “control” over the secured system as a whole!
I think the debate would deepen and change if we adopted a multipolar philosophy, along the lines of Pepper’s four World Hypotheses. Individually, we may or may not change our root metaphors, our essential hypotheses or views, but we might start considering that there are other concepts of Information, other ideas of Security, and therefore other ways to approach our professional tasks in general, and Identity and Access Management in particular.
While in the past security was mostly associated with, and even attached to, the “protection” ethos, a more complete and business-centred vision allows us to develop other, complementary strategies. It is essential to understand that there is no Security without business direction, especially without a definition of what the business model calls for as a circle of trust. The business policies come first, and the definition of what we want to have as a trusted environment is a precondition of all the rest.
Overall, then, the need for Identity and Access Management processes is part of the growth of the security disciplines, and their increased linkage with business concerns. While the first thrust of Security solutions was mostly technical, recently there is increasing emphasis on performance and economic considerations, and these depend on business process transformation.
 A definition of security: “…protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction.” U.S. Code Title 44, Chapter 35, Subchapter III, §3542 (via the Legal Information Institute).
 Official (ISC)2 Guide to the CISSP CBK, “Information Security and Risk Management”, 2006
 The standard approach to informational assets: “The purpose of computer security is to protect an organization’s valuable resources, such as information, hardware, and software. Through the selection and application of appropriate safeguards, security helps the organization’s mission by protecting its physical and financial resources, reputation, legal position, employees, and other tangible and intangible assets.” – NIST “An Introduction to Computer Security”, 1995
 “Controlling access to systems, services, resources, and data is critical to any security program. Without a comprehensive approach to control access, there are few options to managing the security posture of an organization. The ability to clearly identify, authenticate, authorize, and monitor who or what is accessing the assets of an organization is essential to protecting the environment from threats and vulnerabilities.” — Official (ISC)2 Guide to the CISSP CBK, “Access Control”, 2006
 A security model is a formal description of a security policy.
 D.E. Bell, L.J. LaPadula, “Secure Computer Systems: Mathematical Foundations”, 1973
 K.J. Biba, “Integrity Considerations for Secure Computer Systems”, 1977.
 Defined by the “Trusted Computer System Evaluation Criteria” (TCSEC), 1983
 D. Clark, D. Wilson, “A Comparison of Commercial and Military Computer Security Policies”, 1987
 J.A. Goguen, J. Meseguer, “Security Policies and Security Models”, 1982
 For example “reading up” or “writing down” actions.
 D. Sutherland, “A Model of Information”, 1986
 Meaning objects with higher or lower levels of protection.
 J. McLean, “Security Models and Information Flow”, 2003
J. Arnold, Collaboration Oriented Architecture – Securing a De-perimeterised Enterprise https://docs.google.com/document/edit?id=1IEVxlJesGn7h_vK1pa4yvjXY59ERtVqzn6yvX6mAlaM
 Page dedicated to M. Maruyama’s work: http://www.heterogenistics.org/maruyama/personal/biography.html
 See C. Trigoso, “Four Perspectives of Risk and Trust in Cloud Computing”, 2011 http://carlos-trigoso.com/public/four-perspectives-on-risk-and-trust/
 C. Trigoso, “What Security Shall Be”, 2011 http://carlos-trigoso.com/2011/04/04/what-security-shall-be/
 H. Borko, “Information science: What is it?”, 1968
 C. Shannon, “The Mathematical Theory of Communication”, 1964
 E. v. Weizsäcker, “Offene Systeme I – Beiträge zur Zeitstruktur von Information, Entropie und Evolution”, 1974
 J. Peil, “Einige Bemerkungen zu Problemen der Anwendung des Informationsbegriffs in der Biologie”, 1971, 2007
 N. Wiener, “Cybernetics”, 1968
 J. Barwise, J. Seligman, “Information Flow – The Logic of Distributed Systems”, 1997
 Gregory Bateson, “Steps to an Ecology of Mind”, 1972
 An autopoietic system is defined as a system constituted by processes interlaced in the form of a network of production of components, which realise the network that produces them and constitutes it as a unity. An autopoietic system is a closed system, and it exhibits the property of “self-reference” according to Maturana and Varela, i.e. the ability to operate as a self-organising system.
 H. Maturana, F. Varela, “Autopoiesis and Cognition: the Realization of the Living”, 1973
 H. Maturana, F. Varela, “Autopoiesis and Cognition: the Realization of the Living”, 1973
 N. Luhmann, “Organization”, In Kupper and Ortmann (Ed), Mikropolitik, 1988
 Stephen C. Pepper, American philosopher, 1891-1972
 S. Pepper, “World Hypotheses – A Study in Evidence”, 1942
 Bill J. Harrell, “Five World Hypotheses”, http://people.sunyit.edu/~harrell/Pepper/pep_wh-select01.htm
 See: D.J. Landoll, R. J. Williams, “An Enterprise Assurance Framework” – Arca Systems, Inc.
 K. Zhang, “A Theory For System Security”, Cambridge University, 1997
 J.R. Williams, G.F. Jelen, “A Framework For Reasoning About Assurance”, Arca Systems, Inc., 1998. See also: J.R. Williams, G.F. Jelen, “A Practical Approach To Improving And Communicating Assurance”, Arca Systems, Inc. – http://www.aspectsecurity.com/documents/Arguing.pdf