2. Freeing Identity From “Risk Avoidance”

This is a draft of the book with the same title now available through Amazon:

 http://www.amazon.co.uk/Fundamental-Conceptions-Information-Identity-Logistics/dp/1484990021/ref=sr_1_1?s=books&ie=UTF8&qid=1369520084&sr=1-1

A Key Issue

A key issue for all organisations is the result or “value” gained from their investments[i]. This is especially relevant considering the large proportion of capital investments that go each year to information technologies.[ii]  In the 1980s researchers found evidence that information technologies offered competitive advantages to organisations[iii], but some authors argued that IT benefits were difficult to measure, and that “Despite years of impressive technological improvements and investments there is not yet any evidence that information technology is improving productivity or any other measure of business performance.”[iv]

According to Hannu Salmela[v], studies on the relationship between IT investments and business performance could not demonstrate a positive correlation. More recent studies show the same disconnect between IT investment and results[vi]. These reports coincide with what we, as IT and Security professionals, observe in our work. We see low correlation between investment and organisational performance and, in the Security space in particular, a tendency to constrain Security investment because organisations cannot find that correlation and prefer to mitigate or accept risks instead of addressing the problems in a comprehensive fashion.

There are some major differences between investments in IT security and ordinary investments. The main difference is that it is hard to determine the economic utility of Security[vii]. This is caused by the nature of IT security measures. Investing in IT security processes or products usually will not provide direct returns in the sense of a measurable positive cash flow. If we take the conventional approach to Security, their main utility lies instead in reducing risks.

The second problem is that it is hard to determine the costs of IT in general and of Security solutions in particular. There are direct costs and indirect costs of any Security programme, and there are visible and invisible costs of the systems and processes that are replaced (or should be replaced) by any proposed Security solutions. It is equally hard to understand the financial justification to replace current processes.

The Security industry adopted early on the notion of Annual Loss Expectancy, to try to produce a financial argument for Security investments. ALE measures loss expectancy in terms of “single loss expectancy” multiplied by the “annual rate of occurrence” of a negative security event. This measure, still present in many Security manuals, is nevertheless difficult to use, and its output is not useful. This is because all ALE metrics need estimations for severity and probability of “loss events.” These, in turn, are not based on event frequencies but on expert estimates and recommendations. A key cause here is that historical data of security events is very rarely published and is scarce even within organisations. In other words, Security based on risk estimations is hard because it is not objective.
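The ALE arithmetic just described can be sketched in a few lines. The figures below are invented for illustration; in practice both inputs come from expert estimates, which is exactly the weakness discussed above:

```python
# Annual Loss Expectancy as commonly stated in Security manuals:
#   ALE = SLE (single loss expectancy) * ARO (annual rate of occurrence).
# Both inputs are estimates, not measured event frequencies.

def annual_loss_expectancy(sle: float, aro: float) -> float:
    """Expected yearly loss for one threat scenario."""
    return sle * aro

# Invented figures: a breach costing 50,000 expected once every
# four years (ARO = 0.25).
ale = annual_loss_expectancy(sle=50_000, aro=0.25)
print(ale)  # 12500.0
```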

Economic Concepts

As Security experts, we see that subjective approaches are unreliable, yet we often apply them to fill a “gap” in our methods. In Security professional circles there is frequent criticism of the “lack of understanding” of “security measures” we find among business teams. This is erroneously classified as a “cultural” problem, as if non-security managers and specialists were by education or convention unable to understand our traditional approach.

Security professionals tend to distance themselves from the questionable and subjective methods around risk assessment by saying that, after all, it is the client who identifies the risks and “we only provide the method.” I think that this is precisely the problem. Perception of risk differs at different organisational levels, and risk levels change depending on organisational changes and the actions of the different participants.

In fact, to a large extent, Security professionals are frustrated by the uncertain results of Security projects, but these have very little to do with risk calculations, and depend on correlations of forces, collaboration and contraposition of different levels and parts of the organisation. Our calculations may be correct, but the perceptions of risk, and the weights assigned to Security events are not objective at all.

Conventional risk analysis only shows how much the customer may “lose” under attack if all the conditions are known regarding the materialised security threat. This approach fails for a simple reason: any risk assessment changes the probabilities of the assumed Security events. Are Security experts considering the results of their own actions and those of their clients? Instead of continuing with the traditional approach, our work should assess and show how the business must change to be secure and compliant.

Economic and organisational ideas should be used systematically in the Security space. This is particularly difficult when working in the Public Sector, because economic concepts are absent from Security investment decisions: the notion of a return on investment becomes formal and loses meaning. This is a serious difficulty, but even in the Private Sector, most Security programmes are limited to the prevention of certain outcomes, and organisations do not calculate incident responses or remedial actions in economic terms. It becomes impossible to define a cost-benefit analysis.

Another challenge is how to determine what a security investment is due to the large variety of sub-areas and disciplines in our domains. For example, are Identity Management projects “Security investments”? For many organisations, Identity Management is not an area in Security but an operational or delivery aspect of IT in general.

This should lead us to open the discussion, and study Security and Identity Management in relation to organisational development and maturity. Technical experts tend to agree that it is an error to focus on computing technologies, knowing that risk management is not a technological problem; but surprisingly it is not equally easy to agree that a risk management focus, even one rooted in business considerations, is enough to develop a Security strategy. In this work I show how we can move away from a techno-centric direction and from a risk-based Security conception.

Donn Parker made a fundamental contribution in this new direction in several articles and books[viii].

Parker writes: “There are too many interrelated unknown and known variables, with unknown values. They all change in unknown ways over time, depending on unknown future circumstances such as system and business changes, labour disputes, social and political changes, unknown enemies’ failures and successes, and enemy and defender frailties and irrationalities. It is generally agreed there is inadequate loss experience data to support quantitative risk assessment applied to a specific instance, because of victims’ needs for confidentiality. In addition, humans are notoriously bad at qualitative risk assessment. Finally, there is no proof of effectiveness or reported experience of performing security risk assessments cited in the security literature, because they are proprietary and confidential.”

I find that these words by Parker, a recognised world authority in Information Security, should be motivation enough to explore to what extent the prevalent techno-centric, risk-based methods are either fostering or halting better security practices. Let me underline that, like Parker, I do not think that we should abandon risk assessment methods, but that we must integrate them into a wider framework anchored in other Security perspectives.[ix]

Decision Making

Martin Geddes described some years ago a pervasive problem in business decision making[x]: “I bet you’ve seen the following happen. There was an annual budget round, or some other big resource allocation decision. A guru from finance or strategic planning was tasked with producing the World’s Most Complex Spreadsheet. Filled with tabs and links, it lists the options and the measurement criteria. Some criteria are hard numbers; some are softer issues that reasonable people could differ about. Each option is given a score for each criterion. That score could be its rank (inverted, to make the best one score highest). Or could just be a simple scale like 1-10. Or it could be some formulaic derivative of something like expected revenue. At the end, they all get combined by the formula from Hell (or just SUM), and a summary splurged into a PowerPoint deck.

“Then there is a Big Meeting. A number of exceedingly well-paid executives are called in to bless the result. But they do not like it. There is a big argument, and some ‘adjustments’ are made. Success in getting up the priority list is largely guided by force of personality and imaginative over-statement of expected project benefits. A few weeks later the senior VP of finance comes back from vacation, decides she does not like the outcome, and strikes out a project or two. In the meantime, a product development team keeps working on a project that did not make the cut, because the company has already invested so much in it, and you cannot kill the thing now. It is just work in progress, you know. The organization flounders in meeting its mission.”

We in the Security arena have seen this happening so many times! To change this situation, Geddes suggests using the Analytic Hierarchy Process (AHP), first proposed by Thomas L. Saaty[xi]. A NIST paper discussing AHP for Security Investment shows the application of this method to Security assessments.[xii]  The authors, still within a traditional approach, say that “the goal, Risk-Based Information System Security, appears at the top tier of the hierarchy,” while “the next lower level lists factors contributing to the goal, such as Confidentiality, Integrity, and Availability. These in turn serve as criteria for selecting among investment alternatives A, B, and C.”

The essence of AHP is the mathematical method used to aggregate numeric factors across the hierarchy. The authors correctly add: “This feature of being able to arrange the elements of a complex selection process into partial hierarchies and sub-hierarchies makes the AHP flexible and allows it to be tailored to an enterprise’s level of investment management maturity.” Indeed, this is a good method leading to simultaneous evaluation of factors in a particular situation. We could easily adopt AHP for our normal, risk-based assessments, but this would only add a very sophisticated method on top of a weak foundation. How many reasons and influences should we select for the calculation?

As the NIST paper itself points out: “A low level of investment management maturity will require only a few criteria and sub-criteria to be ordered and evaluated with respect to an overall goal, whereas at a high level of investment management maturity there may be multiple levels, criteria, and sub-criteria.” How can we fit into this model the extraordinary complexity of investment management in global corporations?
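To make the mechanics concrete, here is a minimal AHP sketch. The pairwise judgments and alternative scores are invented for illustration, and the priority vector is approximated by the common geometric-mean method rather than the full eigenvector calculation:

```python
import math

# Pairwise comparison of three criteria (Confidentiality, Integrity,
# Availability) on Saaty's 1-9 scale: entry [i][j] says how much more
# important criterion i is judged to be than criterion j.
pairwise = [
    [1.0, 3.0, 5.0],   # Confidentiality vs (C, I, A)
    [1/3, 1.0, 3.0],   # Integrity
    [1/5, 1/3, 1.0],   # Availability
]

def ahp_weights(matrix):
    """Approximate the AHP priority vector by the geometric-mean method."""
    gmeans = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

weights = ahp_weights(pairwise)

# Local scores of investment alternatives A, B, C under each criterion
# (one entry per criterion, each criterion's scores summing to 1).
scores = {
    "A": [0.5, 0.2, 0.3],
    "B": [0.3, 0.5, 0.3],
    "C": [0.2, 0.3, 0.4],
}

# Aggregate across the hierarchy: weighted sum of local scores.
overall = {alt: sum(w * s for w, s in zip(weights, col))
           for alt, col in scores.items()}
best = max(overall, key=overall.get)
print(best, overall)
```

The aggregation step is where the “formula from Hell” of Geddes’s spreadsheet is replaced by an explicit, inspectable calculation; the subjectivity has not disappeared, it has simply moved into the pairwise judgments.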

So, let us keep in mind the direction suggested by Geddes and the value of the AHP method, but we need to develop a subdivision of the Security concerns based on principle, not on consensus or “committees of experts.” We may or may not use the AHP method itself, but its systemic approach is exemplary.

As I will show in later chapters of this book, my subdivision of the Security concerns into four paradigms or perspectives[xiii]  is a principled approach, by which the interaction of organisational and economic forces can be modelled and managed. The four Security perspectives are forces operating in any group or organisation, and must be studied and integrated, articulated and directed to ensure the completeness of our decision.

How are Decisions Made?

These thoughts lead to the realisation that the real problem is not how we demonstrate risk to the organisational actors, but how Security investment decisions are made, with or without risk assessments, or in spite of these. It is also vital to understand how risk assessments change the landscape that is being analysed. Ideally, our investigations should lead us to understand how Security is determined by the organisation itself. It is a guiding theory of this investigation that the most important factor in any Security programme is the organisation, i.e. its structure and dynamics.

Looking at this in an empirical way, the number of variables seems difficult to manage. The main source of uncertainty is organisational complexity itself, so we need to find a vantage point or a principle that reduces uncertainty and allows us to aggregate information.

There is much academic research on security investment decisions. In particular I note the work by Lawrence Gordon and Martin Loeb[xiv], whose investment model paradoxically suggests that the investment required to protect an asset does not necessarily increase with its vulnerability. Gordon and Loeb infer that protecting highly vulnerable information can be inordinately expensive, and that a firm may be acting rationally by concentrating investment on the protection of “mid-range vulnerabilities.”
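The non-monotonic result can be reproduced numerically. The sketch below assumes the Gordon-Loeb “class II” breach-probability function S(z, v) = v**(a·z + 1); the loss and parameter values are invented for illustration:

```python
def optimal_investment(v, loss=1000.0, a=0.01, step=0.1, z_max=500.0):
    """Grid-search the security investment z that maximises the
    expected net benefit [v - S(z, v)] * loss - z, where
    S(z, v) = v**(a*z + 1) is the post-investment breach probability
    and v is the vulnerability (prior breach probability)."""
    best_z, best_net = 0.0, 0.0
    z = 0.0
    while z <= z_max:
        s = v ** (a * z + 1)       # breach probability after investing z
        net = (v - s) * loss - z   # expected benefit minus cost
        if net > best_net:
            best_z, best_net = z, net
        z += step
    return best_z

# Optimal spend is not monotonic in vulnerability v: under these
# assumptions the most vulnerable case rationally receives nothing,
# because investment barely moves its breach probability.
for v in (0.2, 0.5, 0.8, 0.99):
    print(v, optimal_investment(v))
```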

Beyond the puzzling appearance of this advice, we can recognise many of our own experiences, where the approach varies depending on the characteristics of the organisation and on how it classifies its “mid-range vulnerabilities.” In this sense the Gordon-Loeb approach is meaningful, but it does not solve the problem and may serve as a justification for the current state of affairs in our profession.

If we take the experience of the Identity Management sub-discipline, we see that it has been positioned and “sold” as a tool, as a mechanism to gain savings in the user management cycle (by implementing password management for example).

Only a few years ago a new trend arose in our specialty, promoting role-based access control instead of service desk functions (like password management). This was a good trend and it showed the discipline was growing towards an extended strategy relying less on direct savings and more on organisational transformation.

The “service desk” strategy in some cases led to quick benefits, such as reductions in incident calls and in the cost of support personnel; but this focus minimises conscious, systemic business change. The result was that Identity and Access Management remained one more “tool” in the IT department, instead of becoming a pillar of enterprise security.

It is difficult to measure the economic contribution of Identity Management when it is constrained to service desk concerns. The business process engineering effort is not included in the calculations, and therefore the benefits obtained in service desk savings are often isolated, overvalued, and disconnected from other financial fundamentals.

For many experts, such a reduction of the Security and Identity sphere was comfortable, because it allowed them to avoid issues of organisational change. Sadly this also confined our discipline to the outskirts of Security, while this in turn remained on the margins of IT and enterprise-level thinking.

This type of adaptation was always associated with a refusal to address the deeper questions that lie at the foundations of our profession. To progress from there, complex processes need to be addressed and new ideas are needed, in particular around the concepts of information and the “value of information.” As we have seen, it is widely accepted that the value of Security investments is difficult to measure. There is no consensus, though, around the “value of information”; but how could we measure the value saved or contributed by Security if we do not have a clear concept of the object or subject we are protecting in the first place? What are we protecting?

There is a wide array of risk-taking propensities, from entrepreneurial, capitalist, risk-taking stances to public sector, risk-avoiding positions. In all cases, the Security expert needs the concept of risk, both for those clients leaning towards risk-taking and for those taking a risk-avoidance stance. In other words, to be complete, risk management has to be a language of both risk-taking and risk-avoiding perspectives. Contrary to this, the dominant risk analysis disciplines do not reflect risk-taking propensities or business-like risk analysis, but quite a different stance, in which they try to ground or justify security investment seeking only the avoidance of losses, and not business advantages in the market.

I see a series of results arising from this slant: risk-based justifications are not positioning Security investment properly for normal risk-taking business operators and leaders. While these justifications are more acceptable to risk-avoiders in management and technical positions, Security investments end up purely as expenses and therefore have to compete for resources with alternative IT expense items. The result is under-investment in Security across the private and public sectors.

What we find in the market is that, on the whole, Security managers are frequently making sub-optimal or even wrong decisions. Following traditional schemas, the principle of “confidentiality” comes first, followed by “integrity” and “availability”. This reflects the well-known Security “triad” or “CIA”[xv].

Years ago, Donn Parker proposed a new model, the “Parkerian Hexad”[xvi], with the following classification:

  • Confidentiality
  • Possession or Control
  • Integrity
  • Authenticity
  • Availability
  • Utility


This expansion of the Security sub-areas is the right way to progress away from the initial limitations of our profession. Parker carefully distinguishes for example Confidentiality from Possession (Control), and Integrity from Authenticity. This leads to a better understanding of the real tasks of Security. Above all, we see that our goal is not only to “protect” certain objects, but also to ensure their validity (through Integrity and Utility, for example). The classical approach has no notion of Utility!

For Parker, “Utility” means usefulness, including in particular data format and quality, and he remarks that Utility should not be confused with Availability. Is our profession ready to assimilate this conceptual expansion? Many years have passed since the proposal was made, with very little progress, but I think that we are now in a different situation, and the old models will be surpassed.

Risk and Probability Theory

Historically, the notion of risk has usually been defined in terms of loss and uncertainty. In classical decision theory the word “risk” describes a situation where both the possible “states of nature” and the “probabilities” associated with these states are known. In more recent times, risk is associated with unknown probabilities, mixing the sense of loss with the idea of uncertainty.

Real risk is frequently defined as the combination of chance and negative consequences in the real world. In contrast, perceived risk is defined as the estimate of “real risk” made without a theoretical model of the world. This dichotomy shows a very old tendency in Western rationalism by which perception and reality are separated. On the one hand, reality is assumed to be external to perception; on the other, perception is taken as autonomous. This is the famous Gegenstand[xvii] relation examined by H. Dooyeweerd[xviii], where the object of knowledge is separated from the process of knowledge in Western abstract thinking.

Risk and the perception of risk are one and the same “problem” or, better said, there is no risk outside of perception. Following this, there is a direct and complete argument against risk-based Security, using probability theory and standard Bayesian analysis[xix].

We need to start from the intrinsic duality of probability calculus[xx]: on the one hand, probability appears as based on frequencies of observations, on the other, as a judgment of authority or opinion. This duality exists since the beginning of Probability theory, and has never been resolved. In current Security methods, the frequency of observations and the authority of opinion are not clearly separated but mixed and confused.

If we apply Probability Theory to risk-based Security, we can see that our profession adopted the language of risk only superficially and did not draw all the consequences of that move. In earlier sections we saw a tendency to speak about risk in terms of risk avoidance only, and now we can also understand why that is the case. A full adoption of Bayesian analysis would have led to a balanced view of risk and trust. A theory of trust is necessary to avoid the dead end of risk-based Security.

Now the goal is to demonstrate that while “risk management” is unilaterally defined around uncertainty and loss, “trust management” needs to be rooted in purpose and initiative. Instead of taking the existence of a trust boundary as an assumption, we start from the definition of the trust boundary (an act of risk-taking) as a precondition of a Security strategy.

Bayes’ Rule

I started these reflections by stating there is no “Security theory.” There are many presuppositions, conceptual frameworks and principles, but these are accepted uncritically or even by default, unconsciously. As Security practitioners we have consensus knowledge but no science. This allows the existence and negative influence of what I have called many times the “techno-centric” approach.

In our professional practice, the most important unacknowledged influence is that of “decision theory,” in particular decision theory as practiced by neoclassical economics. This school, developed in the past century, takes the existence of rational behaviour as the basis of economics. When considering decisions under uncertainty, an individual is called “rational” when he or she behaves as a Bayesian decision-maker, making choices according to three principles:

a) Defining uncertainty as a probability (if a fact is not known, the decision-maker relies on probabilistic beliefs)

b) Capturing information by updating “prior beliefs” according to Bayes’ rule

c) Following the “expected utility principle,” which states that the choice should maximise the weighted average of probabilities and utilities.
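These principles can be illustrated with a toy Security decision; all numbers below are invented:

```python
# (a) Uncertainty expressed as probabilistic beliefs over two states.
beliefs = {"breach": 0.2, "no_breach": 0.8}

# Utility of each action in each state: paying for a control,
# or bearing the full loss if a breach occurs.
utility = {
    "invest":     {"breach": -20, "no_breach": -10},
    "do_nothing": {"breach": -100, "no_breach": 0},
}

def expected_utility(action: str) -> float:
    # (c) Weighted average of utilities by the probability of each state.
    return sum(beliefs[s] * u for s, u in utility[action].items())

# The "rational" decision-maker picks the action with the highest
# expected utility.
choice = max(utility, key=expected_utility)
print(choice, {a: expected_utility(a) for a in utility})
```

Principle (b), updating the beliefs as new information arrives, would modify `beliefs` before the choice is made.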

The conflation of economic behaviour with “rationality” is due to Leonard Savage[xxi] and is called “subjective expected utility theory.” Savage’s goal was to reinforce the notion that to be “rational,” one had to be “economic,” in the sense of following economic rationality at an individual level.

I will not discuss here the problems of this approach or its validity (as this would require addressing economic theory in itself). My point here is only that while economic theory may have a usable abstraction of the economic man, depicted as a “rational individual,” this by no means justifies using this abstraction as a universal principle in other areas of knowledge, for example in organisation theory. In other words, I believe that organisational behaviour is only in part rational and/or individual. Even more importantly, individual rationality taken as the only principle of decision leads to a unilateral and destructive view of organisations of all types.

The rational-economic man is a successful abstraction in economic debate, if we judge success by the level of acceptance of neoclassical economics in academia and Government discourses since the 1980s. Nevertheless, the debate is more than open regarding the complete lack of success of this school in terms of economic development and income distribution in the countries that accepted this “rationality” as a political goal.

Now, this global success of neoclassical economics and the theory of the “rational man” also influenced Business Management theories and, indirectly, all professions around information technologies. On the one hand this influence has been positive, linking the IT specialties to wider cultural and academic debates, but, because neoclassical economics was accepted uncritically, its principles became intertwined with existing pre-conceptions and ideologies. The notions around subjective probability and Bayesian statistics were transmuted into justifications for risk-based Security practices.

Let us review quickly what Bayesianism is about[xxii]. For two events A and B, with probabilities p(A) and p(B), and assuming that p(B) is not equal to zero, the Bayesian conditional probability of A given B is defined by the formula p(A|B) = p(A & B) / p(B).

A & B here denotes the case where both A and B occur, and p(A & B) denotes that joint event’s probability, while p(A|B) is the probability that event A will occur given the fact that B has already occurred.

Bayes’ rule is a way of converting the probability p(A|B) into the probability p(B|A), via p(B|A) = p(A|B) p(B) / p(A). This means transforming the probability that A occurs given that B has occurred into the probability that B occurs when A is given.
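A small numeric illustration of the rule, with invented figures from a Security setting (B = “host is compromised”, A = “monitoring raises an alert”):

```python
p_B = 0.01             # prior: 1% of hosts are compromised
p_A_given_B = 0.9      # alert rate on compromised hosts
p_A_given_notB = 0.05  # false-positive rate on clean hosts

# Total probability of an alert (law of total probability).
p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)

# Bayes' rule: posterior probability of compromise given an alert.
p_B_given_A = p_A_given_B * p_B / p_A
print(round(p_B_given_A, 3))  # 0.154
```

Note that even with a 90% detection rate the posterior is only about 15%, because the low prior dominates; this base-rate effect is routinely ignored when “risk factors” are quoted without their context.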

Bayesian theory is very well established and is a major tool of scientific research and decision-making, but its principles are not used uniformly across all disciplines, and in particular they are misused in some areas of business management, including IT and Security. The confusion centres on how the concept of “objective probability” is used within research to build decision scenarios. In the Security space, this refers to the use of “risk factors” for decision-making.

A major error consists in the arbitrary transfer of catalogued “risk factors” (which often are not objective but consensual or expert-driven) to risk assessments in particular cases. Instead of applying a different probability model for each specific case, the risk factor weightings are carried over to concrete assessments. The problem is that in concrete cases, only a subjective analysis is possible, based on correlation of judgments and risk models within the situation.

What this means is the Bayesian principle is misapplied and misunderstood. This also means that we should call subjective what is subjective, and not pretend there are any objective threat frequencies or scientific risk models. The problem does not stop there, because expert models have some value after all. The problem actually begins when the supposed expert risk model or consensual threat landscape blocks the development of an internal and subjective (Bayesian) risk analysis.

Of all the methodologies that have appeared to elicit proper organisational security, the best in my opinion is Carnegie Mellon’s OCTAVE (http://www.cert.org/octave/). This is because of its emphasis on organisational debriefing and input, versus undue emphasis on given or assumed threat scenarios.

Risk analysis is always subjective even when we use an “objective” Bayesian method, i.e. when we use frequency data as prior probabilities. There is no past risk, because risk is derived from conscious action (decision). This means the decisions and perspectives of all the actors in a particular situation determine directly and indirectly the actual risk and trust landscape. A consequence of this is that we can see now the most important error of risk-based Security is double: First, it is not based on objective measurements, but only subjective or consensual threat catalogues. Second, when it should proceed to analyse and consolidate subjective probability (i.e. at the level of organisational management) it does not do so, because it has already substituted the “given,” expert-derived probabilities for real ones.

Security as Insurance

In an often-referenced paper, Kevin Soo Hoo[xxiii] addresses the problems of risk-based Security and indirectly points to the errors in probability theory. Donn Parker considers this paper “the most complete mathematical model of risk assessment methods ever developed”[xxiv].

In his work, Soo Hoo, while trying to overcome the issues created by risk-based security, proposes risk-based insurance for information assets. Parker probably did not think much of this change in emphasis, but I believe that it is very useful for Security practitioners still attached to the old methods.[xxv]

Soo Hoo writes: “In retrospect, three fatal flaws doomed the common framework and its ALE-based brethren to failure. The deficiencies are as much a reflection of the inventors’ biases as they are an illustration of the challenges that face any attempt to model computer security risks. First, the methodology’s scenario-generation mechanism created an assessment task of infeasible proportions. In any mathematical or computer modelling endeavour, a balance must be struck between model simplicity and faithful replication of the modelled system. If the model errs on the side of simplicity, then it may not be sufficiently accurate to be of any use. If, on the other hand, it errs on the side of faithful replication, then its implementation may be so overwhelming as to render it impracticable. This tension pervades every modelling effort. Unfortunately, the ALE-based methodologies tended to favour significantly greater detail than was efficiently feasible to describe.”[xxvi]

Contrary to this, Soo Hoo proposes a successive-recursive approach on the lines suggested by the US National Research Council: “an analytic-deliberative process . . . [whose] success depends critically on systematic analysis that is appropriate to the problem, responds to the needs of the interested and affected parties, and treats uncertainties of importance to the decision problem in a comprehensible way. Success also depends on deliberations that formulate the decision problem, guide analysis to improve decision participants’ understanding, seek the meaning of analytic findings and uncertainties, and improve the ability of interested and affected parties to participate effectively in the risk decision process.”[xxvii]

An example of the wrong approaches criticised by Soo Hoo is the Factor Analysis of Information Risk (FAIR)[xxviii]. This model is valued by some Security experts as a better alternative to traditional risk assessments, but I find it seriously affected by wrong notions of probability and risk.

As indicated before, traditional risk assessment requires determining the likelihood of future harm involving specific information to be protected. In the great majority of cases this determination cannot be made internally to the organisation because there is insufficient loss experience in the specific circumstances being assessed, so the Security experts import some consensual model and substitute it for real Bayesian analysis. Risk assessment also would require estimations of future loss from each type of incident, but the value of the information involved is often not material and hard to determine. Frequency data and loss sizes must be combined in a logical way to get any results, but the quality of the data is poor.[xxix]

As Parker indicates, the last step mentioned above requires “selecting controls,” but risk assessments only recognise “how much could be lost,” not what kind of controls are necessary and much less what kind of technologies are optimal. Any controls have to be selected by a different set of experts in many cases, or through different methods and additional risk assessments to see if the controls work at all. More critically, the traditional approach does not take into account the change in the risk and trust landscape because of the risk assessment itself. This reveals the lack of proper probability theory concepts and mixes up objective with subjective Bayesian measures.

The FAIR model is also trapped in a series of confusions, for example when the authors describe risk as “the probable frequency and probable magnitude of future loss.” Here risk becomes a probability of frequency multiplied by a probability of magnitude of loss; i.e. a derived probability or a probability of a probability. The authors insist that “risk is a probability issue” and that “risk has both a frequency and a magnitude component.” The problem with all of this is the implied nature of risk, where decision and action are completely lost. Risk becomes once more a matter of classification of expectations in some standardised framework.
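
The “probability of a probability” structure can be made concrete with a small simulation. This is my own sketch of the general frequency-times-magnitude pattern, not FAIR’s published procedure, and both distributions are assumptions: because frequency and magnitude are themselves drawn from subjectively chosen distributions, the output is a distribution over expected losses, driven entirely by the analyst’s beliefs rather than by any observed frequency:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# A frequency-times-magnitude calculation treats both the rate of loss
# events and their size as uncertain quantities. Sampling each from an
# assumed distribution makes the "probability of a probability"
# structure explicit.

N = 10_000
simulated_losses = []
for _ in range(N):
    # Subjective belief about yearly event frequency (events per year)
    frequency = random.triangular(low=0.05, high=2.0, mode=0.3)
    # Subjective belief about loss magnitude per event
    magnitude = random.lognormvariate(mu=10.0, sigma=1.0)
    simulated_losses.append(frequency * magnitude)

simulated_losses.sort()
mean_loss = sum(simulated_losses) / N
p90 = simulated_losses[int(0.9 * N)]
print(f"mean annual loss ~ {mean_loss:,.0f}; 90th percentile ~ {p90:,.0f}")
# Change the assumed distributions and the "risk" changes with them:
# nothing in this calculation is an objective frequency.
```

The point of the sketch is the last comment: the output looks quantitative, but it is a restatement of the inputs.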

The confusion becomes most evident when the FAIR proponents write that their risk definition “applies equally well regardless of whether we are talking about investment, market, credit, legal, insurance or any of the other risk domains including information risk,” and that “the fundamental nature of risk is universal, regardless of context.” What does “regardless of context” mean? What remains of Bayesian analysis here if the context becomes irrelevant? Are we talking about objective probabilities, given frequencies of events that are already classified in the FAIR framework? Certainly not, as the FAIR model is also a mechanism to collate subjective valuations. Do loss event probabilities and loss magnitudes pre-exist risk assessments? I believe the FAIR team have not asked themselves this question, or the more important one: do threat and risk landscapes change with risk assessments?

I believe that Soo Hoo tried to shift the debate by abandoning risk-based security and suggesting an alternative way to justify IT Security initiatives. He settled on a type of “insurance,” based on market pricing of risk coverage. Despite the good argumentation in Soo Hoo’s paper, this is only a partial answer to the problems we are discussing. Security insurance may be a complement to, but not a substitute for, Security strategy and policy. On the positive side, Soo Hoo’s paper points to the essential conflict between an inherited “expert” threat model and an internally developed model based on “the needs of the affected parties.” This is also my aim, through the application of the Security Perspectives model; i.e. the counterposed and correlated Security paradigms that have to be elicited and grasped through an analytic-deliberative process.

Luhmann on Trust

The whole direction of risk-based security is flawed in that it ignores both well-established probability theory and basic principles of organisational analysis. It is essential to bring into this discussion the idea that risk needs a decision-maker, and that risk implies trust.

Niklas Luhmann, the German sociologist, studied these problems and arrived at very valuable ideas that should be part of the Security discipline. He wrote “[Trust] depends not on inherent danger but on risk. Risks, however, emerge only as a component of decision and action. They do not exist by themselves. If you refrain from action you run no risk. It is a purely internal calculation of external conditions, which creates risk. Although it may be obvious that it is worthwhile, or even unavoidable, to embark on a risky course – seeing a doctor, for instance, instead of suffering alone – it nevertheless remains one’s own choice, or so it seems if a situation is defined as a situation of trust.

“In other words, trust is based on a circular relation between risk and action, both being complementary requirements. Action defines itself in relation to a particular risk as external (future) possibility, although risk at the same time is inherent in action and exists only if the actor chooses to incur the chance of unfortunate consequences and to trust. Risk is at once in and out of action: it is a way action refers to itself, a paradoxical way of conceiving action, and it may be appropriate to say that just as symbols represent a re-entry of the difference between familiar and unfamiliar into the familiar, so too risk represents a re-entry of the difference between controllable and uncontrollable into the controllable.

“Whether one places trust in future events, the perception, and evaluation of risk is a highly subjective matter. It differentiates people and promotes a different type of risk-seeking or risk-avoiding, trusting or distrusting, individuality. […]”[xxx]

I will take this line of thought further when correlating risk and trust in our practice, and the four perspectives of Security.

While FAIR and other frameworks and ontologies strive to “approach” the way economics understands risk, they fail due to uncritical and poor adoption of probability theory. The objective is confused with the subjective, and risk managers end up blocking real risk and trust modelling.

A correct understanding of risk implies that, in both the private and public sectors, the decision-maker, while defining a trust boundary, simultaneously takes a risk and accepts a risk level. Risk is neither purely objective nor purely subjective, but objective and subjective at the same time. Decisions and actions, for example investment decisions, entail uncertainty about their outcomes and change the risk landscape from the moment of their inception. Therefore, the Security professional needs to grasp the business model of the organisation (the trust boundary) and the associated risk involved in it, overcoming the slant towards “protection” of assets. Valuable informational assets do not exist outside of the definition of the business model and the risk-trust correlation that arises from it.

In this way Security analysis becomes a proper micro-economic discipline and finally “aligns” with business.

Identity Management Beyond the Standard Approach

Identity Management, as a discipline or domain within IT Security, is also in need of “alignment.” To achieve this, my proposal is that we put the principles of Identity Data Management and Identity Data Ownership at the centre of attention. Contrary to appearances and technological trends, Identity Management is essentially data management and not a “tool” in operational security. A correct understanding will lead to the application of both economic and industry standards in the sphere of information management.

Among the Security disciplines, Identity Management is the most affected by organisational factors, and the one that impacts the most organisational processes and structures. These ideas run up against a major obstacle, which I call the traditional or “standard” approach to Security.

The starting point of the “standard approach” is identifying the enterprise “informational assets.” Later, these assets are assessed to estimate their “value” and the potential threats they are exposed to. This approach has its historical origins in the protection of business data repositories and central computing facilities and networks. The standard approach then proceeds to determine the level of risk, which is positioned as a “quantitative measure.” In the standard approach, there is no other way to address security and, thus, no other way to propose, design and operate security services.

My thesis is that Identity Management cannot be fruitfully approached in this way. In general, less than a quarter of the drivers and requirements for Identity Management can be associated with the notion of informational risk or even with informational “assets.” On the other hand, only a small part of investment requirements and decisions in this area can be determined by risk calculations.[xxxi] To move beyond these limitations, Identity Management needs to gain a balanced focus encompassing four areas: Direction, Selection, Protection, and Verification.[xxxii]

A similar correlation of perspectives could be applied to all security disciplines and not only to Identity Management, but it is nevertheless the case that this domain is more negatively affected by the “standard approach” than any other Security discipline. All Security disciplines should be rooted in the “Circle of Trust” (as defined in my previous work[xxxiii]): Trust is first defined, then it is established, then it is enforced, then–finally–it is verified. The circle of trust can thus be readily mapped to the areas of Direction, Selection, Protection, and Verification, in that order.
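
The ordered mapping just described can be summarised in a small sketch (the phase names are mine, following the sequence given in the text):

```python
# The Circle of Trust mapped to the four Security perspectives, in the
# order given in the text: trust is first defined, then established,
# then enforced, and finally verified.
CIRCLE_OF_TRUST = [
    ("define trust",    "Direction"),     # governance, policy, business model
    ("establish trust", "Selection"),     # identity, roles, authorisation
    ("enforce trust",   "Protection"),    # network, platform, application security
    ("verify trust",    "Verification"),  # compliance, audit, assurance
]

for phase, perspective in CIRCLE_OF_TRUST:
    print(f"{phase:>16} -> {perspective}")
```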

In the traditional or standard approach, Security is overwhelmingly associated with the perspective of Protection and secondarily with Verification disciplines, so we need to work towards an integrated view comprising four fundamental and complementary perspectives:

  • It is essential to understand there is no “security” without Direction (Governance), especially not without the definition of what the organisation wants to preserve as a level and boundary of trust. The organisation’s policy and ownership structure comes first, and the definition of what constitutes a trusted environment is a precondition for all the other perspectives.
  • The disciplines of Protection, mostly centred on network, platform and application security, have historically been the “home” of our profession, but a now well-established trend to go beyond perimeter and zone protection is changing this.[xxxiv]
  • In the past few years, accelerated by increasing regulatory pressure, the area of Verification (Compliance) has risen to second place in importance after Protection. It is evident there can be no Compliance without Protection and Direction.
  • Finally, and most recently, comes the still immature perspective of Selection. It has grown out of the Protection quadrant, where it long remained as simple “access controls,” and now includes role management, provisioning and authorisation workflows.

Overall, then, Identity Management sub-processes appear as natural components of the Security disciplines and their expansion across the enterprise. This also reflects increasing linkage with business and organisational concerns. While the initial layers of security solutions were mostly technical, the more recent ones are business-based and cannot exist without business process changes, as indicated in other parts of this work. All of this has important consequences for security architecture and investment decisions.

For many years, Security architecture and investment decisions have been dominated by a preference for the Protection disciplines. Building up the perimeter and guaranteeing security zones was generally equated with “securing the environment.” The Protection disciplines still form, in quantitative terms, the strongest area in any organisation today. This period has left a mark on the decision process: decisions are led by a preference for risk-based or threat-vulnerability analyses.

The situation has evolved with emerging Verification and Compliance concerns. But this has not changed the fundamental idea of “asset protection” and “perimeter security.”

None of the above denies the problems of Compliance and vulnerability to external or internal attack, but I think it can be demonstrated that the Protection and risk-based approach effectively ignores important parts of business economics, precisely because of its focus on a limited idea of risk (analysed in the previous section). On closer analysis, the traditional approach reveals its lack of a notion of business growth. Whereas in the business world investment is fundamentally made under the combined notions of investment risk and opportunity, in the Information Security world, and in particular in Identity Management, we are still working under the static and defensive notion of asset protection.

So, in conclusion, to present the whole case of Security, and of Identity Management in particular, it is essential to include those opportunities and benefits that can be derived from transformation and process efficiencies and are complementary to the other areas of security. For this reason I will explain in later sections how to overcome the standard approach by furthering the ideas of Identity Data Management and Identity Data Ownership.

Trust and Respect

Hans Wierenga recently published in SOA Magazine (Issue XLII: August 2010) a brilliant article[xxxv] analysing the predicament of the Security disciplines. Wierenga writes:

 “Unfortunately, the current information security vocabulary – in particular, as embodied in standards such as the ISO 27000 family of standards, CRAMM and COBIT – is structurally and fundamentally unsuitable for expressing the information security requirements of the 21st century. The key terms of this vocabulary are confidentiality, integrity, and availability, better known under the acronym CIA. As we shall show in this article, there are many, many goals which are not adequately covered by these terms, nevertheless must be achieved in order for an organisation to have good information security in the Internet age.”

 “However, the vocabulary is not the only problem with CIA: the way that it is applied is also inadequate. CIA is applied to the individual information assets of organisations, with little regard for the collective impact these assets have on the experiences of customers, suppliers, and employees. But it is this collective impact that determines the business value of information security. In other words, the security consultancy industry standards do not just employ the wrong words, but they also apply them to the wrong things. The CIA paradigm entirely ignores the fact the whole is more than the sum of the parts, blithely assuming that if each individual information system is secure the whole is too. This way of thinking is hardwired into the standard approach of the information security consultancy industry, which involves making an inventory of the information systems and then working out how to make each of them secure.”

In previous sections of this work I classified the standard Information Security approach as techno-centric or mechanistic, and explained how it is linked to the idea that information is an object that needs “protection.” In later sections I will explain how the machine metaphor leads to an idea of information as an object or physical substance. Wierenga does not employ a metaphor analysis or world-view approach, but he clearly sees the problems with the standard thinking arise from a specific ideology:

 “If all the money ever invested in implementing CIA [confidentiality, integrity, availability] was one giant waste, it wouldn’t matter because there is no way to tell. We may know the result of this investment, but not what the result would have been without it. Using words that do not adequately express the goals we wish to achieve, applying an approach that considers only the parts but not the whole, and not measuring how effective you are is a recipe for ineffective solutions. That need not be a problem if the whole point of the exercise is to enable those responsible to claim they took the best advice and did everything they could, but not everybody can afford to take such a position. In this paper we shall discuss how the conventional wisdom of the information security consultancy industry can be improved upon in order to deliver measurable business value. We shall introduce more fitting terms, which enable us to maximise this business value, and we shall introduce an approach that goes from the whole to the parts. The new terms – trust, respect and utility – enable us to focus on the business value of information security and lead to better information security solutions. We shall show how engendering trust, showing respect, and delivering utility change the information security landscape. We shall demonstrate how they improve on the CIA-goals and approach, and discuss whether it makes sense to incorporate the old wisdom into the new.”

On the basis of this approach, Wierenga proposes new guiding principles for Information Security (Trust, Respect and Utility) and further expands Trust with the principles “Create Transparency, Right Wrongs, Confront Reality, Clarify Expectations, Practice Accountability, and Keep Commitments.” Central to Wierenga’s thinking is the principle of Trust, which should be at the centre of Information Security. This is also essential to my approach to Information Security and Identity Management, as the reader will have seen in previous sections.

With similar aims to those of Donn Parker and Hans Wierenga, I am proposing a replacement for, not a variation of, the standard CIA “triad,” using the concepts of Direction, Selection, Protection, and Verification.[xxxvi] There is a potential mapping of this new model to the CIA triad, if we assume that Confidentiality is roughly reflected in the Selection perspective, Integrity may in some cases be represented in Verification, and Availability in Protection, but this mapping is not satisfactory. Above all, the CIA triad misses the notion of Direction (or Governance).

The perspective of Direction reflects all those factors that escape the techno-centric or traditional approach. In particular, it is important to note the disciplines of Direction encompass definition of trust, assurance, intent, decision, and business model. The four-sided model does not make claims of complete originality. As I stated in a previous section, it is based on work by John Arnold and other Security thinkers, especially Donn Parker[xxxvii] and Hans Wierenga[xxxviii].

Wierenga understands that a deep change in Information Security needs a new vision:  “A new approach to information security is hardly possible without a new way of looking at information systems. In this paper we shall apply the service-oriented architecture paradigm for that purpose. The paradigm describes all interactions in terms of services, in which a requestor asks an agent for something to be done, and the agent ensures that it gets done and delivers a response to the requestor. This way of thinking can be applied at a business level, to describe interactions between organisations, at a functional level, to describe how the activities of which business processes are comprised interact, and at the level of information systems, in order to describe how systems and parts of systems interact. Applying it at all levels enables an organisation to make the connection between each and every part of its information processing and the business value that it delivers.”

Wierenga develops his work around the ideas and methods of Service Oriented Architecture (SOA), an effort that is rarely seen in the Security disciplines, which are often characterised by “point solutions” and remedial work. Thinking that SOA is irrelevant now, because of the Cloud or other perceived problems, would be seriously misinformed. I will show that the new period of Security in and for the Cloud constitutes the natural and logical progression of SOA at a global level.

 



[i] Hannu Salmela, “Dynamic and emergent information systems strategy formulation and implementation”, 2002

[ii] Hannu Salmela, “Assessing the Business Consequences of Systems Risk”, 2003

[iii] Porter and Millar, 1985; Parsons, 1983; McFarlan, 1984

[iv] Brynjolfsson, 1993

[v] Salmela, 1997

[vi] Whiting, 1996; Pervan, 1998

[vii] Ross Anderson, “Why Information Security is Hard”, 2001 http://www.cl.cam.ac.uk/~rja14/Papers/econ.pdf

[viii] See especially: Donn B. Parker, “Risks of Risk-Based Security”, Communications of the ACM, March 2007

[ix] Jeff Lowder, attempting to defend risk-based security, effectively concedes Donn Parker’s point: “Parker’s third supporting argument may be categorised as a “lack of evidence” argument. According to Parker, there is no study that demonstrates that security risk management actually works. In his words, “No study has ever been published to demonstrate the validity of information security risk assessment, measurement and control based on real experience”. I agree with Parker’s implicit assumption that we should require evidence that information security RA works. And I suspect that Parker is probably correct that there has been no study published that demonstrates the validity of ISRA [information security risk analysis] specifically. By itself, however, that fact hardly calls into question the validity of the ISRA discipline. There also has been no empirical study published that demonstrates the invalidity of ISRA.” See: J. Lowder, ISSA Journal, December 2010. Lowder is on very shaky ground here, especially because of his assumption that the probability of attack can be estimated for a particular organisation based on the beliefs of the expert community even if the evidence is not known. This would mean that risk and risk reduction are “measurable” purely by relying on a subjective variant of Bayesian analysis.

[x] M. Geddes, “Expert Choices”, 2004, http://www.telepocalypse.net/archives/000224.html

[xi] T. L. Saaty, “The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation”, 1980

[xii] B. C. Lippiatt and S. K. Fuller, “An Analytical Approach to Cost-Effective, Risk-Based Budgeting for Federal Information System Security”, 2007, http://www.bfrl.nist.gov/oae/publications/nistirs/NISTIR_7385.pdf

[xiii] http://carlos-trigoso.com/public/security-perspectives-2012/

[xiv] Gordon, Lawrence A. and Martin P. Loeb, “Return on Information Security Investments: Myths vs. Reality,” Strategic Finance, November 2002

[xv] Information Security Handbook, 1997, http://www.cccure.org/Documents/HISM/ewtoc.html

[xvi] Donn B. Parker, “Fighting Computer Crime”, 1998

[xvii] “Object” in German

[xviii] See Glenn Friesen’s treatment of this problem here: http://www.members.shaw.ca/jgfriesen/Mainheadings/Epistemology1.html

[xix] For an introductory text see J. M. Bernardo’s “Reference Analysis”, http://www.uv.es/~bernardo/RefAna.pdf

[xx] Ian Hacking, “The Emergence of Probability”, 2006

[xxi] Leonard J. Savage, “The Foundations of Statistics”, 1954

[xxii] Probability calculus approach due to Thomas Bayes (1701–1761), English mathematician

[xxiii] Kevin Soo Hoo, “How Much Is Enough? A Risk-Management Approach to Computer Security”, 2000

[xxiv] Donn B. Parker, “Making The Case  For Replacing Risk Based Security”, The ISSA Journal, 2006

[xxv] Soo Hoo cites a 1996 study titled “Vulnerability Analysis and Assessment Program Results” to note that, out of a total of 38,000 security breach attempts, 24,700 or 65% were successful, 988 or 2.6% were detected, and only 267 or 0.7% were reported. Underreporting of security breaches is still pervasive across the world, but there are strong initiatives by governments to make breach reporting obligatory. In any case, the proportions noted here are still a good approximation according to my own experience. The lack of effective security breach statistics is a severe obstacle for the assumed “objectivity” of risk-based security. Soo Hoo writes: “In July 1996, the agency [Defense Information Systems Agency (DISA)] issued its one and only publicly distributed report on this ongoing program’s results. The report estimated that 96 percent of the successful break-ins were undetected, and, of the few that were detected, only 27 percent were reported […].”

[xxvi] Kevin Soo Hoo, “How Much Is Enough? A Risk-Management Approach to Computer Security”, 2000, page 7

[xxvii] National Research Council, Committee on Risk Characterization, “Understanding Risk: Informing Decisions in a Democratic Society”, National Academy Press, 1996

[xxviii] FAIR is a model proposed by the Risk Management Insight group. See: http://riskmanagementinsight.com/media/docs/FAIR_brag.pdf

[xxix] To address these problems D. Parker proposes a “due care approach” in “Fighting Computer Crime: A New Framework for Protecting Information”, 1998

[xxx] N. Luhmann, “Familiarity, Confidence, Trust: Problems and Alternatives”, 2000, http://onemvweb.com/sources/sources/familiarity_confidence_trust.pdf

[xxxi] Identity Management projects should be assessed in terms of indirect and direct financial benefits and costs, as well as indirect and direct operational risks and opportunities.

[xxxii] http://carlos-trigoso.com/public/four-perspectives-on-risk-and-trust/

[xxxiii] http://carlos-trigoso.com/2010/08/15/iam-in-the-circle-of-trust/

[xxxiv] The most advanced views on this can be found on the Jericho Forum website: https://collaboration.opengroup.org/jericho/publications.htm

[xxxv] Hans Wierenga, “Why the Information Security Consultancy Industry Needs a Major Overhaul”, 2010, http://www.soamag.com/I42/0810-1.php

[xxxvi] See: http://carlos-trigoso.com/mind-maps/security-perspectives/

[xxxvii] See also: http://www.computersecurityhandbook.com/author-parker.html