4. The Context: Identity Becomes Data

This is a draft of the book with the same title now available through Amazon:

 http://www.amazon.co.uk/Fundamental-Conceptions-Information-Identity-Logistics/dp/1484990021/ref=sr_1_1?s=books&ie=UTF8&qid=1369520084&sr=1-1

 

Data Centric Security

Security architects and practitioners need to develop an integrated data model that will enable end-to-end user management and access control. I proposed this approach in 2006 and advocated a data model that could become the basis for the next period in Security and Identity management[1].

The Security disciplines have changed substantially over recent decades, transforming markedly from their origins in the Government and military sectors. For example, we now picture IT and Security as a complex of “processes,” similar to other business areas[2]. Security definitions have moved through several stages, passing from the early association with perimeter protection, through security education and risk management, to the current emphasis on compliance and auditability.

On the negative side, these steps have not reached all domains of our professions. The more recent business “alignment” ideas are still restricted to the management consulting branches, while the majority of Security specialists still operate within the “perimeter protection” ethos. While we know where to position Security in the general picture of enterprise and public organisations, it is not clear what to do in specific areas. The newer sub-disciplines like Identity Management suffer from obscure definitions and stale ideas. From “outside” the Security disciplines, for example from the ITIL and COBIT practitioner domains, some help has come by linking security into the wider concerns of business and technology management[3]. These valuable contributions, while collating all the aspects and key indicators of Security, are nevertheless insufficient to solve other problems of principle and practice. These we need to address and solve ourselves.

As a result, many business leaders are aware of the relevance of a security process, but mid-level management and technical personnel are in the dark about how to connect business models and objectives with their own practice.

Enterprise and organisational security is not only about “information security” as we learned in our training and our schools. The time when information was “discovered” and hyped as a “business resource” is over, and Security and Identity management are not restricted to “confidentiality, integrity and availability” anymore. As I have suggested in previous sections, it is now essential to blend Security principles and actions into a systemic view of the organisation. A key step in that direction is to link the Security and Systems Management disciplines to achieve common economic goals of performance and efficiency.

That association is already reflected in much of the thinking around “Security as a service” and “Security in the Cloud,” both of which need a deeper integration of Security and Information Systems. To progress from there, we need to look now at the sub-domains within Security management, following the example of the ITIL disciplines, articulated around Service Support and Service Delivery and their respective areas. In the previous section I suggested that a good subdivision of Security would be that between Risk Management and Trust Management. This will also help bridge the gap between the business models and Security practices.

Here is how I think we should proceed.  The root of Security Management is a data structure. Although simple at the core, its ramifications are complex. Security Management (Risk and Trust Management as a whole) relies on a series of data associations or “mappings” as we would say in software development terms. A non-exhaustive list would be as follows:

  • Users to user names
  • Users to passwords
  • Users to tokens
  • Users to certificates
  • Users to accounts
  • Users to groups
  • Users to roles
  • Users to services
  • Users to processes (or agents)
  • Users to operating systems
  • Users to devices
  • Users to locations
  • Users to objects
  • Users to permissions
  • Users to audit events
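
Seen together, these mappings share one shape, a user associated with a set of items, and that common shape is what lets us treat them as a single data type. A minimal sketch in Python (the class and method names are my own illustration, not a vendor schema):

```python
from collections import defaultdict

class SecurityDataModel:
    """Illustrative store for the user-centred mappings listed above.

    Every mapping has the same shape (user -> set of associated items),
    which is why they can be handled as one data type.
    """

    MAPPING_KINDS = {
        "user_names", "passwords", "tokens", "certificates", "accounts",
        "groups", "roles", "services", "processes", "operating_systems",
        "devices", "locations", "objects", "permissions", "audit_events",
    }

    def __init__(self):
        # kind -> user -> set of associated items
        self._maps = {kind: defaultdict(set) for kind in self.MAPPING_KINDS}

    def associate(self, kind, user, item):
        """Record one association, e.g. a user to a role."""
        self._maps[kind][user].add(item)

    def lookup(self, kind, user):
        """Return all items of one kind associated with a user."""
        return frozenset(self._maps[kind][user])

model = SecurityDataModel()
model.associate("roles", "alice", "payments-approver")
model.associate("devices", "alice", "laptop-0042")
print(model.lookup("roles", "alice"))  # frozenset({'payments-approver'})
```

In practice each mapping would live in a different registry, directory or log; the point of the sketch is only that they can be addressed through one uniform interface.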

 

Together, these mappings form a single data type at the core of all security technologies. This is not exactly what technology vendors tend to call “data-centric security” (another trendy name for “data protection”), but a cross-domain approach to all Security domains: Application, Identity, Infrastructure, Networks, Compliance, and others. This fundamental step will allow us to see enterprise and organisational solutions beyond the traditional “point solutions.” My thesis is that only this approach answers the need for efficient and secure data management across technological and platform boundaries.

This is a first insight for the Security practitioner and the expert technologist: Security needs information integration. Without it, what we have is a collection of trendy but disparate products and “solutions” that need only a few years, if not months, to become obsolete. These isolated solutions may be more or less effective, but on the whole they do not increase the trustworthiness of IT services, and instead multiply risks and uncertainties at great cost to the organisation.

A data-centric approach would learn a lot from the standard disciplines of Information Architecture. Instead of following the path of the various versions of Security Information and Event Management (SIEM), still focused on the attack-defence paradigm, a proper data-centric Security would focus symmetrically on trust enablement and risk reduction. This means it would collect, aggregate, analyse and communicate events related to performance as well as detected Security violations. Aspects of this approach exist in various technologies in the market (some even in the Identity and Access Governance, or IAG, branch based around role-based access controls), but a complete data model is still unavailable.

What is a Security Policy?

Let’s look at the theoretical underpinnings of the proposed approach by using some academic results summarised by Kang Zhang at Cambridge University[4]. Zhang adopts as a starting point the notion that a “completely” secure system is one that does not allow the flow of information. Note that this model is strictly within the idea of Security as “protection” and “prohibition,” and also adopts the notion of information as “flow.” The important aspect, though, is that this starting point allows us to see how even in the standard theory (well represented by Zhang), a secure system is not one that has a well-defined and controlled Security policy, but one that does not have a security policy at all.

To understand this, we need to analyse the standard reasoning closely. If a completely secure system is one for which no “information flow” is allowed, then it follows that any “Security policy,” by opening up certain levels of access, will necessarily detract from the original completeness of protection. This is so because a Security policy can only specify which information exchanges are valid. A security policy (an access control matrix, for example), does not transform “insecurity” into “security,” but instead brings the Security level to a defined and/or accepted level. If the Security policy is well defined, then the factor of decrease will be a “known quantity,” or so the standard theory goes.
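
The access control matrix mentioned above illustrates the point directly: an empty matrix is the “completely secure” system, and every entry added opens a channel, bringing protection down to a defined, accepted level. A toy sketch (the subjects, objects and rights are invented):

```python
# Toy access control matrix: a policy as an explicit list of permitted flows.
# An empty matrix is the "completely secure" system: no flow is allowed.
matrix = {
    ("alice", "payroll-db"): {"read"},
    ("bob", "payroll-db"): {"read", "write"},
}

def is_allowed(subject, obj, right):
    """A policy can only ever *grant* exchanges; everything else is denied."""
    return right in matrix.get((subject, obj), set())

assert is_allowed("bob", "payroll-db", "write")
assert not is_allowed("alice", "payroll-db", "write")

# Each entry enlarges the set of permitted flows: it lowers protection
# to a defined level rather than transforming insecurity into security.
opened_channels = sum(len(rights) for rights in matrix.values())
print(opened_channels)  # 3
```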

I believe that an acceptable, consensual definition of Security policy is that it represents an instrument (a formal, approved document) which maps users or processes to other individuals or processes. I rely on the reader having seen my definition of Access Management in the previous chapter, and how all access is mediated access for reading and writing operations. Mapping users and entities in the system sets up its “security policy.” By this mapping, users are able to interact with information sources, and information sources are able to reach users (or their agents). These relations can be seen as exchanges of information inside and outside of the system.

Normally even small organisations have multiple user registries, and various classes of users. This is part of organisational life and cannot therefore be classified as a problem. The problem arises not from the variety of users or the data they consume and generate, but from the diversity and combination of mappings between users and their resources (for example applications, web services, e-mail servers, files, networks, and so on).  Again, by a systemic understanding of Access Management, we should see that a Security policy does not implement only “protection” measures, but actually channels that assign levels of trusted access to specific users and groups.

A good Security policy then, even starting from the standard theory, is one that reduces “protection” levels selectively according to the business model, i.e. the trust boundary of the organisation. We certainly need solutions that are able to tell us (and the enterprise and organisational leaders) when a user is accessing a specific resource and for what purpose, but the same emphasis should be put on knowing whether the trusted users are using the information channels that are intended for them. In other words, we are back to the symmetrical model of Risk and Trust management.

The standard SIEM technologies, still focused on protection, put us in a position where, by collecting and analysing all user access information, we can disallow invalid changes and roll back these changes after a security breach. Data-centric Security, though, should go further and promote, speed up, simplify and perform the information exchanges that are predicated by the organisational and business model. To understand this transformation it is essential to mark a distinction between the ideas of “information assurance” and “information risk.”

The Security practitioner and the technology expert rarely consider these terms philosophically. We have already analysed in previous chapters how weak our definitions of “risk” are. It is also clear that we do not have balanced definitions of “trust” and “risk,” but seriously confused semantics around these concepts. A good test of the lack of clarity of our conceptions is that we rarely find an expert or a Security practitioner who will vouch for his or her own solution; they always express themselves in cautious, hedging terminology. In other words, we as a profession are reluctant to speak about “assurance” in our metier.

Security experts like to speak about “secure” and “insecure time”[5]. Insecure time is the period where the Security policy has failed to stop intrusions or malicious use of “information.” This corresponds to a realistic school of thought, erroneously called the “pragmatic” school, which believes that a system is secure if its “secure time” is greater than its “insecure time.”[6] I cannot deny there is very sensible advice in this position, as it helps to move the specific technologies into the background and focus on a more holistic view of the organisation. From the point of view of the business leader, it does not matter how you increase secure time versus insecure time; what matters is that you achieve good results. On the other hand, this definition lends itself to serious managerial misunderstanding, because there is no transparent way to determine when there is “too much” insecure time in a system or organisation. Let us remember our previous result, by which a Security policy effectively increases “insecurity” by enabling access channels; this allows us to understand that the comparison between secure and insecure time cannot be quantitative. In other words, the proportion of access and trust allocation to prohibition and risk avoidance is not a matter of rough numeric balance, but only the product of a conscious, deliberate, planned decision rooted in the business model.

To achieve assurance, then, means to assure business objectives and not Security objectives: specifically, Security assurance does not imply a maximum of “secure time” but certainty that we are achieving the level of access we have adopted as part of the business model. Transitioning to this stance requires addressing the entire life cycle of the organisation’s information systems, covering all its internal and external channels of interaction. For this, the best designs will be those that encompass all the user mappings mentioned before. This requires a switch from “feature implementation” (the techno-centric view) to “integration” work (the service-oriented architecture view), seeing each Security initiative as a data-centric project, as a data integration task.

Security Assurance

For too long it has been trendy to speak about information as the “lifeblood” of the enterprise[7]. Equally easy has been the positioning of Security management as a special, even separate speciality within Information Technology. However, historical transformations are making Security very similar to Systems Design and Management. For example, in the same way that the systems management database (also known as the CMDB or Configuration Management Database) contains item dependency and inclusion mappings of the ICT systems, there is a nascent concept of a Security Management Database implicit in the disciplines of SIEM and Security Analytics.

Information interaction, bi-directional and multi-directional exchanges are part of the economic process, and information systems are part of the business infrastructure. While this is evident, an atmosphere of mystery and strangeness persists around Security, as we do not understand the fact discussed above: the most secure system is one that does not exchange information with the environment, while a system with a security policy is relatively more insecure. This paradox has driven many professionals to try to cancel the “problem” by proposing more security technologies, additional standards and processes, without caring for the overall meaning of Security in the organisation. With this rush towards supposedly new solutions (while we all know that nothing new has appeared in the security technologies in the past 20 years or so), we have just increased the fragmentation of the IT environments and have also introduced more complexity and dangers for the organisation. With this attitude we, the Security practitioners and experts, have effectively created our own problems.[8]

If instead we position Security Management at the same level as Services Management, following the example of the ITIL, we will be able to define the Security sub-areas in a very clear fashion. This in turn will clarify the skills and activities of the various Security practices.

In a recent article, I proposed a four-layered model to reorganise the Identity and Access Management domain according to this philosophy.[9] Other experts classify the Security and Identity disciplines in various manners, but with a common method consisting in listings of activities that can be easily seen in our practice. For example, a group of professionals based in the South of England recently published an approach to security process maturity defining five key areas[10]:

  • Protection:  Perimeter security, intrusion detection
  • Validation and Provisioning: User to username and password mapping, user to accounts and services mapping
  • Access and Integration: User to groups, roles and objects mapping
  • Compliance: Compliance with the law, individual rights and policies
  • Total Security Confidence: A continuing process of measurement and security improvement

 

These sub-processes are seen simultaneously as phases in time, layers in organisational security, and parts of the total picture. They form an integrated Security Management process. The problem with this approach is that further classifications and refinements are always possible; here, for example, is a more detailed breakdown (one I adopted up to 2006)[11]:

  • Protect: User platform to network mapping
  • Detect: User to protocol and network layer mapping
  • Validate: User to user name and password mapping
  • Provision: User to services and accounts mapping
  • Authorise: User to groups and objects mapping
  • Integrate: User to roles mapping
  • Verify: User to security policies mapping
  • Audit: User to logged event mapping
  • Manage: User identity and access management policies and risk management
  • Improve: User identity and access management continuous improvement
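
The principle behind this breakdown, one user mapping per sub-discipline, can be restated directly as data (a sketch; the pairings simply rephrase the bullets above):

```python
# Each Security sub-discipline keyed to the user mapping it relies on,
# restating the breakdown above as data rather than prose.
SUB_DISCIPLINES = {
    "Protect":   "user platform -> network",
    "Detect":    "user -> protocol and network layer",
    "Validate":  "user -> user name and password",
    "Provision": "user -> services and accounts",
    "Authorise": "user -> groups and objects",
    "Integrate": "user -> roles",
    "Verify":    "user -> security policies",
    "Audit":     "user -> logged events",
    "Manage":    "user -> identity/access policies and risk",
    "Improve":   "user -> continuous improvement metrics",
}

# One consequence of the principle: a cross-discipline question becomes
# a join over these mappings rather than a new point solution.
assert len(SUB_DISCIPLINES) == 10
```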

 

The advantage of the second listing is the guiding principle, associating data mappings with each of the sub-disciplines. For example, the first process (Protection) maps the user platform to the network, because security practices at that level focus on hardware and infrastructural measures. Further analysis would give even more detailed pictures of the Security sphere, but this does not solve the problem. In my opinion, we need more than analytical approaches, i.e. more than classifications. A dual approach is necessary, both synthetic and analytical, to first determine the most meaningful “parts” of the Security practice, but also to organise these into a logical, coherent structure. The solution that I propose is to align the security disciplines with the Fundamental Conceptions of Information that are at the core of this book. My work on this matter leads for example to the concept of “eclosion” in Identity Management, by which I mean an “opening” or “unfolding” of the Security concepts around the four basic perspectives of Direction, Selection, Protection and Verification.[12]

Following this path, we arrive at an idea of Security assurance that is far from hype and trends. This idea of Assurance is not new, though; it actually arose in earlier periods of IT academic work. Its best exponent was Jeffrey Williams[13], of the Canadian company Arca Systems. I have adopted his notion of assurance as a measure of confidence in the accuracy of a Security measurement, and not a measure of the degree of “satisfaction.” As Williams explains[14], a measure of satisfaction would depend on measuring the security needs, but how do you measure what you need, so that you can measure what you do to satisfy it? As I indicated in a previous chapter, the most disturbing problem in standard risk analysis is that it does not consider how risk assessment itself changes the security needs and profile of an organisation. While there are many ways to express risk quantitatively, objectively or subjectively, there is none to express “security needs.” So, by assuming a theory of assurance as “satisfaction,” we generate incomplete, obsolete, costly and problematic solutions.

Assurance is orthogonal to risk. They are different dimensions and should not be confused: a high assurance rating is not the same as high security or a low risk rating. In the “secure system” model, let us remember, there is no information exchange: no information “goes in” and none “goes out,” as the standard model would admit. Is that scenario compatible with a high assurance rating? It is not, because if no users have access to the assets, the system cannot satisfy any “security needs.”

Equally concerning is that, if a system has strong access controls but lacks auditing functions, how can we tell whether the installed technologies are functioning properly, or at all? If we confuse assurance with security, the system will appear to be safe while in fact there is high uncertainty about its state! Assurance, instead, has to be conceived as a measure of confidence in the information about information security, that is, as a second or higher order of information.

Separating assurance and security becomes especially interesting when we consider the needs of the decision-maker, again following the work by Arca Systems. After making a risk assessment, a decision-maker may have a quantitative idea of the risk level, but what happens if confidence in the information gathered is low? Have we not seen innumerable cases where we and the business leaders we advise are not sure whether the risks noted (within the traditional model) are acceptable or not?

If confidence in the security information is high, then it may be sensible to add new security mechanisms. If the confidence is low, adding a new tool will increase the uncertainty in the system and potentially create new Security issues. To address this, it is necessary to work out the assurance level, aiming at a second degree of insight and investigating how certain our perceptions of the state of the system are.
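
The reasoning here can be sketched as a simple decision rule (the thresholds and wording are invented for illustration, not a proposed standard):

```python
def next_step(risk_level, confidence):
    """Sketch of the decision rule: act on risk only when the
    information about the system can itself be trusted.

    risk_level, confidence: floats in [0, 1]; thresholds are illustrative.
    """
    if confidence < 0.5:
        # Low assurance: a new tool would only add uncertainty.
        return "raise assurance first (improve measurement and auditing)"
    if risk_level > 0.7:
        # High risk, trustworthy information: act on the risk.
        return "add or strengthen security mechanisms"
    return "maintain current controls"

print(next_step(risk_level=0.9, confidence=0.3))
# -> raise assurance first (improve measurement and auditing)
```

The design point is that the confidence test comes first: the same risk figure leads to different actions depending on how much we trust the information behind it.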

The assurance theory I am summarising here shows that the trend to multiply the technologies and services employed to enforce “security,” claiming to address uncertain or imagined levels of threat, is more a problem than a solution. Most security products and services converge on the protection and verification sub-disciplines, but in doing so, vendors and consultants are answering the short-term preoccupations of business managers. It always seems easier (and less expensive) to secure one perimeter after another than to discriminate the assurance levels of a large number of applications and communication networks.

We need only ask the Security teams in any organisation how they are collecting and analysing the information sources provided by the IT infrastructure and the Security technologies to understand how underdeveloped these processes are. If this is the case, at what level of assurance are we working? With this we have a better context for understanding the reactive insistence on risk management and the prevalent refusal to abandon risk-based security. A move towards a combination of risk and assurance does not mean leaving risk concepts behind, but putting them into a business perspective: what do we really know about our security position?

Reducing uncertainty in a secured system always requires setting up known channels of information and foolproof methods of data aggregation, including, most importantly, data on the security components themselves. Data flowing through those channels is meta-data (or second-order information, as I said in previous sections). A single meta-data format is possible and necessary for each assurance level and information channel, building on the idea of mapping users to a variety of business objects. I understand this meta-information as the only source that can reduce uncertainty in the organisation and its security implementations. Meta-information, or “second order” information, can also be seen as “positive evidence,” as compared with the “negative evidence” that we obtain from intrusion detection and security incidents or breaches. Positive evidence increases “secure time,” to use the standard language, while negative evidence increases uncertainty.
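
One way to picture a single meta-data format carrying positive and negative evidence (the field names are my own illustration; any real schema would be richer):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AssuranceRecord:
    """Second-order information: a record *about* a security component,
    not about a user transaction."""
    channel: str       # which information channel is reporting
    component: str     # the security component being observed
    evidence: str      # "positive" (normal function) or "negative" (incident)
    detail: str
    observed_at: datetime

records = [
    AssuranceRecord("authn", "ldap-gateway", "positive",
                    "heartbeat and policy checksum verified",
                    datetime.now(timezone.utc)),
    AssuranceRecord("network", "ids-probe-3", "negative",
                    "signature match on inbound traffic",
                    datetime.now(timezone.utc)),
]

positive = sum(r.evidence == "positive" for r in records)
negative = sum(r.evidence == "negative" for r in records)
# Positive evidence increases "secure time"; negative evidence increases
# uncertainty about the state of the system.
print(positive, negative)  # 1 1
```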

Starting from these ideas, it will be possible to design and complete Security “assured solutions,” contrasting with the lack of guarantees usually found in commercial security implementations. A similar lack of contractual assurance is usual in software offerings, as vendors are “not responsible” for the failures and limits of their products, but this is becoming less and less acceptable as the IT markets mature and adopt new business models themselves. To assure a solution in an uncertain environment, with increasing dangers and continuous economic and social change, we need to start by understanding that information security is not a technological matter, and the levels of “insecurity” in a system depend directly on the “security policies” that we apply, while Security management ultimately consists of user management processes.

Against the present dominance of point solutions and technological silos, we will finally see Security management as a cyclic change process, adjusting to the changes of the organisation, enabling and improving other business processes, and at the same time providing a decreasing unit cost and higher performance.

Internal and External Actors

In the decades since the rise of the electronic computer, a constant factor of change has been the expansion and diversification of the categories of users. The initial use of computers was restricted to the scientists and engineers developing the first computing engines in a handful of laboratories. In time the types of users expanded, and a division of labour took hold, with different “levels” and use patterns. This change has been continuous and we are far from seeing its end. Currently the types of users accessing enterprise and organisational information sources are very diverse. Here is an incomplete list of what we see in the field:

 

  • System owners or application owners
  • Managers (line managers or department managers)
  • Employees (staff, including many types of users)
  • Contractors (temporary personnel)
  • Consultants (temporary personnel from consulting services)
  • Services (service suppliers including cloud-based and hosted services)
  • Partners (including joint venture management and personnel)
  • Suppliers (management and staff of the supply chain network)
  • Researchers (scientists, academics, researchers of public or private entities, also individual experts)
  • Officials (members of government agencies and regulators)
  • Corporations (corporate customers’ management and staff)
  • Consumers (private customers)
  • Distributors (distribution channels’ management and staff)
  • Third-party customers (customers from partners and other sources)
  • Visitors (unidentified or unauthenticated visitors to shared resources)

 

Any experienced Security professional will in fact point out that this list is incomplete and lacking in detail; it is common for national and global corporations to have not only a wide but an ever-widening classification of users and potential users of their informational channels. In many if not all cases, this widening range of users is the product either of a deliberate business strategy to reach out to other parts of the population, or of business acquisitions, mergers, contracts and projects that are part of the growth process of the enterprise or the organisation.[15]

Simultaneously with this change, we see the disappearance of the “boundary” or external border of the organisation. This has been widely studied by the Jericho Forum under the concept of “deperimetrisation”.[16] In relation to this, it can be said that both public and private organisations no longer have an “internal space” with respect to information exchanges. For example, in IT-mediated transactions, users operate as customers of organisational services, and customers appear as users. Some organisations, recognising this trend, are effectively using the same infrastructure and Security solutions for “internal” and “external” users.

What is most relevant for my argument here is that this “deperimetrisation” of the organisational and enterprise space forces us to look at all types of users and all their access routes (i.e. their interaction channels) as a common ground where the differences are not in the “identity” of the users but in what the users do in the various segments of the organisational network.  As the British academic David Chadwick put it: “It does not matter who you are but what you can do.”[17]

To complete this panorama we need also to take note of the fundamental changes in the numbers and the ratio of external versus internal users. It is now normal to find public and private organisations where the number of external users is 4 or 5 times larger than that of the internal (staff) users. Indeed, in some public organisations the disparity is many times larger, as the user population is spread across entire nations. Some would say that public organisations never had an “external boundary” in this sense.

Certainly, as this trend progresses, there will be a complex of identity schemes inside and outside the organisation. Some will have weak assurance levels (or “identity proofs”); some will correspond to high requirements for assurance and authentication. On the whole, these combinations will increasingly appear as a transformation of the Security and Identity landscape, where the central aim of the IT services will converge on the management not of users, but of platforms. In more formal terms, as the complexity of user types, locations, routes and credentials increases, and as the proportion of external to internal users changes in favour of the external users, the entire endeavour of “securing” the organisation and its informational resources becomes one of managing information itself.

By this I mean managing information about the users of information channels, and also about these channels in all their complexity. Because of economic, social and organisational transformation, we have entered a period where Identity becomes Data: striking terms, but useful for understanding what our new goals are. The work of the Jericho Forum around Collaboration-Oriented Architecture is also relevant here.[18]

 

I have written before about expanding the concept of identity[19]. Along the same lines, and under the trends reviewed in the previous section, the concept of identity will develop, unfolding into a variety of internal and external “identities” for all types of organisations. It is important to explain here that this conception assumes there is no single identity that can be associated with a natural person (a biological individual). In practice, both industry and government need to rely on a combination of identification instruments or credentials, including biometric data, to compose and integrate a “known identity.”

Particular levels of integration or composition of these credentials are acceptable or valid for specific business or personal transactions.  For example in the UK it is normal to use a utility bill and a driving licence to buy a new phone contract. In other countries, with a national identity card, sometimes various credentials are embedded in that instrument.

So when writing about a variety of identities I do not refer to biological or personal identity, but only to identification instruments or events that serve as tokens of proof in public and private exchanges. This corresponds to my view that we are moving from “identity” conceived as an object (a thing) to “identity” seen as a subject and a membership in a relationship. In this sense, each identification instrument is a symbol of a particular relationship the individual holds with different organisations, institutions, countries, partnerships, and other human associations.
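
This relational view can be sketched as a composition of credentials, each standing for a relationship rather than for “the” identity of a person (the instruments and strength values below are illustrative only, not a proposed standard):

```python
# Identity as a set of relationship-backed credentials rather than a
# single object. Each credential proves one relationship; a transaction
# demands a particular combination, not "the" identity of the person.
credentials = {
    "driving_licence": {"relationship": "licensing authority", "strength": 3},
    "utility_bill":    {"relationship": "utility provider",    "strength": 1},
    "passport":        {"relationship": "national government", "strength": 4},
}

def sufficient(presented, required_strength):
    """A transaction accepts a composition of instruments whose combined
    strength meets its requirement (e.g. a phone contract accepting a
    utility bill plus a driving licence)."""
    total = sum(credentials[c]["strength"] for c in presented)
    return total >= required_strength

print(sufficient(["utility_bill", "driving_licence"], 4))  # True
print(sufficient(["utility_bill"], 4))                     # False
```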

Secure Identity Management is Data Management

Another shift in understanding is necessary once we perceive the complexity and the high numbers of users we are reaching in the sphere of Security and Identity management. My view is that we need to apply ideas of performance and event management to master the new tasks. As Identity management and access control extend and change in depth and form, the traditional approach centred on single-minded, simplistic “one-size-fits-all” mechanisms will cease to exist. It will become evident that these mechanisms are in reality the source of many Security disasters.

We should instead develop Security tools and processes by applying computing capacities to their full extent in data management. After all, the computer era is about performance in data management and nothing else. I am not chiefly speaking about cost here. It is well known that event management in the support disciplines has a cost-reduction driver: for example, handling millions of events per day in automated or semi-automated ways for data collection, aggregation, escalation and reporting. At a rate of about £1 per processed event, it is easy to see that event management (translated to security event management) can bring important cost reductions; but the main point is that security event management has multiple benefits beyond cost reduction. The most critical benefit is the complete transformation of the Security emphasis, moving from passive-defensive compliance and “risk-based” measures to active, trust- and risk-balanced solutions.
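
The cost argument can be made concrete with rough numbers; the £1-per-event figure comes from the text, while the daily volume, the automation share and the automated unit cost are my own illustrative assumptions:

```python
# Rough cost comparison for security event handling.
events_per_day = 1_000_000        # illustrative volume
cost_per_manual_event = 1.00      # pounds; ~£1 per processed event, as cited
cost_per_automated_event = 0.01   # pounds; assumed marginal cost

automated_events = 950_000        # assume 95% handled automatically
manual_events = events_per_day - automated_events

manual_only_cost = events_per_day * cost_per_manual_event
mixed_cost = (automated_events * cost_per_automated_event
              + manual_events * cost_per_manual_event)

print(f"£{manual_only_cost:,.0f} vs £{mixed_cost:,.0f} per day")
# -> £1,000,000 vs £59,500 per day
```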

It is important to recognise that this area is not devoid of progress: we have seen the development of log management, event management and security analytics products. What is necessary now is to bring all these capabilities together and rearrange them under a better philosophy of identity management as intelligent data control. The new view of Security and Identity management will be focused on mastering the complexity of access routes and data movements between the sources, registries, directories, provisioning systems, access controls and applications or services. This direction will make Security operations similar to manufacturing, with its steps from raw materials through purchasing, production, inventory, service and packaging to conveyance to the consumer.

In the same manner, Secure Identity Data management must have the following characteristics in common with standard enterprise data management:

  • Data Governance (organisational assurance and accountability)
  • Data Extraction (data collection and extraction model from physical sources)
  • Data Publishing (data standards and contracts for distribution)
  • Initial Staging (standards and locations for data storage)
  • Data Quality (validation criteria and processes, data cleansing, and a data quality model)
  • Clean Staging (standards and locations for storage of clean data)
  • Transformation and Enrichment (standards and contracts for data transformation)
  • Staging, Publishing or Loading (standards and locations for loading)
  • Loading (loading and updating model)
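
As a minimal sketch of how these stages compose, the following pipeline moves hypothetical identity records through extraction, quality validation and transformation before loading. The record fields and the validation rule are assumptions for illustration, not a real product’s data model.

```python
# A minimal sketch of the staged identity-data pipeline listed above.
# Stage names mirror the bullets; the record fields and the validation
# rule are hypothetical examples, not a real product's model.

from dataclasses import dataclass

@dataclass
class IdentityRecord:
    user_id: str
    email: str
    source: str          # the physical source the record was extracted from
    validated: bool = False

def extract(raw_rows: list[dict]) -> list[IdentityRecord]:
    """Data Extraction: pull records from a source into initial staging."""
    return [IdentityRecord(r["user_id"], r.get("email", ""), r["source"])
            for r in raw_rows]

def cleanse(records: list[IdentityRecord]) -> list[IdentityRecord]:
    """Data Quality: apply validation criteria; only clean records move on."""
    clean = []
    for rec in records:
        if rec.user_id and "@" in rec.email:   # illustrative validation rule
            rec.validated = True
            clean.append(rec)
    return clean

def transform(records: list[IdentityRecord]) -> list[IdentityRecord]:
    """Transformation and Enrichment: normalise before loading."""
    for rec in records:
        rec.email = rec.email.lower().strip()
    return records

raw = [
    {"user_id": "u1", "email": "Alice@Example.com", "source": "hr-db"},
    {"user_id": "u2", "email": "", "source": "hr-db"},   # fails validation
]
loaded = transform(cleanse(extract(raw)))
print([r.email for r in loaded])   # only the valid, normalised record survives
```

The point of the sketch is not the code itself but the discipline: every record that reaches the loading stage has passed an explicit quality gate, exactly as in standard enterprise data management.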

 

When considering this fundamental shift, we need to return to early academic results which anticipated what we are seeing now[20]: the view of organisations as “information processors.”[21] This view, which I still follow except for the treatment of information as a material flow, holds that the capacity of an organisation depends on its structure, its decision-making capabilities, and the ability and experience of its people. From this perspective it is not difficult to bring to the foreground areas which have been investigated for many years in other branches of business and public management, for example the relatively “old” concept of Data Governance. It is equally easy to see that identity data must have a series of functions and processes around it:

  • Data sources
  • Data extraction or acquisition
  • Data processing
  • Data presentation
  • Decision making (authorisation processes)
  • Data custody
  • Data delivery
  • Data usage
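
Of these functions, the decision-making step is the one most specific to Identity management. A minimal sketch, assuming hypothetical attribute and group names, shows an authorisation decision driven by identity data, and gated on data quality, rather than hard-coded into the application:

```python
# A sketch of the "decision making (authorisation processes)" function:
# an access decision made over identity data. The resource names, group
# names and the data-quality flag are hypothetical assumptions.

def authorise(identity: dict, resource: str, action: str) -> bool:
    """Grant access only if the identity record carries the required
    membership and the record itself has passed the data-quality gate."""
    if not identity.get("validated"):
        return False                      # unclean data never drives a decision
    required = {"payroll": "hr-staff", "ledger": "finance-staff"}.get(resource)
    return required in identity.get("groups", []) and action in ("read", "write")

alice = {"user_id": "u1", "validated": True, "groups": ["hr-staff"]}
print(authorise(alice, "payroll", "read"))   # True
print(authorise(alice, "ledger", "read"))    # False: no finance membership
```

Treating authorisation as one function in the data lifecycle, rather than an isolated technical control, is what ties it back to the Data Governance concept above.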

 

When emphasising a data-centric approach, it is vital that this change in perspective does not lead to a focus only on the performance-related aspects of the problem. Efficiencies in data management are definitely at the forefront of the proposed change in direction, but equally important are the qualitative aspects of Security and Identity management. I will cover the cost management aspects of these processes in a later chapter of this book, but first let’s look in more detail into the data quality criteria that we should apply.

Data Quality Criteria in Security Management

Once we adopt this new thinking, it becomes clear that the essential — I would say “classical” — ideas of enterprise data management and “information architecture” are applicable to our remit. This is one more area where the “strangeness” or “uniqueness” of Security is disappearing, and has to disappear, to convert Security into a suitable business operating partner of the organisation. The studies covering data quality and information architecture are so diverse in theme and depth that I cannot summarise all their aspects here. As a more synthetic starting point I refer the reader to the work by Richard Wang and Diana Strong published by the Massachusetts Institute of Technology.[22]

Wang and Strong propose a schema that aligns very well with my own approach to Security Management (i.e. the categories or perspectives of Direction, Selection, Protection and Verification). After studying many data quality dimensions they proposed four categories: Intrinsic, Contextual, Representational and Accessibility (which I will refer to as Interactional) Data Quality.[23]

Each category summarises several dimensions as follows:

  • Intrinsic Information Quality – representing Accuracy, Believability, Reputation, Objectivity, Consistency, Completeness, Precision, Reliability and Correctness
  • Contextual Information Quality – representing Relevance, Timeliness, Amount, Currency, Detail and Comprehensiveness
  • Representational Information Quality – representing Understandability, Interpretability, Consistency, Arrangement, Appearance, Comparability and Compatibility
  • Interactional Information Quality – representing Accessibility, Security, Availability, Usability, Convenience, Locatability, Privacy and Delivery
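
As a small illustration of how such dimensions become operational checks, the sketch below tests one intrinsic dimension (internal consistency and completeness) and one contextual dimension (timeliness) against a hypothetical identity record. The field names and the 90-day threshold are assumptions for the example.

```python
# Applying two of the quality dimensions above to an identity record.
# Field names and the 90-day freshness threshold are illustrative
# assumptions, not values from Wang and Strong.

from datetime import date

def intrinsic_ok(record: dict) -> bool:
    """Intrinsic quality proxy: the record is complete and well-formed."""
    return bool(record.get("user_id")) and "@" in record.get("email", "")

def contextual_ok(record: dict, today: date, max_age_days: int = 90) -> bool:
    """Contextual quality proxy: Timeliness/Currency of the record."""
    return (today - record["last_verified"]).days <= max_age_days

record = {"user_id": "u1", "email": "alice@example.com",
          "last_verified": date(2024, 1, 1)}

print(intrinsic_ok(record))                           # True
print(contextual_ok(record, today=date(2024, 2, 1)))  # True: 31 days old
print(contextual_ok(record, today=date(2024, 6, 1)))  # False: stale record
```

Each of the other dimensions can be turned into a comparable measurable test, which is what makes the schema usable for Security management rather than merely descriptive.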

 

We could draw some immediate parallels with the Security perspectives, for example the link between “intrinsic information quality” and the disciplines of Security Direction. Equally important is the connection between “interactional information quality” and the disciplines of Protection. While these parallels are worth pursuing, the more important point at this stage is that information qualities are needed at two levels by the Security disciplines, and by Identity management in particular. First, we need information quality measures for the information we handle, extracted from managed systems and user registries; second, we also require information criteria for the information we produce and transmit inside and outside of the organisation.

For this reason it is important to see not only that we are managing data, as an “external observer” or entity handling “data flows” in the organisation, but that we are also an observed observer, i.e. a source of information for the decision-makers of the enterprise or organisation. Data quality criteria are fully applicable to Security information sources as well as to Security outputs, but this is precisely an area that needs urgent development, as few organisations have any integration between these two concerns. Looking into the collection and analysis of information sources, we see that this domain is largely elementary and driven only by yearly audit calendars. The second level, where Security systems themselves become a coherent source of information and work to reduce uncertainty in the organisation, still lies in the future.

This last assertion needs some qualification. It is true that, within the prevailing tendency towards silos and point solutions, some departments and some applications or systems in many organisations do have “security information systems”. So when I put forward a negative assessment of the current status of Security and Identity data management, I am not referring to these fragmentary, technology-centred approaches. We need to set our aims well above the current patch-as-you-go paradigm and the never-ending “technology upgrades”, and achieve excellence in our professional work. I think that we have been accustomed for too long to a state of mind where we manage failure instead of managing for excellence.[24]

In the Security professions, adopting an integrated data model and information quality criteria as described in this chapter will signal a major step in that direction.



[1] Carlos Trigoso “The Path to Assured Security Solutions”, ISSA Journal, 2006

[2] Paul Evans, “Information Security as a Business Process”, IT Network Solutions, 2004

[3] Cazemier & Overbeek, “ITIL Security Management”, Office of Government Commerce, United Kingdom, 20th April 1999

[4] Kan Zhang, “A theory for Systems Security”, Cambridge University, 1997

[5] Amit Singh, “A Taste of Computer Security”, http://www.kernelthread.com/publications/security/ , 2004

[6] More precisely, insecure time is the sum of the time it takes to detect an “incident” and the time it takes to react to the incident (over all incidents in a given interval).

[7] See: HP Neoview Enterprise Data Warehousing platform http://www.hp.com/hpinfo/newsroom/press_kits/2007/businesstechnology/brochure_neoview4AA0-7932enw.pdf

[10] Stuart Wilson, Chris Ayres “Modern Business Challenges – Compliance and Total Security Confidence”, Pirean, 2005

[11] Carlos Trigoso, “The Path to Assured Solutions”, ISSA Journal, 2006

[12] Carlos Trigoso, “Eclosion: The Future of Identity Management” http://carlos-trigoso.com/2010/12/22/eclosion-the-future-of-identity-management/, 2010

[13] J. Williams, “A Framework for Reasoning about Assurance”, Arca Systems, 1995

[14] J. Williams, D. Landoll, “An Enterprise Assurance Framework”, Arca Systems, 1996

[15] Adrian Seccombe, “Identity the New Perimeter”, Surrey University, 2010

[17] See Professor Chadwick’s work here: http://www.cs.kent.ac.uk/people/staff/dwc8/

[18] See: “Collaboration Oriented Architecture”, http://www.opengroup.org/jericho/COA_v1.0.pdf

[19] Carlos Trigoso, “Required: Varieties of Identity to deliver the value of Cloud Computing”, http://carlos-trigoso.com/2010/09/19/required-varieties-of-identity-to-deliver-the-value-of-cloud-computing/

[20] Jay Galbraith, “Organizational Design: An Information Processing View”, 1974

[21] See also: Gartner, “Consider Identity and Access Management as a Process, not a Technology”, 2005

[22] R. Wang, D. Strong, “Beyond accuracy: What data quality means to data consumers”, Journal of Management Information Systems, MIT, 1996

[23] See also: R. Y. Wang, H. B. Kon and S. E. Madnick, “Data Quality Requirements Analysis and Modeling”, 1992

[24] See Donald Mitchell, Carol Coles and Robert Metz, “The 2,000 Percent Solution: Free Your Organization from ‘Stalled’ Thinking to Achieve Exponential Success”, 1999