-7- Quantitative Identity Management

Fundamental Conceptions Of Information As Applied to Identity Logistics – Chapter 7. © Carlos Trigoso 2012-2013

Information value and flow

As Identity becomes data with the global changes in Information Technology and organisational transformation, the emphasis moves from Identity as a security item to the performance of Identity data exchanges. It is a commonly accepted notion that information “flows” from one point to another, from the emitter to the receptor, inside and outside the organisation. This is a one-sided view associated with one of the four possible Perspectives or metaphors that I use in this book: the Machine metaphor. Within this perspective, Information is seen as a substance that flows from one point to another. This opinion is associated with the belief that information flows in one direction, and that it is possible to speak of information as a one-directional flow of “data.” Other chapters in this book cover the implications of this view, and how it affects the practice of IT and Security management.

I think the mechanistic definition of information has a place in organisational and business management, but taken in isolation it has counterproductive effects and leads to a wrong understanding of the challenges of Identity and Security management. The essential problem is that mechanistic definitions ignore that any signals arriving at the Receiver need (even in an electro-mechanical model) some form of enablement, activation or detection, so that the signals are perceived at all. Interpretation of the signals also needs a complement of filtering, comparison, synchronisation and other Receptor-side capabilities to “make sense” for the party acting as consumer of the information.

The techno-centric view takes for granted that the receptor is already enabled in such a way that a unilateral flow of information from the “emitter” will “make sense,” but that is just an assumption and not an understanding of what is happening. What activates and enables the receptor? For any information perceived by the receptor there must be at least an equipotential counterpart (a structured signal) emerging from the receptor itself. More generally, the receptor must be tuned to the signal. A good image of what happens at a physical level in a communication channel is that a “computation” takes place by which two tuned and correlated signals enact a two-way exchange between the emitter and the receiver.

At a different level, when we consider not only the technical or physical layer but also the organisation as a whole, the same principles are valid. The best approach to information exchanges is one where we assume not a unilateral flow but at least a two-way exchange of data. Translated to the business view, a good explanation must make use of the idea of the “value chain.” This approach refuses to attach value to information itself, and instead judges the benefit of a process by the results of the production and distribution processes as a whole.

The information exchange is a facet of the business process and is multilateral in nature. In this scenario, information and data become mediations and signals of the value process.

This leads us to an indirect way of measuring the value of information, one that does not need to think of information in terms of an object or “thing” that can be stored. In fact, in this approach, information is simultaneously a thing (a mark stored in electronic media) and a relationship (because stored data only becomes “information” when it is exchanged, and when it is subject to read and write processes by the participants).

In other words, information has no value until it arrives at the consumer, and this can be either on the emitter or on the receiver side. As there are consumers on both ends of the information transaction, we should also consider a new terminology and stop talking about an origin and a destination of the flow of information. Both ends are simultaneously “emitting” and “receiving” data.

Continuing with the image presented in chapter two: a “secure,” “protected” environment that has no information exchanges with the “exterior” serves no purpose, so what needs to be protected in a Security strategy is not “information,” but the combined techno-economic or socio-technical process that provides information to the consumers (on both ends of the transaction). More generally, we could say these consumers are of various types, not only business management decision-makers but also all other types of individuals involved, including the consumers and operators in the production and distribution process.

Losing the ideology that we should work to protect a “thing” or an object will help us to address the challenges and solutions required for a business value chain and its information processes. In “The Management Information Value Chain,” Robert L. Phillips proposes an indirect way of measuring the “value of information.”i Focusing in particular on “management information,” Phillips writes: “A management information system helps an organisation make better decisions. […] Perhaps the most common barrier to achieving the goal of competitive advantage is that management finds it difficult or impossible to measure how management information systems contribute to corporate value. Without this understanding, it is difficult or impossible to evaluate investments in management information systems on a basis consistent with other investments. As a result, management information system investment decisions are often based on arcane technical considerations that only specialists understand, and escalating MIS costs do not seem to be matched by corresponding benefits. No wonder many companies have found obtaining competitive advantage from their systems to be elusive.” To correct this, Phillips suggests the utility of management information systems and supporting activities is to provide information that enables better decisions. In this sense, the value of management information is equal to the increased profitability resulting from the better decisions that it enables.ii

The idea is simple and can be generalised beyond the “value chain” method. Most importantly, by assuming a bi-directional or multidirectional information exchange process, we will see “decision-makers” not only on the side of “management.” Every information producer and consumer is a decision-maker. Therefore, the value of the information processes and the systems supporting these can be measured by how they perform in enabling those decisions. If we consider how Security and Identity solutions enable information exchanges in an organisation, we will immediately see that confidentiality, availability, integrity and quality of information are required on all ends of the information network and the results of any investment in this sphere can be measured by how the solution supports decision-making. It is important to see that Phillips does not attribute an intrinsic value to information, but assigns all relevance to the process of “converting data into information,” i.e. into a business process.
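A minimal sketch may help fix the idea. The scenario, the options and all figures below are invented, and the decision model is not Phillips’s own formalism; it only illustrates the principle that the value of an information item is the difference between the outcome of the decision taken with it and the outcome of the decision taken without it.

```python
# Hypothetical illustration: the value of management information equals the
# improvement in the decision it enables. All figures and names are invented.

def best_decision(estimates):
    """Pick the option with the highest estimated profit."""
    return max(estimates, key=estimates.get)

# Profit estimates available BEFORE the information arrives (hypothetical).
prior_estimates = {"expand": 80_000.0, "hold": 100_000.0}
# More accurate estimates AFTER the information (e.g. a demand report) arrives.
informed_estimates = {"expand": 150_000.0, "hold": 95_000.0}

choice_without = best_decision(prior_estimates)    # "hold"
choice_with = best_decision(informed_estimates)    # "expand"

# Judge both choices against the informed estimates (the best available picture
# of reality) and attribute the difference to the information itself.
value_of_information = informed_estimates[choice_with] - informed_estimates[choice_without]
print(f"Value attributed to the information: {value_of_information:,.0f}")  # 55,000
```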

If the conventional perspective (centred on the Protection of assets) prioritises the “flows of data,” the new perspective (with a complete integration of the objective and subjective views) must underline the flows of identity data. We can still accept the relevance of data movements (even under the fictitious image of a material flow), but this has to be complemented with the correlated identity flows that interact with the data flows. If data transfers are the weft of the network, identity flows are the warp. One supports the other, so the “value of data” materialises when users are enabled to consume it in many forms, and the “value of identity data” is realised when it is applied to business data transfers.

This becomes more visible if we consider the “extended enterprise business model,” meaning the data and identity exchanges across the value chain or value network. On the technical side, it has been observed that higher performance requires a tighter integration of the Information Technology systems across the extended enterprise. External users require access to systems hosted by the organisation, and internal users need access to systems hosted by partners and service providers. Equally important are those requirements where organisational and external users need access to Cloud-based services that are outside both the enterprise and partner authentication domains. A mobile, diverse and geographically scattered workforce is more and more the reality of national and global corporations, and especially of the new “virtual” organisations.

In this context, the “value” of information is dependent on the interplay of identity and business data, and the key questions are no longer those of “Protection” but those of “Selection” (Trust Allocation and user enablement).

Identity and Organisational Transformation

Many times, I have been asked, “What is Identity management and how does it work?” Many Security professionals are still unsure about the scope and nature of this discipline. Identity management is above everything else a Security discipline whose ultimate goal is to achieve efficiency and organisational excellence. It was already relevant in all types of organisations where informational processes are more or less well “protected” but Identity data is not considered an asset and is not managed with appropriate processes. It is even more relevant in a stage where the majority of users are not within the organisational boundaries. It would be an error, though, to see Identity management as just a way of reducing the complexity and costs of user management. We can certainly do that by means of automation and workflow engines, but that would reduce Identity Management to a normal technology-centric discipline. The truth is that neither automation nor workflows nor the desired user management tools can work by themselves, and Identity management always has a very strong component of organisational transformation.

Within the conventional framework, Identity management tended to put much emphasis on setting up role-based access controls. This still makes sense today for some areas of the organisation and some sets of applications that require an approach based on roles, but in the extended enterprise it is difficult, if not impossible, to express Identity management requirements in terms of an organisation-wide role model. Neither the diversity of applications nor the variety of users and locations allows for a single model, and hence it is essential to have a more flexible, distributed approach. It is necessary to consider other ways to “manage” external identities, for example self-service, third-party registration, lightweight authentication and federation services. The “roles” of the external identity types cannot be defined in the same terms as the “internal roles,” as the latter depend on data owned by the organisation under employment contracts and job definitions. On the other hand, this evolution is leading to the recognition that external users should and can only be managed by the external entities themselves (including the assurance providers and the individuals involved).

Corresponding to this, the main direction of the effort is now not towards access control, but towards access enablement. Selective enablement, which provides access through a deeper, more complex and layered set of assurance levels, ultimately amounts to access control. The emphasis is on giving access and selectively enabling access channels, so the control objective is achieved in a different way.

This different approach also means the focus is now on performance of the entire identity and data exchanges, and not on security. In the past, Identity data was primarily bound to separate “silos” or islands of IT solutions, and the evolution of Identity management was dependent on the upgrade and improvement road map of the other areas in the IT departments. In the new period, Identity management becomes less technological, more standardised, and moves both inside and outside of the organisation. In this sense, it becomes more and more independent of specific platforms or technology brands.

Today it is still difficult to see the result of this evolution, but the first steps have already been taken by many organisations, especially those that have seen the complete failure of attempts to centralise Identity management following the “enterprise” model. The current emphasis on regulatory compliance, driven by legislation, also obscures the underlying transformation, but it will soon leave the forefront of business concerns as people begin to see that audit issues arise precisely when Identity is not owned and managed, whereas the Compliance emphasis is currently 100% reactive and improvised, and thus repetitive, costly, expedient, and unsustainable.iii

In the new period, too, when Identity becomes data, the focus is on performance. By this, I mean the performance of the information exchange network as a whole. When speaking of performance, what I mean is Quantitative Identity Management, as announced by the title of the present chapter, but we still need to cover some other points before addressing the new idea.

What IT never did, and never will

When reading the current debate about the future of IT and Identity management, we see how a large part of the Security expert community is still trying to “save IT by means of IT.” It would be suitable to analyse all the motivations for this stance, and see for example why the IT professions are so attached to the survival of the traditional IT department in their organisations. Nevertheless, such analysis would go beyond the subject of this book and must be left for another time. Here I want to look only into the context in which Identity management is becoming a quantitative discipline even if the majority of the experts cannot recognise this.

Professionals react to predictions about the Cloud with either disbelief or smiling contempt. It is visible that IT professionals, especially veterans and Security experts, are not comfortable with the implications of the Internet and Cloud computing. Many opinions have been expressed, for example, against predictions that the IT Department was becoming a thing of the past, and intense debates have been generated by the suggestion that IT as we know it is about to disappear.

So, for example, when Nicholas Carr predicted the end of Corporate Computing in 2005, his writings were heavily criticised by experts touting the “importance” of IT and the ability of IT departments to “bring value” to organisations. Carr wrote: “Something happened in the first years of the 20th century that would have seemed unthinkable just a few decades earlier: Manufacturers began to shut down and take apart their waterwheels, steam engines and electric generators. They no longer had to run their own dynamos; they could simply buy the electricity they needed, as they required it, from new utility suppliers. Power generation was being transformed from a corporate function into a utility. Now, almost exactly a century later, history is repeating itself. Information technology is undergoing the same transformation.”iv

I am a witness to this transformation and have seen the evolution of the IT world since the time when the only computers in existence were housed in special rooms and attended by people in white coats. While I have seen the constant change of IT Departments (including in the area of Identity management), I also know IT never did Identity management well. In my mind, the only question is whether the IT shops will ever do it well, now that the global transition to cloud-based services does not leave much time for more experiments and failures.

Perhaps the defenders of the IT Department have a point, because since Carr and others started to describe the challenges faced by the traditional approach, the IT Departments of this world have put up a strong fight. Guided by technology vendors with an interest in the existing enterprise platforms, the IT Departments have projected medium and long-term “adoption” of cloud technologies but always in a hybrid modality, as extensions of the current IT roadmaps.

This is reflected in the way technology choices are made. There is a simple way to discriminate which technologies in the market point to the future, and which ones cultivate the past and thrive on the continued entanglement of IT “solutions.” While there are criteria to discover the fit of a particular technology into an IT roadmap, we need to abandon the obvious approach. What does it mean, after all, to “fit” into an existing IT roadmap? In most cases, an IT roadmap will be a succession of upgrades and patches to pre-existing technologies, chosen under different circumstances and for now-forgotten reasons. In this context, “fitting” cannot be more than trudging along an already unsuccessful IT path.

For example, in traditional Identity management, typical services do not extend to user types outside the enterprise boundary. As business expands, it is difficult to share information safely. Federation technologies should address this, but they require costly changes for all parties. Small players never adopted Federation, and partners could not agree on who was the Identity Provider and who would have control of user information. In this context, Identity management becomes a succession of “upgrades” and “workarounds,” leaving business programmes and both external and internal users with underperforming identity services.

Typical characteristics of the situation are:

· For every I&AM project delivered, there are always two or three in each organisation with severe challenges.

· The “enterprise IT” approach appears unable to address Global requirements.

· Typical challenges include high CapEx and OpEx.

· Large amount of integration and customisation needed to support business applications.

· Complex, costly compliance management.

· “Solutions” are rigid and expensive to change.

· Technologies to cover employee, partner and joint venture authentication and provisioning with the different security levels and services are difficult to find.

On the other hand, change never stops, propelled by social and economic transformations. As the organisational boundary disappears and business develops based on extended collaboration circles, Identity becomes data and its management moves out of the enterprise, towards external access routes including mobile workers and consumers. In a short time, Identity services will scale to hundreds of millions of users covering every major industrial and post-industrial economy.

It is a known fact that IT Departments never did Identity management well, precisely because it was involved in a never-ending upgrade path to existing infrastructures and point solutions. As things evolve and the centre of gravity of Identity moves outside of the organisation, will the IT Department still have a chance to “manage” it? Is there room for some new experiment that will finally allow the IT department to do what it always held as a pending task? My take on this matter is that there is no more “time.” More than that, I believe the meaning of the historic transformation is that, as Identity becomes data and Identity management becomes focused on quantitative measures and performance, the complexity and fragmentation of the current IT environments automatically disables them from competing with the new efficiency-focused cloud-based services. There will be more resistance to change, and there will be more hybrid cloud adoption as well as more confusion about the “lack of security” of Cloud services, but the result is unavoidable. I am moderately certain that we will see a global switch to quantitative identity management (another way of speaking about a utility model) within the next three to five years; and I am sure that this will happen within the decade.

So, to return to our question: how can we discriminate the technologies that thrive on the past from those that open the gates to the future? The criterion is: look out for those technologies that are there to be “implemented” by IT, or to “help” IT deliver Identity services. These reveal the defensive actions of the IT Departments, which are still trying to remain relevant and hold on to their “realm.” Too little, too late, I say. These are, in my view, the technologies and solutions that stand in the way of the great transformation.

Enterprise Identity Management Layers

Within the Enterprise framework, Identity management solutions consist of four layers of technology and processes. These organise the identity data flows and the user access to data with the aim of controlling access and achieving the goals of the organisation. The four layers are:

1. Identity Data Governance

2. Identity and Role Management

3. Identity Data Services

4. Identity Data Control

The following diagram shows the four layers and the various sub-processes they contain.

[Figure: Identity management layers (© C. Trigoso 2010)]

The Identity management functional layers are interdependent in one direction: the upper layers are dependent on the lower ones. The foundational layer, labelled “Identity Data Governance,” is the basis for the full development of Identity management in the organisation. It is useful to see these layers not as a one-off project, but as permanent processes that must be part of business operations.

Starting from this conception, an Identity management programme should have a series of steps or staggered projects by which each layer is built out and uses the resources made available by the underlying levels. Subsets of these capabilities can be implemented in advance of the full maturity of the basic layers (for example before completing the Identity Data Governance capabilities), but in that case those early developments could become isolated, point solutions unable to deliver their full value.
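As a rough illustration of this one-directional dependency, the following sketch (the maturity figures are invented; the layer names are the four listed above) flags capabilities that have been built out ahead of their foundations and are therefore at risk of becoming point solutions.

```python
# Sketch of the one-directional dependency between Identity management layers:
# an upper layer should not be built out beyond the maturity of the layers
# beneath it. Maturity scores are invented for the example.

LAYERS = [  # ordered from the foundation upwards
    "Identity Data Governance",
    "Identity and Role Management",
    "Identity Data Services",
    "Identity Data Control",
]

maturity = {  # hypothetical maturity level per layer, 0.0 .. 1.0
    "Identity Data Governance": 0.4,
    "Identity and Role Management": 0.7,
    "Identity Data Services": 0.6,
    "Identity Data Control": 0.2,
}

def at_risk_layers(maturity):
    """Flag layers that are more mature than any layer beneath them."""
    risky = []
    for i, upper in enumerate(LAYERS[1:], start=1):
        for lower in LAYERS[:i]:
            if maturity[upper] > maturity[lower]:
                risky.append((upper, lower))
    return risky

for upper, lower in at_risk_layers(maturity):
    print(f"'{upper}' is ahead of its foundation '{lower}': "
          "risk of an isolated point solution")
```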

For the purposes of this book, it is important to explain the reason behind this layering of the Identity management solutions. The key to this architecture is the focus on identity as data but in the context of the interactions of users and resources. When considering the simplest case of user access control, we see at least two sets of data in interaction. On the one hand, we have the user access request (to a resource); on the other we have the user access approval. A simple image will reflect this basic fact:

[Figure: Simple access control interaction (example)]

The example is “simple” because the response from the Resource to the User is absent, as well as the interaction between the User and the Control function. In fact, each of the write operations (depicted as red arrows in the diagram) is a combination of reading and writing operations, and is strictly speaking bi-directional in nature.

To picture this we can have a look at another, more complete diagram:

[Figure: Expanded user access interaction]

In this case, we see two “write” operations, one between the user and the Resource, and one between the user management function and the item labelled as “Resource Control.” In turn, the Resource “reads” the permission from the Resource Control item. The important point here is not the diagram itself but the interaction of various operations. A more complex and realistic diagram would show the user request going first to the user management function, for example, but in the end all interactions amount to exchanges of information that eventually allow the user to read and/or write from/to the resource.

Fundamentally, however we complicate and expand this diagram, perhaps introducing more users and more resources, or even more steps in the authorisation and read/write operations, we can see there are information interactions that enable other information operations. More specifically, the user management and permission data exchange makes the user access (read/write) exchange possible. For this reason, I put the permission arrow as “crossing” or “orthogonal” to the access arrow. These two “arrows” depend on each other (in fact the permission exchange is triggered by the access request and carries user data), and the user access exchange depends on the permissions given. Nevertheless, these dependencies do not mean the two exchanges are the same or even that they occur in the same “channels.” As authentication and authorisation mechanisms become more complex and generic, these exchanges become distinct and separate; in fact, they also use different technologies and protocols to establish communication.
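A minimal sketch may help picture this “orthogonal” relationship. The names and the in-memory permission store below are hypothetical; the point is only that the permission exchange and the data exchange are two distinct calls, with the former gating the latter.

```python
# Sketch of the two correlated exchanges behind a simple access request:
# the permission exchange (user management / resource control) gates the
# data exchange between user and resource. Names and stores are hypothetical.

PERMISSIONS = {  # resource-control data, written by the user-management function
    ("alice", "payroll-db"): {"read"},
    ("bob", "payroll-db"): {"read", "write"},
}

def permission_exchange(user, resource, operation):
    """The 'orthogonal' exchange: the resource reads the permission record."""
    return operation in PERMISSIONS.get((user, resource), set())

def data_exchange(user, resource, operation):
    """The user <-> resource exchange, enabled (gated) by the permission exchange."""
    if not permission_exchange(user, resource, operation):
        return f"{operation} on {resource} denied for {user}"
    return f"{operation} on {resource} performed for {user}"  # read/write would happen here

print(data_exchange("alice", "payroll-db", "read"))
print(data_exchange("alice", "payroll-db", "write"))  # gated out: no write permission
```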

A wider picture of these exchanges in the organisation will have to abstract away the details of the exchanges, while retaining the essence of the matter: how permissions and user management enable user access. Let us consider now an image of multiple user access routes to resources from inside and outside the organisation. This will also allow us to show how user access information “flows” across the “layers” of the Identity management architecture.

[Figure: Simplified Permission interactions (© C.Trigoso 2012)]

This diagram shows the permission (authorisation) interaction for various types of users, for example employees accessing internal resources. In all cases, Identity and permission information moves through different layers, beginning with the organisational structure depicted here as “business structure” under the brown label on the left side of the diagram.

The other layers are Identity Data Management (Governance), Identity and Role Management, Identity Data Services and Identity Data Control. What we describe here in several steps represents the “vertical” red arrow in the previous diagram (Expanded User Access Interaction).

The experienced reader will see immediately that the “flow” shown here is extremely simplified, because it assumes that each user type (the green labels on the left) has a single path or direction of flow, until the information arrives at the last stage (Reporting). In reality, these data flows are more convoluted, because they have grown with the organisation and are the result of a multitude of partial solutions and isolated technologies. The real picture is more or less like this:

[Figure: Typical Permission data interaction (©C.Trigoso 2012)]

In this case I show additional user types (external or third-party users), as well as the fact that user types are managed in a variety of ways and data is mixed and moved from one layer to another along more or less complicated “paths.” An even more detailed picture of the situation in real-world organisations would show that several user types are not managed at all, or that, where they are managed, not all layers are in place, so that, for example, there is no monitoring or reporting in the user environment. Overall, this picture should work only as a motivation to consider the subject of Quantitative Identity Management from a new angle.

A factor must be added to this view: The increasing and overwhelming importance of external users in organisations. As the traditional organisational boundaries disappear, the types of users become more numerous, and the range of valid user access routes blurs the distinctions between internal and external users. The full scope of user types is usually wider than this summary list:

· Owner

· Manager

· Staff

· Contractor

· Consultant

· Partner

· Supplier

· Customer

· Federated user

· Visitor

While there are still differences related to the types of transactions these users need, many sub-sets of users in each of the categories have the same access requirements. Sometimes, users previously considered “external” have more access rights than “internal” users. Also, internal users interact with the organisation as external customers, and external users have access to internal information sources in varying degrees. This leads to a situation where “enterprise Identity management” cannot be formulated separately from “customer or consumer Identity management.”

This also leads to a situation where the Identity landscape, including all data interactions, is much more complicated. The following image shows an increased number of user types on the left side and a more complex network of interactions. This is obviously only an example, and each organisation will have a different set of information routes and actors.

[Figure: Generalised Permission Data interaction (© C.Trigoso 2012)]

If we compare the second diagram in this chapter with the situation depicted above, we need to keep in mind how permission data exchanges act as a “gate” or as an “enabler” for user access to resources. In the more complex diagram, we should now look at the Managed Systems and Applications, which represent Resources accessed by the users. These are shown in two columns under the label “Identity Data Services.” We see there two types of resources: Managed Systems and Application Services. It is not important now to explain these in more detail, but just to keep in mind that users request access to and eventually work on these resources (Systems and Applications), and all productive work from their side is done at this stage. It is easy to see that if the data interactions that move across the diagram from left to right are inefficient, limited, or non-existent, business processes will be disabled or at least made less effective and more costly.

The normal case in an organisation with an inefficient permissions data “flow” (in reality a series of bi-directional interactions) is that “the job gets done” anyway, although with serious delays and the absorbed costs arising from them. Users that are not “gated” or “enabled” via an Identity management solution still get access to the resources they need, but at great cost. Afterwards, these users remain attached to the target resources and start amassing permissions, which in turn become a Security problem for the organisation. This is how Security problems arise from the lack of efficiency in Identity management.

If we consider these problems only from the standpoint of the Protection paradigm, we would be looking only at the effects and not the causes. For sure, excessive or lacking permissions at the Resource level (Systems and Applications) must be seen as Security issues, but the underlying problem is non-existent or underperforming user management. This conclusion can be visualised if we take another view of the informational exchanges in the organisation. An idealised Identity data “flow” (even if it is not mature and rationalised) looks like a series of steps and interactions:

[Figure: Identity Data “Flow” example]

I use this image to explain how user management rationalisation and streamlining has a direct impact on productivity and business performance. At the same time, it shows that the “movement” of data from the identity repositories at the top to the applications and systems at the bottom is also a performance problem and needs to be addressed with the more or less standard methods used in common data management.
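If the movement of identity data is treated as a performance problem, the obvious first step is to measure it. The sketch below (the timestamps, system names and the one-hour target are all invented) computes the propagation delay of identity record changes from a source repository to the managed targets, in the spirit of ordinary data-management monitoring.

```python
# Sketch: measuring how long identity records take to propagate from the
# source repository to the managed applications. All data is invented.
from datetime import datetime, timedelta

source_updates = {  # user -> time the record changed in the identity repository
    "u100": datetime(2012, 3, 1, 9, 0),
    "u101": datetime(2012, 3, 1, 9, 5),
}

target_updates = {  # (user, target system) -> time the change arrived there
    ("u100", "erp"): datetime(2012, 3, 1, 9, 20),
    ("u100", "crm"): datetime(2012, 3, 2, 11, 0),
    ("u101", "erp"): datetime(2012, 3, 1, 9, 6),
}

SLA = timedelta(hours=1)  # hypothetical propagation target

for (user, target), arrived in sorted(target_updates.items()):
    delay = arrived - source_updates[user]
    status = "OK" if delay <= SLA else "LATE"
    print(f"{user} -> {target}: {delay} ({status})")
```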

Identity Information Logistics

It is convenient to change the language we are accustomed to and consider Identity management as a type of Information Logistics. There are several definitions of this area,v but specialists agree that the aims of Information Logistics are:

· To generate the correct information product

· … at the accurate point in time

· … in the correct format

· … in the correct quality

· … for the intended recipient

· … at the right location

These goals agree point by point with the aims of an Identity management programme, which should aim at generating the correct identity information (accurate, in the correct format, of acceptable quality) for the intended recipient (the user) and in the right location (in the managed application or system and in the correct geographical or business location).
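Applied to identity data, these aims translate almost directly into checks that could be automated. The following sketch evaluates one identity “information product” against the six criteria; the field names, formats and threshold values are assumptions made for the example.

```python
# Sketch: checking one identity data delivery against the Information
# Logistics aims listed above. Field names and thresholds are invented.
from datetime import datetime

delivery = {
    "content_matches_request": True,          # correct information product
    "delivered_at": datetime(2013, 5, 2, 8, 30),
    "deadline": datetime(2013, 5, 2, 9, 0),   # accurate point in time
    "format": "SAML-attribute-statement",     # correct format
    "expected_format": "SAML-attribute-statement",
    "quality_score": 0.97,                    # e.g. attribute completeness
    "recipient": "hr-portal",                 # intended recipient
    "expected_recipient": "hr-portal",
    "location": "eu-datacentre-1",            # right location (legal/geographic)
    "allowed_locations": {"eu-datacentre-1", "eu-datacentre-2"},
}

checks = {
    "correct product": delivery["content_matches_request"],
    "on time": delivery["delivered_at"] <= delivery["deadline"],
    "correct format": delivery["format"] == delivery["expected_format"],
    "sufficient quality": delivery["quality_score"] >= 0.95,
    "intended recipient": delivery["recipient"] == delivery["expected_recipient"],
    "right location": delivery["location"] in delivery["allowed_locations"],
}

for aim, met in checks.items():
    print(f"{aim}: {'met' if met else 'NOT met'}")
```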

According to Apelkrans and Abom,vi “the value of the information is dependent on the time and place, hence we introduced the value function V [where I= input and O= output].”

· V(I) = V(I(time, place))

· V(O) = V(O(time, place))

The authors add: “The desire is that ILP [Information Logistics Process] is a value-adding process which means that V(O) > V(I). During the ILP process time and even place can be changed, so information can be obsolete, distributed to the wrong place, etc. In global environments, the partners can move around, be substituted, etc. The problem of wrong place distribution is especially true for mobile workers. In the same way, the desire is that ILP shall be a quality increasing process. If we can find a way to measure quality we denote the quality function by Q: Q(O) > Q(I).”

For Identity management, this approach would mean the incoming information, for example Human Resources database records, is processed to become authentication records to facilitate user access to business resources. The output value then is higher than the input value as expressed by the Apelkrans-Abom formula. We see here a direct way of associating informational processes with a general notion of value related to the process itself, i.e. the exchange of information. In this case, we do not rely on ideas of either “intrinsic” or “relative” values, but on the quality and quantity of the exchange of information.
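A toy rendering of that idea may make it concrete. In the sketch below only the inequality V(O) > V(I) comes from the authors; the decay rate, the place penalty and the baseline value are invented, with a raw HR record as the input and a provisioned authentication record as the output.

```python
# Toy rendering of the Apelkrans-Abom value function V(X) = V(X(time, place)):
# value decays with delay and drops if the data sits in the wrong place.
# The decay rate and place penalty are invented; the check V(O) > V(I) is the
# authors' criterion for a value-adding Information Logistics Process.

def value(base_value, delay_hours, in_right_place):
    place_factor = 1.0 if in_right_place else 0.2
    time_factor = max(0.0, 1.0 - 0.05 * delay_hours)   # 5% lost per hour of delay
    return base_value * time_factor * place_factor

# Input: a raw HR record, sitting in the HR database (not yet usable for access).
v_input = value(base_value=10.0, delay_hours=0, in_right_place=False)

# Output: an authentication record provisioned to the target system 2 hours later.
v_output = value(base_value=10.0, delay_hours=2, in_right_place=True)

print(f"V(I) = {v_input:.2f}, V(O) = {v_output:.2f}")
print("Value-adding ILP:", v_output > v_input)
```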

Other researchers have proposed more complete formalisms. These should be considered as we progress in the adoption and perfection of Quantitative Identity Management. Of particular interest is the model introduced by Vaidotas Petrauskas in 2006.vii Petrauskas focuses on “a three layer system where information flows connect material flows with decision-makers” and where the goal is to find a “fit of performance and cost.” His approach considers three “information flow parameters”: path, time and cost, which are defined as follows:

· Cost (of current/alternative process)

· Time (speed of information transfer)

· Path (number of flow network elements)

These parameters correspond to a “network flow” with information feedback. Information “moves” from a set of sources to a set of consumption points and passes through several administration and aggregation points, in a way similar to the Identity data flow model represented earlier in this chapter (the Identity Data “Flow” example).

· Material points = M1 … Mm

· Registration points = R1 … Rr

· Data stores = S1 … Ss

· Processing points = A1…Ap

· Decision points = D1 … Dd

· Feedback = F1 … Ff

The “path, time, cost function” (P, T, C) = x (y, z) is obtained by estimating the complexity of the nodes in the network and how the “flow” is implemented:

a) Measure of complexity: y = <M, D, R, A, S, F>, which results from the combination of material points, decision points, registration points, processing points, data stores and feedback channels.

b) Structure parameters: z = (z1…zm), which depend on the number of paths across the different layers of “points.”

Following these definitions, Petrauskas suggests the goal of information management in this context is to find a function f (P,T,C) which tends to a minimum value. The goal is to determine the set of paths and the level of complexity that will ensure the maximum efficiency in terms of information movement from the sources to the consumption points. Just considering the formal approach presented by this author shows the parallels with Identity data management, although the “consumption points” in our case are the systems and applications managed by the Identity solution and not “decision-makers.”
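A very small sketch of this kind of minimisation is given below; the candidate configurations, the weights and the linear form chosen for f are all assumptions made for illustration, and Petrauskas’s own formalism is considerably richer.

```python
# Sketch: choosing among candidate identity-data flow configurations by
# minimising a combined path-time-cost function f(P, T, C). The candidates,
# the weights and the linear form of f are invented for illustration.

candidates = {
    # name: (path = number of network elements traversed,
    #        time = hours to propagate a change,
    #        cost = yearly cost in arbitrary units)
    "current, fragmented flow": (9, 48.0, 120.0),
    "consolidated directory": (5, 4.0, 90.0),
    "external identity service": (3, 1.0, 100.0),
}

WEIGHTS = (1.0, 2.0, 0.5)  # relative importance of path, time, cost (assumed)

def f(path, time, cost, weights=WEIGHTS):
    wp, wt, wc = weights
    return wp * path + wt * time + wc * cost

best = min(candidates, key=lambda name: f(*candidates[name]))
for name, params in candidates.items():
    print(f"{name}: f = {f(*params):.1f}")
print("Configuration minimising f(P, T, C):", best)
```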

To complement these approaches to value and efficiency, it is also important to study the general form of the problem. What is the optimisation model that is most appropriate for identity data exchanges? From experience, we know the Identity management “problem” is of an organisational nature (as I have described in several sections of this book). By organisational I mean structural and functional and not only “cultural” or “subjective.”

This emphasis on structural and functional factors is there to counterbalance the usual inclination to see Security and Identity management problems only as issues of technological “improvement.” Nevertheless, we must also understand that even in the most mature organisations, Identity management is never optimal, and its defects cause problems elsewhere in business operations and in other areas of Information Technology.

Network “flow” analysis and a rational approach to information logistics, as pioneered by researchers like Apelkrans, Abom and Petrauskas, will help to improve the situation. Nevertheless, while putting more emphasis on the goals of information exchange performance and efficiency, we should not be unduly optimistic, because there is also circumstantial and formal evidence that there is no known efficient solution for Identity data management problems. A brief look into the mathematical complexity of the problem will help us adopt a moderately optimistic approach and focus not on the mechanical optimisation of data processing, but on those improvements achieved through organisational change.

Before addressing the general form and complexity of the problem, I think it is useful to understand how Identity management issues affect the organisation and IT operations as a whole. Building on previous chapters in this book, where I showed the persistent problems in Security and Identity management, let us now consider the typical situation in an IT Department charged with a number of projects. In that context, failure and financial loss are caused not only by errors or problems within each project taken separately, but by the interaction between projects. As described by Ashok Mohanty,viii “In a multi-project execution department, projects arrive at intervals defined by business initiatives, and not by plans issued by the IT department itself. The schedule is then prepared considering type of work, duration, delivery dates, value and relevance for the business.” We know that project execution deviates from the intended schedule, and, as pointed out by Mohanty, project expediting is a “control action” for bringing projects back to schedule. In this context, for each project, the usual parameters are the expected start time and maximum allowed duration. It is usual to add a “margin” to these parameters, allowing for delays and problem management. To assess project performance, a common quantitative model would employ the following variables:

· Actual start time

· Expected duration

· Standard deviation in expected duration

· Maximum allowable finish time

· Fraction of project completed at time of review

· Estimation of completion time

· Value attached to the project finished at the estimated time

· Effectiveness of the index of expediting

· Percentage of time spent in delivery work of solutions (chargeable hours)

· Percentage of in-flight projects within target cost variance level

· Percentage of completed projects within target cost variance level

· Percentage of project backlog in man days

· Delivery cost, schedule, quality and scope

Mohanty formulates the standard calculation that follows from this approach: “If a project starts at expected start time t and takes expected duration d, it is completed at point A. However, due to delays, the project may start late, at time t+x. The rate of progress may also be slower. At time t the project has progressed to point B. If the project progresses at this rate, it may be completed beyond the termination date.” Although trivial, this description is nevertheless precise, and it is exactly the reasoning we use in everyday project management. Every IT project manager will recognise that the main category of impact in his or her work is what can be classified as “project delay.”
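The same reasoning can be written down in a few lines. The sketch below (all dates, durations and the completion fraction are invented) projects a finish date from the fraction completed at review time, in the spirit of the calculation quoted above, and flags the need for an expediting action.

```python
# Sketch of the standard project-expediting calculation described above:
# project the finish date from progress at review time and flag the need
# for a control action. All figures are invented.
from datetime import date, timedelta

expected_start = date(2013, 1, 7)
expected_duration = timedelta(days=60)
max_allowed_finish = date(2013, 3, 22)      # includes the usual margin

actual_start = date(2013, 1, 21)            # the project started late (t + x)
review_date = date(2013, 2, 18)
fraction_completed = 0.30                   # progress observed at review

elapsed = review_date - actual_start
# Assume the observed rate of progress continues (the simplest projection).
projected_duration = timedelta(days=elapsed.days / fraction_completed)
projected_finish = actual_start + projected_duration

print("Projected finish:", projected_finish)
if projected_finish > max_allowed_finish:
    print("Expediting (control action) required: projected finish is",
          (projected_finish - max_allowed_finish).days, "days beyond the limit")
```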

When correlating Identity management with Security and other IT project delivery issues, we rarely understand how these affect each other. In particular, there are no studies as to how the limitations and delays in Identity management slow down and increase the delivery costs in other areas of IT Programme delivery. This is an area still awaiting detailed study, but it will be necessary to overcome the persistent tendency of the IT departments to implement technologies lacking in Identity management capabilities.

This resistance to an integrated view is obviously negative, but it is also evidence that technology specialists tacitly understand their methods are unable to cope with organisational complexities, so they stay away from Identity issues and deliver even more fragmented solutions instead. Business teams and Security leads need to understand, though, that if we start seeing Identity management as a Performance problem, then the impact on other areas will become clear. My experience shows the main impact of lacking or non-existent Identity solutions is on the overall duration and cost of Business and IT transformation programmes. The following discussion illustrates what I mean by this.

The impact of Identity management issues can be seen in all types of public and private entities, but it is especially visible and damaging in global organisations. The main cause-and-effect relationship is that the increasing complexity of user types and applications across the organisation’s divisions slows down systems integration and leads to fragmented and costly workarounds. This in turn affects the delivery of shared systems and applications. The more the organisation progresses along the IT transformation road map, the more challenging it becomes to manage Identities.

[Figure: Increasing Identity requirements in global IT programmes]

Increasingly, the lack of Identity management capabilities slows down global transformation programme delivery. Infrastructure integration progresses initially at a more rapid pace, given the relatively small number of technologies and targets involved, allowing it to achieve its integration levels faster. The gap in terms of integration and scope increases with time so that Identity becomes a blocker and slows down IT transformation.

[Figure: Identity management slows down other IT and business areas]

Identity costs expand while catching up with other programmes over several years, and they continue to grow as the organisation progresses from Infrastructure to Non-Core applications (and eventually to local applications).

[Figure: Identity management costs expand in a multi-year period]

Though hardly researched at all, given the strange status of Identity management within Security and IT in general, it takes only a little attention to these problems to see that there is a direct relation between the efficiency or inefficiency of Identity data “flows” and their impact on other areas of the organisation.

In conventional Security and IT practice, the intuition of performance and effectiveness is embedded in several aspects of our advice and solution design. We also approach Identity management and access control with a series of pre-conceptions of what is considered “better” or more “valuable” for the organisation. Sadly, this is not enough, as one key aspect of our advice should be how to increase the performance of Identity data exchanges. While immersed in the risk-based approach and the mechanistic view of Information, we are unable to articulate this or even to think about the problem. Although we sometimes distance ourselves enough from the “Protection” concern and start speaking about the “efficiencies” to be found in Identity management, we still remain unable to think about Performance.

Therefore, instead of working on minimising the value of the Path-Time-Cost function (as needed in the Petrauskas model), Security and Identity practitioners and experts advocate a mixture of technologies and forms of automation. Under the spell of the mechanistic paradigm, we assume that the key benefits are not related to the structure and operation of the organisation, but to the lack of some “tools” or “technologies” that will “help the IT department” cope with its obligations.

This happens, even though we know well that Identity management solutions create mainly indirect and non-financial benefits. These benefits are quantifiable but can be measured only in the organisation as a whole, i.e. in the correlation between Identity management and Business and IT programme delivery. Therefore, the emphasis on “Protection” and automation paradoxically distances the Security and IT practitioners from a proper quantitative approach.

Quite differently, in this book I show the main benefits come not from automation, but from the enablement of Business and organisational transformation. In other words, as indicated at the beginning of this chapter, the benefits come from the “gating” effect of identity data interactions and not from the automation of these flows themselves. This is the fundamental claim that is at the centre of the Quantitative Identity Management view.

Years of frustrating experience have led most Security experts to believe that Identity management benefits are indirect and difficult to measure. My suggestion is that these difficulties (while real) are caused by a lack of understanding that Identity data processes are bi-directional and consist of exchanges, and can therefore be measured only by their impact on the processes they enable or disable. This is what I called the “gating” function of identity data exchanges in an earlier section of this chapter. Corresponding to this, a view of Identity management as a performance problem will finally allow us to have a quantitative approach where before we had only intuitions.

The following diagram shows the variety of impacts and areas where Identity management solutions have to be measured:

[Figure: Identity management impact areas (© C. Trigoso 2010)]

As we discussed in previous chapters, Identity systems are notoriously difficult to implement, upgrade, and validate as investments. So, several questions arise: What should be measured to assess the “current system” in a particular organisation? If roughly ¾ of the benefits are non-financial, how should we estimate investment returns? If non-financial benefits cannot be measured, should an Identity management investment decision rely only on expected financial returns? In addition, if we limit ourselves to the estimation of financial benefits, what are these exactly? Moreover, how do we assess the distinction between direct and indirect benefits in any case?

The Quantitative Identity Management approach I propose in this book begins by representing and analysing the Identity systems (both the current and the proposed solutions) as networks of Identity data exchanges. The networks are composed of Identity sources, processors and consuming entities, as well as entities acting as authorising, validating and managing parties. While it is not new to formulate Identity systems as workflows and business processes, this approach goes further:

· It allows for decentralised workflow models, including user-driven or user-initiated Identity data management

· It integrates security criteria about assurance (security information quality) and not chiefly about subjective “risk” assessments

· It introduces the notion that business services and projects are consumers of Identity data

· It moves from a defensive position, where Identity management is done to remediate and comply, to a trust management stance where the emphasis is on performance and data quality in support of business projects

· It puts IAM finally in the context of Service Oriented Architecture and Security as a service, while it does not assume a closed organisational context

· It separates the overall quantitative performance analysis from considerations about locating the participant entities, which can be either in or out of the organisational boundary

· It subordinates non-quantitative targets to hard targets of data availability and quality, including demonstrable account provisioning and de-provisioning (termination)

· It subordinates weak measures of user productivity and enablement to data availability and integrity measures (are users provisioned in time?)
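As a small illustration of what such hard targets can look like in practice (the account records and the SLA value below are invented), the following sketch computes two of the measures just listed: timely provisioning and demonstrable de-provisioning of leavers.

```python
# Sketch: two hard, quantitative Identity targets from the list above:
# (a) share of accounts provisioned within the agreed time,
# (b) accounts still active after the user's termination date.
# All records and SLA values are invented.
from datetime import date, timedelta

PROVISIONING_SLA = timedelta(days=2)

accounts = [
    # user, join date, provisioned date, termination date (or None), still active?
    ("u1", date(2013, 1, 7), date(2013, 1, 8), None, True),
    ("u2", date(2013, 1, 7), date(2013, 1, 15), None, True),
    ("u3", date(2012, 6, 1), date(2012, 6, 2), date(2013, 2, 1), True),   # leaver, not removed
    ("u4", date(2012, 6, 1), date(2012, 6, 2), date(2013, 2, 1), False),
]

on_time = sum(1 for _, joined, provisioned, _, _ in accounts
              if provisioned - joined <= PROVISIONING_SLA)
print(f"Provisioned within SLA: {on_time}/{len(accounts)}")

orphans = [user for user, _, _, terminated, active in accounts
           if terminated is not None and active]
print("Accounts not de-provisioned after termination:", orphans)
```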

In addition to that, a quantitative approach, when “Identity becomes data,” firmly centred on quality and availability, can address simultaneously the goals that I highlighted for any Security management programme:

· Adapting the organisation to a reality where there are more and diverse users outside than inside of the “boundary”

· Adopting a multifaceted view, where the organisation is a partner in the Cloud and not the only or main Identity provider

· Enabling business management to assert quality and cost control through efficiency comparison of services

· Enabling the organisation to estimate the impact of Identity data exchanges on IT programmes and transformation

· Overcoming the one-sided risk-based approach

· Preparing the organisation for the period “after” the Cloud, when IT services are consumed on a utility basis

· Revealing the nature of cost variations between services provided by suppliers

· Supporting the transformation of Security and Identity management into normal business operations and aligning investment patterns with other areas of financial management

· Supporting the transition of organisations to Cloud infrastructure and services

Along those lines, still following our intention to reveal the “form” of the problem and to avoid an over-optimistic approach, we can now turn to a deeper assessment of these guidelines.

We need to consider first what type of optimisation problem we are facing. If we take the previous insights into the layers of the Identity architecture, the concept of the value function, the formal Path-Time-Cost model, and the interdependence of IT projects, we can continue this investigation productively.

A key insight is provided by the work on “system design” by Bahill, Chapman and Rozenblit.ix I will follow here in particular these authors’ idea that engineering systems design is a so-called NP-complete problem.x By “systems design,” Bahill and his co-authors mean the process of “translating the customer’s needs into a buildable system design,” a task which “requires selecting subsystems from an allowable set and matching the interfaces between them.”

Bahill summarises the design task with the term “Systems Coupling Recipe,” which is the graph or network formed by all the subcomponents of the solution. A systems design problem, as the author explains, can have many solutions, but each of these will be some form of “connectivity” (i.e. a network) between the components of the system. A diagram illustrates this as applied to an Identity management environment:

[Figure: Potential connectivity for Identity subsystems (based on Bahill, 2009)]

This shows essentially an Identity management solution where, after assessing the needs of the consuming services, we propose a series of chained systems and processes to cover those demands. When there are no services capable of supplying authentication and authorisation, we propose new systems or the re-engineering of existing capabilities to obtain those results.

As Bahill explains, “NP-complete” is the name of a class of problems for which there is no known efficient deterministic mathematical (algorithmic) solution. All known algorithms for solving these problems have the property that, as the problem size increases, the number of steps necessary to solve the problem increases exponentially. Problems that have efficient solution algorithms can be solved in a number of steps that grows at a much slower rate; that is, their solution can be computed in a manageable number of steps.

Bahill gives as an example the sorting of a list of numbers, an operation that can be finished in n² operations. If we have 10 numbers to sort, it would take at most 100 operations to perform the sort. A hard problem is different in that its size appears in the exponent. Bahill gives as an example an exponential problem where “the number of operations quickly exceeds the capability of any machine to compute a solution. For example, if there were a machine that could do 10¹² operations per second (none yet exist) and there was a problem that required 10²⁰ operations, it would take 10²⁰⁻¹² or 10⁸ seconds or more than 3 years to solve.”

In systems engineering, following Bahill’s approach, the design problem can be described by stating the input/output relationships, the design constraints and the performance and cost figures that are relevant. For a given set of subsystems available to build the solution, a possible network of components can be configured that satisfies the given constraints. Bahill suggests the key is to aim at target values, considering that there are no perfect solutions, but also that no definitive formula can be found.

Bahill and his associates compare the system design problem to the famous “Knapsack Problem.”xi Bahill writes: “the engineer can find a combined system such that the constraints of performance and cost are simultaneously satisfied. But this would be equal to satisfying the Knapsack Problem, which is NP complete by definition.” And later he remarks there are algorithms to obtain good solutions “within a few per cent of a theoretical optimal,” but a proper engineering approach is to replace the goal of maximisation with the goal of “satisfaction of constraints,” and “find a course of action that is good enough.” Bahill goes on to say the engineer must ensure that the customer does not require optimal solutions because these are unreachable.
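The knapsack analogy can be made concrete with a few lines of code. The sketch below is only an illustration under invented figures (the subsystem names, their cost and performance scores, and the cost cap are all assumptions, not taken from Bahill): an exhaustive search over subsystem combinations doubles with every subsystem added, while a greedy pass settles for a selection that is merely “good enough” in the sense discussed above.

```python
# Sketch of the knapsack-style trade-off in systems design: pick subsystems
# maximising performance under a cost cap. The exhaustive search grows as 2^n;
# the greedy pass settles for a "good enough" selection instead.
# Subsystems, scores and the cap are invented.
from itertools import combinations

subsystems = {  # name: (cost, performance score)
    "directory": (30, 8), "provisioning": (40, 9), "federation": (25, 7),
    "self-service": (15, 5), "reporting": (20, 4),
}
COST_CAP = 80

def totals(names):
    cost = sum(subsystems[n][0] for n in names)
    perf = sum(subsystems[n][1] for n in names)
    return cost, perf

# Exhaustive (exponential) search: every subset of subsystems.
best_exact, best_perf = (), -1
n_evaluated = 0
for r in range(len(subsystems) + 1):
    for combo in combinations(subsystems, r):
        n_evaluated += 1
        cost, perf = totals(combo)
        if cost <= COST_CAP and perf > best_perf:
            best_exact, best_perf = combo, perf
print(f"Exact search evaluated {n_evaluated} combinations (2^n).")
print("Best within cap:", best_exact, "performance", best_perf)

# Greedy "good enough" pass: take subsystems by performance-per-cost ratio.
greedy = []
for name in sorted(subsystems, key=lambda n: subsystems[n][1] / subsystems[n][0],
                   reverse=True):
    if totals(greedy + [name])[0] <= COST_CAP:
        greedy.append(name)
print("Greedy selection:", greedy, "performance", totals(greedy)[1])
```

In this toy run the greedy selection comes close to, but does not reach, the exact optimum, which is precisely the trade-off Bahill describes between maximisation and the satisfaction of constraints.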

This introduction to the engineering approach should be enough to show the parallels with the Identity management design task. In our area we also have to find the “shortest path” and the “lowest cost” for a solution. Each of these challenges can be solved in reasonable time, but the complexity comes from the fact that we have to find simultaneously the shortest path, the least costly implementation, and the most effective allocation of users to resources and vice versa. It is important to note that the real-world Identity problem is much harder than the systems engineering task, because we have to deal with a changing, badly defined target, and many times do not have the advantage of a specified product or result. If we look again at the four layers of the Identity management architecture and the data exchanges discussed in the first sections of this chapter, we will be able to describe the “form of the problem” more closely.

Identity Data Management is “NP-complete”

It will be visible that, as the problem size increases, the number of operations needed to compute an optimal solution expands rapidly, making the solution “hard” or “unknown.” Eventually, the solution has to be deemed “unreachable” in a reasonable time, and we speak of problems where the solution is only “approximate.” That is exactly the case of Identity data management, especially in large organisations (more than 50 thousand users) and global corporations (with many locations, data regimes, and different business processes). In these cases, the combination of the different agents and entities in the identity data layers creates an unsolvable problem. We need to consider the whole picture to assess how difficult the task is:

· The sets of all possible items (the identity data entries)

· The size of individual items (the identity data structure)

· The value of each individual item (value associated with each data record)

· The data dimensions (data stores, data entries, etc.)

· The value goal (including quality and availability goals for data distribution)

· The allowed technology (identity data management technologies)

· The system interfaces to other systems (consuming services and projects)

· The cost of individual subsystems (cost per user and per transaction)

· The performance of each individual subsystem (for example directories or gateways)

· The allowable costs of a system (budgeted or contracted value)

· The required overall system performance (overall service levels expected by the client)

Once we collect these parameters, a strict analysis process would lead us to a “map” between the elements of the problem (users, managers, processors, controls, and targets). In the terms of the “Knapsack Problem,” this would amount to mapping the “agents” to “positions” and then to managed systems or “resources.” This represents then a “graph” or “map” of the different layers of elements that intervene in the problem. For this kind of problem, we would have a “multipartite graph,” drawn from a series of entities:

· A set of elements A = (a1, …, an)

· A set of processing points P = (p1, …, pm)

· A set of targets or resources R = (r1, …, rx)

The “effectiveness” of a combination of the entities designated as A, P, R, etc. will then depend strictly on our ability to satisfy simultaneously the goals of time, cost and performance. Engineering practice nevertheless cautions, as Bahill explains, that “The first implication of the System Design Problem being NP-complete is that humans cannot design optimal systems for complex problems. And computers will not be able to bail us out, because computers cannot design optimal solutions for complex problems either.”
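A purely illustrative sketch of this multipartite structure follows; the sets, the edge costs and the budget are invented, and the brute-force enumeration is only there to show the combinatorial explosion in miniature.

from itertools import product

# Hypothetical elements (A), processing points (P) and resources (R).
A = ["a1", "a2"]
P = ["p1", "p2"]
R = ["r1", "r2", "r3"]

# Invented edge costs for the two layers of the multipartite graph.
cost = {("a1", "p1"): 2, ("a1", "p2"): 3, ("a2", "p1"): 4, ("a2", "p2"): 1,
        ("p1", "r1"): 5, ("p1", "r2"): 2, ("p1", "r3"): 6,
        ("p2", "r1"): 3, ("p2", "r2"): 4, ("p2", "r3"): 2}

MAX_COST = 10  # hypothetical budget constraint

def total_cost(assignment):
    """assignment maps each element of A to a (processing point, resource) route."""
    return sum(cost[(a, p)] + cost[(p, r)] for a, (p, r) in assignment.items())

# Brute force: already (|P| * |R|) ** |A| = 36 candidate mappings for this toy case.
routes = list(product(P, R))
feasible = [dict(zip(A, combo)) for combo in product(routes, repeat=len(A))
            if total_cost(dict(zip(A, combo))) <= MAX_COST]

print(len(feasible), "feasible mappings out of", len(routes) ** len(A))

Even before time and performance constraints are added, the number of candidate mappings grows exponentially with the number of elements, which is the substance of Bahill’s warning.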

Identity management is such a problem. In a real-world Identity management context there are many more classes of entities in interaction than in the elementary example given in the previous paragraph, or in the NP problems documented in industry and commerce. We have to deal at a minimum with four classes of entities and 12 classes of internal and external access routes. In addition to that, the “resources” and “targets” are not simple but very frequently composed of several subsystems with different and sometimes contradictory requirements. It is evident that the solution constraints are more complex. Besides that, the allocation problem is not settled by a single distribution of users to resources. Indeed, the user-to-resources mapping will change continuously over time as the organisation expands, merges and adapts to changes in the markets.

In any case, it is fitting to see the Identity management problem as the result of many interacting information exchanges, a “network of exchanges,” which can be expressed as a “systems design problem,” i.e. as an NP-complete optimisation task. In considering this, we should take a “hard” look at the conventional approach to Identity management based on workflow and provisioning tools, as well as “role mining” software. The truth is that these tools solve only parts of the Identity management problem, and only for a limited time. So, when we progress towards a quantitative approach, it is essential to aim at achievable solutions which do not privilege automation but satisfy client requirements and organisational goals. A mechanistic “automation” solution, for example, will always fail to manage complex external and internal scenarios, whereas an organisational performance approach will succeed by abandoning the illusory goal of “managing” all the users that access business systems.

Adopting the view that Identity Management is a performance problem does not mean, then, that we should adopt the mechanistic approach just because we learn from systems engineering. The best exponents of engineering (and Bahill is one of themxii) actually endorse a view that takes us away from computer-centric solutions. My point is not only that automation solutions will eventually prove unsatisfactory in the new Identity landscape, but also that we need to redefine the Identity management problem altogether. Instead of aiming at the “solution” of the combinatorial explosion between users, sources, processors and resources, it will become clear that it is essential to aim at the reduction of the combinatorics, i.e. the reduction of layers and subsystems to the essential ones and the simplification of the data types managed. On the other hand, a reduced set of layers and data types does not mean a reduction in performance or in the ability to deliver; it means ensuring that good-quality, standardised information is made available across all targets, while systems and applications serve the available information to users following their own consumption criteria. In other words, Identity data should become a service, a utility within the organisation, and should not be governed as one more “silo” among the already problematic IT solutions and tools.

Performance Measures

Following the focus on IT and Business alignment after the IT boom had passed in the early 1990s, professionals started to use measures of value that were closer to normal business criteria. These were introduced both to justify technology investments and to evaluate the results of technology adoption. The most common measures were those of effectiveness, efficiency and productivity, defined as follows (a small computational sketch follows the list):

· effectiveness = (actual output / expected output) × 100%

· efficiency = (resources actually used / resources planned to be used) × 100%

· productivity = outputs / inputs

· expected productivity = expected output / resources expected to be consumed

· actual productivity = actual output / resources actually consumed
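As a trivial illustration of these ratios (all figures invented), they can be computed directly:

# Minimal sketch of the measures defined above; the numbers are hypothetical.

def effectiveness(actual_output, expected_output):
    return 100.0 * actual_output / expected_output

def efficiency(resources_used, resources_planned):
    return 100.0 * resources_used / resources_planned

def productivity(outputs, inputs):
    return outputs / inputs

# Example: 450 accounts provisioned against 500 expected,
# using 120 staff-days against 100 planned.
print(effectiveness(450, 500))  # 90.0 (per cent)
print(efficiency(120, 100))     # 120.0 (per cent), i.e. an overrun
print(productivity(450, 120))   # 3.75 accounts per staff-day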

In this context, IT and Security professionals started speaking about “value maximisation” of investment, and looking for measures that would reflect the contribution of IT to business value. For example, we were introduced to the ideas of:

· Speed-to-market – meaning fewer delays, removal of bottlenecks, and increase of productive days

· Reduced cost-to-serve – including fewer overruns, proactive use of lower cost delivery options, and increase of utilisation rates

· Reduced overheads – including reduced project management and reporting costs, and reduction of wasted management time

Identity management introduced its own measurements, of which we can cite the most important ones contributing to the complexity of the task (a brief sketch computing two of them follows the list):

· Number of accounts

· Number of groups

· Number of roles

· Workflow branches

· Account usage statistics

· Password reset statistics

· Number of authentication tokens

· Number of authentication requests

· Number of authorisation requests

· Number of privilege escalation requests

· Number of access requests per approval

· Number of steps for account lifecycle

· Identity repository size (including number of groups and roles)

· Ratio of requesters to approvers for a given target

· Number of accounts per user

· Authentication claims (attributes)

· Personally Identifiable Information claims (attributes)

· Potential and actual locations the workflow can provision to

· Provisioning paths (routes traversed by identity claims)
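To ground two of these measures, here is a minimal computation in Python; the account records and head-counts are invented.

from collections import Counter

# Hypothetical account records: (user, account_id, target_system).
accounts = [
    ("alice", "acc-01", "erp"), ("alice", "acc-02", "crm"),
    ("bob",   "acc-03", "erp"),
    ("carol", "acc-04", "crm"), ("carol", "acc-05", "erp"), ("carol", "acc-06", "mail"),
]

# Number of accounts per user (one of the measures listed above).
accounts_per_user = Counter(user for user, _, _ in accounts)
print(dict(accounts_per_user))  # {'alice': 2, 'bob': 1, 'carol': 3}

# Ratio of requesters to approvers for a given target (invented head-counts).
requesters = {"erp": 120, "crm": 80}
approvers = {"erp": 4, "crm": 2}
print({target: requesters[target] / approvers[target] for target in requesters})
# {'erp': 30.0, 'crm': 40.0}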

In these respects, Identity management solutions aimed at reducing complexity and complementing IT Service Delivery and Support. The common assumption was that Identity management technologies would increase productivity and efficiency, enabling user access to applications “at the right time.” While it is true that an identity management technology could handle an almost unlimited number of users, far greater than the “largest organisation” requires, a crucial point was missed in these calculations: the combinatory complexity described in the previous section. As a result, organisations adopting Identity solutions gained only a fraction of the expected results and could not reduce costs rapidly, nor integrate new business initiatives as desired. It has not made big headlines, but major organisations around the world responded to these failures by abandoning Identity management technical solutions and focusing instead on urgent requirements around regulatory compliance.

Then, around ten years after the introduction of complex Identity automation solutions by major technology vendors, the market began to abandon these technologies. Just when the consolidation of the Identity management market seemed complete, with four or five large vendors remaining, several of these suddenly disappeared from the scene and new challengers emerged with more focused offerings. Complexity defeated the drive to automation, while organisations simultaneously transformed into networks of organisations, creating a more diverse landscape of identities that the technology had not anticipated.

Now, all the primary goals of our speciality, like increasing business agility, reducing project efforts, accelerating system implementation and eliminating security issues, can be and still are attainable, on the condition that we abandon the idea that an increase in performance requires an increase in automation. We still need performance indicators like the “average time to perform” or the “elapsed time between service violations,” but these have to be achieved with an effort composed fundamentally of organisational transformation and not automation. It is often said that organisations have large expenditures in manual user management processes, including high head-counts for user administration and long delays for user on-boarding and off-boarding. Lengthy provisioning processes are mentioned as a case in favour of management automation. From the vantage point of our understanding of the complexity involved, however, we can see that these manual, slow, custom solutions are the way organisations cope in the real world with a difficult problem that resists automation.

In other words, the reason organisations have fragmented processes for managing physical and logical access, and perform user management in an ad-hoc manner, is that the problem itself is ill-defined and the information exchanges are too complex to reduce to an “industrial” proposition. There must surely be an exit route from this difficult situation, but it requires first a revolution in thinking.

Forms of Identity Management Optimisation

Once we adopt a quantitative approach to Identity management, we can see several forms this may take, depending on the maturity level of the organisation. Here is a summary of these forms:

a) Standard: In this case, Identity management data exchanges can be seen as a multi-source, multi-sink network flow, where the goal is the maximisation of data “flow” between the sources and the sinks. This is equivalent to finding the shortest paths for data to the end consumers (a minimal flow-network sketch follows this list). This form of optimisation corresponds to a general idea of user access enablement, and input into project delivery, with little regard for the quality of the data or the complexity and changes of the “flows.”

b) Extended: This is another basic form of optimisation, which can be understood in terms of the internal completeness of the Identity management processes. Calls for maximum integration of data, standardisation and reduction of processing teams represent an effort to reduce the complexity of the network in addition to finding the shortest paths.

c) Valid or Compliant: This type of optimisation implies a complete set of internal and external processes and the matching solutions. We can say that here the focus is on the inclusiveness of the controls, aiming at accounting for all the access routes to shared and non-shared resources. In this case, I believe, we can also speak of a “maximum communication” model where all data exchanges are in scope.

d) Secure-complete: This fourth form of optimisation requires maximum layering, both in the sense of division of labour and of the types of users and services. This can be realised with the four architectural layers that we discussed in this book. The key difference is that there is a deliberate process of differentiation and communication extending across the network, including extensive organisational transformation.
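For form (a), a minimal flow-network sketch shows how the multi-source, multi-sink reading can be made concrete. It uses the networkx library (an assumption about available tooling, not something the text prescribes), and the node names and capacities are invented; a super-source and super-sink reduce the case to a standard maximum-flow computation.

import networkx as nx

# Hypothetical sources (authoritative identity feeds) and sinks (consuming systems).
G = nx.DiGraph()
for source in ["hr_feed", "contractor_feed"]:
    G.add_edge("SUPER_SOURCE", source, capacity=100)   # invented feed capacities
for sink in ["erp", "crm", "mail"]:
    G.add_edge(sink, "SUPER_SINK", capacity=100)       # invented consumption capacities

# Invented internal routes through a provisioning hub.
G.add_edge("hr_feed", "hub", capacity=80)
G.add_edge("contractor_feed", "hub", capacity=40)
G.add_edge("hub", "erp", capacity=50)
G.add_edge("hub", "crm", capacity=40)
G.add_edge("hub", "mail", capacity=30)

flow_value, flow_dict = nx.maximum_flow(G, "SUPER_SOURCE", "SUPER_SINK")
print(flow_value)  # 120 with these invented capacities: the maximum data "flow"

Forms (b) to (d) would then work on the same network, not by pushing more flow through it but by simplifying its structure and extending the controls and the layering across it.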

In all cases, even within a conventional context, these four optimisation approaches highlight the essence of Quantitative Identity Management:

· The inter-dependence of the identity data exchanges

· Bi-directionality of identity data “flows”

· Homogenisation of internal and external identity data

· Dependency of project and programme delivery on user enablement

At the same time, treating the problem as one of data management optimisation allows us to see interesting links with traditional engineering, data integration and transport scenarios:

· There are similar communication costs for integration solutions

· There are economies of scale for shared links between different targets or systems

· Loading and unloading data is analogous at the sources and at the points of consumption

In general, this leads to a “hub and spoke” model, similar to the one used in transportation economics because of its great advantages for business rationalisation. Eventually the hub-and-spoke model materialises through the force of economic and social drivers, independently of the preferences of IT departments. In fact, what we see in the nascent Cloud-based Identity management services is nothing but the development of this model.
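The economics behind the hub-and-spoke outcome can be seen with one line of arithmetic; the number of endpoints here is hypothetical.

# With n systems, full point-to-point integration needs n(n-1)/2 links,
# while a hub-and-spoke arrangement needs only n.
n = 40  # hypothetical count of identity sources and consuming systems
print(n * (n - 1) // 2)  # 780 point-to-point links
print(n)                 # 40 hub-and-spoke links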

So, we have formulated the problem in terms of information exchanges and bi-directional data movements. We have started to think in terms of how much information is produced by the user and how much information is sent back to him or her. And we understand now that information flows are correlated and co-dependent across the organisation, that they form a network, and that we can have measures of performance not based on automation.

If we look closely at the normal organisational requirements, Security and Identity processes depend on the association between users and resources. A key observation is that one type of information, Identity data moving across the organisation, acts as the “gating” or “controlling” function in the access of users to resources. This conditions the productivity of the users and the general results of the business. These information exchanges constitute a series of mappings, which can usually be organised as a data warehouse model. The following image shows the overall structure of such a database, which I suggest must be the core of a Quantitative Identity Management solution.

Identity data mappings (© C. Trigoso 2006)

While these mappings can be diverse, all of them have in common that they are rooted in the natural person or individual that is associated with the organisation as either an internal or an external user. The implications of this will become clear in the next chapters.
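As an illustration only (the entity and attribute names are mine, not those of the diagram above), mappings rooted in the natural person could be sketched as follows:

from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of identity mappings rooted in the natural person.
# Each mapping ties the person to organisational constructs (accounts, roles,
# resources), which is what makes a warehouse-style model possible.

@dataclass
class Account:
    account_id: str
    target_system: str

@dataclass
class RoleAssignment:
    role: str
    resource: str

@dataclass
class Person:                        # the natural person at the root of all mappings
    person_id: str
    relationship: str                # "internal" or "external" user
    accounts: List[Account] = field(default_factory=list)
    roles: List[RoleAssignment] = field(default_factory=list)

alice = Person("p-001", "internal",
               accounts=[Account("acc-01", "erp"), Account("acc-02", "crm")],
               roles=[RoleAssignment("buyer", "purchasing-module")])
print(len(alice.accounts))  # 2 accounts mapped to one natural person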

Far from the traditional focus on “centralising” all the information sources, the goal becomes one of simplification without a mechanical reduction of complexity. Far from the traditional focus on “automation,” the goal is one of organisational transformation, and the search for excellence appears as the supreme goal of management. Identity management grows into a Quantitative discipline with new functions and a new division of labour.


i Robert L. Phillips, “The Management Information Value Chain,” Perspectives, Issue 3, http://www.talus.net

ii Phillips elaborates: “The usefulness of the information provided by the Management Information Value Chain is determined by its effect on decisions. The value added by any management information activity can be derived by calculating the extent to which it contributes to this goal. We can then determine if the value of an activity exceeds the cost of supporting it — if not, the activity should be eliminated. We can also identify the “bottle necks” within the value chain — areas where more information or more processing would substantially improve profitability. Perhaps best of all, the Management Information Value Chain approach enables explicit cost-benefit analysis of information technology investments — placing these investments on the same footing as other corporate investments and providing a mechanism for establishing a profitable development plan for management information systems.”

iii Compliance emphasis is unsustainable when it becomes a reactive, audit-driven practice instead of being a normal, standardised business practice.

iv Nicholas Carr, “IT Doesn’t Matter,” Harvard Business Review, May 2003

v For example, Apelkrans and Abom write: “By IL we mean exactly …send correct information to the right people at the right times,” Netguide, 2002. See also Turban: “IL is the information supply needed to perform excellent logistics,” 2002.

vi See Apelkrans and Abom, 2001

vii Vaidotas Petrauskas, “The Use of Information Flow Analysis For Building An Effective Organization,” Information Technology and Control, 2006, Vol. 35, No. 4

viii Ashok Mohanty, “Mathematical model for expediting the execution of projects,” 2011

ix William L Chapman, Jerzy Rozenblit, Terry Bahill, “System design is an NP-complete problem,” 2001

x “A problem which is both NP (verifiable in nondeterministic polynomial time) and NP-hard (any NP-problem can be translated into this problem). Examples of NP-hard problems include the Hamiltonian cycle and traveling salesman problems. In a landmark paper, Karp (1972) showed that 21 intractable combinatorial computational problems are all NP-complete.” – Quoted from Wolfram Mathworld, http://mathworld.wolfram.com/NP-CompleteProblem.html

xi “Given a sum and a set of weights, find the weights which were used to generate the sum. The values of the weights are then encrypted in the sum. This system relies on the existence of a class of knapsack problems which can be solved trivially (those in which the weights are separated such that they can be “peeled off” one at a time using a greedy-like algorithm), and transformations which convert the trivial problem to a difficult one and vice versa.” – Quoted from Wolfram Mathworld http://mathworld.wolfram.com/KnapsackProblem.html

xii See Terry Bahill’s web page here: http://sie.arizona.edu/sysengr/index.html