6. The Cloud Transforms The Network

This is a draft of the book with the same title now available through Amazon:

 http://www.amazon.co.uk/Fundamental-Conceptions-Information-Identity-Logistics/dp/1484990021/ref=sr_1_1?s=books&ie=UTF8&qid=1369520084&sr=1-1

 

Identity in the Cloud

The rise of Cloud Computing in the recent past is history repeating itself. This is often the case in technology markets, where old concepts reappear wrapped in funny names and marketing campaigns. In the case of Cloud Computing, even the shallowest research shows the continuity of the model since the “era” of the mainframes, through “computing on demand” and the almost-forgotten “the network is the computer” period.

It is evident that the combination of virtualisation, hosting, web services, new security protocols, outsourcing, performance enhancements and other infrastructural capabilities has created a new panorama in our business, but how new is the Cloud, really?

It is impossible to ignore that various business sectors are moving rapidly to streamline their IT services by outsourcing all capabilities that can be acquired as utilities in the market. From data storage to off-the-shelf accountancy applications, firewalls and application monitoring, it has become simple, practical and advantageous to use remotely hosted, managed services[i]. The expansion of Cloud computing is equally fast in the public sector[ii].

But the novelty of these changes should not obscure the fact that Cloud-like capabilities have existed since early in the Computing Era, as remote, scalable, virtual, shared, resilient server resources were always at the core of major corporate and governmental IT infrastructures. While in the past these capabilities were almost exclusively mainframe-based and mostly used by large organisations, now they are widely accessible and not restricted to a single type of technology. Their evolution has consisted not in the disappearance of the original platforms, but in the combination of many more platforms and layers of systems alongside or “on top” of the earlier ones. More critically, individual use of remote capabilities has also become a fact of life.

This can be formulated as follows: The history of IT technologies in all areas of business, government and daily life advances by multiplying the levels of indirection, the “tiers” of the environment. This is the case in software engineering as well as in hardware systems. A good outline of this progression is captured in the maturity layers described by the Service Oriented Architecture (SOA) discipline[iii]:

 

A) Silo

B) Integrated

C) Componentised

D) Services

E) Composite

F) Virtualised

G) Reconfigurable

 

When using these categories, it is important to recognise that later stages do not replace the earlier ones but combine with them. In fact, today all these layers and steps of the technological world coexist. Technological evolution increases complexity.

This is relevant when we work in socio-technical environments (corporations, organisations) where there is a high component of “legacy” platforms and applications, representing the earlier stages of technology. To address the relationships between the various segments and levels of maturity of the organisation, it is essential to use an SOA approach.

Today, we see organisations collaborating and exchanging information “in the Cloud” but that does not mean they have abandoned their corporate infrastructures altogether, and it is safe to predict that Cloud adoption will be uneven across sectors and geographies. On the other hand, even the most “backward” organisations will adopt at least partial “Cloud” services and use either federated or corporate Identity management models to enable user access across their boundaries. A key contribution to this understanding is the work by a number of experts associated with the Jericho Forum[iv].

Does the current situation lead to an “end of corporate Identity management”? As we analyse case after case of organisational environments, we see that cloud-based solutions will tend to grow in parallel with the traditional business processes. Data governance, for example, remains a process closely guarded by organisations. What this means is that the Cloud phenomenon transforms the network by making it larger and more complex, while public and private organisations retain past levels of complexity and control. The increase in complexity precisely reflects the continued existence of internal, “in house” processes and technologies[v].

The technologies and models underpinning the “Cloud” trend are not new, but the general application of these trends is. The main change, easily forgotten when discussing the Cloud, is not in the technology but in the modalities of access and use of technology. In other words, what matters the most is the range and variety of users “in the Cloud.” This fact should be enough to propel the Security and Access Management principles to the centre of the discussion, but that has not happened. On the contrary, Security has become a blocking factor instead of a supporting discipline because of the lack of understanding that persists in this respect.

From the perspective of standard, Protection-focused Security, the problems of user access in the Cloud are the same as those catalogued for the earlier period. Seemingly, nothing has changed. The emphasis on “protecting information” is simply translated to wider and more complex environments across the Internet. Instead, from the perspective of “Direction” and “Selection,” which I want to highlight in this book, the problems of user access have changed fundamentally.  We still have to face the issues experienced in a traditional “closed” enterprise environment, and we still have to resolve the problems of managing user access to legacy applications, but now we are facing challenges related to the propagation of identities inside and outside of the organisational boundaries. These problems did not exist before. Organisations have moved into “boundary-less” computing, but also into an environment where they interact with suppliers, competitors, contractors, service providers and other organisations on a peer-to-peer basis.

The authentication and authorisation of users moving across the network has become a problem in this sense, because organisations do not own all the sources of identity data and, instead, have less and less control over user and data movements as the trend continues. It is very important to see here that the driving force of change was, and is, not the technology of the Cloud, or computing technologies in general, but the evolution of organisational and business models of action in the economy and society. Computers and networks enable the change, but they don’t create it.

With the rising levels of adoption, Security technologies have multiplied too. There is a wider range of options, although at the core they all represent the same approach to computer-mediated communications (i.e. the growth of indirection, as we analysed in Chapter 3 of this book). The existing technologies offer sufficient capabilities to enable and secure services across a multi-tier architecture. These techniques (Federation, WS-Security, SAML, XACML, SPML, and Policy- and Attribute-based Access Control) allow us to solve technical requirements and facilitate the introduction of “Cloud Services.” Newer, simpler technologies like OpenID increase the choices[vi].
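Policy- and attribute-based access control can be made concrete with a minimal sketch. Everything below (the attribute names, the sample policy and the request shape) is invented for illustration and does not follow any particular standard such as XACML:

```python
# Minimal sketch of attribute-based access control (ABAC): a decision is
# "permit" only if every condition in the policy holds for the request.
# All attribute names and the sample policy are illustrative.

def evaluate(policy, subject, resource, context):
    """Return True (permit) only if every policy rule holds."""
    return all(rule(subject, resource, context) for rule in policy)

# A hypothetical policy: clinicians may read records of their own
# department, during business hours only.
policy = [
    lambda s, r, c: s["role"] == "clinician",
    lambda s, r, c: s["department"] == r["department"],
    lambda s, r, c: 8 <= c["hour"] < 18,
]

subject = {"role": "clinician", "department": "cardiology"}
resource = {"department": "cardiology", "type": "patient-record"}

print(evaluate(policy, subject, resource, {"hour": 10}))  # permitted: in hours
print(evaluate(policy, subject, resource, {"hour": 22}))  # denied: out of hours
```

The point of the pattern is that the decision depends on attributes of the subject, the resource and the context, rather than on a fixed list of user identifiers.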

So far, progress for the Security disciplines has been slow in Cloud Computing due to a conceptual confusion. I have also pointed to the lack of understanding of the role of the user in the expansion of the Cloud, but there are other issues that compound the problem and limit our role in this new period. It is interesting to see business leaders and teams everywhere promoting Cloud adoption, while IT managers and strategists use the language of Security as a way to slow down this process and retain the technologies “on premises.” This is done under the general assumption that the new hosting environments are “less secure” than those run by organisations “in-house.” The industry press is full of talk about security “concerns” with Cloud-based solutions, but professionals are beginning to counter these with very good reasons[vii].

It may be tempting to dismiss these concerns by pointing to the severe security limitations of traditional “Enterprise” environments. In this book I discuss for example the unceasing problem of “data loss” in standard corporate computing environments. It would also make some sense to explain how hosted environments operating around the world have higher levels of traditional security than the usual corporate data centres, but both approaches would be insufficient to explain what is behind negative opinions about Cloud adoptions and assumed Security “risks.”

First we need to understand the problem, and then the real challenges of “Security in the Cloud” will become clearer. In my opinion, we have not made more progress in Identity management in the Cloud (which is the essence of access control!) because of the dominance of limited definitions of Identity and Security management. To be more precise, I should say that the excessive focus on Protection disciplines is making it more difficult to resolve these problems. In the same way as standard techno-centric thinking obscures the understanding of Information in general, it also affects the debate about Identity in the Cloud by imposing a limited view of the Security disciplines.

My own approach is to consider four Security Perspectives, each one with its own concept of Identity as follows:

 

  •          Identity as value and subject
  •          Identity as role and membership
  •          Identity as substance and object
  •          Identity as context and process

 

For the Direction perspective, information and identity appear as value, more specifically as subjective value, according to the metaphor of “defining trust” that lies at the bottom of this perspective.

For the Selection perspective, information and identity appear as a role, concretely as membership into a social or organisational group. The metaphor “allocation of trust” is the key to this.

For the Protection perspective, information and identity are understood as an object, as substance. The assumption is that identity can be reduced to data structures “flowing” in the IT machinery. The guiding metaphor is “enforcement of trust.”

Finally, for the Verification perspective, information and identity are relative and depend on the context and the organisational process. The assumption in this case is that identities will be valid or verified depending on the context, under the “verification of trust” metaphor.

How does this help in the debate around “Cloud Computing” and “Identity in the Cloud”? The first result we obtain is to open our minds to different levels and modalities of “identity.” When considered as a whole, in a synthesis of the four perspectives, Identity no longer appears as a static object that needs to be stored, hidden and “protected,” but as a relationship, a function, a role, a status, a moral and subjective value.

At that point the IT mechanisms become less relevant, and the disciplines of Direction, Selection and Detection become more important and decisive: we then discover there cannot be a “technological” and techno-centric solution to the issues raised by Cloud computing (cross-domain authentication, federated identity, identity propagation, data protection and privacy concerns). A unilateral techno-centric solution is impossible because it will necessarily miss the aspects of Trust definition, allocation and verification.

The technologies, mechanisms and software protocols we have offer sufficient ways to negotiate, secure and verify trust, but trust establishment is originated and “happens” outside of the technical realm altogether. It is a compact based on reputation, authenticity, respect, responsiveness, viability and other values. It exists in the social and economic level, outside of the technical sphere of Protection and Trust enforcement. This reality escapes the techno-centric perspective.

IT disciplines are aware of the need to understand and factor in areas that are outside of the technical sphere. So we speak often of “people, process and technology” as a well-balanced approach. This is not sufficient. If we limit ourselves to analysing factors related to “people, process and technology,” we will be unable to explain what Identity management is about. We may still cover the aspects of “provisioning” people, “controlling access” to systems, and “ensuring compliance,” but we will be unable to address the fundamental issues around data ownership, business models and enterprise architecture. We are missing the whole picture.

If we accept this, we will see that the lack of progress in establishing a firm grasp of “Identity in the Cloud” comes from conceiving Identity as an object, or as a physical substance. From that limited perspective, “Identity in the Cloud” becomes a matter of storing, sending, copying, encrypting, marking and enveloping data. By limiting our work to this approach, we use the multiple protocols devised to achieve this (e.g. SAML, SPML, XACML, OpenID) as if they were complete answers to the challenges of Identity in the Cloud. The Cloud has changed the Network, but Security as Protection has not changed, and in this sense it has become an obstacle itself by defending the fantasy of a “more secure” Enterprise computing realm.

Contrary to this trend I propose that we base Identity Management Cloud Services on the following guidelines:

  •          Establishing and governing Identity Data Ownership as the base of the Definition of Trust.
  •          Developing new protocols for the development of collaborations, partnerships, memberships and roles, as the base of the Establishment of Trust.
  •          Adopting Policy, Role, Capability and Attribute Authentication and Authorisation solutions as the next step in the Enforcement of Trust.
  •          Standardising Identity Data, Identity Propagation and Identity Assurance processes as the base for the Verification of Trust.

 

Identity Federations

If there is any one technology that enables identity management in the Cloud, it is the technology of Federation. This is a small area in Security which is bound to grow in importance with expanding “Cloud Computing.” Originally, federation technologies were proposed as solutions for major organisations, typically surrounded by a number of “service providers.” The idea was that the central, large organisation would act as an “identity provider,” vouching for its employees, so that these could move freely into the service providers’ applications. The basic “federation pattern” enabled a form of trust between web services, so that users authenticated by the “identity provider” would not need to sign into the “service providers.” There were many problems with this approach, and the technology was not successful.
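The basic federation pattern can be sketched as follows. This is a toy model, not SAML: the shared HMAC key, the assertion format and all names are illustrative stand-ins for the trust relationship between an identity provider and a service provider.

```python
import hashlib
import hmac
import json
import time

# Sketch of the basic federation pattern: the "identity provider" (IdP)
# vouches for a user by signing a short-lived assertion; a "service
# provider" (SP) that shares a trust anchor with the IdP (modelled here
# as a shared HMAC key) accepts the assertion without ever holding the
# user's credentials itself.

SHARED_KEY = b"idp-sp-trust-key"  # stand-in for the federation trust anchor

def issue_assertion(user_id, key=SHARED_KEY, ttl=300):
    """IdP side: sign a claim that this user has been authenticated."""
    body = json.dumps({"sub": user_id, "exp": time.time() + ttl}).encode()
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body, sig

def accept_assertion(body, sig, key=SHARED_KEY):
    """SP side: verify the signature and expiry; no local password check."""
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered, or issued by an untrusted party
    claims = json.loads(body)
    return claims["sub"] if claims["exp"] > time.time() else None

body, sig = issue_assertion("alice@example.org")
print(accept_assertion(body, sig))         # SP trusts the IdP's signature
print(accept_assertion(body + b" ", sig))  # tampering breaks the trust
```

The design point is that the service provider validates the issuer's signature, not the user's secret, which is exactly why the pattern removes the need for users to "sign into the service providers."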

The federation model can have many varieties, but essentially a federated access management solution will respond to the following question: “In the offering and delivery of online products or services that involve multiple supplier organizations and multiple categories of users, how can these organizations and their clients, members, users, etc. obtain a seamless navigation experience whilst maintaining per-organisation user identification, access control and audit trail?”

These requirements arise in consumer-facing and inter-organisational scenarios, as well as internal Enterprise settings. The term ‘multiple supplier organisations’ here can imply any operational units that need to maintain some degree of autonomy – it can mean legal entities, companies, divisions, geographical business units, governments or departments. So the key terms are as follows: “online products and services,” “multiple supplier organizations and multiple categories of users,” “seamless navigation for end-users,” and “per-organisation identification, access control and audit trail.”

These four conditions essentially say that the value of any federated solution lies in the fact that a supplier organisation (a “service provider”) will benefit from receiving traffic from end-users who are not registered in its user repository and do not need to authenticate with it. In reality, though, the “service providers” always had to keep some form of user data store, even if they did not authenticate the incoming network traffic. For example, a health services provider for a major organisation obviously had to hold relevant identity records for the end-user. In practice, users were managed in multiple sites anyway, despite so-called “federation arrangements” and “circles of trust.”

In a transformed global network, federation technologies play a different role. While in the past these technologies responded to the need for some form of simplified sign-on or cross-organisational authentication, scenarios where one major organisation acts as the single or main “identity provider” are rapidly becoming a thing of the past. We now see “networks of networks” and the expansion of a wide array of partnerships between organisations. The new network does not have a centre, and the actors are peers in their informational exchanges. Instead of a “shared” repository of user identities owned by the “main participant,” we see many partners in the Cloud, each managing users separately but still collaborating at many levels and trusting each other.

Initial forms of federated access were implemented with ad-hoc mechanisms and customised credentials carried in the Internet protocols; later on, the Security Assertion Markup Language (SAML)[viii] brought much-needed standardisation and transparency to these solutions. In this new and more complex scenario, organisations are beginning to realise that a federation solution effectively liberates them from the need to manage all the users in a single place, but to fully exploit this realisation, a conceptual change is needed.

My point here is that the best scenario for federation architectures, where these technologies will flourish, is one where user repositories are not “shared” across organisations, and where there is no need to manage users centrally to provide them with the much-sought-after “seamless navigation” from one service to another. In the Cloud, which is a “limit case” of the evolution of federation technologies, the original pattern of access cancels itself, as the maximum benefits are obtained when user management requirements are small. This is the context where Security experts need to start thinking of even more advanced solutions, where trust management is not articulated around an ideology of “attack and defence” and “protection,” but around a complete vision of enablement and inter-organisational trust.

 

Cloud Security Concerns and Advantages

When Security experts and practitioners gather to speak about the Cloud, generally their concerns are one or more of the following:

a)      Data security. These are perhaps the most frequent concerns, rooted in the perception that Cloud solutions somehow expose data more than “corporate” computing. This concern is understandable, but the discussion needs to evolve; for example, by recognising that the same safeguards that exist in corporate computing are equally possible and necessary in the Cloud. Instead of assuming that Cloud environments are intrinsically less secure, we should ask why Cloud providers have been working on a “best effort” basis and have not offered comprehensive solutions for data encryption, user separation (segregation), and high assurance levels. In the end, the market cycles will bring these higher-value services to the foreground, and data security concerns will be resolved. Current “software as a service” offerings frequently lack sufficient Security safeguards regarding direct access to data stores, segregation of duties, cryptography and client data isolation, but that can also be resolved and is no different from current corporate environments.

b)      Data Privacy. In this area there seems to be a fundamental contradiction between Cloud providers and the legal obligations of their tenants or clients. Organisations have, for example, a very relevant interest in avoiding cross-border data transfers, as well as transfers of liability to third parties. While it is clear that accountability cannot be transferred, Cloud providers have so far given an uneven response to these concerns. It is necessary to work out better arrangements so that Cloud services can perform as “data processors” (not as data owners), while being bound by appropriate contracts and legal safeguards to increase the confidence of the consumers. The following areas need to be covered in data processing arrangements: data provenance and transfer, data linking or aggregation, data lifecycle, legal obligations, limits of data collection and use, data retention and audit trail duties, data destruction policies, data-centre certification and regulatory compliance. A point of interest here is that the industry is paying a lot of attention to business and government concerns about these issues, but less so to the individual citizen’s concerns (which I will cover in a later section of this chapter).
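As one concrete illustration of the client data isolation mentioned under (a), a provider can derive a distinct data key per tenant from a single master key, so that no two tenants ever share key material. The scheme below (plain HMAC-SHA-256 used as a key derivation function, with a hard-coded master key) is a simplified sketch; a real service would use a managed key store, a standard KDF such as HKDF, and key rotation.

```python
import hashlib
import hmac

# Sketch of tenant segregation through key separation: every tenant's
# data key is derived from the provider's master key and the tenant's
# identifier, so compromising one tenant's key reveals nothing about
# another's. The master key here is a placeholder; in production it
# would live in an HSM or managed key-management service.

MASTER_KEY = b"provider-master-key"  # illustrative only

def tenant_key(tenant_id: str) -> bytes:
    """Derive a per-tenant 256-bit data key from the master key."""
    return hmac.new(MASTER_KEY, tenant_id.encode(), hashlib.sha256).digest()

k_a = tenant_key("tenant-a")
k_b = tenant_key("tenant-b")
print(k_a != k_b)                      # tenants never share a data key
print(tenant_key("tenant-a") == k_a)   # derivation is deterministic
```

Deterministic derivation means the provider need not store one key per tenant; the segregation property comes from the one-way nature of the HMAC construction.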

 

The concerns listed here have to be recognised and addressed. At the same time it is important to remind the Security disciplines of the essential advantages that Cloud computing brings to all types of organisations, and we also have a role there. If the Security disciplines do not understand our contribution to risk and trust management in the whole and remain anchored in the “protection” era, we will be unable to see how and why the Cloud is changing the private and public computing landscape for good.

Above all, without letting ourselves be distracted by marketing messages and “new” technologies that are not new, we need to acknowledge that Cloud computing fundamentally changes the cost and benefit relations for all types of organisations and businesses. As more and more services come into the Cloud, providers and consumers adopt a utility cost model. This change is happening despite the resistance of major software vendors attached to traditional multi-year “maintenance” contracts and software fee renewal payments. In the case of Identity management, the new model translates into a cost per user (per month or per year). In the recent past these flexible commercial arrangements were known as “subscription” contracts, but that terminology was still linked to the prevalence of software licensing. In the more recent period and in the future, software licences and “maintenance contracts” will recede into the background, as consumers will want to pay exactly for the service they receive (at a market price) and not for the “privilege” of using a particular brand of software or hardware.
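With invented figures, the shift from licence-plus-maintenance pricing to per-user utility pricing is easy to make concrete:

```python
# Illustrative comparison (all figures are invented) between a
# traditional licence-plus-maintenance model and a per-user utility
# model, for an identity management service with 500 users over 3 years.

users, years = 500, 3

# Traditional model: up-front licence plus a yearly maintenance fee.
licence = 100_000
maintenance_per_year = 20_000
traditional_total = licence + maintenance_per_year * years

# Utility model: a flat price per user per month, no up-front cost.
price_per_user_month = 5
utility_total = price_per_user_month * users * 12 * years

print(traditional_total)  # 160000
print(utility_total)      # 90000
```

Beyond the totals, the structural difference matters: the utility cost scales down as well as up with the user population, while the licence cost is sunk regardless of actual use.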

Cloud computing in fact represents the increasing power of the consumer, the market, upon industries that operated for too long in a vendor-driven environment or “push” market. For sure, the transition is not complete, but the industry will get to the long-predicted “utility model” in computing[ix].

The second focal point in assessing the Cloud has to do with the fact there is no single class or type of service. Marketing campaigns do a lot of harm when the term “Cloud” is used indistinctly for many forms of this economic and business trend. Doubts regarding the quality and the security of these services are increased if the consumers believe there is a one-size-fits-all technology. In fact, many types of Cloud solutions will coexist for a long time and Cloud adoption will have many routes.[x]

 


  

Organisations may move directly or indirectly to Cloud solutions

A change in the pricing model means a change in the cost structure of the consuming organisation. This is very positive, but we also need to take into account the benefits arising from the elimination of overheads and project costs that are implicit in corporate computing. What organisations do now in one to three years in Identity management, they will be able to do in one to three months! [xi]

Information Technology and Capitalism

What we are seeing, then, is not the appearance of new technologies but the result of the global actions of users and organisations, expanding the use of computing platforms and following a very normal path towards cost reduction and profit maximisation. This is how the Cloud operates and what the Cloud is. It is a social and economic phenomenon that deserves understanding and action at the same level. In other words, it calls for conscious social and economic action instead of a techno-centric approach.

Identity management will not progress if we do not grasp that Information technologies are the fruit of late capitalism, and the Cloud is the latest evolution in this space of action. Computers in the global network dissolve the personal mark of any activity, through indirection and anonymity. When computers were first implemented in organisations, we also saw resistance to their adoption, similar to what we are experiencing now in relation to the Cloud. The deeper reason for this was that computers facilitated the abstraction and indirection of human activity, thereby transforming many organizational processes and directly affecting the workers and managers. The traditional corporate worker was effectively replaced by a new class of specialists and managers[xii].

Another cycle of indirection is afoot now, whereby corporate computing as we know it will gradually disappear. It is normal and natural that many people resist this change, but it will take place anyway, in the same way as, in a preceding period, the business computer transformed economic and social activities. The personal computer materialised the generic, interchangeable nature of work in the global network, and there is no economic reason or way to stop this movement.

While the essence of computation predates capitalism by thousands of years, capitalism re-creates the computer as a generalised tool for generalised activities. The personal computer is of the same order of reality as the car and the telephone. Like these, the computer is not only an “extension” of the body, but also a generalisation and automation of human activity. While it is usual to speak of the computer as a brain-like entity, it is much better to consider it as the automation of manual and visual activities: reading and writing. In fact, on close analysis, everything the computer does for us (or with us) is to read and to write, as I commented in a previous Chapter of this book.  Not only does the computer take over the space of the typewriter and the book, but the space of the hands and the eyes altogether.

So the computer needs, reinforces, educates, introduces and enables generic action. The generic worker creates the computer and is created by it at the same time. The technology, so to say, breeds its consumer, the individual characterised by post-cultural, post-national, multi-faceted activity. In the space co-defined by the generic worker and the generic tool (the computer), Identity is characterised by a context of more indirection and more fragmentation, and this has very important consequences for the Security professions. Let us now consider how these effects appear in public (citizen) Identity management.

Identity Assurance Services

The essence of the “assurance market” proposals is to engage the private sector to develop identity assurance services for public and commercial electronic or digital exchanges. Such assurance services would ensure that citizens and customers could easily and securely provide trustworthy identity and other personal information to the Service Providers.

This stance represents a new direction in the thinking of the public agencies and technology organisations in and outside of the Governments. In the past, the consensus position was to aim at centralised authentication services in the form of Government Gateways, or “bridges,” supported by publicly-managed assurance mechanisms (using official identity credentials).  For example, the Belgian and UK public gateways were designed to operate as a generic federation hub for all government departments.

The new stance shifts the provision of “assurance services” to the private sector. During the debate, the proponents of this change suggested generic benefits of extending digital services to a wider part of the population, more or less the same benefits as those advertised in previous centralised “gateway” strategies.

The change to the new schema is justified in terms of reducing the complexity and number of the user authentication mechanisms required by public entities within a general move towards e-Government. Payment and benefits fraud is also a consideration, but not in all cases, as the main driver seems to be reducing operational and authentication-related costs for public services.

The main positive factor in favour of the new approach seems to be that the private sector would help the government to accelerate digital services adoption, but it is not clear how. Countries with different levels of national identity policies and instruments also have various approaches to this problem.

It is not clear, for example, how multiplying assurance services would facilitate e-services adoption. In fact, even in strong commercial, legislative and regulatory environments, it is not clear how a diversity of assurance services would help with wider or faster adoption. Obviously, where that regulatory and market environment is missing, the problem is even deeper.

I think there is a lack of understanding of the effects of multiplying “assurance services” and some level of confusion between the different concerns of “identification,” “assurance,” “authentication,” and “identity provisioning.”

It is clear that the e-services strategy held by various European governments is based on the goal of reducing the cost of identity assurance by creating an “assurance” market, hopefully with the concurrence of the private sector. There is an expectation that a Government-created market (supported by making it mandatory for citizen-agency interactions and transactions) would be able to reduce the cost of “assurance” for the government. This motivation should be at the centre of the discussion, instead of the more generic suggestions that the strategy would primarily “improve” services for the citizenry.

The assumption that private assurance services would be commercially priced but mandatory, while at the same time being diverse, does not seem to reflect appropriately on either the costs for the citizens or the business case for the private sector. If the cost-reduction driver were clearer, the discussion would be more productive.

The key problem hampering this vision is that intended commercial, legislative and regulatory activities are focused on users of public services.  What about the private service offerings? In theory, the same private assurance providers could also sell services for other markets, but it is unclear how a Government-mandated and regulated sphere of services would coexist with the unregulated services. This uncertainty would need to be removed, perhaps with a different approach, to gain more private participation.

The proponents of these strategies also assume that a significant segment of society in each country will not use digital public services, or may require personal assistance when using them. The strategy would not work, then, if the market did not develop appropriate offline services. Here we see a potential conflict between the drive to reduce operational costs, the transfer of assurance services to the market (effectively to the citizen-consumer) and the potential denial of benefits to parts of the population.

At a different level, in terms of Security Management concepts and principles, the assurance service strategies pose important challenges for private sector experts and leadership: while the main direction of the strategy is to form a “market” for “assurance” services, there is an implicit assumption that there will also be a market for “authentication” services. In other words, there is confusion, or at least a conflation, of two different security capabilities: authentication and assurance.

The term “assurance” should be used in the context of user identification and verification (ID&V) and should be treated separately from authentication capabilities (i.e. online credential validation and authenticated user data propagation). The term “assurance” is frequently used incorrectly, conflating the Identification and Verification process (ID&V) with the Authentication process. It is true that almost all services, public and private, require the user to go through some form of initial registration and then through subsequent login procedures. These two steps have different requirements and practical solutions, but in many public and industry documents they are not differentiated.
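
The separation between the two processes can be illustrated with a minimal sketch, in which registration (ID&V) produces a record at a given assurance level, while login merely validates a credential against that record. All names, the scoring rule and the data structures here are hypothetical, chosen only to make the distinction concrete:

```python
from dataclasses import dataclass
import hashlib
import secrets

@dataclass
class IdentityRecord:
    """Created once, during ID&V (registration)."""
    user_id: str
    assurance_level: int   # e.g. 1 = self-asserted, 3 = document-verified
    credential_hash: str
    salt: str

class IdVService:
    """Identification & Verification: a one-off process binding a
    real-world identity to a digital record at some assurance level."""
    def __init__(self):
        self.records = {}

    def register(self, user_id, password, evidence):
        # The assurance level depends on the quality of the evidence
        # examined: here, purely for illustration, a passport scores
        # higher than a self-asserted registration.
        level = 3 if "passport" in evidence else 1
        salt = secrets.token_hex(8)
        digest = hashlib.sha256((salt + password).encode()).hexdigest()
        rec = IdentityRecord(user_id, level, digest, salt)
        self.records[user_id] = rec
        return rec

class AuthenticationService:
    """Authentication: a repeated process validating a credential against
    the record created during ID&V. It proves control of the credential,
    not the quality of the original identity proofing."""
    def __init__(self, idv: IdVService):
        self.idv = idv

    def login(self, user_id, password):
        rec = self.idv.records.get(user_id)
        if rec is None:
            return False
        digest = hashlib.sha256((rec.salt + password).encode()).hexdigest()
        return secrets.compare_digest(digest, rec.credential_hash)
```

Note that the assurance level is fixed at registration time and is unaffected by any number of subsequent logins; this is precisely the difference that many policy documents blur.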

The question arises about the proposed “assurance” services: are these focused on the ID&V phase or the authentication phase? Will there be a combination of the two? This differentiation will become critical in time because, if an “assurance services” market is created and made obligatory (independently of what we may think about the notion of a compulsory “market”), not all participants will have the same ability or interest in “assuring” as well as “authenticating” an identity.

Greater levels of assurance come from the combination of multiple identity instruments or credentials, especially those which can be traced back materially (physically, biographically, biometrically) to the biological individual. What is then the exact meaning of assurance in this context?

The lack of differentiation between “assurance” and “authentication” generates other complex problems which need to be addressed. A private “assurance” provider with access to public information (for example birth records or passports) will offer higher “assurance quality” than a provider selling “assurance” on the basis of privately operated ID&V, or with less capacity to aggregate such data. On the other hand, an “authentication provider” does not need to be an “assurance provider”; it need only operate in a federation or “circle of trust” with the “assurance” provider. Once this is understood, it becomes clear that the “assurance provider” may or may not be in possession of original identity data. In fact, in more technical terms, the current “assurance market” initiatives draw weak distinctions between four interrelated but never identical Security concepts: “Identity Provider,” “Assurance Provider,” “Attribute Provider,” and “Authentication Provider.”

In the standard Federation architecture the Identity Provider is at the same time an “assurance,” “attribute,” and “authentication” provider, given that the main participant in the federation is also the “owner” of the user ID&V process and data. By contrast, in more advanced scenarios the four functions indicated above are separated. For example, an “assurance provider” could operate with identity data provided by a government agency, while the authentication provider could be a trusted third party in a federation.
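
The difference between the collapsed and the separated models can be sketched as a set of interfaces. In the classic case one Identity Provider implements all the roles; in the separated case each role may be delegated to a distinct, trusted party. The class and field names below are illustrative, not drawn from any standard:

```python
from abc import ABC, abstractmethod

class AssuranceProvider(ABC):
    @abstractmethod
    def assurance_level(self, subject: str) -> int: ...

class AttributeProvider(ABC):
    @abstractmethod
    def attributes(self, subject: str) -> dict: ...

class AuthenticationProvider(ABC):
    @abstractmethod
    def authenticate(self, subject: str, credential: str) -> bool: ...

class ClassicIdP(AssuranceProvider, AttributeProvider, AuthenticationProvider):
    """Standard federation: one party owns the ID&V data and plays
    the assurance, attribute and authentication roles at once."""
    def __init__(self, store):
        self.store = store   # subject -> {"level", "attrs", "cred"}

    def assurance_level(self, subject):
        return self.store[subject]["level"]

    def attributes(self, subject):
        return self.store[subject]["attrs"]

    def authenticate(self, subject, credential):
        return self.store[subject]["cred"] == credential

class SeparatedFederation:
    """Advanced scenario: each role is delegated to a possibly
    different organisation bound by trust relationships."""
    def __init__(self, assurance, attrs, authn):
        self.assurance, self.attrs, self.authn = assurance, attrs, authn

    def login(self, subject, credential):
        if not self.authn.authenticate(subject, credential):
            return None
        # Only after authentication are assurance level and
        # attributes gathered from their respective providers.
        return {"subject": subject,
                "level": self.assurance.assurance_level(subject),
                "attrs": self.attrs.attributes(subject)}
```

Passing the same `ClassicIdP` instance three times to `SeparatedFederation` reproduces the standard architecture, which shows why the conflation is so easy to make: functionally the collapsed case is just a degenerate instance of the separated one.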

If we consider a functional differentiation of the processes, questions arise about ownership of the data, data privacy and data protection. I believe that none of these questions has been resolved either in the public or the private sector in the context of the “assurance market” initiatives.

It is important to note that if the level of “assurance” provided by a private organisation depends on the quality of data provided by public agencies, it is difficult to see how this will match a strict public service rationale for the proposed scheme. Another conflict becomes clear: cost offloading to the market (ultimately the consumer) is achieved by means of the commercialisation of citizen and consumer data.

This takes us back to the initial centralised e-government strategies. Originally, these sought to use the identity data stores in possession of public agencies directly in order to build standard “federations”; now, the assurance market proposals assume that the identity data stores covering consumers will be a highly diverse mix of repositories under private and public ownership, as well as private aggregations of citizen identity data. This contradicts the general goal of government-validated identity or central authoritative sources, while relying on exporting the assurance function to the private sector. The situation will differ considerably depending on the type of data the public agencies are able to maintain and provide.

In some countries, private providers (if they find this market attractive) will need to use national identity cards to reach a high level of “assurance”; in other countries, public information will be more diverse in quality and coverage (for example, entire segments of the population may be missing from certain types of sources).

Therefore, on the whole, the consequence could be not only that operational assurance costs are off-loaded to the public, but also that assurance quality becomes uneven, without a direct link to public authoritative sources. The loss of this direct link may be moderated by means of regulation and industry standardisation, but this opens a wider discussion as to the use of data, data privacy, end-user opt-out rights, data ownership, and data access rights.

Another implicit inefficiency of the proposed strategies is the generation of many different identity data stores, both inside and outside government agencies, with differing quality, integrity, completeness, etc. This means the overall costs (at national level) of identity verification would be multiplied. Nothing would ensure the convergence of the identity data stores even if the Regulator controlled the new “market,” because the private suppliers would aim at the lowest cost of providing “assurance” for the mandatory transactions. On the other hand, premium services, which already exist in other markets, will rely on different, higher data quality and more complex data aggregation processes, hence increasing rather than lowering the heterogeneity and overall cost of the identity data stores.

It is frequently assumed that a citizen-customer will not have to register with each digital service, and that he or she will not have to remember login details for each one. This is another level of the problem, and a consequence of the conflation of assurance and authentication. The problem is that the “assurance market” proposals wrongly expect that such a market would immediately ensure a uniform use of “authentication” credentials across services. The implicit assumption is that the authentication technology for the entire market would trust identities validated by the “assurance suppliers.”

This is normal in a Federated Identity Management architecture, but in those cases all the participants in the trust circle have interoperable technologies and standards, as well as direct or indirect (transitive) trust relationships between them. It is evident that the strategy proponents expect all providers and all participants in the scheme to invest in and update their security infrastructures.

Precisely because the scheme would require investment and infrastructure updates, coexisting assurance providers, identity providers, attribute providers and authentication providers would increase the need for the public agencies to validate, maintain and secure citizen data and their own security infrastructures. For example, each public agency would need to implement an identity-data and attribute-mapping service to correlate trusted identities with its local data stores.
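
What such an agency-side mapping service might look like can be sketched as follows: it correlates an externally asserted (federated) identity with the agency’s own records, and falls back to local management when no correlation exists. The names, the shape of the assertion, and the merge rule are all hypothetical:

```python
class AttributeMapper:
    """Sketch of an agency-internal attribute-mapping service: it links
    external federated subject identifiers to local records and merges
    locally held attributes with externally asserted ones."""

    def __init__(self, local_store):
        self.local_store = local_store   # internal id -> agency record
        self.links = {}                  # external subject id -> internal id

    def link(self, external_id, internal_id):
        # Correlation must point at a record the agency actually holds.
        if internal_id not in self.local_store:
            raise KeyError("unknown local record: %s" % internal_id)
        self.links[external_id] = internal_id

    def resolve(self, assertion):
        """Map a federated assertion to the agency's own record.

        Returns the merged record, or None when no correlation exists,
        in which case the agency must continue managing the user itself
        (the residual cost discussed above)."""
        internal_id = self.links.get(assertion["subject"])
        if internal_id is None:
            return None
        record = dict(self.local_store[internal_id])
        record.update(assertion.get("attrs", {}))
        return record
```

The `None` branch is the interesting one for the argument above: every user for whom no correlation has been established remains a user the agency must validate, store and secure on its own, which is why the migration cannot be assumed to be complete or cheap.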

It is therefore not advisable to assume that it will be easy for the public services to migrate all their users to the new scheme. In fact, it should be expected that most public agencies will need to continue managing their users, and will want to do so.

In the medium to long term several approaches to the assurance market will coexist, based on the distinction between assurance and authentication, and developing new federation architectures with separate roles for identity, service, attribute and assurance providers. In all cases though, we will also see distributed Identity data management, and hopefully also an increasing adoption of user-centric assurance and authentication solutions.

Identity Trends

As a conclusion for this chapter, I would like to list several trends which sum up the current transformation:

  • The protection and compliance focus which Identity management inherits from the Security Domain will not disappear, but it will have a lesser role in the IT landscape than it has now.
  • Centralised control models over identities will be reserved for restricted areas of the IT infrastructures, while at the same time organisations implement federated and decentralised assurance services.
  • Privacy and Data Protection concerns will be seen as essential, but increasingly not as a central management task, and instead as rooted in individual choices and different varieties of identity.
  • Identity management as a Service will experience rapid adoption but a single model will not exist, and corporations will sometimes have partly hosted and partly on-premises solutions.
  • The intellectual structure of Security and Identity management will change, moving from a focus on Risk Management, to a balance of Risk and Trust Management.
  • Given the perceived and real risks of network crime and disruption, security will rely even more on defences in depth, and a variety of identities and identity assurance levels while deploying more refined risk-based and attribute-based access controls (all of this enabled by Identity management solutions).

With Cloud Computing, we will see two major trends arising: Security and Identity “for” the Cloud, and Security and Identity “in” the Cloud. The first represents Security mechanisms and services to protect the Cloud (i.e. hosted, shared) environments; and the second represents Security products and services offered by managed, hosted Cloud platforms. The two are inseparable and will coexist for a long time, while simultaneously we will see a constant reduction of the role of corporate computing[xiii].



[i] IDC research indicates worldwide investment in cloud computing will treble between 2010 and 2013, when it will reach an estimated $44.2 billion. IDC Cloud Research, http://www.idc.com/prodserv/idc_cloud.jsp

[ii] “Worldwide revenue from public IT cloud services exceeded $16 billion in 2009 and is forecast to reach $55.5 billion in 2014, representing a compound annual growth rate (CAGR) of 27.4%, a newly published International Data Corporation (IDC) document finds. This rapid growth rate is over five times the projected rate of growth for traditional IT products (5%). This research further illustrates that public IT cloud services are crossing the chasm with modest revenue, but very fast growth”. The IDC report was cited by BusinessWire.com: http://www.businesswire.com/news/home/20100623005419/en/2014-Public-Cloud-Services-Grow-Times-Rate , 2010

[iii] Source: http://www.ibm.com/developerworks/webservices/library/ws-soa-simm/ IBM Service Integration Maturity Model (SIMM) , October 2005

[iv] The direction set by the Forum is synthesised in the “Jericho Forum Commandments”, version 1.2, May 2007. This document clearly addressed the “deperimetrised” organisations of the present, and future. This is already a reality in transnational corporations. The “fundamentals” section of the document in reference describes 3 areas which can be aligned with my own Security model as follows: 1) “The scope and level of protection” can be aligned with the “Direction” perspective, but two elements of the detail, “basic protection” –individual systems capable of “protecting themselves”– and “closer protection” for assets, can be aligned with “Protection” perspective for additional clarity. 2) The “Security mechanisms”, including the details, can be aligned completely with the “Selection” perspective. 3) The “Context” can be aligned with the “Verification” perspective. This is a very fine and forward looking document that should be obligatory reading for Security architects. It could be improved by a more precise combination of capabilities so that the reader can determine in practice when an organisation has transitioned to a deperimetrised context.

[v] Together with Cloud adoption, organisations experience an internal maturation process which ideally will end with the creation of Service Inventory Endpoints, i.e. well-documented service interfaces for legacy or web applications. Among these, Authentication and Authorisation Service Endpoints are also possible and desirable as these can immediately become the basis for Identity Federation services. This evolution is now visible in a small number of global organisations but its generalisation is not guaranteed. For a detailed explanation of the SOA Service Inventory Pattern see: SOA Design Patterns, Thomas Erl, Prentice Hall, 2009; and http://www.soapatterns.org/

[vi] The best place to start to learn these standards and projects is the Oasis web site: https://www.oasis-open.org/

[vii] See for example this article by Mike Fratto in Network Computing: http://www.networkcomputing.com/public-cloud-tech-center/public-cloud-is-neither-more-nor-less-se/240002750#

[viii] SAML version 2.0 is an OASIS Standard: https://www.oasis-open.org/standards

[ix] IBM has been “predicting” a world based on “utility computing” for many years, a long time before the Cloud trend started. See: http://researchweb.watson.ibm.com/journal/sj43-1.html

[x] The diagram shows several possible “routes” for Cloud adoption for large organisations.

[xi] This comes directly from my experience in major projects in Europe where I have observed excessive costs in standard corporate programmes (higher than Cloud computing by a factor of 10).

[xii] Consider for example the impact of Information Technology on business downsizing and restructuring in the 1990s.  See also: Grant W. Lawless, “Information Technology (IT) For Manufacturing: Where Has It Been, Where Is It Heading?” Journal of Industrial Technology, 2000.

[xiii] See: Nicholas Carr, “Does IT Matter?”, Harvard University Press, 2004