1. Security as a Problem

This is a draft of the book of the same title, now available through Amazon:


Approach to Security Management

This thesis brings together many years of work in the IT and Security industry, in public and private organisations. My experience and research have revealed several interlinked problems that result in low-quality security investment decisions and poor project delivery. An important goal of this work is to show the complexity of these interrelations, calling for a systemic view of the situation.

I approach IT Security Management as an organizational and sociological problem with a dense network of interacting processes. This network is characterised by “negative feedback” chains[i], interactions that form negative, self-sustaining vicious circles. This type of interaction is present in the whole of IT management, but has particular relevance for Information Security and Identity & Access Management.

The concept of a “negative feedback chain” or “negative causal chain” is generally associated with the discipline of System Dynamics[ii], but it can be applied to any area of science. Identity & Access Management, one of several disciplines in the IT Security arena (alongside Cryptography, Network Security, Operations, Applications, and Continuity), is the one that most affects organizational, non-technical concerns and at the same time the one that is most affected by factors not dependent on “technology.”
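The dynamics of such a vicious circle can be made concrete with a few lines of code. The sketch below is purely illustrative (the variables, coefficients, and the `simulate` function are invented for the example, not drawn from real data): unresolved security debt generates incidents, incident response drains the budget available for prevention, and the debt therefore keeps growing.

```python
# Illustrative simulation of a self-sustaining vicious circle, in the
# spirit of a System Dynamics stock-and-flow model. All numbers invented.

def simulate(steps=5, debt=10.0, budget=100.0):
    """Each period, unresolved security debt causes incidents; responding
    to incidents consumes budget that would otherwise reduce the debt."""
    history = []
    for _ in range(steps):
        incidents = 0.2 * debt                     # debt drives the incident rate
        response_cost = 5.0 * incidents            # firefighting consumes budget
        prevention = max(budget - response_cost, 0.0) * 0.01
        debt = debt + incidents - prevention       # the loop closes on itself
        history.append(round(debt, 2))
    return history

print(simulate())  # → [11.1, 12.43, 14.04, 15.99, 18.35]: debt grows every period
```

Reversing the balance (for instance, a larger prevention coefficient) turns the same structure into a balancing loop, which is exactly the systemic point: outcomes depend on the structure of the interactions, not on any single component.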

Although organisational features affect the development and adoption of Network Security, changes in that area do not affect an organization widely and tend to remain “in the background.” Identity & Access Management, by contrast, directly induces changes in the organizational structure, and requires specific changes without which the identity processes cannot even start.

For example, Identity & Access Management requires the principle of Identity Data Ownership and Custodianship, without which it is impossible to define, validate, and re-certify any access control model. Without these, in turn, none of the other advertised benefits of Identity Management (e.g. agile user provisioning) can materialise.

The Identity & Access Management process also introduces changes, in that it creates new processes for authorization, maintenance, and termination of user accounts. It does not really matter whether these processes are automated or not. What matters is that the processes are in place, and that they are mandatory, standardized, and efficient. From a different angle, Identity & Access Management imposes certain disciplines on IT project definition and execution, especially on the way user data is stored, updated and transported between systems.
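As a concrete sketch, the authorization, maintenance, and termination processes just described can be viewed as a small state machine over the account lifecycle. The states, transitions, and the `UserAccount` class below are hypothetical, chosen for illustration rather than taken from any specific product:

```python
# Hypothetical identity lifecycle as a state machine: authorization first,
# then activation, maintenance (suspend/reactivate), and termination.

ALLOWED = {
    "requested":  {"authorised"},                 # access must be approved first
    "authorised": {"active"},                     # provisioning creates the account
    "active":     {"suspended", "terminated"},    # maintenance or leaver process
    "suspended":  {"active", "terminated"},
    "terminated": set(),                          # terminal: accounts are not re-used
}

class UserAccount:
    def __init__(self, user_id):
        self.user_id = user_id
        self.state = "requested"

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} is not permitted")
        self.state = new_state

acct = UserAccount("jdoe")
acct.transition("authorised")
acct.transition("active")
acct.transition("terminated")
print(acct.state)  # terminated
```

Whether these transitions are executed by software or by a paper form is secondary; what the sketch captures is that the transitions are mandatory and standardized, e.g. no account becomes active without prior authorization.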

A systemic approach to IT Management and Information Security decision-making requires planning and analysis of the organizational changes that are necessary for a successful outcome. On the other hand, organisational resistance to change is the main obstacle to Identity & Access Management as a business process.

Considering the negative feedback chains described here, it is essential to promote change by explaining and modifying the mind-set of the stakeholders and interested parties. Above all, it is essential to reduce uncertainty regarding organizational transformation and to show the need for a higher, more complex organizational model.

The changes implied by Identity & Access Management affect every employee, temporary worker, contractor, consultant, partner and service provider in an organization. They also extend to customers and the public in general. In my experience, then, getting organizational changes right is the key to designing and implementing Security Management and secure Identity processes.


The Need to Discuss Security

For many years, it was almost impossible to discuss the principles of IT Security in professional circles. Even as late as 2005, it was seen as scandalous to mention the severe limitations of the IT Security specialities. The general, dominant assumption was that IT Management and IT Security rested on self-evidently “true” scientific propositions.

Something in my personal history, perhaps my heterogeneous and discontinuous experiences in life, nevertheless led me to take a sceptical view of technology in general. I was first an IT user during a period as an academic researcher in the early 1980s, and then moved gradually through many of the technical and business areas typical of Information Technology: developer, integrator, specialist, vendor, architect, trouble-shooter and consultant in a career spanning 30 years to date.

Only after many years of work did I begin to organise separate observations about my professional activity, and it became clear to me that overwhelming confidence in technology and lack of insight were in many respects similar to an ideology. The late works of Theodore Roszak were particularly instructive in this respect.[iii]

In Roszak’s definition of these ideologies, nobody is responsible but also nobody is free, as an overarching imperative guarantees the goodness of technical progress. Even when an organization is damaged by technological mismanagement, there is deemed to be no problem. Everybody is doing the best they can in a general movement of progress towards further and further adoption of technology and science.

My personal view increasingly became that we should instead be asking why we do not seek completeness rather than “progress.” This was especially visible in the 1990s, when the IT world was struggling with the push towards alignment between businesses and conventional IT practices.

From the point of view of business management, starting with the successive monetary and economic crises at the end of the 20th century, it became less and less evident that more technology was the answer to every problem. Nevertheless, the expected alignment was never completed, and even today the fundamental problem in every IT department is an almost complete alienation from the business direction.

In this context, proposing a conception of Security beyond or without technology was always received with derision and even disgust in the private circles of IT consultancy, especially in those more focused on the intricate technologies that came with the World Wide Web.

In spite of a general techno-centric slant and long resistance, it is now becoming a shared truth that Security is an organizational problem. Identity Management, in particular, as I like to say, is approximately only 5% technological and 95% a question of organizational transformation.

Why Does Security Fail?

Another approach to these problems started with research on the causes of IT project failure. It was an advantage for me to move across several IT specialties and technologies, as well as many private and public sectors, because this allowed me to see the common factors at work.

I have arrived at a notion of IT failure and success as measures of change, or the lack of it. IT Security fails because it comes last, because it is limited to technology, because it is reduced to compliance, risk management, and protection. These terms will be explained in full in the following chapters.

Technological belief in “progress,” defensive reasoning, ideological thinking and action, and unconscious behaviour doomed IT Management and Information Security from the beginning. There is a spurious alignment of IT Security that makes people focus almost exclusively on risk management (risk “reduction”), so that Security becomes synonymous with “perceived secure operation,” and risk becomes synonymous with “perceived risk.”

In the same way that microeconomics should be studied in the context of national and international economics, Information Security should be understood in the frame of IT economics, and IT economics should be made dependent on microeconomics arguments. In this sense, Security decision-making operates on a dependent, derivative group of technologies and disciplines. Moreover, Identity Management is in turn doubly dependent as it is itself embedded in Information Management in general.

Bound by these circumstances and devoid of a proper link to business decision making, Identity Management becomes a reactive, administrative discipline. For the same reasons, Security ends up as a discourse of exclusion, control, monitoring, and retribution. I sense these perceptions will soon be widely accepted.

All errors derive from wrong methods, but no method can guarantee the intended results. A well-founded method is one that is complete, one that balances theory and practice and, essentially, all aspects of a problem. An error is usually defined as a decision that leads to a loss of something valuable. In practice, though, good decisions may lead to negative results too, and good results may be attained by flawed decision methods. This arises from the nature of decision-making, as the results of an action depend on the purposes of the decision-maker, and not on the procedures followed.

To achieve good, satisfactory results, a decision has to be complete, covering all the perspectives of action. Decision methods must be judged not principally by their results in a specific instance, but by correlating methods to purposes.

Human action is completely determined by partial knowledge, and fundamental sources of uncertainty are human actions themselves. While we can seek to reduce uncertainty and risk with decision procedures, we cannot eliminate it. When we decide, we necessarily accept uncertainty, take risk, share risk, and assign trust to other participants in action.

This principle is present in IT Security in a most elementary form: almost all Security disciplines are predicated on access controls. It is assumed that good security is equivalent to a framework of policies and measures to keep unauthorised access from occurring. Thought outside of this conception is very rare.

This work addresses the problems of decision-making in Security and Identity Management in the context of business and organizational transformation. To date, Security disciplines have been dominated by a focus on “protection” and “access control” with a secondary focus on business models and strategies. This is why we need to redefine IT and Security management.


The Missing Discourse

My work focuses on the critique of techno-centric discourses in IT and Security management. These “mechanistic” discourses arise from a long period of explosive adoption of electronic computing.

I see two main “discourses” operating across the IT professions: the discourse of Risk Taking and Trust Definition (or the discourse of the owner), and the discourse of Risk Avoidance and Trust Enforcement (or the discourse of the technologist). A partial analysis of these discourses can be found in some of my previous publications.[iv]

A key finding of my research, though, is that while these two discourses are “present” in every aspect of IT management, the current dominance of technological ideologies causes a very specific transference of responsibilities from the “asset owners” (the legal proprietors) to their managers, and from these to the “experts” (the implementers of the technology).

This transfer of responsibilities appeared on top of the historical ascent of the managerial classes early in the 20th century, but then accelerated as electronic computing substituted many productive functions and created new layers of managerial workers. The producer became a mediator. The problematic discourse that dominates Information Technology is present in other areas of organizational life and business, but it is particularly clear in IT and Security management through the serious problems it generates.

The most interesting result of this double transference is the lack of a discourse of ownership across all levels of the enterprise. This phenomenon has been observed recently at the level of business management, with the prevalence of “absentee” owners in the banking sector, but the same phenomenon is present at all levels of the organization.

In IT and Security management, there is a “missing discourse” characterised by the fact that the “business operator” or manager can act only as a representative of the owner, in a formal capacity and never as a real, committed proprietor of the business.

It is my aim to show how the lack of the “discourse of the owner” has incalculable consequences for the planning, analysis, design, and implementation of business programmes, and in particular for IT and Security management. In essence, what we see is the absence of a discourse of the owner, or the Master Discourse[v]; this lack of ownership is the primary catalyst for all organizational failures and for the permanently unsolved problems in Information Security and IT in general.

The absence of the Master Discourse is an absence at the level of experience, but not at the level of the structure, because the discourse of technology (“science”), in the foreground, actually presupposes the discourse of the Master, mimics the discourse of the Master, and responds to all challenges speaking in the name of a Master that cannot be seen. There is an extreme level of faceless technological control, executed according to a series of assumptions but where no single individual takes ownership. This in fact means that nobody takes responsibility for failure and all “progress” is illusory.

This state of being can also be described as a situation where truth is unspeakable, not only because it is psychologically hidden or negated (as rooted in the absence of the Master), but also because it is quietly but effectively banished by corporate and “professional” ideologies and structures. Secrets become lies, and lies become secrets. The differences between seeming and being are inverted and reflected through the chain of discourses.[vi]

In this context we can speak of a world of “hostile cooperation,” a Hobbesian state of “all against all” in corporate life. This state is not necessarily malignant, as it arises from natural conditions of human society, but many of its consequences are negative. The essence of this state is that cooperation exists only when there is negotiated advancement of the “consensus.” In practice, that means there is effectively an obstacle to any action not driven by the promotion of private, subjective “interests.” The human person does not have a place.

At a different level, this can be conceived as the prevalence of incomplete action, where the dominant actors and their intermediaries work only to control waste and loss with very short-term goals in mind. A wider analysis of these matters could lead to a critique of capitalism, or more precisely of the contradictions between some forms of capitalism and how these become obstacles to the development of regional and national markets. It must be emphasised that, in any case, incomplete action and short-term capitalism are consequences of the loss of personal context and social coherence.

What does this have to do with IT management and Security? As I will show in this book, there is a top-down causality in our professional activity, determined first by history, then by economics, and only at the end of the chain by technology. For example, if we propose the need for a balance between the discourses of risk and trust (risk taking and trust enforcement), we soon have to recognise that these positions are made impossible by the lack of the discourse of the Master. This absence has nothing to do with IT or Security. It is rooted in history and economic structures.

This is another way of saying there is an increasing complexity of business and life in general, and that no phenomenon can escape these processes. Techno-centric ideologies tend to assume they are free from historical factors, but that is precisely the symptom we need to address.

In the past “progress” meant a development of the division of labour, i.e. new technologies, which were practices of social groups differentiating themselves in the context of society. There was a sense that social and technical developments were interdependent and supported each other. Today, that sense is lost and may never be recovered.[vii]

Global Trends in IT and Security Management

The general trends transforming business also transform Identity and Access Management. This has to be thoroughly understood so that we do not continue thinking there is some intrinsic value in “doing” Identity Management. It is not intrinsically better to apply a “trust management” perspective than to focus on “risk management” as in the past. The reason is that Identity Management is completely dependent on the fate of IT as a whole, and this in turn is entirely subsumed in the direction business may take.

Overarching and global trends will determine what happens with Information Security and Identity in IT:

  • The protection and compliance focus, which Identity Management inherits from the Security Domain, will not disappear, but it will have a lesser role than it has now.
  • Centralised control models over identities will be reserved for very restricted areas of the IT infrastructures, while at the same time organisations implement federated and decentralised assurance services.
  • Privacy and Data Protection concerns will be seen as essential, but increasingly not as a central management task; they will instead be based on individual choices and different varieties of identity.
  • Identity Management as a service will experience rapid adoption, but a single model will not exist. Organisations will have partly hosted and partly on-premises solutions.
  • The intellectual structure of Security and I&AM will change, moving from a focus on Risk Management, to a balance of Risk and Trust Management.
  • Security will rely even more on defence in depth and on a variety of identities and identity assurance levels, while deploying risk-based and attribute-based controls.
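The last trend, risk-based and attribute-based controls, can be illustrated with a minimal sketch. The attribute names, the policy rules, and the `abac_decide` function below are all hypothetical, standing in for whatever attributes a real deployment would define:

```python
# Minimal attribute-based access decision: grant only when subject,
# resource, and context attributes jointly satisfy the policy.

def abac_decide(subject, resource, context):
    rules = [
        subject.get("department") == resource.get("owning_department"),
        subject.get("assurance_level", 0) >= resource.get("required_assurance", 0),
        context.get("network") in ("corporate", "vpn"),  # contextual, risk-based signal
    ]
    return all(rules)

subject  = {"department": "finance", "assurance_level": 2}
resource = {"owning_department": "finance", "required_assurance": 2}

print(abac_decide(subject, resource, {"network": "vpn"}))   # True
print(abac_decide(subject, resource, {"network": "home"}))  # False: context lowers trust
```

The contrast with a classic role check is that the decision varies with identity assurance and context, so the same user can be granted or refused depending on circumstances, which is precisely what a variety of identities and assurance levels implies.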

When considering the future it is essential to look beyond the IT disciplines. Identity Management in particular needs to stop thinking about itself, and stop producing more and more detailed sub-specialities and taxonomies for user administration: provisioning, access, roles, governance, or compliance. Looking beyond itself means setting its direction in accordance with wider aims at the level of the economy, industrial sectors and specific business operations.

If we seek the links with higher and wider economic and social goals, we rapidly find there is a fundamental problem, rarely recognised in professional circles. The problem is the lack of a Security Theory. In other words, we have only “models,” “principles,” “frameworks” and “best practices,” but no theory at all. Better said, we have a situation where Security theories are implicit, in the form of assumptions and presuppositions, but are untested or simply taken for granted. These principles and assumptions are embedded in well-established professional training manuals and technology documentation. One very visible case is the overwhelming prevalence of the definition of Security as risk management. This has remained unchallenged for many years, with very few exceptions.[viii]

Contrary to the dominant ideology, my work in Security and Identity Management is focused on innovation and change. While this emphasis is essential in business in general, this is particularly relevant for Security. I see it as my mission to work with experts and clients, sometimes against conventional “wisdom,” analysing the problems in depth and bringing in new approaches and understanding.

I focus my message on conceptual integrity and completeness, and my goal is to offer a coherent view and strategy that makes sense for business leaders, government institutions and the public in general. It is my experience that too frequently organisations are affected not by lack of knowledge, but by lack of forward thinking and anticipation. This is particularly the case in the Security arena, where professionals tend to be “defensive” and “static” with the consequence that they end up resisting change and improvements.

I developed the approach presented in this book over many years as a strategist and trouble-shooter for major information technology organisations. I found that few problems in Security practice arose from technological causes; most arose from the uses of technology, and especially from organisational factors. In response, I formulated a systemic solution method that highlights the blocking factors in the organisation first, and addresses the technological levels second. Many organisations still have difficulty in seeing Security as part of the Business Model, or as part of their information management strategy, but the moment is coming when a Security Architecture will be an essential part of corporate strategy.

I propose a coherent model encompassing four Security disciplines, those of Direction, Selection, Protection, and Verification. This is my model of “complete action” in the Security Arena. Developing this model is my central motivation in writing this book.

Theories of Error

Contemporary management practices are “theories of error.” They can rightfully be called so in two senses: first, because they are ideologies purposely constructed for the management of failure, and second, because they are impervious to error. While on the surface these theories appear as reasonable technologies for controlling the outcomes of organised human practices, in the course of their implementation they become little more than justifications for limited action, if not theories of how limited, incomplete action can be passed off as successful.

Management Theories present themselves as technologies of resource optimization, as well as success-oriented and excellence-seeking methodologies. Ideally, management covers various aspects of human action:

  • “Delivery” – The concern here is how to deliver change to the business quickly and cost-effectively. In IT, this means getting technology investment under control and optimizing it by avoiding waste and inefficiencies. The ultimate goal, still unattained on the whole, is to align technology investment with the business strategy.
  • “Change” – The concern here is to ensure that technology is an enabler for organizational change. The question is how to use technology to facilitate change, while reducing costs and exploiting opportunities for capital accumulation.
  • “Implementation” – The concern here is to minimize technical risk for service delivery. The question is how to ensure timely and economical technology solutions that are fit for purpose and correspond to expected standards.
  • “Budget” – The concern here is to manage investment and expenses so as to maintain, or preferably increase, the profitability of capital. The question is how to exploit technology so that investments are controlled, previous investments are fully exploited, and the expected financial results are delivered.

Looking into these requisites of business management, which are applicable to IT and Security, I often call for a balanced approach. These four principles all have to be present in a good Security Management initiative. Nevertheless, it is also true that Security by itself cannot reach such a balance because it is a dependent discipline, a subsidiary of other levels of management. All the limitations and intentions present in business management express themselves in Security Management.

What we find in the market is that, on the whole, business managers frequently make sub-optimal or even wrong decisions with respect to IT and Security. The negative experiences in Identity Management are particularly serious. It is therefore necessary to develop our professions to face up to and resolve these problems, which arise from the passive and techno-centric advice that is now prevalent. It is valid to ask in this context whether business demonstrably suffers because of limited, incomplete decision processes in the Security space. Some specialists tend to think that decisions may well be wrong from a security perspective yet not lead to any negative outcomes outside their perception of acceptable risk. In that case, are such decisions actually wrong?

What would you think of a multi-million Pound Sterling investment that sits on the shelf for years while the customer pays maintenance for a system that never was implemented? Is an investment intended to enable regulatory compliance that was never delivered and was instead replaced with additional investment on custom software not a failure? What can we say about a choice of technology that does not match roughly 90% of the business requirements? What would you call a solution selected after it was rejected by three successive technical evaluations or Requests for Proposal?

Perhaps some would argue that, stretching things, organisations are taking their fate into their own hands and assuming some risks; or perhaps they are willingly entering such situations and thereby absorbing specific costs for specific risk levels. Surely that is not very satisfactory. The real solution to these problems lies in having the ability to estimate the economic outcome of IT and Security decisions. For each of the bad decisions mentioned above there are evident financial costs:

  • The costs of paying licences for an unwarranted technical choice.
  • Wasted time and effort in technical assessments that are subsequently ignored.
  • Additional investment required for remedial work to substitute the failed implementation.
  • Additional risks incurred by delaying regulatory compliance.
  • Additional costs caused by the perpetuation of a bad security arrangement for years on end.
  • Productivity losses due to complexity of the manual operation of the system and excessive use of personnel for security operations.
  • Wasted effort in developing in-house solutions (custom code) that will have to be discarded when the security system is implemented.

Similar examples of waste, over-spending, mismanagement and increased risk, cost and disadvantage can be found across the IT and Security market. It is not a matter of being or not being a security “purist.” The management errors are too evident and numerous to be ignored.

While the formal, “final” decision is a business matter, and while the business executives carry the responsibilities arising from those decisions, IT and Security experts need to be able to recommend the best decisions from an economic point of view. Not doing so, and taking refuge in the notion that it is not our problem, is a dereliction of duty. We need to speak the language of business and operate under business, capitalist criteria. The language of business is a financially-centred, calculation-based language. It is the language of economics. We need tools to measure security investments, in the same way as we measure other aspects of the effectiveness of our IT strategies.
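One simple tool of the kind called for here is the Return on Security Investment (ROSI) calculation, which rests on the Annualised Loss Expectancy (ALE): expected single loss multiplied by the annual rate of occurrence. The figures below are invented purely for illustration:

```python
# Back-of-the-envelope security investment measure. All figures invented.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualised Loss Expectancy: expected loss per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

def rosi(ale_before, ale_after, annual_cost_of_control):
    """Return on Security Investment: net reduction in expected loss,
    expressed as a multiple of the control's annual cost."""
    return (ale_before - ale_after - annual_cost_of_control) / annual_cost_of_control

ale_before = ale(200_000, 0.5)    # £100,000 expected annual loss without the control
ale_after  = ale(200_000, 0.25)   # the control halves the incident rate
print(rosi(ale_before, ale_after, 25_000))  # 1.0: the control pays for itself once over
```

Such figures are estimates, not certainties, but they place a Security decision in the same financially-centred, calculation-based language used to judge any other investment.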

Following this, it makes sense to review the concepts of information, system, organization, security, and identity to make sure they correspond to clear micro-economic ideas. Too frequently, we take these concepts for granted, but on closer analysis, it is not self-evident what we, the IT experts, understand by information, or by systems security. Therefore, it is necessary to define these terms.

To begin with, we need a definition of “information,” not a general one, but one useful for information security. Information security is not an area unique to itself. It is just another business concern, comparable to Continuity Management, Availability Management, Configuration Management, and other related processes. If it appears as something “different,” it is only because of historical reasons. There is still some trendiness in seeing Security as something special, but nothing stops us from evaluating Security strategies and investment decisions in the same way we assess any other business financial and operational process.

So far, my focus has been to point beyond standard risk-based analysis, not because of a desire to negate the importance of risk analysis, but because of the specific goal of investigating how investment decisions are actually made, with or without risk analysis, and independently of its depth and quality. As probably all Security professionals have experienced, risk analysis and risk management constitute only one more factor in the investment decision process, and do not seem to be very effective in guiding investment decisions.

Years ago, while working for a global consultancy firm, some senior managers told me there would be very little attention paid to my research programme, and that I should dedicate my energies to something different. I didn’t find that discouraging. It actually showed how serious professionals can become trapped in a haze of self-sustaining ideologies, until it is too late and everybody suffers from the lack of alternatives.

We can only become better professionals if we understand the context of our efforts. How is a Security strategy designed? How are investment decisions made? What is at stake? What is really our purpose when devising an Enterprise Architecture?

Enterprise Architecture

Around the same time I was having such debates, I started following the work of John Arnold, a British Security Architect. In 2006 I adopted some of his categories for Security strategy, summarised in the principles of “Select, Protect and Detect.”[ix] As part of the Jericho Forum, John Arnold has made very important contributions to the theory of Collaboration-Oriented Architecture or COA.[x]

Arnold addresses enterprise architecture directly as “collaboration” oriented. Many types of organisations are already in a position where the only viable approach is “collaboration-oriented,” beyond the old security style based on “containers and perimeters” defined by either “letting you in or keeping you out.”

Arnold correctly identifies the current trends in mobility and agility, de-perimetrisation, outsourcing, cloud-based solutions, demand for scalability and more complex access policies.

On the other hand, while the COA approach makes full sense of these trends, it still does not address the investment decision process at the economic level.

As Arnold explains, the term “collaboration” generalises the concept of “contract”: an agreement with elements of offer and acceptance, a price, criteria for legality, and mutual understanding of what the contract is about. This is a good approach and I believe it should be exploited fully. To deepen the notion of collaboration contracts, it is necessary to consider the relationships between rights and obligations, as well as those between liabilities and immunities. I will cover this in a chapter of this book when discussing the theories of W. N. Hohfeld[xi].

Beyond these points, perhaps the most important concept proposed by Arnold is that of a “lifecycle of the collaboration” (Search, Negotiation, Fulfilment, and Termination), because it maps to his previous work around the notions of Select, Protect and Detect indicated above. Sadly, these older concepts are not clearly present in Arnold’s recent work. I have based my model on an extension of the ideas of Select, Protect and Detect, and I explain this in a separate chapter under the theory of the Security Perspectives (Direct, Select, Protect and Verify).

Adopting a wider conceptual framework, as Arnold has done, links Information Security with business and economic considerations. Technology moves to the background when the goals and principles of Security are determined primarily by microeconomics (more precisely, by capital profitability and capital investment decisions).

In spite of the positive aspects that I note here, Arnold’s work does not bring a new answer to the definition of Trust. Arnold explains that “collaborations create rights and obligations” and that trust is an essential precondition for collaboration among people and organisations.

In my view, this is only half of the reality: trust is also a post-condition, in the sense that trust is “defined” at the beginning of the collaboration lifecycle, but then has to be established, enforced, and monitored. Therefore, it is not an ingredient that comes first but a permanent facet of every side of the trust lifecycle. My own work on this matter shows the four sides of this process.[xii]

Also in my view, risk is intrinsic to the trust lifecycle and runs in parallel to the instances of trust management: for example, the instance of “trust definition” has an immediate correlate, which is “risk-taking.”

The conventional approach treats risk as inherently statistical and negative: risk is associated negatively with “events” and “actors” which affect “assets.” I would suggest that risk also arises from the risk-taking decisions of the decision makers; hence, it has positive values in at least three of the four moments we can see in the collaboration lifecycle.

Arnold’s “Trust Process” or collaboration lifecycle lends itself to a close mapping to the four-fold model of Trust Definition, Trust Allocation, Trust Enforcement, and Trust Monitoring that I have been developing in the past six years.

The steps proposed by Arnold can be mapped as follows:

A) Need Identified + Searching + Potential Partner Identified = Trust Definition

B) Negotiating + Collaboration Agreed = Trust Allocation

C) Fulfilment + Resource Access = Trust Enforcement

D) Fulfilment Events + Analyse Performance + “detect good or bad performance” + Manage Reputation = Trust Monitoring.[xiii]
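The mapping above can be sketched as a simple data structure. This is only an illustrative encoding of the correspondence between Arnold's lifecycle steps and my four trust phases; the identifier names are my own and not part of either model.

```python
from enum import Enum

class TrustPhase(Enum):
    """The four-fold trust model: Definition, Allocation, Enforcement, Monitoring."""
    DEFINITION = "Trust Definition"
    ALLOCATION = "Trust Allocation"
    ENFORCEMENT = "Trust Enforcement"
    MONITORING = "Trust Monitoring"

# Arnold's collaboration-lifecycle steps mapped to the four trust phases
# (groups A-D in the text above).
LIFECYCLE_MAP = {
    "Need Identified": TrustPhase.DEFINITION,          # A
    "Searching": TrustPhase.DEFINITION,                # A
    "Potential Partner Identified": TrustPhase.DEFINITION,  # A
    "Negotiating": TrustPhase.ALLOCATION,              # B
    "Collaboration Agreed": TrustPhase.ALLOCATION,     # B
    "Fulfilment": TrustPhase.ENFORCEMENT,              # C
    "Resource Access": TrustPhase.ENFORCEMENT,         # C
    "Fulfilment Events": TrustPhase.MONITORING,        # D
    "Analyse Performance": TrustPhase.MONITORING,      # D
    "Manage Reputation": TrustPhase.MONITORING,        # D
}

def phase_of(step: str) -> TrustPhase:
    """Return the trust phase to which a collaboration-lifecycle step belongs."""
    return LIFECYCLE_MAP[step]
```

For example, `phase_of("Negotiating")` yields `TrustPhase.ALLOCATION`, reflecting group B of the mapping.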

The best aspect of Arnold’s work is that he sees organizations as collections of collaborations. This is precisely the outcome of an era of globalization and transnational capitalism, together with the extensive utilisation of electronic computing and commerce. Arnold notes that current Security tools and architectures aim at securing “individual” accesses. He is right to see this as fundamentally wrong, and he defends making access control decisions at the level of a complete partner-to-partner relationship, i.e. a collaboration agreement.

I believe that this approach is the root of many improvements in IT and Security management, perhaps going even beyond “collaboration-oriented architecture,” addressing what Security should be in a very wide sense. Thanks to this approach, security itself becomes collaborative, inasmuch as assurance levels, access routes, processes and data governance are defined and managed outside of the limited box of traditional techno-centric management.

Wasting Time in an Impossible Mission

“Wasting time in an impossible mission”: that common phrase summarises the actual state of affairs in the IT sector, perhaps most of all on the user side, whether the organizations using IT systems or the end users, as employees, citizens or consumers.

Some so-called “new” approaches to Security have tried to explain what is happening by saying that behavioural aspects have been underestimated[xiv], but we would need to go deeper to reveal that no amount of clever “behaviourism” will resolve the problem if technology continues to be the focus of our profession. Let us be frank: in many cases, we continue overvaluing technologies that are actually part of the problem.

An extreme view of this exists, represented by philosophers who have identified the mere use of technology as a threat to human culture and being[xv]. My position is not extreme in this sense, as I do think there are technologies (as I will demonstrate in this book) that can readily address the problems discussed here.

Technology guarantees uniformity, standardised futures, says T. J. Rivers. We see the world “in need of alteration,”[xvi] and act upon it not on principle but on circumstance. This may be true at one level, once the technological cause is already established in the world, but it would be an error to see technology as “the” cause of the loss of human purpose. Contrary to this, I see technology as a product of people transformed by capitalism, not capitalism as a product of people transformed by technology. More critically, capitalism itself is a product and not the proverbial “structural cause” of all the problems of humanity.

In a pre-capitalist and “cultural” era, human actions arose from personal, family, tribal, and ethnic context, while in late capitalism “being” (using Rivers’s terminology) appears as arising from human activities (“actions”). I would add precision to this by saying that current human activities, in the post-cultural era, are partial and incomplete, and in this sense they are “technological,” i.e. devoid of context and content, generic, global, a-cultural if not even anti-cultural. This historical envelope is characterised by mass-dispersed mediocre activities that are nevertheless pragmatic and normal. Not a disease, but the necessary product of a wandering globalised humanity.

There is nothing wrong in seeking the perfection of technology if we fully understand that it is the product of a previous loss, the loss of complete human action; but the key is that we see technology for what it is, and we use it with cold comprehension and no illusions.

After all, electronic computing technologies are wheels and cogs in a universal machine that is the harbinger of a global post-cultural world, where the only danger is that we continue believing and following new monolithic ideologies in place of the old ones. We should stop being driven by whatever technology says is possible; technology needs to be challenged to achieve freedom and purpose.

Information Ideology

In 1986, Theodore Roszak, one of the original philosophers of the new technical era, nevertheless denounced a “new ideology, the ideology of computer technology and information science which has become almost a new substitute religion.”[xvii] He called it “cyberism,” the cult or creed of information.

Roszak understood the key premise of cyberism “is that information and technologies connected with it – notably the personal computer and the home television set – are creating a new social order. The data banks and the smart machines – not the working classes, not the intellectuals of academia, certainly not the politicians of the capitals and the courthouses – are now being heralded as the true drivers of revolutionary change. This is the core belief around which all the other elements have grown – opinions about everything from economics to human nature – which it takes to build an ideology.”[xviii]

I equate Roszak’s “cyberism” with the Mechanistic Perspective in Information Security (as I explain in this book), which relies entirely on machines and processes to “handle” information. This perspective assumes that information can be conceived as an object in itself, as a flowing substance which is objectively, physically valuable.

The Mechanists unwittingly support and are supported by an ideology that is geared towards the reign of technologies and the use of technologies for the sake of it. On the other hand, this is not only the position of the technologists or IT experts, but also that of technology vendors and many consultancy firms.

W. Truett Anderson, commenting on Roszak’s work, writes, “For cyberists, information is not only a political force; it is also the new economic power that supplants or transforms all the other forms of capital.” Daniel Bell, one of the first of the information theorists, called information “the strategic resource and transforming agent of the post-industrial society,” the central pivot in a “new social framework based on telecommunications.”[xix]

Cyberism, the “idolatry of information” as noted by Truett Anderson and Roszak, extends into politics, business, management and all human relationships. For example, private and public management skills, as taught in management schools, are unthinkable without reference to “information technologies.”

It is in this context that I propose to investigate what IT management is and what Security means in that frame. Not doing this, and simply accepting the given truths of the information age, would decisively foreclose any change and any hope of finding new ways for our professions.

My long experience in the IT world has convinced me that there is a deep, pervasive, persistent, ingrained cause of error and failure in judgment, and that business planning cannot be practiced outside of a coherent philosophical understanding of human nature and history.

Philosophical work is not common in the techno-centric world. It is sometimes rejected without a thought, but I have never found a valid reason for not thinking philosophically about my own practice.

A good approach is to start by revealing the metaphors and paradigms that operate in day-to-day IT Management and Security thinking.

The “root-metaphor” theory of S. Pepper[xx] and Hayden White[xxi] is very useful for organising a review of the IT world as a whole, and most of my work in this area is indebted to these thinkers. For example, I mapped the trends found in IT and Security to Pepper’s four “root metaphors.”[xxii]

Similarly powerful approaches are those of G. Morgan and G. Burrell[xxiii] and H. Dooyeweerd[xxiv]. A more direct approach to paradigmatic analysis is that of Klein and Hirschheim, whose text should not be missed by any IT practitioner.[xxv]

Every human endeavour has an element of error, due to uncertainty and “natural” uncontrollable factors, but when analysed more deeply, we can see that the central element of error, beyond circumstances and accidents, is the tendency towards “self-serving cognitive distortions” or “cognitive conceit.”[xxvi]

This is precisely what I have found in my experience and the analysis of IT and Security strategies and decisions: we can see the coexistence of multiple “paradigms” and “metaphors” at play in corporate/organisational life, and how these tendencies affect investment and design decisions.

This analysis also shows how the philosophical motives of these tendencies (usually unconscious) control the ideologies at play in all spheres of business and technology.

I do not think that every error or every problem can be reduced or should be reduced to the “noetic effects” of the underlying paradigms, but this principle certainly illuminates the way forward. My investigation will show in some detail how we can make sense of error in such a specific area as organisational decisions and security investment, why error and failure in judgment are so persistent, and why organisations fail repeatedly in their choices.

Unilateral thinking, bound to one or the other paradigm, stifles human reasoning, but not absolutely, not totally. This tendency affects us less when we operate at the level of the empirical, the numerical or the arithmetical; but its effects become overwhelming when we need to think about purposes, strategies, and longer-term durations.

Moroney quotes theologian Emil Brunner: “the more we are dealing with the inner nature of man, with his attitude to God, and the way in which he is determined by God, it is evident that this sinful illusion becomes increasingly dominant.”[xxvii]

Let the reader not be distracted by the reference to religion in this context, as this is just one way to understand and express purpose. There is no philosophy without purpose, and purpose does not need to be religious. Both believers and unbelievers will have had the experience that it is much more difficult to ascertain truth when we are dealing with “the inner nature of man” (i.e. with the personal, contextual nature of the individual) than when we deal with consensual, socially shared objects.

I am painfully sure that unethical thinking, opportunistic marketing, and failed strategies and practices are rooted in this problematic lack of insight into the “inner nature,” though this rarely means the same actors are either ignorant or inept in manipulating the symbols of commerce and social interaction.

Therefore, the essence of the matter is judging results by purposes and complete actions, not by apparent short-term “success.” In the same way as ideologies affect judgement, unilateral thinking in practical matters inverts reality, and error appears to be “pragmatism,” dissolution appears to be “normal,” lack of direction appears to be “flexibility” and lack of meaning is exalted as “strategy.”

I have limited this book to the problems around IT management and Information Security, to the errors caused by undue focus on one or the other aspect or modality of action and thought. It is nevertheless important to remember, when reading these pages, that my ultimate aim is not to just achieve balance among philosophical or cognitive paradigms, but to understand these and master them so that we are liberated from the unconscious laws that enslave our minds.


[ii] J. Forrester, 1968; Flood and Jackson, 1991

[iii] Theodore Roszak, “The Cult of Information”, 1986

[iv] http://carlos-trigoso.com/2011/04/04/what-security-shall-be/

[v] Jacques Lacan, “Encore” (Séminaire Livre XX), Paris 1975, p.32

[vi] Corporate and professional discourses orbit around the terms of “difference” (value as sign and status), “equivalence” (value as exchange/relation – market/commodity), “practicality” (value as use/object – utility/instrument), and “ambivalence” (value as symbol/gift).

[vii] Today “progress” just means cost and complexity reductions in the mind of the representatives of the absent master. Contrary to this, the “return” of ownership would mean increase of complexity and “cost” i.e. increased investment and organic transformation.

[viii] Donn Parker, “Beyond Risk Based Security”, 2006

[ix] John Arnold, “Security Services Model”, 2006

[x] https://collaboration.opengroup.org/jericho/presentations.htm

[xi] Wesley Newcomb Hohfeld (1879 – 1918)

[xii] http://carlos-trigoso.com/2011/04/04/what-security-shall-be/

[xiii] https://docs.google.com/document/edit?id=1IEVxlJesGn7h_vK1pa4yvjXY59ERtVqzn6yvX6mAlaM

[xiv] http://newschoolsecurity.com/2011/12/the-new-school-of-security-predictions/

[xv] Ulises Mejias, http://blog.ulisesmejias.com/2006/06/03/technology-without-ends-a-critique-of-technocracy-as-a-threat-to-being

[xvi] T.J. Rivers, 1993, quoted by U. Mejias

[xvii] Theodore Roszak, “The Cult of Information”, 1986

[xviii] http://shkaminski.com/Classes/Readings/Roszak.htm

[xix] Walter Truett Anderson, http://articles.baltimoresun.com/1996-04-03/news/1996094015_1_ideology-political-parties-smart-machines

[xx] Stephen Pepper, 1891-1972, http://people.sunyit.edu/~harrell/Pepper/Index.htm

[xxi] Hayden White, 1928, http://www.phillwebb.net/topics/History/White/White.htm

[xxii] http://carlos-trigoso.com/2010/04/01/security-perspectives-protect-detect-direct-select/

[xxiii] Gareth Morgan, Gibson Burrell, “Sociological Paradigms and Organisational Analysis”,1979

[xxiv] Herman Dooyeweerd (1894 – 1977)

[xxv] R. Hirschheim, H.K. Klein, “Four paradigms of information systems development” ,1989

[xxvi] Stephen K. Moroney, “The Noetic Effects of Sin”,2000

[xxvii] E. Brunner,”The Christian Doctrine of Creation and Redemption”. See also: http://www.asa3.org/ASA/topics/ethics/CSRSpring-1999Moroney.html

Draft Copy – This is work in progress – Do not quote without written permission from the author –