Thursday, August 28, 2014

Architectural Security aspects of BGP/MPLS

The inherent benefits of MPLS (Multi-Protocol Label Switching) have led to its widespread use for providing IP VPN services. With the emerging trend of connected systems, a global enterprise today is well connected with its partners, with MPLS being the preferred choice. The Border Gateway Protocol (BGP) is used to interconnect autonomous systems by exchanging routing information across them. The emergence of the Multiprotocol Extensions and other variations of BGP has furthered the choice of MPLS VPNs. On the same lines, security concerns around such networks are also on the rise, and specific customer demands in terms of security are emerging as they experience data breaches and security incidents.

The objective of this blog is not to explain BGP/MPLS as such; instead, let us examine how BGP/MPLS addresses the typical security requirements. The following sections have been extracted from RFC 4381, published by the Internet Engineering Task Force (IETF) in 2006.


Address Space, Routing, and Traffic Separation

BGP/MPLS allows distinct IP VPNs to use the same address space, which can also be private address space. This is achieved by adding a 64-bit Route Distinguisher (RD) to each IPv4 route, making VPN-unique addresses also unique in the MPLS core. This "extended" address is also called a "VPN-IPv4 address". Thus, customers of a BGP/MPLS IP VPN service do not need to change their current addressing plan. The address space on the CE-PE link (including the peering PE address) is considered part of the VPN address space. Since address space can overlap between VPNs, the CE-PE link addresses can overlap between VPNs. For practical management considerations, SPs typically address CE-PE links from a global pool, maintaining uniqueness across the core.
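To make the Route Distinguisher mechanism concrete, here is a minimal Python sketch (an illustration only, not actual BGP code) that builds the 12-byte VPN-IPv4 address by prepending a type-0 RD (2-byte type, 2-byte ASN, 4-byte assigned number) to the 4-byte IPv4 address. The ASN and assigned numbers are made-up example values:

```python
import ipaddress
import struct

def vpn_ipv4_address(asn: int, assigned: int, address: str) -> bytes:
    """Build a 12-byte VPN-IPv4 address: an 8-byte type-0 Route
    Distinguisher prepended to the 4-byte IPv4 address."""
    rd = struct.pack("!HHI", 0, asn, assigned)        # 8-byte RD, type 0
    ip = ipaddress.IPv4Address(address).packed        # 4-byte IPv4 address
    return rd + ip

# Two VPNs using the same private address remain distinct in the core,
# because each carries a different RD:
a = vpn_ipv4_address(65000, 1, "10.0.0.1")
b = vpn_ipv4_address(65000, 2, "10.0.0.1")
assert a != b and len(a) == 12
```

This is why overlapping private address plans do not collide in the MPLS core: uniqueness comes from the RD, not from the IPv4 address itself.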

On the data plane, traffic separation is achieved by the ingress PE pre-pending a VPN-specific label to the packets. The packets with the VPN labels are sent through the core to the egress PE, where the VPN label is used to select the egress VRF. Given the addressing, routing, and traffic separation across a BGP/MPLS IP VPN core network, it can be assumed that this architecture offers, in this respect, the same security as a layer-2 VPN. It is not possible to intrude from a VPN or the core into another VPN unless this has been explicitly configured. If and when confidentiality is required, it can be achieved in BGP/MPLS IP VPNs by overlaying encryption services over the network. However, encryption is not a standard service on BGP/MPLS IP VPNs.

Hiding of the BGP/MPLS IP VPN Core Infrastructure

Service providers and end customers do not normally want their network topology revealed to the outside. This makes attacks more difficult to execute: if an attacker doesn't know the address of a victim, he can only guess the IP addresses to attack. Since most DoS attacks don't provide direct feedback to the attacker, it would be difficult to attack the network. It has to be mentioned specifically that information hiding as such does not provide security; however, in the market this is a perceived requirement.

With a known IP address, a potential attacker can launch a DoS attack more easily against that device. Therefore, the ideal is to not reveal any information about the internal network to the outside world. This applies to the customer network and the core. A number of additional security measures also have to be taken: most of all, extensive packet filtering. For security reasons, it is recommended for any core network to filter packets from the "outside" (Internet or connected VPNs) destined to the core infrastructure. This makes it very hard to attack the core, although some functionality such as pinging core routers will be lost. Traceroute across the core will still work, since it addresses a destination outside the core.

Being reachable from the Internet automatically exposes a customer network to additional security threats. Appropriate security mechanisms have to be deployed such as firewalls and intrusion detection systems. This is true for any Internet access, over MPLS or direct. A BGP/MPLS IP VPN network with no interconnections to the Internet has security equal to that of FR or ATM VPN networks. With an Internet access from the MPLS cloud, the service provider has to reveal at least one IP address (of the peering PE router) to the next provider, and thus to the outside world.

Resistance to Attacks

To attack an element of a BGP/MPLS IP VPN network, it is first necessary to know the address of the element. The addressing structure of the BGP/MPLS IP VPN core is hidden from the outside world. Thus, an attacker cannot know the IP address of any router in the core to attack. The attacker could guess addresses and send packets to these addresses. However, due to the address separation of MPLS each incoming packet will be treated as belonging to the address space of the customer. Thus, it is impossible to reach an internal router, even by guessing IP addresses.

In the case of a static route that points to an interface, the CE router doesn't need to know any IP addresses of the core network or even of the PE router. This has the disadvantage of needing a more extensive (static) configuration, but is the most secure option. In this case, it is also possible to configure packet filters on the PE interface to deny any packet to the PE interface. This protects the router and the whole core from attack. In all other cases, each CE router needs to know at least the router ID (RID, i.e., peer IP address) of the PE router in the core, and thus has a potential destination for an attack.

A potential attack could be to send an extensive number of routes, or to flood the PE router with routing updates. Both could lead to a DoS, however, not to unauthorised access. To reduce this risk, it is necessary to configure the routing protocol on the PE router to operate as securely as possible. This can be done in various ways: 

  • By accepting only routing protocol packets, and only from the CE router. The inbound ACL on each CE interface of the PE router should allow only routing protocol packets from the CE to the PE. 
  • By configuring MD5 authentication for routing protocols. This is available for BGP (RFC 2385 [6]), OSPF (RFC 2154 [4]), and RIP2 (RFC 2082 [3]), for example. 

This avoids packets being spoofed from other parts of the customer network than the CE router. It requires the service provider and customer to agree on a shared secret between all CE and PE routers. It is necessary to do this for all VPN customers. It is not sufficient to do this only for the customer with the highest security requirements.
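As an illustration of the keyed-MD5 idea behind RFC 2385, here is a simplified Python sketch. It assumes the TCP pseudo-header and TCP header are already serialized as bytes (with the checksum field zeroed), which a real implementation handles in the kernel; the byte values below are placeholders:

```python
import hashlib

def tcp_md5_digest(pseudo_header: bytes, tcp_header: bytes,
                   payload: bytes, key: bytes) -> bytes:
    """Simplified illustration of the RFC 2385 keyed digest:
    MD5 over the TCP pseudo-header, the TCP header, the segment
    data, and the shared key, in that order."""
    return hashlib.md5(pseudo_header + tcp_header + payload + key).digest()

# The 16-byte digest travels in a TCP option; a receiver that does not
# hold the same shared key cannot produce a matching digest.
digest = tcp_md5_digest(b"\x00" * 12, b"\x00" * 20, b"OPEN", b"shared-secret")
assert len(digest) == 16
```

The point of the sketch is that the shared key never appears on the wire; only the digest does, so spoofed segments from hosts without the key fail verification.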

It is theoretically possible to attack the routing protocol port to execute a DoS attack against the PE router. This in turn might have a negative impact on other VPNs on this PE router. For this reason, PE routers must be extremely well secured, especially on their interfaces to CE routers. ACLs must be configured to limit access only to the port(s) of the routing protocol, and only from the CE router.

Label Spoofing

Similar to IP spoofing attacks, where an attacker fakes the source IP address of a packet, it is also theoretically possible to spoof the label of an MPLS packet. For security reasons, a PE router should never accept a packet with a label from a CE router. RFC 3031 [9] specifies: "Therefore, when a labeled packet is received with an invalid incoming label, it MUST be discarded, UNLESS it is determined by some means that forwarding it unlabeled cannot cause any harm."

There remains the possibility to spoof the IP address of a packet being sent to the MPLS core. Since there is strict address separation within the PE router, and each VPN has its own VRF, this can only harm the VPN the spoofed packet originated from; that is, a VPN customer can attack only himself. MPLS doesn't add any security risk here. The Inter-AS and Carrier's Carrier cases are special cases, since on the interfaces between providers typically packets with labels are exchanged. See section 4 for an analysis of these architectures.


There are a number of precautionary measures outlined above that a service provider can use to tighten security of the core, but the security of the BGP/MPLS IP VPN architecture depends on the security of the service provider. If the service provider is not trusted, the only way to fully secure a VPN against attacks from the "inside" of the VPN service is to run IPsec on top, from the CE devices or beyond. This document discussed many aspects of BGP/MPLS IP VPN security. It has to be noted that the overall security of this architecture depends on all components and is determined by the security of the weakest part of the solution.

Sunday, August 24, 2014

Perspectives of Business Reference Model

We are all witnessing the steady progress of the Enterprise Architecture (EA) discipline, and it is now well understood that EA is not just about IT infrastructure: Business Architecture (BA) forms an integral part of EA. Unlike in the past, when Business Architecture was used for eliciting requirements for IT systems, BA is now used to develop and describe the target business model and to work on a road map that will take the business towards that target. The Open Group, as part of its "World Class EA" series, has published a White Paper on the Business Reference Model with the objective of providing the needed help to organizations in developing BA assets and planning for the future.


The Open Group has developed the Business Reference Model to facilitate description of a business model through the five perspectives. The following diagram provides an overview of the structure and content of the BRM:

Image Source: The Open Group's World Class EA: Business Reference Model white paper.


Environment Perspective:

The Environment perspective addresses the context within which an organization must operate. It describes the external factors, such as the competitors and customers for an organization, in addition to the pre-established strategy defined by the organization for market positioning. This perspective is intended to describe why an organization is motivated to undertake particular courses of action.

The goal of understanding the business environment is to provide a good contextual knowledge base that informs the creation of effective architectures in the Value Proposition, Operating Model, and Risk perspectives.

The business challenge is to gain and exploit insights into the market, competition, and customer base that allow the organization to position itself optimally (described through strategy).


Value Proposition Perspective:

The Value Proposition perspective describes the offering produced by the organization in terms of products, services, brand, and shareholder value. It creates a belief among existing customers, prospective customers, stakeholders, or other constituent groups within or outside the organization where the value will be experienced – usually in exchange for economic value or some form of compensation.

The goal of understanding the value proposition is that it defines the customer experience and sets shareholder expectations. The value proposition also provides a baseline set of needs that need to be fulfilled by the Operating Model perspective. 

The business challenge is to develop a value proposition that is able to attract a suitable customer base, fulfil the needs of the customer base effectively, and generate sufficient benefit to satisfy shareholder expectations. All this needs to be achieved in a way that is consistent with, and reinforces, brand image and brand values.


The Operating Model Perspective:

The Operating Model perspective describes the resources at the disposal of the organization that will be deployed to generate the value proposition. This perspective is intended to describe how an organization will be able to deliver on its value proposition. Capabilities are the core enablers to operate the business from the perspectives of people, process, technology, and information.

The goal of operating model design is to allow executives and planners to evaluate the business through a wide variety of lenses and viewpoints in order to identify desired and enhanced states of the organization.

The business challenge is to identify the correct alignment of resources that will deliver the necessary customer and shareholder experience. Typical trade-offs to evaluate when structuring capabilities include centralization versus federation, matrix organization structures versus vertical integration, core versus context analysis, and process alignment versus competency alignment. The results of these trade-offs will produce different levels of efficiency versus agility versus stakeholder experience across different areas of the business.

The Risk Perspective

The Risk perspective identifies the uncertainties that may surround an organization in its delivery of the value proposition. This perspective is intended to describe the threats that face an organization from within and without. Typically, organizations model their architecture around the known, repeatable aspects of business operations. However, within a complex and volatile environment, unforeseen circumstances frequently occur in ways that may be extremely damaging to the business.

The goal of risk analysis is to gain a full understanding of potential scenarios that may adversely impact the business and then to prepare appropriately to address those risks in the event that they occur.
The business challenge of risk modelling is to ensure that risks are adequately understood (it is a great challenge to test for completeness in an exercise of identifying unlikely or unforeseeable scenarios), the impact of risk is appropriately quantified (again, challenging to accurately determine when there is limited precedent), and the mitigation steps for risks are appropriate to the risk level (in many organizations, over-compensation for risk can be as damaging as under-compensation, as valuable business activities are curtailed due to risk concerns).


The Compliance Perspective

The Compliance perspective represents activities that the organization must carry out in order to assure that the value proposition is delivered using an acceptable standard of business practice. This perspective is intended to describe the constraints that prevent an organization from acting in negative, destructive, or inappropriate ways. In many cases, compliance can offer opportunities for organizations to differentiate, by being first to access new markets by being compliant with new legislation.

The Compliance perspective acts in a similar manner to the Environment perspective in that it influences across value proposition, operating model, and risk, constraining all activities of the business to be in compliance with standards of acceptability.

The goal of the compliance architecture is to adequately understand the compliance requirements that exist and to ensure that appropriate mechanisms are in place to ensure they are met.

The business challenge of compliance is to appropriately translate commercial, quality, ethical, legal, and regulatory constraints (which tend to be complex and open to interpretation) into a set of clear, unambiguous operational policies that can be followed consistently and at scale within a large organization. Interpretations that are too risk-seeking in nature will tend to generate compliance breaches, with associated financial and reputational penalties. Interpretations that are too risk-averse will tend to stifle business activities and reduce the ability of the business to change quickly to meet new environmental circumstances.


This blog contains excerpts from the white paper "World Class EA: Business Reference Model" published by The Open Group; the white paper is available for download.

Sunday, July 20, 2014

A Checklist for Architecture & Design Review

Mostly, security requirements remain undocumented and are left to the choice or experience of the architects and developers, leaving vulnerabilities in the application which hackers exploit to launch attacks on the enterprise's digital assets. Security threats are on the rise and are now considered a Board item, as the impact of a security breach is very high and could cause monetary and non-monetary losses.

One of the key aspects of the IT Governance is to ensure that the investments made in software assets are optimal and there is a quantifiable return on such investments. This also means that such investment does not lead to risks that could lead to damages. Most of us are well aware that reviews play a key role in ensuring the quality of the software assets. As such, in this blog post, I have tried to come up with a checklist for reviewing the architecture and design of a software application.

Since the choice of one design best practice is interdependent with others, a careful trade-off analysis of software quality attributes is necessary. Each checklist item listed here needs further elaboration and identification of specific practices, which will depend on the enterprise architecture and design principles of the organization.

Deployment Considerations

  • The design references the security policy of the organization and is in compliance of the same.
  • The application components are designed to comply with the various networking and other infrastructure related security restrictions like firewall rules, using appropriate secure protocols, etc.
  • The trust level with which the application accesses various resources is known and is in line with the acceptable practices.
  • The design supports the scalability requirements such as clustering, web farms, shared session management.
  • The design identifies the configuration / maintenance points, and the access to the same is manageable.
  • Communication with various local or remote components of the application is using secure protocols.
  • The design addresses performance requirements by adhering to relevant design best practices.

Application Architecture Considerations

Input Validation

  • Whether the design identifies all entry points and trust boundaries of the application.
  • Appropriate validations are in place for all inputs that come from outside the trust boundary.
  • The input validation strategy that the application adopted is modular and consistent.
  • The validation approach is to constrain, reject, and then sanitize input.
  • The design addresses potential canonicalization issues.
  • The design addresses SQL injection, cross-site scripting, and other vulnerabilities.
  • The design applies defense in depth to the input validation strategy by providing input validation across tiers.
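A minimal Python sketch of the constrain-reject-sanitize pattern described above, using a hypothetical username field as the example input:

```python
import re

# Constrain: an allow-list of permitted characters and lengths.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(raw: str) -> str:
    value = raw.strip()                   # sanitize: trim surrounding whitespace
    if not USERNAME_RE.fullmatch(value):  # reject: anything outside the allow-list
        raise ValueError("invalid username")
    return value

assert validate_username("  alice_01 ") == "alice_01"
```

Centralizing such validators in one module is one way to keep the strategy modular and consistent across tiers, as the checklist recommends.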
Authentication
  • The design identifies the identities or roles that are used to access resources across the trust boundaries.
  • Service accounts or other predefined identities needed to access various system resources are identified and documented.
  • User credentials or authentication tokens are stored in a secure manner, and access to them is appropriately controlled and managed.
  • Where the credentials are shared over the network, appropriate security protocol and encryption techniques are used.
  • Appropriate account management policies are considered.
  • In case of authentication failures, the error information displayed is minimal so that it does not reveal any clues that could make the credential guessing easier.
  • The design adopts a policy of using least-privileged accounts.
  • Password digests with salt are stored in the user store for verification.
  • Password rules are defined so that the stronger passwords are enforced.
Authorization
  • The user role design offers sufficient separation of privileges and considers authorization granularity.
  • Multiple gatekeepers are envisaged for defense in depth.
  • The application’s identity is restricted in the database to access-specific stored procedures and does not have permissions to access tables directly.
  • Access to system level resources is restricted unless there is an absolute necessity.
  • Code Access Security requirements are established and considered.
Configuration Management
  • Stronger authentication and authorization is considered for access to administration modules.
  • Secure protocols are used for remote administration of the application.
  • Configuration data is stored in a secured store and access to the same is appropriately controlled and managed
  • Least-privileged process accounts and service accounts are used.
Sensitive Data
  • Design recognizes sensitive data and considers appropriate checks and controls on the same.
  • Database connections, passwords, keys, or other secrets are not stored in plain text.
  • The design identifies the methodology to store sensitive data securely. Appropriate algorithms and key sizes are used for encryption.
  • Error logs, audit logs, or other application logs do not store sensitive data in plain text.
  • The design identifies protection mechanisms for sensitive data that is sent over the network.
Session Management
  • The contents of authentication cookies are encrypted.
  • Session lifetime is limited and times out upon expiration.
  • Session state is protected from unauthorized access.
  • Session identifiers are not passed in query strings.
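One common way to protect session identifiers (a hedged sketch, not a prescribed design) is to bind each identifier to a server-side HMAC, so forged or tampered tokens are rejected without a database lookup:

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # kept server-side only, never sent to clients

def issue_session_id() -> str:
    """Return an unguessable session id paired with its MAC."""
    sid = secrets.token_hex(16)
    mac = hmac.new(SERVER_KEY, sid.encode(), hashlib.sha256).hexdigest()
    return f"{sid}.{mac}"

def check_session_id(token: str) -> bool:
    try:
        sid, mac = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SERVER_KEY, sid.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)  # constant-time comparison

token = issue_session_id()
assert check_session_id(token)
assert not check_session_id("deadbeef.0000")  # forged token fails
```

The token would travel in a cookie (never a query string, per the checklist), and real deployments also add expiry and per-user binding.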
Cryptography
  • Platform-level cryptography is used and it has no custom implementations.
  • The design identifies the correct cryptographic algorithm and key size for the application’s data encryption requirements.
  • The methodology to secure the encryption keys is identified and the same is in line with the acceptable best practices.
  • The design identifies and establishes the key recycle policy for the application.
Parameter Manipulation
  • All input parameters are validated including form fields, query strings, cookies, and HTTP headers.
  • Sensitive data is not passed in query strings or form fields.
  • HTTP header information is not relied on to make security decisions.
  • View state is protected using MACs.
Exception Management
  • The design outlines a standardized approach to structured exception handling across the application.
  • Application exception handling minimizes the information disclosure in case of an exception.
  • Application errors are logged to the error log, and the design provides for periodic review of such logs.
  • Sensitive data is not logged as part of the error logs; where necessary, it is logged with appropriate de-identification techniques.
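A small, hypothetical de-identification helper illustrates the idea: card-like numbers are masked before a message reaches the error log. The regular expression shown is an example, not a complete detector for sensitive data:

```python
import re

# Example pattern: 16-19 digit card-like numbers; keep first and last four digits.
CARD_RE = re.compile(r"\b(\d{4})\d{8,11}(\d{4})\b")

def mask_sensitive(message: str) -> str:
    """Mask card-like numbers in a log message before it is written."""
    return CARD_RE.sub(r"\1********\2", message)

assert mask_sensitive("payment failed for 4111111111111111") == \
       "payment failed for 4111********1111"
```

In practice such masking sits in a logging filter or formatter so every code path that logs an exception benefits automatically.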
Auditing and Logging
  • The design identifies the level of auditing and logging necessary for the application and identifies the key parameters to be logged and audited.
  • The design considers how to flow caller identity across multiple tiers at the operating system or application level for auditing.
  • The design identifies the storage, security, and analysis of the application log files.

Sunday, June 29, 2014

Governance of Agile Delivery

Introduction

The Agile methodology brings an alternative approach to traditional project management, where success was hard to achieve. Typically used in software development, the Agile methodology helps businesses respond to unpredictability. By focusing on the repetition of smaller work cycles as well as the deliverables, the Agile methodology is described as “iterative” and “incremental”. In waterfall, development teams only have one chance to get each aspect of a project right. In an agile paradigm, every aspect of development viz. requirements, design, etc. is continually revisited. When a team stops and re-evaluates the direction of a project every two weeks, there’s time to change course. Because teams can develop software at the same time they’re gathering requirements, “analysis paralysis” is less likely to impede a team from making progress. Agile development preserves a product’s critical market relevance and ensures a team’s work doesn’t wind up on a shelf, never released. Considering the value delivery that the Agile methodology promises, its adoption has been on the rise, and today most organizations, including Governments, are embracing Agile approaches.


Governance of Agile Delivery


Critics say that Agile methodology is all about working in an unstructured way and for that reason, they believe that governing agile practices is always a challenge. While some of the Agile principles appear to support such criticism, there are many cases where organizations have successfully implemented processes and frameworks towards governance of Agile practices. Agile practitioners believe that because the agile methods are designed to be self-assuring, when practiced right, there exists built-in governance and accountability.


More so, agile practices are more collaborative and operate continuously, requiring the stakeholders to review and test the deliverables on a continuous basis and helping the team take an alternate course of action as may be needed. A collaborative culture helps resolve problems quicker and ensures decisions are made on time. This helps maintain a continuous focus on the value forecast with respect to the business case and manage the risks that may potentially impact the expected value.


Principles of Governance

The following are the key governance principles for a successful governance of Agile Delivery:

Focus on the value delivery - only do a task if it brings value to the business. This principle also recognizes the timely delivery of a task, as the value derived is more likely to deteriorate with delayed delivery. In the case of Agile deliveries, governance is continuous and at a work-unit level. It should also focus on what activity is taking place and the value such a task delivers.

Embrace Change - This is another principle of Agile, and the governance framework should take it into consideration. This means that decisions or workflows should be flexible enough to change course based on the feedback received. Given that all stakeholders collaborate, decisions should be taken across the table, without putting things on hold; for this purpose, all needed specialists should take part in the reviews.

Decide on the performance metrics - Another key principle of the Agile methodology is to 'fail fast and learn quickly'. Given that the overall objective is to improve the certainty that the team will deliver a usable product or service of good quality, teams should be able to identify and implement the right metrics that will accurately indicate the quality of the deliverables and the performance of the team. For example, they measure tasks completed, rework they had to perform, the backlog list, and the value of the product or service to the business at the end of each iteration. Teams display this information visually, updating it frequently. This makes progress transparent to business users and management. If senior managers require performance information to oversee projects, they define what the ‘must have’ data are. Performance reports for senior management become a task in each iteration and an output of the delivery team.

Collaboration - All stakeholders, including senior management, external assessors, business users and the development team should be partners in quality, and this collaborative approach is an essential change in mindset. The business owner and delivery team defines what ‘quality’ tests they will use and what results are acceptable at the outset of each iteration – the definition of ‘done’. Regular user feedback identifies whether the product or service is providing the expected business value at each stage. External assessors are not gatekeepers; rather they are an integral part of the team. The iterative approach ensures continual reviews and feedback on progress, so external assessors are not just involved at critical points as defined in a traditional project life cycle.

Focus on behaviours and not just processes and documentation - More specifically, the external reviews or assessments will be more effective in providing critical challenge if the assessors have high-end skills, including technical and Agile delivery experience. In addition, they provide better value if they continually review how the team is performing, using observation as their main method of evidence collection. The focus of such external review or assessment shall be on the following:
  • the skills and experience of the team;
  • the team dynamics – frequency and nature of communication inside and outside of the delivery team, and the level of input to the delivery team from the business;
  • the organisational culture – the level of commitment and openness;
  • the timing and nature of quality control by the delivery team – the testing and release framework;
  • the order in which the team tackled the tasks – prioritisation of actions and deliverables, the amount of actions in the backlog list;
  • the way the team changes its activity in response to the results achieved in each iteration; and
  • the value of outputs to the business.

IBM's Disciplined Agile Delivery Methodology


IBM believes Agile delivery allows it to continually issue new capabilities that meet user needs. It usually introduces software as part of a wider business change project so, to keep both in step, it has developed several Agile project methodologies. Disciplined Agile Delivery is a hybrid method that can be applied by a large number of teams working on the same project at the same time. The image below shows the Disciplined Agile Delivery life cycle. It starts with a few short iterations that allow the team and its stakeholders to identify the initial requirements, develop the architecture and agree a release plan. IBM also uses this to determine the system level properties and characteristics – the non-functional requirements. There are iterations after the business owner has decided that the system has sufficient functionality. These additional iterations are necessary for IBM to support the operation and maintenance of the solution once it is in service.



In contrast to the traditional approach of looking at outputs, plans, resourcing and how a project is organised, external assessors should focus on outcomes, prioritisation of work and team dynamics. The most useful indicators of success are how the teams are organising the delivery of an operational service or capability and what Agile behaviours and practices are used. Areas for assessment include whether:
  • system level issues (security, availability) are addressed within the iterations;
  • short- and longer-term planning exists;
  • the stakeholders have a shared vision;
  • there is continuous integration; and
  • the team has the right people.


Reference:

National Audit Office's Review on Governance of Agile Delivery

Sunday, June 22, 2014

Sustaining Successful IT Governance Environment

A tremendous amount of importance is being given to governance, risk, and compliance (GRC), and thus IT governance is becoming a necessity in today's business context. There is strong pressure on senior management and the Board members to have a good understanding of their IT systems and the controls that are in place to avoid things such as fraud and security breaches. As the global corporate and economic climate continues to shift, businesses need to be prepared to anticipate, respond to, and mitigate risk with flexible processes that can be adapted to any methodology. This calls for assessing and continuously monitoring IT governance as it operates in an organization.


IT governance represents a continuous journey (not an end state in itself), which focuses on sustaining value and confidence across the business functions. Many companies start with a short-term approach and focus on the compliance component of IT governance, without developing a balanced longer-term approach consisting of both a top-down framework and roadmap together with bottom-up implementation to address the broad range of IT governance issues and opportunities in a planned, coordinated, prioritized and cost-effective manner.

Getting it Right First


Different IT governance stakeholders need different features, so the solution needs to be structured, tailored and feature risk management. Because process is at the heart of IT Governance, the solution has to be process-centric but also support all other perspectives: organization, technology, application, infrastructure, etc. Being process-centric, IT Governance aspects should be integrated into the existing process framework of an organization, so that it becomes real, operational and sustainable.

It is important to get the IT Governance pieces well integrated and operational first. To have an effective and operational IT Governance program, at the minimum, the following should be taken care of.

  • Executive Commitment - The Board and the Executive Leadership Team are committed to implementing and sustaining a robust Governance environment.
  • Do Homework - Educate yourself on past, current and emerging best practices.
  • Gather knowledge - Develop, adopt, integrate, leverage and tailor current and emerging best practices models, frameworks and standards to make them work for the enterprise - create an integrated IT governance framework and roadmap for your organization.
  • Sell it - Market the IT governance value propositions to the organization and communicate its goals and objectives.
  • Assess Current State - Assess the “current state” of the level of IT governance maturity and identify gaps. 
  • Define Future State - Based on the knowledge gathered, develop a “future state” IT governance blueprint.
  • Implementation plan - Come up with an implementation plan by breaking down the components into well defined work packages and assign an ownership and responsibility.
  • Roll out - Implement a scalable and flexible governance policy and process.

Continuous Improvement


There cannot be a second thought that IT Governance needs to be sustainable, and this means putting in place a life cycle for continuous improvement. IT Governance, like any other process framework, needs continuous improvement in line with the changing business and technology environment and to ensure that the desired benefits continue to be realized. While the improvement cycle can be as simple as Deming's PDCA, ISACA has suggested a seven-step cycle as below:

  • What are the Drivers?
  • Where are we now?
  • Where do we want to be?
  • What needs to be done?
  • How do we get there?
  • Did we get there?
  • How do we keep the momentum going?

At the minimum, organizations should address the questions below to have IT Governance continuously improved and thus sustained:


Image Source: The Advisory Council


With an integrated IT Governance framework in place, these improvement steps cannot be performed in isolation for the IT Governance function alone. The improvement life cycle shall be applied to each of the functions, like Service Management, Asset Management, People & Project Management, and IT Portfolio Investment Management. The improvement life cycle shall thus operate at such functional levels, and when such functions improve and deliver the desired results and value, IT Governance in turn will also deliver.

How much is enough?


As a process, operational governance must be carried out by one or more people. Even though it is useful to treat governance as outside the day-to-day operations of an organization, those carrying out the governance process may or may not belong to the governed organization. Even so, those who are carrying out the governance process must be concerned with certain external forces on the organizations as well. These external forces could be External Policies, External Standards, Government Regulations, etc. 


Needless to mention, continuous improvement of IT Governance requires investment, and it is equally important to justify that the investment in continuous improvement pays back. Thus, the organization should know how much improvement is enough for it and accordingly focus its resources on this activity. However, knowing how much IT Governance is enough is a key challenge, which will depend on the following factors:

  • Investment in IT (capital and expense), strategic value
  • Management philosophy and policy (e.g. mandatory and discretionary)
  • Program/Project and/or Operational visibility
  • Complexity, scope, size and duration of initiatives
  • Number of interfaces and integration requirements
  • Degree of risk
  • Speed of required implementation
  • Number of organizations, departments, locations and resources involved
  • Customer or sponsor requirements
  • Type and location of outsourcing (e.g. domestic, international)
  • Regulatory compliance 
  • Level of security required
  • Degree of accountability desired and auditability required (per external auditors)
  • Management Control Policies and Guidelines

Key Principles


To sustain and continue to make progress on the journey to achieving higher levels of IT maturity, an organization should adopt select principles from managing and accelerating change and transformation, which include the following key elements:

  • Proactively Design and Manage the IT Governance Program. Requires executive management sponsorship, an executive champion and creating a shared vision that is pragmatic, achievable, marketable, beneficial and measurable. Link goals, objectives and strategies to the vision and performance metrics and evaluations.
  • Mobilize Commitment and Provide the Right Incentives. There is a strong commitment to the change from key senior managers, professionals and other relevant constituents. They are committed to make it happen, make it work and invest their attention and energy for the benefit of the enterprise as a whole. Create a multi-disciplinary empowered Tiger Team representing all key constituents to collaborate, develop, market and coordinate execution in their respective areas of influence and responsibility. 
  • Make Tradeoffs and Choices and Clarify Escalation and Exception Decisions. IT governance is complex, continuous and requires tradeoffs and choices, which impact resources, costs, priorities, level of detail required, who approves choices, to whom are issues escalated, etc. At the end of the day, a key question that must be answered is, “When is enough, enough?” 
  • Make Change Last, Assign Ownership and Accountability. Change is reinforced, supported, rewarded, communicated (through the Web and Intranet), recognized and championed by owners who are accountable for facilitating the change so that it endures and flourishes throughout the organization.
  • Monitor Progress, Consistent Processes, Technology and Learning. Develop/adapt common policies, practices, processes and technologies which are repeatable across the IT Governance landscape and enable (not hinder) progress, learning and best-practice benchmarking. Make IT governance an objective in the periodic performance evaluation system of key employees and reward significant and sustainable progress and achievements.

People often think they have a choice between "governance" and "no governance," but in reality the choice is between "good governance" and "bad governance." Every organization has a framework of decision-making and some set of often unstated measures. The needs of the business and the role of IT evolve; these unintentional governance solutions do not. Good governance is intentional, and it takes effort and attention. The operational perspective described in this article provides an approach for doing governance well.

Sunday, April 27, 2014

WAF - Typical Detection & Protection Techniques

WAF - Web Application Firewalls are a relatively new breed of information security technology that offers protection to web sites and web applications from malicious attacks. As the name suggests, a WAF solution is intended to scan HTTP and HTTPS traffic alone. WAF solutions have evolved over the last few years and are capable of preventing attacks that network firewalls and intrusion detection systems can't. The WAF offering typically comes in the form of a packaged appliance, i.e. purpose-built hardware with software running on it, plugged into the network. Different appliances offer different levels of deployment capabilities, like active/passive modes, support for High Availability, etc.

Different vendors have come up with various techniques to detect and protect web applications of the enterprise and thus the capabilities of the solution differ. However, at a minimum these devices offer the following detection and protection capabilities:


Detection Techniques

Normalization techniques

Early web applications were simple and mostly comprised HTML content. Since then, various tools and solutions have emerged that leverage the HTTP protocol to receive and send complex data, including higher volumes of encoded binary data, and that extend the use of the HTTP methods. Hackers leverage these same techniques to attack a web application. The WAF device should therefore have the ability to transform the input data into a normalized form, so that it can be inspected for potential malicious content that could be leveraged to perform an attack.
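As a minimal illustration, the normalization step might look like the following Python sketch. The decoding passes and canonicalization rules here are assumptions for illustration; real WAFs normalize many more encodings (Unicode escapes, hex entities, path canonicalization, etc.):

```python
import urllib.parse

def normalize(raw: str, max_passes: int = 3) -> str:
    """Iteratively URL-decode to defeat double-encoding, then canonicalize."""
    decoded = raw
    for _ in range(max_passes):
        step = urllib.parse.unquote_plus(decoded)
        if step == decoded:  # stop once decoding reaches a fixed point
            break
        decoded = step
    # Canonicalize case and strip NUL bytes often used to evade matching
    return decoded.replace("\x00", "").lower()

# "%2553%2543ript" hides "SCript" behind double URL-encoding
print(normalize("%2553%2543ript"))  # -> "script"
```

Only after such normalization are the signature and rule checks below meaningful, since a double-encoded payload would otherwise slip past a plain string match.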

Signature Based Detection

This technique involves the use of a string or regular expression based match of the incoming traffic against a specific signature, thus detecting a potential attack. For this purpose, maintaining a database of such attack signatures is essential. Most popular WAF solution vendors maintain their own databases, whereas others subscribe to such databases. These databases need frequent updates to take into account the signatures used in recent attacks elsewhere.
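A toy sketch of signature matching might look like this in Python. The three patterns below are illustrative placeholders only; a vendor signature database contains thousands of far more precise, frequently updated rules:

```python
import re

# Toy signature database; real WAF vendors maintain far larger, updated sets.
SIGNATURES = {
    "sql-injection": re.compile(r"(\bunion\b.+\bselect\b|\bor\b\s+1\s*=\s*1)", re.I),
    "xss": re.compile(r"<script\b", re.I),
    "path-traversal": re.compile(r"\.\./"),
}

def match_signatures(payload: str) -> list[str]:
    """Return the names of all signatures the payload triggers."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

print(match_signatures("id=1 OR 1=1"))         # -> ['sql-injection']
print(match_signatures("q=<script>alert(1)"))  # -> ['xss']
```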

Rule Based Detection 

Rule based detection is similar to signature based detection, but it allows the use of more complex logic. For instance, even if a signature match is detected, it can be further subjected to certain other conditions: if the data is from a trusted source, the traffic may still be allowed to pass through, with or without appropriate alerts and triggers for manual inspection. While the WAF solution is shipped with standard rules, these are configurable to meet the security needs of the customer. The standard rules may also be part of the signature/rule database maintained or subscribed to by the vendor.
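The trusted-source condition described above can be sketched as a rule layered on top of signature results. The trusted network range and the verdict names here are illustrative assumptions:

```python
from ipaddress import ip_address, ip_network

# Hypothetical trusted partner range; in practice this is customer-configured.
TRUSTED_NETS = [ip_network("10.0.0.0/8")]

def evaluate(request_ip: str, signature_hits: list[str]) -> str:
    """Rule: a signature match from a trusted source is allowed but flagged
    for manual review; the same match from elsewhere is blocked outright."""
    if not signature_hits:
        return "allow"
    trusted = any(ip_address(request_ip) in net for net in TRUSTED_NETS)
    return "allow-with-alert" if trusted else "block"

print(evaluate("10.1.2.3", ["sql-injection"]))    # -> allow-with-alert
print(evaluate("203.0.113.7", ["sql-injection"])) # -> block
```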

APIs for Extensibility

Despite the standard signature and rule based detection techniques, the actual deployment scenario at the customer site may require customization of the detection techniques used. WAF solution vendors usually support this need by offering extensible APIs, plug-ins, or scripting. These extensibility options, if not appropriately secured, can be exploited by hackers too.


Protection Techniques

Brute Force Attacks Mitigation

These attacks use automated scripts that attempt to log in to the web application with common user names and passwords. The attacks usually originate from a large number of sources, consisting of both legitimate web servers and private home computers. Once a username and password is successfully guessed, the hackers or their scripts/tools use the gained credentials for the next stage of attacks. Provided the usernames and passwords follow stricter rules, these attacks will most likely fail to guess valid credentials, but they generate unduly high traffic, which results in resource drain and in turn affects the availability of the web application.

Protection from Cookie Poisoning

Cookie Poisoning attacks involve the modification of the contents of a cookie (personal information stored in a Web user's computer) in order to bypass security mechanisms. Using cookie poisoning attacks, attackers can gain unauthorized information about another user and steal their identity. Cookie poisoning is in fact a Parameter Tampering attack, where the parameters are stored in a cookie. In many cases cookie poisoning is more useful than other Parameter Tampering attacks because programmers store sensitive information in the allegedly invisible cookie. Most WAF solutions offer protection from Cookie poisoning by facilitating the signing and / or encryption of cookies, virtualizing the cookies or a custom protection mechanism as the specific web application may demand.

Session Attacks Mitigation

The session store is an important component of a web application; it is used to share common parameters pertaining to the user and the specific session across various actions within the application. Thus the session data is a key component used to secure web applications. Hackers, on the other hand, try various techniques to hijack the session or tamper with the session parameters. While tampering with parameter values is similar to Cookie Poisoning, Session Hijacking is stealing the session identifier and simulating requests from different sources with the stolen session identity. WAF solutions provide protection against session hijacking by signing and/or encrypting the session data and also linking the session identifier with the originating client.
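Linking the session identifier to the originating client can be sketched by binding the session id to a client fingerprint with an HMAC. The fingerprint fields (IP and User-Agent) and tag length are illustrative choices:

```python
import hashlib
import hmac
import secrets

SECRET = b"waf-session-secret"  # illustrative server-side key

def issue_session(client_ip: str, user_agent: str) -> str:
    """Bind a random session id to a client fingerprint via an HMAC tag."""
    sid = secrets.token_hex(16)
    tag = hmac.new(SECRET, f"{sid}|{client_ip}|{user_agent}".encode(),
                   hashlib.sha256).hexdigest()[:16]
    return f"{sid}.{tag}"

def validate_session(token: str, client_ip: str, user_agent: str) -> bool:
    """A stolen token presented from a different client fails validation."""
    sid, _, tag = token.partition(".")
    expected = hmac.new(SECRET, f"{sid}|{client_ip}|{user_agent}".encode(),
                        hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tag, expected)

t = issue_session("198.51.100.9", "Mozilla/5.0")
print(validate_session(t, "198.51.100.9", "Mozilla/5.0"))  # legitimate client
print(validate_session(t, "203.0.113.5", "Mozilla/5.0"))   # hijack attempt
```

Binding to the client IP can break legitimate users behind mobile networks or proxies, which is why real WAFs often make the fingerprint components configurable.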

Injection Attack Protection

An SQL injection attack is the insertion of a SQL query via the input data from the client to the application. A successful SQL injection attack can read sensitive data from the database, modify database data, or shut down the server. Similarly, operating system and platform commands can often be used to give attackers access to data and escalate privileges on back-end servers. Remote File Inclusion attacks allow malicious users to run their own PHP code on a vulnerable website and thereby access anything the PHP program could: databases, password files, etc. Most WAF solutions, using the normalization technique and the signature and rule database, would be able to deny requests carrying data, commands or instructions that could lead to any of these injection attacks.

DDoS Protection

Distributed Denial of Service (DDoS) is a common technique used by hackers to impair the availability of a website or application by directing unusually huge traffic against it. This results in all the computing resources being used up, eventually leading to the site not being available at all. WAF solutions making use of the normalization techniques and the signature and rule databases would be able to block such requests. Some common techniques used by WAF solutions are to check the content length and to evaluate the number of requests or sessions from the same originating client within a given time period.
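The two checks just mentioned, a content-length cap and a per-client request rate over a time window, can be sketched together. All thresholds below are illustrative assumptions:

```python
from collections import defaultdict, deque

MAX_BODY_BYTES = 1_000_000  # reject oversized request bodies outright
MAX_REQUESTS = 100          # requests tolerated per client per window
WINDOW = 10.0               # window length in seconds

_requests: dict[str, deque] = defaultdict(deque)

def admit(ip: str, content_length: int, now: float) -> bool:
    """Drop oversized bodies and clients exceeding the per-window rate."""
    if content_length > MAX_BODY_BYTES:
        return False
    q = _requests[ip]
    # Expire timestamps older than the window
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False
    q.append(now)
    return True
```

A genuinely distributed attack defeats per-client counting alone, which is why appliances combine this with signatures, normalization and upstream (network-level) protections.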


Obviously, what is listed above are the most common detection and protection techniques that any WAF solution would offer. But vendors are constantly improving these techniques and thus adding more detection and protection features. This has to be a constant endeavor, as hackers on the other hand keep coming up with newer techniques to exploit various vulnerabilities.

Sunday, April 13, 2014

IT Governance For Small Businesses - Constraints

There is a perception that IT Governance best suits large organizations, and small organizations tend to ignore it considering the effort and resources required to practice IT Governance. But IT Governance is equally important for smaller organizations, so that the IT function, however small it is, delivers maximum value for the business while keeping the risk exposure to a minimum. Existing frameworks like COBIT are too extensive for small businesses to use in implementing IT governance. These frameworks are too complex and costly to implement, and small businesses may consider implementing and managing such a framework a bigger battle.


ISACA, however, recommends taking an evolutionary approach: take smaller steps first and let the practice evolve. Small businesses should convert the high-level concept of governance into practical and easy-to-implement best practices. The resource pools available to small businesses are a lot smaller, and even outsourcing might prove expensive considering the business volume; thus establishing an RoI on implementing IT Governance could be a bigger challenge.


It is not just resources and cost; there are certain other characteristics of small businesses which come in the way of implementing IT Governance. Here are some such characteristics, which an IT Governance framework designed for a small business should take into consideration.


Smaller or no Board of Directors

Many small businesses are closely held and could be a family business or private limited company with a small number of Directors on the Board. Having an Independent Director or a Director with an IT background on the board is a big ask. This leaves the concentration of IT decision making with a few or even a single individual, which could be the CEO or the owner himself. IT-savvy business owners or CEOs tend to leverage IT more for their business and thus have some degree of adoption of standards, practices and frameworks. In such cases, the choice of technology, standards, practices, etc. is most likely limited to the knowledge levels of the owner or CEO, and they don't take a leap into unfamiliar areas, which would call for more resources in evaluating and establishing the RoI.

Organization Structure

One of the first steps in implementing IT Governance in an organization is to set up an IT Strategy Committee and an IT Steering Committee with representation from different functions and from the Board. Small businesses do not have the extensive management structures to support such committees. The organization structure of a small business is not as extensive as that of a large organization, and as such, enforcing separation of duties may not be feasible at all. For instance, the Finance Manager of a small business will also perform the function of IT procurement with minimal support from IT Administrators. Similarly, having a separate CIO could be a bigger ask for a small business, as the cost of such a resource does not warrant the return.

Smaller IT departments

Having a fully functional IT department is a big investment for a small business. Thanks to the cloud trend and software as a service, this is a challenge even IT departments in large organizations are facing. With cloud-based services like Google Apps for Business and Microsoft's Office 365, coupled with various special-purpose software-as-a-service offerings, it is becoming a lot easier for businesses to get their IT up and running with the least help from IT experts. This characteristic of a small business leads to a situation where a non-IT staff member might have to take up the IT Governance initiative, which has an obvious challenge within, as such staff might not comprehend the nuances of Governance practices and jargon.

Lack of complementing frameworks

An IT Governance framework generally relies on various other practices or frameworks in use in an organization. For instance, ITIL, Enterprise Risk Management, ISO, CMMI, etc. are some such standards or frameworks, the existence of which makes adoption of an IT Governance framework somewhat seamless. In a small business, the existence of such standards is highly unlikely. Small businesses need an IT governance framework that is simpler, self-contained and easier to implement, and contains only controls that are not dependent on a control practice of a different standard or practice.

Information security

While small businesses are not the usual target of hackers or attackers, the risk to information security always remains. For the obvious reasons that arise out of the characteristics listed here, small businesses cannot see the return on investment in information security. For that matter, small businesses do not have a formal risk management practice. They typically do not possess some of the basic elements of security management, like information security policies, backup and disaster recovery, security awareness and up-to-date anti-virus protection. An IT governance framework aimed at small businesses will have to include a strong emphasis on information security and address the common security risks affecting small businesses.

Resources & Tools

The use of sophisticated software applications makes implementing and practicing IT Governance easier, but it calls for heavy investment, which is beyond the reach of small businesses. For instance, performance evaluation of various IT resources calls for collecting data and deriving various metrics that can be used to benchmark as well as measure the performance of IT resources and functions. This is made easier by using automated tools; depending on manual methods could prove cumbersome and lead to data inaccuracy. Because of the lack of financial and technical resources, small businesses cannot make use of such automated tools or software systems for the purpose.


Though the above list is not exhaustive, the items listed are the key constraints an IT Governance framework for a small business has to address. There is no one-size-fits-all solution, even for large organizations. The IT Governance framework has to be designed, created and managed as relevant for each organization, and that includes even a small business. One may pick and choose controls from various frameworks and tailor them to suit the specific small or medium business. The framework should, however, provide for evolution, so that it can improve based on feedback from practice.

Saturday, April 5, 2014

IT Procurement - The Pricing Woes

Most IT products (both hardware and software) targeted at home or individual end customers usually carry a standard rate card. Some large resellers, considering their sales volume, may offer a discounted price, perhaps about 5 to 10 percent. While this seems to be a fair game, on the enterprise products side things are totally different. The buyer, the reseller (be it an integrator or just a distributor) and the principal vendors play a game of negotiation. The end result of this game is mostly that one or more players lose. This is in contrast to the win-win theory, where it is expected that all the players win.

The principals offering such enterprise products don't seem to have a standard pricing policy. Instead, they price the product or service for the specific enterprise customer based on the deal volume, the strategic importance of the deal and the indirect values that can be derived out of a specific deal. The indirect benefits could range from an increased reach to the associates of the customer, a consent to publish case study which might improve the market ranking of the product or increased revenue figures which again is used to determine the market share of the product or service.

The discounts that enterprise customers get range from 40 to even 90 percent. Large enterprises manage to negotiate and get substantial discounts on such products and services. Neither the principals nor the resellers can expect any margin out of such deals, but they look for indirect benefits. This could potentially lead to a situation where, if the principals don't see the intended indirect benefits being realized, they take a 'no-frills' approach and thus do not actively contribute towards the business goals of the customer.

This kind of pricing approach also results in smaller businesses ending up compensating for the benefits that the larger enterprises get. That is, the discounts that the large enterprises get come out of the gains that the principals and resellers make on deals with smaller business entities. This is in a way like taxing the poor for the benefit of the rich and could very well be termed corporate corruption.

Knowing this, customers try their best to engage in hard negotiation and get the maximum discount. While it is good to get the price advantage, are they aware of the hidden perils that could get in their way? Here are some such things that could happen:


  • The principals are likely to cut corners to ensure that they maximize their gain out of the deal or minimize their loss. This could mean anything, like trimming down features which were not explicitly demanded by the customer and charging the customer when such features are later required.
  • Vendors tend to tone down post-sale service levels. This could be the reason for contrasting experiences or feedback from different customers for the same product or service.
  • Principals and/or resellers take the no-frills approach. That is, the customer cannot expect a 'Customer Delight' kind of offering. The principals and vendors would stick to delivering what has been committed and not a bit more.
  • Unduly long time and effort are lost in the process of negotiation, which can have an impact on the time-to-market advantages for the customer.


While the above could impact value delivery, these should not come in the way of the negotiation process and thus end up in agreeing to an unreasonably higher cost. This is where a win-win approach is recommended. A win-win outcome is one that gets all parties more than what no agreement would have guaranteed them. Win-win agreements do not promise all sides equal or similar gains. They only promise that all sides get an outcome that is better than their most realistic estimate of what they would have ended up with had they walked away with no agreement.


Saturday, March 22, 2014

Business Impact Analysis for Effective BCM

A business continuity plan helps improve the availability of an organization's critical services. In the process, the BCP identifies such critical processes and periodically assesses the quantitative and qualitative impact to the organization in the event of any disruption to such services. While the Business Continuity Plan is proactive in managing the risk of business disruption, the Business Resumption Plan and Disaster Recovery Plan are reactive in restoring the business to its working state, as they deal with recovering or resuming the business services and assets following a disruption. BCP planning is a direct input to the business's D/R action plans.

Business Continuity Management and disaster recovery are natural components of Enterprise Risk Management. All the resources and plans that make up a business continuity plan are developed to address business interruption risk in an organization and should be part of a comprehensive mitigation plan for all the enterprise risks. Many organizations are beginning to recognize the opportunity they have from embedding or incorporating BCM into an overall program to identify, evaluate and mitigate risk. By viewing BCM as a risk management function and embedding it into the enterprise level ERM program, which has been aligned with the strategic imperatives of the company, boardroom expectations are met and alignment achieved.


The typical goals of BCM are:

  • To identify critical business processes and assign criticality. Factors influencing the determination of criticality include inter-dependencies among business processes and the maximum allowable downtime (MAD) for each unique business process.
  • To estimate the maximum downtime the organization can tolerate while still maintaining viability. Management must determine the longest period of time a business process can be disrupted before recovery becomes impossible or moot.
  • To evaluate resource requirements such as facilities, personnel, equipment, software, data files, vital records, and vendor and service provider relationships.

Business Impact Analysis

The first step in developing a strong, organization-wide business continuity plan is conducting a Business Impact Analysis. The result of BIA is a business impact analysis report, which describes the potential risks specific to the organization. The challenge lies in assessing the financial and other business risks associated with a service disruption. A BIA report quantifies the importance of business components and suggests appropriate plan and fund allocation for measures to protect them.

As with any plan, Business Continuity Planning should evolve on a continuous basis, as the business context keeps changing in line with growth and changing directions. Business Impact Analysis being an important phase of the BCM life cycle, it should be revisited and refreshed in line with that life cycle. As a process, the BIA shall be performed with respect to each critical activity or even each resource forming part of the enterprise business processes. Though BIA is applied to critical activities, it is recommended to perform BIA on all activities, as it is the BIA that establishes the criticality of an activity, process or resource.

Performing BIA

The following are the key steps in performing the Business Impact Analysis:

  • Preparation and Set-up - It is important to identify the tools or templates required to perform BIA. For instance, a reference table to determine the business impact is essential to provide consistent definitions to different types of impacts and severity levels. If a structured risk assessment has already been carried out, the definitions and severity levels should already have been captured, and should be used for the BIA as well. 
  • Identification - This first step determines the activities to be performed, resources to be used to deliver the goods and services of the business organization. The source for gathering this information could be right from the mission & objectives of the enterprise to the defined business processes. Given that the BIA is performed on the identified activities and resources, this step however can be considered as a pre-requisite for BIA, rather than a step within BIA.
  • Identify potential disruptions - With respect to each identified activity or resource, identify the possible events or scenarios that could impact its desired outcome and thereby impacting the business process. This activity is usually best done using techniques like brain storming involving the relevant business users. As part of this step the correlation of the severity of the impact with the duration of disruption is also established.
  • Identify tangible losses - Disruption in certain activities or non availability of certain resources would directly result in monetary losses. If the given activity or resource or it in combination with other resources or activities could potentially cause revenue loss, the same should be identified and established as to the magnitude of such loss as well.
  • Quantify intangible losses - Certain activities, when disrupted may not directly result into monetary losses, but may result in intangible loss to the organization. For instance, non availability of customer care executives to respond to customer queries, could result in erosion of brand value. Such impacts should be quantified using appropriate techniques so that the same can be considered in determining the priority.
  • Recovery cost - As part of the impact analysis it would make sense to capture details of time and efforts it takes to resume or recover from the disruption. The magnitude of the recovery cost would also contribute to the determination of the prioritization or ranking.
  • Identify dependencies - Sometimes the potential disruption, or its impact, depends on other activities or resources, whether internal or external. These details are useful when drawing up the business resumption plan and the disaster recovery plan. 
  • Ranking - Once all relevant information has been collected and assembled, rankings for the critical business services or resources can be produced. Ranking is based on the potential loss of revenue, the recovery time, and the severity of impact a disruption would cause. Minimum service levels and maximum allowable downtime are also established.
  • Prioritize critical services or products - Once the critical services or products are identified, they must be prioritized based on minimum acceptable delivery levels and the maximum period of time the service can be down before severe damage to the organization results. To rank critical services, information is required on the impact of a disruption to service delivery, the loss of revenue, additional expenses, and intangible losses.
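The ranking and prioritization steps above can be sketched as a simple weighted-scoring calculation. The activities, the 1-5 scales, and the weights below are illustrative assumptions for the sketch, not values prescribed by any BCM standard; a real BIA would take them from the reference tables prepared in the set-up step.

```python
# Illustrative BIA ranking sketch. Activities, scales (1-5), and weights
# are hypothetical examples, not prescribed by any BCM standard.
activities = {
    "order-processing":   {"revenue_loss": 5, "recovery_time": 4, "impact_severity": 5},
    "customer-care":      {"revenue_loss": 3, "recovery_time": 2, "impact_severity": 4},
    "internal-reporting": {"revenue_loss": 1, "recovery_time": 1, "impact_severity": 2},
}

# Hypothetical weights reflecting the organization's priorities.
WEIGHTS = {"revenue_loss": 0.5, "recovery_time": 0.2, "impact_severity": 0.3}

def bia_score(assessment):
    """Weighted criticality score: a higher score means a more critical activity."""
    return sum(WEIGHTS[factor] * value for factor, value in assessment.items())

# Rank activities from most to least critical.
ranking = sorted(activities, key=lambda a: bia_score(activities[a]), reverse=True)
for name in ranking:
    print(f"{name}: {bia_score(activities[name]):.1f}")
```

In practice the scores would come from the tangible and intangible loss estimates and the recovery-cost data gathered in the earlier steps, and the resulting order would feed the minimum service levels and maximum allowable downtime decisions.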

The quality of the BIA is reflected in the reports produced after completing the above steps. Given that the BIA is a critical phase of BCM, it is important that this activity is performed with utmost care and attention to detail. Using the right set of tools, techniques, templates, and questionnaires is recommended for best results.

Sunday, March 16, 2014

IT Governance - Implementation Obstacles

IT governance is a process comprising a set of controls and practices that ensures the IT function is working on the right things, at the right time, in the right way, with a view to accomplishing its stated objectives and thereby contributing to meeting the enterprise's objectives and goals. Any process that aligns IT with business goals is the right strategy. However, it is the change required, and the compromises demanded of business leaders, that can get in the way and make it a far-from-easy program.

IT Governance offers many benefits: it reduces the cost of day-to-day operations, improves overall operational efficiency and consistency, frees more resources for strategic initiatives that improve competitiveness, helps choose those initiatives far more wisely (working on the right things), brings them to market faster with less risk, and brings IT into close alignment with business priorities. At the same time, the results of an ineffective implementation can be devastating. Some such devastating results could be:
  • Business losses and disruptions, damaged reputations and weakened competitive positions
  • Schedules not met, higher costs, poorer quality, unsatisfied customers
  • Core business processes are negatively impacted by the poor quality of IT deliverables (e.g., SAP touches many critical business processes) 
  • Failure of IT to demonstrate its investment benefits or value propositions


The Three Pillars of IT Governance

To understand the obstacles to IT Governance in an organization, it helps to first understand the three critical pillars on which a successful IT Governance implementation is built:

Leadership, Organization, Decision Rights and Metrics

The IT Governance Initiative must be decomposed into manageable and accountable work packages and deliverables and assigned to owners for planning, development, execution and continuous improvement. The IT Governance program must have clearly defined roles, responsibilities and decision rights for the entire program and for each major component of the integrated IT Governance framework and road map.
A decision rights matrix identifying decision influencers and decision makers is necessary to clarify decision roles and authority levels for the major IT Governance components.
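A decision rights matrix is, at its core, a lookup from a decision area to its decision maker and the influencers who must be consulted. The following minimal sketch illustrates this structure; the roles and decision areas are hypothetical examples, not a prescribed governance model.

```python
# Minimal sketch of a decision rights matrix (RACI-style mapping).
# Roles and decision areas below are hypothetical examples only.
DECISION_RIGHTS = {
    # decision area:     (decision maker,        influencers to consult)
    "it-principles":     ("cio",                 ["ceo", "business-unit-heads"]),
    "it-architecture":   ("chief-architect",     ["cio", "infrastructure-lead"]),
    "it-investment":     ("executive-committee", ["cio", "cfo"]),
}

def who_decides(decision_area):
    """Return the decision maker and the influencers for a governance decision."""
    decider, influencers = DECISION_RIGHTS[decision_area]
    return decider, influencers

decider, consulted = who_decides("it-investment")
print(f"'it-investment' is decided by {decider}, consulting {', '.join(consulted)}")
```

Making the matrix explicit in this way is what clarifies authority levels: anyone in the program can answer "who decides, and who is consulted?" for each major component without escalation.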

Flexible and Scalable Processes

Processes form an integral part of the IT Governance program: the IT Governance framework is made up of such processes and controls, which must be clearly defined. It is also important that these processes evolve with use, based on feedback collected through various metrics. At the same time, processes should be simple enough to understand and implement, yet flexible enough to provide room for improvement. People tend to ignore processes that are difficult to understand and practice as part of their day-to-day work. This is why the integrated framework approach works best.

Enabling Technology

Most business components rely on technology for most aspects of their value, reliability, or efficiency. The choice of the right technology also plays a key role in supporting the first two pillars. Given that technology evolves at an accelerated rate, there should be a clear watch on such advancements, and the technology road map should provide for identifying and adopting the right technology at the right time to extract the maximum value. Most organizations have recognized this and have accordingly started managing this area well.


The Key Obstacles

Most often, business leaders are motivated and rewarded by having their small part of the organization succeed. IT governance requires that the scarce resource of technology capacity be diligently distributed across the organization for overall business success. In other words, IT cannot be allocated on the basis of individual team needs, but rather on collective, organizational goals. A recent empirical study by Lee identified factors such as 'lack of IT principles and policies', 'lack of clear IT Governance processes', 'lack of communication', and 'inadequate stakeholder involvement' as inhibitors of successful IT Governance implementation.

Implementing IT Governance is a long and continuous journey, and obstacles and challenges are aplenty. A good understanding of the barriers that hinder its success is important: once they are understood, their effects can be assessed and pre-emptive actions taken to address them. The most frequently experienced obstacles include:

Culture

Instituting effective IT governance requires dealing with the “c-word.” The culture of a company—“the way we do things here”—can be a tremendous driver for business success. It can also be—and often is—a giant resistor that dampens positive change. Immeasurable amounts of energy have been dissipated trying to change embedded habits and methods that hid behind the cloak of “culture.” Today, worldwide, the trend is toward collaborative culture, especially in the sharing of information. The attitude that “information is power” lingers in some dark company corners. In some disciplines, such as sales, where compensation is directly related to personal contacts and initiative, it is arguable that the status quo has value. In most cases, though, managements are trying to rid the company of these attitudes in order to unlock the power of teamwork leveraged by technology. IT governance requires teamwork and information sharing to succeed.

Resistance to Change

Virtually every manager in business today has encountered employees who hold up organizational change by insisting on continuing with the "old way" of doing something, even though the success of the "new way" depends on universal adoption. Fear of failure could be one reason people are afraid to commit to change: uncertain that they can implement it successfully, they fear being held accountable if they fail. Another reason could be innate conservatism, with the resulting uncertainty causing resistance.

Lack of Appropriate Communication

Communication is really at the heart of IT governance, and the lack of appropriate communication can cause a major disconnect between IT executives and business executives. IT still tends to communicate in technology terms, which is simply not relevant to the business and not understood by it. Good communication is therefore extraordinarily important, so that everybody is on the same page and business and IT become closely engaged. After all, strategic decisions on where to invest in technology are really business decisions, not technology decisions. Without that shared understanding, a lack of communication can easily derail an organization's IT Governance program.

Lack of Value Proposition

CIOs must be willing to take the lead in the search for value-creating IT processes. If they are not, others (real experts) are glad to do so, in language that resonates with CEOs. For instance, in Project and Portfolio Governance the 'Fail Fast' (or 'Fail First') approach may be helpful. If the processes are designed around this approach, IT programs and projects are evaluated at various stages by analyzing the collected metrics to see whether it still makes sense to let the project or program move into the next stage. At every stage, using the metrics, a revisit of the project charter and the business objectives ensures that the desired value of the project or program still holds.
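A 'Fail Fast' stage gate can be sketched as a simple check of the collected metrics against thresholds taken from the project charter: the project only moves to the next stage if every metric still clears its bar. The metric names and thresholds below are illustrative assumptions, not part of any standard portfolio-governance model.

```python
# Hedged sketch of a 'fail fast' stage-gate check. Metric names and
# thresholds are illustrative assumptions drawn from a hypothetical
# project charter, not a standard model.
CHARTER_THRESHOLDS = {
    "projected_roi_pct":   15.0,  # minimum acceptable return on investment
    "schedule_health_pct": 80.0,  # minimum % of milestones on track
    "business_fit_score":   3.0,  # minimum business-alignment score (1-5)
}

def gate_review(stage, metrics):
    """Return (passed, failures); failures lists metrics below their threshold."""
    failures = [name for name, minimum in CHARTER_THRESHOLDS.items()
                if metrics.get(name, 0) < minimum]
    return (len(failures) == 0, failures)

ok, failed = gate_review("design", {
    "projected_roi_pct": 12.0,    # ROI has dropped below the charter's bar
    "schedule_health_pct": 85.0,
    "business_fit_score": 4.0,
})
if not ok:
    print(f"Stop or rework at 'design' gate; failing metrics: {failed}")
```

The value of the approach is in the early exit: a project whose ROI projection has slipped below the charter's bar is stopped or reworked at the gate, rather than consuming budget through the remaining stages.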

Internal politics

Internal organizational politics may assert themselves, as the adoption and implementation of formal ITG practices will sometimes shift the decision rights, and the associated power, that currently exist in the organization. In most organizations, the projects given higher priority are often those of "who speaks the loudest", rather than being chosen by looking at the current business, the collected metrics, and the immediate need.