Sunday, April 27, 2014

WAF - Typical Detection & Protection Techniques

WAF - a Web Application Firewall - is a breed of information security technology that protects web sites and web applications from malicious attacks. As the name suggests, a WAF solution is intended to scan HTTP and HTTPS traffic alone. WAF solutions have evolved over the last few years and are now capable of preventing attacks that network firewalls and intrusion detection systems cannot. The WAF offering typically comes as a packaged appliance, i.e. purpose-built hardware with software running on it, plugged into the network. Different appliances offer different deployment capabilities, such as active / passive modes, support for High Availability, etc.

Different vendors have come up with various techniques to detect attacks and protect the web applications of the enterprise, and thus the capabilities of the solutions differ. However, at a minimum these devices offer the following detection and protection capabilities:


Detection Techniques

Normalization techniques

Early web applications were simple and mostly comprised static HTML content. Since then, various tools and solutions have emerged that leverage the HTTP protocol to send and receive complex data, including high volumes of encoded binary data, and that extend the use of the HTTP methods. Hackers leverage these same techniques to attack web applications. A WAF device therefore needs the ability to transform input data into a normalized form, so that it can be inspected for potentially malicious content that could be leveraged to perform an attack.
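
As a hedged sketch of such normalization (the decoding depth and canonicalization rules here are illustrative assumptions, not any vendor's actual algorithm), repeated URL-decoding plus case and whitespace canonicalization might look like this:

```python
import urllib.parse

def normalize(raw: str) -> str:
    """Reduce a request parameter to a canonical form before inspection.

    Repeated URL-decoding defeats double-encoding tricks such as
    %252e -> %2e -> '.'; lowercasing and whitespace collapsing make
    later signature matching case- and spacing-insensitive.
    """
    value = raw
    # Decode until the value stops changing (bounded to avoid loops).
    for _ in range(5):
        decoded = urllib.parse.unquote_plus(value)
        if decoded == value:
            break
        value = decoded
    # Canonical case and whitespace.
    return " ".join(value.lower().split())

# A double-encoded traversal attempt normalizes to a recognizable form.
print(normalize("%252e%252e%252fetc%252fpasswd"))  # ../etc/passwd
```

Real products normalize many more dimensions (Unicode, HTML entities, path segments), but the principle is the same: inspect the canonical form, not the raw bytes.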

Signature Based Detection

This technique involves a string or regular expression match against the incoming traffic for a specific signature, thereby detecting a potential attack. For this purpose, maintaining a database of such attack signatures is essential. Most popular WAF vendors maintain their own databases, whereas others subscribe to third-party databases. These databases need frequent updates to take into account the signatures used in recent attacks elsewhere.
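
A minimal sketch of the idea (the three signatures below are illustrative stand-ins for the thousands of vendor-maintained patterns; input is assumed to be already normalized to lower case):

```python
import re

# A tiny illustrative signature database; real products ship large,
# frequently updated, vendor-maintained pattern sets.
SIGNATURES = [
    ("sql-injection", re.compile(r"union\s+select|or\s+1\s*=\s*1")),
    ("xss", re.compile(r"<script\b")),
    ("path-traversal", re.compile(r"\.\./")),
]

def match_signatures(normalized_input: str):
    """Return the names of all signatures matched by the (already
    normalized, lower-cased) input, or an empty list if none match."""
    return [name for name, pattern in SIGNATURES
            if pattern.search(normalized_input)]

print(match_signatures("id=1 or 1=1"))    # ['sql-injection']
print(match_signatures("q=hello world"))  # []
```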

Rule Based Detection 

Rule Based Detection is similar to Signature Based Detection, but it allows the use of more complex logic. For instance, even if a signature match is detected, it can be further subjected to other conditions: if the data is from a trusted source, the traffic may still be allowed to pass through, with or without appropriate alerts and triggers for manual inspection. While the WAF solution is shipped with standard rules, these are configurable to meet the security needs of the customer. The standard rules may also be part of the signature / rule database maintained or subscribed to by the vendor.
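
A hedged sketch of such a rule, combining a signature hit with request context (the trusted-source list and action names are assumptions for illustration):

```python
TRUSTED_SOURCES = {"10.0.0.5"}  # e.g. an internal vulnerability scanner

def evaluate_rule(request: dict, signature_hits: list) -> str:
    """Decide an action by combining a signature match with context.

    Unlike a bare signature match, the rule consults the request's
    context: traffic from a trusted source is passed through but
    flagged for manual review instead of being blocked outright.
    """
    if not signature_hits:
        return "allow"
    if request.get("source_ip") in TRUSTED_SOURCES:
        return "allow-with-alert"  # pass through, raise an alert
    return "block"

print(evaluate_rule({"source_ip": "10.0.0.5"}, ["sql-injection"]))    # allow-with-alert
print(evaluate_rule({"source_ip": "203.0.113.9"}, ["sql-injection"])) # block
```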

APIs for Extensibility

Beyond the standard signature and rule based detection techniques, the actual deployment scenario at the customer site may require customization of the detection techniques used. WAF vendors usually support this need by offering extensible APIs, plug-ins, or scripting. These extensibility options, if not appropriately secured, can be exploited by hackers too.


Protection Techniques

Brute Force Attacks Mitigation

These attacks use automated scripts that attempt to log in to the web application with common user names and passwords. The attacks usually originate from a large number of sources, consisting of both compromised legitimate web servers and private home computers. Once a username and password are successfully guessed, the hackers or their scripts / tools use the gained credentials for the next stage of attacks. Where user name and password policies follow stricter rules, these attacks are likely to fail in guessing valid credentials, but they still generate unduly high traffic, which results in resource drain and in turn affects the availability of the web application.
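
One common mitigation is throttling clients that accumulate failed logins. A minimal sketch, with the window length and failure threshold as assumed illustrative values:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # illustrative sliding-window length
MAX_FAILURES = 5      # illustrative failure threshold

_failures = defaultdict(deque)  # client address -> failure timestamps

def record_failed_login(client, now=None):
    """Record a failed login and report whether the client should be
    throttled. Returns True once the client exceeds MAX_FAILURES
    failed attempts inside the sliding WINDOW_SECONDS window."""
    now = time.monotonic() if now is None else now
    attempts = _failures[client]
    attempts.append(now)
    # Drop attempts that have aged out of the window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) > MAX_FAILURES

# Six rapid failures from one client trip the throttle.
for i in range(6):
    throttled = record_failed_login("198.51.100.7", now=float(i))
print(throttled)  # True
```

A real WAF would combine this with CAPTCHAs, source reputation, or progressive delays rather than a single counter.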

Protection from Cookie Poisoning

Cookie Poisoning attacks involve the modification of the contents of a cookie (personal information stored on a Web user's computer) in order to bypass security mechanisms. Using cookie poisoning attacks, attackers can gain unauthorized information about another user and steal their identity. Cookie poisoning is in fact a Parameter Tampering attack where the parameters are stored in a cookie. In many cases cookie poisoning is more useful to attackers than other Parameter Tampering attacks, because programmers store sensitive information in the supposedly invisible cookie. Most WAF solutions offer protection from cookie poisoning by signing and / or encrypting cookies, virtualizing cookies, or providing a custom protection mechanism as the specific web application may demand.
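
The signing approach can be sketched with an HMAC appended to the cookie value; the key here is a placeholder assumption, and a real deployment would use a randomly generated per-deployment secret:

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-random-per-deployment-key"  # assumption

def sign_cookie(value):
    """Append an HMAC so any client-side tampering is detectable."""
    mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{mac}"

def verify_cookie(signed):
    """Return the original value if the signature checks out, else None."""
    value, _, mac = signed.rpartition(".")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None

cookie = sign_cookie("role=user")
print(verify_cookie(cookie))  # role=user
# Swapping in a tampered value invalidates the signature.
print(verify_cookie("role=admin." + cookie.split(".")[1]))  # None
```

Signing detects tampering but leaves the value readable; encryption would additionally hide it.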

Session Attacks Mitigation

The session store is an important component of a web application; it is used to share common parameters pertaining to the user and the specific session across various actions within the application. Thus session data is a key component used to secure web applications. Hackers, on the other hand, try various techniques to hijack the session or tamper with the session parameters. While tampering with parameter values is similar to Cookie Poisoning, Session Hijacking is stealing the session identifier and simulating requests from different sources with the stolen session identity. WAF solutions provide protection against session hijacking by signing and / or encrypting the session data and also by linking the session identifier with the originating client.
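
Linking a session to its originating client can be sketched as a fingerprint check; binding to IP and User-Agent is an illustrative assumption (real products may use other attributes, since IPs can legitimately change):

```python
import hashlib

def client_fingerprint(ip, user_agent):
    """Derive a stable fingerprint from attributes of the original client."""
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def is_session_valid(session, ip, user_agent):
    """A stolen session identifier replayed from another machine carries
    a different fingerprint and is rejected."""
    return session["fingerprint"] == client_fingerprint(ip, user_agent)

session = {
    "id": "abc123",
    "fingerprint": client_fingerprint("192.0.2.1", "Mozilla/5.0"),
}
print(is_session_valid(session, "192.0.2.1", "Mozilla/5.0"))    # True
print(is_session_valid(session, "203.0.113.9", "Mozilla/5.0"))  # False
```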

Injection Attack Protection

An SQL injection attack is the insertion of a SQL query via input data sent from the client to the application. A successful SQL injection attack can read sensitive data from the database, modify database data, or shut down the server. Similarly, operating system and platform commands can often be used to give attackers access to data and escalate privileges on back-end servers. Remote File Inclusion attacks allow malicious users to run their own PHP code on a vulnerable website and access anything the PHP program could: databases, password files, etc. Most WAF solutions, using the normalization technique together with the signature and rule database, are able to deny requests carrying data, commands or instructions that could lead to any of these injection attacks.
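
Putting decoding and inspection together, a request-parameter gate might look like the following; the patterns are illustrative assumptions covering SQL, command and remote-file-inclusion payloads, nowhere near a production rule set:

```python
import re
import urllib.parse

# Illustrative injection patterns; real rule sets are far more extensive.
INJECTION_PATTERNS = re.compile(
    r"union\s+select|;\s*(drop|shutdown)\b|\|\s*cat\s|https?://[^\s]+\.php"
)

def should_block(raw_param):
    """Decode then inspect a request parameter for injection payloads."""
    value = urllib.parse.unquote_plus(raw_param).lower()
    return bool(INJECTION_PATTERNS.search(value))

print(should_block("q=1%20UNION%20SELECT%20password%20FROM%20users"))  # True
print(should_block("q=monthly+sales+report"))                          # False
```

Note the WAF only filters traffic; fixing the application (parameterized queries, input validation) remains the primary defense.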

DDoS Protection

A Distributed Denial of Service attack is a common technique used by hackers to impair the availability of a website or application by directing unusually large volumes of traffic against it. This uses up all the computing resources, eventually leaving the site unavailable. WAF solutions making use of normalization techniques and the signature and rule databases are able to block such requests. Common techniques used by WAF solutions include checking the content length and evaluating the number of requests or sessions from the same originating client within a given time period.
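
The two checks mentioned above, a content-length cap and a per-client request-rate cap, can be sketched as follows; all thresholds are assumed illustrative values:

```python
from collections import defaultdict, deque

MAX_BODY_BYTES = 1_000_000  # reject oversized payloads outright (assumed cap)
MAX_REQUESTS = 100          # per client, per window (assumed cap)
WINDOW_SECONDS = 10.0

_requests = defaultdict(deque)  # client address -> request timestamps

def admit(client, content_length, now):
    """Apply two cheap checks: a content-length cap and a per-client
    request-rate cap over a sliding time window."""
    if content_length > MAX_BODY_BYTES:
        return False
    window = _requests[client]
    window.append(now)
    # Drop requests that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) <= MAX_REQUESTS

# The 101st request inside the window is refused.
for i in range(101):
    ok = admit("203.0.113.50", content_length=512, now=0.01 * i)
print(ok)  # False
```

Per-client counters help against a single noisy source; a distributed flood additionally needs upstream measures (anycast, scrubbing) beyond the WAF itself.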


Obviously, what is listed above are the most common detection and protection techniques that any WAF solution would offer. Vendors are constantly improving these techniques and adding more detection and protection features. This has to be a constant endeavor, as hackers on the other side keep coming up with newer techniques to exploit various vulnerabilities.

Sunday, April 13, 2014

IT Governance For Small Businesses - Constraints

There is a perception that IT Governance best suits large organizations, and small organizations tend to ignore it considering the effort and resources required to practice IT Governance. But IT Governance is equally important for smaller organizations, so that the IT function, however small, delivers maximum value for the business while keeping the risk exposure to a minimum. Existing frameworks like COBIT are too extensive for small businesses to use in implementing IT governance; they are too complex and costly to implement, and small businesses may consider implementing and managing such a framework a bigger battle than it is worth.


ISACA however recommends taking an evolutive approach: take smaller steps first and let the practice evolve. Small businesses should convert the high-level concept of governance into practical, easy-to-implement best practices. The resource pools available to small businesses are a lot smaller, and even outsourcing might prove expensive considering the business volume; thus establishing an RoI on implementing IT Governance could be a bigger challenge.


It is not just resources and cost; there are certain other characteristics of small businesses which come in the way of implementing IT Governance. Here are some such characteristics, which an IT Governance framework designed for a small business should take into consideration.


Smaller or no Board of Directors

Many small businesses are closely held, perhaps a family business or a private limited company with a small number of Directors on the Board. Having an Independent Director or a Director with an IT background on the board is a big ask. This leaves the concentration of IT decision making with a few individuals or even a single one, who could be the CEO or the owner himself. IT-savvy business owners or CEOs tend to leverage IT more for their business and thus have some degree of adoption of standards, practices and frameworks. In such cases, the choice of technology, standards, practices, etc. is most likely limited to the knowledge level of the owner or CEO, and they don't take a leap into unfamiliar areas, which would call for more resources in evaluating and establishing the RoI.

Organization Structure

One of the first steps in implementing IT Governance in an organization is to set up an IT Strategy Committee and an IT Steering Committee with representation from different functions and from the Board. Small businesses do not have the extensive management structures needed for such committees. The organization structure of a small business is not as extensive as that of a large organization, and as such enforcing separation of duties may not be feasible at all. For instance, the Finance Manager of a small business may also perform the function of IT procurement with minimal support from IT Administrators. Similarly, having a separate CIO could be a big ask for a small business, as the cost of such a resource does not warrant the return.

Smaller IT departments

Having a fully functional IT department is a big investment for a small business. Thanks to the cloud trend and software as a service, this is a challenge even IT departments in large organizations are facing. With cloud-based services like Google Apps for Business and Microsoft's Office 365, coupled with various special-purpose software-as-a-service offerings, it is becoming a lot easier for businesses to get their IT up and running with minimal help from IT experts. This characteristic of a small business leads to a situation where non-IT staff might have to take up the IT Governance initiative, which carries a challenge of its own, as such staff might not comprehend the nuances of governance practices and jargon.

Lack of complementing frameworks

An IT Governance framework generally relies on various other practices or frameworks in use in an organization. For instance, ITIL, Enterprise Risk Management, ISO, CMMI, etc. are some such standards or frameworks whose existence makes adoption of an IT Governance framework more seamless. In a small business, the existence of such standards is highly unlikely. Small businesses need an IT governance framework that is simpler, self-contained and easier to implement, and that only contains controls that are not dependent on a control practice of a different standard.

Information security

While small businesses are rarely the primary target of hackers or attackers, the information security risk always remains. For obvious reasons arising out of the characteristics listed here, small businesses often cannot see the return on investment in information security. For that matter, small businesses typically do not have a formal risk management practice. They often do not possess some of the basic elements of security management, like information security policies, backup and disaster recovery, security awareness and up-to-date anti-virus protection. An IT governance framework aimed at small businesses will have to include a strong emphasis on information security and address the common security risks affecting small businesses.

Resources & Tools

The use of sophisticated software applications makes implementing and practicing IT Governance easier, but it calls for heavy investment, which is beyond the reach of small businesses. For instance, performance evaluation of various IT resources calls for collecting data and deriving various metrics that can be used to benchmark as well as measure the performance of IT resources and functions. This is made easier by using automated tools; depending on manual methods could prove cumbersome and lead to data inaccuracy. Because of the lack of financial and technical resources, small businesses cannot make use of such automated tools or software systems for this purpose.


Though the above list is not exhaustive, the items listed can be considered the key constraints that an IT Governance framework for small business must address. There is no one-size-fits-all solution even for large organizations. The IT Governance framework has to be designed, created and managed as relevant for each organization, and that includes even a small business. One may pick and choose controls from various frameworks and tailor them to suit the specific small or medium business. The framework should however provide for evolution, so that it can improve based on feedback from practice.

Saturday, April 5, 2014

IT Procurement - The Pricing Woes

Most IT products (both hardware and software) targeted at home or individual end customers usually carry a standard rate card. Some large resellers, considering their sales volume, may offer a discounted price of about 5 to 10 percent. While this seems to be a fair game, on the enterprise products side things are totally different. The buyer, the reseller (be it an integrator or just a distributor) and the principal vendors play a game of negotiation. The end result of this game is mostly that one or more players lose. This is in contrast to the win-win theory, where it is expected that all the players win.

The principals offering such enterprise products don't seem to have a standard pricing policy. Instead, they price the product or service for the specific enterprise customer based on the deal volume, the strategic importance of the deal and the indirect value that can be derived from the specific deal. The indirect benefits could range from increased reach to the associates of the customer, to consent to publish a case study which might improve the market ranking of the product, to increased revenue figures which again are used to determine the market share of the product or service.

The discounts enterprise customers get range from 40 to even 90 percent. Large enterprises manage to negotiate substantial discounts on such products and services. Neither the principals nor the resellers can expect much margin out of such deals, but they look for indirect benefits. This can lead to a situation where, if the principals don't see the intended indirect benefits being realized, they take a 'no-frills' approach and stop actively contributing towards the business goals of the customer.

This kind of pricing approach also results in smaller businesses ending up compensating for the benefits that larger enterprises get. That is, the discounts that large enterprises get come out of the gains that the principals and resellers make from deals with smaller business entities. This is, in a way, like taxing the poor for the benefit of the rich and could very well be termed corporate corruption.

Knowing this, customers try their best to engage in hard negotiation and get the maximum discount. While it is good to get the price advantage, are they aware of the hidden perils that could get in their way? Here are some things that could happen:


  • The principals are likely to cut corners to maximize their gain, or minimize their loss, from the deal. This could mean trimming down features that were not explicitly demanded by the customer, and charging the customer later when such features are required.
  • Vendors tend to tone down post-sale service levels. This could be the reason for contrasting experiences and feedback from different customers for the same product or service.
  • Principals and / or resellers take the no-frills approach. That is, the customer cannot expect a 'Customer Delight' kind of offering. The principals and vendors will stick to delivering what has been committed and not a bit more.
  • Unduly long time and effort is lost in the process of negotiation, which can impact the time-to-market advantages for the customer.


While the above could impact value delivery, these concerns should not get in the way of the negotiation process and lead to agreeing to an unreasonably high cost. This is where a win-win approach is recommended. A win-win outcome is one that gets all parties more than what no agreement would have guaranteed them. Win-win agreements do not promise all sides equal or similar gains. They only promise that all sides get an outcome that is better than their most realistic estimate of what they would have ended up with had they walked away with no agreement.


Saturday, March 22, 2014

Business Impact Analysis for Effective BCM

A business continuity plan helps improve the availability of an organization's critical services. In the process, the BCP identifies such critical processes and periodically assesses the quantitative and qualitative impact to the organization in the event of any disruption to those services. While the Business Continuity Plan is proactive in managing the risk of business disruption, the Business Resumption Plan and the Disaster Recovery Plan are reactive, dealing with recovering or resuming business services and assets following a disruption. BCP planning is a direct input to the business's D/R action plans.

Business Continuity Management and disaster recovery are natural components of Enterprise Risk Management. All the resources and plans that make up a business continuity plan are developed to address business interruption risk in an organization and should be part of a comprehensive mitigation plan for all the enterprise risks. Many organizations are beginning to recognize the opportunity they have from embedding or incorporating BCM into an overall program to identify, evaluate and mitigate risk. By viewing BCM as a risk management function and embedding it into the enterprise level ERM program, which has been aligned with the strategic imperatives of the company, boardroom expectations are met and alignment achieved.


The typical goals of BCM are:

  • To identify critical business processes and assign criticality. Factors influencing the determination of criticality include inter-dependencies among business processes and the maximum allowable downtime (MAD) for each unique business process.
  • To estimate the maximum downtime the organization can tolerate while still maintaining viability. Management must determine the longest period of time a business process can be disrupted before recovery becomes impossible or moot.
  • To evaluate resource requirements such as facilities, personnel, equipment, software, data files, vital records, and vendor and service provider relationships.

Business Impact Analysis

The first step in developing a strong, organization-wide business continuity plan is conducting a Business Impact Analysis. The result of BIA is a business impact analysis report, which describes the potential risks specific to the organization. The challenge lies in assessing the financial and other business risks associated with a service disruption. A BIA report quantifies the importance of business components and suggests appropriate plan and fund allocation for measures to protect them.

As with any plan, Business Continuity Planning should evolve on a continuous basis, as the business context keeps changing in line with growth and changing directions. Business Impact Analysis being an important phase of the BCM life-cycle, it should be revisited and refreshed in line with that cycle. As a process, the BIA is performed with respect to each critical activity or resource forming part of the enterprise business processes. Though BIA is applied to critical activities, it is recommended to perform BIA on all activities, as it is the BIA that establishes the criticality of an activity, process or resource.

Performing BIA

The following are the key steps in performing the Business Impact Analysis:

  • Preparation and Set-up - It is important to identify the tools and templates required to perform the BIA. For instance, a reference table for determining business impact is essential to provide consistent definitions of the different types of impacts and severity levels. If a structured risk assessment has already been carried out, the definitions and severity levels should already have been captured, and should be used for the BIA as well.
  • Identification - This step identifies the activities to be performed and the resources to be used to deliver the goods and services of the business organization. The sources for this information range from the mission & objectives of the enterprise to the defined business processes. Given that the BIA is performed on the identified activities and resources, this step can be considered a pre-requisite for the BIA rather than a step within it.
  • Identify potential disruptions - With respect to each identified activity or resource, identify the possible events or scenarios that could impact its desired outcome and thereby impact the business process. This is usually best done using techniques like brainstorming involving the relevant business users. As part of this step, the correlation between the severity of the impact and the duration of the disruption is also established.
  • Identify tangible losses - Disruption of certain activities or non-availability of certain resources directly results in monetary losses. If a given activity or resource, alone or in combination with others, could potentially cause revenue loss, this should be identified and the magnitude of the loss established as well.
  • Quantify intangible losses - Certain activities, when disrupted, may not directly result in monetary losses, but may cause intangible loss to the organization. For instance, the non-availability of customer care executives to respond to customer queries could erode brand value. Such impacts should be quantified using appropriate techniques so that they can be considered in determining priority.
  • Recovery cost - As part of the impact analysis, it makes sense to capture the time and effort it takes to resume or recover from the disruption. The magnitude of the recovery cost also contributes to the prioritization or ranking.
  • Identify dependencies - Sometimes the potential disruption or its impact depends on certain other activities or resources, be they internal or external. These details are useful in drawing up the business resumption plan and the disaster recovery plan.
  • Ranking - Once all relevant information has been collected and assembled, rankings for the critical business services or resources can be produced. Ranking is based on the potential loss of revenue, the time to recover and the severity of impact a disruption would cause. Minimum service levels and maximum allowable downtime are also established.
  • Prioritize critical services or products - Once the critical services or products are identified, they must be prioritized based on minimum acceptable delivery levels and the maximum period of time the service can be down before severe damage to the organization results. Determining this ranking requires information on the impact of a disruption on service delivery, loss of revenue, additional expenses and intangible losses.
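
The ranking step above can be sketched as a weighted score; the weights and 0-10 rating scales here are assumptions that a real BIA would calibrate with the business:

```python
# Illustrative weights for combining impact factors (assumptions).
WEIGHTS = {"revenue_loss": 0.5, "recovery_time": 0.3, "intangible": 0.2}

def bia_score(activity):
    """Combine impact factors (each rated 0-10) into a single
    criticality score used to rank activities."""
    return sum(WEIGHTS[k] * activity[k] for k in WEIGHTS)

activities = [
    {"name": "payment processing", "revenue_loss": 9, "recovery_time": 7, "intangible": 6},
    {"name": "internal wiki", "revenue_loss": 1, "recovery_time": 2, "intangible": 3},
]
ranked = sorted(activities, key=bia_score, reverse=True)
print([a["name"] for a in ranked])  # ['payment processing', 'internal wiki']
```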

The quality of the BIA is reflected in the reports produced after completing the above steps. Given that the BIA is a critical phase of BCM, it is important that this activity is performed with care and attention to detail. Using the right set of tools, techniques, templates and questionnaires is recommended for best results.

Sunday, March 16, 2014

IT Governance - Implementation Obstacles

IT governance is a process comprising a set of controls and practices that ensures the IT function is working on the right things at the right time in the right way, with a view to accomplishing the stated objectives and thereby contributing to meeting enterprise goals. Any process that aligns IT to business goals is the right strategy. However, it's the change required, and the compromises demanded of business leaders, that can get in the way and make it a not-so-easy program.

IT Governance offers many benefits, which include reducing the cost of day-to-day operations, improving overall operational efficiency and consistency, freeing more resources for strategic initiatives that improve competitiveness, choosing those initiatives far more wisely (working on the right things), bringing those initiatives to market faster with less risk, and bringing IT into close alignment with business priorities. At the same time, the results of an ineffective implementation can be devastating. Some such devastating results could be:
  • Business losses and disruptions, damaged reputations and weakened competitive positions
  • Schedules not met, higher costs, poorer quality, unsatisfied customers
  • Core business processes are negatively impacted (e.g. SAP impacts many critical business processes) by poor quality of IT deliverables 
  • Failure of IT to demonstrate its investment benefits or value propositions


The Three Pillars of IT Governance

To understand the obstacles to IT Governance in an organization, it helps to first understand the three critical pillars on which a successful IT Governance program is built:

Leadership, Organization, Decision Rights and Metrics

The IT Governance initiative must be decomposed into manageable and accountable work packages and deliverables, each assigned to owners for planning, development, execution and continuous improvement. The program must have clearly defined roles, responsibilities and decision rights for the entire program and for each major component of the integrated IT Governance framework and road map. A decision-rights matrix identifying decision influencers and decision makers is necessary to clarify decision roles and authority levels for the major IT Governance components.

Flexible and Scalable Processes

Processes form an integral part of the IT Governance program, as the IT Governance framework is made up of such processes and controls, which must be defined. It is also important that these processes evolve with usage, based on feedback collected through various metrics. At the same time, processes should be simple enough to understand and implement, yet flexible enough to provide room for improvement. People tend to ignore processes that are difficult to understand and practice as part of their day-to-day work. Thus the integrated framework approach works best.

Enabling Technology

Most business components rely on technology for most aspects of their value, reliability or efficiency. The choice of the right technology even plays a key role in supporting the first two pillars. Given that technology evolves at an accelerated rate, there should be a clear watch on such advancements, and the technology road map should provide for identification and adoption of the right technology at the right time to get the maximum value. Most organizations have recognized this and accordingly have started managing this area well.


The Key Obstacles

Most often, business leaders are motivated and rewarded by having their small part of the organization succeed. IT governance requires that the scarce resource of technology capacity be diligently distributed across the organization for overall business success. In other words, IT cannot be allocated on the basis of individual team needs but rather on collective, organizational goals. A recent empirical study by Lee uncovered factors such as 'lack of IT principles and policies', 'lack of clear IT Governance processes', 'lack of communication', and 'inadequate stakeholder involvement' as inhibitors of IT Governance implementation success.

Implementing IT Governance is a long and continuous journey, where obstacles and challenges are aplenty. A good understanding of the barriers that hinder the success of IT Governance implementation is important: once they are understood, their effects can be anticipated and pre-emptive actions taken to address them. The most frequently experienced obstacles include:

Culture

Instituting effective IT governance requires dealing with the “c-word.” The culture of a company—“the way we do things here”—can be a tremendous driver for business success. It can also be—and often is—a giant resistor that dampens positive change. Immeasurable amounts of energy have been dissipated trying to change embedded habits and methods that hid behind the cloak of “culture.” Today, worldwide, the trend is toward collaborative culture, especially in the sharing of information. The attitude that “information is power” lingers in some dark company corners. In some disciplines, such as sales, where compensation is directly related to personal contacts and initiative, it is arguable that the status quo has value. In most cases, though, managements are trying to rid the company of these attitudes in order to unlock the power of teamwork leveraged by technology. IT governance requires teamwork and information sharing to succeed.

Resistance to Change

Virtually every manager in business today has encountered employees who held up organizational change by insisting on continuing with the "old way" of doing something, even though the success of the "new way" depends on universal adoption. Fear of failure could be one reason why people are afraid to commit to change: they are uncertain that they can successfully implement it and fear that, if they fail, they will be held accountable. Another reason could be innate conservatism and the uncertainty that change brings, both of which cause resistance.

Lack of Appropriate Communication

Communication is really at the heart of IT governance, and the lack of appropriate communication can cause a major disconnect between IT executives and business executives. IT still tends to communicate in technology terms, which is simply not relevant to the business, and the business doesn't understand it. Good communication is therefore extraordinarily important, so that everybody is on the same page and the business and IT become very closely engaged. After all, strategic decisions on where to invest in technology are really business decisions, not technology decisions. That is how a lack of communication can easily derail the IT Governance program of an organization.

Lack of Value Proposition

CIOs must be willing to take the lead in the search for value-creating IT processes. If they are not, others—real experts—are glad to do so, in language that resonates with CEOs. For instance, in Project and Portfolio Governance, a 'Fail Fast' or 'Fail First' approach may be helpful. If the processes are designed around this approach, IT programs and functions get evaluated at various stages by analyzing the collected metrics to see whether it still makes sense to let the project or program move into the next stage. At every stage, using those metrics to revisit the project charter and the business objectives ensures that the desired value from the project or program is still achievable.
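The stage-gate evaluation described above can be sketched as a simple comparison of collected metrics against the thresholds agreed in the project charter. The metric names and threshold values below are purely illustrative assumptions, not part of any standard:

```python
# Hypothetical stage-gate check for a 'Fail Fast' portfolio process.
# Metric names and thresholds are illustrative assumptions only.

def evaluate_stage_gate(metrics, thresholds):
    """Return 'proceed' only if every collected metric still meets the
    threshold agreed in the project charter; otherwise 'stop' and list
    the metrics that fell short."""
    failures = [name for name, required in thresholds.items()
                if metrics.get(name, 0) < required]
    return ("stop", failures) if failures else ("proceed", [])

decision, failed = evaluate_stage_gate(
    {"expected_roi": 0.08, "adoption_rate": 0.6},   # metrics collected so far
    {"expected_roi": 0.10, "adoption_rate": 0.5},   # charter targets
)
print(decision, failed)  # the ROI metric no longer meets the charter target
```

A real portfolio process would of course weight metrics and involve human judgment; the point of the sketch is that the gate decision is made from data, not from "who speaks the loudest".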

Internal politics

Internal organizational politics may exert themselves, as the adoption and implementation of formal ITG practices will sometimes shift the decision rights and associated powers that currently exist in the organization. In most organizations, project priority is often decided by “who speaks the loudest” rather than by looking at the current business, the collected metrics and the immediate need.

Saturday, March 8, 2014

The Principles of Agile Enterprise Architecture Management

Change is happening everywhere, and at an accelerated rate, not only in IT but in many other functional areas, though the pace is highest in IT. Business users are encouraged to innovate in every possible area, and that brings in more and more transformation projects, an important category in enterprise program and portfolio management. Often these transformation projects are time critical; if not implemented on time, they lose their market advantage. Along the same lines, technology adoption is becoming a key aspect of business success, and predicting, tracking and embracing upcoming disruptive technologies has become an important business and strategic risk, as it can have wide impact across business strategies, capabilities and processes.

The Enterprise Architecture function has an equal responsibility in ensuring these changes are embraced with the least impact. While running the business as-is is an important aspect, enabling the transformation of business capabilities and information management capabilities is another key goal for Enterprise Architects. One key element that Enterprise Architects should address is the complexity of business and IT architecture management, so that transformation projects are implemented on the desired schedules and thus reap the intended business benefits. While there are other key objectives, like delivering stakeholder value, managing complexity is the objective that comes closest to being Agile.

Agile Enterprise Architecture is all about letting changes happen and thus keeping the architectural principles continuously evolving. This calls for an appropriate lifecycle that facilitates the continuous evolution, development and adaptation of the current and target reference architectures. The maturity levels of the various IT management functions will also keep changing over time. In this blog, let us focus on the key principles that enable Agile Enterprise Architecture Management:


Value Individuals and Interactions over Tools and Processes

It is a well-established and understood fact that it is the people who build success in the enterprise; tools and processes are just enablers. With people being the greatest asset, organizational culture plays an important role in motivating employees to collaborate, innovate and deliver results more effectively and efficiently. Build the EA team in such a way that it has representation or an interface with top management, business and IT owners, business and IT operations teams, and the project teams driving change from within. Choose and deploy the right set of tools, technology and processes to facilitate collaboration with the different business and IT functions.

The EAM team shall aim for sustainable evolution, at a pace driven by business and IT users; help project teams avoid panic, and discourage culture clashes; and understand that everyone has their own area of expertise and can thus add value to the project or program.


Focus on demands of top Stakeholders and speak their languages

Typically, the top stakeholders need continuous input from the EA team on various business and IT functions to decide on further strategic alignments or improvements, which in turn lead to new transformation projects or a change of course for existing ones. The inputs could take the form of metrics, visualizations and reports. It is very important that these inputs be relevant and meaningful to the target recipients. The following considerations help ensure that stakeholders realize the maximum value from such inputs:

  • A single number or picture is more helpful than 1000 reports
  • Avoid waste - Share information that is relevant and nothing more and nothing less.
  • Leverage the existing process to generate and deliver these inputs as against a whole set of EA specific processes.

Promote rapid feedback by working jointly on models and architecture blueprints with other people and functions. Remember that the best way of conveying information is a face-to-face conversation, supported by other materials. Shared development of a model, at a whiteboard, will generate excellent feedback and buy-in. Work as closely as possible with all stakeholders, including your customers and other partners.


Reflect behavior and adapt to changes

The effect of a change ultimately shows in the behavior of individuals, tools and functions. The EAM function shall attempt to understand the likely directions and behavior of such changes using techniques such as scenario analysis and change cases. This helps the EAM function determine how best to embrace the change in terms of timing, approach and methodology. This is where a pattern-based approach to developing the EAM function facilitates change adoption with much greater ease and the least impact.

EAM should manage and plan for changes and shall never resist a change. Embracing change may not always be easy, but a well-thought-out EAM evolution lifecycle certainly makes it simpler. One big change can often be broken into blocks and taken one at a time, depending on the time, effort and business priorities.


Here are some of the useful references for further reading on the Agility and the Enterprise Architecture Management.

1. Towards an Agile Design of the Enterprise Architecture Management Function

2. Principles for the Agile Architect

3. The Principles of Agile Architecture

4. Actionable Enterprise Architecture (EA) for the Agile Enterprise: Getting Back to Basics

Sunday, February 9, 2014

The Principles of Effective Risk Management

Enterprise Risk Management is one of the core domains of governance. In some business sectors, success depends on intelligent and effective risk management principles, frameworks and practices. Advances in technology, like big data and analytics, also play a key role in making risk management effective and in adding value to the business. Other factors that necessitate a well-architected ERM in an organization include regulatory and compliance needs, security and privacy expectations, and disaster and business continuity needs. As risk management practices have evolved, principle-based approaches have been found to be more effective.


Here are some of the common principles to model a Risk Management framework around:

  • Create and protect value - Any framework should be able to add value and also protect the value that the organization's assets are expected to deliver. This involves identifying the specific business needs, appropriately assessing the risk, and in turn facilitating the best risk mitigation or avoidance plan. Risk management must have a demonstrable effect on the achievement of objectives and the improvement of enterprise performance.
  • Integrated approach - Risk management cannot be practiced effectively in silos. Today's organizations face the challenges of many different frameworks for meeting different goals. For instance, ISO27001 for security, ITIL for IT infrastructure management, COBIT for Governance, etc. Integrated risk management promotes a continuous, proactive and systematic process to understand, manage and communicate risk from an organization-wide perspective in a cohesive and consistent manner. To be effective, the Risk Management framework should be capable of being integrated into the existing process framework.
  • Recognise & manage complexity - Organisations are very complex environments in which to deliver concrete solutions. There are many challenges that need to be overcome when planning and implementing information management projects. In practice, however, there is no way of avoiding the inherent complexities within organisations. New approaches to information management must therefore be found that recognise (and manage) this complexity.
  • Flexible and adaptable - There is no "one-size-fits-all" approach to risk management and organizations should consider their own context when determining an appropriate approach. Organizations today face a considerable change management challenge for information management projects. In practice, it means that projects must be carefully designed from the outset to ensure that sufficient adoption is gained. The framework shall be tailored and responsive to the organization's external and internal context including its mandate, priorities, organizational risk culture, risk management capacity, and partner and stakeholder interests.
  • Highly usable - In general, risk management practices should allow for the identification of risk information throughout the organization that can be used to support enterprise-wide decision-making, and should be flexible enough to evolve with changing priorities. Every employee of the organization has a role to play in an effective Risk Management program, so the structures and associated processes should be simple enough to understand and to execute.
  • Dynamic and responsive to change - The process of managing risk needs to be flexible. The challenging environment we operate in requires agencies to consider the context for managing risk as well as continuing to identify new risks that emerge, and make allowances for those risks that no longer exist. Risk Management shall be deployed in a systematic, structured and timely manner to enable cost-effective embedding and focused generation of consistent, comparable and reliable results. 
  • Leverage tools & technology - Effective risk management calls for the ability to consider and make use of large volumes of data, and should leverage statistical techniques to predict and prioritise risks. Coming up with the right mitigation or contingency plan also calls for processing large volumes of data. The framework should provide for leveraging the latest technology as it emerges to facilitate such high-volume information handling and statistical analysis.
  • Considerate to human and cultural factors - The success of the risk management program largely depends on employees implementing it as part of their everyday business activities. This calls for the structure and processes to be considerate of the organization's cultural values and to avoid creating conflicts.
  • Communicate extensively - Communication is the key for success of any project or program. The framework shall provide for seamless communication amongst all stakeholders, so that the information is exchanged at the right time without losing its value.
  • Continuous Improvement - The big bang approach is unlikely to yield the expected outcome for obvious reasons. Instead, an evolutionary approach will work better and thus the ERM should be capable of evolving. Deployment should be complemented with mechanisms to assess and continually improve enterprise risk management maturity and be aligned with approaches driving the organization’s overall excellence and maturity agenda. 
  • Governance - Oversight and accountability for the risk management process is critical to ensure that the necessary commitment and resources are secured, the risk assessment occurs at the right level in the organization, the full range of relevant risks is considered, these risks are evaluated through a rigorous and ongoing process, and requisite actions are taken, as appropriate.

The above is not an exhaustive list of principles that readily suits every organization. The right set of principles shall be identified based on the priorities of the business. When adopted, these principles help organizations practice improved risk management, giving the enterprise the following benefits:
  • Enhance the coverage of risks in all areas, including mission, strategy, planning, operations and finance.
  • Consider the causes of various risks and the resulting impacts.
  • Develop a culture in which employees manage risks as part of their daily routines.
  • Optimize risk appetite, so that business functions can take calculated risks.
  • Facilitate enterprise wide risk aware decision making.

Saturday, January 25, 2014

Internet of Things: What Strange Things Can Happen

About six years back, we started to see WiFi-enabled digital cameras and wondered what WiFi had to do with a digital camera. With it, digital cameras were able to upload captured images automatically to cloud-based photo albums. Later came GPS-equipped digital cameras, which attach the location to the captured images. Of course, with smartphones equipped with higher-resolution cameras, digital cameras are now in decline. That is just a well-known example of how a 'thing', or a smart thing, can connect to a network and share useful data for a purpose. So much has evolved since then, and we now see a world of possibilities in having all 'things' connected.


Researchers see a lot of benefit in making things smart and interconnecting them. Networking technologies are also evolving at a brisk pace, offering various improvements in wireless technologies and protocols. We can see this trend advancing further, and it may mature in about two decades. Looking further ahead, in line with my blog on Human Interface Technology, even humans can remain connected, and that will render human disabilities a thing of the past.


If you followed this year’s CES, it is evident that the future is all about connected devices. We could see everyday devices equipped with sensors and connectivity to work together, understand what we’re doing, and operate automatically to make our lives easier. Here are some of real world examples of Internet of Things:


A smart refrigerator can read the embedded tags on the grocery items stored in it and then, using a supporting backend platform on the cloud, identify the items and fetch details such as date of manufacture, expiry date and quantity. The fridge may thus alert consumers about the state and stock of such items. With the kind of wearable gadgets we see now, these alerts can come through such devices too. It is left to your imagination to what extent this smart capability can be extended.


Medical and emergency care is another area where smart 'things' play a very useful and life-saving role. For instance, a connected car can call emergency services faster than a mobile phone. Again, with the help of embedded or worn smart gadgets, a hospital can learn a patient's history as the patient arrives and prepare for emergency care, saving precious, potentially life-saving time. Check out this video that IBM has made describing how the Internet of Things is growing fast and could pervade the everyday life of human beings.


Extending this further to the daily routines of a business executive, the possibilities are endless and here are some that are close to reality, if not already real:

  • Once your smartphone hears a hint about a meeting in a conversation, it will look up your calendar in the background and pass on your busy/free information. If the executive wears smart glasses, he would see the schedule as he talks, which facilitates scheduling the meeting.
  • Smart alarms will be smart enough to consider what time you went to sleep and your schedule (both personal and official) for the following day, intelligently decide the wake-up time, and trigger the alarm accordingly.
  • Depending on traffic conditions, your car will intelligently suggest alternate routes to the office or other scheduled meeting venue and, if needed, automatically inform the meeting organizers about a possible delay or even seek rescheduling of the meeting.
  • As you drive back home, you remember that you need to pick up some drugs from a drugstore. Your smart car will already know this and will identify a store that stocks the drugs you need, on or near your route. It can even place the order with the store and have your items kept ready, so you just pick them up en route.
  • Needless to say, your car will be smart enough to run a health diagnostic on itself and decide on the best date for its own garage visit so that your schedule is not impacted.
  • These smart things will know about your presence and which device is in touch with you for sending alerts. For example, if you are at home watching TV, your TV may show alerts from your washing machine; similarly, when you are at work, your smartphone would show these notifications.
  • Here are some more ways the 'Internet of Things' can impact your daily life.


Coming back to the household: you are watching your favorite action movie with surround sound, and you did not change your smartphone from silent mode back to a ringing profile. You don't have to worry; your smartphone knows what you are up to and, over time, will have learnt by itself which calls you would want to answer in this situation, and accordingly rejects the others while responding to the caller appropriately. If it is an important call that you wouldn't want to miss, it knows that already and will turn down the TV audio volume to draw your attention to the call; you don't even have to reach for your phone, as your TV will take over the call from your smartphone. Extending this further, depending on the profiles of the other members of the house, which the house already knows through its sensors and networks, your smartphone will decide whether or not to route the call to the TV.


We can now visualize the possibilities, and they are endless. Smart things will have built-in learning capability and will keep learning from their master's behavior to perfect their services. This trend could lead to a situation where things, by themselves or under the influence of hackers, attempt to take over human beings, as portrayed in some recent science fiction movies. On top of this, hackers will leverage these smart capabilities to break into these connected networks and do whatever they have been doing with connected systems today.


Here is how the hackers can intrude into your digital lifestyle:

  • We have already seen reports of smart refrigerators sending out spam emails.
  • By hacking into your home network, hackers may learn how many members are at home, or that no one is inside, information that is useful for planning burglary attempts.
  • Your TV may refuse to play your favorite channel and instead play content that the hackers prefer you to watch.
  • Your car may drive to a place different from where you wanted to go. Along the same lines, hackers could execute traffic diversions and cause traffic jams, as portrayed in the movie Die Hard 4.
  • Your orders for home supplies may be hijacked and the deliveries made elsewhere, while you have already paid for them. And of course, your home network will still acknowledge receipt of the deliveries even though nothing actually arrived.
  • The impact of hacking into an emergency services network could be huge and life-threatening.
  • Your smartphone could be hacked to refuse critical business calls, causing revenue impact to your organization.


IDC anticipates that more than 200 billion connected devices will be in use by 2021, with more than 30 billion being autonomous devices. Cisco's Internet Business Solutions Group (IBSG) predicts some 25 billion devices will be connected by 2015, and 50 billion by 2020. How will having so many things connected change everything? Find the answer in the infographic. The Internet of Things is coming, and it is here to stay. Whether we humans are ready to take on this evolution remains to be seen.

Friday, January 17, 2014

REST Services - Security Best Practices

As most of us know, REST (Representational State Transfer) is an architectural style that is gaining increasing recognition among architects for the inherent advantages it offers. REST recommends the use of standards such as HTTP, URI, XML and JSON, and formats such as GIF, MPEG, etc. Twitter, iPhone apps, Google Maps and Amazon Web Services (AWS) demonstrate heavy use of REST services. The basic tenets of REST are statelessness and the use of the HTTP methods GET, PUT, POST and DELETE as outlined in the HTTP RFC.


Obviously, architects see some key advantages in REST services, and so a REST implementation becomes an important consideration in responsive, service-oriented applications. Let us recap some of the key advantages:

  • Resources can be uniquely identified using URIs, which facilitates interconnecting these resources.
  • Resource manipulation is accomplished using the standard HTTP verbs, viz. GET, PUT, POST and DELETE.
  • The data payload is minimal, which offers capacity and efficiency benefits.
  • Easier implementation offers a shorter learning curve, better maintainability and a time-to-market advantage.
  • Strong support in JavaScript offers client-side computing benefits and thus improves responsiveness.

Needless to say, there are certain disadvantages to REST services too, and here are some:

  • Prone to the same threats and vulnerabilities as HTTP and the web in general.
  • Improper use of the HTTP commands can lead to problems and complicate the design.
  • Relies on very few standards.

Some of the security challenges with REST Service implementations are outlined below:

Chained trust is challenging for web service implementations, and the situation is no different with REST. Unlike with SOAP, standards like WS-Security and SAML cannot be used with REST services. This calls for relying on a combination of security measures that are specific to each implementation. Here are some such measures, which in combination may help overcome this concern:

  • Use digital certificates to authenticate the server and the user.
  • Pass the user's identity from server to server, and perform the necessary validation and authorization at the data source.
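As a minimal sketch of the second point, server-to-server identity propagation can be implemented by signing the user's identity with a shared secret and verifying it at the data source before any authorization decision. The token format and secret handling below are illustrative assumptions; in practice this would typically be combined with mutual TLS or standard signed tokens:

```python
import hmac
import hashlib

# Hypothetical out-of-band shared secret between the two services.
SHARED_SECRET = b"rotate-me-regularly"

def sign_identity(user_id):
    """Upstream service: attach an HMAC signature to the user's identity
    before forwarding it to the downstream service."""
    sig = hmac.new(SHARED_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def verify_identity(token):
    """Downstream service / data source: recompute the signature and
    return the user id only if it matches; otherwise reject."""
    user_id, _, sig = token.rpartition(":")
    expected = hmac.new(SHARED_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None

token = sign_identity("alice")
print(verify_identity(token))               # alice
print(verify_identity("alice:forged-sig"))  # None (rejected)
```

The constant-time comparison (`hmac.compare_digest`) matters here: a naive string comparison would leak timing information to an attacker probing forged signatures.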

Cross-site request forgery (CSRF) attacks attempt to force an authenticated user to execute functionality without their knowledge. Being stateless, REST is inherently vulnerable to CSRF attacks. The workarounds for this security concern are:

  • Use of a custom header - Setting a custom header such as an X-XSRF header is a known mitigation for this concern. The endpoints receiving REST service requests reject or drop any request that lacks the expected custom header. Note that this is not a foolproof technique, but it offers some protection, which is better than nothing.
  • Another approach is to deviate from the basic tenets of REST and maintain state, in which case a token can be generated and maintained to authenticate requests, so that requests carrying an invalid token, or none at all, can be dropped or rejected.
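A minimal sketch of the custom-header check, assuming a plain dict of request headers (the header name and the dict-based request shape are illustrative; real frameworks expose headers through their own request objects):

```python
# Hypothetical custom header that legitimate clients must set on every
# state-changing request. Browsers do not attach custom headers to
# cross-site form posts, so a forged request normally arrives without it.
REQUIRED_HEADER = "X-XSRF-Header"

def accept_request(headers):
    """Return True only if the request carries the expected custom header."""
    return REQUIRED_HEADER in headers

print(accept_request({"X-XSRF-Header": "1", "Content-Type": "application/json"}))  # True
print(accept_request({"Content-Type": "application/x-www-form-urlencoded"}))       # False
```

As noted above, this check alone is not foolproof (non-browser clients and some plugin bugs can still forge headers), so it should be layered with other controls.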

While the above are just examples of the concerns, REST services, being based on the HTTP specifications, are prone to all the security vulnerabilities of a web application. Thus, while REST is the easier choice for the advantages listed above, it should be implemented with due consideration of some or all of the following security best practices:
  • All data must be sent over HTTPS and this will ensure securing of the data in transit.
  • Use of PKI or HTTP Digest Authentication for authentication.
  • Always perform authorization for every request upon receipt. 
  • Scan HTTP headers, query strings and POST data and look for reasons to reject a request.
  • Don't combine multiple resources with a single URI, always uniquely identify each resource, so that the security implementation can be simple and relevant to the specific resource.
  • Always perform validation of the JSON / XML data.
  • Ensure appropriate use of the HTTP commands for managing the resources and enable selective restriction of these commands.
  • Design URIs to be persistent. If a URI needs to change, honor the old URI and issue a redirect to the client.
  • Caching should generally be avoided where possible and sensitive data should never be cached.
  • When developing REST solutions, care needs to be taken not to create URIs that contain sensitive information. 
  • The requester should be authenticated and authorized prior to completing an access control decision. 
  • All access control decisions shall be logged. 
  • Write code defensively, as if every request could be hostile.
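Several of the practices above (authorizing every request upon receipt, scanning the input for reasons to reject, and validating the JSON payload) can be sketched together in a single request handler. The rejection markers and the role-based access model below are illustrative assumptions only:

```python
import json

# Illustrative markers that would cause a request to be rejected outright.
SUSPICIOUS = ("<script", "../", "';--")

def handle_request(user_roles, resource_acl, body):
    """Hypothetical handler applying three of the best practices in order."""
    # 1. Perform authorization on every request, before anything else.
    if not set(user_roles) & set(resource_acl):
        return 403, "forbidden"
    # 2. Scan the raw input, looking for reasons to reject the request.
    if any(marker in body.lower() for marker in SUSPICIOUS):
        return 400, "rejected"
    # 3. Validate that the payload is well-formed JSON before using it.
    try:
        payload = json.loads(body)
    except ValueError:
        return 400, "malformed JSON"
    return 200, payload

print(handle_request(["editor"], ["editor", "admin"], '{"title": "ok"}'))
# → (200, {'title': 'ok'})
```

The ordering is deliberate: rejecting unauthorized or suspicious requests before parsing the body keeps the attack surface of the parser away from anonymous traffic.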


Friday, January 3, 2014

Human Technology Interfaces - What The Future Has In Store

All of us have been reading something or other about technology advancements that work with the human body. For example, Health IT companies are experimenting with embedding memory chips under the skin to store an individual's health records, so that when you walk into a clinic, the clinic will know your health history and be able to suggest the further course of treatment, all handled by a non-human front-office assistant. Similarly, with advancements in brain interfaces, and along the lines of the movie "Minority Report", the police and investigating authorities may move to crime-prevention mode, i.e. they will know the moment you think of committing a crime; with technologies like virtual presence and surrogates, this might even be accomplished without any human casualties.

There are more such advancements, and in this blog my attempt is to present a few scenarios that could become possible in the near future, along with the effects they can have on various attributes of mankind.

Glass: With further advancement, Google Glass-like gadgets could be miniaturized and worn like contact lenses. These lenses would be able to interface with the things around you. For instance, the refrigerator will greet you with its current temperature, and you will know what is inside the various containers just by looking at them (without opening), along with details like quantity and how many days an item has been stored. With added gamification, one could even enjoy performing various tasks at the kitchen table: while assisting you with tasks like chopping vegetables, these things will also keep a score of how you perform, so that you enjoy doing them. Coupled with access to public and private data stores, these gadgets will help you in decision making, which can enhance one's Personal Intelligence (PI). Check out this video for a glimpse of what I have tried to narrate here.

Brain Interface: Gadgets like Brain Link are already on the market which, coupled with related smartphone applications, give beneficial experiences like attention training, meditation, neuro-social gaming, and research and knowledge about the brain. Most of us have watched the movies 'Surrogates', wherein humans stay indoors while their surrogates go out to work, and 'Minority Report', where the police and justice department get alerts the moment someone thinks of committing a crime. Quite a few science fiction imaginings of the past have become reality now. Recent research accomplishments suggest that even the fiction exhibited in these movies might become reality some day not very far away. For instance, researchers at Harvard have demonstrated a non-invasive brain-to-brain interface wherein humans could control animals with their thoughts alone.

Given that continued advancements in brain interfaces will build on these accomplishments, coupled with various other inventions, the next generation of mankind may experience the following:


  • Personal Intelligence can be augmented by wearing or embedding devices and / or gadgets.
  • Though humans can have private thoughts, these will be subject to review or audit by government agencies, and securing your thoughts will become absolutely essential.
  • Shopping will be virtual and all products can be virtually felt / experienced sitting at home and then can be ordered.
  • All 'things' would have interfaces to interact with human.
  • Blink or double blinks can be programmed to perform certain actions like taking a snapshot of what you have been seeing at that moment, etc.
  • Artificial or virtual dreams will become reality, and one can have a choice of dreams and a choice of character. Extending this, one would be able to watch a favorite movie while asleep and cast oneself as a character in the movie.
  • With Body Area Networking and embedded nano chips across various critical body parts, self diagnosis with alerts might be a possibility.
  • Human disabilities can be worked around using robotic body parts and brain interface technology.
  • The hacking community would sharpen their skills and would explore opportunities of hacking human thoughts and human memory, which could be the biggest security and privacy threat to combat for the security experts.


Here are some more videos demonstrating the innovations that are taking place around human technology interfaces:

  • Ford takes SYNC to the next level through the use of configurable controls and the use of an electronic personal assistant, or "avatar," named Eva
  • Someday we'll be living on and under the oceans. This idea isn't far-fetched, and if it comes true, here's the answer to a new type of underwater transportation system.
  • Using a brain-computer interface technology pioneered by University of Minnesota biomedical engineering professor Bin He, several young people have learned to use their thoughts to steer a flying robot around a gym, making it turn, rise, dip, and even sail through a ring.
  • Cathy Hutchinson has been unable to move her own arms or legs for 15 years. But using the most advanced brain-machine interface ever developed, she can steer a robotic arm towards a bottle, pick it up, and drink her morning coffee.
  • At Barcelona University, scientists are working on a European Research Project to link a human brain to a robot using skin electrodes and video goggles so that the user feels they are actually in the android body wherever it is in the world.