Saturday, April 5, 2014

IT Procurement - The Pricing Woes

Most IT products (both hardware and software) targeted at home or individual end customers usually carry a standard rate card. Some large resellers, given their sales volume, may offer a discounted price of about 5 to 10 percent. While this seems a fair game, on the enterprise products side things are totally different. The buyer, the reseller (be it an integrator or just a distributor) and the principal vendors play a game of negotiation. The end result of this game is mostly that one or more players lose, in contrast to the win-win theory, where all players are expected to win.

The principals offering such enterprise products don't seem to have a standard pricing policy. Instead, they price the product or service for the specific enterprise customer based on the deal volume, the strategic importance of the deal and the indirect value that can be derived from it. The indirect benefits could range from increased reach to the customer's associates, to consent to publish a case study that might improve the product's market ranking, to increased revenue figures, which in turn are used to determine the market share of the product or service.

The discounts that enterprise customers get range from 40 to even 90 percent. Large enterprises manage to negotiate substantial discounts on such products and services. Neither the principals nor the resellers can expect much margin from such deals, so they look for indirect benefits. This can lead to a situation where, if the principals don't see the intended indirect benefits being realized, they take a 'no-frills' approach and stop actively contributing towards the business goals of the customer.

This kind of pricing approach also results in smaller businesses ending up compensating for the benefits that the larger enterprises get. That is, the discounts that the large enterprises enjoy come out of the gains that the principals and resellers make from deals with smaller business entities. This is in a way like taxing the poor for the benefit of the rich and could very well be termed corporate corruption.

Knowing this, customers try their best to engage in hard negotiation and get the maximum discount. While it is good to get the price advantage, are they aware of the hidden perils that could get in their way? Here are some things that could happen:


  • The principals are likely to cut corners to maximise their gain from the deal or minimise their loss. This could mean trimming down features that were not explicitly demanded, and charging the customer later when such features are required.
  • Vendors tend to tone down post-sale service levels. This could explain the contrasting experience or feedback from different customers for the same product or service.
  • Principals and / or resellers take the no-frills approach. That is, the customer cannot expect a 'Customer Delight' kind of offering. The principals and vendors will stick to delivering what has been committed and not a bit more.
  • Unduly long time and effort is lost in the negotiation process, which can erode the time-to-market advantage for the customer.


While the above could impact the value delivery, these concerns should not derail the negotiation process and lead to agreeing to an unreasonably high cost. This is where a win-win approach is recommended. A win-win outcome is one that gets all parties more than what no agreement would have guaranteed them. Win-win agreements do not promise all sides equal or similar gains. They only promise that each side gets an outcome that is better than its most realistic estimate of what it would have ended up with had it walked away with no agreement.


Saturday, March 22, 2014

Business Impact Analysis for Effective BCM

A business continuity plan helps improve the availability of an organization's critical services. In the process, the BCP identifies such critical processes and periodically assesses the quantitative and qualitative impact to the organization in the event of any disruption to those services. While the Business Continuity Plan is proactive in managing the risk of business disruption, the Business Resumption Plan and Disaster Recovery Plan are reactive, dealing with recovering or resuming the business services and assets following a disruption. BCP planning is a direct input to the business's D/R action plans.

Business Continuity Management and disaster recovery are natural components of Enterprise Risk Management. All the resources and plans that make up a business continuity plan are developed to address business interruption risk in an organization and should be part of a comprehensive mitigation plan for all the enterprise risks. Many organizations are beginning to recognize the opportunity they have from embedding or incorporating BCM into an overall program to identify, evaluate and mitigate risk. By viewing BCM as a risk management function and embedding it into the enterprise level ERM program, which has been aligned with the strategic imperatives of the company, boardroom expectations are met and alignment achieved.


The typical goals of BCM are:

  • To identify critical business processes and assign criticality. Factors influencing the determination of criticality include inter-dependencies among business processes and the maximum allowable downtime (MAD) for each unique business process.
  • To estimate the maximum downtime the organization can tolerate while still maintaining viability. Management must determine the longest period of time a business process can be disrupted before recovery becomes impossible or moot.
  • To evaluate resource requirements such as facilities, personnel, equipment, software, data files, vital records, and vendor and service provider relationships.

Business Impact Analysis

The first step in developing a strong, organization-wide business continuity plan is conducting a Business Impact Analysis. The result of a BIA is a business impact analysis report, which describes the potential risks specific to the organization. The challenge lies in assessing the financial and other business risks associated with a service disruption. A BIA report quantifies the importance of business components and suggests appropriate planning and fund allocation for measures to protect them.

As with any plan, Business Continuity Planning should evolve on a continuous basis, as the business context keeps changing in line with growth and changing directions. Business Impact Analysis being an important phase of the BCM life cycle, it should be revisited and refreshed in line with that life cycle. As a process, the BIA is performed with respect to each critical activity or resource forming part of the enterprise business processes. Though BIA is applied to critical activities, it is recommended to perform BIA on all activities, as it is the BIA that establishes the criticality of an activity, process or resource.

Performing BIA

The following are the key steps in performing the Business Impact Analysis:

  • Preparation and Set-up - It is important to identify the tools or templates required to perform BIA. For instance, a reference table to determine the business impact is essential to provide consistent definitions of the different types of impacts and severity levels. If a structured risk assessment has already been carried out, the definitions and severity levels should already have been captured and should be used for the BIA as well.
  • Identification - This first step determines the activities to be performed and the resources to be used to deliver the goods and services of the business organization. The sources for gathering this information range from the mission & objectives of the enterprise to the defined business processes. Given that the BIA is performed on the identified activities and resources, this step can be considered a pre-requisite for BIA rather than a step within it.
  • Identify potential disruptions - With respect to each identified activity or resource, identify the possible events or scenarios that could impact its desired outcome and thereby impact the business process. This is usually best done using techniques like brainstorming involving the relevant business users. As part of this step, the correlation between the severity of the impact and the duration of disruption is also established.
  • Identify tangible losses - Disruption of certain activities or non-availability of certain resources directly results in monetary losses. If a given activity or resource, alone or in combination with other resources or activities, could potentially cause revenue loss, the magnitude of that loss should be identified and established as well.
  • Quantify intangible losses - Certain activities, when disrupted, may not directly result in monetary losses, but may cause intangible loss to the organization. For instance, non-availability of customer care executives to respond to customer queries could result in erosion of brand value. Such impacts should be quantified using appropriate techniques so that they can be considered in determining the priority.
  • Recovery cost - As part of the impact analysis, it makes sense to capture the time and effort it takes to resume or recover from the disruption. The magnitude of the recovery cost also contributes to the prioritization or ranking.
  • Identify dependencies - Sometimes, the potential disruption or its impact depends on certain other activities or resources, internal or external. These details will be useful in drawing up the business resumption plan and the disaster recovery plan.
  • Ranking - Once all relevant information has been collected and assembled, rankings for the critical business services or resources can be produced. Ranking is based on the potential loss of revenue, the time to recovery and the severity of impact a disruption would cause. Minimum service levels and maximum allowable downtime are also established.
  • Prioritize critical services or products - Once the critical services or products are identified, they must be prioritized based on minimum acceptable delivery levels and the maximum period of time the service can be down before severe damage to the organization results. To determine the ranking of critical services, information is required on the impact of a disruption on service delivery, loss of revenue, additional expenses and intangible losses.
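The ranking and prioritization steps above can be sketched as a simple weighted scoring model. This is only an illustrative sketch: the field names, weights and 0-10 scales are assumptions for the example, not a prescribed BIA formula.

```python
# Illustrative BIA ranking sketch: score each business process on
# estimated revenue loss, recovery time and impact severity (all
# normalized to a 0-10 scale), then rank the most critical first.
# Field names and weights are assumptions for this example.

def bia_score(process, w_revenue=0.5, w_recovery=0.3, w_severity=0.2):
    """Weighted criticality score for one business process."""
    return (w_revenue * process["revenue_loss"]
            + w_recovery * process["recovery_time"]
            + w_severity * process["severity"])

processes = [
    {"name": "Payment processing", "revenue_loss": 9, "recovery_time": 8, "severity": 9},
    {"name": "Customer care",      "revenue_loss": 4, "recovery_time": 5, "severity": 7},
    {"name": "Internal reporting", "revenue_loss": 2, "recovery_time": 3, "severity": 3},
]

# Highest score = most critical, to be recovered first in the BCP.
ranked = sorted(processes, key=bia_score, reverse=True)
for rank, p in enumerate(ranked, start=1):
    print(rank, p["name"], round(bia_score(p), 1))
```

In a real BIA the inputs would come from the workshops and interviews described above, and the weights would reflect the organization's own impact reference table.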

The quality of the BIA is reflected in the reports produced after completing the above steps. Given that BIA is a critical phase of BCM, it is important that this activity is performed with care and attention to detail. Using the right set of tools, techniques, templates and questionnaires is recommended for best results.

Sunday, March 16, 2014

IT Governance - Implementation Obstacles

IT governance is a process comprising a set of controls and practices that ensure the IT function is working on the right things at the right time in the right way, with a view to accomplishing the stated objectives and thereby contributing towards meeting enterprise objectives and goals. Any process that aligns IT to business goals is the right strategy. However, it is the change required, and the compromises demanded of business leaders, that can get in the way and make it a not-so-easy program.

IT Governance offers many benefits: it can reduce the cost of day-to-day operations, improve overall operational efficiency and consistency, free more resources for strategic initiatives that improve competitiveness, help choose those initiatives far more wisely by working on the right things, bring those initiatives to market faster with less risk, and bring IT into close alignment with business priorities. At the same time, the results of an ineffective implementation can be devastating. Some such results could be:
  • Business losses and disruptions, damaged reputations and weakened competitive positions
  • Schedules not met, higher costs, poorer quality, unsatisfied customers
  • Core business processes are negatively impacted (e.g. SAP impacts many critical business processes) by poor quality of IT deliverables 
  • Failure of IT to demonstrate its investment benefits or value propositions


The Three Pillars of IT Governance

To understand the obstacles to IT Governance in an organization, it is appropriate to understand the three critical pillars on which a successful IT Governance program is built. The following are the three critical pillars of a successful IT Governance implementation:

Leadership, Organization, Decision Rights and Metrics

The IT Governance Initiative must be decomposed into manageable and accountable work packages and deliverables and assigned to owners for planning, development, execution and continuous improvement. The IT Governance program must have clearly defined roles, responsibilities and decision rights for the entire program and for each major component of the integrated IT Governance framework and road map.
A decision rights matrix identifying decision influencers and decision makers is necessary to clarify decision roles and authority levels for the major IT Governance components.
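As a rough illustration, such a matrix can be captured as simple structured data that teams can query and keep under version control. The decision areas and role names below are hypothetical examples, not a prescribed governance structure.

```python
# Hypothetical decision rights matrix for major IT Governance
# components: each decision area maps to its accountable decision
# maker and the roles consulted (decision influencers).
decision_rights = {
    "IT investment prioritization": {
        "decides": "IT Steering Committee",
        "influences": ["CIO", "Business Unit Heads"],
    },
    "Architecture standards": {
        "decides": "Enterprise Architecture Board",
        "influences": ["CTO", "Solution Architects"],
    },
    "IT budget allocation": {
        "decides": "CFO",
        "influences": ["CIO", "IT Steering Committee"],
    },
}

def who_decides(decision_area):
    """Return the accountable decision maker for a governance decision."""
    return decision_rights[decision_area]["decides"]

print(who_decides("Architecture standards"))
```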

Flexible and Scalable Processes

Processes form an integral part of the IT Governance program, as the IT Governance framework is made up of such processes and controls, which have to be defined. It is also important that these processes evolve with usage, based on feedback collected through various metrics. At the same time, processes should not only be simple enough to understand and implement but also flexible enough to provide room for improvement. People tend to ignore a process if it is difficult to understand and practice as part of their day-to-day work. Thus the integrated framework approach works best.

Enabling Technology

Most business components rely on technology for most aspects of their value, reliability or efficiency. The choice of the right technology even plays a key role in making up the first two pillars. Given that technology evolves at an accelerated rate, there should be a clear watch on such advancements, and the technology road map should provide for identification and adoption of the right technology at the right time to get the maximum value. Most organizations have recognized this and accordingly have started managing this area well.


The Key Obstacles

Most often, business leaders are motivated and rewarded by having their small part of the organization succeed. IT governance requires that the scarce resource of technology capacity be diligently distributed across the organization for overall business success. In other words, IT cannot be allocated on the basis of individual team needs but rather on collective, organizational goals. A recent empirical study by Lee uncovered factors such as 'lack of IT principles and policies', 'lack of clear IT Governance processes', 'lack of communication', and 'inadequate stakeholder involvement' as inhibitors of IT Governance implementation success.

Implementing IT Governance is a long and continuous journey, where obstacles and challenges are aplenty. A good understanding of the barriers that hinder the success of IT Governance implementation is important: once they are understood, their effect can be gauged and pre-emptive actions can be taken to address them. The most frequently experienced obstacles include:

Culture

Instituting effective IT governance requires dealing with the “c-word.” The culture of a company—“the way we do things here”—can be a tremendous driver for business success. It can also be—and often is—a giant resistor that dampens positive change. Immeasurable amounts of energy have been dissipated trying to change embedded habits and methods that hid behind the cloak of “culture.” Today, worldwide, the trend is toward collaborative culture, especially in the sharing of information. The attitude that “information is power” lingers in some dark company corners. In some disciplines, such as sales, where compensation is directly related to personal contacts and initiative, it is arguable that the status quo has value. In most cases, though, managements are trying to rid the company of these attitudes in order to unlock the power of teamwork leveraged by technology. IT governance requires teamwork and information sharing to succeed.

Resistance to Change

Virtually every manager in business today has encountered employees who held up organizational change by insisting on continuing with the "old way" of doing something, even though the success of the "new way" depends on universal adoption. Fear of failure could be one of the reasons people are afraid to commit to change: uncertain that they can implement it successfully, they fear being held accountable if they fail. Another reason could be innate conservatism and the uncertainty that change brings, which causes resistance.

Lack of Appropriate Communication

Communication is really at the heart of IT governance, and the lack of appropriate communication can cause a major disconnect between IT executives and business executives. IT still tends to communicate in technology terms, which is just not relevant to the business, and the business just doesn't understand it. Good communication is therefore extraordinarily important, so that everybody is on the same page and business and IT become very closely engaged. After all, decisions on where to invest in technology are really business decisions, not technology decisions. A lack of communication can easily derail the IT Governance program of an organization.

Lack of Value Proposition

CIOs must be willing to take the lead in the search for value-creating IT processes. If they are not, others, real experts, are glad to do so, in language that resonates with CEOs. For instance, in Project and Portfolio Governance the 'Fail Fast' or 'Fail First' approach may be helpful. If the processes are designed around this approach, IT programs and functions get evaluated at various stages by analyzing the collected metrics to see if it still makes sense to let the project or program move into the next stage. At every stage, using the metrics, a revisit of the project charter and the business objectives ensures that the desired value of the project or program is still achievable.
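The 'Fail Fast' stage-gate idea can be sketched as a simple check of collected metrics against charter thresholds at each stage. The metric names and threshold values below are illustrative assumptions, not a standard governance API.

```python
# Illustrative 'Fail Fast' stage gate: compare a project's collected
# metrics against its charter thresholds and decide whether it still
# makes sense to let it move into the next stage.
# Metric names and threshold values are assumptions for this sketch.

def stage_gate(metrics, thresholds):
    """Return (proceed, failed_checks) for a project at a stage gate."""
    failed = [name for name, minimum in thresholds.items()
              if metrics.get(name, 0) < minimum]
    return (len(failed) == 0, failed)

charter_thresholds = {"expected_roi": 1.2, "stakeholder_satisfaction": 0.7}

# A healthy project clears the gate...
ok, failed = stage_gate({"expected_roi": 1.5, "stakeholder_satisfaction": 0.8},
                        charter_thresholds)
print(ok, failed)

# ...while one whose expected value has eroded fails fast.
ok, failed = stage_gate({"expected_roi": 0.9, "stakeholder_satisfaction": 0.8},
                        charter_thresholds)
print(ok, failed)
```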

Internal politics

Internal organizational politics may exert themselves, as the adoption and implementation of formal ITG practice will sometimes shift decision rights and the associated powers that currently exist in the organization. In most organizations, project priority is mostly based on "who speaks the loudest" rather than on the current business, the collected metrics and the immediate need.

Saturday, March 8, 2014

The Principles of Agile Enterprise Architecture Management

Change is happening everywhere, and at an accelerated rate, not only in IT but also in many other functional areas, though the rate is highest in IT. Business users are encouraged to innovate in every possible area, and that brings in more and more transformation projects, an important category of projects in enterprise program and portfolio management. Many a time these transformation projects are time critical; if not implemented on time, they will lose the market advantage. On the same lines, technology adoption is becoming a key aspect of business success, and predicting, tracking and embracing upcoming disruptive technologies has become an important business and strategic risk, as it can have a wide impact across business strategies, capabilities and processes.

The Enterprise Architecture function has an equal responsibility in ensuring these changes are embraced with the least impact. While running the business as-is is an important aspect, enabling transformation of business capabilities and information management capabilities is another key goal for Enterprise Architects. One of the key elements that Enterprise Architects should consider and address is complexity around business and IT architecture management, so that transformation projects get implemented on the desired schedules and thus reap the intended business benefits. While there are other key objectives, like delivering stakeholder value, managing complexity is the objective that comes closest to being Agile.

Agile Enterprise Architecture is all about letting changes happen and thus keeping the architectural principles continuously evolving. This also calls for an appropriate lifecycle that facilitates the continuous evolution, development and adaptation of the current and the target reference architecture. This will keep the maturity levels of various IT management functions changing over time as well. In this blog, let us focus on the key principles that enable Agile Enterprise Architecture Management:


Value Individuals and Interactions over Tools and Processes

It is a well established and understood fact that it's the people who build success in the enterprise; the tools and processes are just enablers. With people being the greatest asset, the organizational culture plays an important role in motivating employees to collaborate, innovate and deliver results more effectively and efficiently. Build the EA team in such a way that it has representation or an interface with top management, business & IT owners, business & IT operations teams and the project teams driving the change within. Choose and deploy the right set of tools, technology and processes to facilitate collaboration with the different business and IT functions.

The EAM team shall aim for sustainable evolution, at a pace driven by business and IT users; help the project teams avoid panic, and discourage culture clashes; and understand that everyone has their own area of expertise and thus can add value to the project or program.


Focus on demands of top Stakeholders and speak their languages

Typically the top stakeholders need continuous input from the EA team on various business and IT functions, to decide on further strategic alignments or improvements, which in turn lead to new transformation projects or a change of course for existing projects. The inputs could be in the form of metrics, visualizations and reports. It is very important that these inputs be relevant and make sense to the target recipients. The following considerations are worth keeping in mind to ensure that the stakeholders realize the maximum value from such inputs:

  • A single number or picture is more helpful than 1000 reports.
  • Avoid waste - share information that is relevant, nothing more and nothing less.
  • Leverage existing processes to generate and deliver these inputs, as against a whole new set of EA-specific processes.

Promote rapid feedback by working jointly on models and architecture blueprints with other people and functions. Remember that the best way of conveying information is a face-to-face conversation, supported by other materials. Shared development of a model at a whiteboard will generate excellent feedback and buy-in. Work as closely as possible with all the stakeholders, including your customers and other partners.


Reflect behavior and adapt to changes

The effect of a change ultimately reflects in the behavior of individuals, tools, and functions. The EAM function shall attempt to understand the likely directions and behavior of such changes using techniques such as scenario analysis and change cases. This helps the EAM function determine how best to embrace the change in terms of timing, approach and methodology. This is where a pattern-based approach to developing the EAM function would facilitate change adoption with much ease and least impact.

EAM should manage and plan for changes and shall never resist a change. It may not always be easy to embrace changes, but a well thought out EAM evolution lifecycle would certainly make it simpler. It is always possible to break one big change into various blocks and take them one at a time, depending on the time, effort and business priorities.


Here are some useful references for further reading on Agility and Enterprise Architecture Management.

1. Towards an Agile Design of the Enterprise Architecture Management Function

2. Principles for the Agile Architect

3. The Principles of Agile Architecture

4. Actionable Enterprise Architecture (EA) for the Agile Enterprise: Getting Back to Basics

Sunday, February 9, 2014

The Principles of Effective Risk Management

Enterprise Risk Management is one of the core domains of Governance. In some business sectors, success depends on intelligent and effective risk management principles, frameworks and practices. Advancements in technology, like big data and analytics, also play a key role in making risk management effective and adding value to the business. Other factors that necessitate a well-architected ERM in an organization include regulatory & compliance needs, security and privacy expectations, disasters and business continuity needs, etc. As risk management practices have evolved further, the adoption of principle-based approaches has been found to be more effective.


Here are some of the common principles to model the Risk Management framework around:

  • Create and protect value - Any framework should be able to add value and also protect the value that the assets of the organization are expected to deliver. This involves identifying the specific business needs, appropriately assessing the risk measure and in turn facilitating the choice of the best risk mitigation or avoidance plan. Risk management must have a demonstrable effect on the achievement of objectives and the improvement of enterprise performance.
  • Integrated approach - Risk management cannot be practiced effectively in silos. Today's organizations face the challenge of many different frameworks for meeting different goals: for instance, ISO 27001 for security, ITIL for IT infrastructure management, COBIT for governance, etc. Integrated risk management promotes a continuous, proactive and systematic process to understand, manage and communicate risk from an organization-wide perspective in a cohesive and consistent manner. To be effective, the Risk Management framework should be capable of being integrated into the existing process framework.
  • Recognize & manage complexity - Organizations are very complex environments in which to deliver concrete solutions, and there are many challenges to overcome when planning and implementing a risk management program. In practice, there is no way of avoiding the inherent complexities within organizations, so approaches must be found that recognize (and manage) this complexity.
  • Flexible and adaptable - There is no "one-size-fits-all" approach to risk management, and organizations should consider their own context when determining an appropriate approach. Organizations today face a considerable change management challenge, so the program must be carefully designed from the outset to ensure that sufficient adoption is gained. The framework shall be tailored and responsive to the organization's external and internal context, including its mandate, priorities, organizational risk culture, risk management capacity, and partner and stakeholder interests.
  • Highly usable - In general, risk management practices should allow for the identification of risk information throughout the organization that can be used to support enterprise-wide decision-making, and should be flexible enough to evolve with changing priorities. Every employee of the organization has a role to play in an effective Risk Management program, which calls for structures and associated processes that are simple enough to understand and to put into practice.
  • Dynamic and responsive to change - The process of managing risk needs to be flexible. The challenging environment we operate in requires organizations to consider the context for managing risk, to keep identifying new risks as they emerge, and to make allowances for risks that no longer exist. Risk Management shall be deployed in a systematic, structured and timely manner to enable cost-effective embedding and the focused generation of consistent, comparable and reliable results.
  • Leverage tools & technology - Effective risk management calls for the ability to consider and make use of large volumes of data, leveraging statistical techniques to predict and prioritise risks. Coming up with the right mitigation or contingency plan also calls for processing large volumes of data. The framework should provide for leveraging the latest technology as it emerges to facilitate such high-volume information handling and statistical analysis.
  • Considerate of human and cultural factors - The success of the risk management program largely depends on employees implementing it as part of their everyday business activities. This calls for the structure and processes to be considerate of the organization's cultural values and not to create conflicts.
  • Communicate extensively - Communication is the key to the success of any project or program. The framework shall provide for seamless communication amongst all stakeholders, so that information is exchanged at the right time without losing its value.
  • Continuous Improvement - The big bang approach is unlikely to yield the expected outcome for obvious reasons. Instead, an evolutionary approach will work better and thus the ERM should be capable of evolving. Deployment should be complemented with mechanisms to assess and continually improve enterprise risk management maturity and be aligned with approaches driving the organization’s overall excellence and maturity agenda. 
  • Governance - Oversight and accountability for the risk management process is critical to ensure that the necessary commitment and resources are secured, the risk assessment occurs at the right level in the organization, the full range of relevant risks is considered, these risks are evaluated through a rigorous and ongoing process, and requisite actions are taken, as appropriate.
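Several of the principles above, such as leveraging tools and supporting enterprise-wide risk-aware decisions, come together in the humble risk register. As a minimal sketch, assuming a common qualitative likelihood-times-impact scoring on a 1-5 scale, risks can be prioritized as follows; the risks and scores are hypothetical.

```python
# Minimal risk register sketch: score each risk as likelihood x impact
# on a 1-5 scale (a common qualitative technique) and sort so that
# mitigation effort is focused on the highest exposure first.
# The risks and scores below are hypothetical.

risks = [
    {"risk": "Data centre outage",        "likelihood": 2, "impact": 5},
    {"risk": "Regulatory non-compliance", "likelihood": 3, "impact": 4},
    {"risk": "Key staff attrition",       "likelihood": 4, "impact": 2},
]

def exposure(risk):
    """Qualitative exposure: likelihood multiplied by impact."""
    return risk["likelihood"] * risk["impact"]

for r in sorted(risks, key=exposure, reverse=True):
    print(r["risk"], exposure(r))
```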

The above is not an exhaustive list of principles that will readily suit every organization. The right set of principles shall be identified based on the priorities of the business. When adopted, these principles help organizations practice improved risk management, giving the enterprise the following benefits:
  • Enhance the coverage of risks in all areas, including mission, strategy, planning, operations and finance.
  • Consider the causes of various risks and the resulting impacts.
  • Develop a culture in which employees manage risks as part of their daily routines.
  • Optimize the risk appetite, so that business functions can take calculated risks.
  • Facilitate enterprise-wide risk-aware decision making.

Saturday, January 25, 2014

Internet of Things: What Strange Things Can Happen

It was about six years back that we started to see WiFi-enabled digital cameras, and we wondered what WiFi had to do with a digital camera. With it, digital cameras were able to upload captured images automatically to cloud-based photo albums. Later came GPS-equipped digital cameras, which attach the location to the captured images. Of course, with smartphones equipped with high-resolution cameras, digital cameras are now on the decline. That is just a well-known example of how a 'thing', or a smart thing, can connect to a network and share useful data for a purpose. So much has evolved since then, and we now see a world of possibilities in having all the 'things' connected.


Researchers see a lot of benefits in making things smart and interconnecting them. Networking technologies are also evolving at a brisk pace, offering various improvements over existing wireless technologies and protocols. We can see this trend advancing further and maturing in about two decades from now. Looking further, in line with my blog on Human Interface Technology, even humans can remain connected, and that could render human disabilities a thing of the past.


If you followed this year’s CES, it is evident that the future is all about connected devices. We could see everyday devices equipped with sensors and connectivity working together, understanding what we’re doing, and operating automatically to make our lives easier. Here are some real-world examples of the Internet of Things:


Consider a smart refrigerator that can read the embedded tags on the grocery items stored in it and then, using a supporting backend platform on the cloud, identify the items and fetch details such as date of manufacture, expiry date and quantity. The fridge may then alert consumers about the state and stock of such items. With the kind of wearable gadgets that we see now, these alerts can be delivered through such devices too. It is left to your imagination to what extent this smart capability can be extended.


Medical and emergency care is another area where smart 'things' play a very useful, life-saving role. For instance, a connected car can call emergency services faster than a mobile phone. Again, with the help of embedded or worn smart gadgets, a hospital can learn the patient's history as the patient arrives and can get ready for the emergency services, saving precious time that can be life saving. Check out this video that IBM has made describing how the Internet of Things is growing fast and could pervade the everyday life of human beings.


Extending this further to the daily routines of a business executive, the possibilities are endless and here are some that are close to reality, if not already real:

  • Once your smartphone hears a hint about a meeting in a conversation, it will look up your calendar in the background and pass on your busy/free information. If the executive uses a smart glass, he would see the schedule as he talks, which facilitates scheduling the meeting.
  • Smart alarms will be smart enough to consider what time you went to sleep and your schedule (both personal and official) for the following day, intelligently decide the wake-up time, and trigger the alarm in the morning.
  • Depending on traffic conditions, your car will intelligently suggest alternate routes to the office or other scheduled meeting venue and, if needed, automatically inform the meeting organizers about a possible delay or seek rescheduling of the meeting.
  • As you drive back home, you remember that you need to pick up some drugs from a drugstore. Your smart car will already know this and will identify a store that stocks the drugs you need and that is on or close to your route. It can even place the order with the store and have your items kept ready, so that you just pick them up en route.
  • Needless to say, your car will be smart enough to perform a health diagnostic on itself and will decide on the best date for its own garage visit so that your schedule is not impacted.
  • These smart things will know about your presence and which device is in touch with you to send out alerts. For example, if you are at home watching TV, you may see your TV showing alerts from your washing machine and similarly, when you are at work, your smartphone would be used to show these notifications.
  • Here are some more ways the 'Internet of Things' can impact your daily life.


Coming back to the household: you are watching your favorite action movie with surround sound, and you did not change your smartphone from silent mode back to a ringing profile. You don't have to worry; your smartphone knows what you are up to and, over a period, would have learnt by itself which calls you would want to answer in this situation, and accordingly rejects the call while answering the caller appropriately. If it is an important call that you wouldn't want to miss, it knows that already and will tone down the TV audio volume to draw your attention to the call; you don't even have to reach for your phone, as your TV will take over the call from your smartphone. To extend this further, depending on the profiles of the other members of the house, which the house already knows through its sensors and networks, your smartphone will decide whether or not to route the call to the TV.


We can now visualize the possibilities, and they are endless. Smart things will have built-in learning capability and will keep learning from their master's behavior to perfect their services. This trend could lead us to a situation where the things, by themselves or under the influence of hackers, attempt to take over human beings as portrayed in some recent science fiction movies. On top of this, hackers will leverage these smart abilities to break into these connected networks and could do whatever they have been doing with connected systems now.


Here is how the hackers can intrude into your digital lifestyle:

  • We have already seen reports of smart refrigerators sending out spam emails.
  • By hacking into your house network, hackers may get to know how many members are at home, or whether anyone is inside at all, information that is useful for planning burglary attempts.
  • Your TV may refuse to play your favorite channel and will rather play content that the hackers prefer you to watch.
  • Your car may drive to a place different from where you wanted to go. On the same lines, hackers can execute traffic diversions and cause traffic jams, as portrayed in the movie Die Hard 4.
  • Your orders for home supplies may be hacked and the deliveries made elsewhere, while you would have paid for them. And of course, your house network may still acknowledge having received the deliveries when it actually has not.
  • The impact of hacking into the emergency service network could be huge and life threatening.
  • Your smartphone can be hacked to refuse critical business calls, causing revenue impact to your organization.


IDC anticipates that more than 200 billion connected devices will be in use by 2021, with more than 30 billion of them autonomous. Cisco’s Internet Business Solutions Group (IBSG) predicts some 25 billion devices will be connected by 2015, and 50 billion by 2020. How will having so many things connected change everything? With all this, the Internet of Things is coming, and it is here to stay. Whether we humans are ready for this evolution remains to be seen.

Friday, January 17, 2014

REST Services - Security Best Practices

As most of us know, REST (Representational State Transfer) is an architectural style that is gaining increasing recognition among architects for the inherent advantages it offers. REST recommends the use of standards such as HTTP, URI, XML and JSON, and formats such as GIF, MPEG, etc. Twitter, iPhone apps, Google Maps, and Amazon Web Services (AWS) make heavy use of REST services. The basic tenet of REST is statelessness, and it is all about utilizing the HTTP methods GET, PUT, POST and DELETE as outlined in the HTTP RFC.


Obviously, architects see some key advantages in REST services, and so REST implementation becomes an important consideration in responsive, service-oriented applications. Let us recap some of the key advantages:

  • Resources are uniquely identified using URIs, which facilitates interconnection of these resources.
  • Resource manipulation is accomplished using the standard HTTP verbs, viz. GET, PUT, POST and DELETE.
  • The data payload is minimal and thus offers capacity and efficiency benefits.
  • Easier implementation offers a shorter learning curve, better maintainability and a time-to-market advantage.
  • Increased support from JavaScript offers client-side computing benefits and thus improves responsiveness.
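The verb-to-CRUD mapping behind these advantages can be sketched with a hypothetical in-memory resource store; the class and its behavior are illustrative assumptions, not any particular framework's API:

```python
# Sketch of REST's uniform interface: each HTTP verb maps to one
# operation on a uniquely identified resource.
class ResourceStore:
    def __init__(self):
        self._items = {}
        self._next_id = 1

    def post(self, data):
        # POST creates a new resource and returns its identifier (URI suffix)
        rid = str(self._next_id)
        self._next_id += 1
        self._items[rid] = data
        return rid

    def get(self, rid):
        # GET reads a resource; None if it does not exist
        return self._items.get(rid)

    def put(self, rid, data):
        # PUT creates or replaces the resource at a known identifier
        self._items[rid] = data

    def delete(self, rid):
        # DELETE removes the resource; idempotent
        self._items.pop(rid, None)

store = ResourceStore()
rid = store.post({"name": "invoice-42"})
print(store.get(rid))   # {'name': 'invoice-42'}
store.delete(rid)
print(store.get(rid))   # None
```

Because every operation goes through the same four verbs, clients need no service-specific protocol knowledge beyond the resource URIs.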

Needless to mention, there are certain disadvantages too with REST services, and here are some:

  • Prone to the same threats and vulnerabilities as HTTP and the Web.
  • Improper use of the HTTP verbs could lead to problems and complicate the design.
  • Relies on very few standards.

Some of the security challenges with REST Service implementations are outlined below:

Chained trust is challenging for web service implementations, and the situation is no different with REST. Unlike with SOAP, standards like WS-Security and SAML cannot be used with REST services. This calls for relying on a combination of security measures specific to each implementation. Here are some such measures, which in combination may help overcome this concern:

  • Use Digital Certificates for authenticating the server and the user. 
  • Pass the user's identity from server to server and perform the necessary validation and authorization at the data source.

Cross-site request forgery (CSRF) attacks attempt to force an authenticated user to execute functionality without their knowledge. Being stateless, REST is inherently vulnerable to CSRF attacks. The workarounds for this security concern are:

  • Use of a custom header - Setting a custom header such as an X-XSRF header is a known solution to this concern. The endpoints receiving the REST service requests reject or drop requests that do not carry the intended custom header. It is to be noted that this is not a foolproof technique, but it offers some protection rather than none.
  • Another approach is to deviate from the basic tenets of REST and maintain state, in which case a token can be generated and maintained to authenticate the requests, so that requests carrying an invalid token, or none at all, can be dropped or rejected.
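A minimal sketch of the two workarounds above; the X-XSRF header name and the HMAC-based token scheme are assumptions for the example, not a standard:

```python
import hashlib
import hmac
import secrets

CUSTOM_HEADER = "X-XSRF"          # illustrative header name
SECRET = secrets.token_bytes(32)  # server-side key, kept out of the client

def issue_token(session_id: str) -> str:
    # HMAC ties the token to the session without storing per-request state
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def is_request_allowed(headers: dict, session_id: str) -> bool:
    token = headers.get(CUSTOM_HEADER)
    if token is None:
        # drop requests that lack the custom header entirely
        return False
    # constant-time comparison guards against timing attacks
    return hmac.compare_digest(token, issue_token(session_id))

tok = issue_token("sess-1")
print(is_request_allowed({"X-XSRF": tok}, "sess-1"))  # True
print(is_request_allowed({}, "sess-1"))               # False
```

A cross-site attacker cannot set custom headers on a forged request, and cannot forge the HMAC without the server secret, which is why the combination helps.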

While the above are just examples of the concerns, REST services, being based on the HTTP specifications, are prone to all the security vulnerabilities of a web application. Thus REST, while it is the easier choice due to the advantages listed above, should be implemented with due consideration to some or all of the following security best practices:
  • All data must be sent over HTTPS; this secures the data in transit.
  • Use of PKI or HTTP Digest Authentication for authentication.
  • Always perform authorization for every request upon receipt. 
  • Scan HTTP headers, query strings and POST data and look for reasons to reject a request.
  • Don't combine multiple resources with a single URI, always uniquely identify each resource, so that the security implementation can be simple and relevant to the specific resource.
  • Always perform validation of the JSON / XML data.
  • Ensure appropriate use of the HTTP commands for managing the resources and enable selective restriction of these commands.
  • Design URIs to be persistent. If a URI needs to change, honor the old URI and issue a redirect to the client.
  • Caching should generally be avoided where possible and sensitive data should never be cached.
  • When developing REST solutions, care needs to be taken not to create URIs that contain sensitive information. 
  • The requester should be authenticated and authorized prior to completing an access control decision. 
  • All access control decisions shall be logged. 
  • Write code defensively, as if every incoming request were hostile.
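Several of these practices can be combined into a simple request-vetting routine; the rejection rules, probe strings and parameter names below are illustrative assumptions, not a complete filter:

```python
import json

ALLOWED_METHODS = {"GET", "PUT", "POST", "DELETE"}   # restrict HTTP verbs
SUSPICIOUS = ("<script", "../", "\x00")              # sample probes to reject

def vet_request(method: str, scheme: str, query: str, body: str) -> bool:
    if scheme != "https":
        # all data must travel over HTTPS
        return False
    if method not in ALLOWED_METHODS:
        # enable selective restriction of HTTP commands
        return False
    for probe in SUSPICIOUS:
        # scan query strings and POST data, looking for reasons to reject
        if probe in query or probe in body:
            return False
    try:
        # always validate the JSON payload
        json.loads(body)
    except ValueError:
        return False
    return True

print(vet_request("POST", "https", "id=1", '{"name": "ok"}'))    # True
print(vet_request("POST", "http", "id=1", '{"name": "ok"}'))     # False
print(vet_request("POST", "https", "id=1<script>", '{"a": 1}'))  # False
```

In a real service this check would run before authorization, so that obviously malformed requests never reach the business logic.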


Friday, January 3, 2014

Human Technology Interfaces - What The Future Has In Store

All of us would have read something or other on technology advancements that work with the human body. For example, Health IT companies are experimenting with embedding memory chips under the skin to store an individual's health records, so that when you walk into a clinic, the clinic gets to know your health history and can suggest the further course of action, and all this can happen with a non-human front-office assistant. Similarly, with advancements in brain interfaces and along the lines of the movie "Minority Report", police and investigation authorities may move into crime prevention mode, i.e. they will get to know the moment you think of committing a crime, and with technologies like virtual presence and surrogates, this might be accomplished without any human casualties.

There are more such advancements, and in this blog my attempt is to present a few scenarios that could be a possibility in the near future and the effects they can have on various attributes of mankind.

Glass: With further advancement, Google Glass-like gadgets could be miniaturized and worn like contact lenses. These lenses would be able to interface with the things around you. For instance, the refrigerator will greet you with its current temperature, and you will know what is inside various containers just by looking at them (without opening), along with details like quantity, how many days an item has been stored, etc. With added gamification, one will enjoy performing various tasks at the kitchen table: while assisting you with tasks like chopping vegetables, these things will also keep a score of how you perform, so that you enjoy doing them. Such gadgets, coupled with access to public and private data stores, help you in decision making, which can enhance one's Personal Intelligence (PI). Check out this video to have a glimpse of what I have tried to narrate here.

Brain Interface: Gadgets like Brain Link are already in the market, which, coupled with related applications on smartphones, give beneficial gaming experiences such as attention training, meditation, neuro-social gaming, and research and knowledge about the brain. Most of us would have watched the movies 'Surrogates', wherein humans stay indoors while their surrogates go out to work, and 'Minority Report', where the police and justice department get alerts the moment someone thinks of committing a crime. Quite a few science fiction imaginations of the past have become reality now. Recent research accomplishments evidence that even the fiction exhibited in the above movies might become a reality some day that is not very far away. For instance, researchers at Harvard have demonstrated a non-invasive brain-to-brain interface wherein humans could control animals with their thoughts alone.

Given that continued advancements in brain interfaces will further these accomplishments, coupled with various other inventions, the next generation of mankind may experience the following:


  • Personal Intelligence can be augmented by wearing or embedding devices and / or gadgets.
  • Though humans can have private thoughts, these will be subject to review or audit by government agencies and no wonder securing your thoughts would become absolutely essential.
  • Shopping will be virtual and all products can be virtually felt / experienced sitting at home and then can be ordered.
  • All 'things' would have interfaces to interact with humans.
  • Blink or double blinks can be programmed to perform certain actions like taking a snapshot of what you have been seeing at that moment, etc.
  • Artificial or Virtual dreams will become reality and one can have choice of dreams and choice of character. Extending this, one would be able to watch a favorite movie as they sleep and cast themselves as a character in the movie.
  • With Body Area Networking and embedded nano chips across various critical body parts, self diagnosis with alerts might be a possibility.
  • Human disabilities can be worked around using robotic body parts and brain interface technology.
  • The hacking community would sharpen their skills and would explore opportunities of hacking human thoughts and human memory, which could be the biggest security and privacy threat to combat for the security experts.


Here are some more videos demonstrating the innovations that are taking place around human technology interfaces:

  • Ford takes SYNC to the next level through the use of configurable controls and the use of an electronic personal assistant, or "avatar," named Eva
  • Someday we'll be living on and under the oceans. This idea isn't farfetched, and if it comes true, then here's the answer to a new type of underwater transportation system.
  • Using a brain-computer interface technology pioneered by University of Minnesota biomedical engineering professor Bin He, several young people have learned to use their thoughts to steer a flying robot around a gym, making it turn, rise, dip, and even sail through a ring.
  • Cathy Hutchinson has been unable to move her own arms or legs for 15 years. But using the most advanced brain-machine interface ever developed, she can steer a robotic arm towards a bottle, pick it up, and drink her morning coffee.
  • At Barcelona University, scientists are working on a European Research Project to link a human brain to a robot using skin electrodes and video goggles so that the user feels they are actually in the android body wherever it is in the world.

Saturday, December 14, 2013

Google Chromecast - My Initial Experience

Google's Chromecast is a tiny USB-stick-like gadget that plugs into the HDMI port of your HDTV and facilitates casting media onto your HDTV. With built-in wi-fi modules, most of the HDTVs in the market today allow browsing and streaming media directly from the internet. With Chromecast, you can stream movies, videos and music from Netflix, Hulu, HBO and other media sites on the internet. You can use your Android or iOS devices, or even your Windows PC or laptop, to cast and control the streams on your TV. This blog is not about what the device is, but shares my first experience with this cute little gadget. Check out more about the device here.

I ordered this device on ebay.in, and it was delivered to my home the very next day. The pack contained the Chromecast device, an HDMI extender cable, a USB power cable for powering the device and a power supply. And of course there was a small, micro-printed product information leaflet, which just contained license information, warnings, warranty and the contents of the pack. For everything else, it referred to the Google Chromecast site.

The three-step setup instructions printed on the inside of the flip top of the packaging read: 1. plug it in; 2. switch input; and 3. set it up. That was pretty simple, and I was curious how simple it would actually turn out to be.

I just plugged the device into the HDMI port of the TV and then used the provided USB power cable to power it up. In case your TV does not have USB ports, you can use the provided power supply and plug it into the mains power source. And yes, the device does need power to work; unlike USB ports, HDMI ports (per the current specification) do not offer power to connected devices.

Upon connecting the power source, the LED on the device emitted a red light for a few seconds and then turned white. In my case the second step was not necessary, as my TV smartly detected a new source on one of the HDMI ports and switched to it to receive video. For TVs that don't automatically switch, you need to use your TV remote to select the relevant HDMI port as the input source.

The moment my TV switched to the HDMI port on which the Chromecast was plugged in, I could see a PC-desktop-like screen on the TV with nice random background pictures, prompting me to visit the Chromecast site to set up the device.

I had, however, installed the Chromecast app on my HTC One M7 the day I ordered the device. Upon launch, the app scans the connected wi-fi network and looks for the presence of a Chromecast device. It did find the device, which had the default name 'chromecast 7151' (I was offered the chance to choose a name of my own, but left the default for now), and prompted me to set it up. At this stage the Chromecast was not yet connected to my wi-fi network, and my TV displayed my wi-fi network name as well.

As I moved on to the next step, my TV displayed a code 'C3W8', and the app prompted me to verify whether it was the same code. Upon verification, I was prompted to enter my wi-fi security passcode. At that stage, the app displayed the MAC address of the Chromecast, which I needed because I have enabled MAC filtering on my wi-fi router; unless I add the Chromecast's MAC address to the whitelist on my router, it won't be able to connect to the internet. I added the MAC address to the whitelist and entered the passcode, but the setup did not succeed, and I was prompted to check a couple of configurations on my router: 1. to enable Access Point isolation and 2. to enable uPnP or multicast.

I could not figure out the first configuration parameter on my D-Link 605L wi-fi router. I could however find the uPnP setting, which I enabled before rebooting the router. But the Chromecast still could not connect to my wi-fi network. A quick search on Google led me to a useful page listing the known issues and workarounds for different routers. I found my router listed therein, with a suggestion to enable another configuration parameter, 'wireless enhance mode'. Upon enabling this parameter in the router, the Chromecast was able to connect to the internet, and with that the setup was complete. The device immediately started downloading updates, which took a couple of minutes, and then it was ready for casting.

The 'discover applications' option in the Android app listed a few applications; the quite familiar ones were YouTube, Google Play Movies and Play Music. There were a few other apps for streaming the photos, videos and music stored on the device. Supported applications display a cast icon to start casting the media onto the TV. In the case of internet media like YouTube, the device sources the media directly from the internet through wi-fi, while you control it using your device. Here is a screenshot of the first YouTube video I cast using my HTC One Android phone. More apps should start supporting Chromecast in the future.

In the case of stored media, the streaming happens through the local wi-fi network, and for certain high-resolution videos there were pauses in between. This probably depends on the specific app used for such casting.

Next I tried to set it up on my Windows PC, but no: my PC is connected through a physical LAN, and the Chromecast app said that I need wi-fi enabled on the PC. I then turned to my Windows 8 laptop. It was a breeze, with no hassles in setting it up. The Chromecast app is just for setting up the device, and since mine was already set up, I just needed the extension added to the Chrome browser to facilitate casting a specific browser tab. The extension adds a little icon to the address bar which, on click, allows casting of the browser tab. At this time I could see the YouTube and Netflix Windows apps supporting Chromecast, and a lot more Windows 8 apps may start supporting it soon. Here is how it looked when I cast a YouTube video from a Chrome browser tab.

If you were to connect the Chromecast to a different network, you have to do a factory reset, which can be done using the Chromecast app on the device or on the PC, and then set it up with the new network. Another great advantage is that the software gets updated automatically when Google releases updates, and more apps are coming up offering support for Chromecast.

Saturday, November 9, 2013

Webservice Security Standards

SOA adoption is on the rise, and web services are predominantly used for its implementation. Web service messages are sent across the network in an XML format defined by the W3C SOAP specification. Web services have come a long way and have sufficiently matured to offer the required tenets, especially in the security domain. In this blog let us have a quick look at the available standards with respect to the security dimensions and at how the related security requirements are addressed.

Secure Messaging


  • WS-Security - This specification was originally developed by IBM, Microsoft and VeriSign, and OASIS (Organization for the Advancement of Structured Information Standards) continued the work on the standard. It addresses the integrity and confidentiality requirements of web service messages, describing the signing and encrypting of SOAP messages as well as the attachment of security tokens. Various signature formats and encryption algorithms are supported. The security tokens supported include X.509 certificates, Kerberos tickets, user ID/password credentials, SAML assertions and custom tokens. Due to the increased size of the SOAP messages and the cryptographic requirements, this standard demands significantly more compute resources and network bandwidth.
  • SSL/TLS - SSL was developed by Netscape Communications Corporation in 1994 to secure transactions over the World Wide Web. Soon after, the Internet Engineering Task Force (IETF) began work to develop a standard protocol providing the same functionality. They used SSL 3.0 as the basis for that work, which became the TLS protocol. In application design, TLS is usually implemented on top of a transport-layer protocol, encapsulating application-specific protocols such as HTTP, FTP, SMTP, NNTP and XMPP. Historically it has been used primarily with reliable transport protocols such as the Transmission Control Protocol (TCP). This standard helps address the strong authentication, message privacy and integrity requirements.
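On the client side, a TLS-protected connection can be prepared with Python's standard ssl module; this sketch only builds the context and makes no network connection, and the minimum-version choice is an illustrative hardening assumption:

```python
import ssl

# create_default_context() loads the system CA bundle and enables
# certificate verification and hostname checking by default.
context = ssl.create_default_context()

# Refuse legacy protocol versions (an example hardening step).
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate and hostname checks provide the strong authentication
# that the standard is meant to deliver.
print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```

A real client would then pass this context to, say, `http.client.HTTPSConnection(host, context=context)` so every exchange is encrypted and the server is authenticated.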

Resource Protection


  • XACML - eXtensible Access Control Markup Language defines a declarative access control policy language implemented in XML and a processing model describing how to evaluate access requests. Version 3.0 of this standard was published by OASIS in January 2013. The new features of the latest version of this standard include: Multiple Decision Profile, Delegation, Obligation Expressions, Advice Expressions and Policy Combination Algorithms. While there are many ways the base language can be extended, many environments will not need to do so. The standard language already supports a wide variety of data types, functions, and rules about combining the results of different policies. In addition to this, there are already standards groups working on extensions and profiles that will hook XACML into other standards like SAML and LDAP, which will increase the number of ways that XACML can be used.
  • XrML - Developed by Content Guard, a subsidiary of Xerox, and supported by Microsoft, eXtensible Rights Markup Language would provide a universal method for specifying rights and issuing conditions associated with the use and protection of content in a digital rights management system. XrML licenses can be attached to WS-Security in the form of tokens. XACML and XrML both deal with authorization. They share requirements from many of the same application domains. Both share the same concepts but use different terms. Both are based on XML Schema. Microsoft's Active Directory Rights Management Services (AD RMS) uses the eXtensible rights Markup Language (XrML) in licenses, certificates, and templates to identify digital content and the rights and conditions that govern use of that content.
  • RBAC, ABAC - Similar to XrML, RBAC and ABAC are established approaches to define and implement Role Based Access Control and Attribute Based Access Controls and can be attached to WS-Security as tokens. The use of RBAC or ABAC to manage user privileges (computer permissions) within a single system or application is widely accepted as a best practice.
  • EPAL - The Enterprise Privacy Authorization Language (EPAL) is an interoperability language for exchanging privacy policy in a structured format between applications and can be leveraged for addressing the privacy concerns with the SOAP messages. An EPAL policy categorizes the data an enterprise holds and the rules which govern the usage of data of each category. Since EPAL is designed to capture privacy policies in many areas of responsibility, the language cannot predefine the elements of a privacy policy. Therefore, EPAL provides a mechanism for defining the elements which are used to build the policy.
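In the spirit of the RBAC/ABAC approaches above, an attribute-based permit decision can be sketched as a plain function; the subject attributes and the single rule are invented for illustration and stand in for a real policy engine:

```python
def abac_permit(subject: dict, action: str, resource: dict) -> bool:
    # Illustrative rule: managers may read any report;
    # the owner of a report may read their own.
    if resource.get("type") == "report" and action == "read":
        if subject.get("role") == "manager":       # role-based branch (RBAC)
            return True
        if subject.get("id") == resource.get("owner"):  # attribute match (ABAC)
            return True
    return False                                   # default deny

report = {"type": "report", "owner": "u1"}
print(abac_permit({"id": "u2", "role": "manager"}, "read", report))  # True
print(abac_permit({"id": "u1", "role": "staff"}, "read", report))    # True
print(abac_permit({"id": "u3", "role": "staff"}, "read", report))    # False
```

A standards-based deployment would express the same rule in XACML policy XML and let a policy decision point evaluate it, but the deny-by-default shape of the decision is the same.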

Negotiation of Contracts


  • ebXML - e-business XML is a modular suite of standards advanced by OASIS and UN/CEFACT and approved as ISO 15000. While the ebXML standards seek to provide formal XML-enabled mechanisms that can be implemented directly, the ebXML architecture is focused on concepts and methodologies that can be more broadly applied to allow practitioners to better implement e-business solutions. ebXML provides companies with a standard method to exchange business messages, conduct trading relationships, communicate data in common terms and define and register business processes. A CPA (Collaboration Protocol Agreement) document is the intersection of two CPP documents, and describes the formal relationship between two parties.
  • SWSA - The SWSA (Semantic Web Services Architecture) interoperability architecture covers the support functions to be accomplished by Semantic Web agents (service providers, requestors, and middle agents). While not all operational environments will find it necessary to support all functions to the same degree, the distributed functions addressed by this architecture include: Dynamic Service Discovery, Service Engagement (Negotiating & Contracting), Service Process Enactment & Management, Semantic Web Community Support Services, Semantic Web Service Lifecycle & Resource Management Services and Cross-Cutting Issues.


Trust Management


  • WS-Trust - The goal of WS-Trust is to enable applications to construct trusted SOAP message exchanges. This trust is represented through the exchange and brokering of security tokens. This specification provides a protocol agnostic way to issue, renew, and validate these security tokens. The Web service security model defined in WS-Trust is based on a process in which a Web service can require that an incoming message prove a set of claims (e.g., name, key, permission, capability, etc.). If a message arrives without having the required proof of claims, the service SHOULD ignore or reject the message. A service can indicate its required claims and related information in its policy as described by WS-Policy and WS-PolicyAttachment specifications.
  • XKMS - The XML Key Management Specification is a protocol developed by the W3C that describes the registration and distribution of public keys. Services can contact an XKMS-compliant server to obtain up-to-date key information for encryption and authentication. XKMS thus allows easy management of the security infrastructure, while the Security Assertion Markup Language (SAML, next) makes trust portable: SAML provides a mechanism for transferring assertions about the authentication of entities between cooperating parties without forcing them to give up ownership of the information.
  • SAML - Security Assertion Markup Language is a product of the OASIS Security Services Technical Committee intended for exchanging authentication and authorization data between parties, in particular between an identity provider and a service provider. SAML allows business entities to make assertions regarding the identity, attributes, and entitlements of a subject (often a human user) to other entities, such as a partner company or another enterprise application. SAML specifies three components: assertions, protocols, and bindings. There are three kinds of assertions: an authentication assertion states that the user's identity has been verified; an attribute assertion carries specific information about the user; and an authorization assertion identifies what the user is authorized to do. The protocols define how SAML requests and receives assertions, and the bindings define how SAML message exchanges are mapped onto Simple Object Access Protocol (SOAP) exchanges.
  • WS-Federation - WS-Federation extends WS-Security, WS-Trust, and WS-SecurityPolicy by describing how the claim-transformation model inherent in security token exchanges can enable richer trust relationships and advanced federation of services. A fundamental goal of WS-Federation is to simplify the development of federated services through cross-realm communication and management of federation services, re-using the WS-Trust Security Token Service model and protocol. A variety of federation services (e.g., authentication, authorization, attribute, and pseudonym services) can be developed as variations of the base Security Token Service.
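The WS-Trust claims model described above can be sketched as follows: a service publishes the claims it requires, a Security Token Service issues a token carrying claims, and the service rejects messages whose tokens do not prove every required claim. The dict-based token below is a deliberate placeholder for real signed security tokens (such as SAML assertions), and the function names are illustrative, not part of any WS-Trust API.

```python
# Minimal sketch of WS-Trust-style claims checking. Real tokens are
# signed XML obtained via a RequestSecurityToken exchange with an
# STS; here a token is just a dict of claims for illustration.

REQUIRED_CLAIMS = {"name", "role"}  # claims the service's policy demands

def issue_token(claims: dict) -> dict:
    """Stand-in for an STS issuing a security token."""
    return {"claims": dict(claims)}

def accept_message(token: dict) -> bool:
    """Per WS-Trust, a message lacking proof of the required claims
    SHOULD be ignored or rejected by the service."""
    return REQUIRED_CLAIMS <= set(token.get("claims", {}))

good = issue_token({"name": "alice", "role": "purchaser"})
bad = issue_token({"name": "mallory"})  # missing the "role" claim
print(accept_message(good))  # True
print(accept_message(bad))   # False
```

In a real deployment the required-claims set would itself be advertised via WS-Policy/WS-SecurityPolicy rather than hard-coded.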

Security Properties

  • WS-Policy, WS-SecurityPolicy - WS-Policy is a set of specifications that describe the capabilities and constraints of security (and other) policies on intermediaries and endpoints, and how to associate policies with services and endpoints. Web Services Policy is a machine-readable language for representing these Web service capabilities and requirements as policies, making it possible for providers to express them in a form that tools can process. A policy-aware client uses a policy to determine whether one of its policy alternatives (i.e., the conditions for an interaction) can be met in order to interact with the associated Web service. A client may choose any of the alternatives on offer, but must choose exactly one of them for a successful interaction; it may choose a different alternative for a subsequent interaction.
  • WS-ReliableMessaging, WS-Reliability - WS-ReliableMessaging was originally written by BEA Systems, Microsoft, IBM, and TIBCO and later submitted to the OASIS Web Services Reliable Exchange (WS-RX) Technical Committee for adoption and approval. Prior to WS-ReliableMessaging, OASIS had produced a competing standard, WS-Reliability, supported by a coalition of other vendors. The protocol allows endpoints to meet the guarantees of four delivery assurances: At-Most-Once, At-Least-Once, Exactly-Once, and In-Order. Persistence considerations related to an endpoint's ability to satisfy the delivery assurances are the responsibility of the implementation and do not affect the wire protocol.
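The delivery assurances named above can be illustrated with a sketch of a receiving endpoint that enforces Exactly-Once, In-Order delivery: duplicate messages are dropped by sequence number, and out-of-order messages are buffered until the gap is filled. This shows only the assurance semantics, not the WS-ReliableMessaging wire protocol itself, which uses SOAP Sequence headers and acknowledgement ranges; the class and method names are illustrative.

```python
# Sketch of Exactly-Once, In-Order delivery at a receiving endpoint.
# Duplicates are discarded by sequence number; out-of-order messages
# are buffered until every earlier message has arrived.

class InOrderReceiver:
    def __init__(self):
        self.next_expected = 1
        self.buffer = {}      # out-of-order messages awaiting delivery
        self.delivered = []   # messages handed to the application

    def receive(self, seq: int, payload: str) -> None:
        if seq < self.next_expected or seq in self.buffer:
            return  # already seen: drop the duplicate (at-most-once)
        self.buffer[seq] = payload
        # Deliver every contiguous message now available (in-order).
        while self.next_expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.next_expected))
            self.next_expected += 1

r = InOrderReceiver()
r.receive(2, "b")   # buffered: message 1 not yet seen
r.receive(1, "a")   # delivers "a", then the buffered "b"
r.receive(1, "a")   # duplicate, silently dropped
r.receive(3, "c")
print(r.delivered)  # ['a', 'b', 'c']
```

At-Least-Once delivery is the sender's side of the same bargain: it retransmits until a message's sequence number is acknowledged, and the receiver-side deduplication above is what turns retransmission into Exactly-Once.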