Sunday, March 31, 2013

Software Complexity - an IT Risk perspective

IT risk management predominantly focuses on IT security, though there are other categories of IT risk that are far less well known. Software complexity is one such important but often overlooked IT risk. Software complexity is one of many non-functional quality attributes, and in most cases it is ignored completely, allowing the associated risks to materialize and cause financial and non-financial impacts on the business entity. In this blog, let us try to understand software complexity from an IT risk perspective, and shed some light on whose baby it is and how to manage or control it.


What is Software Complexity?


Let us first understand what software complexity is. The simple definition of complexity is how hard a piece of software is to understand, use and/or verify. Complexity is relative to the observer: what appears complex to one person might be well understood by another. For example, a software engineer familiar with control systems theory would see good structure and organization in a well-designed attitude control system, but another engineer, unfamiliar with the theory, would have much more difficulty understanding the design.


Basili defines complexity as a measure of the resources expended by a system while interacting with a piece of software to perform a given task. For end users, this translates to usability: how much time a user needs to spend to accomplish a task using the software, and how much training is required to become familiar with it. For developers, it translates to maintainability: how easy it is for new developers to understand the code and work on it, whether to improve it further or to address issues. For the IT infrastructure team, it is the level of effort needed to deploy and support the software, which again falls under maintainability.


Why is Complexity Evil?


Complexity, as the above definition goes, requires more resources than normal to be expended and is thus counterproductive. We have numerous best practices, standards and frameworks that advocate eliminating complexity, but somehow it creeps in and poses a challenge in most cases. The consequence of software complexity, as put above, is clearly risk, and it could even be a business risk when we look at it from the end user's perspective. The following is a sample list of risks that could arise due to software complexity:


  • Complex software is difficult to verify and validate, which could lead to new defects surfacing even after months of production use. Depending on the severity of the defect, the impact could range from low to very high.
  • Software complexity could render business processes inefficient, as end users may have to spend more time using the software to execute certain processes, which could mean losing competitiveness and thereby market share.
  • Failure of the development teams to consider downstream complexity could result in a state of ‘project successful, but mission failed’: even if the project teams have successfully completed the project, the resulting software product might be complex enough to render it useless, turning the entire investment in the project into a loss.
  • Post-production support will always be a challenge, as it is difficult to retain trained and capable people to maintain complex software; this is one of the most common risks that enterprises continue to battle.
  • Complexity in deploying and distributing the software is another area of concern for the infrastructure team. Sometimes, complex software demands so many computing resources that the costs of the hardware and other infrastructure might outweigh the perceived benefits of using the software.


The above list is only illustrative; the full list could be even bigger, and I would invite you to come up with more such consequences in the form of feedback.


Measuring Software Complexity


To manage or control complexity, we first need to be able to measure it. While a detailed discussion of software complexity measures is out of the scope of this blog (I might write another blog on measuring software complexity), at a high level the following metrics can be used, each listed with what it primarily measures; a short illustrative sketch of computing the first of them follows the list.



  • Cyclomatic complexity (McCabe): soundness and confidence; measures the number of linearly independent paths through a program module; a strong indicator of testing effort.
  • Halstead complexity measures: algorithmic complexity, measured by counting operators and operands; a measure of maintainability.
  • Henry and Kafura metrics: coupling between modules (parameters, global variables, calls).
  • Bowles metrics: module and system complexity; coupling via parameters and global variables.
  • Troy and Zweben metrics: modularity or coupling; complexity of structure (maximum depth of the structure chart); calls-to and called-by.
  • Ligier metrics: modularity of the structure chart.
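
To make the first metric in the list concrete, here is a minimal sketch of the usual McCabe approximation (decision points plus one) using Python's ast module. The sample triage function and all names in it are hypothetical, chosen only for illustration; for real measurements you would use an established tool such as radon or lizard.

```python
# Minimal sketch of McCabe cyclomatic complexity for Python functions:
# complexity = 1 + number of decision points. This is an approximation;
# established tools (radon, lizard) handle many more language constructs.
import ast

# Node types treated as decision points (one extra independent path each).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.ExceptHandler, ast.comprehension)

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    """Approximate McCabe complexity of a single function."""
    complexity = 1  # a straight-line function has exactly one path
    for node in ast.walk(func):
        if isinstance(node, DECISION_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # each additional and/or operand adds a branch
            complexity += len(node.values) - 1
    return complexity

# Hypothetical sample input, purely for illustration.
source = """
def triage(severity, in_production):
    if severity == 'high' and in_production:
        return 'page the on-call engineer'
    elif severity == 'high':
        return 'fix before next release'
    return 'log it in the backlog'
"""

tree = ast.parse(source)
for func in ast.walk(tree):
    if isinstance(func, ast.FunctionDef):
        print(func.name, cyclomatic_complexity(func))  # prints: triage 4
```

McCabe's widely quoted rule of thumb is that functions scoring above 10 become hard to test exhaustively, which ties the number back to the testing effort mentioned in the list above.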



Controlling Software Complexity

As we saw in the definition section, complexity is context sensitive: the same software may be perceived at different complexity levels by different classes of users. While this is a challenge for developers, they sometimes use it as a convenient shelter to avoid addressing complexity. Developers depend primarily on the business systems analysts, who in most cases represent the end users, and when the analysts fail to see the complexity that the end users would, the development teams are misled. In one of the projects we worked on, the business systems analyst, who was also tasked with triaging the defects logged by end users, simply turned such defects down, suggesting that the users learn to use the system. This definitely goes against controlling complexity and thereby managing the IT risk.

Controlling software complexity calls for adopting certain practices in the areas of architecture, design, build, testing and project management, some of which are listed below (a short refactoring sketch in code follows the list):

  • Set the expectations: Overly stringent requirements and simplistic hardware interfaces can complicate software, and a lack of consideration for testability can complicate verification efforts. Cultivating awareness of downstream complexity, and of how detrimental complexity can be if ignored, helps in getting the much-needed team collaboration and commitment.
  • Requirement Reviews: Ambiguous software requirements have been a cause of complexity, as developers often tend to assume things. Similarly, certain requirements may be unnecessary or overly stringent, which can be spotted in the review process, leading to a reduction in complexity.
  • Domain Skills: Having the relevant product or domain skills goes a long way in reducing downstream complexity, as the development team will be familiar with how the industry handles such requirements.
  • Architecture Reviews: A good architectural design reduces downstream complexity, and design and architecture reviews early on help address this need. It can even make sense to have an Architecture Review Board comprising representatives who can apply their expertise in specific areas and spot problem areas.
  • COTS: Third-party libraries, usually termed COTS (Commercial Off-The-Shelf) components, often come with unneeded features that add to the complexity of the software. It is important to make a careful and thoughtful decision when adopting COTS.
  • Software Testability: Design the software with testability in mind, so that even complex behavior can be verified by the QA teams.
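
To connect these practices to code, here is a hypothetical before-and-after sketch (all names, such as handle_request and HANDLERS, are invented for the example) showing one common build-level tactic: replacing a branch-heavy handler with a dispatch table, so the McCabe number stays flat as new cases are added and each handler can be tested in isolation.

```python
# Hypothetical before/after refactoring sketch; the request kinds and
# handler names are invented purely for illustration.

# Before: every new request kind adds another branch (McCabe +1).
def handle_request_v1(kind, payload=None):
    if kind == "create":
        return {"status": "created", "data": payload}
    elif kind == "update":
        return {"status": "updated", "data": payload}
    elif kind == "delete":
        return {"status": "deleted"}
    else:
        raise ValueError(f"unknown request kind: {kind}")

# After: one lookup; complexity stays flat as kinds are added, and each
# handler is a small, independently testable function.
HANDLERS = {
    "create": lambda payload: {"status": "created", "data": payload},
    "update": lambda payload: {"status": "updated", "data": payload},
    "delete": lambda payload: {"status": "deleted"},
}

def handle_request_v2(kind, payload=None):
    try:
        return HANDLERS[kind](payload)
    except KeyError:
        raise ValueError(f"unknown request kind: {kind}") from None

print(handle_request_v2("create", {"id": 1}))
# {'status': 'created', 'data': {'id': 1}}
```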

Conclusion:


Much of the planning for mitigating software complexity risk revolves around the software acquisition or software development process. This calls for close coordination and collaboration between the IT risk managers and the development teams. Though risk management is well understood as an important function of project managers, how well the risk of complexity is managed is still questionable. Project managers are primarily concerned with risks that might impact project success, and fail to consider those, like complexity, that emerge downstream, after the project. It has also been observed that reduction of complexity is an important characteristic of disruptive innovation: innovations emerge as disruptive when they address the complexity in existing products or systems, and are thus able to capture market segments one after another on their course to the top of the cliff.

Saturday, March 23, 2013

Surviving Disruptive Innovations


I recently made a presentation on Disruptive Technologies at the Chennai Chapter of ISACA. I chose the topic to present a picture of the pace at which disruption is happening in the IT world and of the upcoming disruptions to watch out for. As I was preparing the agenda and content for the presentation, I was curious to find out how successful enterprises are managing, or rather surviving, disruptions, and in the process I stumbled upon some of the research work done by Clayton Christensen.


It was interesting to observe a few things from his theory, which are the following:


Good management principles are not of great help in managing or surviving disruptive innovations. Christensen cites the example of how Toyota came up and disrupted General Motors. He sees a pattern in how disruptions happen in the form of an S-curve, where the top of the curve is a cliff. Leaders and leadership teams follow the best management principles to climb up the S-curve, and when they reach the top they just fall off the cliff.


An extendable core is the key enabler of innovations becoming disruptive. A potentially disruptive innovation appears insignificant in terms of the competitive capabilities of the incumbent's existing products, tempting the incumbent to ignore it. But having an extendable core within it, the new entrant quietly enhances its capabilities, slowly gets into the incumbent's mainstream market, and then disrupts the whole market, driving the big and well-managed incumbents out of it.


Disruption emerges from where it is least expected. For example, we now find it very comfortable to use a smartphone for various jobs that were otherwise performed by special-purpose devices, including GPS devices, digital cameras and even PCs. GPS device manufacturers may still believe that the GPS feature of smartphones is not a threat to them, since special-purpose GPS devices have certain unique advantages that smartphones don't. But be reminded that smartphones have the extendable core and can easily close this capability gap; soon GPS devices will be a thing of the past, and we are already seeing the signs of it.


While there are many other interesting observations to note, I will leave those for you to find out. I was then curious to look into cases of disruption that happened in the past. The following three cases were of interest to me:


Kodak: Kodak ruled the photography market for a whole century. Its management had all the best qualities and was praised in all respects. Kodak has many innovations to its credit, and many firsts as well. Yet with such a record, it recently went into bankruptcy and sold its patent portfolio, which included close to 1,000 patents, to salvage some value. It is natural for us to think that the emergence of digital cameras disrupted Kodak in a big way. But as many would know, Kodak knew the digital era was emerging, and it was the first to introduce a digital camera, back in the 1970s. So what went wrong, and how did it fail to sustain that innovation and stay alive in the market? Kodak kept believing, until the early 2000s, that photographic film wouldn't die so soon. The other interesting observation from Kodak's failure is that with a heavyweight team of experts, sustaining innovation is really expensive, and the outside view is most likely ignored.


That's where management tends to give up on some innovations, as the time and investment may not seem worth it while they are doing decent business with their current line of products. The situation is different for new entrants: startups usually break the rules of convention and are in a position to pursue such innovations a lot more cheaply and in a progressive manner. Startups usually start by focusing on a market that is ignored, or to which the incumbents pay little attention, thereby not drawing the incumbents' attention until a point when it is difficult for them to respond.


NOKIA: Nokia came in big in the cellular phone market, but failed to get its innovation strategy right with smartphones. In Nokia's case too, its research team came up with a prototype smartphone with internet access and a touch interface way back in 2003, but the management, again going by good management principles, turned down the proposal to pursue the plan further, citing the risk of the product not being successful and the very high cost of its development. Four years later, in 2007, Apple launched its iPhone.


NetFlix: Netflix's case is a little different. Netflix was very successful in its DVD rental business and did in fact see the emergence of disruptive innovation in the form of streaming video. It even responded by pursuing research in that direction and developed a video streaming service. What went wrong, according to analysts, is that it got its business model and pricing wrong: it combined the traditional service and the digital streaming service as a bundle and increased the price. It would arguably have been more appropriate to offer digital streaming under a different brand or as a separate service, as surveys indicated that DVD rentals still accounted for 70% of total video sales in the US.


Now, given that good management principles alone don't help in sustaining or surviving disruptive innovations, what should organizations do to stay alive in today's world, where IT has enabled disruptive innovations to emerge at a much faster pace, leaving very little time for incumbents to respond? We also keep hearing that "breaking the rules" is the way to go to foster innovation. While disruption is always seen as a risk to be managed, how well enterprises come up with the right risk mitigation and contingency plans to handle the risk of disruption is still a mystery.


You may check out my presentation on the subject at SlideShare, and feel free to share your views and thoughts on this topic. You can google to find some of the great articles and papers on the theory of disruptive innovation by Clayton Christensen. You will also find some good video lectures of his on YouTube.