Saturday, December 14, 2013

Google Chromecast - My Initial Experience

Google's Chromecast is a tiny, USB-stick-like gadget that plugs into the HDMI port of your HDTV and facilitates casting media on to the TV. With built-in wi-fi modules, most HDTVs in the market today allow browsing and streaming media directly from the internet. With Chromecast, you can stream movies, videos and music from Netflix, Hulu, HBO and other media sites on the internet, using your Android or iOS devices, or even a Windows PC or laptop, to cast and control the streams on your TV. This blog is not about what the device is, but about my first experience with this cute little gadget. Check out more about the device here.

I ordered the device and it was delivered at my home the very next day. The pack contained the Chromecast device, an HDMI extender cable, a USB power cable for powering the device and a power supply. And of course there was a small, micro-printed product information leaflet, which just contained license information, warnings, warranty and the contents of the pack. For everything else, it referred to the Google Chromecast site.

The three-step setup instructions printed on the inside of the flip top of the packing read: 1. plug it in; 2. switch input; and 3. set it up. That sounded pretty simple, and I was curious how simple it would actually turn out to be.

I plugged the device into the HDMI port of the TV and then used the provided USB power cable to power it up. If your TV does not have a USB port, you can use the provided power supply and plug it into a mains power source. And yes, the device does need power to work; unlike USB ports, HDMI ports (per the current specification) do not supply enough power to run connected devices.

Upon connecting the power source, the LED on the device glowed red for a few seconds and then turned white. In my case the second step was not necessary, as my TV smartly detected a new source on one of the HDMI ports and switched to it. For TVs that don't switch automatically, you need to use the TV remote to select the relevant HDMI port as the input source.

The moment my TV switched to the HDMI port the Chromecast was plugged into, I could see a PC-desktop-like screen on the TV with a rotating set of nice background pictures, prompting me to visit the chromecast site to set up the device.

I had, however, installed the Chromecast app on my HTC One M7 the day I ordered the device. Upon launch, the app scans the connected wi-fi network and looks for the presence of a Chromecast device. It found mine, which had the default name 'chromecast 7151' (I was offered the chance to choose a name of my own, but I left the default for now), and prompted me to set it up. At this stage the Chromecast was not yet connected to my wi-fi network; upon detecting the device, the app prompted me to begin setup, and my TV displayed my wi-fi network name as well.

As I moved on to the next step, my TV displayed a code 'C3W8' and the app prompted me to verify that it showed the same code. Upon verification, I was prompted to enter my wi-fi security passcode. At that stage, the app also displayed the MAC address of the Chromecast, which I needed because I have MAC filtering enabled on my wi-fi router; unless I added the Chromecast's MAC address to the whitelist on the router, the device would not be able to connect to the internet. I added the MAC address to the whitelist and entered the passcode, but the setup did not succeed, and I was prompted to check a couple of settings on my router: 1. the Access Point isolation setting and 2. uPnP or multicast.

I could not find the first setting on my D-Link 605L wi-fi router. I did find the uPnP setting, which I enabled before rebooting the router, but the Chromecast still could not connect to my wi-fi network. A quick Google search led me to a useful page listing known issues and workarounds for different routers. I found my router listed there, with a suggestion to enable another setting, 'wireless enhance mode'. Upon enabling this parameter, the Chromecast was able to connect to the internet and the setup was complete. The device immediately started downloading updates, which took a couple of minutes, and then it was ready for casting.

The 'discover applications' option in the Android app listed a few applications; the familiar ones were YouTube, Google Play Movies and Play Music. There were a few other apps for streaming the photos, videos and music stored on the device. Supported applications display a cast icon to start casting media on to the TV. For internet media, such as YouTube, the Chromecast sources the media directly from the internet over wi-fi, while you control playback from your device. Here is a screenshot of the first YouTube video I cast using my HTC One Android phone. More apps should start supporting Chromecast in the future.

For stored media, the streaming happens over the local wi-fi network, and with certain high-resolution videos there were pauses in between. This probably depends on the specific app used for such casting.

Next I tried to set it up from my Windows PC, but no luck: my PC is connected over a wired LAN, and the Chromecast app said I needed wi-fi enabled on the PC. I then turned to my Windows 8 laptop, where setup was a breeze with no hassles. The Chromecast app is just for setting up the device, and since mine was already set up I only needed to add the extension to the Chrome browser, which facilitates casting a specific browser tab. The extension adds a little icon to the address bar which, on click, starts casting the tab. At this time I could see YouTube and Netflix Windows apps with support for Chromecast, and many more Windows 8 apps may start supporting it soon. Here is how it looked when I cast a YouTube video from a Chrome browser tab.

If you want to connect the Chromecast to a different network, you have to do a factory reset, which can be done using the Chromecast app on your device or PC, and then set it up with the new network. Another great advantage is that the software updates automatically when Google releases updates, and more apps are coming up with support for Chromecast.

Saturday, November 9, 2013

Webservice Security Standards

SOA adoption is on the rise, and web services are predominantly used for its implementation. Web service messages are sent across the network in an XML format defined by the W3C SOAP specification. Web services have come a long way and have matured sufficiently to offer the required tenets, especially in the security domain. In this blog let us have a quick look at the available standards along the various security dimensions, and at how the related security requirements are addressed.

Secure Messaging

  • WS-Security - This specification was originally developed by IBM, Microsoft and VeriSign, and OASIS (Organization for the Advancement of Structured Information Standards) continued the work on the standard. It addresses the integrity and confidentiality requirements of web service messages, describing how to sign and encrypt SOAP messages and how to attach security tokens. Various signature formats and encryption algorithms are supported. The supported security tokens include X.509 certificates, Kerberos tickets, user ID/password credentials, SAML assertions and custom tokens. Due to the increased size of the SOAP messages and the cryptographic processing, this standard requires significantly more compute resources and network bandwidth.
  • SSL/TLS - SSL was developed by Netscape Communications Corporation in 1994 to secure transactions over the World Wide Web. Soon after, the Internet Engineering Task Force (IETF) began work to develop a standard protocol that provided the same functionality. They used SSL 3.0 as the basis for that work, which became the TLS protocol. In application design, TLS is usually implemented on top of a transport-layer protocol, encapsulating application-specific protocols such as HTTP, FTP, SMTP, NNTP and XMPP. Historically it has been used primarily with reliable transport protocols such as the Transmission Control Protocol (TCP). This standard helps address strong authentication, message privacy and integrity requirements.
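As a concrete illustration of the transport-layer option, the Python standard library's `ssl` module can wrap a plain TCP socket in TLS. This is a minimal sketch of a client-side setup; the host name is a placeholder, and a real web service client would layer HTTP/SOAP on top of the returned socket:

```python
import socket
import ssl

# The default context enables certificate validation and host-name
# checking, which provide the server-authentication and integrity
# guarantees described above.
context = ssl.create_default_context()

def open_tls_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TCP connection to host:port and wrap it in TLS."""
    sock = socket.create_connection((host, port))
    return context.wrap_socket(sock, server_hostname=host)

# The negotiated protocol version and cipher suite can be inspected on
# the returned socket, e.g. tls_sock.version() and tls_sock.cipher().
```

Note that the heavy lifting (handshake, certificate chain validation) is done by the library; the application only has to avoid weakening the defaults.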

Resource Protection

  • XACML - eXtensible Access Control Markup Language defines a declarative access control policy language implemented in XML and a processing model describing how to evaluate access requests. Version 3.0 of the standard was published by OASIS in January 2013; its new features include the Multiple Decision Profile, delegation, obligation expressions, advice expressions and policy combination algorithms. While there are many ways the base language can be extended, many environments will not need to do so: the standard language already supports a wide variety of data types, functions, and rules about combining the results of different policies. In addition, standards groups are already working on extensions and profiles that hook XACML into other standards like SAML and LDAP, which will increase the number of ways XACML can be used.
  • XrML - Developed by ContentGuard, a subsidiary of Xerox, and supported by Microsoft, the eXtensible Rights Markup Language provides a universal method for specifying rights and issuing conditions associated with the use and protection of content in a digital rights management system. XrML licenses can be attached to WS-Security in the form of tokens. XACML and XrML both deal with authorization: they share requirements from many of the same application domains, share the same concepts while using different terms, and are both based on XML Schema. Microsoft's Active Directory Rights Management Services (AD RMS) uses XrML in licenses, certificates and templates to identify digital content and the rights and conditions that govern its use.
  • RBAC, ABAC - Role-Based Access Control and Attribute-Based Access Control are established approaches to defining and implementing access controls and, similar to XrML, can be attached to WS-Security as tokens. The use of RBAC or ABAC to manage user privileges (computer permissions) within a single system or application is widely accepted as a best practice.
  • EPAL - The Enterprise Privacy Authorization Language (EPAL) is an interoperability language for exchanging privacy policy in a structured format between applications and can be leveraged for addressing the privacy concerns with the SOAP messages. An EPAL policy categorizes the data an enterprise holds and the rules which govern the usage of data of each category. Since EPAL is designed to capture privacy policies in many areas of responsibility, the language cannot predefine the elements of a privacy policy. Therefore, EPAL provides a mechanism for defining the elements which are used to build the policy.
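To make the attribute-based model concrete, here is a drastically simplified, illustrative sketch of an XACML-style evaluation in Python: a request carries subject, resource and action attributes, each rule returns Permit/Deny/NotApplicable, and a combining algorithm (deny-overrides) merges the results. The rule and names below are invented for the example and are not part of any standard API:

```python
from dataclasses import dataclass

@dataclass
class Request:
    """Attributes of an access request, XACML-style."""
    subject: dict
    resource: dict
    action: str

def rule_doctors_read_records(req: Request) -> str:
    """Illustrative rule: only doctors may read medical records."""
    if req.resource.get("type") == "medical-record" and req.action == "read":
        return "Permit" if req.subject.get("role") == "doctor" else "Deny"
    return "NotApplicable"

def deny_overrides(decisions) -> str:
    """One of XACML's standard combining algorithms: any Deny wins."""
    if "Deny" in decisions:
        return "Deny"
    if "Permit" in decisions:
        return "Permit"
    return "NotApplicable"

def evaluate(req: Request, rules) -> str:
    return deny_overrides([rule(req) for rule in rules])

req = Request(subject={"role": "doctor"},
              resource={"type": "medical-record"}, action="read")
decision = evaluate(req, [rule_doctors_read_records])
```

A real XACML engine expresses the rules declaratively in XML rather than in code, but the request/rule/combining-algorithm shape is the same.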

Negotiation of Contracts

  • ebXML - e-business XML is a modular suite of standards advanced by OASIS and UN/CEFACT and approved as ISO 15000. While the ebXML standards seek to provide formal XML-enabled mechanisms that can be implemented directly, the ebXML architecture is focused on concepts and methodologies that can be more broadly applied to allow practitioners to better implement e-business solutions. ebXML provides companies with a standard method to exchange business messages, conduct trading relationships, communicate data in common terms, and define and register business processes. A CPA (Collaboration Protocol Agreement) document is the intersection of two CPP documents, and describes the formal relationship between two parties.
  • SWSA - The SWSA (Semantic Web Services Architecture) interoperability architecture covers the support functions to be accomplished by Semantic Web agents (service providers, requestors, and middle agents). While not all operational environments will find it necessary to support all functions to the same degree, the distributed functions addressed by this architecture include: Dynamic Service Discovery, Service Engagement (Negotiating & Contracting), Service Process Enactment & Management, Semantic Web Community Support Services, Semantic Web Service Lifecycle & Resource Management Services, and Cross-Cutting Issues.

Trust Management

  • WS-Trust - The goal of WS-Trust is to enable applications to construct trusted SOAP message exchanges. This trust is represented through the exchange and brokering of security tokens. This specification provides a protocol agnostic way to issue, renew, and validate these security tokens. The Web service security model defined in WS-Trust is based on a process in which a Web service can require that an incoming message prove a set of claims (e.g., name, key, permission, capability, etc.). If a message arrives without having the required proof of claims, the service SHOULD ignore or reject the message. A service can indicate its required claims and related information in its policy as described by WS-Policy and WS-PolicyAttachment specifications.
  • XKMS - XML Key Management Specification is a protocol developed by W3C which describes the distribution and registration of public keys. Services can access an XKMS compliant server in order to receive updated key information for encryption and authentication. The XML Key Management Specification (XKMS) allows for easy management of the security infrastructure, while the Security Assertion Markup Language (SAML) makes trust portable. SAML provides a mechanism for transferring assertions about authentication of entities between various cooperating entities without forcing them to lose ownership of the information.
  • SAML - Security Assertion Markup Language is a product of the OASIS Security Services Technical Committee intended for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider. SAML allows business entities to make assertions regarding the identity, attributes, and entitlements of a subject (an entity that is often a human user) to other entities, such as a partner company or another enterprise application. SAML specifies three components: assertions, protocol, and binding. There are three assertions: authentication, attribute, and authorization. Authentication assertion validates the user's identity. Attribute assertion contains specific information about the user. And authorization assertion identifies what the user is authorized to do. Protocol defines how SAML asks for and receives assertions. Binding defines how SAML message exchanges are mapped to Simple Object Access Protocol (SOAP) exchanges.
  • WS-Federation - WS-Federation extends the WS-Security, WS-Trust and WS-SecurityPolicy by describing how the claim transformation model inherent in security token exchanges can enable richer trust relationships and advanced federation of services. A fundamental goal of WS-Federation is to simplify the development of federated services through cross-realm communication and management of Federation Services by re-using the WS-Trust Security Token Service model and protocol. A variety of Federation Services (e.g. Authentication, Authorization, Attribute and Pseudonym Services) can be developed as variations of the base Security Token Service. 
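As an illustration of the trust checks above, here is a simplified sketch of what a service provider verifies on a SAML-style authentication assertion before trusting it: a known issuer, the intended audience, and a validity window. The assertion is modelled as a plain dictionary and the issuer/audience URLs are invented; a real deployment parses the XML and also verifies the digital signature, which is elided here:

```python
from datetime import datetime, timedelta, timezone

# Issuers this service provider has a trust relationship with
# (illustrative value, not a real identity provider).
TRUSTED_ISSUERS = {"https://idp.example.com"}

def assertion_is_acceptable(assertion: dict, audience: str,
                            now: datetime) -> bool:
    """Check issuer, audience restriction and validity window."""
    return (
        assertion["issuer"] in TRUSTED_ISSUERS
        and audience in assertion["audiences"]
        and assertion["not_before"] <= now < assertion["not_on_or_after"]
    )

now = datetime.now(timezone.utc)
assertion = {
    "issuer": "https://idp.example.com",
    "audiences": ["https://sp.example.com"],
    "not_before": now - timedelta(minutes=1),
    "not_on_or_after": now + timedelta(minutes=5),
}
ok = assertion_is_acceptable(assertion, "https://sp.example.com", now)
```

The point of the sketch is that trust is established by checks on claims, not by the mere presence of a token.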

Security properties

  • WS-Policy, WS-SecurityPolicy - WS-Policy represents a set of specifications that describe the capabilities and constraints of security policies on intermediaries and endpoints, and how to associate policies with services and endpoints. Web Services Policy is a machine-readable language for representing these capabilities and requirements as policies, making it possible for providers to express them in a form clients can process automatically. A policy-aware client uses a policy to determine whether one of its policy alternatives (i.e. the conditions for an interaction) can be met in order to interact with the associated web service. Such a client may choose any of the policy alternatives, and must choose exactly one of them, for a successful web service interaction; it may choose a different alternative for a subsequent interaction.
  • WS-ReliableMessaging, WS-Reliability - WS-ReliableMessaging was originally written by BEA Systems, Microsoft, IBM and TIBCO, and later submitted to the OASIS Web Services Reliable Exchange (WS-RX) Technical Committee for adoption and approval. Prior to WS-ReliableMessaging, OASIS produced a competing standard, WS-Reliability, that was supported by a coalition of vendors. The protocol allows endpoints to meet the delivery assurances, namely At-Most-Once, At-Least-Once, Exactly-Once and In-Order. Persistence considerations related to an endpoint's ability to satisfy the delivery assurances are the responsibility of the implementation and do not affect the wire protocol.
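The "choose exactly one alternative" behaviour of a policy-aware client can be sketched very simply. Here a policy is modelled as a list of alternatives, each a set of required assertions, and the client picks the first alternative whose requirements it can fully meet; this is a toy rendering of the WS-Policy intersection idea, not the real normalization algorithm, and the assertion names are merely illustrative:

```python
def choose_alternative(policy_alternatives, client_capabilities):
    """Return the first alternative the client can satisfy, else None."""
    for alternative in policy_alternatives:
        if alternative <= client_capabilities:  # subset test
            return alternative
    return None  # no interaction is possible

# Illustrative service policy: two acceptable ways to secure a call.
service_policy = [
    {"wsse:X509Token", "sp:EncryptedParts"},
    {"wsse:UsernameToken", "sp:TransportBinding"},
]
client_caps = {"wsse:UsernameToken", "sp:TransportBinding", "sp:HttpsToken"}
chosen = choose_alternative(service_policy, client_caps)
```

If no alternative matches, the client knows up front that it cannot interact with the service, rather than failing mid-exchange.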

Wednesday, October 16, 2013

Webservices Security: Potential Threats to Combat

Web services are one of the primary methods of implementing SOA in highly distributed enterprise computing environments, where enterprise applications need to be integrated not only with internal applications but also with external systems built and operated by various business partners. This requires such services to be exposed beyond the trust boundaries of the enterprise, which increases the security threat landscape.

Securing web services is more complicated than securing typical end-user systems, as web services are built as conduits between systems rather than for human users. Most of us are very familiar with the first line of defense, namely authentication, data integrity, confidentiality and non-repudiation. These are certainly critical security concerns, and there are well-established tools and practices that help address them. But it is not enough to be content with solving these concerns alone, as the services are no longer constrained within the trust boundaries.

While other types of applications have executables that act as an outer layer and protect the application's internals, web services have no such outer layer and thus expose their internals to potentially unforgiving attackers on the internet or intranet. Testing and securing web services requires a toolkit able to act like a client, or as its proxy, intercepting, transforming or manipulating the messages being exchanged. Having said that, toolkits alone do not guarantee a complete solution to the security exposure of web services. There needs to be an understanding of the application, its context, its trust boundaries and so on, which helps enumerate the potential threats to be managed before the toolkits are put to use.

In this blog, let us have a look at the potential threat categories beyond the first line of defense that an organization should be aware of and be prepared to combat. As always, the attempt is to come up with the most significant threats and not to produce an exhaustive list of such threats. Determination of the specific threats under each category requires an analysis of the application and its internals in line with the trust levels.

Privilege Escalation

Virtually all attacks attempt to do something the attacker is not privileged to do. The attacker wants to leverage whatever limited privilege he has and turn it into higher ("elevated") privilege, exploiting potentially flawed authentication and authorization mechanisms. Sometimes attackers exploit vulnerabilities in the internal components of the service, be they custom-coded or third-party components, gaining privileges for remote code execution or database manipulation. Obviously, the consequences of such a breach can be severe, leading to financial and non-financial damage.

While protection against such vulnerabilities depends on the specifics of the application and its architecture, some general best practices can alleviate this category of threats: establish the authentication and authorization needs of each internal component carefully, based on the least-privilege principle; consider additional contextual information such as the client, location or network in addition to the basic credentials; and implement multiple levels of authentication within the service components. When third-party components or runtimes are used, it is important to promptly apply the security patches that the vendors release.

Denial of Service

The interoperable and loosely coupled web services architecture, while beneficial, can be resource-intensive and is thus susceptible to denial-of-service (DoS) attacks, in which an attacker uses a relatively insignificant amount of resources to exhaust the computational resources of a web service. A DoS attack aims to impact the availability of the underlying network, compute or storage resources for legitimate service consumers. Given that web services are typically used in high-availability enterprise integration scenarios, even a small-scale DoS attack can cause severe disruption in all the connected systems, leading to breaches of SLAs with partners and at times to financial damage as well.

The effects of these attacks can vary from high CPU usage to making the JVM run out of memory; clearly the latter is a critical vulnerability. Protection against DoS can be implemented at different levels, with a view to ensuring legitimate use of the underlying resources. Some of the techniques used to combat DoS attacks restrict access to critical internal components to legitimate requests and reject the rest. For instance, the input message size can be validated against a limit for the specific service method, and requests carrying unusually large XML messages can be rejected at the network layer itself. Tools and appliances are available to combat DoS attacks, but how they are set up and configured depends on the specific needs.
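The size-limit technique can be sketched in a few lines: reject a request whose raw payload exceeds a per-operation limit before handing it to the (expensive) XML parser. The operation names and byte limits below are invented for the example; real limits would be derived from each service method's expected message sizes:

```python
# Illustrative per-operation payload limits, checked before parsing.
MAX_PAYLOAD_BYTES = {
    "GetQuote": 8 * 1024,          # small request/response operation
    "UploadStatement": 512 * 1024,  # legitimately larger payloads
}
DEFAULT_LIMIT = 16 * 1024

def accept_request(operation: str, payload: bytes) -> bool:
    """Cheap pre-parse guard: enforce the size limit for this operation."""
    limit = MAX_PAYLOAD_BYTES.get(operation, DEFAULT_LIMIT)
    return len(payload) <= limit
```

Because the check is a simple length comparison, it costs almost nothing per request, which is exactly the asymmetry you want when defending against resource-exhaustion attacks.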


Spoofing

Spoofing attacks succeed when one entity in a transaction trusts another entity. If an attacker knows about the trust, they can exploit it by masquerading as the trusted party. This can also mean masquerading the additional contextual information used in authentication or request processing; the most common such information is the SOAPAction header and the client IP. There are various ways to exploit credentials or spoof the source of messages, including credential forgery, session hijacking and impersonation attacks. Services shall be designed to validate such information appropriately, in isolation and in combination with other related information, to establish that the request is legitimate.
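A minimal sketch of validating such spoofable metadata "in combination" follows: the SOAPAction header must match the operation actually invoked, and the client address must fall inside the partner network registered for that operation. The action URN and CIDR range are illustrative only, and a real deployment would combine this with proper credential checks rather than rely on it alone:

```python
import ipaddress

# Illustrative mapping of operations to the partner networks that are
# allowed to invoke them (not a real registration scheme).
REGISTERED_PARTNERS = {
    "urn:orders:Submit": ipaddress.ip_network("203.0.113.0/24"),
}

def request_is_consistent(soap_action: str, invoked_operation: str,
                          client_ip: str) -> bool:
    """Cross-check the SOAPAction header against the operation and source."""
    if soap_action != invoked_operation:
        return False  # header does not match what the body invokes
    network = REGISTERED_PARTNERS.get(soap_action)
    return network is not None and ipaddress.ip_address(client_ip) in network
```

Each field is easy to forge on its own; checking them against one another raises the bar for the attacker.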


Repudiation

An important web services security requirement is non-repudiation, which prevents a party from denying that it sent or received a message. The way to implement this is using XML digital signatures: for example, if I send a message signed with my private key, I cannot later deny that I sent it. This concern arises when the web service does not bind clients to their actions using appropriate techniques, or due to a flawed implementation of the auditing and logging requirements. Data inconsistency is one of the common outcomes of this threat and can lead to severe damage to the enterprise.

A combination of the protection measures against various threat categories would help combat this threat. For instance, an adequate protection from spoofing, protecting the messages while in transit coupled with appropriate logging and audit implementations would help minimize the risks arising out of this threat.

Information Privacy

As web services are typically implemented as part of a complex system and have access to a large amount of potentially sensitive information, it is important to ensure that access to that information is restricted. The transfer of the data should also be secured to prevent eavesdropping and sniffing. Privacy and confidentiality concerns with web services are no different from those in any other system. As such, the information disseminated through the services has to be reviewed in line with the organization's information sensitivity policy, and policies and rules applied to determine when a specific request is allowed access to such information. This involves not only appropriately defining the authorization rules for clients and users, but also carefully considering the information or parameters received as part of the request message.

Message Tampering

Web service messages, both requests and responses, can be tampered with using various attack methods if not appropriately protected. Web services, as components of complex distributed enterprise systems integrated with multiple partner systems, open up the possibility of message tampering, since they are exposed beyond the trust boundaries over multiple communication paths. Attacks under this threat category include man-in-the-middle attacks and the implanting of trojans and malware. As with the other threats, this can cause severe damage, and a compromise in this category may also mean a compromise in one or more other threat categories; for instance, a tampered input message might lead to spoofing of identity and thus compromise information privacy.

To conclude: while protection measures are evolving on one side, newer threats keep emerging on the other, so security professionals need continuous engagement and an appropriate security or threat management framework to combat existing and emerging threats. Periodic security audits shall be supplemented with formal security testing using the necessary toolkits. All said and done, the extent of protection should depend on the organization's risk policies and risk appetite, the critical nature of the web services and the trust boundaries.

Saturday, September 28, 2013

Strategies for Information Governance

No, we are not discussing IT governance, or data governance either; this is about information governance. Information is fast becoming the currency of business organizations, and it is an important asset that needs to be protected, managed and governed. Physical records are giving way to digital information, which is growing and moving beyond the boundaries of the enterprise. This opens up a new set of challenges in realizing the business value and managing the associated risks. To add to that, a whole set of new and evolving regulatory requirements escalates the risks around privacy, security and retention. To understand what information governance is, let us look at how Gartner defines it:

“The specification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archival and deletion of information. It includes the processes, roles, standards and metrics that ensure the effective and efficient use of information in enabling an organization to achieve its goals.”

Looking at the above definition, we can say that information governance is a framework for managing the information life cycle from creation through deletion, defining accountability, retention, protection and quality aspects around it. Obviously the framework should comprise processes, standards, roles and responsibilities, metrics, and tools and technology for the effective and efficient use of information. The framework should be in line with the organization's strategies for information governance, so it is important to establish the strategies first and then build the framework around them.

Obviously, the information governance strategies shall be formed with due consideration to the following aspects of information:

Classification: Information classification is one of the most crucial elements of an effective information governance process, and yet it is also the one that many organizations fail to implement well. In its simplest terms, information classification is the process of categorizing information based on its level of sensitivity, perceived business value and retention needs. While classification based on sensitivity alone is prevalent, as most information security frameworks demand it, effective and efficient information governance requires the classification to also represent retention needs and business value. When done properly, the classification of information helps an organization determine the most appropriate level of safeguards, controls and usage guidelines that need to be in place. Organizations should be aware that a data classification may change throughout the life cycle: it is important for data stewards to re-evaluate the classification of information on a regular basis, based on changes to regulations and contractual obligations, as well as changes in the use of the data or its value to the company.
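A minimal sketch of what such a multi-dimensional classification record might look like, capturing the three dimensions discussed above; the level names, the sample "encrypt if confidential" rule and the seven-year retention figure are all illustrative choices, not prescriptions:

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

@dataclass
class Classification:
    """One record per information asset: sensitivity, value, retention."""
    sensitivity: Sensitivity
    business_value: str   # e.g. "high", "medium", "low"
    retention_years: int

    def requires_encryption(self) -> bool:
        # One possible safeguard rule derived from the classification.
        return self.sensitivity.value >= Sensitivity.CONFIDENTIAL.value

record = Classification(Sensitivity.CONFIDENTIAL, "high", 7)
```

The point is that safeguards and retention schedules fall out of the classification record, rather than being decided ad hoc per document.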

Protection: Protection of data is an important element of the information governance framework. Data security breaches now make headline news almost weekly, and the consequences can be disastrous as organisations' bottom line and reputation are impacted. Information management and protection must evolve in step with organisational change; a planned governance structure for information allows organisations to support business expansion while meeting regulatory and personal data protection laws.

Retention: Organizations are obligated to respond to various information requests, be it for litigation, audit or investigation. Numerous laws in various countries require information to be retained for a certain period for use as evidence, while certain countries have laws requiring non-persistence, i.e. certain classes of information must not be persisted, for privacy reasons. Effectively balancing such complex retention requirements depends on proper identification and classification of the information and the use of appropriate tools and technology.

Roles & Accountability: Historically, establishing robust information management was considered an IT challenge: CIOs were expected to deliver the appropriate technology to support critical information reporting and management, and CISOs, who are mostly aligned with IT functions, were expected to protect the information assets. But this does not absolve the business functions of accountability for the information they create and manage. IT is just a facilitator; it is the business that owns, and is responsible for, the information throughout its life cycle. The overall requirements of any information asset must ultimately be specified by the business people who define, understand and own the process that handles its usage.

Collaboration: The people who staff the functions that produce and use the information are the ones who know its value, can point out the current version of documents, and should know how long a given document or data set will be useful from a business continuity perspective. It is therefore very important that their knowledge of these aspects is considered while formulating the information governance strategies. Committed involvement from every employee, and effective communication among all of them, is the key to building a successful information governance framework. Continued collaboration of all the business and IT functions is also essential in sustaining the information governance program, so that the attributes that determine information classification, usage and business value stay aligned with the changing landscape of regulatory and business needs.

Quality & Integrity: As information becomes a key asset of the organization and many decisions are based on the information at hand, it is important that its quality and integrity be of the highest level, so that such decisions do not work against the organization. Appropriate processes or techniques to validate the quality and integrity of information should be put in place, and those involved in its creation or discovery should ensure that appropriate checks are performed and that the information so created is reliable.

Information Governance is a combination of business practices, technology and human capital for meeting the compliance, legal, regulatory, security requirements, and organizational goals of an entity. Information governance provides a means to protect, access, and otherwise manage data and transform it into useful information. While applying best practices such as physical and electronic security measures as well as creating policies for the disposition of data are critical to implementing an information governance strategy, available technology solutions and services can play a key role in several areas.

Thursday, August 29, 2013

Common & Practical Problems of Requirements Elicitation

Requirements elicitation is an important and challenging phase of any software project. This holds good for both product and project development, though the approach and techniques may vary. A well-specified requirements document has been found to considerably improve project success rates. Though various methods and techniques have evolved over the last couple of decades to produce a good requirements specification, many still struggle to get it done well.

This is mainly because requirements elicitation is not just a science; it is an art too. It is more an art because it is highly human-intensive, and much depends on the skills of the people involved. Moreover, which method or technique to use, and the way the document is structured and written, depend on the abilities of the person driving the activity. Based on my experience in bespoke project development and product development, I have listed below some of the most common and practical problems with this activity:

1. Preconceived Notions

The requirements of every customer, even in the same business domain, will be different. For example, the requirements of bank X will not be the same as those of bank Y. Each enterprise has different business processes that differentiate its abilities or value delivery from its competitors. The teams involved in requirements elicitation should start with a clean slate for every project and not bias the elicitation work with their previous project experience. Ignoring this principle results in a misaligned requirements specification and, ultimately, a deficient product. As this is a human-intensive process, it is quite common for the customer representatives too to miss such things.

This is quite a common problem with product companies. Irrespective of whether the client contracts for a product with customization or for a project, vendors prefer to reuse their existing code assets. As such, the business analysts engaged in requirements elicitation tend to scope the customer requirements so that they fit within the existing product architecture and its constraints. Even in the case of a product-based contract, the requirements elicitation or gap study should remain unbiased; it is then the solution architects who come up with solutions to bridge the gaps. In the process, the customer has the option to dilute a requirement in favor of an existing workaround.

Business analysts must master the art of unlearning and relearning to handle this area well.

2. The Design Mix-up

The next common problem is mixing up requirements elicitation with solution design. This happens on both sides, i.e. the vendor's and the customer's. Business analysts from the vendor side often start visualizing the solution design for a specific use case and suggesting deviations or workarounds to it. Similarly, on the customer front, the users may start talking from a system perspective. For example, customers narrating requirements might talk about a drop-down list, check boxes, etc. Ideally such details should be left to the design teams; where appropriate, the customer might review those designs, specify the design guidelines to be followed, or specify usability requirements for the vendor to conform to.

There is another school of thought that visualizing the solution early on eliminates feasibility issues down the line. While this is partly true, the problem arises when such design constraints hide the underlying business requirement, which can lead to misinterpretations later on.

3. Poor Planning

Requirements elicitation has to be a planned process with proper entry and exit criteria for each of its sub-processes. There are many frameworks and techniques for performing this activity. Irrespective of the method or technique, the elicitation process should comprise the following activities: identify the stakeholders; define use case specifications; generate scenarios; organize walkthroughs / interviews; document requirements; and validate requirements. It is quite possible that each of these activities has to be performed in multiple iterations. Poor planning of these activities can result in ambiguous or deficient requirements.

A related key issue is exit planning, i.e. deciding when to consider the requirements elicitation complete. Depending on the other project constraints, the exit criteria have to be carefully identified and further planning built around them. For instance, if time is a key constraint, the elicitation activities should not be hurried just for the sake of meeting the timeline, ending up with an imperfect specification. Instead, the scope can be divided into broader sub-components, with agreement from the customer to defer some of them to a later phase based on priorities. An Agile approach could also solve this situation: start eliciting the requirements as specific user stories are taken up in their respective sprints. A careful consideration of all the project constraints and priorities is a must in choosing a solution and thereby arriving at the best course of action.

4. Volatility

In one of our projects we were handed a four-hundred-page requirements specification document that was a year-long effort of the customer's internal business analysts. It was no surprise that the actual business requirements were far different from what was documented, as the business practices and processes had changed a lot during that very period. This is a common problem that the industry has been battling, and the Agile approach is emerging as a solution to it. The volatile nature of business requirements demands that solutions be delivered quicker to reap the time-to-market advantage.

Another aspect of volatility is that the requirements elicited from different users or departments can differ and at times conflict. In some cases such differences are misstatements or misunderstandings; in others they are genuine, in which case the differing requirements should be specified appropriately, leaving the design teams to come up with solutions that reconcile them.

5. Undiscovered Ruins

It is human nature to answer just the questions that were asked. Business analysts must therefore master the art of asking appropriate follow-up questions based on the responses from the customer representatives. That is why it is called elicitation: the business analysts have to provoke the customer into fully revealing what is required of the system. In the process it is quite common for certain needs to go undiscovered, only to show up later as a deficiency. This problem can be partly addressed by identifying the right stakeholders for the purpose and then getting the elicited requirements validated by different stakeholders, who will look at them from a different perspective and might bring out gaps, if any.

Saturday, August 24, 2013

State of Open Source in The Enterprise

We all know and keep saying that Open Source is here to stay, and there is ample proof out there in the form of companies like Facebook, Google, LinkedIn and quite a few others whose businesses are enabled by Open Source software. I get what you are thinking: the short list above comprises companies in the social networking kind of business, and we need to look at large businesses, like banks. While we can say that the adoption of Open Source software is increasing, adoption in the mission-critical enterprise software space is still not very visible. For example, big data and cloud computing have triggered increased use of open source software in the form of Hadoop, MapR, OpenStack and so on. However, with enterprise software vendors also pitching in with their own ways of addressing similar problems, big enterprises tend to drift away from open source.

What is holding CIOs back from adopting Open Source in their enterprise software suite? Probably, they see a risk to continued support. In most cases, CIOs are willing and ready to adopt or try Open Source software for the enterprise's secondary uses and leave the mission-critical software to be proprietary and fully supported. Java as an open source programming language is well embraced, but in the middleware space, though Apache Tomcat and JBoss have considerable usage, big enterprises still look at WebLogic or WebSphere.

On the website front, Apache and NGINX lead the open source market share and have much wider adoption across enterprises of all sizes. However, with many proprietary content management frameworks emerging, large enterprises are drifting towards them, since website content maintenance and collaboration is fairly simple with these frameworks. In the mobile world, Android is here to stay and has wide adoption on the consumer side, with Microsoft getting increased attention from enterprises for their enterprise mobility needs.

Similarly, with big data, we see many open source NoSQL and SQL databases emerging and even getting much-needed visibility. However, bigger enterprises tend to try these databases for secondary usage like data warehousing, and leave their primary mission-critical applications on proprietary database solutions. When it comes to critical enterprise software, the decision makers go by real-world case studies and market presence, and don't want to take the risk of going with Open Source.

Maybe, if we take a closer look at the decision-making processes, we can understand on what grounds Open Source is left out. Typically, the following are the key criteria used for evaluating enterprise software that come in the way of adopting Open Source Software:

  • Support: Looking at it positively, Open Source software gets support from developers across the globe, mostly with a well-governed release process. On the other hand, this software is not built with the needs of a specific enterprise in mind, but for a specific purpose. Thus considerable effort has to go into making an Open Source product work for an enterprise, and the skills are scarce. Enterprises whose business is built around IT have plenty of developers on board and have the ability to customize, adopt and even contribute back. For non-IT organizations, this means a dependency on a vendor who can offer the support at a cost. When the software is expected to be critical to the business, this concern naturally gains importance, and decisions drift in favor of proprietary solutions. Another line of thinking is that, though the software comes free, the effort involved in customizing, enhancing and maintaining it can result in a far higher total cost of ownership than that of proprietary software.
  • Usability: Open Source developers focus on the technology and often ignore usability aspects, resulting in increased costs around user training and maintenance. Along the same lines, end users are expected to use the software at their own risk, and no one guarantees or warrants its performance levels.
  • Concerns on IP: Being open source, consumers are expected to contribute any changes they make to the software back for common use, and thus Intellectual Property of the enterprise might have to go back into the shared source code. With proprietary software, however, there could be an option to continue to own the specific IP, though at a higher cost.
  • Reliability: Contrary to the reality, there is a perception that with Open Source software, reported issues might take too long to be fixed, or some might not be fixed at all. With a community of developers all over the world contributing, Open Source software evolves pretty fast; however, no one can be held responsible for it. Other related concerns are lack of governance, absence of adoption by competitors and lack of support from big names.
  • Security: As the source code is accessible to anyone, there is a tendency to think that hackers can get to know the software better and design attacks around its vulnerabilities. However, as with fixing issues, some open source communities treat fixing reported vulnerabilities as a priority, keeping the product secure. The security concern remains even with proprietary software.

Of all these, support and its related cost seem to be the primary concern holding CIOs back from adopting Open Source software. The traditional methods of evaluating software may have to be revisited to overcome this problem; the decision makers want to play it safe.

Of late we see pressure on CIOs to cut costs where possible, and it is a good sign that Open Source Software is getting a fresh look, more so in the government sector. Those doing business in and around IT are building the needed skills and talent in house and embracing open source solutions. We see this more in the areas of configuration management, build automation, automated testing and other tools which aid in building and maintaining systems and infrastructure. Open Source software is still not making inroads into mission-critical enterprise business applications.

Wednesday, August 14, 2013

Agile in Fixed Price Fixed Scope projects - Hybrid Contracts

It is well known that the traditional methods are not yielding better project success rates, and thus there is a tendency to lean on Agile methodologies. At the same time, clients feel secure with Fixed Price, Fixed Scope projects, as their financial outlay is limited and there is no ambiguity. What they miss in this process, however, is the value delivery. The traditional project management methodologies focus on scope, time and resources, where all three are constraints. Ideally the focus should be on the value and quality delivered within those constraints, thereby guaranteeing a better success rate for the project.

The software vendors are in business to earn profits. As such, with Fixed Price projects, vendors tend to limit their efforts to delivering the agreed scope. With the pace at which change happens around any business, freezing scope for a project early on is nearly impossible, and software delivered to a scope frozen early on is often less usable. With change being the key driver in optimizing value delivery, clients and vendors have conflicting views on change.

Agile methodology has evolved over the years and offers a solution to the problem of optimized value delivery. However, clients still feel that the Agile approach does not secure their interests in terms of a definite price and time. Of course, this concern is genuine, as they cannot afford to sign a project contract where the cost and time are elastic. While the basic premise of Agile is to embrace change, to succeed it depends on a very high level of trust between vendor and client, where both work toward a common goal and the contract is profitable to both.

Having said that, Fixed Price (FP), Fixed Scope (FS) contracts offer very limited opportunity for vendors to practice Agile. Making either FP or FS elastic gives some room for practicing an Agile methodology. Let us explore how this can be accomplished in the contracts. Both of the contracting models described below require a high level of trust between vendor and client.

Fixed Price Elastic Scope (FPES) contract: In this model, while the price is fixed, the scope can vary. Under this model a hybrid Agile approach can be practiced: the scope is broken down into features, and development happens feature by feature. Depending on the time taken to implement a feature, more features are added or removed. For instance, if a feature estimated to take 30 days is implemented in 20 days, one or more new features can be added to fill the time saved. Similarly, if the implementation takes 45 days, then one or more features will be removed.

To bring in an incentive for both vendors and clients, a discount factor can be agreed upon, which is applied while adding or removing features. For instance, where the vendor has saved 10 days on a feature, the client, instead of adding a feature that needs 10 days to fill the gap, will only add a feature with 5 days of effort, the discount factor being 50%. The same discount factor is applied on the converse (where implementation exceeds the planned effort).
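The discount-factor arithmetic above can be sketched as follows. This is only an illustrative Python sketch; the 50% factor and the day counts are the examples from the text, not terms of any real contract:

```python
def scope_adjustment(planned_days, actual_days, discount_factor=0.5):
    """Days of feature effort the client may add (positive) or must
    remove (negative) under an FPES contract."""
    saved = planned_days - actual_days  # positive when the vendor finishes early
    return saved * discount_factor

# Vendor saves 10 days on a 30-day feature: client adds a 5-day feature.
print(scope_adjustment(30, 20))   # 5.0
# Vendor overruns by 15 days: 7.5 days' worth of features come out of scope.
print(scope_adjustment(30, 45))   # -7.5
```

The same function covers both directions because the saved time simply changes sign on an overrun.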

Elastic Price Fixed Scope (EPFS) contract: In this model, the scope is fixed but the pricing is variable. The idea behind this approach is to arrive at a base rate and a profit factor. While the base rate and the profit factor, along with the generic terms and conditions, are covered in the Master Services Agreement, the actual project scope can be covered in multiple Statements of Work (SoWs). Requirements elicitation and scoping can be the first SoW. This way, the project can be split into smaller working software modules and the work items can be scoped in stages / phases. This approach helps clients handle changes with ease.

Here again, an approach like 60:40:20 can be adopted to prioritize the work items. This approach requires the work items to be grouped into must-have features, good-to-have features and fixes. Every SoW can comprise 60% must-haves, 40% good-to-haves and 20% fixes emerging out of previous deliveries.
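The 60:40:20 split can be sketched as below. Since the three parts sum to 120, this sketch treats them as a ratio normalized over the SoW's capacity; that normalization is my assumption, as is the 120-day capacity figure:

```python
def compose_sow(capacity_days, ratio=(60, 40, 20)):
    """Split an SoW's capacity across must-haves, good-to-haves and
    fixes in the given ratio (normalized over the ratio's total)."""
    total = sum(ratio)
    must, good, fixes = (capacity_days * r / total for r in ratio)
    return {"must_have": must, "good_to_have": good, "fixes": fixes}

print(compose_sow(120))
# {'must_have': 60.0, 'good_to_have': 40.0, 'fixes': 20.0}
```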

The incentives for both vendors and clients can be based on categorization of the work items as new features, clarifications and fixes. New features are the scope items as elaborated during elicitation. Clarifications are items that emerge out of the elicited requirements during the design or build phase. Fixes are incorrect implementations by the vendor, basically design and build defects. The cost of each SoW can be computed by applying the profit factor to the base rate: new features are charged at base rate + profit, clarifications at the base rate, and fixes at base rate - profit.
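The pricing rule above can be expressed as a short sketch. The base rate of 500 per day and the 20% profit factor are made-up numbers for illustration:

```python
def sow_cost(items, base_rate, profit_factor):
    """Price an EPFS SoW: new features at base + profit, clarifications
    at the base rate, fixes at base - profit (all rates per day)."""
    rate = {
        "new_feature": base_rate * (1 + profit_factor),
        "clarification": base_rate,
        "fix": base_rate * (1 - profit_factor),
    }
    return sum(days * rate[category] for category, days in items)

# Hypothetical SoW: 10 days of new features, 4 of clarifications, 2 of fixes.
print(sow_cost([("new_feature", 10), ("clarification", 4), ("fix", 2)], 500, 0.2))
# 8800.0
```

Because fixes are billed below the base rate, the vendor effectively pays for its own defects, which is the incentive the model is after.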

With the above, we are not concluding that Agile cannot be practiced in an FPFS project. There are still ways and means by which a hybrid Agile approach can be thought of and practiced so that value delivery is the primary focus for all parties. Do share your thoughts on the subject in the comments, and I will cover them in my next blog.

Sunday, July 28, 2013

Colocation - Key Considerations for Selection of Service Provider

An increasing number of organizations are shifting towards colocating their data centers, as they see inherent benefits like efficiency, security and other shared managed services. Colocation is offered as a service based on standards, policies, procedures, people and the data center infrastructure, and the user experience depends on the quality of each of these components. The type of provider also drives the end-user experience, because you get what you pay for. Given the significant benefits of colocation, CIOs cannot just ignore it as an option, but need to look for reliable facilities that uphold the needs of the organization.

The benefits of going for a colocation service include effective use of capital and access to higher-quality facilities through power redundancy, cooling, and scalability for growth. Choosing a colocation provider is a strategic business decision that evolves from thoughtful consideration of certain key parameters. Moreover, the partnership with the colocation service provider needs a multi-year commitment from both parties. Listed below, in no order of priority, are some of the key considerations that help CIOs make a well-thought-out decision on choosing the right provider.

Customer & Partner Considerations
Many tend to focus on the organization's IT needs and base the decision to colocate, and the choice of the service provider, on such internal design parameters. However, CIOs also have to consider the strategies of their customers and partners. For instance, in the case of highly interconnected and collaborative partner systems, the DR sites need to have connectivity with the DR sites of the partner systems as well. If the service goes down, customers will easily become frustrated with the company, especially if they are in a new market and are not long-time clients.

Power and capacity planning
The power supply is a key component of the shared services that the colocation provider offers. Needless to say, it is vital that the data center is supported with a scalable and reliable power supply in order to keep the infrastructure components up. The provider should focus on the future needs of its customers and perform proactive capacity planning, so that it is prepared to support the scalability needs of the organization.

Designing and building a reliable and resilient data center calls for heavy investment, and smaller organizations find it difficult to justify the RoI for such investments. Colocation offers this benefit, as the provider shares the cost across multiple service consumers. However, a careful evaluation of the design and architecture elements that contribute to the reliability and resiliency of the service is essential in deciding on a provider. A careful review of the SLA of any colocation provider is also very important, to ensure that the provider commits to the current and future investments needed to support this requirement.

Security and technical advancement
With colocation, the organization is leaving its servers, equipment and, more precious still, its data assets in the custody of the service provider, usually outside the physical security boundary of the organization. Thus, physical security of the premises and the assets hosted therein is an important component of the colocation service and needs careful consideration and evaluation. What is required is comprehensive security, beyond the guard at the gate, ensuring data protection at all times. Evolving technologies like biometrics, card readers, keypad or electronic locks, as well as surveillance cameras, should form part of the security investments.

Network connectivity and proximity
Like reliability and resiliency, network performance is a key customer service issue, not just a technology issue. An ineffective interconnect setup or a limited network can disrupt business operations. It also matters much to ascertain the network latency associated with connectivity to partner systems. In most cases, if large volumes of data have to be exchanged with a particular partner system, it is beneficial to have the data center closer to the partner's, so that the latency between the centers is as low as possible, leading to faster and more efficient communication. Network performance issues can have a tangible financial impact on the bottom line. Colocation providers usually offer diverse options for internet access, and a careful selection of primary and redundant internet service providers is essential.

Technology & support
The colocation provider should implement standard facility design elements and integrate proven technologies so that better power and cooling capacities are achieved. The provider should also have demonstrated expertise in facilitating flexible deployment of higher-density solutions in an on-demand model. Despite the colocation arrangement, the IT heads of the organization are still responsible for addressing the business needs, and as such a careful evaluation of the standards and practices the provider follows in keeping up with technology and trends is very much essential.

While most providers offer on-site 24/7 technical support, the skill sets on offer can vary, and the organization may demand a specialized skill which the provider is not able to support. As such, it is important to compare the skills needed vis-a-vis the skills on offer and make appropriate decisions. Similarly, the provider's maintenance windows for certain support services could differ from what the organization wants; this is another component that needs careful consideration.

Flexible commercial terms
While the key benefit of outsourcing is the shift from capital investment to operational expenditure, CIOs might end up spending far more unless the commercials attached to each specific service or component of the colocation service are carefully evaluated. An ideal provider lets the customer pick and choose the components and offers multiple, flexible billing options to choose from. A careful choice is required to optimize the value realized out of the colocation engagement.

Locational Constraints
Despite having on-site technical support, there will be emergencies that need experts to physically visit the data center and work on solutions. As such, the geographical location of the data center(s) matters to the organization and needs careful consideration. Similarly, local issues like frequent political unrest, the frequency of natural disasters in the region, proximity to the airport, etc. need to be considered. For instance, if the data center is located in a flood-prone area, there is a high risk of the data center services getting affected.

Regulatory Compliance
Various countries and states have legislation that could have an impact on the data or other assets housed at the colocated data center. For instance, the UK and US have privacy and security legislation that affects the way information is stored, processed or disseminated. It is also beneficial to have an insider's view, and legal opinions where needed.

Given that most businesses are technology driven, the regulatory and legal needs warrant audits of systems and facilities, calling for appropriate arrangements with the provider to support this need. The provider may be requested to share copies of a SAS 70, SOC 2 or other such audit report, which represents third-party validation of internal processes and controls.

Cost is an equally important aspect to consider. However, it often does not make sense to consider cost alone; it should be weighed against the tangible and intangible value that can be realized out of colocation. For instance, better network performance could mean faster and wider acceptance of the systems by customers and partners, resulting in an expanding market. Similarly, better security services might protect the organization from potential liabilities that may arise in future on account of security breaches. Thus the lowest cost isn't necessarily the best for the business: lack of support, insufficient product lines or unreliable data center facilities can end up being very costly later on.

Green IT
The power consumption of a data center is far higher than that of a regular office premise. Colocation by itself contributes towards optimizing energy consumption, as it is a consolidated, purpose-built facility shared amongst many. However, it is worth exploring the measures the provider has in place for conserving energy, and such other green practices.

While the above considerations are generic, certain other considerations might be vital for some organizations depending on their business nature and priorities. This list can be further updated based on the feedback and comments from the readers.

Saturday, June 1, 2013

Software Quality Attributes: Trade-off Analysis

We all know that software quality is not just about meeting the functional requirements, but also about the extent to which the software meets a combination of quality attributes. Building quality software requires much attention to identifying and prioritizing the quality attributes and designing and building the software to adhere to them. Again, going by the saying "you cannot manage what you cannot measure", it is also important to design the software with the ability to collect metrics around these quality attributes, so that the degree to which the end product satisfies each attribute can be measured and monitored.

It has always been a challenge for software architects and designers to come up with the right mix of quality attributes with appropriate priorities. This is further complicated because these attributes are highly interlinked: a higher priority on one can have an adverse impact on another. Here is a sample matrix showing the inter-dependencies of some of the software quality attributes.

While the '+' sign indicates a positive impact, the '-' sign indicates a negative one. This is only a likely indication of the dependencies; in reality they could differ. The important takeaway, however, is that the quality attributes need to be planned and prioritized for every piece of software being designed or built, keeping in mind the inter-dependencies amongst them. This means that trade-offs have to be made, and business and IT should be in agreement on those trade-off decisions.
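One way to make such a matrix actionable is to encode it and mechanically flag conflicting priorities. The entries below are hypothetical examples of commonly cited tensions (e.g. security vs. performance), not the contents of the actual matrix above:

```python
# Hypothetical inter-dependency entries: '-' means prioritizing the first
# attribute tends to hurt the second; '+' means it tends to help it.
IMPACT = {
    ("security", "performance"): "-",
    ("security", "usability"): "-",
    ("performance", "portability"): "-",
    ("modifiability", "maintainability"): "+",
}

def trade_off_conflicts(prioritized):
    """Return attribute pairs with a negative inter-dependency among the
    prioritized set -- candidate trade-off points needing an explicit
    business/IT decision."""
    chosen = set(prioritized)
    return [pair for pair, sign in IMPACT.items()
            if sign == "-" and pair[0] in chosen and pair[1] in chosen]

print(trade_off_conflicts(["security", "performance", "maintainability"]))
# [('security', 'performance')]
```

Each flagged pair is exactly the kind of trade-off point that the ATAM exercise discussed below the matrix is designed to surface and decide on.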

SEI's Architecture Tradeoff Analysis Method (ATAM) provides a structured method to evaluate the trade-off points. The ATAM not only reveals how well an architecture satisfies particular quality goals (such as performance or modifiability), but also provides insight into how those quality attributes interact with each other, i.e. how they trade off against each other. Such design decisions are critical; they have the most far-reaching consequences and are the most difficult to change after a system has been implemented.

A prerequisite of an evaluation is to have a statement of quality attribute requirements and a specification of the architecture with a clear articulation of the architectural design decisions. However, it is not uncommon for quality attribute requirement specifications and architecture renderings to be vague and ambiguous. Therefore, two of the major goals of ATAM are to

  • elicit and refine a precise statement of the architecture’s driving quality attribute requirements 
  • elicit and refine a precise statement of the architectural design decisions

Sensitivity points use the language of the attribute characterizations. So, when performing an ATAM, the attribute characterizations are used as a vehicle for suggesting questions and analyses that guide towards potential sensitivity points. For example, the priority of a specific quality attribute might be a sensitivity point if it is a key property for achieving an important latency goal (a response) of the system. It is not uncommon for an architect to answer an elicitation question by saying: "we haven't made that decision yet". However, it is important to flag key decisions that have been made as well as key decisions that have not yet been made.

All sensitivity points and tradeoff points are candidate risks. By the end of the ATAM, all sensitivity points and tradeoff points should be categorized as either a risk or a non-risk. The risks/non-risks, sensitivity points, and tradeoffs are gathered together in three separate lists.