Saturday, March 8, 2014

The Principles of Agile Enterprise Architecture Management

Change is happening everywhere, and at an accelerating rate, in many functional areas but most visibly in IT. Business users are encouraged to innovate in every possible area, and that brings in more and more transformation projects, an important category in enterprise program and portfolio management. These transformation projects are often time critical: if not implemented on time, the market advantage is lost. Along the same lines, technology adoption has become a key aspect of business success, and predicting, tracking and embracing upcoming disruptive technologies is now an important business and strategic risk, as these technologies can have a wide impact across business strategies, capabilities and processes.

The Enterprise Architecture function shares responsibility for ensuring these changes are embraced with the least impact. While running the business as it is remains important, enabling the transformation of business capabilities and information management capabilities is another key goal for Enterprise Architects. One of the key elements Enterprise Architects should address is the complexity around business and IT architecture management, so that transformation projects are implemented on schedule and deliver the intended business benefits. While there are other key objectives, such as delivering stakeholder value, managing complexity is the objective that comes closest to being Agile.

Agile Enterprise Architecture is all about letting change happen and keeping the architectural principles continuously evolving. This calls for an appropriate lifecycle that facilitates the continuous evolution, development and adaptation of the current and target reference architectures. The maturity levels of the various IT management functions will also keep changing over time. In this blog, let us focus on the key principles that enable Agile Enterprise Architecture Management:


Value Individuals and Interactions over Tools and Processes

It is a well-established and understood fact that it is the people who build success in the enterprise; tools and processes are just enablers. With people being the greatest asset, organizational culture plays an important role in motivating employees to collaborate, innovate and deliver results more effectively and efficiently. Build the EA team in such a way that it has representation or an interface with top management, business and IT owners, business and IT operations teams, and the project teams driving change. Choose and deploy the right set of tools, technologies and processes that facilitate collaboration with the different business and IT functions.

The EAM team should aim for sustainable evolution at a pace driven by business and IT users; help project teams avoid panic and discourage culture clashes; and understand that everyone has their own area of expertise and can therefore add value to the project or program.


Focus on the demands of top stakeholders and speak their language

Typically, the top stakeholders need continuous input from the EA team on various business and IT functions to decide on further strategic alignments or improvements, which in turn lead to new transformation projects or a change of course in existing projects. The inputs could take the form of metrics, visualizations and reports. It is very important that these inputs are relevant and make sense to the target recipients. The following considerations help ensure that stakeholders realize the maximum value from such inputs from EA teams:

  • A single number or picture is more helpful than a thousand reports.
  • Avoid waste - share information that is relevant, nothing more and nothing less.
  • Leverage existing processes to generate and deliver these inputs, rather than a whole set of EA-specific processes.

Promote rapid feedback by working jointly on models and architecture blueprints with other people and functions. Remember that the best way of conveying information is a face-to-face conversation, supported by other materials. Shared development of a model, at a whiteboard, will generate excellent feedback and buy-in. Work as closely as possible with all stakeholders, including your customers and other partners.


Reflect behavior and adapt to changes

The effect of a change ultimately shows in the behavior of individuals, tools and functions. The EAM function should attempt to understand the likely direction and behavior of such changes using techniques such as scenario analysis and change cases. This helps the EAM function determine how best to embrace the change in terms of timing, approach and methodology. This is where a pattern-based approach to developing the EAM function facilitates change adoption with greater ease and less impact.

EAM should plan for and manage changes, and should never resist a change. Embracing change may not always be easy, but a well-thought-out EAM evolution lifecycle certainly makes it simpler. One big change can often be broken into blocks and taken one at a time, depending on the time, effort and business priorities.


Here are some useful references for further reading on agility and Enterprise Architecture Management.

1. Towards an Agile Design of the Enterprise Architecture Management Function

2. Principles for the Agile Architect

3. The Principles of Agile Architecture

4. Actionable Enterprise Architecture (EA) for the Agile Enterprise: Getting Back to Basics

Sunday, February 9, 2014

The Principles of Effective Risk Management

Enterprise Risk Management is one of the core domains of governance. In some business sectors, success depends on intelligent and effective risk management principles, frameworks and practices. Advances in technology, like big data and analytics, also play a key role in making risk management effective and adding value to the business. Other factors that necessitate a well-architected ERM in an organization include regulatory and compliance needs, security and privacy expectations, and disaster and business continuity needs. As risk management practices have evolved, principle-based approaches have been found to be more effective.


Here are some of the common principles to model a Risk Management framework around:

  • Create and protect value - Any framework should both add value and protect the value that the organization's assets are expected to deliver. This involves identifying the specific business needs, appropriately assessing the risk measure and, in turn, facilitating the decision on the best risk mitigation or avoidance plan. Risk management must have a demonstrable effect on the achievement of objectives and the improvement of enterprise performance.
  • Integrated approach - Risk management cannot be practiced effectively in silos. Today's organizations face the challenge of many different frameworks for meeting different goals: for instance, ISO 27001 for security, ITIL for IT infrastructure management and COBIT for governance. Integrated risk management promotes a continuous, proactive and systematic process to understand, manage and communicate risk from an organization-wide perspective in a cohesive and consistent manner. To be effective, the risk management framework should be capable of being integrated into the existing process framework.
  • Recognise & manage complexity - Organisations are very complex environments in which to deliver concrete solutions. There are many challenges that need to be overcome when planning and implementing information management projects. In practice, however, there is no way of avoiding the inherent complexities within organisations. New approaches to information management must therefore be found that recognise (and manage) this complexity.
  • Flexible and adaptable - There is no "one-size-fits-all" approach to risk management and organizations should consider their own context when determining an appropriate approach. Organizations today face a considerable change management challenge for information management projects. In practice, it means that projects must be carefully designed from the outset to ensure that sufficient adoption is gained. The framework shall be tailored and responsive to the organization's external and internal context including its mandate, priorities, organizational risk culture, risk management capacity, and partner and stakeholder interests.
  • Highly usable - In general, risk management practices should allow for the identification of risk information throughout the organization that can be used to support enterprise-wide decision-making, and should be flexible enough to evolve with changing priorities. Every employee of the organization has a role to play in an effective risk management program, which calls for structures and associated processes that are simple to understand and genuinely usable and executable.
  • Dynamic and responsive to change - The process of managing risk needs to be flexible. The challenging environment we operate in requires agencies to consider the context for managing risk as well as continuing to identify new risks that emerge, and make allowances for those risks that no longer exist. Risk Management shall be deployed in a systematic, structured and timely manner to enable cost-effective embedding and focused generation of consistent, comparable and reliable results. 
  • Leverage tools & technology - An effective risk management calls for the ability to consider and make use of large volume of data and should leverage the statistical techniques to predict and prioritise the risks. Coming up with a right mitigation or contingency plan also calls for processing of large volume of data. The framework should provide for leveraging latest technology as it emerges to facilitate such high volume information handling and statistical analysis.
  • Considerate of human and cultural factors - The success of a risk management program largely depends on employees implementing it as part of their everyday business activities. This calls for the structure and processes to be considerate of the organization's cultural values and to avoid creating conflicts.
  • Communicate extensively - Communication is key to the success of any project or program. The framework shall provide for seamless communication amongst all stakeholders, so that information is exchanged at the right time without losing its value.
  • Continuous Improvement - The big bang approach is unlikely to yield the expected outcome for obvious reasons. Instead, an evolutionary approach will work better and thus the ERM should be capable of evolving. Deployment should be complemented with mechanisms to assess and continually improve enterprise risk management maturity and be aligned with approaches driving the organization’s overall excellence and maturity agenda. 
  • Governance - Oversight and accountability for the risk management process is critical to ensure that the necessary commitment and resources are secured, the risk assessment occurs at the right level in the organization, the full range of relevant risks is considered, these risks are evaluated through a rigorous and ongoing process, and requisite actions are taken, as appropriate.
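To make the "leverage tools & technology" principle concrete, here is a minimal scoring sketch, assuming a simple five-point likelihood and impact scale; the Risk record and the sample register entries are purely illustrative, not drawn from any standard.

```java
import java.util.Comparator;
import java.util.List;

public class RiskRanker {

    // Hypothetical risk item: likelihood and impact on an assumed 1-5 scale.
    record Risk(String name, int likelihood, int impact) {
        // Classic exposure score: likelihood multiplied by impact.
        int exposure() { return likelihood * impact; }
    }

    public static void main(String[] args) {
        List<Risk> register = List.of(
                new Risk("Data centre outage", 2, 5),
                new Risk("Regulatory non-compliance", 3, 4),
                new Risk("Key supplier failure", 4, 2));

        // Prioritise highest exposure first, so mitigation effort goes where it matters most.
        register.stream()
                .sorted(Comparator.comparingInt(Risk::exposure).reversed())
                .forEach(r -> System.out.printf("%-28s exposure=%d%n", r.name(), r.exposure()));
    }
}
```

Real ERM tooling would of course feed such a model with much richer data and statistical estimates; the point is only that a consistent, comparable score is what makes enterprise-wide prioritisation possible.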

The above is not an exhaustive list of principles that will readily suit every organization. The right set of principles should be identified based on the priorities of the business. When adopted, these principles help organizations practice improved risk management, delivering the following benefits to the enterprise:
  • Enhance the coverage of risks in all areas, including mission, strategy, planning, operations and finance.
  • Consider the causes of various risks and the resulting impacts.
  • Develop a culture in which employees manage risks as part of their daily routines.
  • Optimize risk appetite, so that business functions can take calculated risks.
  • Facilitate enterprise wide risk aware decision making.

Saturday, January 25, 2014

Internet of Things: What Strange Things Can Happen

It was about six years ago that we started to see WiFi-enabled digital cameras, and we wondered what WiFi had to do with a digital camera. With it, digital cameras were able to upload captured images automatically to cloud-based photo albums. Later came GPS-equipped digital cameras, which attach location data to captured images. Of course, with smartphones equipped with higher-resolution cameras, dedicated digital cameras are on the decline. That is just a well-known example of how a 'thing', or a smart thing, can connect to a network and share useful data for a purpose. So much has evolved since then, and we now see a world of possibilities in having all 'things' connected.


Researchers see a lot of benefit in making things smart and interconnecting them. Networking technologies are also evolving at a brisk pace, offering various improvements in wireless technologies and protocols. We can see this trend advancing further, and it may mature in about two decades from now. Looking further ahead, in line with my blog on Human Interface Technology, even humans can remain connected, and that could render human disabilities a thing of the past century.


If you followed this year's CES, it is evident that the future is all about connected devices. We could see everyday devices equipped with sensors and connectivity working together, understanding what we're doing, and operating automatically to make our lives easier. Here are some real-world examples of the Internet of Things:


A smart refrigerator can read the embedded tags on the grocery items stored in it and then, using a supporting backend platform in the cloud, identify the items and fetch details such as date of manufacture, expiry date and quantity. The fridge can then alert consumers about the state and stock of those items. With the kind of wearable gadgets we see now, these alerts can come through such devices too. It is left to your imagination how far this smart capability can be extended.


Medical and emergency care is another area where smart 'things' play a very useful, life-saving role. For instance, a connected car can call emergency services faster than a mobile phone. Again, with the help of embedded or worn smart gadgets, a hospital can learn a patient's history as the patient arrives and get ready for the emergency services, saving precious time that can be life saving. Check out this video that IBM has made describing how the Internet of Things is growing fast and could pervade the everyday life of human beings.


Extending this further to the daily routines of a business executive, the possibilities are endless and here are some that are close to reality, if not already real:

  • Once your smartphone hears a hint about a meeting in a conversation, it will look up your calendar in the background and pass on your busy/free status. If the executive wears smart glasses, he would see the schedule as he talks, which facilitates scheduling the meeting.
  • Smart alarms will be smart enough to consider what time you went to sleep and your schedule (both personal and official) for the following day, intelligently decide the wake-up time, and trigger the alarm in the morning.
  • Depending on traffic conditions, your car will intelligently suggest alternate routes to the office or another scheduled meeting venue and, if needed, automatically inform the meeting organizers about a possible delay or seek to reschedule the meeting.
  • As you drive back home, you remember that you need to pick up some drugs from a drugstore. Your smart car will already know this and will identify a store that stocks the drugs you need, on or close to your route. It can even place the order and have the store keep your items ready, so you just pick them up en route.
  • Needless to say, your car will be smart enough to run health diagnostics on itself and decide on the best date for its own garage visit, so that your schedule is not impacted.
  • These smart things will know about your presence and which device is within your reach for sending alerts. For example, if you are at home watching TV, your TV may show alerts from your washing machine; similarly, when you are at work, your smartphone would show these notifications.
  • Here are some more ways the 'Internet of Things' can impact your daily life.


Coming back to the household: you are watching your favorite action movie with surround sound, and you did not change your smartphone from silent mode back to a ringing profile. You don't have to worry; your smartphone knows what you are up to and, over a period, would have learnt by itself which calls you would want to answer in this situation, and accordingly rejects the others while responding to the caller appropriately. If it is an important call that you wouldn't want to miss, it knows that already: it will tone down the TV volume to draw your attention to the call, and you don't even have to reach for your phone, as your TV will take over the call from your smartphone. To extend this further, depending on the profiles of the other members of the house, which the house already knows through its sensors and networks, your smartphone will decide whether or not to route the call to the TV.


We can now visualize the possibilities, and they are endless. Smart things will have built-in learning capability and will keep learning from their master's behavior to perfect their services. This trend could lead us to a situation where things, by themselves or under the influence of hackers, attempt to take over human beings, as portrayed in some recent science fiction movies. On top of this, hackers will leverage these smart capabilities to break into these connected networks and do whatever they have been doing with connected systems today.


Here is how the hackers can intrude into your digital lifestyle:

  • We have already seen reports of smart refrigerators sending out spam emails.
  • By hacking into your house network, hackers may get to know how many members are home, or that no one is, information that is useful for planning burglary attempts.
  • Your TV may refuse to play your favorite channel and instead play content the hackers prefer you to watch.
  • Your car may drive to a place different from where you wanted to go. Along the same lines, hackers can execute traffic diversions and cause traffic jams, as portrayed in the movie Die Hard 4.
  • All your orders for home supplies may be hijacked and delivered elsewhere, while you will have paid for them. And of course, your house network may still acknowledge receiving the deliveries when it actually has not.
  • The impact of hacking into the emergency service network could be huge and life threatening.
  • Your smartphone can be hacked to refuse critical business calls, causing revenue impact to your organization.


IDC anticipates that more than 200 billion connected devices will be in use by 2021, with more than 30 billion of them autonomous. Cisco's Internet Business Solutions Group (IBSG) predicts some 25 billion devices will be connected by 2015, and 50 billion by 2020. How will having so many things connected change everything? Find the answer in the infographic. With all this, the Internet of Things is coming, and it is here to stay. Whether we humans are ready for this evolution remains to be seen.

Friday, January 17, 2014

REST Services - Security Best Practices

As most of us know, REST (Representational State Transfer) is an architectural style that is gaining increasing recognition amongst architects for the inherent advantages it offers. REST recommends the use of standards such as HTTP, URI, XML and JSON, and formats such as GIF and MPEG. Twitter, iPhone apps, Google Maps and Amazon Web Services (AWS) make heavy use of REST services. The basic tenet of REST is statelessness, and it is all about utilizing the HTTP verbs GET, PUT, POST and DELETE as outlined in the HTTP RFC.


Obviously, architects see some key advantages in REST services, and so REST implementation becomes an important consideration in responsive, service-oriented applications. Let us recap some of the key advantages:

  • Resources can be uniquely identified using URIs, which facilitates interconnecting these resources.
  • Resource manipulation is accomplished using the standard HTTP verbs, viz. GET, PUT, POST and DELETE (see the sketch after this list).
  • The data payload is minimal and thus offers capacity and efficiency benefits.
  • Easier implementation offers a shorter learning curve, better maintainability and a time-to-market advantage.
  • Strong support in JavaScript offers client-side computing benefits and thus improves responsiveness.
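To illustrate the verb mapping, here is a minimal JAX-RS sketch; the resource name, paths and responses are illustrative assumptions rather than any particular service's API.

```java
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Illustrative resource: each standard HTTP verb maps to one operation on /orders.
@Path("/orders")
@Produces(MediaType.APPLICATION_JSON)
public class OrderResource {

    @GET
    @Path("/{id}")
    public Response read(@PathParam("id") String id) {
        return Response.ok("{\"id\":\"" + id + "\"}").build();   // fetch a representation
    }

    @POST
    public Response create(String body) {
        return Response.status(Response.Status.CREATED).build(); // create a new resource
    }

    @PUT
    @Path("/{id}")
    public Response replace(@PathParam("id") String id, String body) {
        return Response.noContent().build();                     // replace the resource state
    }

    @DELETE
    @Path("/{id}")
    public Response remove(@PathParam("id") String id) {
        return Response.noContent().build();                     // delete the resource
    }
}
```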

Needless to say, there are certain disadvantages with REST services too; here are some:

  • Prone to the same threats and vulnerabilities as HTTP and the web.
  • Improper use of the HTTP verbs can lead to problems and complicate the design.
  • Relies on very few standards.

Some of the security challenges with REST Service implementations are outlined below:

Chained trust is challenging for web service implementations, and the situation is no different with REST. Unlike with SOAP, standards like WS-Security and SAML cannot be used with REST services. This calls for relying on a combination of security measures specific to each implementation. Here are some such measures, which in combination may help overcome this concern:

  • Use digital certificates to authenticate the server and the user.
  • Pass the user's identity from server to server and perform the necessary validation and authorization at the data source.

Cross-site request forgery (CSRF) attacks attempt to force an authenticated user to execute functionality without their knowledge. Being stateless, REST is inherently vulnerable to CSRF attacks. The workarounds for this security concern are:

  • Use of a custom header - Setting a custom header such as X-XSRF is a known solution for this concern. Endpoints receiving REST service requests reject or drop requests that do not carry the intended custom header. This is not a foolproof technique, but it offers some protection rather than none (a filter sketch follows this list).
  • Another approach is to deviate from the basic tenets of REST and maintain state, in which case a token can be generated and maintained to authenticate requests, so that requests carrying an invalid token, or none at all, can be dropped or rejected.
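Here is a servlet-filter sketch of the custom-header check described above. The header name X-XSRF, and the assumption that every non-GET request is state-changing, are illustrative choices rather than a prescription.

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: drop state-changing requests that lack the agreed custom header.
// Browsers will not attach arbitrary headers to cross-site form posts, which
// is what gives this check its (partial) CSRF protection.
public class XsrfHeaderFilter implements Filter {

    private static final String XSRF_HEADER = "X-XSRF"; // assumed header name

    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        boolean stateChanging = !"GET".equals(request.getMethod());
        if (stateChanging && request.getHeader(XSRF_HEADER) == null) {
            // Header missing: likely not sent by our own client code, so reject.
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(req, res);
    }
}
```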

While the above are just examples of the concerns, REST services, being based on the HTTP specification, are prone to all the security vulnerabilities of a web application. Thus, while REST is the easier choice for the advantages listed above, it should be implemented with due consideration to some or all of the following security best practices (a vetting sketch after the list illustrates a few of them):
  • All data must be sent over HTTPS; this secures the data in transit.
  • Use PKI or HTTP Digest authentication for authentication.
  • Always perform authorization for every request upon receipt.
  • Scan HTTP headers, query strings and POST data, looking for reasons to reject a request.
  • Don't combine multiple resources under a single URI; always identify each resource uniquely, so that the security implementation can stay simple and relevant to the specific resource.
  • Always validate the JSON/XML data.
  • Ensure appropriate use of the HTTP verbs for managing resources, and enable selective restriction of these verbs.
  • Design URIs to be persistent. If a URI needs to change, honor the old URI and issue a redirect to the client.
  • Caching should generally be avoided where possible, and sensitive data should never be cached.
  • When developing REST solutions, take care not to create URIs that contain sensitive information.
  • The requester should be authenticated and authorized before an access control decision is completed.
  • All access control decisions shall be logged.
  • Code defensively, as if protecting the application.
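As an illustration of a few of these practices (authorize every request, scan inputs for reasons to reject, keep sensitive data out of URIs, restrict the verbs), here is a minimal vetting sketch; the size limit, header name and keyword check are illustrative assumptions, not a complete defense.

```java
import javax.servlet.http.HttpServletRequest;

// Sketch of "look for reasons to reject": each check enforces one of the
// practices listed above. Limits and checks are illustrative assumptions.
public final class RequestVetter {

    private static final int MAX_QUERY_LENGTH = 2048; // assumed limit

    private RequestVetter() {}

    public static boolean acceptable(HttpServletRequest request) {
        // Authorize every request: demand credentials each time, no implied sessions.
        if (request.getHeader("Authorization") == null) return false;

        // Scan the query string for oversized or suspicious input.
        String query = request.getQueryString();
        if (query != null && (query.length() > MAX_QUERY_LENGTH || query.contains("<"))) {
            return false;
        }

        // Sensitive data must not travel in the URI (naive keyword check for the sketch).
        if (request.getRequestURI().toLowerCase().contains("password")) return false;

        // Allow only the HTTP verbs this API actually uses.
        return switch (request.getMethod()) {
            case "GET", "POST", "PUT", "DELETE" -> true;
            default -> false;
        };
    }
}
```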


Friday, January 3, 2014

Human Technology Interfaces - What The Future Has In Store

All of us would have read something or other about technology advancements that work with the human body. For example, Health IT companies are experimenting with embedding memory chips under the skin to store an individual's health records, so that when you walk into a clinic, the clinic will know your health history and can suggest the further course of treatment, all without a human front-office assistant. Similarly, with advancements in brain interfaces, and along the lines of the movie "Minority Report", police and investigation authorities may move to crime-prevention mode, i.e. they will know the moment you think of committing a crime; with technologies like virtual presence, surrogates, etc., this might be accomplished without any human casualties.

There are more such advancements, and in this blog my attempt is to present a few scenarios that could become possible in the near future, and the effects they could have on various attributes of mankind.

Glass: With further advancement, Google Glass-like gadgets could be miniaturized and worn like contact lenses. These lenses would be able to interface with the things around you. For instance, the refrigerator will greet you with its current temperature, and you will know what is inside various containers just by looking at them (without opening them), along with details like quantity and how many days the contents have been stored. With added gamification, one would enjoy performing various tasks at the kitchen table: while assisting you with tasks like chopping vegetables, these things will also keep a score of how you perform, so that you enjoy doing them. These gadgets, coupled with access to public and private data stores, help you in decision making, which can enhance one's Personal Intelligence (PI). Check out this video to get a glimpse of what I have tried to narrate here.

Brain Interface: Gadgets like Brain Link are already on the market, which, coupled with related smartphone applications, give beneficial gaming experiences like attention training, meditation, neuro-social gaming, and research and knowledge about the brain. Most of us have watched the movies 'Surrogates', wherein humans stay indoors while their surrogates go out to work, and 'Minority Report', where the police and justice department get alerts the moment someone thinks of committing a crime. Quite a few science fiction imaginings of the past have become reality now, and recent research accomplishments suggest that even the fiction in the above movies might become reality some day not very far away. For instance, researchers at Harvard have demonstrated a non-invasive brain-to-brain interface wherein humans could control animals with their thoughts alone.

Given that continued advancements in brain interfaces will further these accomplishments, coupled with various other inventions, the next generation of mankind may experience the following:


  • Personal Intelligence can be augmented by wearing or embedding devices and / or gadgets.
  • Though humans can have private thoughts, these may become subject to review or audit by government agencies, and no wonder securing your thoughts would become absolutely essential.
  • Shopping will be virtual, and all products can be virtually felt and experienced sitting at home, then ordered.
  • All 'things' would have interfaces to interact with humans.
  • A blink or double blink can be programmed to perform certain actions, like taking a snapshot of what you are seeing at that moment.
  • Artificial or virtual dreams will become reality, with a choice of dream and a choice of character. Extending this, one would be able to watch a favorite movie while asleep and cast oneself as a character in it.
  • With body area networking and nano chips embedded across various critical body parts, self-diagnosis with alerts might become possible.
  • Human disabilities can be worked around using robotic body parts and brain interface technology.
  • The hacking community will sharpen its skills and explore opportunities for hacking human thoughts and memory, which could be the biggest security and privacy threat for security experts to combat.


Here are some more videos demonstrating the innovations that are taking place around human technology interfaces:

  • Ford takes SYNC to the next level through the use of configurable controls and the use of an electronic personal assistant, or "avatar," named Eva
  • Someday we'll be living on and under the oceans. This idea isn't far-fetched, and if it comes true, then here's the answer to a new type of underwater transportation system.
  • Using a brain-computer interface technology pioneered by University of Minnesota biomedical engineering professor Bin He, several young people have learned to use their thoughts to steer a flying robot around a gym, making it turn, rise, dip, and even sail through a ring.
  • Cathy Hutchinson has been unable to move her own arms or legs for 15 years. But using the most advanced brain-machine interface ever developed, she can steer a robotic arm towards a bottle, pick it up, and drink her morning coffee.
  • At Barcelona University, scientists are working on a European Research Project to link a human brain to a robot using skin electrodes and video goggles so that the user feels they are actually in the android body wherever it is in the world.

Saturday, December 14, 2013

Google Chromecast - My Initial Experience

Google's Chromecast is a tiny USB-drive-like gadget that plugs into the HDMI port of your HDTV and facilitates casting media onto your HDTV. With built-in wi-fi modules, most HDTVs on the market today allow browsing and streaming media directly from the internet. With Chromecast, you stream movies, videos and music from Netflix, Hulu, HBO and other media sites on the internet. You can use your Android or iOS devices, or even your Windows PC or laptop, to cast and control the streams on your TV. This blog is not about what the device is, but about sharing my first experience with this cute little gadget. Check out more about the device here.

I ordered this device on ebay.in, and it was delivered to my home the very next day. The pack contained the Chromecast device, an HDMI extender cable, a USB power cable for powering the device, and a power supply. And of course there was a small, micro-printed product information leaflet, which contained just license information, warnings, warranty and the pack contents. For everything else, it referred to the Google Chromecast site.

The three-step setup instructions printed on the inside of the flip-top packaging read: 1. plug it in; 2. switch input; and 3. set it up. That was pretty simple, and I was curious how simple it would actually be to set up.

I just plugged the device into the HDMI port of the TV and used the provided USB power cable to power it up. In case your TV does not have USB ports, you can use the provided power supply and plug it into the mains. And yes, the device does need power to work: unlike USB ports, HDMI ports (per the current specification) do not supply power to connected devices.

Upon connecting the power source, the LED on the device glowed red for a few seconds and then turned white. In my case the second step was not necessary, as my TV smartly detected a new source on one of the HDMI ports and switched to it to receive video. For TVs that don't switch automatically, you need to use your TV remote to select the relevant HDMI port as the input source.

The moment my TV switched to the HDMI port the Chromecast was plugged into, I saw a PC-desktop-like screen on the TV with nice random background pictures, prompting me to visit the chromecast site to set up the device.

I had, however, installed the Chromecast app on my HTC One M7 the day I ordered the device. Upon launch, the app scans the connected wi-fi network looking for a Chromecast device. It found the device, which had the default name 'chromecast 7151' (I was offered the chance to choose a name of my own, but left the default for now), and prompted me to set it up. At this stage the Chromecast was not yet connected to my wi-fi network, and my TV displayed my wi-fi network name as well.

As I moved on to the next step, my TV displayed the code 'C3W8' and the app prompted me to verify that it was the same code. Upon verification, I was prompted to enter my wi-fi security passcode. At that stage, the app displayed the MAC address of the Chromecast, which I needed because I have MAC filtering enabled on my wi-fi router: unless I added the Chromecast's MAC address to the whitelist on my router, it would not be able to connect to the internet. I added the MAC address to the whitelist and entered the passcode, but the setup did not succeed, prompting me to check a couple of configurations on my router relating to Access Point isolation and uPnP or multicast support.

I could not figure out the first configuration parameter on my D-Link 605L wi-fi router. I did find the uPnP setting, which I enabled, and rebooted the router, but the Chromecast still could not connect to my wi-fi network. A quick Google search led me to a useful page listing known issues and workarounds for different routers. I found my router listed there, with a suggestion to enable another configuration parameter, 'wireless enhance mode'. Upon enabling this parameter in the router, the Chromecast was able to connect to the internet, and with that the setup was complete. The device immediately started downloading updates, which took a couple of minutes, and then it was ready for casting.

The 'discover applications' option in the Android app listed a few applications, the most familiar being YouTube, Google Play Movies and Play Music. There were a few other apps for streaming the photos, videos and music stored on the device. Supported applications display a cast icon to start casting media to the TV. When casting internet media, like YouTube, the Chromecast sources the media directly from the internet over wi-fi, but you can still control it from your device. Here is a screenshot of the first YouTube video I cast using my HTC One Android phone. More apps will start supporting Chromecast in the future.

In the case of stored media, the streaming happens over the local wi-fi network, and with certain high-resolution videos there were pauses in between. This probably depends on the specific app used for casting.

Next I tried to set it up on my Windows PC, but no: my PC is connected over wired LAN, and the Chromecast app said I needed wi-fi enabled on the PC. I then turned to my Windows 8 laptop, where setup was a breeze with no hassles. The Chromecast app is just for setting up the device, and since mine was already set up, I only needed to add the extension to the Chrome browser to enable casting a specific browser tab. The extension adds a little icon to the address bar which, on click, casts the browser tab. At the time, I could see the YouTube and Netflix Windows apps supporting Chromecast, and many more Windows 8 apps may start supporting it soon. Here is how it looked when I cast a YouTube video from a Chrome browser tab.

If you want to connect the Chromecast to a different network, you have to do a factory reset, which can be done using the Chromecast app on the device or on the PC, and then set it up with the new network. Another great advantage is that the software updates automatically when Google releases updates, and more and more apps are offering support for Chromecast.

Saturday, November 9, 2013

Webservice Security Standards

SOA adoption is on the rise, and web services are predominantly used to implement it. Web service messages are sent across the network in an XML format defined by the W3C SOAP specification. Web services have come a long way and have matured sufficiently to offer the required tenets, especially in the security domain. In this blog, let us take a quick look at the available standards across the security dimensions and at how the related security requirements are addressed.

Secure Messaging


  • WS-Security - This specification was originally developed by IBM, Microsoft and VeriSign, and OASIS (Organization for the Advancement of Structured Information Standards) continued the work on the standard. It addresses the integrity and confidentiality requirements of web service messages, describing how to sign and encrypt SOAP messages and how to attach security tokens. Various signature formats and encryption algorithms are supported. Supported security tokens include X.509 certificates, Kerberos tickets, user ID/password credentials, SAML assertions and custom tokens. Due to the increased size of the SOAP messages and the cryptographic processing, this standard requires significantly more compute resources and network bandwidth.
  • SSL/TLS - SSL was developed by Netscape Communications Corporation in 1994 to secure transactions over the World Wide Web. Soon after, the Internet Engineering Task Force (IETF) began work to develop a standard protocol that provided the same functionality, using SSL 3.0 as the basis; that work became the TLS protocol. In application design, TLS is usually implemented on top of a transport layer protocol, encapsulating application-specific protocols such as HTTP, FTP, SMTP, NNTP and XMPP. Historically it has been used primarily with reliable transport protocols such as the Transmission Control Protocol (TCP). This standard addresses the strong authentication, message privacy and integrity requirements (a minimal client sketch follows this list).
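As a minimal sketch of the transport-level protection described in the SSL/TLS bullet above, the snippet below calls a service over HTTPS; the endpoint URL is a placeholder. Server authentication, confidentiality and integrity all come from the negotiated TLS session rather than from the application code.

```java
import java.io.IOException;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

// Minimal sketch: invoking a service over HTTPS so that TLS supplies server
// authentication, confidentiality and integrity on the wire.
public class TlsClient {
    public static void main(String[] args) throws IOException {
        URL endpoint = new URL("https://example.com/service"); // placeholder URL
        HttpsURLConnection conn = (HttpsURLConnection) endpoint.openConnection();
        conn.setRequestMethod("GET");
        System.out.println("HTTP status:  " + conn.getResponseCode());
        System.out.println("Cipher suite: " + conn.getCipherSuite()); // negotiated by TLS
        conn.disconnect();
    }
}
```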

Resource Protection


  • XACML - eXtensible Access Control Markup Language defines a declarative access control policy language implemented in XML, and a processing model describing how to evaluate access requests. Version 3.0 of this standard was published by OASIS in January 2013. The new features of the latest version include Multiple Decision Profile, Delegation, Obligation Expressions, Advice Expressions and Policy Combination Algorithms. While there are many ways the base language can be extended, many environments will not need to do so. The standard language already supports a wide variety of data types, functions, and rules about combining the results of different policies. In addition, standards groups are working on extensions and profiles that will hook XACML into other standards like SAML and LDAP, which will increase the number of ways XACML can be used.
  • XrML - Developed by ContentGuard, a subsidiary of Xerox, and supported by Microsoft, eXtensible Rights Markup Language provides a universal method for specifying rights and issuing conditions associated with the use and protection of content in a digital rights management system. XrML licenses can be attached to WS-Security in the form of tokens. XACML and XrML both deal with authorization; they share requirements from many of the same application domains, share the same concepts while using different terms, and are both based on XML Schema. Microsoft's Active Directory Rights Management Services (AD RMS) uses XrML in licenses, certificates and templates to identify digital content and the rights and conditions that govern its use.
  • RBAC, ABAC - Similar to XrML, RBAC and ABAC are established approaches to define and implement Role Based Access Control and Attribute Based Access Controls and can be attached to WS-Security as tokens. The use of RBAC or ABAC to manage user privileges (computer permissions) within a single system or application is widely accepted as a best practice.
  • EPAL - The Enterprise Privacy Authorization Language (EPAL) is an interoperability language for exchanging privacy policy in a structured format between applications and can be leveraged for addressing the privacy concerns with the SOAP messages. An EPAL policy categorizes the data an enterprise holds and the rules which govern the usage of data of each category. Since EPAL is designed to capture privacy policies in many areas of responsibility, the language cannot predefine the elements of a privacy policy. Therefore, EPAL provides a mechanism for defining the elements which are used to build the policy.

Negotiation of Contracts


  • ebXML - e-business XML is a modular suite of standards advanced by OASIS and UN/CEFACT and approved as ISO 15000. While the ebXML standards seek to provide formal XML-enabled mechanisms that can be implemented directly, the ebXML architecture is focused on concepts and methodologies that can be more broadly applied to allow practitioners to better implement e-business solutions. ebXML provides companies with a standard method to exchange business messages, conduct trading relationships, communicate data in common terms, and define and register business processes. A CPA (Collaboration Protocol Agreement) document is the intersection of two CPP documents and describes the formal relationship between two parties.
  • SWSA - The SWSA (Semantic Web Services Architecture) interoperability architecture covers the support functions to be accomplished by Semantic Web agents (service providers, requestors and middle agents). While not all operational environments will find it necessary to support all functions to the same degree, the distributed functions addressed by this architecture include: Dynamic Service Discovery, Service Engagement (Negotiating & Contracting), Service Process Enactment & Management, Semantic Web Community Support Services, Semantic Web Service Lifecycle & Resource Management Services, and Cross-Cutting Issues.


Trust Management


  • WS-Trust - The goal of WS-Trust is to enable applications to construct trusted SOAP message exchanges. This trust is represented through the exchange and brokering of security tokens. This specification provides a protocol agnostic way to issue, renew, and validate these security tokens. The Web service security model defined in WS-Trust is based on a process in which a Web service can require that an incoming message prove a set of claims (e.g., name, key, permission, capability, etc.). If a message arrives without having the required proof of claims, the service SHOULD ignore or reject the message. A service can indicate its required claims and related information in its policy as described by WS-Policy and WS-PolicyAttachment specifications.
  • XKMS - XML Key Management Specification is a protocol developed by W3C which describes the distribution and registration of public keys. Services can access an XKMS compliant server in order to receive updated key information for encryption and authentication. The XML Key Management Specification (XKMS) allows for easy management of the security infrastructure, while the Security Assertion Markup Language (SAML) makes trust portable. SAML provides a mechanism for transferring assertions about authentication of entities between various cooperating entities without forcing them to lose ownership of the information.
  • SAML - Security Assertion Markup Language is a product of the OASIS Security Services Technical Committee intended for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider. SAML allows business entities to make assertions regarding the identity, attributes, and entitlements of a subject (an entity that is often a human user) to other entities, such as a partner company or another enterprise application. SAML specifies three components: assertions, protocol, and binding. There are three assertions: authentication, attribute, and authorization. Authentication assertion validates the user's identity. Attribute assertion contains specific information about the user. And authorization assertion identifies what the user is authorized to do. Protocol defines how SAML asks for and receives assertions. Binding defines how SAML message exchanges are mapped to Simple Object Access Protocol (SOAP) exchanges.
  • WS-Federation - WS-Federation extends the WS-Security, WS-Trust and WS-SecurityPolicy by describing how the claim transformation model inherent in security token exchanges can enable richer trust relationships and advanced federation of services. A fundamental goal of WS-Federation is to simplify the development of federated services through cross-realm communication and management of Federation Services by re-using the WS-Trust Security Token Service model and protocol. A variety of Federation Services (e.g. Authentication, Authorization, Attribute and Pseudonym Services) can be developed as variations of the base Security Token Service. 

Security properties

  • WS-Policy, WS-SecurityPolicy - WS-Policy represents a set of specifications that describe the capabilities and constraints of the security policies on intermediaries and end points and how to associate policies with services and end points. Web Services Policy is a machine-readable language for representing these Web service capabilities and requirements as policies. Policy makes it possible for providers to represent such capabilities and requirements in a machine-readable form. A policy-aware client uses a policy to determine whether one of these policy alternatives (i.e. the conditions for an interaction) can be met in order to interact with the associated Web Service. Such clients may choose any of these policy alternatives and must choose exactly one of them for a successful Web service interaction. Clients may choose a different policy alternative for a subsequent interaction.
  • WS-ReliableMessaging, WS-Reliability - WS-ReliableMessaging was originally written by BEA Systems, Microsoft, IBM and Tibco, and later submitted to the OASIS Web Services Reliable Exchange (WS-RX) Technical Committee for adoption and approval. Prior to WS-ReliableMessaging, OASIS produced a competing standard, WS-Reliability, that was supported by a coalition of vendors. The protocol allows endpoints to meet the delivery assurances, namely At-Most-Once, At-Least-Once, Exactly-Once and In-Order. Persistence considerations related to an endpoint's ability to satisfy the delivery assurances are the responsibility of the implementation and do not affect the wire protocol.

Wednesday, October 16, 2013

Webservices Security: Potential Threats to Combat

Web services are one of the primary methods of implementing SOA in highly distributed enterprise computing environments, where enterprise applications need to be integrated not only with internal applications but also with external systems built and operated by various business partners. This requires such services to be exposed beyond the trust boundaries of the enterprise, increasing the security threat landscape.

Securing web services is more complicated than securing end-user systems, as web services are built as conduits between systems rather than for human users. Most of us are very familiar with the first line of defense: authentication, data integrity, confidentiality and non-repudiation. These are certainly critical security concerns, and there are well-established tools and practices that help address them. But it is not enough to be content with solving these concerns, as the services are no longer constrained within trust boundaries.

While other types of applications have executables that act as an outer layer protecting the application's internals, web services have no such outer layer and thus expose their internals to potentially unforgiving hackers on the internet or intranet. Testing and securing web services requires a toolkit able to act as a client, or as its proxy, and to intercept, transform or manipulate the messages being exchanged. Having said that, toolkits alone do not guarantee a complete solution to the security exposure of web services. There needs to be an understanding of the application, its context and its trust boundaries, which helps enumerate the potential threats to be managed before putting the toolkits to use.

In this blog, let us look at the potential threat categories, beyond the first line of defense, that an organization should be aware of and prepared to combat. As always, the attempt is to present the most significant threats, not an exhaustive list. Determining the specific threats under each category requires analyzing the application and its internals in line with the trust levels.


Privilege Escalation

Virtually all attacks attempt to do something the attacker is not privileged to do. The hacker wants to somehow leverage whatever limited privilege he has and turn it into higher ("elevated") privilege, exploiting potentially flawed authentication and authorization mechanisms. Sometimes hackers exploit vulnerabilities in the internal components of the service, whether custom-coded or third-party components, gaining privileges for remote code execution or database manipulation. Obviously, the consequences of such a breach can be severe, leading to financial and non-financial damage.

While protection against such vulnerabilities depends on the specifics of the application and its architecture, as a general best practice the authentication and authorization needs of each internal component should be carefully established based on the least-privilege principle, considering additional contextual information such as the client, location or network alongside the basic credentials; implementing multiple levels of authentication within the service components can also alleviate this category of threats (a sketch follows). When third-party components or runtimes are used, it is important to promptly apply the security patches that vendors release.
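A minimal sketch of such a contextual, least-privilege check might look like the following, where a privileged operation requires both an administrative role and an internal source address; the role name, operation name and trusted address prefix are illustrative assumptions.

```java
import java.util.Set;

// Sketch of least-privilege, context-aware authorization: the basic credential
// alone is not enough for a privileged operation; the caller's network
// location is checked as well. Names and ranges are illustrative assumptions.
public final class ContextualAuthorizer {

    private static final Set<String> ADMIN_ROLES = Set.of("ops-admin");
    private static final String TRUSTED_PREFIX = "10.0."; // assumed internal range

    private ContextualAuthorizer() {}

    public static boolean mayExecute(String role, String clientIp, String operation) {
        if (!"remote-code-deploy".equals(operation)) {
            return true; // ordinary operations need only an authenticated caller
        }
        // Privileged operations require both an admin role AND an internal address.
        return ADMIN_ROLES.contains(role) && clientIp.startsWith(TRUSTED_PREFIX);
    }
}
```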


Denial of Service

The interoperable, loosely-coupled web services architecture, while beneficial, can be resource-intensive and is thus susceptible to denial-of-service (DoS) attacks, in which an attacker uses a relatively insignificant amount of resources to exhaust the computational resources of a web service. A DoS attack aims to impact the availability of the underlying network, compute or storage resources for legitimate service consumers. Given that web services are typically used in high-availability enterprise integration scenarios, even a small DoS attack can cause severe disruption in all the connected systems, breaching SLAs with partners and at times causing financial damage as well.

The effects of these attacks can vary from causing high CPU usage to making the JVM run out of memory; clearly the latter is a critical vulnerability. Protection against DoS can be implemented at different levels, with a view to ensuring legitimate use of the underlying resources. Common techniques restrict access to critical internal components to legitimate requests and reject the rest. For instance, input message size limits can be validated in line with the specific service method, and requests carrying unusually large XML messages can be rejected at the network layer itself (a sketch follows). Tools and appliances are available to combat DoS attacks, but how they are set up and configured depends on the specific needs.
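Here is a minimal sketch of the size check just described, assuming a 64 KB limit (an illustrative figure): anything larger, or of undeclared length, is rejected before the XML parser ever sees it.

```java
import javax.servlet.http.HttpServletRequest;

// Sketch: reject oversized payloads before any XML parsing happens, so a
// deliberately huge message cannot exhaust CPU or memory. The 64 KB limit
// is an illustrative assumption; real limits would be set per operation.
public final class PayloadSizeGuard {

    private static final long LIMIT_BYTES = 64 * 1024;

    private PayloadSizeGuard() {}

    public static boolean withinLimit(HttpServletRequest request) {
        long declared = request.getContentLengthLong(); // -1 when unknown
        // Unknown lengths are rejected too, as streaming bodies would bypass the check.
        return declared >= 0 && declared <= LIMIT_BYTES;
    }
}
```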


Spoofing

Spoofing attacks succeed when one entity in a transaction trusts another. If attackers know about the trust, they can exploit it by masquerading as the trusted party. This can also mean masquerading the additional contextual information used in authentication or request processing; the most common such information is the SOAPAction header and the client IP. There are various ways to exploit credentials or spoof the source of messages, including credential forgery, session hijacking and impersonation attacks. Services should be designed to validate such information both in isolation and in combination with other related information, so as to establish that the request is legitimate (a sketch follows).
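A minimal sketch of validating such contextual information in combination, rather than in isolation, might look like this; matching the SOAPAction suffix against the operation named in the body, and checking the partner's published address range, are illustrative assumptions.

```java
// Sketch: cross-check contextual request attributes against each other. A
// forged SOAPAction that does not match the operation in the message body,
// or a source address outside the partner's published range, marks the
// request as suspect. The matching rules here are illustrative assumptions.
public final class SpoofingChecks {

    private SpoofingChecks() {}

    public static boolean consistent(String soapAction, String bodyOperation,
                                     String clientIp, String partnerIpPrefix) {
        boolean actionMatchesBody =
                soapAction != null && soapAction.endsWith("/" + bodyOperation);
        boolean addressPlausible = clientIp.startsWith(partnerIpPrefix);
        return actionMatchesBody && addressPlausible;
    }
}
```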

Repudiation

An important web services security requirement is non-repudiation, which prevents a party from denying that it sent or received a message. The way to implement this is XML digital signatures: for example, if I send a message signed with my private key, I cannot later deny that I sent it. This concern arises when the web service does not bind clients to their actions using appropriate techniques, or due to flawed implementation of auditing and logging requirements. Data inconsistency is a common outcome of this threat and can lead to severe damage to the enterprise.

A combination of the protection measures against various threat categories helps combat this threat. For instance, adequate protection from spoofing and protection of messages in transit, coupled with appropriate logging and audit implementations, help minimize the risks arising from this threat. A minimal signing sketch follows.
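To illustrate the signing step, here is a minimal sketch using plain java.security.Signature rather than full XML Digital Signature, so the mechanics stay visible; the message text and the RSA/SHA-256 choice are illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.*;

// Sketch of the non-repudiation mechanism: the sender signs with a private
// key only they hold, so a valid signature binds them to the exact message.
// (A real web service would apply XML Digital Signature to the SOAP body.)
public class NonRepudiationDemo {
    public static void main(String[] args) throws GeneralSecurityException {
        KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] message = "transfer 100 to account 42".getBytes(StandardCharsets.UTF_8);

        // Sender: sign the message with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Receiver: verify with the sender's public key; success proves origin
        // and integrity, and the sender cannot later deny sending it.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(message);
        System.out.println("signature valid: " + verifier.verify(signature));
    }
}
```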


Information Privacy

As web services are typically implemented as part of a complex system and have access to a large amount of potentially sensitive information, it is important to restrict access to that information. The transfer of the data should also be secured to prevent eavesdropping and sniffing. Privacy and confidentiality concerns with web services are no different from those in any other system. As such, the information disseminated through the service has to be reviewed in line with the organization's information sensitivity policy, applying policies and rules to decide when a specific request may access that information. This involves not only appropriately defining authorization rules for clients and users, but also carefully considering the information or parameters received as part of the request message.


Message Tampering

Web service messages, both requests and responses, can be tampered with using various attack methods if not appropriately protected. Web services, as components of complex distributed enterprise systems integrated with multiple partner systems, open up the possibility of message tampering, being exposed beyond the enterprise boundary over multiple communication paths. Attacks in this threat category include man-in-the-middle attacks and the implanting of trojans and malware. As with the other threats, this can cause severe damage. A compromise in this category may also mean a compromise in one or more other threat categories; for instance, a tampered input message might lead to spoofing of identity and thus compromise information privacy.


To conclude: as protection measures evolve on one side, newer threats emerge on the other, so security professionals need continuous engagement and an appropriate security or threat management framework to combat both existing and emerging threats. Periodic security audits shall be supplemented with formal security testing using the necessary toolkits. All said and done, the extent of protection should depend on the organization's risk policies and risk appetite, the criticality of the web services and the trust boundaries.


Saturday, September 28, 2013

Strategies for Information Governance

No, we are not discussing IT governance, or data governance either; this is about Information Governance. Information is fast becoming the currency of business organizations, and it is an important asset that needs to be protected, managed and governed. Physical records are giving way to digital information, which keeps growing and moving beyond the boundaries of the enterprise. This opens up a new set of challenges in realizing business value and managing the associated risks. To add to that, a whole set of new and evolving regulatory requirements escalates the risks around privacy, security and retention. To understand what Information Governance is, let us look at how Gartner defines it:

“The specification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archival and deletion of information. It includes the processes, roles, standards and metrics that ensure the effective and efficient use of information in enabling an organization to achieve its goals.”

Looking at the above definition, we can say that Information Governance is a framework for managing the information life cycle from creation through deletion, and for defining the accountability, retention, protection and quality aspects around it. The framework should comprise processes, standards, roles and responsibilities, metrics, and tools and technology for the effective and efficient use of information. It should also be in line with the organization's strategies for Information Governance, so it is important to establish the strategies first and then build the framework around them.

The Information Governance strategies shall be formed with due consideration to the following aspects of information:

Classification: Information classification is one of the most crucial elements of an effective information governance process, and yet it is also the one that many organizations fail to implement well. In its simplest terms, information classification is the process of categorizing information based on its level of sensitivity, its perceived business value and its retention needs. Classification based on sensitivity alone is the most prevalent, since most Information Security frameworks demand it; for effective and efficient information governance, however, the classification should also represent the retention needs and the business value (a small sketch after this list illustrates such a tag). When done properly, classification helps an organization determine the most appropriate level of safeguards, controls and usage guidelines. Organizations should also be aware that classification may change throughout the life cycle: data stewards should re-evaluate the classification of information on a regular basis, based on changes to regulations and contractual obligations, as well as changes in the use of the data or its value to the company.

Protection: Protection of data is an important element of the information governance framework. Data security breaches now make headline news almost on a weekly basis, and the consequences can be disastrous as organisations' bottom lines and reputations are impacted. Information management and protection must keep pace with organisational change; a planned governance structure for information allows organisations to support business expansion while meeting regulatory and personal data protection laws.

Retention: Organizations are obligated to respond to various information requests, be it litigation, audit or investigation. Numerous legislations in various countries require retention of information for a certain period, to be used as evidence. Certain countries, conversely, have legislation requiring non-persistence, i.e. certain classes of information must not be persisted, for privacy reasons. Effectively balancing such complex retention requirements depends on proper identification and classification of the information and on the use of appropriate tools and technology.

Roles & Accountability: Historically, establishing robust information management was considered an IT challenge. CIOs were expected to deliver the technology to support critical information reporting and management, and CISOs, who are mostly aligned with IT functions, were expected to protect the information assets. This does not absolve the business functionaries from accountability for the information they create and manage: IT is just a facilitator, and it is the business that owns and is responsible for the information throughout its life cycle. The overall requirements of any information asset must ultimately be specified by the business people who define, understand and own the process that handles its usage.

Collaboration: The people who staff the functions that produce and use the information are the ones who know its value, can point out the current versions of documents, and know how long a given document or data set will be useful from a business continuity perspective. It is therefore very important that their knowledge is considered while formulating the information governance strategies. Committed involvement from every employee, and effective communication among them, is key to building a successful information governance framework. Continued collaboration across business and IT functions is also essential to sustaining the information governance program, so that the attributes that determine information classification, usage and business value stay aligned with the changing landscape of regulatory and business needs.

Quality & Integrity: As information becomes a key organizational asset and many decisions are based on the information at hand, its quality and integrity must be of the highest level, so that such decisions do not go against the organization. Appropriate processes and techniques to validate the quality and integrity of information shall be put in place, and those involved in the creation or discovery of information shall perform the appropriate checks to ensure that the information so created is reliable.
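As promised above under Classification, here is a minimal Java (16+) sketch of a classification tag that carries all three dimensions discussed there. The type names and values are illustrative assumptions, not a standard taxonomy:

    public class InformationClassification {
        enum Sensitivity { PUBLIC, INTERNAL, CONFIDENTIAL, RESTRICTED }
        enum BusinessValue { LOW, MEDIUM, HIGH }

        // One tag per information asset: sensitivity, business value and retention.
        record ClassificationTag(Sensitivity sensitivity, BusinessValue value, int retentionYears) { }

        public static void main(String[] args) {
            // Example: a customer contract is confidential, high value, kept 7 years.
            ClassificationTag contract = new ClassificationTag(
                    Sensitivity.CONFIDENTIAL, BusinessValue.HIGH, 7);
            System.out.println(contract);
        }
    }

Recording all three dimensions together lets protection and retention decisions be driven from a single source of truth rather than from sensitivity labels alone.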


Information Governance is a combination of business practices, technology and human capital for meeting the compliance, legal, regulatory and security requirements, and the organizational goals, of an entity. It provides a means to protect, access and otherwise manage data and transform it into useful information. While applying best practices, such as physical and electronic security measures, and creating policies for the disposition of data is critical to implementing an information governance strategy, available technology solutions and services can play a key role in several areas.

Thursday, August 29, 2013

Common & Practical Problems of Requirements Elicitation

Requirements elicitation is an important and challenging phase of any software project. This holds good for both product and project development activities, though the approach and techniques may vary. A well-specified requirement has been found to considerably improve project success rates. Yet, though various methods and techniques have evolved over the last couple of decades to produce good requirements specifications, many still struggle to get it done well.

This is mainly because requirements elicitation is not just a science; it is an art too. It is more an art because it is highly human-intensive, and much depends on the skills of the people involved: which method or technique to use, and the way the document is structured and written, depend on the abilities of the person driving the activity. Based on my experience in bespoke project development and product development activities, I have listed some of the most common and practical problems with this activity below:

1. Preconceived Notions

The requirements of every customer, even in the same business domain, will be different. For example, the requirements of bank X will not be the same as those of bank Y; each enterprise has different business processes that differentiate its capabilities or value delivery from its competitors. The teams involved in requirements elicitation shall start with a clean slate for every project, and should not bias the elicitation work with their previous project experience. Ignoring this principle results in a misaligned requirements specification, and ultimately a deficient product. As this is a human-intensive process, it is quite common for the customer representatives, too, to miss out on such things.

This is quite a common problem with product companies. Irrespective of whether the client contracts for a product with customization or for a project, vendors prefer to reuse their existing code assets. As such, the business analysts engaged in the requirements elicitation tend to scope the customer requirements so that they fit within the existing product architecture and its constraints. Even in the case of a product-based contract, the requirements elicitation or gap study shall remain unbiased; it is then the Solution Architects who come in to devise solutions that bridge the gaps. In the process, the customer retains the option to dilute a requirement in favor of an existing workaround.

The business analysts shall master the art of unlearning and relearning to handle this area well.

2. The Design Mix-up

The next common problem is mixing up requirements elicitation with solution design. This happens on both sides, i.e. the vendor's and the customer's. The business analysts on the vendor side often start visualizing the solution design for a specific use case and suggest deviations or workarounds to it. Similarly, on the customer front, users may start talking from the system perspective; for example, customers narrating requirements might talk about drop-down lists, check boxes, etc. Ideally such details should be left to the design teams; where appropriate, the customer might review those designs, specify design guidelines to be followed, or state usability requirements for the vendor to conform to.

There is another school of thought that visualizing the solution early on eliminates feasibility issues down the line. While this is partly true, the problem arises when such design constraints hide the underlying business requirement, which can lead to misinterpretations later on.

3. Poor Planning

Requirements elicitation has to be a planned process, with proper entry and exit criteria for each of its sub-processes. There are many frameworks and techniques for performing this activity. Irrespective of the method or technique, the elicitation process shall comprise the following activities: identifying the stakeholders; defining use case specifications; generating scenarios; organizing walkthroughs and interviews; documenting requirements; and validating requirements. It is quite possible that each of these activities will have to be performed in multiple iterations. Poor planning of these activities results in ambiguous or deficient requirements.

A related key issue is exit planning, i.e. deciding when to consider the requirements elicitation complete. Depending on the other project constraints, the exit criteria have to be carefully identified, and further planning should be built around them. For instance, if time is a key constraint, the elicitation activities should not be hurried just to meet the timeline, ending up with an imperfect specification. Instead, the scope can be divided into broader subcomponents, agreeing with the customer to defer some of them to a later phase based on priorities. An Agile approach could also be considered: start eliciting the requirements as specific user stories are taken up in their respective sprints. A careful consideration of all the project constraints and priorities is a must in choosing among these options and thereby arriving at the best course of action.

4. Volatility

In one of our projects we were handed a four-hundred-page requirements specification document that represented a year's work by the customer's internal business analysts. It was no surprise that the actual business requirements differed considerably from what was documented, as the business practices and processes had changed a great deal during that very period. This is a common problem the industry has been battling, and the Agile approach is emerging as a solution to it. The volatile nature of business requirements demands that solutions be delivered quicker, to reap the time-to-market advantage.

Another aspect of volatility is that the requirements elicited from different users or departments can differ, and at times conflict. In some cases such differences are misstatements or misunderstandings; in other cases they are genuine, in which case the differing requirements shall be specified appropriately, leaving the design teams to come up with solutions that meet all of them.

5. Undiscovered Ruins

It is human nature to answer only the questions that are asked. The business analysts shall therefore master the art of asking appropriate follow-up questions based on the responses from the customer representatives; that is where the elicitation matters, i.e. the business analysts shall provoke the customer into fully revealing what is required of the system. Even so, it is quite common for certain needs to go undiscovered, only to show up later as deficiencies. This problem can be partly addressed by identifying the right stakeholders for the purpose, and then having the elicited requirements validated by different stakeholders, who will look at them from a different perspective and may bring out gaps, if any.