Saturday, August 24, 2013

State of Open Source in The Enterprise

We all know and keep saying that Open Source is here to stay, and there is ample proof in the form of companies like Facebook, Google, LinkedIn and many others whose businesses are enabled by Open Source software. I get what you are thinking: the short list above consists of companies in the social networking kind of business, and we need to look at large businesses, like banks. While we can say that the adoption of Open Source software is increasing, its adoption in the mission critical enterprise software space is still not very visible. For example, big data and cloud computing have triggered the increased use of open source software in the form of Hadoop, MapR, OpenStack and so on. However, with enterprise software vendors also pitching in with their own ways of addressing similar problems, big enterprises tend to drift away from open source.

What is holding CIOs back from adopting Open Source in their enterprise software suites? Probably, they see a risk around continuity of support. In most cases, CIOs are willing and ready to adopt or try Open Source software for the enterprise's secondary uses and leave the mission critical software to be proprietary and fully supported. Java as an open source programming language is well embraced, but in the middleware space, though Apache Tomcat and JBoss have considerable usage, big enterprises still look at WebLogic or WebSphere.

On the website front, Apache and NGINX lead the open source market share and have much wider adoption across enterprises of all sizes. However, with many proprietary content management frameworks emerging, large enterprises are drifting towards them, since website content maintenance and collaboration can be fairly simple with these frameworks. In the mobile world, Android is here to stay and has wider adoption on the consumer side, while Microsoft is getting increased attention from enterprises for their enterprise mobility needs.

Similarly, with big data, we see many open source NoSQL and SQL databases emerging and even getting much needed visibility. However, bigger enterprises tend to try these databases for secondary uses like data warehousing and leave their primary mission critical applications to proprietary database solutions. When it comes to critical enterprise software, the decision makers go by real world case studies and market presence and don't want to take the risk of going with Open Source.

Maybe, if we take a closer look at the decision making processes, we can understand on what grounds Open Source is left out. Typically, the following are the key criteria used for evaluating enterprise software that come in the way of adopting Open Source Software:


  • Support: Looking at it positively, Open Source software gets support from developers across the globe, mostly with a well governed release process. On the other hand, such software is not built with the needs of a specific enterprise in mind, but for a specific purpose. Thus, considerable effort has to go into making an Open Source software work for an enterprise, and the skills are scarce. Enterprises whose business is built around IT have plenty of developers on board and have the ability to customize, adopt and even contribute back. For non-IT organizations, this means a dependency on a vendor who can offer the support at a cost. When the software is expected to be critical for the business, this concern naturally gains importance, leading decisions to drift in favor of proprietary solutions. Another line of thinking is that, though the software comes free, the effort involved in customizing, enhancing and maintaining it could result in a far higher total cost of ownership than that of proprietary software.
  • Usability: Open Source developers focus on the technology and often ignore usability aspects, resulting in increased costs around user training and maintenance. Along the same lines, end users are expected to use the software at their own risk, as no one guarantees or warrants its performance levels.
  • Concerns on IP: Being open source, consumers may be expected to contribute any changes they make to the software back for common use, and thus the Intellectual Property of the enterprise might have to go back into the shared source code. With proprietary software, however, there could be an option to continue to own specific IP, though at a higher cost.
  • Reliability: Contrary to reality, there is a perception that with Open Source software, reported issues might take too long to get fixed, or some might not be fixed at all. With a community of developers all over the world contributing, Open Source software evolves pretty fast; however, no one can be held responsible. Other related concerns are a perceived lack of governance, absence of adoption by competitors and lack of backing from big names.
  • Security: As the source code is accessible to anyone, there is a tendency to think that hackers can also get to know the software better and design attacks around its vulnerabilities. However, as with fixing issues, many open source communities fix reported vulnerabilities on priority, making the product secure. The security concern would remain even with proprietary software.


Of all these, support and its related cost seem to be the primary concerns that hold CIOs back from adopting Open Source software. The traditional methods of evaluating software may have to be revisited to overcome this problem; the decision makers want to play it safe.

Of late we see pressure on CIOs to cut costs where possible, and it is a good sign that Open Source Software is getting a fresh look, more so in the Government sector. Those doing business in and around IT are building the needed skills and talent in house and embracing open source solutions. We see this more in the areas of configuration management, build automation, automated testing and other tools which aid in building and maintaining systems and infrastructure. Open Source software is still not making inroads into mission critical enterprise business applications.

Wednesday, August 14, 2013

Agile in Fixed Price Fixed Scope projects - Hybrid Contracts

It is well known that the traditional methods are not yielding better project success rates, and thus there is a tendency to lean on Agile methodologies. At the same time, clients feel secure with Fixed Price, Fixed Scope projects as their financial outlay is limited and there is no ambiguity. What they miss in this process, however, is the value delivery. The traditional project management methodologies focus on Scope, Time and Resources, where all three are constraints. Ideally the focus should be on the Value and Quality delivered given the constraints, thereby guaranteeing a better success rate for the project.

Software vendors are in business to earn profits. As such, with Fixed Price projects, vendors tend to limit their efforts to delivering the agreed scope. With the pace at which changes happen around any business, freezing scope early on is nearly impossible, and software delivered to a scope frozen early is often less usable. With change being the key driver in optimizing value delivery, clients and vendors have conflicting views on change.

Agile methodology has evolved over the years and offers a solution to the problem of optimized value delivery. However, clients still feel that the Agile approach does not secure their interests in terms of a definite price and time. Of course, their concern is genuine, as they cannot afford to sign a project contract where the cost and time are elastic. While the basic premise of Agile is to embrace change, its success depends on a very high level of trust between vendors and clients, where both work towards a common goal and the contract is profitable to both.

Having said that, Fixed Price (FP) Fixed Scope (FS) contracts offer very limited opportunity for vendors to practice Agile methodology. Making either the price or the scope elastic gives some room for practicing it. Let us explore how this can be accomplished in contracts. Both of the contracting models described below require a high level of trust between the vendors and the clients.

Fixed Price Elastic Scope (FPES) contract: In this model, while the price is fixed, the scope can vary. This model supports a hybrid Agile approach: the scope is broken down into features and development happens feature by feature. Depending on the time taken to implement a feature, more features are added or removed. For instance, if a feature estimated to take 30 days is implemented in 20 days, one or more new features can be added to fill the time saved. Similarly, if the implementation takes 45 days, then one or more features will be removed.

To bring in an incentive for both vendors and clients, a discount factor can be agreed upon, which is applied while adding or removing features. For instance, where the vendor has saved 10 days on a feature and the discount factor is 50%, the client, instead of adding a feature that needs 10 days to fill the gap, will only add a feature with 5 days of effort. The same discount factor is applied in the converse case (where implementation exceeds the planned effort).
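The discount-factor arithmetic above can be sketched in a few lines. This is only an illustration of the mechanism described in the post; the function name and the symmetric treatment of overruns are assumptions, not part of any standard contract template.

```python
def scope_adjustment(planned_days: int, actual_days: int, discount: float = 0.5) -> float:
    """Days of features to add (positive) or remove (negative) from the scope,
    after applying the agreed discount factor to the effort variance."""
    variance = planned_days - actual_days  # saved days if positive, overrun if negative
    return variance * discount

# Vendor saves 10 days on a 30-day feature: with a 50% discount factor,
# the client adds only a 5-day feature to fill the gap.
print(scope_adjustment(30, 20))   # 5.0

# Implementation overruns to 45 days: 7.5 days' worth of features are removed.
print(scope_adjustment(30, 45))   # -7.5
```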

Elastic Price Fixed Scope (EPFS) contract: In this model, the scope is fixed, but the pricing is variable. The idea behind this approach is to arrive at a base rate and a profit factor. While the base rate and the profit factor, along with the generic terms and conditions, are covered in the Master Services Agreement, the actual project scope can be covered in multiple Statements of Work (SoW). Requirement elicitation and scoping can be the first SoW. This way, the project can be split into smaller working software modules and the work items can be scoped in stages / phases. This approach helps clients handle changes with ease.

Here again, an approach like 60:40:20 can be adopted to prioritize the work items. This approach requires the work items to be grouped into Must-have features, Good-to-have features and Fixes. Every SoW can comprise 60% Must-haves, 40% Good-to-haves and 20% fixes emerging out of previous deliveries.
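The 60:40:20 split can be sketched as follows. Note that this reads the figures as a ratio (normalized so the three parts sum to the SoW capacity), which is one possible interpretation; the function name and units are illustrative only.

```python
def sow_allocation(capacity_days: float, ratio=(60, 40, 20)) -> dict:
    """Split an SoW's capacity across Must-haves, Good-to-haves and Fixes
    in the agreed 60:40:20 ratio, normalized to the available capacity."""
    total = sum(ratio)
    must, good, fixes = (capacity_days * r / total for r in ratio)
    return {"must_have": must, "good_to_have": good, "fixes": fixes}

# A 120-day SoW: 60 days of Must-haves, 40 of Good-to-haves, 20 of fixes.
print(sow_allocation(120))  # {'must_have': 60.0, 'good_to_have': 40.0, 'fixes': 20.0}
```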

The incentives for both vendors and clients can be based on categorization of the work items as New features, Clarifications and Fixes. New features are the scope items as elaborated during elicitation. Clarifications are items that emerge out of elicited requirements during the design or build phase. Fixes are incorrect implementations by the vendor, basically design and build defects. The cost of each SoW can be computed by applying the profit factor to the base rate: New features are charged at base rate + profit, clarifications at base rate, and fixes at base rate - profit.
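A minimal sketch of this pricing scheme, with illustrative numbers for the base rate and profit factor (both would really come from the Master Services Agreement):

```python
BASE_RATE = 100.0   # illustrative base rate per day
PROFIT = 20.0       # illustrative profit factor per day

RATES = {
    "new_feature": BASE_RATE + PROFIT,   # charged at base rate + profit
    "clarification": BASE_RATE,          # charged at base rate
    "fix": BASE_RATE - PROFIT,           # charged at base rate - profit
}

def sow_cost(work_items) -> float:
    """work_items: iterable of (category, effort_days) tuples."""
    return sum(RATES[category] * days for category, days in work_items)

# 10 days of new features, 3 days of clarifications, 2 days of fixes:
items = [("new_feature", 10), ("clarification", 3), ("fix", 2)]
print(sow_cost(items))  # 10*120 + 3*100 + 2*80 = 1660.0
```

Because fixes are billed below the base rate, the vendor carries part of the cost of its own defects, which is the incentive alignment the model aims for.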

With the above, we are not concluding that Agile cannot be practiced in an FPFS project. There are still ways and means by which a hybrid agile approach can be devised and practiced so that value delivery is the primary focus for all parties. Do share your thoughts in the form of comments, and I will cover them in my next blog.

Sunday, July 28, 2013

Colocation - Key Considerations for Selection of Service Provider

An increasing number of organizations are shifting towards colocating their data centers as they see inherent benefits like efficiency, security and other shared managed services. Colocation is offered as a service based on the standards, policies, procedures, people and the data center infrastructure. The user experience depends on the quality of each of these components that form part of the service. The type of provider also drives the end-user experience, because you get what you pay for. Given the significant benefits of colocation, CIOs cannot just ignore it as an option, but need to look for reliable facilities that uphold the needs of the organization.

The benefits of going for a colocation service include effective use of capital and access to higher quality facilities through power redundancy, cooling, and scalability for growth. Choosing a colocation provider is a strategic business decision that evolves from thoughtful consideration of certain key parameters. Moreover, the partnership with the colocation service provider needs a multi-year commitment from both sides. Listed below are some of the key considerations that help CIOs make a well thought out decision on choosing the right provider. The list is in no order of priority.

Customer & Partner Considerations
Many tend to focus on the organization's IT needs and base the decision to colocate, and the choice of the service provider, on such internal design parameters. However, CIOs also have to consider the strategies of their customers and partners. For instance, in the case of highly interconnected and collaborative partner systems, the DR sites need to have connectivity with the DR sites of the partner systems as well. If the service goes down, customers will easily become frustrated with the company, especially if they are in a new market and are not long-time clients.

Power and capacity planning
The power supply is a key component of the shared services that the colocation provider offers. Needless to say, it is vital that the data center is supported with a scalable and reliable power supply in order to keep up the infrastructure components. The provider should focus on the future needs of its customers and perform proactive capacity planning so that it is prepared to support the scalability needs of the organization.

Resiliency
Designing and building a reliable and resilient data center calls for heavy investment, and smaller organizations would find it difficult to justify the RoI for such investments. Colocation offers this benefit because the provider shares the cost across multiple service consumers. However, a careful evaluation of the design and architecture elements that contribute to the reliability and resiliency of the service is essential in choosing a provider. A careful review of the SLA of any colocation provider is also very important, to ensure that the provider commits to the current and future investments needed to support this need.

Security and technical advancement
With colocation, the organization is leaving its servers, equipment and, most precious, its data assets in the custody of the service provider, usually outside the physical security boundary of the organization. Thus, physical security of the premises and the assets hosted therein is an important component of the colocation service and needs careful consideration and evaluation. What is required is comprehensive security, beyond the guard, ensuring data protection at all times. Security measures involving technologies like biometrics, card readers, keypad or electronic locks, as well as surveillance cameras, should form part of the security investments.

Network connectivity and proximity
Like reliability and resiliency, network performance is a key customer service issue, not just a technology issue. An ineffective interconnect setup or a limited network can disrupt business operations. It also matters to ascertain the network latency associated with connectivity to partner systems. In most cases, if large volumes of data need to be exchanged with a particular partner system, it is beneficial to have the data center closer to the partner's, so that the latency between the centers is as low as possible, leading to faster and more efficient communication. Network performance issues can have a tangible financial impact on the bottom line. Colocation providers usually offer diverse options for internet access, and a careful selection of primary and redundant internet service providers is essential.

Technology & support
The colocation provider should implement standard facility design elements and integrate proven technologies so that better power and cooling capacities are achieved. The provider should also have demonstrated expertise in facilitating flexible deployment of higher density solutions in an on-demand model. Despite the colocation arrangement, the IT heads of the organization are still responsible for addressing the business needs, and as such a careful evaluation of the standards and practices the provider follows in keeping up with technology trends is very much essential.

While most providers offer on-site 24/7 technical support, the skill sets on offer can vary. The organization may demand a specialized skill which the provider is not able to support. As such, it is important to compare the skills needed with the skills on offer and make appropriate decisions. Similarly, the provider's maintenance windows for certain support services could differ from what the organization wants. This too needs careful consideration.

Flexible commercial terms
While the key benefit of outsourcing is the shift from capital investment to operational expenditure, CIOs might end up spending far more without a careful evaluation of the commercials attached to each specific service or component that forms part of the colocation service. An ideal provider lets the customer pick and choose the components and also offers multiple, flexible billing options. A careful choice is required to optimize the value realized out of the colocation engagement.

Locational Constraints
Despite having on-site technical support, there will be emergencies requiring experts to physically visit the data center and work on solutions. As such, the geographical location of the data center(s) matters to the organization and needs careful consideration. Similarly, local issues like frequent political unrest, the frequency of natural disasters in the region, proximity to the airport, etc. need to be considered. For instance, if the data center is located in a flood prone area, there is a high risk of the data center services getting affected.

Regulatory Compliance
Various countries and states have legislation that could have an impact on the data or other assets being housed at the colocated data center. For instance, the UK and US have privacy and security legislation that affects the way information is stored, processed or disseminated. It is also beneficial to have an insider's view and legal opinions where needed.

Auditability
Given that most businesses are technology driven, regulatory and legal needs warrant audits of systems and facilities, calling for appropriate arrangements with the provider to support this need. The provider may be requested to share a copy of a SAS 70, SOC 2 or similar audit report, which represents a third-party validation of internal processes and controls.

Costs
Cost is an equally important aspect to consider. However, it often does not make sense to consider the cost alone; instead it should be considered in line with the tangible and intangible value that could be realized out of colocation. For instance, better network performance could mean faster and increased acceptance of the systems by customers and partners, resulting in an expanding market. Similarly, better security services might protect the organization from potential liabilities arising from future security breaches. Thus the lowest cost isn't necessarily the best for the business: lack of support, insufficient product lines or unreliable data center facilities can end up being very costly later on.

Green IT
The power consumption of a data center is far higher than that of a regular office premise. Colocation by itself contributes towards optimizing energy consumption, as it is a consolidated, purpose-built facility shared amongst many. However, it is worth exploring the measures the provider has in place towards conserving energy and other such green practices.

While the above considerations are generic, certain other considerations might be vital for some organizations depending on their business nature and priorities. This list can be further updated based on the feedback and comments from the readers.

Saturday, June 1, 2013

Software Quality Attributes: Trade-off Analysis

We all know that software quality is not just about meeting the functional requirements, but also about the extent to which the software meets a combination of quality attributes. Building quality software requires much attention to be paid to identifying and prioritizing the quality attributes and designing and building the software to adhere to them. Again, going by the saying "you cannot manage what you cannot measure", it is also important to design the software with the ability to collect metrics around these quality attributes, so that the degree to which the end product satisfies each attribute can be measured and monitored.

It has always remained a challenge for software architects and designers to come up with the right mix of quality attributes with appropriate priorities. This is further complicated because these attributes are highly interlinked: a higher priority on one can have an adverse impact on another. Here is a sample matrix showing the inter-dependencies of some of the software quality attributes.




|                  | Avail | Effic | Flex | Integ | Interop | Maint | Port | Reli | Reuse | Robust | Test |
| Availability     |       |       |      |       |         |       |      |  +   |       |   +    |      |
| Efficiency       |       |       |  -   |       |    -    |   -   |  -   |  -   |       |   -    |  -   |
| Flexibility      |       |   -   |      |   -   |         |   +   |  +   |  +   |       |   +    |      |
| Integrity        |       |   -   |      |       |    -    |       |      |      |   -   |        |  -   |
| Interoperability |       |   -   |  +   |   -   |         |       |  +   |      |       |        |      |
| Maintainability  |   +   |   -   |  +   |       |         |       |      |  +   |       |        |  +   |
| Portability      |       |   -   |  +   |       |    +    |   -   |      |      |   +   |        |  +   |
| Reliability      |   +   |   -   |  +   |       |         |   +   |      |      |       |   +    |  +   |
| Reusability      |       |   -   |  +   |   -   |         |       |      |  -   |       |        |  +   |
| Robustness       |   +   |   -   |      |       |         |       |      |  +   |       |        |      |
| Testability      |   +   |   -   |  +   |       |         |   +   |      |  +   |       |        |      |

While the '+' sign indicates a positive impact, the '-' sign indicates a negative impact. This is only a likely indication of the dependencies; in reality, they could be different. The important takeaway, however, is that the quality attributes need to be planned and prioritized for every piece of software being designed or built, and the prioritization has to be accomplished keeping in mind the inter-dependencies amongst the attributes. This means that some trade-offs will have to be made, and the business and IT should be in agreement on these trade-off decisions.
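One practical use of such a matrix is a quick conflict check over the attributes a project wants to prioritize. The sketch below encodes only a small excerpt of the sample matrix (the values are illustrative, as is the function itself):

```python
# Excerpt of the inter-dependency matrix: IMPACT[a][b] is the effect that
# prioritizing attribute a tends to have on attribute b ('+' or '-').
IMPACT = {
    "efficiency":  {"flexibility": "-", "maintainability": "-", "portability": "-", "testability": "-"},
    "flexibility": {"efficiency": "-", "maintainability": "+", "portability": "+"},
    "portability": {"efficiency": "-", "flexibility": "+"},
    "testability": {"efficiency": "-", "maintainability": "+", "reliability": "+"},
}

def conflicts(prioritized):
    """Pairs of prioritized attributes where boosting one degrades the other."""
    found = []
    for a in prioritized:
        for b in prioritized:
            if IMPACT.get(a, {}).get(b) == "-":
                found.append((a, b))
    return found

# Efficiency and flexibility pull against each other, so both appear:
print(conflicts(["efficiency", "flexibility"]))
```

Each reported pair is a candidate for an explicit trade-off decision that business and IT should agree on.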

SEI's Architecture Trade-off Analysis Method (ATAM) provides a structured method to evaluate the trade-off points. The ATAM not only reveals how well an architecture satisfies particular quality goals (such as performance or modifiability), but also provides insight into how those quality attributes interact with each other, i.e. how they trade off against each other. Such design decisions are critical; they have the most far-reaching consequences and are the most difficult to change after a system has been implemented.

A prerequisite of an evaluation is to have a statement of quality attribute requirements and a specification of the architecture with a clear articulation of the architectural design decisions. However, it is not uncommon for quality attribute requirement specifications and architecture renderings to be vague and ambiguous. Therefore, two of the major goals of ATAM are to

  • elicit and refine a precise statement of the architecture’s driving quality attribute requirements 
  • elicit and refine a precise statement of the architectural design decisions

Sensitivity points use the language of the attribute characterizations. So, when performing an ATAM, the attribute characterizations are used as a vehicle for suggesting questions and analyses that guide the evaluators towards potential sensitivity points. For example, the priority of a specific quality attribute might be a sensitivity point if it is a key property for achieving an important latency goal (a response) of the system. It is not uncommon for an architect to answer an elicitation question by saying "we haven't made that decision yet". However, it is important to flag key decisions that have been made as well as key decisions that have not yet been made.

All sensitivity points and tradeoff points are candidate risks. By the end of the ATAM, all sensitivity points and tradeoff points should be categorized as either a risk or a non-risk. The risks/non-risks, sensitivity points, and tradeoffs are gathered together in three separate lists. 

Saturday, May 18, 2013

Why Software Product Delivery is not Identical to a Car Delivery

I happened to sit in one of the project review meetings where a client raised a question on software delivery, expecting it to be used in production on the day of delivery by the vendor. He went ahead and compared it with the use case of a customer driving off in a car upon taking delivery. I know all the project managers out there will jump in to say: don't compare apples with oranges. On the one hand, yes, software development is unique and cannot be compared with the production of a tangible product. On the other hand, attempts are being made by various standards organizations to help the industry achieve high maturity process capability and thus deliver production quality software consistently.

Software product vendors selling or licensing software products can, however, be compared to car manufacturers. Take, for example, the Microsoft Office product suite: one can just go and buy it off the shelf and use it, just like driving off in a car. This is possible because product vendors acquire, over a period, an enormous amount of knowledge of the targeted product domain and subject the product to many cycles of testing before announcing its readiness.

A typical car takes at least a few years from concept to production, somewhere in the order of 3 to 6 years. During this period, the product undergoes various cycles of tests, including crash tests, drivability on the road conditions of the target market, etc. Similarly, software product vendors conceptualize the product idea and then work on it over a period. Vendors attempt to achieve a faster time to market by adopting newer product development methodologies. The one that works best is to identify the smallest piece of the software that can meet certain specific use cases and then build on top of it over time.

In the case of a software product, it is the vendor's responsibility to subject the product to various tests in production-like environments before getting it out to customers. These include alpha and beta tests, wherein interested end users are engaged to use it in production environments and give feedback on various aspects, with the developers addressing all the critical issues raised. Achieving a zero defect state may not be possible, and vendors take a balanced decision on whether to wait until all identified issues are addressed or to reach the market with certain known issues, which can be addressed in subsequent product releases.

But it is a different story when it comes to bespoke software projects. The major challenges with bespoke software projects that differentiate them from software products include:


  1. It is the client who specifies the requirements. Though the vendor might facilitate documenting the requirements, it is the customer who formally agrees as to what needs to be produced.
  2. Lack of understanding of and agreement on the non-functional requirements, which are not documented in most cases.
  3. The vendor might not be an expert in the target domain area. Even where the vendor has domain expertise, it is the customer's need that is final, not the vendor's understanding of the requirement.
  4. The ever changing requirements. By the time the vendor delivers the software, the requirements as agreed by the client would have undergone change due to various reasons.
  5. It is difficult to carry the requirements unambiguously through the various phases of the project, which involve humans with varying abilities.
  6. There are dependencies on various pre-existing or emerging software and hardware environments in the production environments of the client.
  7. The client has roles and responsibilities to assume to make the project successful. However, it is for the vendor to ensure that the client understands these roles and responsibilities throughout the project life cycle.


Another point to consider in this discussion is what constitutes delivery. A well written SoW (Statement of Work) clearly lists the acceptance criteria, which, when met, constitute acceptance of the delivery by the client. For a given set of requirements, no two vendors would build an identical solution, due to the tools and technologies available and the varying intellectual abilities of those involved in building the software. What the client wants matters more than how the vendor delivers the solution. For this reason, the client should assume the responsibility of performing user acceptance tests. Sometimes, the delivery of software might mean implementation in the production environment, which might involve data migration and appropriately configuring various pre-existing software and / or hardware.

In the end, it is all about managing the expectations of the client. Expectations are not just set at the start; they need to be appropriately managed throughout the project life cycle.

Sunday, May 12, 2013

Application Integration - Business & IT Drivers

Design Patterns are finding increased use for their apparent benefits: when there is a solution for a commonly recognized problem, why reinvent it; instead, use the solution design that is expected to address the problem. However, it is not one size fits all, as a combination of factors may influence the choice of design and architecture. Enterprise Application Integration is a complex undertaking, and it requires a thorough understanding of the applications being integrated and the various business and IT drivers that necessitate the integration.

There are numerous approaches, methods and tools to design and implement application integration, and choosing the right combination of design, tools and methods to accomplish the business need is always a challenge. Selecting the right design requires good knowledge of the problem at hand and the other related attributes that drive the solution. In this blog, let us try to identify the various business and IT drivers that architects need to understand well before making a choice of design, method, tool or approach.

It is also worth understanding that these drivers should be considered collectively. While a particular tool or design might seem to solve the specific need, it might require certain other needs to be compromised. Thus it is a mix of art and science that the Architects need to apply.

Business Drivers:


  • Business Process Efficiency - The level of efficiency that the business wants achieve through the application integration is a basic need that should be considered. Being a basic need there is a tendency amongst the Architects to neglect to have this documented and thus leaving a chance failure in this area. Knowing this will also be useful in validating the design through the implementation phases.
  • Latency of Business Events - Certain business events are time sensitive and need to be propagated to the target systems within a set interval. Achieving low-latency integration may require tooling the components at a much lower layer of the computing stack.
  • Information / Data Flow Direction - Whether the data or information flows one way or as a request / response exchange has an influence on the choice of design.
  • Routing / Distribution - The information or data flow might be a simple broadcast, or in some cases there has to be a response for each request. Similarly, the target applications may be one or many, and may even be determined dynamically by certain data combinations. These routing and brokering rules have to be identified to orchestrate the data flow appropriately.
  • Level of Automation - End-to-end automation might require changes to the source and / or target applications. Alternatively, it might be the best choice to leave the integration semi-automatic, so that the existing systems and related processes remain unchanged. An example is the participation of a legacy system that is likely to be sunset in the near term, in which case changes to the legacy system are not preferred.
  • Inter-Process Dependencies - Dependencies between processes determine whether to use serial or parallel processing. It is important to identify and understand these dependencies, so that processes which can run in parallel are identified and designed accordingly to achieve efficiency.
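
The routing and brokering rules mentioned above can be sketched as a simple content-based router. The rule predicates and target names below ("orders", "billing", "audit") are hypothetical, purely to illustrate the idea:

```python
# A minimal sketch of content-based routing: each rule inspects the
# message and decides which target application(s) should receive it.

def route(message, rules):
    """Return the list of targets whose predicate matches the message."""
    targets = []
    for predicate, target in rules:
        if predicate(message):
            targets.append(target)
    return targets

rules = [
    (lambda m: m.get("type") == "order", "orders"),      # request/response style target
    (lambda m: m.get("amount", 0) > 1000, "billing"),    # data-driven routing
    (lambda m: True, "audit"),                           # broadcast: every message is audited
]

print(route({"type": "order", "amount": 1500}, rules))
# → ['orders', 'billing', 'audit']
```

A real broker (an ESB or a message queue) applies the same idea at scale: evaluate rules against message content and fan the message out to one or many targets, which may be determined dynamically.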


IT Drivers:


  • Budget / TCO - Any enterprise initiative would be budget driven, and the Return on Investment must be established to get the consent of the project governance body (or Steering Committee). The choice of tools, design and approach should be made considering the allocated budget and aiming to achieve a lower Total Cost of Ownership (TCO).
  • Technology and Skills - It is also important that the design and architecture consider the technology in use in the organization and the availability of skilled resources to build as well as maintain the integration. Application integration solutions require continuous maintenance, as the source or target systems, or even the business processes, are likely to undergo changes.
  • Legacy Investment - All enterprises have investments in legacy systems, as technology obsolescence happens faster than planned. It would be prudent for enterprises to explore opportunities to get returns out of such legacy investments where possible. Architects should consider this aspect while designing integration solutions and thus facilitate a planned sunset of legacy systems over a period of time.
  • Application Layers - Integration can be achieved at various layers, viz. the data layer, application layer, UI layer, etc. The choice of integration layer depends on the business drivers, and an appropriate choice needs to be made to achieve the desired efficiency, latency and other needs.
  • Application Complexity - An enterprise is likely to have a portfolio of applications, within and outside, to integrate with. The complexity of these applications has a direct influence on the design and architecture of the integration solution. A thorough study and evaluation of the applications is a must to come up with a good integration solution.

The above is not an exhaustive list, and there are various other business and IT drivers that need consideration. In addition, quality attributes like security, availability, performance, maintainability and extensibility need consideration in choosing the right design and architecture for the application integration solution. I will be writing another blog on Quality of Service (QoS) considerations for Application Integration solutions.

Saturday, April 13, 2013

A Framework for Cloud Strategy and Planning

Cloud technologies are here to stay, but not without concerns. The concerns differ from organization to organization, and there is no ‘one size fits all’ solution to address them. It is for the enterprise to come up with a cloud strategy and carefully weigh the pros and cons of cloud adoption with due consideration to the competing constraints. When not done well, this could have a telling impact on the business.

A model-based or framework-based approach to formulating the cloud strategy could help maximize the value created by cloud adoption. With their expertise in the standards and frameworks and in the internal business capabilities, Enterprise Architects are in the best position to take up this mantle. Though there are various standards and frameworks that can be used to strategize and plan cloud adoption, Mike Walker in his article suggests a simple three-step framework for Cloud Strategy and Planning, which he adapts from TOGAF and other frameworks. The following diagram visualizes the mapping of the TOGAF methods to the Cloud Strategy Planning Framework.

[Diagram: TOGAF methods mapped to the three steps of the Cloud Strategy Planning Framework]
The following are the seven activities that form part of the three-step Cloud Strategy Planning Framework:

Strategy Rationalization:

1. Establish Scope and Approach - This activity is all about educating the stakeholders about cloud computing and then coming up with the business model for it.

2. Set Strategic Vision - This is about coming up with a strategic vision for cloud computing in the enterprise. For this purpose, due consideration should be given to the IT and business strategic objectives of the organization, the cloud computing patterns and technologies, and feasibility and readiness.

3. Identify & Prioritize Capabilities - This is an important activity as it lays down the criteria for evaluating the capabilities in the form of IT and business value drivers. The outcome of this activity would be the key capabilities that are subject to further analysis.


Cloud Valuation:

4. Profile Capabilities - This again is a critical activity, in which the current maturity levels of the capabilities are determined, and then risks and other constraints like architectural fit, readiness and feasibility are assessed.

5. Recommend Deployment Patterns - Based on the capability profiling, recommend cloud services and deployment patterns according to architectural fit, value and risk. Of course, this calls for due consideration of proven practices and industry trends.


Business Transformation Planning:

6. Define and Prioritize Opportunities - This activity is about building a business case for each cloud transformation opportunity, which should include the perceived benefits, expected risks, technology impacts, and a detailed architecture.

7. Define Business Transformation Roadmap - This activity is about performing an assessment of implementation risks and dependencies and developing a transformation roadmap, which should be validated with the business users.

In line with the above framework, Mike Walker suggests a top-down model as below:

[Diagram: top-down Cloud Strategy and Planning model]

You may wish to check out the original article here.

Sunday, April 7, 2013

NGINX - A High Performance HTTP Server

What you will find about NGINX from its website is that it is a free, open source, lightweight HTTP web server and reverse proxy. NGINX doesn't rely on threads to handle requests; instead, it uses a much more scalable event-driven (asynchronous) architecture that uses small, and more importantly predictable, amounts of memory under load. We have found that it holds up to what it says. Here is how we ended up using NGINX for one of our clients and found it meeting our expectations.


We were on a performance testing assignment for a game application, where much of the game content consisted of static Flash files (dynamic in behaviour, as they had embedded ActionScript executed on the front end) and image files, many of which were above 500KB in size. Needless to mention, this is the compressed size, and as such enabling compression did not offer any performance gains. These files were served by Apache, which also served the dynamic PHP requests.
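
A quick way to see why compression buys nothing for already-compressed content is to gzip a random byte stream (which behaves statistically like compressed Flash or image data) next to plain text. This is a small illustrative sketch, not data from the original engagement:

```python
import gzip
import os

# Already-compressed content looks statistically random, so gzip
# cannot shrink it further; repetitive plain text compresses dramatically.
random_payload = os.urandom(500_000)      # stands in for a 500KB SWF/image
text_payload = b"hello world " * 40_000   # highly repetitive text

random_ratio = len(gzip.compress(random_payload)) / len(random_payload)
text_ratio = len(gzip.compress(text_payload)) / len(text_payload)

print(f"random-like: {random_ratio:.3f}, text: {text_ratio:.3f}")
```

The random-like payload stays essentially the same size (ratio around 1.0), while the text shrinks to a few percent of its original size; CPU spent compressing the former is pure overhead.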


Our initial performance tests indicated that at about 1000 concurrent hits, the Apache server's memory consumption shot up considerably, beyond 2GB, and quite a few requests were timing out. Examination of the server configuration revealed that the server was configured to serve at most 256 child processes, and that it was using the ‘prefork’ MPM module, which limits each child process to a single thread. This technically limits the number of concurrent requests the Apache server can serve to 256.
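
For reference, a prefork configuration imposing that kind of ceiling typically looks like the following; the values here are illustrative, not the client's actual settings:

```apache
<IfModule prefork.c>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    MaxClients          256     # hard cap: one single-threaded child per request
    MaxRequestsPerChild 4000
</IfModule>
```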


We also figured out that Apache needs to use the ‘prefork’ MPM in order to safely serve PHP requests. Though later versions of PHP have been found to work with the other MPM (‘worker’, which can use multiple threads per child process), many still have concerns around thread safety. The 256 child process limit is also a hard limit, and increasing it requires the Apache server to be rebuilt.


With this diagnosis, the choice before us was to have the static content served by a server other than Apache and leave Apache to serve only the PHP requests. We thought of trying out NGINX and, without wasting much time, went ahead and implemented it. NGINX was configured to listen on port 80 and to serve content from Apache's document root. The Apache server was moved to a non-standard port, and NGINX was also configured to act as a reverse proxy for all PHP requests, passing them to Apache for processing.
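
The resulting setup can be sketched as an NGINX server block like the one below; the document root and the Apache port (8080) are illustrative placeholders, not the client's actual values:

```nginx
server {
    listen 80;
    root /var/www/html;              # Apache's former document root (illustrative)

    # Serve static Flash and image files straight from disk
    location / {
        try_files $uri $uri/ =404;
    }

    # Hand every PHP request over to Apache on its non-standard port
    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With this split, NGINX's event loop absorbs the high-volume static traffic, while Apache's limited pool of prefork children is spent only on PHP.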


We performed the tests again and found a tremendous improvement. The average latency at the 1000 concurrent hits stress level came down from about 80 seconds to under 5 seconds. We could also observe that NGINX was consuming just around 10MB of memory. NGINX by itself imposes no hard limit on the number of concurrent connections; it is constrained only by the limits of the server itself and the other daemons running on it.